ANTHONY CORNETTO | SIMULATION

Lou sits down with Anthony Cornetto to discuss the state of computer simulation, acquiring HVE, building a Blender-based super tool, and the integration of AI.

You can also find an audio only version on your favorite podcast platform.

A rough transcript can be found below.



Timeline of Topics:

00:01:23 Tony’s background

00:09:14 The purchase and expansion of HVE

00:34:48 Blender and its application in collision reconstruction

00:54:01 Creating 3D models

00:58:41 The future of Blender and HVE - from simulation to beautiful visualization

01:04:21 What’s in Tony’s current toolkit?

01:13:15 Best tool under $5,000

01:17:02 360 cameras

01:23:24 Resources for continued education

01:27:23 What does the future of HVE look like?

01:37:29 Low-Poly modeling

01:43:04 All-in-one super tool?

01:55:57 How will AI integrate into recon?

02:09:19 Most used tool in Tony’s current arsenal

02:10:58 What’s inside the future toolkit of a reconstructionist, and what isn’t?


Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.

Lou (00:00:19):

This episode is brought to you by Lightpoint, of which I'm the principal engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analyses. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, you know it takes hours of searching for the perfect model, awkward conversations with dealers, and usually some cash to grease the wheels. Then, back at the office, it takes a couple more hours to stitch and clean the data, and that eats up manpower and adds a lot to the bottom line of your invoice.

(00:00:49):

Save yourself the headache so you can spend more time on what really matters, the analysis. Lightpoint has already measured most vehicles with a top-of-the-line scanner, Leica's RTC360, so no one in the community has to do it again. The exemplar point cloud is delivered in PTS format, includes the interior, and is fully cleaned and ready to drop into your favorite program such as CloudCompare, 3ds Max, Rhino, Virtual CRASH, or PC-Crash, among others. Head over to lightpointdata.com/datadriven to check out the database and receive 15% off your first order. lightpointdata.com/datadriven

(00:01:23):

Hello everyone. My guest today is Mr. Anthony Cornetto. Tony Cornetto is a forensic engineer and collision reconstructionist holding bachelor's and master's degrees in engineering from Virginia Tech, as well as a professional engineering license. Tony has 25 years of experience in the forensic industry, starting his career with FTI, which then became SEA, in 1999, and forming Momenta, a consulting engineering firm, in 2016. He has authored numerous publications on the use and validity of computer simulations and, in 2020, became the principal at Engineering Dynamics Corporation, creators of the esteemed simulation suite HVE. Tony has also conducted research and authored papers regarding mechanical engineering, vehicle performance, electronic data recorders, and demonstrative evidence. He is consistently pushing the boundaries in photogrammetry, 3D modeling, and simulation. He has testified in trials at both the state and federal levels and has been kind enough to make some time to talk shop today, at no charge from what I hear. So thanks so much for being here, Tony.

Anthony (00:02:26):

Sure, thank you. I appreciate it.

Lou (00:02:28):

So one of the interesting things to me for most forensic engineers is just how the heck they got into the industry. If you're anything like me, when you were in undergrad, you had no idea what forensic engineering was. So what was your path from undergrad to grad school to engineering, and how'd you first hear about forensic engineering?

Anthony (00:02:49):

So I kind of consider it a hidden industry in a way. It's not very much talked about. You don't see it at job fairs, maybe more today, but back then I didn't see it at any type of job fair. So I did my undergrad in engineering science and mechanics. I had an interest in biomechanics, so I concentrated in that area, went on to do my master's degree in engineering mechanics with a concentration in biomechanics, biomaterials, bioengineering, and then afterwards was looking for what to do next and ended up getting a position at Johns Hopkins Hospital, which I thought was perfect coming out of grad school. Ended up working for a retinal surgeon, Eugene de Juan. It was kind of the perfect timing for me because he had just transferred from Duke to Johns Hopkins and he had set up a lab basically in the basement of the Wilmer Eye Institute.

(00:03:52):

No windows, nothing. We were down in a dungeon and we had our own machine shop, and basically he had come up with a new surgery for macular degeneration called macular translocation, and he needed instruments for eye surgery. So we developed the instruments there and actually had the machine shop make them, and we'd take them up into the surgical room and put them in the autoclave and then he could use them. As long as it wasn't anything like lasers; those we had to go through a whole process for. So while I'm there for about a year, year and a half, we're doing all of that. And one of my coworkers was a Johns Hopkins graduate. He had seen a presentation by Joe Reynolds, who was the co-founder of FTI Consulting, also a Hopkins graduate, and we had been talking about what to do next in our lives and careers, and I was looking for somebody that was doing biomechanical type work, but I didn't really want to go to a big industry.

(00:04:58):

And he's like, oh, you should check out FTI Consulting. So that was my introduction to forensic engineering: somebody saw a presentation by someone else, and I started doing research into it. FTI Consulting happened to be a half hour down the road from me. They had some positions open and I wanted to get in; they had a position in animation and they had a position in mechanical engineering. So I applied for both.

Lou (00:05:28):

Animation back then too. I mean, that's pretty early on to be into animation. That's impressive.

Anthony (00:05:34):

My introduction to that area started in the college level, but while at Hopkins when we were making instruments for this new surgery, we also had to create a demonstration on how to do the surgery so that we could then supply that to the doctors that were going to be using it. So we had a little department that was creating demonstrations, demonstratives, basically using Macromedia Director back then, or Flash way back.

Lou (00:06:05):

Which got bought out by Adobe or something like that, if I remember correctly.

Anthony (00:06:11):

So I had an introduction to that world and I thought, okay, I'm going to get in either through mechanical engineering or through animation. The guy who ran the animation department, he said, take the mechanical engineering job if they give it to you and I'll use you as I can. I ended up getting a position as a mechanical engineer, and yeah, that's how I ended up at FTI.

Lou (00:06:32):

That's pretty cool. And then so they were eventually bought out by SEA or maybe they were part and parcel for a little bit, and you stayed there for what, 17 years or so?

Anthony (00:06:42):

So actually, right around the time when I started at FTI, they had purchased SEA. They were together for a while, and then SEA was basically sold back off. But yeah, I was there for 17 years. Again, I started off in mechanical engineering doing forensic analysis of all kinds of mechanical failures, and at the time, Fauzi Bion had started the accident reconstruction group in Annapolis. He had come over from failure analysis at the time and he had a lot of work. And so I started talking to Fauzi and working my way in and doing work with him, and next thing you know, I'm in the vehicle group. And then I stayed there for a long time.

Lou (00:07:31):

Yeah. And is that the primary makeup of your casework at this point? Do you still do mechanical failures and whatnot, or is it almost all recon traffic related things?

Anthony (00:07:41):

It's almost all vehicle with some fall type cases.

Lou (00:07:46):

Yep. Okay.

Anthony (00:07:47):

And the group at SEA, Fauzi's group that he had done while he was at failure, that's what he had kind of learned to do was vehicles and falls. And so I followed in his footsteps, so to say.

Lou (00:08:04):

I had a little bit of that as well. I worked with Eric Deyerl at Dial Engineering in Culver City for a while, and he did a lot of that as well. Slip and fall, also big, huge sophisticated collision reconstructions and then also failure analyses. And I think potentially, I mean maybe this is something we'll talk about as well as we go down the rabbit hole a little bit farther, but back then I think it might've been a little bit easier to do a lot of different things and be good at them. Whereas now things are just becoming more and more sophisticated. It's very difficult to be good at everything if you're not really focused.

Anthony (00:08:42):

It's funny when new engineers would ask me if I had any recommendations for them, and I'd say find something to make it your focus, for example, motorcycles for you. When I was at SEA, simulation became one of my focuses. I was known for being able to do simulation work and then also night visibility. And so those are kind of the two areas that I continued to do research.

Lou (00:09:14):

And then that brings me to the next thing. We'll probably jump back and forth a little bit in your background because there's a lot there. But obviously looking at your publication list, which is 30 long or something like that, a bunch of that revolves around high-end, sophisticated computer simulation analysis of car crashes. And I imagine that you were using HVE at the time, and then eventually you became the owner of HVE, which is just amazing because that program's been around since 1984, created by founder Terry Day, and it's just kind of a staple of our entire industry, and to be the owner of that is pretty amazing. So how did that come about?

Anthony (00:09:58):

It's an interesting story. So after my time at SEA, Jeff Suway had recently gone off on his own, just to bring Jeff into it. He and I worked together at SEA, he was out in California, and we were talking about night visibility type work and started doing some research papers together. And we authored, I guess it was four papers over two years, but, I'm trying to think if it was the first set or the second, we were presenting them at the SAE World Congress. And while I'm up there, I knew Terry was going to be in town, and Terry and I had a relationship through HVE and he had asked me to instruct the advanced class at the HVE Forum. So I called him up and said, hey, we're going to do dinner. Do you want to go out to dinner? And so the first night it was kind of cool. It was Jeff Suway and Jeff Muttart and Terry Day and myself and Eric Deyerl, and, I'm sorry, not Eric Deyerl, it was

Lou (00:11:09):

From Principia, Erica Rostea, maybe?

Anthony (00:11:12):

There was a bunch of us at the table and I'm sitting here thinking, wow, this is impressive. I'm with some pretty smart people here and well recognized people.

Lou (00:11:24):

Was that the pizza place in Detroit?

Anthony (00:11:27):

Oh, I can't remember what place it was, but we had a nice big table.

Lou (00:11:33):

And some beer probably.

Anthony (00:11:35):

Yeah. And so that was a great night too though. I asked Terry if he wanted to go out to dinner, and so Terry and I went out to dinner and I basically said, Hey, I'm interested in what the future of HVE is when you decide you want to retire.

Lou (00:11:54):

Yeah, it had been on all power users' minds at that point, because Terry's so brilliant and knows the program so well and helped develop it. And obviously Terry can't be at the helm forever. So I think all of us were wondering who has the capability to take this over? So you weren't alone with that sentiment.

Anthony (00:12:16):

He said that basically he didn't have an immediate plan, but he reached out to me, I guess not quite a year later, and it was November, he called me up and he said he had a plan that he was looking for someone who was a user of HVE, longtime user, engineer, interested in the future, so on and so forth. And I'm like, yeah, I'm your guy Terry. And then next thing you know, here we are. Little did we know at that time that four months later the world was going to shut down. So yeah, my experience started with the HVE Forum right after the Forum in Austin. It was the end of February, March 1st. I took over, what March 18th or so, COVID shut the world down. So it was a-

Lou (00:13:12):

Surprise.

Anthony (00:13:13):

It was an interesting start, yeah.

Lou (00:13:15):

Good luck growing it. And one of the things that I had known your reputation over the years, but you and I never really interacted until IPTM maybe a year and a half or two ago, and then we just kind of hit it off pretty quickly. But I was always a little bit hesitant about who could take it over after Terry, just because he's a bonafide genius, but in developing a relationship with you, it's clear he picked the right guy.

(00:13:46):

And part of that is just your engineering intelligence, but then the other part of it is you have your finger on the pulse of the community, you know what's important to everybody in the community. And then you also have these big chops in the fields of 3D modeling and rendering, using Blender and real-time engines like Unreal Engine. And it seems like there's eventually going to be some integration of those things, in that the physics of HVE are obviously cream of the crop, but the graphics have never been cream of the crop. They haven't been intended to be cream of the crop, but it seems like there's an opportunity there for you to blend your expertise and bring it to the next level. I'm not sure if that's even on your horizon. How do you think about that?

Anthony (00:14:42):

Yeah, so absolutely. I kind of look at it like Terry was focused on physics first, which he should be, because it's a physics program. And so that was the main focus. And it's great, the physics is great. And I come at it from a little bit of a different point of view because I was a longtime user. So I'm looking at it from a user point of view, where Terry hasn't been in the field for a long time, so he didn't see it the same way that I see it. So my focus became user interface, trying to improve areas like the number of mouse clicks to get to various things, improving how you see the data, the graphs, things of that nature. And then I've been thinking for a long time about how to move to a better graphics engine in some way. And that's definitely on the horizon. It's not easy to choose a graphics engine, and we can get into that, but that kind of goes to, I think, that users are interested in improved graphics today. There's an expectation today that you can get good graphics easily.

Lou (00:16:10):

From the simulator, which is kind of novel. Ten years ago, I don't think anybody expected great demonstratives to be spit out by their simulation package. They kind of expected to take that data and bring it over to 3ds Max or Blender or something like that. But now, yeah, it's shifting that way. And one thing that, I'm an HVE user as well, and I've been to the forums, and I think it's really cool that you get all of these interested users together annually, post COVID and pre COVID, and everybody exchanges ideas, there's workshops, there's white papers. People are always kind of pushing the boundaries of what you can do with HVE. And I know you just had the HVE Forum in Fort Myers a couple of weeks back. So how did that go and what's on the horizon there? What kind of new things are you seeing the community developing?

Anthony (00:17:08):

So the forum went really well. It wasn't the biggest forum that we've had. We've been virtual for the last two years, so this was the first in-person one post COVID. What was great is we had a lot of new users, which is always good for the future. And so that was exciting to see because I know that moving forward we're going to see those users coming back. And as far as big changes, for me, the biggest one, and a lot of people we're talking to may not be aware of HVE's physics, but SIMON and DyMESH are, I guess, the top-end physics package within HVE, where you're doing full three-dimensional simulation of vehicle dynamics and also the impact portion.

(00:18:03):

And one of the things that was added is the ability for the wheels, the DyMESH wheels, to interact with the environment. And it seems kind of like, oh, not a huge deal, but for me it was a huge deal. So for example, you can drive over a curb and the wheel, the DyMESH wheel, will actually interact with that curb. And in the advanced course I had everybody set up different vehicles and we all used the radial spring tire model, which has springs coming in at whatever spacing, and it runs pretty slow because you have a whole bunch of springs.

Lou (00:18:46):

I remember that. Yeah, I remember that very distinctly. Yeah.

Anthony (00:18:48):

Yeah, it runs really slow. So we did that side by side with the new DyMESH wheel to environment impact model and showed that they match really well. And DyMESH is significantly faster than the radial spring tires.

Lou (00:19:06):

Oh, that's awesome.

Anthony (00:19:06):

So for me, that's a big, huge new feature within HVE.

Lou (00:19:12):

Yeah, that's awesome. The ability to iterate quickly is imperative in these simulations. Nobody ever gets it right the first go around.

Anthony (00:19:20):

And then also for doing complex rollovers, anytime you have the wheels hit the ground with a full three-dimensional vehicle dynamics, you know that if you only are modeling it as a point on the bottom of the tire, you can get weird things happening. But now because the wheel can actually interact with the ground when you're doing rollovers, those wheels will contact on all sides. It's a full mesh wheel that can now hit the ground. So rollovers and then vehicle to vehicle impacts, you have more control over how the wheels can interact with each other.

Lou (00:19:54):

Yeah, sometimes that's huge. Sometimes that's a huge part of the collision dynamics, so that's cool.

Anthony (00:20:01):

And then the other big one for me is, and I think you and I have talked about it a bit in the past, but using simulation to match EDR data and how that's being done more often today. And I think HVE can really shine in that area.

Lou (00:20:23):

Yeah, I was thinking about that. I mean, 20 years ago, in 2003, there were still a lot of vehicles on the roadway that didn't have EDRs. And now I think I saw recently in one of Rick Ruth's presentations that something like 95 to 99% of vehicles rolling off the assembly line have an EDR, a black box, on board. So it is just more and more common that we get data and we want to match it via simulation to make sure that we have the dynamics right. That is the future. And I guess that brings up something that I know you have thoughts on, which is the complexity of varying simulations. There's a bunch of different simulation platforms out there, even within the HVE suite, and it kind of depends on what you're after, how much time you can spend, and how much money you have to spend on your simulation package in general, what tool you should be using. So how do you think about that? And I guess, what's available within HVE, and then how do you look at what's available in the whole community and what tool is right when?

Anthony (00:21:37):

HVE has a CRASH3 module called EDCRASH, it's got a SMAC module called EDSMAC, and it's got EDSMAC4, and I'll kind of go into them. And then it has various other single-vehicle simulators, and then it's got SIMON with DyMESH. And so EDCRASH would be to do energy and momentum. It's not a simulation, it's a reconstruction tool. EDSMAC is based on the original SMAC model, so with EDSMAC you're doing some vehicle dynamics with some impacts. EDSMAC4 allows you to do the same thing but drive on a three-dimensional road. And then SIMON is full three-dimensional vehicle dynamics, and the impact can actually take place in three dimensions also. So it's mesh-to-mesh interaction, as opposed to what I would call shoebox-to-shoebox interaction.

Lou (00:22:35):

As opposed to two planar vehicles interacting with each other. Once you get to SIMON, it's got three-dimensional crush, three-dimensional interactions, the vehicles are rolling, and you can simulate the rollover aspect of it as well. Or in EDSMAC you're getting weight transfer. Weight transfer is being modeled, so there is some component that is being considered, like CG height and how that affects tire loading and things like that.

Anthony (00:23:02):

There is a load transfer coefficient that's used.

Lou (00:23:06):

But it's not actually looking at the dynamics of that vehicle and how it might roll over or lift from the ground.

Anthony (00:23:12):

Right. There's no sprung mass, so to say.

Lou (00:23:16):

Okay.

Anthony (00:23:16):

Whereas in SIMON, you actually have a suspension, so the body is actually sprung relative to the wheels and axles.

Lou (00:23:27):

And then you have other simulations that you're looking at. I mean, I see you're doing a lot of light simulation, which is a different tool that does something different. And then you've got vehicle dynamic simulators in general. And so SIMON obviously has a more sophisticated tire model as well, which then kind of goes back to if you need to simulate the vehicle dynamics alone, what are the best tools for that?

Anthony (00:23:53):

Well, I guess I'll start with this: when I think of a vehicle simulator for accident reconstruction, we have the tire model, we have the vehicle model, and we have the impact model. And there's a whole range of programs out there that you can use depending on what your application is. And they all have their purpose and some are more useful at different times. But for example, Virtual CRASH, PC-Crash, HVE, mSMAC3D, CarSim, TruckSim, BikeSim, LS-DYNA. You probably aren't going to use LS-DYNA on your intersection accident with two vehicles that hit in the intersection. But you may use LS-DYNA when there's a question about a pillar crushing in or something that is very complex that you need a full finite element analysis for. And so I think each one has its place, and that's why they're all used in the industry. Depending on what type of accident you're trying to model, you're going to select the appropriate tool for the job. Sometimes the appropriate tool is an Excel spreadsheet. I mean, we do that a lot, or Mathcad or whatever your program of choice is.

Lou (00:25:14):

It doesn't always have to be this $25-30,000 tool. And if we were all infinitely rich, we would probably just have all of these things. But the other thing is of course, that you need to get comfortable with that platform. You need to spend the time learning it. And I just bought BikeSim and I've been messing around with that and trying to learn it, and I think it'll be quite some time before I'm really good at it. So there is something to be said for picking a tool that you think will be effective most of the time and learning that because even the best tool if it's misused is not as good as an okay tool that you're really comfortable with and you're using it properly.

Anthony (00:25:53):

And I would say that's absolutely true, where you start on a certain tool set, you become an expert in that tool set and you continue to use that for your career. I kind of think of it like Apple and Samsung or something like that. I like my iPhone.

Lou (00:26:13):

Yeah, exactly. You're not very likely to make this switch.

Anthony (00:26:17):

Yeah, it's harder to switch when you've been using a program for so long, but there are times where certain programs just can't do what you're trying to model. And BikeSim's a perfect example. I don't know anybody else that can model motorcycle dynamics the way that BikeSim models it if you're trying to look at the dynamics of how a motorcycle is moving.

Lou (00:26:42):

Yeah, exactly. They've done a great job at figuring that out and setting up a system that's relatively easy to use, unless you want to build your own multi body simulator, it's best to just buy their package that's already fine-tuned, but it's going to cut off at impact, so you're not going to be modeling any impulse exchange. So at that point it's kind of you need one sim for pre-impact dynamics, potentially. Not always, of course. And then another simulator for the impact itself, and there's a lot of papers for HVE and modeling motorcycle impacts by Eric Deyerl and Todd Frank, and I think Charles Funk wrote one and Stein Husher wrote one. So I have these papers, I could put them in my back pocket, show up at depo, show up at trial and say, this is what I'm using the tool for and it's validated.

(00:27:34):

So there's a lot to be said for that. And one of the things that is getting a lot of attention right now is matching EDR data. And I know before we started recording, you and I were talking about a case study that you had where you had the lateral acceleration of the target vehicle, you had the roll of the target vehicle, and with the more sophisticated sims like the 3D simulation, and if you have a good vehicle model including suspension characteristics, you can vary impact speeds, vary dynamics until you're matching a multitude of things. And once you get that, that's the best feeling and you know that you're really close to the true answer.

Anthony (00:28:20):

Yeah. If you hit in the impact location and then you look at the roll dynamics and the lateral acceleration and they're following the same, they line up on the graph, you're getting similar peaks, you're getting similar time between peaks, it's rewarding in that way. It's like, okay, yeah, I'm close. One of the things that came up, I think it might've been last year or two years ago, there was a whole bunch of papers or articles about the offset location of the accelerometers in vehicles and the fact that they're not at the CG, and then figuring out how to calculate the effect of the offset of the accelerometer. And that's another place where simulation can be helpful, because you can place an accelerometer in the vehicle at the location where the actual box is located in the vehicle, and then when you're looking at the results of your simulation, you can look at the acceleration at that location and compare that to your CDR download and say, okay, the lateral acceleration at the CG is not always the same as it is in front of the CG, especially when we're talking about a motorcycle impact or something like that where you're hitting at the front and you're just getting a rotation out of the vehicle. If the accelerometer is forward, we can look at that and match it in the correct location. So I think that's a pretty powerful use of a simulation tool.
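A minimal sketch of the rigid-body relationship Tony is describing, for readers who want the math: the acceleration recorded at a sensor mounted away from the CG differs from the CG acceleration by terms driven by the vehicle's angular velocity and angular acceleration. The numbers and sensor offset below are illustrative only, not values from any particular case or from HVE.

    import numpy as np

    def accel_at_sensor(a_cg, omega, alpha, r):
        """Acceleration at a point offset from the CG by vector r (vehicle frame):
        a_P = a_CG + alpha x r + omega x (omega x r).
        a_cg: CG acceleration (m/s^2), omega: angular velocity (rad/s),
        alpha: angular acceleration (rad/s^2), r: sensor offset from CG (m)."""
        a_cg, omega, alpha, r = map(np.asarray, (a_cg, omega, alpha, r))
        return a_cg + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))

    # Hypothetical example: a pure yaw spin-up of 8 rad/s^2 with the sensor
    # mounted 1.2 m ahead of the CG produces about 9.6 m/s^2 of lateral
    # acceleration at the sensor even though the CG itself sees none.
    print(accel_at_sensor([0, 0, 0], [0, 0, 0], [0, 0, 8.0], [1.2, 0, 0]))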

(00:30:03):

Then you were discussing the roll dynamics and the fact that that takes into account suspension characteristics. And where do we get that information from? At EDC, when we document a vehicle, we're measuring suspension characteristics. Vehiclemetrics, who builds a database for EDC, when they document a vehicle, they're measuring suspension characteristics. Those are included in the database of vehicles.

Lou (00:30:23):

Yeah, that's awesome.

Anthony (00:30:27):

Yeah. And you can go out and measure your own vehicles and we have presentations on how to do that if you want to document your own vehicle, to build it into...

Lou (00:30:38):

I'm curious, how do you measure, I imagine spring rate and damping coefficient? Those are the two big things for each...

Anthony (00:30:47):

So we're getting spring rate. The damping coefficient is based on specifications that we can get. So you can measure it by actually compressing and hanging the vehicle. You could send it somewhere like SEA, which has different capabilities, or Exponent, where they might have machines, or the manufacturers, where they can put the vehicle on a machine and take measurements. There are other ways of doing it; you can oscillate the suspension, measure the frequency, and back into a ride rate that way.
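For the oscillation approach Tony mentions, the usual single-degree-of-freedom bounce relation is enough to sketch the idea: measure the bounce frequency and back the ride rate out of it. This is a generic textbook relation, not EDC's or Vehiclemetrics' specific procedure, and the numbers below are made up.

    import math

    def ride_rate_from_frequency(f_hz, sprung_mass_kg):
        """Single-DOF bounce model: f = (1 / (2*pi)) * sqrt(k / m),
        so k = m * (2*pi*f)^2. Returns a ride rate in N/m for one corner
        if sprung_mass_kg is that corner's share of the sprung mass."""
        return sprung_mass_kg * (2.0 * math.pi * f_hz) ** 2

    # Example: a corner carrying 400 kg of sprung mass bouncing at about 1.3 Hz
    # implies a ride rate of roughly 26,700 N/m (around 150 lb/in).
    k = ride_rate_from_frequency(1.3, 400.0)
    print(f"{k:.0f} N/m  ({k * 0.0057101:.0f} lb/in)")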

Lou (00:31:27):

Yeah, I was talking with Damian Harty a couple of weeks back, and I don't know if you know that name, but he's a very astute vehicle dynamics expert and has worked at a lot of the manufacturers and is very talented when it comes to simulation. And I was talking to him a little bit about what I was hoping to equip my shop with, which is a shock dynamometer so that we can get the damping coefficients, and he's like, yeah, I mean, that would be awesome. If you can't do that, at times what we've done is just take a motorcycle and roll it off of a foot high drop and video it in high speed and then see what the response is. And then go into sim and match that same response by varying the damping coefficient, and you're pretty darn close at that point. I was like, that's brilliant. There are a lot of ways to skin a cat.

Anthony (00:32:12):

Yeah. Can we drop vehicles? We could do that in HVE, we could raise a vehicle up five feet off the ground, let it hit.

Lou (00:32:21):

Yeah, that's a good idea. I'd say one of my good advice, pieces of advice I should say, to Hertz and Enterprise is never rent your car to a recon. It's a bad idea, the things they're going to do to it, between brake tests and crash tests and dropping it off of a foot high cliff.

Anthony (00:32:38):

I was joking at the HVE forum, I was asking who had a rental car so we could go out and curb test it.

Lou (00:32:45):

And did somebody volunteer?

Anthony (00:32:47):

No, I think Wes had one, Wes Grimes, but we didn't use his.

Lou (00:32:50):

Yeah, you got to get the full coverage. If you're a recon going to a recon conference, get the full coverage just in case. You never know what you're going to be convinced to do.

(00:32:58):

I wanted to talk a little bit about Blender. I don't have any experience with Blender, but I've talked with you a little bit about it in the past. It's amazing, open source stuff in general is super exciting to me. Just to go on a tiny bit of an open source tangent, we're using CloudCompare right now, and it's really cool because there's all these composable tools that one person can develop and then they get plugged in. This guy develops it because he needs it, and it gets plugged in, and then you eventually have this extremely powerful piece of software that is free and has been funded by the gen pop, so to speak. For CloudCompare, Lightpoint just worked with one of the developers to build a tool, and now when the next beta release comes out, it's everybody's. We footed the bill and just put it in because we wanted to actually be able to use it, and then everybody can use it.

(00:33:54):

And I think open source has a huge advantage because of that composability. And Blender is the same mentality, of course they're open source and it's being developed by all sorts of random people and plugins are coming in. And it seems to be similarly powerful to, if not more powerful than, 3ds Max. I think a lot of people that I've spoken with are switching over to it. And I know you're using it in all sorts of unique ways, from diagramming evidence, to photogrammetry, to high-end renderings, to animations. So I'd love to hear you just talk about what your thoughts are on Blender, how you got introduced to it, how you think reconstructionists can use it in their analysis? Then I'll probably have follow-up questions because I'm super interested in the photogrammetry aspect of it and how you're handling photogrammetry and modeling of roadway evidence and things like that.

(00:34:48):

But a big intro to a simple question is, what's up with Blender? What's your experience with it, and how should we all be thinking about it and maybe integrating it into our toolkit?

Anthony (00:34:59):

I started with Blender kind of as a, I needed a tool and I wasn't going to pay for the Autodesk suite at the time, and so I was looking for something open source and Blender popped up. And I spent the time, and there's plenty of YouTube videos to teach you how to do anything you want in Blender. And at first, it was like, okay, I can use it for diagramming, I can use it for 3D modeling and animation. And then you realize that it's got a video editor built in, it's got a video compositing system built in, now it has a real-time render engine, it's got a ray tracing render engine. It's got a motion tracking system built in.

Lou (00:35:51):

Oh, wow.

Anthony (00:35:52):

So for people who use PFTrack or something like that, I mean, Blender has its own motion tracking capabilities.

Lou (00:35:57):

So you can use it like PFTrack or you'd export a script from PFTrack and bring it in?

Anthony (00:36:02):

No, you can track inside of Blender.

Lou (00:36:05):

Wow.

Anthony (00:36:06):

Yeah, you can bring in a video and then track points in the video and then solve that like you would in PFTrack, and then send it right over to the 3D environment since it's all in the same program, it's integrated. It's pretty impressive.

Lou (00:36:22):

Yeah, that's nice, because I know PFTrack isn't that expensive, it's like $1,000 to $1,500 or something like that. But some of its competitors are more like $10,000, and of course you're learning a whole new platform. So if you can stick in Blender and really bolster your skills there, then you can just stick in that suite.

Anthony (00:36:43):

The thing with Blender is there's so much that some people I think get overwhelmed, because-

Lou (00:36:48):

Yeah, that's me, right here. Yeah, I look at what is there, and for those that don't know, which is probably everybody listening, I'm trying to convince Tony to develop a class to teach all of us how to use Blender because it seems totally overwhelming, but super useful. Okay, so are you analyzing collision videos in Blender, and if so, how?

Anthony (00:37:16):

So for example, if you're going to do a video from a surveillance camera, like you've got a still camera, you can create a camera inside of Blender, which has whatever camera properties. So if you know the camera properties, it's a lot easier, and if it's a still camera, it's a lot easier. But you can place that camera in a 3D world. So if you go out and 3D scan your scene, you can create a simple model or bring the scan points into Blender, place the camera in the scene, and then you can actually view both your video and your scene through the camera. So when you're in your camera view, you set the transparency of your video to 50% or something, and then you have your actual model in the background. And then you can do what people do in the other programs where they just drive their vehicles through, or whatever you're trying to model, a pedestrian walking through, things like that.

(00:38:11):

I made it sound simple, but it is.
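For readers who want to see roughly what that setup looks like, here is a minimal Blender Python (bpy) sketch of a camera with a semi-transparent video background, along the lines described above. The file path, focal length, and sensor width are placeholders; a real camera match would use solved or documented intrinsics.

    import bpy

    # Create the camera and set known (or solved) intrinsics.
    cam_data = bpy.data.cameras.new("SurveillanceCam")
    cam_data.lens = 4.0            # focal length in mm (assumed)
    cam_data.sensor_width = 6.17   # sensor width in mm (assumed)
    cam_obj = bpy.data.objects.new("SurveillanceCam", cam_data)
    bpy.context.scene.collection.objects.link(cam_obj)
    bpy.context.scene.camera = cam_obj

    # Load the surveillance footage as a movie clip and show it as a
    # semi-transparent background when looking through this camera.
    clip = bpy.data.movieclips.load("/path/to/surveillance.mp4")
    cam_data.show_background_images = True
    bg = cam_data.background_images.new()
    bg.source = 'MOVIE_CLIP'
    bg.clip = clip
    bg.alpha = 0.5                 # 50% transparency, as described above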

Lou (00:38:15):

Yeah. And are you solving? So the way that I would traditionally do that is I'd go into PhotoModeler, I'd bring a point cloud from my collision site into PhotoModeler, and I would tell PhotoModeler, hey, the 3D coordinates of this pixel are this, of that pixel are that, and I'd do that 20 or 30 times, so that it knew exactly where the camera was, it knew exactly what the focal length of the camera was, and it knew the distortion parameters of the camera. Are you still doing that in PhotoModeler and then exporting camera parameters from PhotoModeler, or can you do that in Blender?

Anthony (00:38:47):

So you can do that in Blender. Sometimes if you have a camera, like I said, sometimes you can get the settings of the camera and you don't have to. But inside of what's called the movie clip editor, they also have where the motion tracking is. In there, you can actually solve your camera, and it'll solve the distortion parameters, it'll solve for a focal length. There's a bunch of settings that you can try to solve for.

(00:39:15):

What's also nice is if you have a moving camera, it'll solve that also. So for example, if you have a camera driving down the road and you have all the white lines and you have signs and you have all that, and you have a 3D model of that scene, and you can pick those points and solve the camera, and then it'll place the camera in the 3D world and you can align that solution to your scan of your roadway or whatever you have for the roadway. You can then align it and make sure it's scaled properly and everything, and now your camera's actually driving through your scene based on a video.

(00:39:56):

And at some point, maybe we'll talk about the night visibility stuff. But that's something that Jeff and I do, Jeff Suway and I do quite often, where he captures a video of a nighttime scene and we want to add CG to it. And so it's not necessarily for reconstructing an accident where we have video, but we actually want to create a CG inside of a video that he's captured, and that's the technique that we use. We actually track his video inside of Blender and then place the CG object in the 3D scene, and then we have that camera motion so we can composite the two together.

Lou (00:40:38):

Yeah, that looks so pro too. I've seen some of your work on that front and I've seen some other people do something similar. And one of the really cool things about that is of course the whole scene is photorealistic because it's from a video camera, and then you just put in a nice piece of CGI that is tough to detect as being CGI. It just gives the viewer a really faithful view of what was happening at the time.

(00:41:05):

But of course, that takes a bunch of other skill to be able to put something in to that scene and have it be true to what the driver was potentially seeing, which brings us to those four papers you were talking about that you wrote with Jeff Suway in 2019 and 2020. It seems like the thrust of that was trying to figure out a way to scientifically do exactly that, recreate what the driver or a rider, anybody in an environment, would see under those lighting conditions. Instead of just saying like, well, this is consistent with what I saw when I was out there, it seems like you made it more of a quantitative approach using some interesting tools.

Anthony (00:41:50):

So the idea was how do you create in the computer environment real world lighting? And can you create real world lighting in the computer environment? And so what you produce on your screen or you get from your camera is a display space lighting. You take real world lighting and you compress it down to what you can actually display. But in the computer inside of Blender or inside 3ds Max or other programs, you can have real world lighting where the range is zero to whatever floating point number you can get to. And that gets into HDR imagery and things like that.

(00:42:39):

So inside of Blender, you actually create light sources that have real world lighting. And then when you use the Cycles render engine, it is a physics-based renderer. So it's actually bouncing light, uses ray tracing, and it calculates the amount of light that would be coming to the camera, and it's a real world value. Now to display it, you have to then convert it to a display value, which is the last step in the process. But what we were trying to show with these papers, at least the first paper, was that Cycles is a validated physics-based render engine and that you can create real world luminance inside of the computer environment. And you can actually create HDR imagery, open that up in an HDR program and take luminance measurements that aren't 0 to 255, they're zero to a really high number.
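As a rough illustration of that workflow, the bpy sketch below sets Cycles as the render engine, defines a light by a physical wattage, and writes a floating-point OpenEXR image so the rendered values are not clipped to an 8-bit display range. The wattage, mounting height, and file path are assumptions for the example, not values from the papers.

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'          # physics-based, ray-traced renderer

    # A point light defined by real-world power rather than an artistic level.
    light_data = bpy.data.lights.new("Streetlight", type='POINT')
    light_data.energy = 100.0               # watts (assumed value)
    light_obj = bpy.data.objects.new("Streetlight", light_data)
    light_obj.location = (0.0, 0.0, 9.0)    # 9 m mounting height (assumed)
    scene.collection.objects.link(light_obj)

    # Write a 32-bit float EXR so the high-range values survive for measurement
    # in an HDR analysis tool, instead of being tone-mapped to 8-bit.
    scene.render.image_settings.file_format = 'OPEN_EXR'
    scene.render.image_settings.color_depth = '32'
    scene.render.filepath = "/path/to/night_scene.exr"
    bpy.ops.render.render(write_still=True)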

Lou (00:43:49):

That's crazy. And then Cycles is the rendering engine within Blender, and is that developed by the Blender team or who makes that?

Anthony (00:43:59):

I'd have to look and see who originally developed it, but I would say it's probably the Blender team that did it, because it's always been in Blender. But now I think you can use it in other programs too. You may be able to use Cycles in Rhino, I think.

Lou (00:44:14):

And I'll put it in the show notes. I'll put a link to Cycles in the show notes and some Blender stuff as well. And are they using that? I mean, why does that exist in the first place? It sounds like you are using a tool in a novel fashion, but they had it in there to begin with maybe for merging CGI stuff with video to begin with for movies and things like that?

Anthony (00:44:35):

Yeah. So I think that in the animation world or in the visual effects world, they're interested in creating realistic visuals, and the easiest way to create realistic visuals is to use real world values. So instead of just making it look right, they can actually put in a light with a certain wattage and it actually will calculate the correct reflections off the wall. Your surfaces have reflectance values that get applied. And so if you want something to look realistic and if you use a physics-based render engine, it normally is going to look better than if you're trying to just make it look right.

(00:45:24):

So it's really the visual effects industry that did all of this, and you have big companies like Disney behind it.

Lou (00:45:33):

They generally are a little bit better funded than us measly reconstructionists. And then does that ultimately, and this might be a question better suited for Suway, so if you can't answer it, totally understandable, but does your HDR camera then essentially become your light meter when you're out there at nighttime trying to see what the site looked like?

Anthony (00:45:55):

So Jeff has some other papers that he authored on using video and calibrating your camera to make luminance measurements from your video or from photographs. So if you imagine you have this huge range of luminance values, but when you compress them down onto no longer film, but digital film, like a digital image, there's a compression that's used, and instead of it being linear, it's kind of got an S shape to it. And he has a way of backing that out to the real world luminance values. Now, you're limited to the range that's in the linear region, not so much out on the toes, because it gets so flat out there that you couldn't distinguish between values on the ends. But as long as you're in the middle range, you can back out of your video luminance values. And he has some papers that describe how to do that.
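Purely as an illustration of the inversion idea (not the calibration method from those papers), the sketch below maps mid-range pixel values back to luminance by interpolating a measured response curve while avoiding the flat ends of the curve. The calibration pairs here are invented.

    import numpy as np

    # Hypothetical calibration pairs: luminance (cd/m^2) -> 8-bit pixel value.
    luminance = np.array([0.01, 0.05, 0.2, 0.8, 3.0, 12.0, 50.0])
    pixel     = np.array([  5,   20,   60,  120, 180,  230,  250])

    def pixel_to_luminance(p):
        """Invert the response curve by interpolation. Only trust the middle of
        the curve; the flat ends can't distinguish between nearby values."""
        p = np.clip(p, pixel[1], pixel[-2])     # stay off the extreme ends
        return np.interp(p, pixel, luminance)

    print(pixel_to_luminance(140))   # approximate luminance for a mid-range pixel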

(00:47:06):

And so what we wanted to be able to do is say, okay, we have a video. We can reverse it and get the luminance values. Now we have CG and we can create real world luminance values, and now we want to put those two together. And so part of the process is not just adding CG to your video, but making sure that when you then display them, they're in the same display space. And in order to do that, you have to go from the video's display space to real world values. In the computer, we've already shown that you're starting with real world values, and then when you render it back out, you're going back to a display space. But as long as they're aligned, it works and it looks right, because it is, it's correct.

Lou (00:47:56):

Yeah, it's scientifically founded. And then you have those papers to lean on. I imagine that you guys have testified to this work in the past and been able to do that because of this peer reviewed literature?

Anthony (00:48:10):

Yeah. And so the other papers, well, one of them was validating Cycles. And then we also came up with a method of capturing headlights, which is part of the solution there also, because you have your vehicle that you're driving through the scene that's creating the lighting in the video but doesn't have the CG object. Well, we need those headlights in the computer so that the CG is lit by the headlights of the vehicle that we're driving. So he's driving through the scene and he's capturing video, those headlights have to then light what's going on in our 3D world. So we came up with a method of capturing the headlights using HDR imagery, and then you use that to actually create the light in the 3D world. It's pretty cool.

Lou (00:49:05):

Okay. So this is a perfect example of why you need to pick your lane and stay in it. And a lot of these analyses at this stage, 2023, should be multidisciplinary, because there is zero chance of me having the time to learn that to the degree that you and Suway know it. And it's not something I can put in my toolkit, but the case might benefit from it. So it's like, all right, well, let's bring together everybody who's necessary to figure out how this case happened as accurately as possible. And in 1973, that was a totally different thing than 2023 when you have things like that that just require so much knowledge, so much work. And I'm sure you learned a lot as you're putting it all together for the paper, and there's very few people out there who have that capability. And it's good to know about them and know who to call when you need them.

Anthony (00:50:02):

Yeah, the last paper then, well, two of them, had to do with retroreflective materials, which are so unique because basically a perfect retroreflective material reflects light back exactly to where it came from. Now, that wouldn't be good, because typically the light source is offset from the viewer's position a little bit, so they're not perfect retroreflectors, but they have a narrow band that they reflect back toward where the light source is. And typically the driver is in that narrow band from their headlights to the retroreflective tape, which is why it lights up for the driver from their headlights, but not necessarily for the driver behind them. Your headlights light up the retro tape for you, but for the driver behind you, it won't light up until their own headlights light it, because of that angle.

(00:51:05):

So we figured out how to model that in Blender, which again, because Blender has so many capabilities, it has the ability to know where the light source is and where the camera is. And so you can then tell it what the retroreflective values are and model that within Blender. So we can create a retro effect in our videos, which is why we were able to do, and you may have seen it, tractor trailers with retro tape; we're able to model that inside of Blender.

Lou (00:51:39):

That's crazy. And it's calculating, there's some mathematical function where it's calculating the angle between the light source and the object and the camera position, and modifying it for every time step.

Anthony (00:51:51):

Yeah, so it uses the surface normal, it uses the angle between the light source and the retro tape, and the angle between the camera and the retro tape. So there are a couple of angles going on there. And obviously as that angle changes, the ... number changes. So you have to know those angles, and there's a big formula that you have to plug in, and it does those calculations on the fly.
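The geometry Tony describes can be sketched as follows: from the positions of the light source, the camera, and a point on the tape, plus the tape's surface normal, you get the entrance and observation angles that a retroreflectivity model is driven by. The lookup from those angles to a retroreflectivity value comes from the tape's specifications and the papers discussed here; this sketch only computes the angles, and the positions are hypothetical.

    import numpy as np

    def retro_angles(light_pos, camera_pos, tape_pos, tape_normal):
        """Entrance angle: between the illumination direction and the surface
        normal. Observation angle: between the tape-to-light and tape-to-camera
        directions. Both returned in degrees."""
        to_light  = light_pos  - tape_pos
        to_camera = camera_pos - tape_pos
        n  = tape_normal / np.linalg.norm(tape_normal)
        tl = to_light  / np.linalg.norm(to_light)
        tc = to_camera / np.linalg.norm(to_camera)
        entrance_deg    = np.degrees(np.arccos(np.clip(np.dot(tl, n), -1, 1)))
        observation_deg = np.degrees(np.arccos(np.clip(np.dot(tl, tc), -1, 1)))
        return entrance_deg, observation_deg

    # Made-up positions: headlight and driver's eye about 30 m back from tape
    # on the side of a trailer, tape normal facing the approaching vehicle.
    light  = np.array([0.7, -30.0, 0.7])
    camera = np.array([0.4, -31.5, 1.2])
    tape   = np.array([0.0,   0.0, 1.0])
    print(retro_angles(light, camera, tape, np.array([0.0, -1.0, 0.0])))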

Lou (00:52:17):

That's crazy. So there's a spot for that in Blender where you can tell it how it's going to change based on your research.

Anthony (00:52:24):

Yeah.

Lou (00:52:25):

Geez, I got to get into this. I mean, maybe that's what I'll do for the rest of the year, is try to learn Blender, because it seems like there's a lot there. And I guess the best thing to do is like you, is pick one spot where you think that it could really benefit your analyses, learn that, learn it well, and then just piece by piece add a new thing when you need it.

Anthony (00:52:46):

If I do a course, which it's on my list of things that I need to do, and I'm thinking what it would be is one day where it'll be more like intro to Blender, just kind of how to get around it, and what the different areas are. And then maybe a day on Blender for reconstructionists and just some of the tools that I find useful for recon. For example, somebody authored a GIS add-on where you can go download imagery, aerial imagery, that's geotagged, and if you import that in through this add-on, it automatically scales the aerial.

Lou (00:53:33):

That's nice. Yeah, that's really neat. So yeah, that's definitely on my list, and it's free, which is good. You just got to take a little time, which all of us have plenty of, right? We have plenty of time in this industry. Too many cases, not enough recons at this stage. So if you're listening and you're not a reconstructionist yet, it is an industry with a lot of job security at the moment, and there's a lot more work than there are people to do it. Kind of sticking on photogrammetry for a bit, are you using something like RealityCapture right now where you're taking a bunch of photographs of an object and creating 3D models of it?

Anthony (00:54:13):

So I've experimented with some open source applications, and then also Agisoft Metashape, which that's been a great program. I've used 3DF Zephyr.

Lou (00:54:28):

Yeah, that's a great one.

Anthony (00:54:29):

The open source is Meshroom. And then there's one that's absolutely great for drone imagery called OpenDroneMap, ODM, and it works really well.

Lou (00:54:46):

Dang. I know a lot of the community right now is using Pix4D. I think that the tide seems to be turning a little bit just because it's $350 a month or something like that to have access to that, where a lot of these open source photogrammetry algorithms can handle it now. So how are you processing drone imagery and have you experimented?

Anthony (00:55:09):

So I will say I'm not doing as much of that type of work, but when I was doing more of that work a few years ago, I would run it through Agisoft and then there was a period where I was pretty much using ODM, OpenDroneMap for everything. And you can build your own server if you want, and you can process it locally, or you can process it on servers that other people have built and you pay a small fee. And I think it's a per photo fee, but compared to Pix4D, it's relatively inexpensive. But Pix4D I guess isn't that much either in the big picture.

Lou (00:55:53):

Yeah, I guess, what, 3.5 to 4, 4.5 grand a year, something like that, it adds up. And it is totally worth it if it's the best tool out there. Right now that is what we're using, it just seems like there's room for exploration. Especially with RealityCapture, similar to what we were talking about, how Blender has modules that have been funded by Disney and things like that. RealityCapture was purchased by Epic Games recently, and their budgets are just huge. So the R&D that has gone into RealityCapture and how quickly it processes things and how tailored it can be, and the price is preposterously low. I think if I load 400 or 500 photographs into that, it's like they want maybe $8 or something to process it. And I'm like, yeah, for sure.

(00:56:43):

So I'm exploring that. I haven't finished my exploration of that, but it seems like that is going to be a good tool for creating orthos and 3D models from drone imagery.

Anthony (00:56:54):

Speaking of Epic, they are one of the big sponsors of Blender.

Lou (00:56:59):

Oh, okay. That's interesting.

Anthony (00:57:02):

And that started a couple of years ago when they put a big fund towards Blender, so the development has really improved in the last few years. Blender was on the 2 series, I think; now they're into 3. If you open up Blender 3-point-anything, it looks a lot different than 2.79. If you had opened it five years ago and thought, oh, this is hard to get around, it's changed a lot. And a lot of that has been since they've got some big funding behind them.

Lou (00:57:40):

Well, yeah, we appreciate that. I guess everybody who bought, they make some huge video game, maybe it's Minecraft or something like that, but they're obviously really well-funded. And we'll take any of the trickle down effect of, hey, sure, my kids are, they're junkies and they're in front of the TV for 12 hours a day, but if I get better tools at the office, I'm all for it.

Anthony (00:58:01):

Right. Epic Games, they offer MegaGrants, where you can apply for an Epic MegaGrant and they'll fund your project if you're going to potentially use, say, Unreal Engine or one of their tools, because they're the company behind Unreal Engine.

Lou (00:58:28):

Okay, yeah. Which is free, right? Anybody can go download it and use it for their application right now.

Anthony (00:58:34):

So you can build an application, it's free. If you're going to sell your application, then there's a fee that you pay for that.

Lou (00:58:41):

Actually, yeah, that reminds me of one thing I meant to ask you. So we're talking about the graphics and HVE and potentially using something like Unreal Engine, and then we were talking about Blender and scripts and how you can render beautiful things of course in there. Is there a world where these things don't necessarily become part of HVE, but HVE has the ability to generate a script that you can then just bring over into Blender, with the same meshes that you used in HVE and it's on the same coordinate system, and you just say, all right, we did all the physics over there, now render it beautiful in Blender?

Anthony (00:59:19):

So right now, you can bring over the motion data, you can't bring over the mesh data. But that's something I've been trying to think a lot about, like do we bring a different graphics engine into the playback portion of HVE and take advantage of it right there, or do you write to one of the exchange formats that include animation and output to that and then they can just bring it right into the 3D program of their choice? So those are the types of things that I'm thinking about for the future of HVE. Or do you bring HVE physics into one of these other-

(01:00:02):

... into one of these other graphics engines, because that would be another option: you basically build a front end in a different graphics engine and then take advantage of HVE's physics.
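As a rough sketch of the "bring the motion data over" option (not HVE's actual export format, which is Tony's and EDC's to define), the bpy snippet below reads per-frame position and orientation for one vehicle from a CSV with an invented column layout and keyframes a Blender object with it. Units, axes, and frame rate would all have to be reconciled with the simulation's coordinate system.

    import csv
    import math
    import bpy

    vehicle = bpy.data.objects["Vehicle1"]       # mesh already placed in the scene
    fps = bpy.context.scene.render.fps

    with open("/path/to/motion.csv", newline="") as f:
        # Assumed columns: t, x, y, z, roll, pitch, yaw (seconds, meters, degrees)
        for row in csv.DictReader(f):
            frame = int(round(float(row["t"]) * fps))
            vehicle.location = (float(row["x"]), float(row["y"]), float(row["z"]))
            vehicle.rotation_euler = (
                math.radians(float(row["roll"])),
                math.radians(float(row["pitch"])),
                math.radians(float(row["yaw"])),
            )
            vehicle.keyframe_insert(data_path="location", frame=frame)
            vehicle.keyframe_insert(data_path="rotation_euler", frame=frame)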

Lou (01:00:14):

That's cool. I'm excited to see what you come up with.

Anthony (01:00:17):

Yeah, me too.

Lou (01:00:21):

It sounds daunting, but as the user, I'm excited for where things are heading. A lot of the times I get questions from a client and they'll say, "All right, cool, hey, mediation's in a month, can you get me an animation?" And the way that I think of the term animation is like this photorealistic thing that takes tens of thousands of dollars to create, but at times I'll just show them a high-end rendering from a simulation and they're just mind blown. They didn't even know that was possible. And it's all physics-based. It's nothing that I can manipulate like you can in an animation, where you can do things that are not based in reality.

(01:01:02):

And I think that that's really valuable and impresses a jury, impresses a client and keeps everything nice and tight so that you're not iterating between simulation and animation, which is always a bit frustrating for me because like we were talking about before recording is like sometimes I'll get everything done. I think I'm completely buttoned up. I'll send it out to the animator, they'll do all of their work, it's a lot of money, and then I'll realize one thing that I didn't like that I want to change, and they've got to do a lot more work to get that done. So the more that can stay in simulation land, the happier I am anyway.

Anthony (01:01:41):

There's definitely been this move in the last few years to make it easier to go from simulation to a really nice looking visualization, whether it's Virtual CRASH or PC-Crash with their capabilities of bringing in point clouds and rendering right from inside the simulation software. In HVE, you can improve the graphics if you take the time to add textures to things, you can really get decent graphics with textures. The next major version of HVE will add shadows and it's hard to believe how important that is. Shadows add a level of realism that you don't realize you're missing until you see something with shadows and then you're like, "oh". Because then when you see it without shadows, you're like, I can't tell if it's floating or not. And it's like the shadows really ground the vehicle.

Lou (01:02:45):

Oh, cool. Yeah, I look forward to that. And is it possible, how difficult would it be and is it on the horizon to bring in an environmental point cloud that is not interacting with any of the physics, but is just there to facilitate rendering? Is that feasible?

Anthony (01:03:00):

You can bring point clouds into HVE. Right now, it's not easy. That's one of the difficulties: you have to convert it to a point set in the VRML format and then you can import it. And physics will ignore it. But you can create a decent looking scene. We have to work on how the points are handled visually. As you're aware, in the other programs, depending on how far away you are from a point, it may show up as a different size. That's something that we have to work on on our end: they don't look the same. They're not the same size when you're a thousand feet away as they are when you're 50 feet away, because when you're 50 feet away, it might look great, but at a thousand feet away, you can't see anything because it's so tiny. So there has to be a level of detail based on your camera position. The one thing I did notice in the version that we had with shadows is that the point clouds can actually cast shadows, which is interesting.
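As a rough illustration of the level-of-detail idea Anthony describes, here is a minimal sketch of a distance-based point sizing rule: the on-screen point size shrinks with camera distance but is clamped so distant points don't vanish. The function name and constants are illustrative assumptions, not HVE internals.

```python
# Minimal sketch of distance-based point sizing: scale each point's on-screen
# size by camera distance so points read consistently whether you are 50 or
# 1,000 feet away. Names and constants are illustrative only.
def point_pixel_size(distance_ft, base_size_px=4.0, reference_ft=50.0,
                     min_px=1.0, max_px=8.0):
    """Return a screen-space point size that shrinks with distance."""
    if distance_ft <= 0:
        return max_px
    size = base_size_px * reference_ft / distance_ft
    return max(min_px, min(max_px, size))

# A point 50 ft away renders at ~4 px; at 1,000 ft it clamps to 1 px instead
# of vanishing entirely.
print(point_pixel_size(50.0), point_pixel_size(1000.0))
```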

Lou (01:04:15):

Yeah, that sounds cool. That sounds computationally intense.

Anthony (01:04:19):

But it works. So it's interesting.

Lou (01:04:21):

Yeah, that's awesome. So I wanted to talk a little bit, so you're heading up HVE and EDC, that sounds like a huge undertaking. And then you've got all the publications going on and then you have Momenta. So you're doing consulting work still. You're doing HVE of course, and balancing those two things out. But with your consulting work, what's your current toolkit look like? It sounds like you're saying you don't do a lot of drone work and things like that, but I'm curious just what do you have in your kit as far as scanners, cameras, video cameras and things like that? What do you find yourself leaning on the most?

Anthony (01:05:02):

So I have a drone, but the number of site inspections I'm doing is less than it would've been, say, four years ago. That said, I just flew my drone last week.

Lou (01:05:18):

Okay. Yeah, it's still fresh.

Anthony (01:05:20):

Yeah, I have a Trimble X7, which I love.

Lou (01:05:26):

Yeah, I've done some testing with that thing and have been very impressed.

Anthony (01:05:31):

I have a little BLK360, the original version.

Lou (01:05:35):

Not impressed. I am not impressed at all.

Anthony (01:05:39):

So we'll have to talk about this in more detail sometime later, but one of the things that I do with the BLK is I put it up on a 16 foot tripod.

Lou (01:05:51):

Like on a shelf, maybe? Just to let people know you have it? But it does look good.

Anthony (01:06:00):

I get it up on a 16 foot tripod.

Lou (01:06:04):

That's cool.

Anthony (01:06:05):

Yeah, where I'm not afraid if my BLK happens to fall, it's not the same as-

Lou (01:06:10):

Because it's useless. Yes, I get it. I'm kidding. I'll stop bashing on it now. Leica makes really good stuff. That's just not one of the things that's good for scanning cars or anything like that.

Anthony (01:06:22):

So I think that's the key is that if you're using it for scanning cars, for example, in the way that you are doing it to create a point cloud that captures details of the vehicle, then it's not going to be the tool for that. Now, can it capture a crushed vehicle? Certainly, because I was using a tape measure before and I can guarantee you it does better than a tape measure.

Lou (01:06:49):

No, but I'm sorry, I interrupted. So you put it up high and you're using that, generally speaking, at collision sites, and that allows you to get a nice angle of incidence between the beam and the roadway. So you're capturing a lot of good data.

Anthony (01:07:03):

Yes, it does a lot better. And it's fast. Now, I have the original one. So what was fast when that came out is no longer fast because now the RTC360 is lightning fast. The Trimble is lightning fast. Now the new BLK, which I'd like to test out just to see how it does, is 10 times faster than those. Apparently it can do a full 360 scan in under a minute with photos.

Lou (01:07:32):

Holy cow.

Anthony (01:07:33):

But it doesn't have the reach, and that's what I'm interested in just seeing how well it does. But if you don't have the reach, but you can get 100 more scans in, maybe it doesn't matter.

Lou (01:07:45):

Exactly. If they're a minute long, that's cool. And that thing's so small, it can go in footwells and all sorts of tiny little places, underneath if you're looking at the reinforcement beam in a low speeder, something like that, and you want to just put it on the ground and maybe get the aluminum beam.

Anthony (01:08:02):

Yeah, I'll give you some examples. You're on a busy roadway, pedestrian traffic, vehicles on the side of the road, you can't get out into the roadway, but you could get your BLK 16, 20 feet up in the air and scan from up there. It doesn't matter if there's vehicles parked on the side of the road, you can get over top of them and you can still see the roadway. So that's one area that's nice. The other is, I mounted it on my truck, my pickup truck that has a roof rack. I mounted it on the back left corner of the roof rack, and I can drive on the side of the highway, stop, start my scanner up from my iPad, scan, drive 50 feet, scan.

Lou (01:08:49):

That's awesome.

Anthony (01:08:50):

Drive 50 feet, scan. Yeah.

Lou (01:08:51):

That's brilliant.

Anthony (01:08:51):

So there's times where you may not be able to get a good scan just due to traffic. For example, New York City, there's times where you can't stop and scan. You have pedestrians everywhere. But I can pull along the side of the road, park, and scan from the top of my truck. What would be nice is to have one of those poles that comes up, if I can get one.

Lou (01:09:16):

Yeah, like a news van or something. That would be really cool. And as much crap as I'm giving the BLK, obviously the fit and finish, the size, the form factor is fantastic, and Leica is obviously a very reputable company. The price is right. It's just that you don't want to scan anything that's shiny or dark with it. It's best for interiors with white walls or outside. And it sounds like it does a good job capturing the roadway.

Anthony (01:09:43):

Yeah, it does okay on the roadway. So probably you get it maybe like 50 feet. After 50 feet, you just start to lose data in the gray roadway. It'll pick up the white lines much further. But if you're trying to actually capture the asphalt, it doesn't have the reach that the other devices will have. The Trimble X7 is great. I know you tested it. It does well. It does really well. And it's fast and it's rugged and it's waterproof, which is nice, whatever, waterproof.

Lou (01:10:23):

IP54 or whatever the heck it is. Yeah. And so how long have you had that?

Anthony (01:10:30):

About a year. Yeah. It was about the time I think that you had tested yours. It was around that same time. You tested one when you did the round robin I guess, or whatever you want to call it with the RTC, the FARO and the Trimble.

Lou (01:10:47):

I still have to publish that. It makes me so sad. I've written it. It's sitting there. It's just waiting for a content editor to go through everything and fix some grammar and make sure that it's not so boring that nobody will want to read it. That's generally how I write, just very matter of fact.

Anthony (01:11:03):

I find it fascinating because those three, you did the FARO, I forget which one, but you did the RTC360 and the X7 and they all did well.

Lou (01:11:15):

Yeah, they all did really well. They're all good tools. The RTC is like if you have the money, it's the best tool. It's very clear, but it's twice as expensive as the X7, essentially.

Anthony (01:11:26):

And what I found interesting was that the X7, I think if I remember you did a Tesla with it, which has the glass roof or whatever, and the X7 did better on the roof, but not on the interior. And the RTC did really well on the interior, and it's kind of like why? It's an interesting finding.

Lou (01:11:44):

No, that was really strange. And I talked to Trimble about that. To their credit, they were very interested in what happened with the interior because that Tesla had that black pleather, but it was a gray Tesla with glass roof, so I'm with you. It's like, okay, if you can get the glass roof, you'd think the seats would be easier than that, but they were essentially vacant.

Anthony (01:12:02):

Yeah, it's weird.

Lou (01:12:04):

It is weird. And then, so what are you using for a camera right now? I know mirrorless is all the rage. I have not switched over. I'm still using a Nikon D750, and it's just tried and true and the battery lasts long and it focuses well. So where do you fall in that camp?

Anthony (01:12:21):

I have an older Nikon D7000, and I have a Sony a7S II.

Lou (01:12:26):

Oh, nice. And that's for a lot of your video work I imagine.

Anthony (01:12:31):

Yeah, video, and if there's any nighttime type applications, the a7S I, II, and III have a superior sensor for capturing in low light levels.

Lou (01:12:49):

Yeah, I've been very impressed with that. That's actually what I'm recording this with right now. And then we'll go out and use it with a 50 millimeter fixed lens so we can try to recreate what the human eye is seeing to the best of our ability. But like I said, if it gets too complex and it's a nighttime thing, then we'll hire you and I won't try it.

Anthony (01:13:08):

Yeah, I have that camera. I have way too many cameras. You had asked some questions that you were going to go through. I'm going to answer one right now because it was what was under 5,000 I think. What was the question?

Lou (01:13:26):

Yeah, you can kind of pick the number, but I figure in recon that's a good number that covers a lot. It's like, what's your best investment under 5,000, or the best tool you bought under 5,000?

Anthony (01:13:36):

So I'll answer that. It's this guy right here.

Lou (01:13:43):

Apple, the 13 or whatever it is.

Anthony (01:13:45):

The iPhone is the best tool under $5,000. It's an incredible camera. It shoots macro, it shoots incredible video. You can do slow-mo video. It has an accelerometer built in, a GPS built in. If I'm on a scene and I can't get access to some program that I have because I'm logged out, I can set up a hotspot and log back in. So by far, that has been probably the best tool under 5,000 that I have. And it doesn't have to be an iPhone. It could be your phone of choice, but it's been huge. So as far as cameras go, it's one of my favorite cameras. And you always have it with you, which is great. That's the best camera.

Lou (01:14:35):

Exactly.

Anthony (01:14:35):

The best camera's the one you've got.

Lou (01:14:36):

Exactly. It's so true. And I recently upgraded, one of the big motivators for me upgrading was Eugene Liscio's app Recon-3D, and I don't know if you've had a chance to play around with that or any of the other similar programs, but with the LiDAR in there, one of the things that we used to say is just that it's more like a computer than anything, but now it's not because it's a computer tied with, like you were saying, all of this unique hardware in the form of GPS, LiDAR, cameras, accelerometers. You could make this essentially as good as a VBOX if you have the right filtering. So I don't know, have you used the iPhone for any data acquisition, like skid testing or anything like that at this point?

Anthony (01:15:29):

I've played with it, I'll put it that way. But I haven't used it on anything that I've needed that data for that I can think.

Lou (01:15:36):

I know there were some SAE papers, like 2016, where some of the, I think, Kinetic Corp guys were looking at the sensors and comparing them to bona fide data acquisition systems that are scientifically oriented, and they were finding a really good fit. And I went down a similar path because one of the things that I was really interested in measuring was motorcycle lean angle, and from an iPhone or the like, you'll get rates, roll rates, but you're not going to get the absolute roll. And when you want to get the absolute roll, then it requires high-end filtering, which is essentially not going to be paired with any of these apps. You have to go to something a lot more big league.

(01:16:16):

So I think the sensors in these things are just as good as a lot of the other sensors that we're dealing with. It's just about the filtering. So if you can export to MATLAB or something like that and apply your own filter, or find an app where they have already done that legwork. And app prices are ridiculous, they're like, yeah, well, it's six bucks for this one. And we're like, oh gosh, six bucks. How dare you.
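For context on what "apply your own filter" can mean, here is a minimal sketch of one common approach: a complementary filter that blends integrated gyro roll rate with an accelerometer-derived roll angle to estimate absolute roll. This is illustrative only, it isn't what any particular app does, and as Lou notes, real motorcycle lean-angle work needs far more careful filtering, since a leaning, turning motorcycle's accelerometer sees more than gravity.

```python
# Minimal sketch of a complementary filter for roll angle from phone-style
# sensors. Illustrative only; not a substitute for validated filtering.
import math

def complementary_roll(gyro_roll_rate, accel_y, accel_z, dt, alpha=0.98):
    """Yield a roll-angle estimate (radians) for each synchronized sample."""
    roll = 0.0
    for rate, ay, az in zip(gyro_roll_rate, accel_y, accel_z):
        accel_roll = math.atan2(ay, az)  # roll implied by the gravity vector
        roll = alpha * (roll + rate * dt) + (1.0 - alpha) * accel_roll
        yield roll

# Example: a constant 0.1 rad/s gyro rate while the accelerometer still reads
# level. The filter keeps pulling the estimate back toward the accelerometer's
# answer instead of integrating the gyro output indefinitely.
rates = [0.1] * 100
print(list(complementary_roll(rates, [0.0] * 100, [9.81] * 100, dt=0.01))[-1])
```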

Anthony (01:16:38):

I know. When anybody's charging more than 50, it's like, what? This must be the best app ever.

Lou (01:16:45):

Exactly. How much work must they put into this? All right, so I think that ties up the tool section. I think it's cool for people to hear what heavy hitters like you are using as they're working up a crash. Obviously you have a lot of software tools too.

Anthony (01:17:02):

Well, also on the camera front, I have a Garmin 360, the VIRB, that I love. I mount that outside of my vehicle and then record 360. And what's great is it also has a GPS built in, so it'll track your speed. You can actually interface with your OBD II if you want. You can get speed data and things off of your vehicle. So anybody who's not familiar with Garmin's cameras, you should look into them, the VIRB cameras.

Lou (01:17:33):

Dang it. So is the OBD II just being streamed via Bluetooth to the VIRB?

Anthony (01:17:37):

So I don't have that access, but you can get a connector to connect it to your OBD II and then transfer it.

Lou (01:17:45):

That's crazy.

Anthony (01:17:46):

Yeah, it's pretty interesting. I think it is a Bluetooth OBD II connection. Actually I have one that I used to use with my phone, but I haven't used that in a while. But just the fact that it captures GPS is great. I mean, you get vehicle speed, so even if you're not going to use that as your video, you're going to shoot with your Sony or some video camera. If you also have that running, now you have GPS data that you can then tie to your other video, which is nice. So I use it in that way. Even if I'm shooting video with a different camera. I brought some toys to show.

Lou (01:18:25):

Oh, nice. I love show and tell.

Anthony (01:18:27):

Yeah. So this is a ZED stereo camera.

Lou (01:18:31):

Is that Intel or?

Anthony (01:18:33):

No, they're their own company, Stereolabs, I think, is who makes it. And so I got this years ago when I was experimenting with using SLAM, simultaneous localization and, oh, what is it? Mapping. I don't know if you're familiar with SLAM technology, but it is used in robotics mainly. It can track where something's moving by using the images in a video.

Lou (01:19:07):

Makes sense. I mean, there's two cameras. They know the characteristics of them, they know how far they are from each other, and then it's essentially like human eyes.

Anthony (01:19:16):

So it builds a three-dimensional world as the robot goes through the world. So you could use it in a similar way to create a point cloud of a vehicle just using this and going around it. Now, of course-

Lou (01:19:29):

I think that's what DotProduct is doing, out of Boston, and they're using Intel's stereo camera like that, and then just hooking it up to an Android tablet. And doing that, and the data looks really good and it's a low-cost solution. So could you hook that to a car, like suction cup it to the roof, and drive through a scene and get a point cloud, or is that too big?

Anthony (01:19:52):

I've tried it on scenes and it's worked okay, but now you use your drone or you use your scanner. I still use it to capture stereo imagery though sometimes, like if I'm driving through a scene and that could be important, maybe if it matters to have stereo imagery. On the other one, the new one, the lenses are a little closer together so that it mimics more of the human eye. These are a little far apart.

Lou (01:20:22):

So could you then just put that into an Oculus Rift or some sort of headset and be looking at things realistically? Oh, nice. There we go.

Anthony (01:20:31):

Yeah. So other questions, like what's the future? What are we going to be doing in the future? I think that virtual reality is somehow going to be in the future. It's just, how are we going to use it in our industry? I think the Rift, well, part of this is the Quest. The Quest has gotten the price down, and I guess Facebook took it over, or Meta now. And so there's a lot of development in using it for training and things of that nature. I think it's just a matter of time until it becomes more useful in our industry. I'm not sure exactly how we would do it. Are we going to put a whole bunch of jurors in the Quest? I don't know. But I do see that, yeah, there's a potential future there. I have multiple 360 camera setups.

Lou (01:21:26):

Oh wow.

Anthony (01:21:26):

Yeah, so that's like stereo imagery in 360, same thing with that, stereo imagery in 360.

Lou (01:21:31):

That's beautiful.

Anthony (01:21:34):

Yeah, you can capture video with that and then you could watch it back on the Quest and it'll be 3D and stereo.

Lou (01:21:45):

Dang. So yeah, then it would obviously be looking at the accelerometers in the head unit. So how much is one of those 360 degree arrays?

Anthony (01:21:55):

So for 360, you can spend anywhere from about $1,000 up to $12,000. This is an older one, and so it might be 4K output in 360. That's not good enough. You want 8K, at least. The $12,000 one, I think, is 12K.

Lou (01:22:23):

I can only imagine how big those files are.

Anthony (01:22:30):

When you're talking 360 and stereo, if you imagine 4K video looks good, but you need to have 4K anywhere you look to make it look really good. Or 1080P, you have 1080P, but you need it everywhere you look. So you need 12K video for 360 degrees in both images, both eyes.
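The "12K" figure follows from some rough arithmetic, sketched below under the assumption that a single comfortable view spans roughly 110 degrees horizontally and you want 4K-class sharpness within that view.

```python
# Rough arithmetic behind the "you need ~12K for 360" point. The field of
# view and pixel count are assumptions for illustration.
view_fov_deg = 110          # assumed horizontal field of view for one view
pixels_in_view = 3840       # "4K" horizontal pixels
per_degree = pixels_in_view / view_fov_deg
full_panorama = per_degree * 360
print(round(full_panorama))  # prints 12567, roughly "12K" of horizontal pixels, per eye
```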

Lou (01:22:54):

Yeah. You got a lot of fun toys over there.

Anthony (01:22:55):

Yeah. Yeah. That's more experimenting, where are we going in the future.

Lou (01:23:02):

Which is one of the cool things, I see you experimenting a lot. I try to experiment a lot too; sometimes I'll give up billing and I'll give up the things that I should be doing just to have a little bit of fun and start playing with things that I think might be useful down the line. Which kind of brings me to, I don't think you know this question's coming on any level, but I saw on LinkedIn that you take these classes from Coursera, and I did a little bit of research on them and it seems really cool. So in a similar vein, you just continue to learn things and challenge yourself. So what is that platform? What classes have you found to be beneficial on there? And do you recommend other recons look into that?

Anthony (01:23:48):

Yeah, absolutely. Or even LinkedIn Learning, I think, has a lot of classes now. A lot of universities now are tied in with Coursera. And so you can take courses from, I think I took a couple from Duke, University of Illinois, and you're actually getting the course just like a student in the class would be getting the course. It's just prerecorded. Now, some of them you can take and actually get, if you want, you can sign up and get college credit for them. Now, the requirements are a little different in terms of your requirements to do homework and things of that nature and take tests. But if you just want to take the courses, you can get a certificate, and they're great. I took the University of Illinois Vehicle Dynamics class, which was excellent.

Lou (01:24:44):

Oh, wow. Who taught that? Is it somebody that we know in the community or it's not like Dan Metz or something like that?

Anthony (01:24:53):

No, it wasn't Dan Metz.

Lou (01:24:54):

Okay.

Anthony (01:24:56):

Yeah, I can't recall at this time. And then I took some courses that had to do more with vision and the brain and human factors and things of that nature. So there were some really interesting ones. I think one was at Duke, fascinating, on the eye and vision and the brain and how it works.

Lou (01:25:20):

That was in service of, I imagine, those papers that you were writing, kind of looking at low light visibility and things?

Anthony (01:25:28):

Well, yeah, just in general I have this interest in vision, starting with the Wilmer Eye Institute at Hopkins, coming out of there. It's just always been something that's interesting to me. And it plays into our field. I mean, it's huge, how we see and how the brain interprets what we see. So when I saw those courses were available, I was like, oh, that's perfect for continuing ed, instead of some of the courses that you end up having to take that you're trying to get credit for your PE or whatever it happens to be. But I found these very interesting courses to take also and learn from them.

Lou (01:26:08):

That's cool. Yeah. Well, I was looking through there and I think I'll try to find some on there that'll blow my hair back and take them. It's really cool to have access to that sort of stuff. I've taken a couple classes from MIT, just going and watching some prerecorded things. I don't get any credits or anything. It's just like, well, free education like this. It's ridiculous.

Anthony (01:26:30):

It's unbelievable. Yeah, it's unbelievable. I was going to say that you can take any course at MIT, which is unbelievable.

Lou (01:26:36):

I love it.

Anthony (01:26:37):

And for younger people today coming out of school or even going into school, you can tailor what you want to learn. It's a lot different because you can choose the best classes for you. Okay, I want to take these two courses at Duke and I want to take these two from wherever because this professor's teaching this and this professor's teaching that, you can really tailor a course load. Now, maybe you're not getting a degree out of it. Like you said, it's more about the learning aspect as opposed to the degree.

Lou (01:27:13):

Yeah. Hey, you could become Will Hunting. I'll take that.

Anthony (01:27:19):

How do you like them apples?

Lou (01:27:21):

Yeah, exactly. All right. So I wanted to switch gears a little bit and kind of get into the future. We've talked a little bit about it already, but where you see things heading. And I reached out to Eric Deyerl, who I used to work with and who's an HVE power user and a simulation guru in general. And one of his questions, he had a bunch of very interesting questions, some of which we'll get into, but it's regarding the short-term and long-term future of HVE. So what do you kind of have on your short list of things to accomplish? And then, kind of looking out to a farther horizon, maybe 10, 15 years down the line, what do you think the platform will look like?

Anthony (01:28:03):

So the short list is to improve on the user interface, the way that the user interacts with various menus within HVE, the tabling system. Being able to change things, say you want to go in and change all the brakes on a tractor trailer, instead of having to click through all of them, being able to do things in a more global manner, all at once. So just trying to work on how you interact with the program. And I'm always thinking about improvements to physics, mainly in SIMON and DyMESH, and the one area where I think that most of these physics programs, simulation programs, don't do as well is in low speed.

(01:29:00):

So sub one mile per hour or something is where your tire models start to break down, I'll say. So improving the tire models at the low end of the speed realm, and then also looking into more improvement in low-end impact modeling too. What I'd love to do is to get some testing, maybe from somebody who's done a lot of low-speed impact testing, and start to do some validation at that end. And I know there has been some, but just more of that type of work. Longer term, the graphics engine, we've talked a lot about it. Possibly looking into not being tied to a Windows platform would be nice. That's not as big a deal, but maybe some people want to run it on-

(01:30:03):

That's not as big a deal, but maybe some people want to run it on a Mac or a Linux or on the internet, getting it into a cloud-based system, which some people would benefit from being able to just log into the computer, all the processing's done in the cloud somewhere. And then there's maybe some physics applications that can be run on other types of devices. And I'm not saying all of HVE on here, but there could be things because HVE is obviously a package with physics programs inside of it. And so when I think of HVE, it's not just simulation. It could be other tools that you may want to use in accident reconstruction, and some of those tools may be able to be run on your phone or on your iPad or in different areas like that. Yeah.

Lou (01:31:04):

Yeah. Like you're saying, the iPhone is one of the things... It's kind of like that same sentiment is the best camera is the one you have with you, the best computer, the best analytical tool is the one that you have with you. And sometimes when you're on the road, it's nice to be able to run some quick analyses while you're there to get an idea of what's going on.

Anthony (01:31:26):

For me, one of the big positives of simulation, I think sometimes you can do the recon by hand or in some simple way, an Excel spreadsheet, something like that, but visualizing how an accident happens is huge. And it helps when you are trying to figure out marks at a scene and you're like, "I can't figure out that mark." And then you run a simulation and you're like, "Oh, now I know how that mark was made. It's this tire, or it's something underneath, maybe, that created that mark." It really helps you visualize. When you can watch it, then you can start to figure out, "Oh, these marks now make sense." Sometimes you're at a scene and you'll have marks where you just go, "I'm not sure what made those marks."

Lou (01:32:16):

Yeah, that describes my process. I'd say most of the time when I'm looking at auto crashes where there's a lot of tires involved, of course tractor trailers too, but even if it's just two cars and you have eight tires, I'll generally go through the photogrammetric process of modeling them, building my site, building my scene virtually to bring it into some sort of simulation package without getting too married to anything yet as far as what car made, what tire marks. And then I'll start running the sim and figuring out what I think is likely to have happened and then start getting an idea of, "Oh, those marks are coming probably from this guy," and then I'll go back to the physical evidence from the scene photographs or whatever I have to try to corroborate that, but it's always a back and forth for me.

Anthony (01:33:03):

Yeah. One of the... Just kind of sidebar on the same thing is when you have secondary slaps on vehicles, you have a nice impact and then the vehicles come and touch, simulating that is great because you can keep simulating until you get the secondary slap in the right spot. Just another something that stands out to me in terms of simulation.

Lou (01:33:31):

Yeah, a little harder to do that with hand calcs. One of the things, I teach the motorcycle class and we have a rotational mechanics section where we are trying to figure out how fast the motorcycle is going. You and I have talked about these calculations, based on how much the target vehicle rotates, and no matter how good your hand calc model is, it's never going to be as good as a simulation model that can account for the steering angle, that can account for what the driver's doing with the brakes, that can account for the fact that these tires are going to actually rotate a little bit as the vehicle is sweeping that arc to final rest. When we're doing the hand calcs, you kind of have to pick one coefficient. You're like, "Oh, the tires, let's call them locked, 0.7 g." So with simulation, what you can account for, the subtleties and how sophisticated it can be, can be daunting at times.

(01:34:28):

I imagine, especially if you're new to it, you're looking at all these input parameters. And I remember the first time I was exposed to HVE, I was just like, "How could I ever fill this out completely and confidently?" But the more you use it, the better you get at it and the more comfortable you get. And you could perform sensitivity analyses and just say, "All right, well, let's assume X, Y, or Z."

Anthony (01:34:48):

Yeah. I was just going to say, I think an important thing is for people to do sensitivity analyses for themselves to understand what effect, say, like we were talking about suspension parameters before, it's like, well, okay, halve it, see what happens, double it, see what happens, and get an idea of what range actually makes a difference in what you are trying to determine. I mean, that's ultimately what the question is. If you're trying to determine an impact speed, suspension parameters probably aren't going to matter in a vehicle to vehicle accident that much. But there are cases where they may matter, and in those cases you just need to know that, that they have a bigger effect.

(01:35:32):

When people ask, it's like, "Well, you don't know all those parameters." If you look at the vehicle output from a SIMON run, it's like, you don't know all those parameters. And it's like, "Yeah, but your 360 momentum has none of them." So if you can get an answer with no parameters, none of the vehicle parameters except for weight and speed, again, it comes back to what you're trying to determine. And sometimes those parameters won't have a big effect on the ultimate thing you're trying to determine, especially if it's speed in an impact. And again, we use 360 momentum and we get a pretty good answer. But if you're trying to look at how the vehicle spun out afterwards, or like you said, a motorcycle impact where you have the vehicle sliding sideways and the tire forces matter and how far did it slide and things like that. Okay. Yeah. Or my example earlier with the vehicle rolling.
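The halve-it/double-it check Anthony describes is easy to organize as a simple sweep. Here is a minimal sketch; run_simulation is a stand-in for whatever tool you use, and the toy model and parameter names are illustrative only.

```python
# Minimal sketch of a one-parameter sensitivity sweep. The point is the
# bookkeeping, not the physics; run_simulation is whatever analysis you run.
def sensitivity_sweep(run_simulation, baseline_params, name, factors=(0.5, 1.0, 2.0)):
    """Return the output of interest for each scaled value of one parameter."""
    results = {}
    for factor in factors:
        params = dict(baseline_params)
        params[name] = baseline_params[name] * factor
        results[factor] = run_simulation(params)
    return results

# Example with a toy model: the impact-speed answer barely moves when a
# suspension stiffness is halved or doubled, so that parameter is not
# driving the result.
toy = lambda p: 32.0 + 0.001 * p["front_spring_rate"]
print(sensitivity_sweep(toy, {"front_spring_rate": 200.0}, "front_spring_rate"))
```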

Lou (01:36:31):

If you're trying to match that, it becomes a lot more sophisticated. And one thing that I try to do, well, I think I always do this, but I'm sure there's cases where I haven't been able to, is back up my simulation with hand calcs, and then I walk in, because one of the common cross-examination questions on any simulation is garbage in, garbage out, right, Mr. Peck? And I'll be able to defend, via sensitivity analyses, the parameter selection in general, what literature I used to get those parameters or what testing I used to get those parameters. But at the end of the day, I can defend all that and then just fall back on, but my momentum analysis, if that's applicable, or my rotational mechanics analysis, which is very basic, is completely in line with this simulation. So if you want to hammer this down, then you're going to also have to hammer down my rotational mechanics analysis. And that's going to be a little bit difficult for you to do, I think, because it's based on simple Newtonian physics.

Anthony (01:37:27):

Right. Right.

Lou (01:37:29):

All right, cool. Well, so we were talking about low-poly modeling, and this is probably a good time to talk about it. At Lightpoint, of course, we have like 800 point cloud models that we've been developing over the past couple years. And then when Tony and I hooked up a couple years ago and started talking more to each other, realizing both of us I think wanted a way to make low-poly meshes to be implemented in our simulation platforms and 3D graphics rendering platforms, and we kind of put our heads together to try to figure out how to make that happen. And you were instrumental in finding the right people and developing the process. So if you want to talk a little bit about that, where we are, where we're going, and what that process has looked like.

Anthony (01:38:19):

I'll give Matt his due. Matt Blackwood is instrumental in this process, but basically going from a point cloud to a low poly model seems like it should be easy, but it's not.

Lou (01:38:32):

Yeah, it does.

Anthony (01:38:34):

Yeah. I mean, even meshing a point cloud in itself can be... I mean, you can do it in CloudCompare, you can do it in other programs, but you'll get a very dense mesh because the point clouds are dense typically. So you have a lot of information there, and it's normally not as... I mean, you can get really nice looking models, but they're not as useful for, say, HVE DyMESH, where you want a low-poly model. You don't want a lot of vertices because it'll really slow the program down. So low-poly modeling I think is going to be huge. Talking about the future, I think it's also huge for other real-time rendering engines like Unreal and Unity, which are rendering on the fly, where we were talking earlier about Blender and Cycles, where it might take three minutes or eight minutes or a half hour to render one frame depending on how many times you want that light to bounce around. Real-time rendering happens in real time. And so game engines-

Lou (01:39:46):

That's crazy.

Anthony (01:39:47):

Yeah, they need to do it 60 frames per second to get something that... Or 30 frames per second, whatever. But normally it's going to be like 60 frames per second. You're rendering 60 frames per second, and it looks really good. And the way that they do that is with low-poly modeling and really nice texturing, so it looks great. Well, our goal is for the underlying model to be accurate, which is where the point clouds come in. So we're starting with accurate measured documentation of a vehicle, and then we want to create a model that's going to be useful in our industry, so that you can defend it if asked about it in court. It's like, well, where'd you get that model from? Well, it came from a point cloud, and here's the documentation, and the low-poly model is based on that point cloud, and here's the overlay that shows that it matches, it's dimensionally correct.
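As one concrete example of going from a dense mesh to a low-poly one, here is a minimal sketch using Blender's Decimate modifier. This is only an automated starting point, not the careful manual retopology the low-poly work described here relies on, and the object name and ratio are illustrative assumptions.

```python
# Minimal sketch of knocking a dense mesh down to a low-poly model inside
# Blender with a Decimate modifier. Object name and ratio are illustrative.
import bpy

obj = bpy.data.objects["DenseVehicleMesh"]   # hypothetical dense mesh
mod = obj.modifiers.new(name="LowPoly", type='DECIMATE')
mod.ratio = 0.02                             # keep ~2% of the original faces
bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
print(len(obj.data.polygons), "faces after decimation")
```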

Lou (01:40:56):

And those are two gaps that we've never really been able to bridge, where we know that it is a scientifically sound model as far as the measurements go, and we're also getting low poly straight out of the gate. Whereas even if you go to TurboSquid and buy something, it may or may not be accurate to the true dimensions, and oftentimes there's too many polys to make it useful, so you have to spend a lot of time whittling it down. So hopefully we're getting both of those taken care of, and then of course you've developed methods so that it's going to be delivered in a format that is instantly integrated into all of these different platforms: Virtual CRASH, PC-Crash, HVE, msmac3D, and any general CAD program like Blender.

Anthony (01:41:46):

Right. Yeah, exactly.

Lou (01:41:49):

It's going to be great.

Anthony (01:41:50):

Or if you want to use it in Unreal Engine or Unity or like you said, 3ds Max, it'll work in any program.

Lou (01:41:55):

Yeah, I remember when we first started dealing with point clouds maybe 10 years ago, and we thought that it would be easy. We're like, "Well, in a year or two, somebody will have a... You just push the button and the point cloud becomes a 3D mesh and off you go." Still has not happened. It's still a largely manual process to get everything done. Matt just excels at it and is able to do it very quickly with the processes he's developed to help speed things along. But they're beautiful looking models and I'm looking forward to getting those out there and using them myself. A lot of my entrepreneurial ventures within recon are just scratching my own itch, just like, "Well, I want this, so let's make it happen."

Anthony (01:42:39):

Yeah. Well, talking about changes to HVE, that's half of it. It's like when I find something that I find annoying, it's like, "I need to fix this. I want to change this." It's the same idea.

Lou (01:42:51):

Yeah, I'm sure that's why Terry thought it was really important to bring a user in to take it over. If you brought in somebody from outside the community, they would not know the pain points and what needs development. And I think we touched on this one.

(01:43:04):

One of the things that I wanted to talk about, but it kind of goes to Blender and HVE and the potential integration of those is do you think it's possible in the future we are going to have an all-in-one tool that we can use for photogrammetry, that we can use for CAD, that we can use for rendering, that we can use for simulation, and just have that all under one umbrella?

Anthony (01:43:28):

That's definitely possible in the future. So Blender is kind of that all-in-one tool, except for the physics aspect of it, right? Virtual CRASH or even PC-Crash, I mean, HVE, msmac3D, they all do part of the graphics. Virtual CRASH, you can bring in point clouds, which is great, but you can't do the photogrammetry stuff. You can't maybe do all the video editing type things. You can't do the motion tracking. Blender is an amazing tool because it can do all of that. And I've thought a lot about using HVE physics in it in a way, because Blender has its own physics too. It has rigid body physics, it has soft body dynamics, it has computational fluid dynamics. There's lots of physics that you can work with inside of Blender, and you can program, it's open source, so you can program for it all day. I mean, somebody made a fracture add-on where you can basically fracture a building and it crumbles down. I mean, there's some really cool tools that have been made for Blender.

(01:44:40):

The one unique thing that I haven't figured out how to do, where HVE shines, is it's built, and this goes to Terry, it is built based on the matrix: human, vehicle, environment, event, playback. You add a human, you add vehicles, you add an environment. In the event, you can add multiple events and use different vehicles. And then in playback, you can combine all of those. And what's nice is inside one program, you can have your event, you can have what-if scenarios all in one file. And that's pretty powerful when you want to do what-if scenarios and say, "Okay, here's my simulation. Now I want to see, what if this guy was going 50 instead of seven?" Okay, copy the event, change it to 50, run it, see what happens. What if you steered differently? What if you brake? You can do those all in one file.

(01:45:43):

And it's because of the way it's laid out in these compartments. And the question is, these other game engines, or like a Blender, which is a 3D program, it's not made for that. You'd have to build a front end to it to utilize that sort of mentality, which is, I want to have a list of objects that are humans, I want a list of objects that are vehicles that can be used in my simulation. I've thought a lot about this though, because what would be really cool in a program like Blender, 3ds Max, or any of these programs, is if you had nodes that were your physics nodes and you had objects that you could define, and you say, "Okay, these are the objects that I want to go into this SIMON or msmac physics node, and then there's an output that comes out of that node." But you could have all kinds of other things in the scene that are completely ignored by physics. So you could have some really high-end, cool-looking stuff in your scene that you're not even sending to physics.

Lou (01:46:49):

Yeah, it's just being handled by the CAD program, which is already fully built up. Dude, that sounds awesome.

Anthony (01:46:56):

Yeah.

Lou (01:46:56):

Make it happen, Tony. Make it happen.

Anthony (01:46:59):

Yeah. But to me, I've thought a lot about this process of how we create events and can create multiple events and then combine them in post. So I don't know if Blender is the right tool, but for me, obviously everybody realizes I love Blender as a tool. I think it's very useful. HVE is my number one tool. Blender is my number two tool. So if I could combine the best of both worlds, that would be amazing. Absolutely.

Lou (01:47:35):

Yeah, it sounds like that to me too, just thinking about it from the way that I currently handle things; the more I can stay in one program, the smoother everything becomes. And right now, the workhorse of my photogrammetry projects is absolutely PhotoModeler, but there's a few things that they're not doing on the CAD side that I'd really like to see them do. And to their credit, they're very open to it. So I'll probably call them and have these conversations and see if they can do that. But for instance, just moving a point cloud around in PhotoModeler until it aligns with the corresponding pixels of the surveillance frame or something like that would be hugely beneficial. But it sounds like maybe removing distortion in PhotoModeler and figuring out the camera parameters, and then just going into Blender from there and handling the rest, would be a great workflow.

Anthony (01:48:36):

And I'll have to look, but I think that there may be an add-on that brings in, I can't remember if it is PhotoModeler, but there may be an add-on that'll bring in the PhotoModeler cameras and everything.

Lou (01:48:49):

The add-ons for Blender are extensive from what I've been seeing, which is awesome. And then one of the cool things about PhotoModeler as well is what you can export for a lot of very specific CAD programs like Rhino, and I think there might also be a Blender export so that it's all scripted and ready to just bring right into Blender. And then what about integration of EDR data? Of course, we're getting all this pre-impact data now, we get five seconds of pre-impact data generally at two hertz or something like that. Have you considered making just an input table, so it's like, all right, here's what the EDR says, and that can just be run in a sim? So you hit play and it's taking those inputs and turning it into a dynamic simulation?

Anthony (01:49:32):

Yeah, so I've gone back and forth on this one because users definitely want it. Actually, I think we just got a question this week and throughout the last couple years it's come up where people want to enter speed versus time because that's what you get in your EDR data. And we can do that. I could create a package where you could enter in speed versus time and it would then figure out where the vehicle needs to be using physics. You just have to integrate... You're just calculating position based on the speed. I just have to think of how we would do it. The problem with that is the user needs to understand that if they're entering in speed versus time, they're saying the speed is correct, is accurate, so it's on the user to make sure they're using it correctly because your EDR data, the speed may not be correct.

(01:50:39):

And the nice thing about simulating it and entering in a throttle and braking to try to match speed versus time on a roadway that you've built is you may find that there is something wrong. Your speed may not be accurate, and especially when you have a lot of braking, the speed's probably under predicted quite a bit. So that's the advantage of simulating is that you'll see those differences, whereas if you had a lot of braking but you just entered in the speed directly, your positions will be off. But again, if the users... You're putting it on the user to say, "Look, you need to understand that when you do this, you're forcing physics in a way. You're no longer using the tire model at that point."
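The "force the speed" approach Anthony is weighing amounts to integrating the speed-versus-time table to get positions. Here is a minimal sketch of that bookkeeping using trapezoidal integration; the sample data is made up, and as Anthony warns, if the recorded speed is wrong (say, heavy braking with wheel slip) the positions will be wrong too.

```python
# Minimal sketch: integrate an EDR-style speed-versus-time table to cumulative
# distance along the path. Trapezoidal integration; mph and seconds in, feet out.
def positions_from_speed(times_s, speeds_mph):
    """Return cumulative distance (ft) at each sample of a speed table."""
    FPS_PER_MPH = 1.4667
    distances = [0.0]
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        v_avg = 0.5 * (speeds_mph[i] + speeds_mph[i - 1]) * FPS_PER_MPH
        distances.append(distances[-1] + v_avg * dt)
    return distances

# Typical pre-crash data comes at 2 Hz; shortened made-up values for illustration.
print(positions_from_speed([0.0, 0.5, 1.0, 1.5], [45, 44, 40, 33]))
```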

Lou (01:51:30):

Yeah. Yeah. I would say there's the HVE driver model that you currently have, where you could say... I've never really used it. I always go back to the driver tables, so you're probably going to do a much better job explaining it than I ever would, but you can essentially take the car and say, "Go from here to there, here's your general speed profile, fill out the driver tables to make that happen." Could you do something like that? Well, first of all, did I describe that right? Second of all, could you maybe do something like that to fill out the EDR table?

Anthony (01:52:01):

So the driver model, as you described, right now the way it works is you place vehicles and then it creates a spline, and then the driver is trying to stay on that spline through steering. You can also then tell it speeds at certain positions, and it's trying to hit those speeds, and it'll use throttle and braking to try to hit those speeds. So it's trying to act like a driver, and it's looking forward at the spline to say, "What do I need to do now to get there at the right speed and position?" But it's position-based, it's looking forward. It's not time-based, and people want time-based because that's what they get from the EDR.

(01:52:46):

I've thought about that. If you put in speeds and times, then we try to figure out the throttle and braking to hit those speeds and times, and if it's not possible, it won't happen, which is the way the driver model works. If you put in certain positions and it's not possible to hit those physically, it won't hit them physically, because maybe the vehicle slides out or it just can't accelerate up to whatever. You put it here at zero, and 100 feet later it's 100 miles per hour, and it's like, "Well, I can't get to that." It'll max out the acceleration, but it'll only get to the speed that it's capable of getting to. So there's a possibility there of using speed and time and trying to calculate the throttle and braking and steering necessary to hit your points. It's an interesting thing though, because we have speed versus time and we have steering versus time. So obviously you can enter in the steering versus time directly. That part's easy.

(01:53:46):

And I've even played around with it where we can create a model where I give you a speed table instead of a throttle table. I give you a speed table and you enter in the speed versus time, and you can still put in steering. So the vehicle will drive, the u velocity, the forward velocity, will match the speed that you enter in the speed table, but your steering inputs will still allow the vehicle to steer, because it still is calculating tire forces and slip angles and steering the vehicle. That would be the ideal situation, and it is doable. The struggle I'm having is, giving that capability, how do we put it out there, but with the understanding that, look, you need to make sure that if you are entering in speed here, you're saying that that's the speed. It's going to hit that speed exactly because you're putting it in there. You could put in a speed of zero and then a half second later put a speed of 100, and that vehicle will go from zero to 100 in a half second.

Lou (01:54:49):

Yeah. Yeah, I guess, so I've done that a lot too, with a similar process with photogrammetry where we're looking at a video, we pull out key frames, we know what the position of the vehicle is in each key frame, and then we go into simulation and try to hit that, and it requires you to develop a certain speed curve. And then you're like, "Well, that requires a braking rate of 1.4 Gs or something. And there's no way that that happens. So something's wrong. Something's got to give." So I wonder if even just giving the user an output that says, "Here's your acceleration curve for that speed profile, please make sure it makes sense before we go into it."
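Lou's suggested sanity check is straightforward to sketch: compute the acceleration implied between successive speed-table entries and flag anything outside a plausible envelope. The threshold and sample values below are assumptions for illustration, not anything HVE does today.

```python
# Minimal sketch of a plausibility check on a speed-versus-time table:
# flag intervals whose implied acceleration exceeds an assumed limit.
def flag_implausible_accels(times_s, speeds_mph, limit_g=1.1):
    """Return (interval index, implied g) pairs that exceed the limit."""
    G_FPS2 = 32.2
    FPS_PER_MPH = 1.4667
    flags = []
    for i in range(1, len(times_s)):
        dv = (speeds_mph[i] - speeds_mph[i - 1]) * FPS_PER_MPH
        dt = times_s[i] - times_s[i - 1]
        accel_g = abs(dv / dt) / G_FPS2
        if accel_g > limit_g:
            flags.append((i, round(accel_g, 2)))
    return flags

# A 20 mph drop in half a second implies about 1.8 g of braking, worth a second look.
print(flag_implausible_accels([0.0, 0.5], [45, 25]))
```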

Anthony (01:55:29):

That's a good idea. Yeah. That way they at least have a check where we say, "Look, that seems a little bit out of the ordinary."

Lou (01:55:37):

Yes. Which HVE has told me many times, "Hey, this is outside of the range of what... Are you sure you want to do that?" Especially when I'm building motorcycles in HVE.

Anthony (01:55:48):

Are you sure you're running an internal combustion engine or do you have rockets on that car?

Lou (01:55:53):

Yeah, exactly. That's the good fun stuff then.

(01:55:57):

So I would consider you a bit of a futurist, and I think that you solidified my idea by pulling up two 360 cameras and a headset. So have you put any thought into AI and how it might be either... I mean, obviously I think a lot of people are afraid, I'm not personally one of them at this point, that it's just going to obviate the need for schmucks like us at all. I think it's more of something that is going to help us, maybe supplement our judgment with computational power is what I mean by that. What's your take? Have you put a lot of thought into that, and how do you think AI is going to integrate into recon over the next decade?

Anthony (01:56:50):

So I'll start with, I think on the driving front, with autonomous vehicles, AI is used there because those vehicles have to learn how to drive in situations that they've never seen before. And the way that they teach vehicles how to drive is through deep learning, AI, and I'm by no means an expert in this, but I do know that they're trained on certain situations, and then the vehicle will see something that is not what it's ever seen before, but it knows how to respond to it, because there's no way for an autonomous vehicle to have seen everything before.

(01:57:34):

And so the training that goes into that is interesting to me, because one way that you could train a vehicle is to give it simulations. And so that could be an area for work in the future, is that we have simulation programs that we can use to create samples to then teach the autonomous vehicles. So then the question is, well, can we do that with crash reconstruction? Can we create enough crash scenarios to train a system to be able to look at certain inputs and then say, "Well, based on these inputs, there's a certain probability that this is the outcome"? What we want to do is the reverse, which is to say, "Here's an outcome. Can we then go back and figure out what the input is?"

(01:58:37):

It seems like, "Oh, wow, that sounds crazy." But 10 years ago, we would have thought a lot of things that are happening today were kind of unbelievable, when you see the deepfakes with video and the chatbot stuff that's happening now, or the art that's being created on the fly, that's all AI-based. I know in the medical industry, they're using it for diagnosing various conditions based on imagery. So I imagine that there will be some way that it comes into our industry. As far as whether this will get rid of the need for us as experts, that applies if AI gets to that point, but also just with autonomous vehicles in general. If we have autonomous vehicles, why do they need us? And the same thing happened 20 years ago when EDR came out.

(01:59:38):

I was like, "Oh, they're not going to need us anymore because everything has electronic data now." And it's like, "Well, they need us more because now we have to analyze the electronic data." And now video, all these vehicles have video. And it's like, "Oh, well, you have video, you don't need people." Well, now you have to analyze the videos. And so I see it as just maybe changing the areas that people will be focusing on in this-

(02:00:03):

The areas that people will be focusing on in the future. When we started off in this industry, we probably didn't think we'd be doing motion tracking of videos and analyzing videos as a regular thing. And now I would say it's probably 50% of accidents, if not more, that have video in some way, either a surveillance camera or onboard video. And it's got to be close to every accident has electronic data. It's got to be real close to that where if you don't have electronic data, it's probably because you didn't get to the vehicle fast enough.

Lou (02:00:38):

Exactly. It's so true. And that's a good point. I voiced my opinion already, maybe. I'm sure I didn't predispose you to any judgments you've already made, but the sentiment that AI is going to take over the need for recons, I don't see that happening. Like you said, when you got the EDR data, it became more work. When I get a video now, I'm with you, it's like 50, 50 of the cases I have video on, and it just results in more work for me because now I can answer the questions in more detail. But to do that, I have to analyze these videos and I think that's what's going to happen. We're going to have vehicle data reports essentially from these autonomous vehicles, and it's going to be instead of 15 pages with five seconds of pre-impact data, it's going to be 150 pages and you're going to have to go through all of that.

(02:01:28):

But I think it's a really good point that you brought up about training systems and if you can feed it enough collision scenes, photographs from collision scenes, videos from collision scenes, and it can start to identify what gouges look like and tire marks look like and scrapes look like and final rests and the grade of the roadway and say, well, considering everything I see here, these tire marks, these gouges, these final rest positions, and ooh, I scanned that car, I see that's a 2040 Toyota Camry, and I know that weighs 3,210 pounds. Well, they were probably going 30 to 35 pre-impact. That doesn't seem that far-fetched considering what we're seeing right now with Chat GPT. So it'll be interesting to see how that develops.

Anthony (02:02:12):

But even if you get a response like that from a system, then you'll have to analyze that response. You'll still have to look at it and be like, okay, this is what the AI gave us, and then now let's review that and interpret it and make sure that it did it correctly.

Lou (02:02:29):

Yeah, that's a really good point. We're a forensic group. This is going to court. This is to determine whether somebody should be behind bars at times, or whether somebody is entitled to a certain amount of money. It's not like they're just going to let the... well, we'll see. But right now it seems far-fetched that they would just let AI make a determination and then make a ruling based on that.

Anthony (02:02:53):

Yeah, it may help us get to a solution which it'll provide enough information. Then you can take that and say, okay, now I'm going to use that and see if it plays forward in the appropriate way.

Lou (02:03:09):

And now that we're talking about it, I could see that helping people at the beginning of the case when they're trying to determine whether or not it should be settled or whether or not criminal activity ensued is just like run it through this. And if it's very clear one way or the other, then maybe we don't hire a recon and spend all that money. We'll see.

Anthony (02:03:29):

Along those lines, you would've thought video would've done that.

Lou (02:03:33):

Right. And I wonder if, well, there sure as heck doesn't seem like there's fewer cases right now because there's video. I suspect every once in a while somebody's dash cam shows that they were smoking a cigarette and eating a McGriddle simultaneously with no hands on the wheel, and attorneys see that and just end it right there. But for the most part, yeah, it does not seem to have affected things. And on that same note, some of this might've already been answered in your prior question, but have you put any thought into how you think autonomous vehicles in general will affect the recon industry? What will our recons look like 10 years, 20 years from now if, this is another baked-in assumption that I don't want to make, but if a major portion of the vehicles on the roadway are fully self-driving?

Anthony (02:04:27):

One thing that I think will be interesting is are we going to allow a mix of vehicles? Are we going to allow a huge fleet of autonomous vehicles and then continue to allow people to run non-autonomous vehicles? It's interesting. If you went full autonomous and the vehicles could communicate with each other, I think things would be different. But as long as you still have people driving on the road, you're still going to have a potential for the human that's in control of the vehicle to make a mistake and get in an accident. If the autonomous vehicle gets in an accident, which we've seen already where that's happening, then it's just a different level of investigation. Now the question becomes, well, what programming went into that? How come it responded the way it did? Did it respond appropriately? Should it have responded differently?

(02:05:26):

And then you're at a manufacturing level, which I think, again, is similar to airbag deployments. It's like, well, the airbag didn't go off, and it's like, okay, well, was it supposed to go off? That's the first question. So you check, and it's like, well, it never got the signal to say deploy, so it did what it was supposed to do. So there's not a manufacturing defect on the airbag itself where it didn't deploy, because it was never told to deploy. Then the question is, should it have been told to deploy? That's, to me, a whole different question. The algorithm that's in there said it wasn't supposed to. So if you think it should have, you'd have to go to the manufacturer and be like, we think your algorithm-
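As a toy illustration of those two questions, here is a deliberately simplified Python sketch. The delta-V threshold and the single-number deployment rule are invented for illustration only and bear no relation to any manufacturer's actual algorithm.

    # Deliberately simplified toy logic, invented for illustration; real deployment
    # algorithms use far more than a single delta-V number.

    DEPLOY_THRESHOLD_MPH = 14.0  # hypothetical longitudinal delta-V threshold

    def commanded_deployment(delta_v_mph: float) -> bool:
        """What the module decided, given the threshold it was built with."""
        return delta_v_mph >= DEPLOY_THRESHOLD_MPH

    # Question 1 (did it do what it was told?): with a recorded 9 mph delta-V,
    # the module correctly commands no deployment, so there is no execution defect.
    print(commanded_deployment(9.0))   # False

    # Question 2 (should it have been told to?) cannot be answered by this code:
    # it asks whether 14 mph was the right threshold for this crash mode, which
    # is the manufacturer-level design question described above.
    print(commanded_deployment(16.0))  # True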

Lou (02:06:09):

I know better than you. Yeah, exactly. Show me your algorithm, and I, Lou Peck, don't think you designed a good airbag algorithm. Yeah, good luck. But that's a really great analogy. That is the analogy. It's just going to be a lot more of those cases. I think we've all gotten the call where you're like, well, yeah, it center punched a pole and the accelerometers didn't sense the pole. We've seen a lot of those, so it's going to get to that product liability level a lot of the time. You'll absolutely, in my opinion, in those cases, need a multidisciplinary effort to come after them. You're going to need a really good recon and then probably an electrical engineer or computer engineer, somebody who understands the algorithms and the related sensors. And there aren't going to be any mediocre recons going after Tesla. It's going to be an elite group, I imagine, who have those skills and are tried and true.

Anthony (02:07:11):

And always, when I see these Tesla cases or any case like that, I think to myself, would a human have done better? I'm always wondering that, because it's like, oh, the Tesla hit a pedestrian that ran out in front of the Tesla. It's like, okay, well, could a human have avoided it? That's a question that I always ask myself. I imagine that at some point the cars, just like they do with ABS or other systems, will perform better than humans in terms of reaction times and things like that. The cars are faster. And I think with autonomous vehicles, it's so new to us that when one gets in an accident, it's a huge deal. But the question is, was it something that was completely unavoidable, where it wouldn't have mattered if a human was driving or not?

(02:08:10):

It didn't matter that it was autonomous, it was going to get in that accident regardless. Now, if it is just driving down the road and all of a sudden it goes off the road and runs into something, then the question is, okay, what failed in there? What failed? And that's a completely different question, but if you roll a ball out in front of a Tesla, I imagine it's going to hit the ball if it only has two tenths of a second to do something.

Lou (02:08:37):

Yeah, exactly. I talked with Jeff Muttart for a bit about that specifically, and that was his sentiment too: there's got to be a lot of effort geared towards comparing what the autonomous systems can do to what humans can do. And right now he sees that as a big gap between the communities, in that the auto manufacturers are not really up to speed on what the human literature suggests is doable. So they're not necessarily making valid comparisons at this point, and that's something that he wants to work on. Always room for improvement. All right, so I just have a few speed round questions and then we'll tie up.

(02:09:19):

It's been over two hours now, so I appreciate you playing along for that long with all of my questions. I've got paperwork here. Listen, I've got a lot of stuff to get to. I appreciate you taking the time.

(02:09:35):

So we already talked about your best investment under 5K, the iPhone, and I think we're going to find that that's true with a lot of people. So what is the most used tool in your current arsenal? Could be software, it could be hardware, what's something that you could just not let out of your grip?

Anthony (02:09:54):

HVE absolutely has been my most used tool for years. I made an effort to focus in on simulation, so I absolutely couldn't do without it.

Lou (02:10:10):

I think that's a good place to focus your efforts, because I'm with you. So I did a podcast recently with Eugene Liscio where I was the subject, and he was asking, do you try to do a simulation on every case nowadays? And I was like, I do. It's rare that I don't run a simulation. It does happen from time to time. Sometimes I don't need a simulation to answer the question at hand, but for the most part I do, and it's a skill worth developing. And not everybody has to get SIMON if you don't have the money to do that. There's HVE CSI (if that's what it's still called; that's what it was called last I knew), and that's reasonably priced, and you can answer a lot of questions with a simple platform like that. Simple.

(02:10:58):

You and I talked about this a little bit before, but what tools do you think you won't be using anymore in five years or 10? Could be five or 10, like something that's just phased out.

Anthony (02:11:10):

That was probably my hardest one. I struggled to think about what tool I wouldn't use, because we have all these new tools that we keep getting in the industry, and I can't think of anything where I'm like, I'm using it today and I won't be using it in 10 years. I can tell you something that I was using before that I don't use at all anymore. That would be a laser transit, like surveying equipment. I think people still use it and it has its place, but I don't use it at all anymore. So I think in 10 years, in our industry, that may be completely gone. Between laser scanners and drones, and we'll talk about the future when we get to that, it's pretty much gone from our industry already. Now, in the measurement world, I think it still has a huge place for capturing very accurate points at long distance, but in our world it's not as used.

Lou (02:12:16):

Yeah, I'm with you. I started Axiom in 2018, and I developed a list of tools that I needed and took out a necessary loan to buy all of those things, and one of them was not a total station at that point. I started with a FARO M70, very cost effective, and figured that if I am very deliberate in the order that I perform my site inspection, I can get everything done with just a scanner. And by that I mean photograph the evidence, then mark it, then scan it. And if you do that, you're going to capture what you need to capture for the most part, especially when you put a drone on top of that, if you're allowed to fly there.

(02:12:56):

Of course, we all have to be prepared for those sites where you can't fly, and when there's roadway evidence and you can't fly, you just want to pull your hair out. But the thing that I think might fill that gap a little bit is RTK. Some of those RTK tools you can get for under $5,000 right now, and you can still touch the evidence and mark the point. They're very easy to work with, very affordable, very accurate, and compact.

Anthony (02:13:27):

And they work well with our other tools also like our drones.

Lou (02:13:31):

Exactly.

Anthony (02:13:34):

And so they interface.

Lou (02:13:34):

Set control points. Okay. So then, yeah, like you were alluding to, we just talked about what tools people won't be using, but what do you think will be in everybody's toolkit in the coming five to 10 years? What is everybody's kit going to look like? What's something that will absolutely be in there?

Anthony (02:13:54):

So I think in 10 years everybody will have a mobile mapping station of some sort where you could drive down the road and map the road while you're driving. It's out there already, but it's costly. But I think in 10 years it'll be down to the level where it's getting into our industry. Right now, if you're interested in just three-dimensionality, you can get a system that works really well where you drive down the road and capture a scene in 3D pretty well. If you're interested in roadway evidence, then it's different. You're probably going to have to go a little slower if you want to be able to capture that same information, but I think that's going to be the tool that a lot of people start to add to their arsenal. I'm sure there's some out there your way that-

Lou (02:14:51):

Yeah, Rene Castaneda has that mobile Leica station that he throws on the back of his Tundra. I think it costs as much as a nice house in a lot of towns, but it can be useful at times. Like you're saying, most of the time we can just fly a drone and get really good data and then combine that with some scan data to make sure everything's kosher. But it really would be nice to be able to drive down the roadway and capture, and we're close, like you're saying, with just cameras that are set a known distance apart. I think Kineticorp actually wrote a paper about this several years back.

(02:15:29):

And I've been thinking about trying it too: just put two GoPros on the extents of your truck, point them in a direction that helps you capture everything, and then bring them into a program like RealityCapture. And I think it even has the capability of reading the GoPros' GPS coordinates, so it knows approximately where every photograph was taken. And then you could build up a good model if your frame rate's high enough and you have a high-powered camera. But adding lidar to that would be huge and seems completely feasible.
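A minimal sketch of that two-GoPro idea, assuming ffmpeg and ExifTool are installed on the workstation; the file names and the 2 fps sampling rate are hypothetical. The extracted frames would then be loaded into a photogrammetry package such as RealityCapture, and the GPS dump provides a rough position log for each clip.

    import subprocess
    from pathlib import Path

    # Hypothetical workflow sketch: pull still frames and the embedded GPS telemetry
    # out of two truck-mounted GoPro clips so they can feed a photogrammetry package.
    # Assumes ffmpeg and exiftool are on the PATH; file names are placeholders.

    def extract_frames(video: str, out_dir: str, fps: float = 2.0) -> None:
        """Export still frames at a fixed rate for photogrammetry input."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-i", video, "-vf", f"fps={fps}", "-qscale:v", "2",
             f"{out_dir}/frame_%05d.jpg"],
            check=True,
        )

    def dump_gps_track(video: str, out_txt: str) -> None:
        """Dump the embedded GPS samples (-ee = extract embedded) to a text log."""
        with open(out_txt, "w") as f:
            subprocess.run(
                ["exiftool", "-ee", "-a", "-G3", "-n",
                 "-GPSLatitude", "-GPSLongitude", "-GPSDateTime", video],
                stdout=f, check=True,
            )

    if __name__ == "__main__":
        for cam in ("left", "right"):
            extract_frames(f"{cam}_gopro.mp4", f"frames/{cam}")
            dump_gps_track(f"{cam}_gopro.mp4", f"gps_{cam}.txt")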

Anthony (02:16:02):

Especially given that some of these cars today have lidar on them already. You have cars being produced with lidar that are able to map the world. Probably eight, 10 years ago, I was at a conference in DC and a guy gave me a drive in his car with a lidar system mounted on the back that was mapping the roadway as we were driving around. And that was eight, 10 years ago. It was a Velodyne, which is used heavily in the car industry now, a little 360-degree lidar system. They called it the Puck at the time. I don't know if they still call it that or not, but I think it's like 16 lasers or something like that. And you could drive down the road and it mapped it. Now, it didn't colorize it, but it created a three-dimensional map while you were driving, and that was 10 years ago.

(02:17:03):

So I imagine Rene's system, I don't know if he's using the double FARO or what kind of system he's using, but they have lots of different types out there that can not just map it, but also colorize it at the same time. The ZEB Revo, you can walk with it; it uses SLAM technology and accelerometers, and it's basically a camera and a lidar system, and you walk around and it maps as you're walking. Recon-3D, I saw Eugene post, I think he walked across a bridge and back and mapped it, walked around a building and mapped it. And that's all from an iPhone. So to me, that's where mobile mapping is going. Leica has the BLK2GO, which again is a SLAM system, I think, that you can walk around with. So all of those technologies are going to eventually lead to something that we can use on a regular basis on our scenes, to map them out at a level where we're satisfied with the accuracy and can testify to it, because that's what it ultimately comes down to.
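Under the hood, all of these mobile mappers share the same core step: each lidar return is measured in the sensor's own frame, and a pose estimate (from SLAM, GNSS/INS, or both) places it in a common world frame. Here is a toy numpy sketch with made-up poses and points; real systems also handle timing, motion distortion, and loop closure.

    import numpy as np

    def to_world(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Rigidly transform an (N, 3) block of sensor-frame points by rotation R and translation t."""
        return points_sensor @ R.T + t

    # Two tiny "scans" captured a few meters apart along a straight drive (made-up values).
    scan_a = np.array([[1.0, 2.0, 0.1], [1.5, -2.0, 0.1]])
    scan_b = np.array([[0.8, 2.1, 0.1], [1.4, -1.9, 0.1]])

    pose_a = (np.eye(3), np.array([0.0, 0.0, 0.0]))  # start of the pass
    pose_b = (np.eye(3), np.array([5.0, 0.0, 0.0]))  # 5 m further down the road

    # Accumulating each transformed scan is what builds the drive-by point cloud.
    cloud = np.vstack([to_world(scan_a, *pose_a), to_world(scan_b, *pose_b)])
    print(cloud.shape)  # (4, 3)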

Lou (02:18:17):

That sounds awesome. I would love that, dude. Just to be able to drive through the site prior to or during the site inspection and, oh, I've got all that mapped out. Now I can get to my photographs and fly the drone if you want some photorealistic orthomosaic or something, but I'd rather spend the majority of my time on analytics and as little as possible on the tedious things that are costly to the client but not necessarily benefiting the analysis a ton. You need to do it, but if you can shorten it, make it 10 times quicker, then you get more time to spend on the analytics, which is really where the rubber meets the road, in my opinion.

Anthony (02:18:57):

Yeah, I'll give you my little spiel on scanning. I think it's been absolutely wonderful for our industry to be able to go out and 3D scan things. It's a quick way of mapping, but the one thing that I think it hurt in a way is, when you're out at the scene and you know you have to scan and it's going to take a long time to do the scans, you also have to remember that you're there to observe the evidence, especially on a scene that just happened where there's still actual physical evidence. And I have to make an effort sometimes, like, okay, study the evidence first. Make sure you do what you would normally have done back when you actually had to document the evidence by hand: you were studying it, because you had to look at it and say, I want this point, I want this point, and why those points matter.

(02:19:52):

So I want to document them. And now with scanners, it's like, oh, just set the scanner up, let it run, move it, let it run, move it, let it run, and then we have everything we need. But you didn't actually take the time to, I'm not saying you, but remember to take the time to actually study your scene so that you're thinking about the accident while you're there, you're thinking, what do those marks mean? Why do I need those marks? What could they possibly have been from? As opposed to, I'll just look at it later in the computer. A lot of times looking at it later in the computer is not as easy as you would think because it's like, oh, where is that mark in this giant point cloud?

Lou (02:20:36):

Yeah, exactly. And if the photograph wasn't taken from exactly the right angle, or the polarizer is not set to the right angle, you can't see it as well as you can while you're there, and if you don't mark it while you're there. Yeah, I totally agree. And one thing that I try to do, sometimes I find myself performing the site inspection by myself, and those are brutal inspections. I find that I am much better served to bring an associate with me to handle the scanner so that I am not focused on that at all, and I can focus on taking my photographs and observing the evidence. And like you said, doing the things that are important. That's where our level of experience comes in handy, because when your boots are on the ground, you make observations that are critical to the case. Pressing the scanner button is important to memorialize things, but that's not the most important thing. That's a good point.

Anthony (02:21:30):

Now, I'll give a plus to the Trimble X7. It has the ability, after you do a scan, to let you look on your tablet at the scan image, and if there's something in there, you can annotate it right then and there, which is a really cool feature. So if you do have marks, like the beginning of a mark, the middle of a mark, and the end of a mark, you can actually mark it in your scan at the scene, so that when you go back and load it into the computer, it already has that marked for you, which-

Lou (02:22:07):

I think that's a huge benefit. And then, correct me if I'm wrong, I think Calvin Ricard showed me that after you've run the scan, you can shoot a visible laser beam at the evidence and say, hey, get that exact point, and then it'll do a little cluster of scans right around that point and be like, okay, there's your point. That's really cool.

Anthony (02:22:30):

Yeah, and Calvin's the man. Absolutely. You can do a high-res scan of a point. If you're using your scanner, you can actually use it in a similar way to create ground control points when you're out at the scene. You can scan in those points at high res, and then that point is marked in your scan data so that you can use it later for your GCPs if you're doing drone imagery. The other thing, and I think the Leicas could do this for a while too, but now with the X7, again one of the things I like that they added in the last six months: you can do a quick scan, a two minute scan, no photos, and then select a box around what you want to scan with photos in high res.

(02:23:22):

So for example, if I'm doing a vehicle, I set my scanner up, I can do a two minute scan, and then I can go in and just draw a box around the actual vehicle and then scan that at the highest resolution. But instead of taking 15 minutes, it only takes four minutes, because it's only doing a smaller window, which I think is also huge. And I think the Leicas could do that. I know with the FAROs, for a while you could select an angle, like I only want to scan from zero to 25 or whatever.

Lou (02:23:51):

I think when FARO came out with Premium, they added a similar option to what Trimble has. I'm not sure if Leica has it or not. And that's one of the benefits, like you're talking about, of how phones have affected things. Tablets have too, because now we can see what we've scanned in pretty high detail in real time and then modify future scans accordingly.

Anthony (02:24:13):

And it's registering in real time, so you can see, oh, I missed something, I need to fill in this gap. Which is great too, because there have been times when you get back to the office and you're like, wait a minute, what happened to this 50-foot section?

Lou (02:24:30):

Exactly.

Anthony (02:24:30):

That's the part that mattered.

Lou (02:24:32):

Dang it. Oh man, that's the worst. I know when you're flying blind and you're done with your inspection, you have an SD card and you're like, please SD card, have everything I need on it. Please. It's better to know beforehand. Anything else that you wanted to talk about that I didn't ask you about or any other topics you think are worth bringing up before we start winding down?

Anthony (02:24:52):

No, I think we hit everything. Let me just, give me one second. I think that hit every topic.

Lou (02:25:03):

Awesome. That's good. I try. I had four pages here, four pieces of paper, lots of notes, so I'm glad we were able to cover most things, and, well, everything that both of us wanted to talk about. Where do people go to keep up with you, to keep up with HVE, to keep up with Momenta? Where should they seek you out?

Anthony (02:25:27):

Yeah, so EDC is E-D-C-C-O-R-P dot com, edccorp.com. Momenta is M-O-M-E-N-T-A-L-L-C dot com, momentallc.com. I'm on LinkedIn. That's probably the best way to reach out to me, through LinkedIn, or call me anytime, (410) 212-1194.

Lou (02:25:52):

Nice.

Anthony (02:25:52):

And I love, yeah, I love talking about recon, and I love talking about simulation, and I love talking about Blender, so any of those topics, I'm interested. And I think it's also really important to remember how important connecting with others in the industry is. It's been huge for me in my career, and so I encourage others to do the same thing. You never know who is going to make a major change for your future. It's happened to me a couple of times, and it's been great.

Lou (02:26:19):

Yeah, I totally agree, and I've really enjoyed our conversations over the past couple of years. I've been one of the people to give you a call and pick your brain on certain things, including obviously these low-poly meshes and Blender, and I will probably take you up on your offer to ring your phone and talk a little bit more about Blender in the future. So yeah, thanks so much, Tony. I appreciate you taking the time, and I'll be talking to you soon.

Anthony (02:26:47):

All right, thanks, Lou.

Lou (02:26:49):

Hey, everyone, one more thing before we get back to business, and that is my weekly bite-sized email to the point. Would you like to get an email from me every Friday discussing a single tool, paper, method, or update in the community? Past topics have covered Toyota's vehicle control history, including a coverage chart, ADAS, that's advanced driver assistance systems, Tesla vehicle data reports, free video analysis tools and handheld scanners. If that sounds enjoyable and useful, head to lightpointdata.com/tothepoint to get the very next one.