EUGENE LISCIO | 3D SCANNING & PHOTOGRAMMETRY

Lou had the opportunity to sit down with Eugene Liscio of ai2-3D to discuss the future of laser scanning, sensor fusion, full-spectrum photography, his toolkit, and the role of the smartphone.

You can also find an audio-only version on your favorite podcast platform.

A rough transcript can be found below.



Timeline of Topics:

00:18:34 - Eugene's current toolkit

00:25:15 - DSLR vs. mirrorless cameras

00:30:58 - Laser scanners - past and future

00:49:23 - Recon-3D

01:07:27 - The future of photogrammetry

01:19:17 - Best investment under $5,000

01:21:11 - Predictions for the future


Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.

Lou (00:00:19):

This episode is brought to you by Lightpoint, of which I'm the Principal Engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analyses. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, you know it takes hours of searching for the perfect model, awkward conversations with dealers, and usually some cash to grease the wheels. Then back at the office, it takes a couple more hours to stitch and clean the data, and that eats up manpower and adds a lot to the bottom line of your invoice. Save yourself the headache so you can spend more time on what really matters: the analysis. Lightpoint has already measured most vehicles with a top-of-the-line scanner, Leica's RTC360, so no one in the community has to do it again. The exemplar point cloud is delivered in PTS format, includes the interior, and is fully cleaned and ready to drop into your favorite program, such as CloudCompare, 3ds Max, Rhino, Virtual CRASH, or PC-Crash, among others.

(00:01:13):

Head over to lightpointdata.com/datadriven to check out the database and receive 15% off your first order. lightpointdata.com/datadriven.

(00:01:22):

All right, my guest today is Eugene Liscio. Eugene is a registered professional engineer in Ontario, Canada and is the owner of ai2-3D, a consulting company specializing in 3D forensic documentation, analysis and visualizations. In May of 2022, he released a new 3D scanning app for the iPhone dedicated to forensics called Recon-3D, a very cool program that we're going to be talking about today for sure. Eugene has testified in the US, Canada, and in Europe utilizing 3D technologies such as photogrammetry, laser scanning, and structured light scanners. He's the past president of the International Association of Forensic and Security Metrology and is an adjunct professor at the University of Toronto, where he teaches 3D forensic reconstruction and mapping. Eugene is actively engaged in research, mentoring students, and publishing, focusing on 3D documentation and analysis techniques. Most recently, Eugene became an adjunct professor at Laurentian University to assist postgraduate students in pursuing further research. That's a lot of accomplishments right there. Thanks for taking the time to join in today, Eugene.

Eugene (00:02:32):

Well, thanks Lou, I really appreciate it. Thank you.

Lou (00:02:35):

Your career is interesting. We have similar bachelor's degrees: mine's in mechanical engineering, yours is in aeronautical. And in speaking with some aeronautical engineers, I find there's a lot of overlap, but then things kind of differed; we're both in forensics, but definitely different spots. I saw that one of your undergrad projects was called Drivematic Riveting Process for Aerospace Assembly. And then fast-forward about 30 years and we get to another project called Calculating Point Origin of Blood Spatter using laser scanning technology. There must be a pretty interesting story in between those two projects. How did you get from riveting to bloodstain pattern analysis?

Eugene (00:03:22):

Okay, riveting sounds so trivial or whatever, but what had happened was I had an internship in the last year of my university. I went to a place called Ryerson Polytechnic University. They have a really great aerospace program. And so I had this summer internship and I worked in the materials and process engineering laboratory. What they did was anything that touched the aircraft; it didn't matter if it was coolant, it didn't matter if it was sealant, it didn't matter if it was rivets, it just didn't matter. Heat treatment, they did everything. If something went wrong or if there was a failure, the lab did the investigation. And so it was an amazing job because you weren't stuck in front of a computer doing one thing. I had an amazing boss at that time. They began implementing or bringing in some new machinery that rivets the skins of the wings to the stringers and spars and stuff like that; it's all automated. And basically what they do is this machine goes through to all pre-programmed locations, drills a hole, and knocks in an anodized aluminum cylinder; that's all it is.

(00:04:34):

And then based on the shape of the hole, it's like countersunk, and it just smashes this thing and basically it fills in the hole, so that's a drivematic rivet. And that's what I started on. Then after that I did my thesis on it. And so it was different. I ended up working there after, which was great. I had some experience already when I went in, and I ended up working for the company that made the machines, so I worked at the aerospace plant there for a little while. And then later on I took a job in western New York, and that's where that came to be. That's the story behind the rivets. And I probably still have samples somewhere that I kept, these sections of aircraft that we cut up or whatever. After that, obviously there was a big migration and change, because I was working in aerospace. And I had another job before I went in on my own. And so I did start off on the journey like you did at some point and said, "Hey, I'm just going to go for it," in 2005.

(00:05:49):

After that I began doing a lot of civil cases. I was doing a lot of stuff like animation and 3D modeling, but I wasn't really into all of the tech, like the laser scanning or whatever. In fact, in 2005, you'd have a hard time buying one because there were only a few, so I got into that. And then I was doing a lot of work for other forensic engineering firms, people that didn't have the skills to model nighttime animations or things like that. I figured, you know what? I'm not going to do the reconstruction myself, because then everybody becomes my competitor, so I'll just offer the services that I think are unique to me. And this way everybody's my customer. That served me very well. And around 2009, I had one of the very first cases that was... It was an officer-involved shooting case from Philadelphia, from what I recall. And there was a gentleman whose name was Dr. Jon Nordby. If you're in the forensic science world, there's this big text, it's like a bible and everybody knows it.

(00:06:57):

He was one of the editors or authors of that. And he got me started on this journey around the crime scene side. It was just after that, around 2009, 2010, that I started looking at 3D technologies and things like that. Photogrammetry I started with in about 2006; I'd known about it when I first started. And part of the reason was, in a lot of these vehicle accidents, with police photos, you'd look at something and you'd go, "Hey, there's something in here that's important, like a measurement, but there are no measurements for it. Nobody took them, so how can I extract that information?" Photogrammetry was a really good option because it was relatively low cost too. When you're starting out, a big investment like a big laser scanner is quite a commitment. I began renting scanners around 2009, got involved in some of the homicide and shooting cases. And then I remember the first time I had a scanner to use, I tried it for bloodstain, for a bloodstain project.

(00:08:03):

And I'm not embarrassed to say I failed miserably. It was a disaster. It was an absolute disaster. I didn't have a clue what was going on. It took a lot of calls and communication to try and figure out what was going on. Even with one of the engineers that I was speaking to in Germany at the time, there were a couple of things that he wasn't sure of either, but we figured out a few things, little tricks that you could do. And then it clicked and I was like, "Oh wait, okay, now this means I can do this, I can do this, I can do this." Eventually I started testing different types of software and how good they were for the bloodstain pattern analysis stuff. There are two things that I focus on today. I call it B and B. It's not bed and breakfast; it's blood and bullets. Blood and bullets are the two things that I take a big interest in, do a lot of research in, and try to validate the technology for.

(00:08:59):

That's the long story of how I got from aerospace over to the bloodstain and the bullet stuff.

Lou (00:09:07):

And it's been fun watching your research and topics of interest. I follow you on LinkedIn, and I think that's a great place for anybody listening along right now to find Eugene. He posts there regularly, and it's interesting for me as somewhat of a layman; I'm in forensics, but I specialize in motorcycle collision reconstruction, and there's not much overlap between those two fields. But how much of that blood spatter, bloodstain analysis is still being explored and is not really down to a... Well, I'm sure some of it's down to a science, but there's still room to explore, which I found interesting.

Eugene (00:09:47):

There are different aspects to bloodstain pattern analysis. There's the interpretation part and classifying different types of patterns, so you can say, "Okay, I think this is a type of pattern and maybe this is the mechanism that created it." And then there's of course the chemical part of bloodstain pattern analysis, where you're doing chemical testing and things like this. And then there's the physical analysis, which is things like area of origin analysis, trying to figure out where something came from. And more recently, I have a method that I came up with, it's called the path volume envelope, and that's for castoff patterns. When you swing an object, you get blood that is projected off of that object onto another surface. And there are ways to calculate the volume of space those stains originated from. I think it's one of those little achievements that I did and I'm a little bit proud of. And it's one of a couple of types of analysis, along with some other smaller ones, that are very helpful to bloodstain pattern analysis because they're not subjective; it's an objective type of analysis.

(00:11:03):

And so many disciplines in forensics are under the gun for being subjective rather than objective. And so the more types of analysis you have which are objective, the more credibility it lends to that particular discipline.
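To make the objective part concrete for readers: the classic physical calculation in this field estimates each stain's impact angle from the ellipse it leaves on a surface, using the textbook relation sin(alpha) = width / length, and traces trajectories back toward a common origin. Here is a minimal sketch of that standard calculation (this is the textbook relation, not Eugene's path volume envelope method, and the stain measurements below are invented for illustration):

```python
# Impact-angle estimate for an elliptical bloodstain: sin(alpha) = width / length.
# All measurements below are hypothetical, in millimeters.
import math

def impact_angle_deg(width_mm: float, length_mm: float) -> float:
    """Angle between the blood droplet's flight path and the surface."""
    return math.degrees(math.asin(width_mm / length_mm))

stains = [(4.2, 9.1), (3.8, 7.5), (5.0, 8.8)]  # (width, length) pairs
for w, l in stains:
    print(f"{w} x {l} mm stain -> impact angle ~{impact_angle_deg(w, l):.1f} degrees")
```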

Lou (00:11:19):

I love it. And it sounds like 3D modeling is a big part of that, which brings me to my next, maybe a weak segue, but my next question is the name of your company, ai2-3D, if I'm saying that right. Where did that come from? Is the "ai" artificial intelligence, way, way back in 2005?

Eugene (00:11:40):

This is an engineer's mind at work. We're not very good at, how can you say, marketing? So back then, numbers and putting a little squared symbol in, I thought that was cool. But what it stood for was animation, imaging and illustration.

Lou (00:11:56):

Oh, nice.

Eugene (00:11:56):

It was those three things, because that's what I thought I would be doing. And I had the sense of mind to put 3D after that. And interestingly, the ai2, the animation, imaging and illustration, is actually the part that I do the least of. The part that I do the most of is the 3D. That's where the name came from. It just stuck and I just left it. And it's confusing because people say "A one 2 3D" and people say all kinds of stuff, but it's okay. I just haven't bothered to change it; it's been around long enough. But that's where it came from.

Lou (00:12:31):

That's funny. Lightpoint is a similar story in that ultimately we are creating 3D points via light, and it started as a photogrammetry-based project. We still are, in part, but with LiDAR, and it's just grown into a larger business than that. But Lightpoint sticks; it's the origin story and I think it's cool. Thanks for sharing that. With ai2-3D, I'm super curious, what do your days look like? What do your cases look like? What are you generally working on? Because I know you've got a lot going on, you're teaching and all sorts of other things.

Eugene (00:13:07):

Teaching definitely is one component. The university stuff is something that I've just commenced on my own as a side thing. I teach one course at the University of Toronto. I think it's the only course of its kind in the world; I don't know of another one that exists like that. But it's a 3D forensic mapping and reconstruction course. And basically what we do is we use 3D technologies to solve problems, whatever those problems might be. It could be something from video, it could be something for anthropology, it could be a bullet trajectory analysis, bloodstain, all those types of things. Basically introducing the students to what the different types of technologies are and how they can be applied. And so there's a lot of software and hardware, stuff like that. The teaching is one side. Of course, there are courses that I teach myself, which are not through the university, so that's the CloudCompare, the photogrammetry, the Recon-3D, the FARO Zone, all these other programs and things like that that people are getting into when they want to learn about the 3D world.

Lou (00:14:14):

Hey, I'll just say, as a practitioner, I really appreciate you offering those, because there's nowhere else to learn a lot of those topics, and you do a great job of that. I just wanted to pop in and say that, but I don't want to derail you.

Eugene (00:14:27):

Well, thank you very much. And that's part of the reason why I decided to do the course, because probably like you, it's pieces: you get a piece here, you get a piece there, you try to take little courses, and then all of a sudden you accumulate this wealth of knowledge. And so I thought it'd be nice if I could give back and create a course for students that was all in one place, a summary of my work and that sort of thing. That's more on the teaching side. Then on the case side, like I said, the majority of it is criminal cases, and even on the civil side, sometimes there are shootings. I do get the occasional... Video is extremely prevalent, and that's something that we can probably talk about, but that's coming up more and more. And people want to know about speed from video or suspect height analysis, stuff like that. I do a lot of casework. It's probably split 50/50 between the US and Canada, and it is split about 50/50 between defense and prosecution or police.
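For the curious, the core of a speed-from-video calculation is simple once you have a calibrated distance in the scene and a trustworthy frame rate; the hard work in practice is verifying both. A minimal sketch with invented numbers:

```python
# Speed from video: distance between two scene references divided by the
# elapsed time between the frames where the vehicle passes each reference.
# All numbers here are hypothetical.

def speed_from_frames(distance_m: float, frame_a: int, frame_b: int, fps: float) -> float:
    """Average speed in km/h between two video frames."""
    elapsed_s = (frame_b - frame_a) / fps
    return distance_m / elapsed_s * 3.6

# Vehicle passes lamp post A at frame 1412 and lamp post B (18.5 m away)
# at frame 1447, in 30 fps video:
print(f"~{speed_from_frames(18.5, 1412, 1447, 30.0):.1f} km/h")
```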

(00:15:28):

Often I can get a call from the police and work directly with them. Then of course, research. As most people will know, research doesn't pay, so it's something you do out of interest or because you feel passionate about it. But I learned early on that this was really important for a couple of reasons. I shouldn't say it doesn't pay; it does have returns, but they're not financial returns. They are papers and credibility and learning. I always say that research is the playground of the scientist or the engineer, because it really helps to define where that line in the sand is that you can't cross. And that's usually what I try to do when I try to validate something or test something: how far can I go before this breaks down, or this is not going to work under this situation, or whatever. It's very, very helpful in that regard. It helps the students out too. They get a paper if they do an internship with me and stuff like that. I always push to do a paper.

(00:16:32):

And then like I said, on the cases, of course, sometimes it means I've got to go out and scan. Oftentimes, especially today, it's a lot more common for people to send me the scan data because they've already got it scanned. Scanners are just a lot more prevalent nowadays. Before, I was going out and scanning a lot more; now I don't have to, which is absolutely fine with me.

Lou (00:16:54):

You could stay in Toronto.

Eugene (00:16:57):

I'm good with that. I'm good with that. The casework stuff keeps me busy. Trials, of course, you have to testify at trials every now and then. That's not the thing that I'm always looking forward to, not so much because I dislike it, but just because often it gets in the way. And anyone who's been an expert witness for some time knows that schedules are never fixed. They're always moving with trials and everything else. That is the majority of the stuff. And then of course I have Recon-3D, the new iPhone app, since May 2022, so that's been keeping me busy. It's like a second career almost, but a lot of stuff happening there. And then there are also some of the symposiums. For example, this past year, and I started it the first time in 2022, there was the forensic photography symposium.

(00:17:53):

And so that was just a way of bringing together a lot of people who had an extremely good handle on photography and a lot of new concepts and things like that to share information, whether it was for accident reconstruction, for autopsies, for regular crime scene investigation, for fire investigation, underwater stuff. Just a lot of different areas that cross over. And we can help each other in these other areas, because things like infrared and UV and all these other things are super helpful in some cases when you get into problem areas. I would say that's the majority of it in a nutshell. But of course when you put all that in parallel, it keeps you pretty busy.

Lou (00:18:34):

Oh my gosh, I can only imagine. I feel your pain because I'm in a similar boat: I wear a lot of different hats and I have diverse days, but really, I wouldn't have it any other way. I've never been one of those people that would like to show up and just do the same repetitive task over and over. But it sounds like you have your plate full, for sure. On the consulting side, what are the major tools in your kit right now? And fill me in, but what I'm thinking is photography, handheld scanners, terrestrial scanners, video cameras, whatever you have in the kit. I'm interested to know what the state of the art is, at least in your kit.

Eugene (00:19:23):

Terrestrial laser scanners for sure. I have two. One is an older one. The very first one that I purchased, it's still going strong.

Lou (00:19:29):

120, FARO 120?

Eugene (00:19:30):

Yeah, that one there. The old FARO 120. It's still working, so I'm not going to put it aside if it's going to keep pumping out point clouds for me. There's a newer S350; I'm looking at upgrading there. There are some structured light scanners. There's one that I just got recently, a newer one, it's called the 3D Whale, but it's a small one. I did a video on this, but it's a small structured light scanner, much like the Artec scanners.

Lou (00:20:02):

Okay. A lot cheaper, I saw. I watched your video and it looks like the Whale is like 8K, and I know the Leo's more around 30. Have you compared those back to back with a similar object?

Eugene (00:20:15):

I don't have a Leo, but I have access to a Space Spider, the small ones, for small components: bones, human bones, or small parts and things like that. I would say the big thing right now is that the software is fundamentally different. Artec Studio 17, the latest version, is super comprehensive. It has all kinds of features for when you need to do things very, very well, set up and structured. Whereas the other software, I believe it's called J Studio or JM Studio or something like that, it's new, let's put it that way. It doesn't have as many features. It doesn't have everything that you want to do, so there's still a lot of things coming. Now, having said that, there's a prototype that I have that someone sent me; it's a structured light scanner. It just comes in a box kind of thing. And it's used more or less for scanning things on the ground.

(00:21:14):

For example, tire tread impressions or footwear impressions, things like that. That's what it's intended for. That one, I did some testing on, it works pretty well.

Lou (00:21:24):

What's that one called?

Eugene (00:21:25):

That one has... Oh geez, what's the name of that one? That one is called... I know there's a V in there, but because it's a prototype, I'm not exactly sure right now what that particular model is called.

Lou (00:21:36):

Okay, we'll put it in the notes. And is that making a mesh?

Eugene (00:21:40):

That will make a mesh. That makes a mesh.

Lou (00:21:42):

Wow, okay.

Eugene (00:21:43):

That makes a mesh. And I think that prototype is already one generation removed now, so there's a newer model which is smaller and more powerful and more accurate already. The person who developed it, his name is Dr. Song Zhang, and he's from Purdue University. I met him at a conference once and he was very willing to share some of his information. And he's the guy who designs the whole thing. Just a really brilliant man with some really cool technology. On the structured light side, there are also things like the little Intel sensors, DotProduct, stuff like that. I was a DotProduct user for a long time. I think they're doing some incredible stuff with their software. Just really fantastic.

(00:22:28):

And photogrammetry is a big one. Photogrammetry, different software packages, between the paid and even the free stuff. For example, a long time ago I was using VisualSFM, which nobody even talks about now because it was so old. But there's another one called COLMAP. There's Meshroom; those are free. And those are things that you can get online. And then there are programs like... I used to use one called ELCOVISION a long time ago, iWitness a long time ago. I don't use those anymore, but PhotoModeler for sure-

(00:23:04):

Metashape. What's the other one? 3DF Zephyr, and RealityCapture.

Lou (00:23:10):

Yeah.

Eugene (00:23:12):

I don't have Pix4D. I don't do a lot of drone work. I do some drone work, but it's not large, long roadways and things like that. They're usually smaller things, and software like Metashape and the others handles it just fine. So at some point, I have to stop collecting 3D photogrammetry software.

Lou (00:23:32):

Yeah. No, and I appreciate that you do, because you give us a peek under the hood in a lot of cases, and I think we're all learning from your willingness to experiment, and try some new stuff, and show everybody the results, so if you have time, please keep doing it. You're the one, you turned me on to 3D Zephyr. Is that... Did I get that right? 3D Zephyr?

Eugene (00:23:54):

3DF Zephyr.

Lou (00:23:55):

3DF Zephyr. I knew I was missing a letter. And they have been super cooperative in helping me try to understand the program, and it churns out beautiful results. Similarly, Capturing Reality, or RealityCapture, from Epic Games, is phenomenal: really cheap to use, super fast processing speed. And with respect to drones, right now we are generally relying on Pix4D, but I'm with you. Pix is very expensive, and it has a right to be, because it's a great program, but I feel like a lot of these photogrammetry programs can now accomplish the same task for very short money, including RealityCapture, really; for the drone stuff, that's something we're investigating.

Eugene (00:24:39):

If I didn't have all the other photogrammetry software, and I was just really focused on drones and doing drone work, Pix4D is a great option. They really took advantage of the drone market when it first started taking off, and between all the apps and the services and things like that that you can get for it, it's a great solution. But if you're already a photogrammetry user, and you have all this other software, sometimes you can just do stuff with what you have. And sometimes, one software will do a better job than the other, so often, I'll do the same project in three different software packages, and then I'm like, "Oh, okay. I don't know why, but I got a slightly better result here," and so I just take the best of whatever it is that I have.

Lou (00:25:15):

You have all those lenses behind you. I'm curious, it was just something I planned on talking about later too, but this might be a good place to talk about it, is obviously, cameras are a big part of what you're doing. Seems like a lot of the community is switching from DSLRs to mirrorless cameras. I'm curious, what's in your kit, and have you made the switch? Do you think it's worth making the switch? What are the benefits?

Eugene (00:25:40):

So, I am on the edge with that, because I am so... I've been investigating, and investigating, and investigating, "Should I get this? Should I get that?" So I have not made the switch. And part of the problem is, what I have is working.

Lou (00:25:55):

Yep.

Eugene (00:25:55):

And so, you know, I'm trying to determine whether... At some point, I'm going to have to make the jump. I'm going to make the jump, for sure. I just don't know that I need to make it right now. But I do have a Nikon camera, the Nikon D7100. It's an older camera, but I know it inside-out. I'm comfortable with it. It does everything that I need to do. I've got accessories for it. I've got other lenses for it and stuff. So you know, it's not just buy a camera body and then you're good to go. Now, there's a lot of other stuff you need, and Nikon has made a lot of improvements on their new lenses. These are lenses designed from scratch for the mirrorless cameras, so they will tell you that there is a significant difference between the quality of what you're getting on a new lens that is meant and intended for the Z cameras versus adapting an old one with a spacer or an adapter or something like that. So-

Lou (00:26:54):

Yeah.

Eugene (00:26:54):

Yeah, yeah. It's not just the camera body that you're going to have to buy. It's going to be a lot of other stuff. But, like their new... Their Z9 is like off the charts. It's like a crazy-

Lou (00:27:04):

Yeah.

Eugene (00:27:05):

Yeah. It's way too much. Like, it's way too much. Even like single... I can't remember how many megapixels it is now, but it's way too much data for one photo.

Lou (00:27:13):

I think it's like 50 or something, yeah. It's huge.

Eugene (00:27:15):

Yeah. It's crazy, and these file sizes start to compound... Do you know what I mean? We keep complaining about the same things, about storage space. I was complaining about it 20 years ago. I'm still complaining about it, because everything just gets larger. But yeah, the accessories are really important when it comes to cameras. So for example, there's a device that I use called the CamRanger 2. It's for remotely controlling the camera. Basically, you plug it in, it's like a little WiFi hotspot, and you can then control the camera from an iPad, or your phone, or something like that, and you can do things with it that you can't do on the camera itself. You can do things like stacking of images, like-

Lou (00:28:03):

Oh, man. Focus stacking type stuff?

Eugene (00:28:06):

Yeah, yeah.

Lou (00:28:06):

Wow.

Eugene (00:28:07):

So it lets you do a whole bunch of really cool stuff. Just even the HDR bracketing and such, I find it's easier. And even just like a lot of times, when we're working with small... Let's say you had a small part or component that's fractured or broken, and you want to take photos of it. Just being able to even just see it on a very large monitor beside you, and you can get into all the details, and really determine if the photo is really crisp as opposed to just kind of looking at the little view at the back, because sometimes, they all look great there, because they're so small, and then you zoom in. It's like, "Oh, crap. I took a terrible picture."

Lou (00:28:40):

I've been there. You get back to the office and you're like, "Oh, man. That wasn't as good as I hoped," especially with the macro stuff, you know? That's really tough, and I suspect you've had people talk about that at the photography symposium. The focus stacking seems to be really helpful there, and I'm looking forward to... I just saw that RealityCapture, or Capturing Reality, I'm sorry, I never get that right, because Autodesk has, I think, a program called essentially the same thing, but I'm talking about the Epic Games program. They just posted on LinkedIn this morning that they have focus stacking capabilities within the program, so you can take photographs of the same object from the same place and change your focus 100 times, and then import them into that program, and apparently, they have a processing method for that, so I'm looking forward to exploring that a little bit. I was at an inspection of a helmet the other day with a colleague who had the Olympus TG-6, which specializes in macro photography, and it has the focus stacking process on board, which sounds amazing. I've got to check that out.

Eugene (00:29:42):

Yeah, when you want to get into small, little details, fractures and things like that, you can't beat it. It just does an absolutely incredible job. And one of the projects I was working on a little while back, which I'm going to initiate again, was looking at trying to recover postmortem fingerprints, so obviously very, very small details, but doing it in 3D. Obviously, if you try to zoom in on the ridge details on the finger, they're very small, and you have a limited depth of field. So if you do focus stacking on one perspective, you have a good image; then you shift to another perspective, do focus stacking, focus stacking, focus stacking. Now you have a whole bunch of images that are in focus that you can process in photogrammetry software, and it works.
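Under the hood, focus stacking keeps, for each pixel, the value from whichever frame in the stack is locally sharpest. A minimal sketch with OpenCV (file paths are hypothetical, and this simplified version assumes the frames are already aligned; real tools also register the frames, since refocusing shifts magnification slightly):

```python
# Minimal focus stack: for each pixel, keep the value from the frame whose
# local sharpness (absolute Laplacian response) is highest.
import glob

import cv2
import numpy as np

frames = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]  # hypothetical folder
grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
# Blur the sharpness map a little so isolated noisy pixels don't win the vote.
sharpness = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
             for g in grays]
best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest frame per pixel
stack = np.stack(frames)                        # shape (N, H, W, 3)
h, w = best.shape
fused = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("stacked.jpg", fused)
```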

Lou (00:30:27):

Wow.

Eugene (00:30:28):

So, it's super cool, but-

Lou (00:30:30):

That's crazy.

Eugene (00:30:32):

Yeah, yeah.

Lou (00:30:32):

Yeah.

Eugene (00:30:33):

The challenge we had was I did it on a cast of a finger, but for the project, we were trying to use real humans. When you get down to that level of detail, even just the blood pumping through your fingers moves the skin a bit. So dead people make really great subjects, because they don't move; unfortunately, living people are a little bit more difficult. But that's something that I'd like to entertain again in the future.

Lou (00:30:58):

That's really cool. So with respect to the scanners, what was the scanner you started with? Was that that big FARO that was silver and had the cooling fins on the side? I remember seeing that thing, yeah. I didn't start scanning until 2014, and we were already at the S120 at that point.

Eugene (00:31:17):

Yeah. It was, jeez, 2009 or something. It was the Photon 80, so 80-meter range, and it looked like something out of Doctor Who, right? It was this big silver thing or whatever, and everything was tethered, so you had an ethernet cable. You had to use a laptop to control it, or at least to set it up, to get all the settings. And then there was one button on it, so you basically hit the thing. Oh, the cool part was the crank. For the photos, what you would do is crank it up, and you'd take all your photos, and then you'd crank it down, and then you would scan... Or it was the other way around, I think. It was something like that. Basically, the nodal point... You'd get the camera here, and then you'd go up, and then you'd scan at that same point.

Lou (00:32:04):

Oh my gosh.

Eugene (00:32:06):

Yeah, so there was a crank on it, literally. Like, you'd crank it up to go, so it was fun.

Lou (00:32:10):

It sounds like a Flintstones device, yeah.

Eugene (00:32:12):

Yeah, yeah, yeah.

Lou (00:32:14):

Even though it's shooting out laser beams and measuring the speed of light and all that jazz. And then you go from that to the 120, which is obviously a big upgrade: really small, pretty darn light. And then it sounds like you have the S350; now we're even lighter. We've shifted from that 890-nanometer wavelength to the 1550, which seems to do a much better job on black and a much better job on chrome and shiny things in general. And speed. The things that I've seen: we're improving speed and we're improving the ability to capture data for tricky surfaces. What are you seeing, and where do you think we're going to go from here, with respect to terrestrial scanning?

Eugene (00:32:58):

Yeah, you brought up an interesting point about the shift in the wavelength. Most terrestrial laser scanners today are using the 1550-nanometer. Almost all the manufacturers have gone over, and that's just because there are some unique properties of that particular wavelength: it gets absorbed by moisture. If you were to look at an absorption graph, you'd see that at 1550, for moisture, it drops down. It just absorbs that particular wavelength. Which is-

Lou (00:33:22):

Interesting.

Eugene (00:33:23):

... good and it's bad. What you will notice... So the 120, I think, was at 905-nanometer, but if you look at the contrast, for example, if you scan a roadway and you're trying to pick up tire marks, you'll actually find that the 1550 gives less contrast. The 905, because it's lower, closer to the visible spectrum, gives you a really nice contrast, so I used to like it for that. But I think there are other things going on with the new scanners that are partly the reason why we're getting better data on black cars and things like that, and not solely because of the wavelength.

(00:34:02):

Now, we're seeing, obviously, a move towards a couple of things. One is, for sure, remote control, or being able to see the scans coming together on a device. A lot of people are excited about the fact that they can use an iPad, or a phone, or something like that, and see the results. Because before, it was like you popped out the SD card, and you'd give the sign of the cross, and you'd hope that when you got back, you got it, right? Because in the back of your mind, you're always wondering, if something is corrupted, I'm going to be flying back out here again. So that's no longer the case. Or some people would actually... And that's what I would do if I had a big job: I'd go back to the hotel right away, and push the card in there to get it going right away, just so I could see that I got all my data, or at least ensure that everything was okay.

(00:34:49):

So obviously size is a big factor, the fact that it's become so portable. That Photon 80, when it arrived at my house, it was like four cases. It was massive, and there was a lot of stuff to wrestle around. The tripod was a really robust tripod, you know? And now it's a small carbon fiber tripod, and you can put the scanner in a backpack.

Lou (00:35:13):

Yeah.

Eugene (00:35:14):

And you know, you look at what Leica's done with the BLK, the little BLK360, and I mean, you hold it in the palm of your hand. It's a beautiful little scanner, right? So the design is very beautiful. They've done a really nice job there, so anybody can carry these things around, just about any place. And I think that's really important. Portability is important, because it means you can get anywhere. You can stick it out on the end of a boom. You can put it upside-down. You can do whatever it is that you need to do. I think that's helpful.

Lou (00:35:43):

Yeah, getting it into place is... Exactly, like you're saying. Getting it into small places, if you want to measure the footwell of a car or something and scan the pedals. You know, if you have a beast like the old FARO, that's not going to happen, but that BLK, the data's obviously not as good as the FARO S350, or the RTC360, or something like that, but it has an advantage, where you can get it where you need to, kind of like when I'm doing vehicle inspections sometimes, my best tool is just my phone, because I can get it into this really tight spot and get a good shot.

Eugene (00:36:15):

Mm-hmm.

Lou (00:36:16):

But-

Eugene (00:36:16):

Yeah, absolutely.

Lou (00:36:17):

Yeah, no, you were still... I didn't mean to derail you. You were still talking about kind of where things are heading and what you think's improving.

Eugene (00:36:23):

Yeah. I mean, the other thing is that... I always say the scanner is becoming a more intelligent instrument, but really, it's a dumb instrument. It goes like hell, right? But it's not like a Total Station. A Total Station always knows where it is: when you're working with a Total Station, you have to set it up over a spot. It's leveled. It's stationary, and it knows where it is in its coordinate system. Otherwise, it doesn't work; it loses its marbles. So, the other thing that's obvious is that more sensors are going to be packed into these scanners. Look at the RTC with the VIS system. Now, we're optically trying to track where this thing is being carried, which I think is a really cool feature. And that is what helps the registration. That's what helps put everything together. All of the sensors, whether it's the compass, the GPS, the inclinometer, all those things are aids to helping figure out where the scanner is and how it's oriented, to set you up for a successful registration. So the more sensors you have, the better, right? For sure.

Lou (00:37:29):

Yeah.

Eugene (00:37:29):

The cameras, right? The cameras that are in there. And this is something I'd like to see a lot more of: better cameras, really good quality cameras inside of these things, getting really good pano images, getting higher resolution images, so you can get in close to some of the details and be able to pick those out. I think we're still suffering a little bit because of the way that they piece together the images. For example, in the FARO, it's smaller images, little thumbnails that they stitch together, and you end up with a 70-something megapixel image. In the case of something that was really high-res, you'd get really beautiful images, and HDR, which is really fantastic, too.

(00:38:15):

Something that I was always hoping for, and maybe we'll get it in the future with all these buzzwords, AI and machine learning and everything else, has to do with context. And what I mean by that is almost all of these technologies, whether it's photogrammetry or laser scanning, are brute force. We're using the physics and everything just to measure, measure, measure. But when you look at a pano image of a scene that you have, you're looking at it, you're going, "That's a car. That's a roadway. Those are road lines. Those are streets." Right? You have context. You can look at it.

(00:38:50):

But the scanner doesn't, and often, if there's a failed registration or something like that, it's up to the user to go back and say, "Well, I can see where the differences are." And I think that will be interesting, the day when the software begins to understand: "Okay, visually, I'm here, and now I'm here, and I can see where there are common references, and I can tell you that that's the car, that's the tree, that's everything else." So that, to me, would be an interesting sort of situation.

(00:39:20):

Similarly, for example, with photogrammetry. You know, we have difficulties with photogrammetry on flat walls, right? A flat, white wall. Cars. Good luck with cars, you know? You want a nice, smooth, crisp surface, and you get this mushy-looking disaster. And wouldn't it be so nice just to walk around a car, take photos or video, and then get this wonderful-looking model? And I think we're almost getting there. You've seen these NeRFs. You've heard of these neural radiance fields and things like that. But again, it comes down to context. If there's something that I could do to help the software and say, "Look, this is a flat wall," or, "Look, this is a car body, and it is basically smooth," maybe there is something there in the future, where it could fix those problems for us. And if anyone has seen some of the images that I've been putting up online, of NeRFs of cars and things like that, it does a pretty good job of trying to figure out where the surface of the body is and things like that.

(00:40:28):

But there are some dangers with that too. When I first started seeing the whole NeRF stuff, and it was a post from NVIDIA, they had a picture of a woman, and they took like 20 photos, and they produced this really cool model. I thought, "Oh, that's pretty cool. You're not using a lot of photos." And then I think I saw a post somewhere, where somebody said, "Hey, look at this NeRF model," right? It looked really cool, but it was like 600, 700 photos. And I thought, "Well, if I'm going to do 600, 700 photos, I'm no better off," you know?

Lou (00:40:57):

Right.

Eugene (00:40:58):

That's what I'm doing. I can do it with fewer photos with photogrammetry. But the thinking there is that if there are these difficult surfaces, or there are surfaces which are occluded, it somehow knows what is there. That, to me, is great from a creative standpoint, but it's also very dangerous from a forensics standpoint. We want to measure or know what is there. We don't want to guess at what isn't there, and so that could be a problem when people start using these things, whether it's AI for video, AI for photogrammetry, or AI for laser scanning; we've just got to be careful as to what it's actually giving us. Right now, when you measure with the laser scanner, if it can't measure it, you don't get data, or you get a lot of noise, or you get a problem, or you get some kind of a bias in the data. But at least you can see it.

(00:41:50):

And so I always say, and I say this a lot when I'm talking about the Recon-3D app, that point clouds don't lie. You know what I mean? Point clouds, they're not perfect, but they don't lie, meaning that if there's noise, you're going to see the noise. When there's a gap in the data, you're going to see the gap in the data, you know? Meshes are a little different. Meshes start to fill in the holes. They start to average out the surfaces. They start to do other things, so there's a lot of work still ahead of us, and I almost feel like...

(00:42:21):

I was thinking about that this morning, since I knew we were going to be talking: we're still coming out of the early ages of scanning. Laser scanning has only been around since the late '90s. It's only really taken off in the past maybe 15 years here in North America. In 2009, 2010, if you asked somebody if they had a laser scanner, you weren't going to find a lot of people that had one.

Lou (00:42:46):

No.

Eugene (00:42:47):

Yeah, so like you said, you began in 2014. You're getting on to 10 years just now, so if you think about that, we're still in our infancy. If you were to relate that to other technologies that have been around forever, we're still early on in the scanning days.

Lou (00:43:06):

Yeah. I have noticed that, and it affects a few things. One, like you're saying with respect to the registration software, we are still manually selecting things if the scanner has a problem, whereas if you took some smart matching concepts that are available in photogrammetry right now and applied them to the registration algorithms, it seems like it would be able to fill a lot of that void. The other big place where I'm seeing the young nature of scanning is in the analysts' ability to integrate it into their analyses. They can scan. They get the data, but then where do they go from there? And I think one of the best tools for us... We've done a lot of exploring and figuring out how to make good use of this data because it's so valuable, but CloudCompare is one of the best tools, in my opinion, for using the data and then putting it into a format, putting it into a size that is digestible, taking slices of cars if you want to look at crush profiles, things like that.
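As an illustration of the slicing Lou describes, the sketch below keeps only the points of a cloud that fall within a thin horizontal band, say at bumper height, which is essentially a crush-profile cross-section. The file name and column layout are assumptions (a cleaned PTS/XYZ export with x, y, z in meters in the first three columns):

```python
# Pull a thin horizontal slice out of a point cloud for a cross-section,
# e.g. a crush profile at bumper height. File names are hypothetical.
import numpy as np

cloud = np.loadtxt("exemplar_car.pts", skiprows=1, usecols=(0, 1, 2))  # skip PTS point count
slice_height, half_thickness = 0.50, 0.01        # slice at 0.50 m up, +/- 1 cm band
in_band = np.abs(cloud[:, 2] - slice_height) <= half_thickness
np.savetxt("car_slice.xyz", cloud[in_band], fmt="%.4f")
print(f"{in_band.sum():,} of {len(cloud):,} points fall in the slice")
```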

(00:44:12):

I know you're a huge fan of CloudCompare. I think I first heard of it through you, and then you have that Zero to Hero class, which a couple of my engineers have taken, and say great things about. But man, I'm so grateful that CloudCompare is around. It's one of my most used tools, and yeah, thanks for sharing all your knowledge there and bringing people up to speed.

Eugene (00:44:32):

Just a quick thought, backtracking a little bit, because you said something that triggered something in my mind, and that had to do with the scan data and then the photos and the imagery as well: sensor fusion. Right now, we're just pulling the RGB values and applying them to points. But why can't we take the photos and the laser scan data and do a type of photogrammetry-laser scanning combo, where one technology helps the other? Where laser scanning is advantageous, take advantage of that data. Where photogrammetry's advantageous, take advantage of that, and combine them together, and help one validate the other. So in the future, I'm hoping that we can do something with that. You know, panoramic images are not that great for photogrammetry. They do cause some problems with distortion and such, but people are getting there. There have been some posts recently where people are having more success processing panoramic images. And to your other point, knowing what to do on the backend is really, really important, and I was fortunate in some regard that when I couldn't afford a laser scanner, I had the software, so I would hire people or rent scanners, and then I would receive the data and work with the data first. So I knew how to work with the data before I knew how to press the buttons on the scanner, and I felt that that was actually quite beneficial to me, because the hard part is the output part. Anybody who works with point cloud data, let's face it-

(00:46:03):

The scanner's the fun part. It's the cool part, right? You press the buttons, and it's the tech. It spins around and everything and it's awesome. We love that stuff. But the pain starts after, right? That's where the pain starts.

Lou (00:46:13):

The 300-million-point point cloud. Yeah.

Eugene (00:46:18):

Yeah. Yeah. So to your point, knowing what options you have in terms of software, whether they're free or whatever. Daniel Girardeau-Montaut from Grenoble, just making this available... In some regard, he's the guy who started it all. He put it together, but it was a company that he was working for at the time that said, "No, we're just going to make this open source." And he gets like 30,000 downloads a month, and that could be people who are either updating their software or new to the software, whatever.

(00:46:53):

So there are tons of people using CloudCompare, and I use it several times daily. Especially now with Recon-3D, I'm opening scans with it, I'm cleaning things or whatever. So optimizing point clouds before you start doing what you need to do with them is still very helpful. But the number of avenues that we have with a point cloud is still limited and quite difficult. And as you know, meshing a point cloud just of a car, a simple object, a car, we see it every day, but it takes work; it's not just press a button and it happens. Otherwise, you get garbage. So there's a lot of manual intervention that needs to happen. There are some things that you can do, and there is some software out there that can help you, for sure. But a lot of people are still doing a significant amount of retopologizing of the mesh. They'll mesh, and then they clean up the mesh, and the tires. What do you do with a single face of a tire when you can't get underneath? There's a lot of manual work that needs to happen.
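On the point about optimizing point clouds before you start: CloudCompare can do a lot of that cleanup unattended through its command-line mode. A minimal sketch driving it from Python; the file names are invented, and while -SILENT, -O, -SS, -C_EXPORT_FMT, and -SAVE_CLOUDS are documented CloudCompare command-line switches, check them against your installed version:

```python
# Spatially subsample a heavy scan with CloudCompare's command-line mode,
# keeping at most one point per 5 mm, before doing interactive work on it.
import subprocess

subprocess.run([
    "CloudCompare",            # assumes the executable is on PATH
    "-SILENT",                 # no GUI, no dialogs
    "-O", "site_scan.e57",     # hypothetical input cloud
    "-SS", "SPATIAL", "0.005", # spatial subsampling: min 5 mm between points
    "-C_EXPORT_FMT", "LAS",    # export format for saved clouds
    "-SAVE_CLOUDS",
], check=True)
```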

(00:47:55):

But people should understand what they can have: virtual tours, models. Now there are also services you can pay for where you can get your model online, look at it, take measurements and do things like that. That's pretty cool. Of course, you can always do the usual drawings and things, cross-sections and measurements. Measurements are well suited to the point cloud, because that's what it is, just a bunch of points that you measure from. Yeah, and the 360 panos are super helpful when you're creating those virtual tours.

(00:48:22):

So 3D printing, that's another one for sure, if you have to 3D print something. But again, not an easy workflow from a point cloud.

Lou (00:48:31):

Yeah, because you've got to get to that mesh first, and that's got to be one of the problems that we found. You go into a program and create a mesh automatically from a point cloud. First of all, it has a lot of noise, a lot of topological issues. But then it's also several hundred million polygons at times, and if you try to bring that somewhere else, it's just going to crash the program, crash your computer. So, we have been working hard on developing processes for taking that multimillion-polygon mesh and turning it into something more usable while remaining highly faithful to the point cloud, which at this point seems to require very skilled human intervention. It's not automated. And honestly, I thought we'd be there by now, but it doesn't seem to be getting there, and maybe machine learning, neural networks, AI, maybe that's the answer, like you said.
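The automated part of what Lou describes, collapsing a multimillion-polygon mesh to a workable size, can at least be prototyped with open tools; staying faithful to the point cloud is the part that still takes a skilled human. A minimal sketch using the open-source Open3D library (file names invented, target triangle count arbitrary):

```python
# Quadric-decimate a heavy scan-derived mesh down to a usable size, then
# strip the degenerate geometry that decimation can leave behind.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("car_raw_mesh.ply")   # hypothetical input
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
mesh.remove_degenerate_triangles()
mesh.remove_duplicated_vertices()
mesh.remove_unreferenced_vertices()
o3d.io.write_triangle_mesh("car_decimated.ply", mesh)
print(f"{len(mesh.triangles):,} triangles after decimation")
```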

(00:49:23):

It should be able to identify a door panel and then move forward accordingly, so hopefully we get there at some point with that sensor fusion. And that brings me to handheld scanners and your app, Recon-3D, which is a fantastic example of that fusion. For those that don't know, it came out in, what, May 2022? It works on any iPhone with LiDAR. So if I have learned from you appropriately, let me get this in the right spot: the LiDAR is this little guy right here, and then you have the camera array, and it's combining the photographs with the LiDAR, making use of both of those systems' strengths to create the best model that it can. Who would've thought that it would be our phone that would really be the first implementation of that? I'm not sure if it's the first, you can tell me there, but it's for sure an elegant and cost-effective implementation.

(00:50:23):

So how did that journey start for you? How'd you get into building that platform?

Eugene (00:50:28):

Well, like you and a lot of other people, as a user, you're always trying to find a low-cost method or way to do something, whether it's scanning or whatever. And yes, we have the scanners. I do not think that the terrestrial laser scanners are going anywhere. They're still amazing, incredible pieces of equipment. For roadways, for so many different things, they are extremely vital. So, if anyone thinks that I'm saying this is going to be a replacement or whatever, I don't think so. Recon-3D has its own sort of little area where it takes up a little space, and it's useful in that space. I think a lot of people find it quite helpful.

(00:51:09):

But what had happened was, I've always been looking at... So the Intel, I've got one up here, that little guy up on the little tripod there. That's the little Intel sensor, right, one of the structured light devices. So I was playing with those, I was playing with the first Kinect. I was always looking at what I could do to extract as much information out of these low-cost devices, because those I can throw in my backpack; I can put one in my back pocket and just hook it up to my phone.

(00:51:34):

So I want to give a shout-out to DotProduct, because they were one of the ones that really motivated me with these sensors-on-a-tablet kind of things. But it's always been... How can I say this? It never met the threshold for me. I was always waiting for it to get there. It was like, cool, and then I'd look at the data and I'd go, "Ah, it's just not quite there."

(00:51:57):

So I met a gentleman called David Boardman. I don't know if he's the CEO or CTO, but he's up top there at EveryPoint. And before that, they were called US Robotics, I think. US Robotics or... No, I can't remember. It was something robotics. But basically what they were doing was taking massive data sets of photographs and then processing them for photogrammetry. They were working with defense departments and aircraft, and we're not talking 500 photos, we're talking tens of thousands of photos, maybe even a hundred thousand photos, and then processing them. They would put network systems on the aircraft that would start processing while the aircraft was still flying. At least that's my understanding.

(00:52:50):

And so, I knew him back in 2011 or something like that, and every now and then we'd keep in touch. And they went through their iterations of businesses, and they focused on something called Stockpile Reports, where they're using a phone app. They weren't using LiDAR initially; they were just using video, recording and then pulling the frames, and then using photogrammetry. But the chief scientist there, a computer vision scientist, his name is Jared Heinly, brilliant guy. They decided that they could take the LiDAR data out of the iPhone and use it to create what's called a depth map. And that depth map is really a picture where the pixel color represents a certain distance to an object. And if you have an estimate of depth for an image and you can overlay that onto a photograph, well, it's giving you information that you didn't have before.
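The geometry behind that depth map is worth spelling out: each pixel's depth, combined with the camera's intrinsics, back-projects to a 3D point. A minimal pinhole-camera sketch (the intrinsic values and the flat-wall depth map below are invented for illustration):

```python
# Turn a depth map into 3D points: each pixel's value is a distance along the
# ray, which the pinhole intrinsics convert into an (X, Y, Z) position.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths; returns (H*W, 3) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: u = fx * X/Z + cx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 m away seen by a 640x480 camera.
pts = depth_to_points(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(pts.shape)  # (307200, 3)
```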

(00:53:44):

And so, when I first saw the technology, the first thing I asked was, "Can you do a car? Let's see what a car looks like." And when I tried it on my car and I saw the results, I was like, "Damn, that's a pretty good result for a phone. I've never seen anything that close before." The data looked crisp, at least for a phone. It looked accurate. Normally on the bodies, where you get distortion or noise, it was conforming to the actual surface of the vehicle. And that's when I really thought hard about whether or not this was something that I wanted to jump into.

(00:54:26):

And, I just decided I'd like to do it. I felt like I was the right person to jump in and do it, so yeah, that's what I did. I embarked on a new journey, and it's going well. It's going really well. I'm happy that people are using it. I think that's the big thing. People are coming back and saying, "Hey, it's working for me," and people are doing a lot of vehicle inspections with it, and it seems to be a useful tool. As somebody who's developing something like that, I think that's the biggest compliment you can get, is when people are actually using your software or hardware or whatever it is that you're doing.

Lou (00:55:07):

Yeah, and I think you were the right man for the job just because of your foundational understanding of the scientific principles that drive that engine. And then of course you have such a big presence and reputation, so I think you could distribute it and make it known.

(00:55:25):

And I'm a user of it. I have found several places where it's super helpful. I have a few different scanners: the Leica RTC, the FARO S350, and then a FARO M70. So in a lot of cases, I don't necessarily need it for a normal car inspection or something like that. But at times I'll use it anyway, just to supplement whatever I did with my FARO. And what I do with my FARO probably takes an hour and a half, and what I do with Recon-3D takes about four minutes.

(00:55:56):

And then we have intentionally compared some of the Recon-3D data to the RTC. SATAI, I think, is actually happening this week or next week, but at last year's conference there were some crash tests performed. It was a black Honda Civic. We scanned it with the Leica RTC360, which in my opinion is just the cream of the crop, the best laser scanner out there right now, at least that I've ever gotten my hands on. And we compared that data to the Recon-3D app. It's not the same density, there are some differences, but at that conference I gave a presentation and I put up the Recon-3D model. Well, scan data, let's say; the differentiation being that I consider a model to be a mesh or something like that. Anyway, I put up the 3D point cloud data from Recon-3D and then from the RTC, and I asked the audience, "Tell me which one is which."

(00:56:49):

Now, when you get really close, you can see the difference. But from a left front view or something like that, the data was remarkable, and when we compared the accuracy using CloudCompare, it overlaid extremely well. And one place where I have found that app, Recon-3D, to be really useful is during my site inspections. So I'll run the scanner, and obviously there's a decent amount of downtime between scans. If the motorcycle hit a boulder or something like that that I would like a lot of detail on, I can actually get better scan data of it with Recon-3D, with a lot of texture, because I can walk all the way around it and get super low to the ground. I just make sure that I get enough context, so when I come back, I bring both point clouds into CloudCompare, merge them up, and now I've got a killer hybrid of data. So that program's great. It's definitely worth the price of admission in my opinion, and thanks for doing it. I'll let you speak to anything further you want to there, but it also seems like they're working on developing mesh creation tools from those point clouds. And I'm not sure if that's good or bad, but I would love to hear your take on it.
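
The merge workflow Lou describes is interactive in CloudCompare; for a scripted equivalent, here is a rough sketch using the open-source Open3D library. The file names are hypothetical, and it assumes the two clouds have already been roughly aligned (for example, by picking common points) before ICP refines the fit.

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; PTS/E57 clouds would first be converted to a
# format Open3D reads (e.g., PLY).
scanner = o3d.io.read_point_cloud("faro_scene.ply")
phone = o3d.io.read_point_cloud("recon3d_boulder.ply")

# Fine registration: point-to-point ICP with a 5 cm correspondence
# threshold (tune to your data), assuming a rough alignment already exists.
result = o3d.pipelines.registration.registration_icp(
    phone, scanner, max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

phone.transform(result.transformation)  # move the phone cloud into the scanner frame
merged = scanner + phone                # Open3D concatenates clouds with '+'
o3d.io.write_point_cloud("merged_scene.ply", merged)
```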

Eugene (00:57:59):

Well, yeah. Well, let me back up for a second. So with Recon-3D, because we're working in this area of accident reconstruction, the majority of people who have taken it up right now, if I had to guess, I would say somewhere around maybe 80%, are working in the accident reconstruction field. So they have really taken hold of the Recon-3D app. But my commitment there is that I want the crime scene people and other people in forensics to use this. And as a result, we have to focus on understanding what the accuracy is. And I always tell people this: it's more important to understand when it doesn't work, what the errors are, what the uncertainty is, than the actual answer or the actual truth. That's just the way forensics works.

(00:58:50):

If you're doing an analysis, and let's say it's a suspect height analysis or something, and the guy's six feet. Great, but what if your error tolerance is plus or minus five inches? Well, there's a lot of people that fit in that range, right? So the uncertainty or the error puts it in the context of whether or not you can use it. So, we're doing a lot of work with studies and working with people that are running accuracy studies. And honestly, whatever they find, they find. I just want to be very transparent, get the information out there, keep doing studies, using it for different applications. And in each one of those areas, validating the data.

(00:59:33):

The point clouds look good. I think you benefit from the camera in the phone, because you'll notice that the color is actually quite good. Even sometimes in weak lighting or whatever, you get pretty good color; whereas sometimes on a scanner, depending on what scanner you have, you'll see the colors don't look the same. The contrast is a little bit different or whatever. So in many instances, the color can work to your benefit. There's a whole bunch of things there that need to be developed, and it is still new. So there's a lot of work ahead of me in terms of features and options and all kinds of different things. The good news is that the data is very promising and there's a lot of room to grow, and that's the really exciting part: the roadmap ahead and what you can do. Things like meshing are possible for sure. It's not uncommon to mesh, especially because we have a photogrammetry element where we're taking video, extracting the frames, and creating a model, with the LiDAR bringing it all together. Oftentimes, people call it a LiDAR device. It is, but you should actually think of it more as photogrammetry assisted by LiDAR. That's really the way to think of it.
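
Since that pipeline starts by pulling frames out of the video, here is a minimal sketch of that step with OpenCV. The file name and frame stride are illustrative only; this is not Recon-3D's actual pipeline.

```python
import os
import cv2

# Pull every 10th frame from a walk-around video as photogrammetry input.
os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("walkaround.mp4")  # hypothetical file name
saved = idx = 0
while True:
    ok, frame = cap.read()
    if not ok:                  # end of video (or unreadable file)
        break
    if idx % 10 == 0:           # keep a manageable subset of frames
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
    idx += 1
cap.release()
print(f"extracted {saved} frames")
```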

Lou (01:00:47):

So it's telling it what the depth of those points is, so that if it's having a hard time on, say, a huge flat panel on the side of a car, and it's like, "Well, I can't really tell where that is," the LiDAR says, "Well, I'll tell you where it is," and it works accordingly.

Eugene (01:01:03):

Right, exactly. That's exactly what happens, and that's exactly why on flat walls, flat white walls and things like that, you get a pretty good result.

Lou (01:01:14):

And one of the great things, like I said, and I'm not just trying to blow smoke, but you're the right man for the job, and I think something that's really important when you're using data like this is to set the scale. If you set the scale wrong, even if it's only off by 10%, all of your data's crap. And you have an integrated system for setting the scale. We've used it, we found it to be very accurate, and it's just crucial if you're using these measurements for anything where the actual measurements matter, as opposed to just a nice looking model.

Eugene (01:01:48):

Yeah, well that's a good point. But again, it's not a calibrated device, so you need to somehow have a reference scale, a reference measurement, something that you can hang your hat on that's guaranteed. So in the app, we have the AprilTags, that thing that's behind me there, and you set a couple of those down. You measure the distance between them as accurately as possible, and of course there's a method to how you lay those down. You never want to have them close together; you want to have them further apart. And then at least you have something. But if you don't, then you don't know what you have.

(01:02:20):

It could be perfect, but you wouldn't know. And I think that's the issue, is you have to have a known something to fall back on. In forensics, anyway. It's just the best practice, whether it was with the laser scanner or whether it was with the total station, people are always taking some kind of a reference measurement. And I don't think that this is any different.
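
In principle, the correction Eugene describes is a single ratio: the tape-measured distance between two reference markers, divided by the same distance in the unscaled model, applied to every point. A toy sketch with invented numbers:

```python
import numpy as np

def rescale_cloud(points, marker_a, marker_b, known_distance_m):
    """Uniformly rescale a point cloud so the model distance between two
    reference markers matches the tape-measured ground-truth distance."""
    model_distance = np.linalg.norm(marker_a - marker_b)
    scale = known_distance_m / model_distance
    return points * scale, scale

# Invented numbers: two tags 2.00 m apart in reality, 1.82 "units" apart in the model.
cloud = np.random.rand(1000, 3)
a, b = np.array([0.0, 0.0, 0.0]), np.array([1.82, 0.0, 0.0])
scaled, s = rescale_cloud(cloud, a, b, known_distance_m=2.00)
print(f"scale factor: {s:.4f}")  # ~1.0989
```

This is also why the tags should be far apart: a fixed measuring error of a few millimeters is a much smaller fraction of a long baseline than of a short one, and any scale error propagates to every measurement in the cloud.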

(01:02:41):

There is a technique to scanning. It's not just point-and-go. It is relatively easy, and most people can get some data out of it, but there are things you can do that can improve your results. So that's one reason for having the Recon-3D course. And there were other reasons, because again, in forensics, people want to get a certificate. They need to be trained. At least they can say, "Hey, I've done an exercise, an assignment. I've shown that I'm able to do this." So yeah, like I said, it has its uses, but it's important for people to understand where it's not useful, too. There are going to be situations where photogrammetry is going to be a better option for you.

(01:03:26):

So I've had people say, "Hey, that's great. I'll take the phone and I'll put it on my drone." I'm like, "Why? If you're going to do a roadway with the drone, you've already got a great system there. There's no advantage." And this is limited to about five meters before the sensor just dies out. It's not a high-powered LiDAR sensor.

(01:03:48):

So yeah, there's a lot going on there and I'm really looking forward to it. It's a fun class to teach, and I get excited talking to people about how they're using it and different applications. Like I said, there are people outside of forensics that are using it, but it's not my main focus right now. So I'm focusing on the accident recon people and a lot of the stuff that they're doing. They seem to really be taking it up, and especially people who are new. Those seem to be the people who are adopting it. People that are brand new, that just couldn't afford a laser scanner before; it's their entryway into this sort of area. And so, they buy the phone. And it's not hard to convince somebody to buy an iPhone, especially if they're going to use it for work or something like that. So it's worth the investment. And CloudCompare is free software, which allows you to do a whole bunch of stuff with the data. So the relative cost of this is low.

(01:04:41):

And then, there are people like yourself who already have scanners, but maybe they only have one scanner and they've got four people in the company, and three or four people could be out at an inspection at any time. So where do you send the equipment? Well, if everyone has a phone, now at least they have something that they can use. That seems to be very, very common, actually.

(01:05:03):

The other thing I'll say about Recon-3D is that the community is really important to me. So having a strong user group, a user base, is really, really important. And actually, on March 21st, I'm going to have our very first user group meeting, where I want to bring people together. I want people to talk and exchange ideas or whatever. I get out and I scan fairly regularly, but these are the people that are going out every day. They're doing hard projects, they're doing more difficult projects, and they are learning. I know there's going to be a point where there are people who know more about how this thing behaves than I do, because they're just doing it day in and day out.

(01:05:41):

So it's important that we share information, and that when there is a new user, they feel that there's a community there to support them. I've started a Discord server now, so we're on there and we can exchange information. Yeah, I think there are a lot of things that are going to happen with this app. And even if it remains a small community, I'd rather have a couple few hundred really good, strong users who are sharing and helping each other than a thousand people that are completely disconnected and just not cooperating with one another. So yeah, I'm going to do my best to try and grow the community as best as I can.

Lou (01:06:21):

Yeah, I think you're doing a great job and I'm really looking forward to seeing how things continue to evolve. And I think it's being adopted by the community, from what I can tell, just in my interactions with other reconstructionists. It's such a useful tool. And like you said, for the people that can't go out there and drop $50K on a FARO S350 or whatever the latest and greatest is, oh man. I mean, if you already have an iPhone, the price of admission is essentially nothing, and you can get some really good data that wouldn't otherwise be available to you, and it beats the crap out of a tape measure and a crush jig or something like that. Yeah, thanks for doing that. Excited to see where that goes in the future. Which brings me to my next segue, which is kind of a speed round of questions tied in with Eugene Liscio's predictions of the future. And granted, we're all pretty bad at predicting the future, but one of my goals with these interviews is to figure out... well, everybody's got their little niche, and you're going to be better at predicting that niche than I am.

(01:07:27):

One of the things it seems you and I have both observed is the evolution of photogrammetry, which has been really interesting: back from 1850 to the digital advent, where we were still manually marking everything, to smart matching, to now, where we can just take 50 photos of something and have a great 3D model of it. It kind of went in and out of vogue for a bit, and now it's back in huge fashion, where photogrammetry is really a part of everybody's toolkit now, I think. And if it's not, it should be. But photogrammetry's been huge. So where can we go from here with photogrammetry? It seems like we've already made such big leaps and bounds. Do you think we can take it even farther?

Eugene (01:08:12):

I think so, but, well, there are going to be a few angles there. So for sure, the types of cameras that are being used. Just as an example, think about a 360 camera. The advantage of a 360 camera is enormous. It can get into areas that would be very difficult to get into with just a regular camera. So for example, in here, in your room or your office, the 360 camera captures everything all around it. If you have to do that with a regular camera, even with a wide 14 mm lens or something like that, you're going to be taking a lot of photos all the way around. There's a lot of difficulty there. So working with different types of lenses and different types of systems, if we can clean that up...

(01:09:03):

Imagine you could do a collision scene or something like that just with a 360 camera. You could walk around, click, click, click, click, just walk around or take video. And all of a sudden, it produces the model. There are already people working on it, and I've seen some early examples; in some cases it works pretty well. So yeah, I think there could be some speed improvements. Yeah.

Lou (01:09:23):

That's a little meta because you're taking photogrammetry of photogrammetry in a sense, because these cameras are being stitched together via some photogrammetric process, I imagine. And now we're going to apply photogrammetry on top of that. And as long as they're stitched well and we know the distortion characteristics, it doesn't seem to be a problem. But I guess, we have to get there and prove it.

Eugene (01:09:45):

Yeah. But the other thing is the sensors. Just look at the GoPro. I can't remember who did the presentation a while back, but the amount of information that's inside of a GoPro now, right? A Hero 10 or 11.

Lou (01:09:57):

It's amazing.

Eugene (01:09:58):

Yeah. You pull it out and you get all the telemetry and stuff like that. And one of the things with photogrammetry is you're always trying to figure out where the camera is. So anything that you have that assists in telling you where you were and where you went to speeds up the process and helps you get a better solution. And that's why GPS is helpful, right? GPS is helpful because it says, "Oh, you were here, you were here, you were here." But now you include accelerometers and other things which are in the phone.

(01:10:27):

That can be super helpful too. So hopefully, in the future, other cameras and other hardware will improve by having these other sensors that help the photogrammetry algorithm. On the photogrammetry side itself, the real magic behind photogrammetry, what set everything off and made it automated, was something from the computer vision world called SIFT, the scale-invariant feature transform. That's what does all the feature matching between all of the images. And so, like you were saying before, anyone who did photogrammetry a while back just pulled their hair out, because if you had even just 15 photos and you had to match them by hand, it's a real pain in the rear. So now... well, you do have to limit the feature matches, because otherwise you'll get too many. So you tell it, "Look, once you get like 7,000 or 8,000 feature matches, just stop, just move on. You've got enough."
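
For a concrete feel for that feature-matching step, here is a minimal sketch using OpenCV's SIFT implementation. The file names are hypothetical, and the 8,000-feature cap mirrors the numbers Eugene mentions rather than any tool's actual default.

```python
import cv2

img1 = cv2.imread("scene_photo_01.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("scene_photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Cap the detector at roughly 8,000 features per image, as discussed above.
sift = cv2.SIFT_create(nfeatures=8000)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]
print(f"{len(good)} feature matches between the two photos")
```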

(01:11:22):

So that part, I think, is fairly robust, but it could potentially get better. And then what we were talking about is context. So now, not just looking at the features, but looking at the photos and saying: that is a car, that's a tree, those are road lines, those are road markings. And then being able to use that information as part of the algorithm and make some improvements. And on the back end too, there are people doing a lot of work with the result, the point cloud data: being able to segment it, so they can already classify different parts of the point cloud in different ways. There may be ways of doing things on the... well, now we're jumping into the laser scanner, but I was thinking in the same vein with intensity data: what other things can we do with the intensity data that we're not doing right now?

(01:12:16):

There could be some things there. So I think there are plenty of improvements coming. In the next five years or so, are they going to be, "Wow, dramatic"? Maybe not. But I think as we go forward, we're going to see these incremental improvements. I wish they were faster, because for a long time it's been, "When are we going to have this? When are we going to have that?" It just seems like it takes forever and ever, but these things move incrementally. Not everything is ChatGPT; it rarely happens seemingly overnight.

Lou (01:12:48):

And that is a good point. It's like the discussion we were having earlier: sensor fusion is naturally going to propel photogrammetry forward too. And I hadn't even thought of that with your point about the GoPro, where it's got accelerometers on board, it's got a gyro on board, it's got GPS on board. So if we feed that into a program that can look at all of that data, it will essentially have the Leica RTC360's VIS system, where it knows where the phone or GoPro that took that photograph was before it even performs a photogrammetry analysis. So that can speed things up, and then you fuse it with the sensor data, and hopefully we can start modeling big flat things. It's going to be some exciting stuff to watch. I wanted to go back one tick with respect to something that may be the future or may not be. For me, it is the future, but I also want to respect your time.

(01:13:42):

So I have a few speed round things to get you out of here. Maybe we'll do a round two at some point. But for photography: as a collision reconstructionist, I had not been exposed to UV or IR, infrared or ultraviolet, photography. And one of the things that's really tough for us at times is photographing tires, especially on motorcycles that may or may not have been subject to braking forces that would generate very subtle evidence that may or may not be visible to the eye. I saw some of the photographs coming from the symposium that you run, for tattoo identification and things like that, and it's really impressive. So I don't want to take up too much time on this, but if you could just introduce those two types of photography and what we might be able to get out of them in the reconstruction field that you guys have already figured out in the forensics field, I'd love to hear you talk about that.

Eugene (01:14:42):

Yeah. And I mean, this relates right back to the scanners and photogrammetry, and there are units out there now. So in forensics, multispectral analysis is not uncommon. It's something that's done in different areas. Of course, humans see this very tiny sliver of the electromagnetic spectrum. It's just very, very tiny: on the low end we're talking, whatever, 400 nanometers, up to 700 nanometers, something in that range. Once you get up to 800, 850 nanometers, you don't see anything anymore. So your CCTV cameras, when you're looking at those, sometimes they have that little ring of light around them with a little bit of a glow coming out; those are usually about 850 nanometers. So you can't really see it. You can't see the light that's being emitted.

(01:15:33):

So there are things you can see because of a longer or shorter wavelength. For example, with UV, like you're saying with skin: it penetrates deeper into the skin, so you can see under the skin. And when you start getting to the other end with IR, once you start jumping into the 800s and 900s, you see other things that maybe you won't see in the visible spectrum. Usually, what you're trying to do there is look at absorption. You want to subtract something from the background: if the background reflects, you're hoping that the item that you're looking for will absorb, or vice versa. So you're looking for the negative effect of that. Sometimes it means you have to experiment with filters and things like that, to try to get into the right bandwidth or the right wavelength that you need. I've done a couple of projects where I shot infrared, so I have a camera that's been converted, a Nikon D7500. It's a full spectrum camera, and there are a couple of companies that will do that conversion for you. And it's not that expensive, actually. So even for people who want to repurpose an old camera... I know a lot of people upgrade. If you've got an old camera sitting around, check out what the conversion costs, because it might just be a couple few hundred bucks, and all of a sudden you've got a full spectrum camera, you already have all your lenses and everything else, and it allows you to see things that you normally wouldn't be able to see. And that's really what's important. So, tire marks on the ground, maybe certain things that are on a car body, especially things with fluids, fabrics and stuff like that.

(01:17:16):

There are a lot of different things. For example, maybe even seatbelts, right? And I'm trying to relate it to accident reconstruction, because in criminal cases they use it for semen and fluids and blood and stuff like that, gunshot residue. They'll use that in the infrared range. So there's a ton of opportunities there. There's a ton, just because of the way that light behaves at these different wavelengths. And so if you have a tire, you might be able to see something on it that you couldn't see in the visible spectrum. That's what it is; it's looking. Unfortunately, a lot of it is trial and error, so it isn't always clear cut that if you use this wavelength, you're going to see this every single time. That's not always true; with different chemicals or fluids or different types of materials, it's different.

(01:18:09):

But for sure, the higher you go in the spectrum, so if you go up in the infrared, things like fabrics, for example, lose their color. So I've got a dark shirt, you've got a dark sweater or whatever; in the high infrared it just looks white. There's no black, there's no red, there's nothing. It's just completely white. So you start to lose a little bit as you go up, but sometimes you catch something in those ranges. And really, that's what it's about: being able to see things that you couldn't see with the naked eye alone.

Lou (01:18:39):

I love it. And I think these kinds of interdisciplinary conversations that you and I are having right now end up with this cool creep of technology between the disciplines. And that is one that I will certainly be messing around with and exploring. I'll probably go out, run some skid tests, and start photographing tires to see if something pops out that I didn't see with the naked eye. Really cool.

Eugene (01:19:03):

Yeah.

Lou (01:19:03):

All right, speed round. We covered a lot of the stuff that I had in my notes here somewhat naturally, just because we're two 3D geeks talking for a while about a lot of this stuff.

Eugene (01:19:17):

Yeah.

Lou (01:19:17):

It's great. And now I've got some speed questions. I'll start out with kind of a fun one that I'll be curious to hear your answer on, which is: best investment you've made that's under $5,000 in the past couple of years?

Eugene (01:19:32):

Best investment under $5,000. Yeah, man.

Lou (01:19:37):

You can tweak the price if you need to.

Eugene (01:19:39):

Yeah, I'm trying to think of something that's super cool or whatever. But I have to tell you, between my phone, my laptop, and my iPad, those are the things that I use absolutely the most. The infrared camera, that cost less than $5,000 for sure, and that was a cool investment. So that one I would recommend if anyone is passionate about photography or doing something like that; especially now, you can get a brand new camera for even less than $5,000, and it could be very handy during your investigations. Other things: a polarizer for your camera lens, not that expensive, and it works wonders in some cases, absolute wonders. So for a couple hundred bucks or whatever, you can get a really good polarizer. And the little things. I'd rather get a lot of little things than one big thing sometimes. It's like the little prize bag the kids get at birthday parties, right? Keeps you going for a little while.

Lou (01:20:50):

Yeah, I'll take a forensic grab bag please.

Eugene (01:20:53):

Yeah, something like that. So yeah, I think those are the main things. The scanner, that structured light scanner and stuff like that, too. But I think on a day-to-day basis, the things that I really push are going to be my phone, my iPad, and definitely my laptop. Get a good laptop.

Lou (01:21:11):

Yeah, I think that's key. And the phone has become so vital in this business nowadays, from just looking things up while you're on the road, to taking photographs in tight spots, to the Recon-3D app, to, if you need it, a data acquisition system on the fly; it's got accelerometers on board. Granted, I think the GoPro is now going to take over some of that role, but I'm with you. Some of the most used tools in my kit are the simplest tools. It's not always the sexy stuff, but that IR camera is indeed sexy, and I'm going to be checking that out. Is there a tool that you have in your kit right now that you don't think you'll be using in five to 10 years?

Eugene (01:21:50):

Total station.

Lou (01:21:51):

Yeah.

Eugene (01:21:52):

Yeah. Total station. Actually, to be honest with you, I've sold it. So-

Lou (01:21:57):

It's already gone. Yeah,

Eugene (01:21:58):

It's already gone. But I had it here, and I didn't want to let it go. I was in love with the thing. It was such a workhorse for me; not a scratch on it. I baby my equipment and keep it in tip-top shape and everything. It was very difficult to let go of. But the fact is that, between everything else that I have, I just haven't had the need for it. And that's not to say that it isn't useful. It is useful, and there are going to be situations where it can be helpful.

(01:22:28):

We talked about things like sensor fusion. Well, there are things you can do with a total station and a laser scanner that can really be great for extremely large areas. So for example, if there's an aircraft crash or a train derailment or a big disaster somewhere, you can set up the total station as sort of the central device, and then you could do three or four scans over here, then move 1,000 feet and do three or four scans over there, and then go completely to the other side and do some more.

(01:22:59):

And the total station, by shooting in targets and things like that, keeps all of the scan data geographically located and accurate. So there are uses for it, but I don't do that often, and unfortunately I've had to let it go.
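
The registration Eugene describes, tying scan positions together through targets shot with the total station, boils down to a best-fit rigid transform between the same target centers seen in two coordinate systems. A minimal sketch of that math (the standard Kabsch/Horn least-squares solution), with hypothetical coordinates:

```python
import numpy as np

def best_fit_rigid(src, dst):
    """Kabsch/Horn: least-squares rotation + translation mapping src points
    (e.g., scanner target centers) onto dst (total-station coordinates)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Hypothetical coordinates of three shared targets in each system (meters).
scanner_targets = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 8.0, 0.5]])
station_targets = np.array([[102.1, 55.3, 1.2], [111.9, 57.0, 1.2], [100.7, 63.2, 1.7]])
R, t = best_fit_rigid(scanner_targets, station_targets)
# Apply to the whole scan cloud: world_points = scan_points @ R.T + t
```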

Lou (01:23:13):

It's funny. So when I started Axiom, the consulting side of my business, in 2018, I left a firm that had a total station. We used it quite regularly, and I invested in a FARO M70. At that point I thought that was the best bang for the buck. I still have it, kind of like you and your 120. I don't see any reason to get rid of it. It's a really useful tool, and I just opted to never get a total station, and I have not missed it that much.

(01:23:39):

If I do a good job marking evidence at the scene, then it pops up in the scan data and in the drone data. And a tool that I am considering is a GPS RTK unit. It's cheaper, it's easier to tote around, and I can get control points for drone flights or touch some evidence. At times, I would like to touch some evidence, but I really haven't found it to be necessary in the past few years. I'm going to put you on the spot here. It's okay if you don't have a good answer to this, but what tool do you think will be in everybody's kit in five to 10 years? Kind of the opposite of the last question.

Eugene (01:24:13):

Well, I mean, yeah, that's a good question. But the thing that is already in everybody's back pocket is the phone, and so what else is going to end up on the phone, or what else could you attach to a phone? We already use the phone for controlling your GoPro, controlling 360 cameras, controlling the laser scanner. So everybody's looking at a way to maximize the phone and what we can do with it. If we could get some more functionality out of the phone with some other sensor or something else in there... I just see more things coming, and maybe the advantage is not going to be on the hardware side; maybe it's going to be on the software side. Maybe there's going to be something else that comes in that's going to make the phone even more valuable, because of the way that we're extracting video or, like you said, the sensors and things like that.

(01:25:02):

So I love the fact that scanners are getting smaller. I love the fact that you can put one in your back pocket now, you've got it in your phone, but there is probably still more room to shrink. And I don't mean just the sensor; I mean, looking at some of these larger companies like FARO or Leica, maybe they can squeeze a bit more, with some new technologies or new things they can do, that will make it easier for people to purchase one of these things and carry it around. But unfortunately, when something costs $50,000, $60,000, $70,000, you've got to work hard to pay that back. So it's not as simple as a phone. So I think, hopefully, we'll be converting more and more people over to Recon-3D. Maybe they'll have more of the iPhones in their back pockets.

(01:25:56):

We'll see. And you know what? I'd be more than happy if Samsung... because people ask me, "Hey, what about Samsung or Google?" I would love it, I would love it, if they could come up with a sensor. And I have heard rumblings about things like Microsoft working on a type of sensor, and other things going on, but nothing confirmed right now. But that would be super cool. And competition is always good, right? Always good. So we all benefit if there are two or three phone companies making different types of sensors. That's what I really would like to see.

Lou (01:26:29):

Yeah, I'm with you there. I think the phone is going to continue to improve, and it's really kind of getting back to what you were talking about before: it's that fusion of technology that makes it so powerful. A lot of people joke that an iPhone is just a portable computer, but it's much more than that, because of the killer camera, because of the accelerometers, because of the LiDAR. And as we continue to potentially add things in there and analyze the integration of those sensors better and better, it's going to be a more and more useful tool for us. So it's going to be cool to see how that evolves. So where do people go to find you? I already mentioned your LinkedIn. I think that's a great spot to keep in touch with you and stay up to date on the future of 3D modeling and 3D scanning, because you're always pushing that boundary. So you're fun to follow in that respect, and I always appreciate it. Where can people find you?

Eugene (01:27:24):

So, ai2-3d.com; there's a contact form there that comes right to me. And Recon-3D is at recon-3d.com, where there's also a little form and a chat window that comes directly to me. It pops up, and if I'm available, I'll chime in right away, or I'll call people and just say, "Hey, I saw you just left a message," or something like that. I like talking to people, I like hearing their stories and what they have to say. So yeah, if anyone ever wants to reach out, I'm usually accessible, so long as I'm not traveling on an airplane or doing something else. Those are a couple of ways. Or LinkedIn, like you said; you can always pop a message there. And YouTube: I've got the YouTube channel where I post a lot of the videos, so people sometimes leave comments there, and I'll respond as best as I can.

Lou (01:28:12):

Well, awesome. Thank you so much for taking the time, Eugene. I know you're super busy and you wedged this in, and I appreciate it, and I think the entire audience will too. So thanks again.

Eugene (01:28:23):

My pleasure. Absolute pleasure talking to you. And yeah, I've never spoken with anyone this long about 3D. Usually, they drop off real fast, so it's nice to have somebody else who's just as keen on the whole 3D thing. So thank you.

Lou (01:28:37):

Yeah, I'm always there for you. Cool. Thanks Eugene.

Eugene (01:28:40):

All right, cheers. Thank you. Bye-bye.

Lou (01:28:42):

Hey, everyone, this is Lou again. One more thing before you take off, and that is my weekly bite-sized email, To the Point. Would you enjoy getting an email from me every Friday discussing a single tool, paper, method, or update in the community? Past topics have included Toyota's vehicle control history including a coverage chart, ADAS, that's advanced driver assistance systems, Tesla vehicle data reports, free video analysis tools and handheld scanners. If that sounds fun and useful, head to lightpointdata.com/tothepoint to get the very next one.