Potential Uses for Vehicle Point Clouds in Collision Reconstruction
A summary of potential uses for vehicle point clouds when analyzing a traffic collision/accident, including visibility analyses, 3D modeling, simulation, and crush analysis.
The point clouds can also be used for crush-from-photos analyses. That process is detailed here.
A rough transcript of the video can be found below.
(00:09):
Hi everyone. This is Lou at Lightpoint Scientific, and today I'm going to be showing you a few ways that I integrate point clouds into my reconstruction analysis, and hopefully that gives you some ideas for new ways to incorporate them in your reconstructions.
(00:23):
So the first program we're looking at here is Virtual CRASH. We use Virtual CRASH a lot when we're performing visibility analyses, and simulation analyses as well, but right now we're specifically going to be talking about visibility analysis. To perform a visibility analysis here, we bring in a point cloud that is generally generated via a laser scanner, and then we'll supplement that data with point clouds generated from drones, which we're using Pix4D to create. On top of that we'll have an orthomosaic image colorizing some of those point clouds and surfaces. And here specifically we're going to bring in a car so that we can simulate what the view would've looked like from the driver of the vehicle precipitating the event.
(01:12):
So we will make an entire tutorial to show this process in a little bit more detail down the line, but today I just wanted to show you what can be accomplished and some real basics of it. So here, if you click on the camera and then click Assign to View, you'll see this camera was placed inside the cabin of the vehicle in a position that approximates the driver's head position. Now that we're inside the vehicle, we can see any visual obstructions that might be presented by the A-pillar or by the mirror. So especially if it's a pedestrian case or something where they might be walking across the crosswalk, we can get an idea of when there might be a blind spot. In addition, of course, you get a general idea of what obstructions might be presented by these light poles and trees and whatnot.
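If you ever want to sanity-check a blind spot outside of Virtual CRASH, the underlying idea is just a line-of-sight test against the scene point cloud. Here's a minimal Python sketch of that idea only; the file name, coordinates, and tolerance below are placeholder assumptions, not values from any actual case.

```python
# Minimal line-of-sight check: is the view from the driver's eye point to a
# target (e.g., a pedestrian) blocked by anything in the scene point cloud?
# File name, coordinates, and the tolerance are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

scene = np.loadtxt("scene_scan.xyz")[:, :3]   # N x 3 scan points (feet)
tree = cKDTree(scene)

eye = np.array([12.0, -4.5, 3.9])             # approximate driver eye position
target = np.array([55.0, 10.0, 2.5])          # pedestrian location to test

# Sample the sight line densely, skipping the ends so the vehicle's own
# interior points and the target itself don't register as obstructions.
t = np.linspace(0.05, 0.95, 400)[:, None]
samples = eye + t * (target - eye)

# Any scan point within ~0.15 ft of the sight line counts as an obstruction.
dist, _ = tree.query(samples, k=1)
occluded = bool(np.any(dist < 0.15))
print("sight line occluded:", occluded)
```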
(01:59):
So of course you can use a mesh for something like this, which I'm sure a lot of people are doing, but I feel like there's a chance that a little bit of artistic freedom is taken by the creators of those meshes. So that's something to be careful of. For instance, this is actually a Honda Civic model that was downloaded from Hum3D, and I downloaded it just to take a peek at how it might vary from the point clouds. And this is something we end up seeing a lot. It's obviously a really beautiful model, it's really well done, but you never know exactly where they're getting the dimensions. So let's come over to the front view. Well, the right side view, I should say; in Rhino, it's my front view. And then we'll turn the point cloud on and off. What I did is I took this mesh from Hum3D, I brought it into Rhino, which is a 3D CAD program that we use a lot, and then I scaled it so that the overall length was consistent with what Expert AutoStats says it should be for the Honda Civic, so 15.16667 feet.
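For what it's worth, that scaling step is just a uniform scale factor: the published overall length divided by the model's current overall length. In the video this is done directly in Rhino; the little sketch below only illustrates the arithmetic, and the vertex file and axis choice are assumptions.

```python
# Uniform scale factor to bring a downloaded mesh to a published overall
# length (here the 15.16667 ft figure mentioned above). The vertex file and
# the assumption that X is the vehicle's long axis are illustrative.
import numpy as np

vertices = np.loadtxt("civic_mesh_vertices.txt")   # M x 3 mesh vertices
target_length_ft = 15.16667

current_length = vertices[:, 0].max() - vertices[:, 0].min()
scale = target_length_ft / current_length

scaled = vertices * scale                          # uniform scale about the origin
print(f"scale factor: {scale:.5f}")
print(f"new overall length: {np.ptp(scaled[:, 0]):.5f} ft")
```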
(03:10):
So when I did that, I turned on the point cloud and tried to align them as best I could using the spoiler in the back, and then I rotated the mesh so that it aligned with the rocker panel of the point cloud. When you do that, a lot of things look good, but as you start getting close and looking at some of the details, a lot of things aren't good. For instance, if we look at the junction of the right side doors here and we're looking at the pillars, you can see that they're actually a lot fatter according to the point cloud, and they're in a different position. The point cloud, of course, is generated from real measurements, whereas we don't know exactly where the measurements are coming from for the mesh from Hum3D, and that's obviously not a knock on Hum3D; I think it's peer-supplied modeling. So it just depends on who's creating the model, where they're getting their source measurements, and what their methodology is.
(04:02):
So if we come over to the side view mirror, we can see this is the mesh, and then we'll turn on our point cloud. It's in a different position and it's a different shape. So some things are just kind of created so that they'll be pretty, not necessarily super accurate. Same when we come to the front of the mesh: obviously it looks really good, it's really clean, but when we turn on the point cloud, we can see that there's some difference in those measurements. So it depends on how detailed you need to be for your analyses whether or not that mesh is suitable for you, but there are things you've got to be careful of when you're using those meshes for a crush analysis, a visibility analysis, or anything where small differences in the measurements might make a difference to your analysis and results.
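If you want to put a number on how far a mesh deviates from a scanned exemplar, one option is to sample the mesh surface and measure nearest-neighbor distances to the point cloud. The sketch below uses the open-source Open3D library and assumes the two are already aligned and in the same units; the file names and point count are placeholders.

```python
# Rough way to quantify how far a downloaded mesh deviates from a scanned
# exemplar point cloud, assuming both are already aligned and in the same
# units. File names are placeholders.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("hum3d_civic.obj")
cloud = o3d.io.read_point_cloud("civic_exemplar.ply")

# Sample the mesh surface so we can do a point-to-point comparison.
mesh_pts = mesh.sample_points_uniformly(number_of_points=200_000)

# Distance from each sampled mesh point to its nearest scan point.
d = np.asarray(mesh_pts.compute_point_cloud_distance(cloud))
print(f"median deviation: {np.median(d):.3f}")
print(f"95th percentile deviation: {np.percentile(d, 95):.3f}")
```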
(04:49):
Kind of looking at the wheel here, you can see there's a little bit of a difference between the modeled position and the real position as well. And if we come over to the interior, the interior in the 3D model from Hum3D is substantially different from the point cloud. So let's just take a peek at the steering wheel inside from the mesh, and then we'll turn on the point cloud, and you can see that it's in a substantially different position. So that's something to be aware of: if you're performing a visibility analysis and the interior of the vehicle matters, the A-pillar or the side view mirrors, things like that, you're probably better off using a point cloud. And when I'm doing crush analyses, I certainly like to use a point cloud as opposed to a mesh.
(05:31):
Okay, so some other things that we do. Ben Molnar from Lightpoint Scientific just created a really in-depth tutorial showing how to use exemplar point clouds in surveillance analysis to figure out vehicle speed. I'm just going to touch on that real briefly. We'll talk about it more before too long, and then you'll have access to the entire tutorial that Ben created, which I think is about an hour and a half long, so it goes into great detail with respect to how he does that and what the process is. But here, this is just one frame from a surveillance video, and we brought in the point cloud from the environment. We just go out to the site and shoot off some scans; these scans were created with a FARO S350. Then we select some control points, and then we know where the camera is, we know its distortion characteristics, we know its focal length, and you can see here that the point cloud is overlaid. So I'm going to turn the point cloud on and off now, and you can see that it's overlaid onto the surveillance video. The goal here is to figure out what the position of this blue sedan is. Once you do that for a couple frames and you know what its location is, then you can figure out its speed, of course.
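The math behind that overlay is camera resection: solve for the camera's pose from the scanned control points and their pixel locations, then project 3D points into the frame. Ben's tutorial does this in PhotoModeler; the sketch below only illustrates the same general idea with OpenCV, and every number in it is a placeholder.

```python
# Sketch of the camera-resection idea behind the overlay: given scanned
# control points and their pixel locations in one surveillance frame, solve
# for the camera pose, then project other 3D points into that frame.
# All values below are placeholders, not case data.
import numpy as np
import cv2

# 3D control points picked from the scan (feet) and their pixel locations.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [24.1, 0.3, 0.0],
                       [23.8, 31.6, 0.1],
                       [0.4, 30.9, 0.0],
                       [11.9, 15.2, 9.8],
                       [3.2, 7.7, 12.1]], dtype=np.float64)
image_pts = np.array([[412, 701], [1403, 688], [1370, 214],
                      [455, 231], [905, 330], [560, 182]], dtype=np.float64)

# Approximate intrinsics (focal length and principal point, in pixels) and
# lens distortion; in practice these come from a camera calibration.
K = np.array([[1480.0, 0.0, 960.0],
              [0.0, 1480.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

# Project any other scanned 3D points (e.g., an exemplar sedan cloud placed
# at a candidate position) into the frame to compare against the video pixels.
sedan_pts = np.array([[10.0, 12.0, 1.5], [14.0, 12.2, 1.6]])  # placeholder
proj, _ = cv2.projectPoints(sedan_pts, rvec, tvec, K, dist)
print(proj.reshape(-1, 2))
```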
(06:48):
So one thing that we've been doing now is reverse projection using those 3D point clouds. Here I'm going to turn on the 3D point cloud that was aligned in CloudCompare. All right, so now that's on. I'm going to turn on visibility of point clouds, and then we can just use this slider here to adjust the transparency. So now it's pretty much fully transparent, and then when we up the opacity, you can see that it aligns really well with the pixels representing that blue sedan. So here we're just going into CloudCompare, moving the position of that 3D point cloud, and then bringing it into PhotoModeler until that position aligns perfectly with the pixels in PhotoModeler. And then we have certainty that that's the position of the vehicle in that frame. Again, when you do that for a couple different frames, then you can figure out the vehicle speed. We'll get into more detail on that.
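Once you've fixed the sedan's position in two or more frames, the speed calculation itself is straightforward: distance traveled divided by the elapsed time from the frame timestamps. A quick sketch, with illustrative positions, frame numbers, and frame rate:

```python
# Once the sedan's position has been fixed in a few frames, speed is just
# distance traveled over elapsed time. Positions, frame numbers, and the
# 30 fps frame rate below are illustrative.
import numpy as np

fps = 30.0
frames = np.array([100, 130, 160])                 # surveillance frame numbers
positions_ft = np.array([[0.0, 0.0],               # vehicle position per frame
                         [44.2, 1.1],
                         [88.9, 2.0]])

dist = np.linalg.norm(np.diff(positions_ft, axis=0), axis=1)   # ft per interval
dt = np.diff(frames) / fps                                     # seconds
speed_mph = (dist / dt) * 3600.0 / 5280.0
print("interval speeds (mph):", np.round(speed_mph, 1))
```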
(07:45):
Another question that I got recently, or a question that I got from a few different people, was whether or not you could bring the exemplar point clouds from Lightpoint into Zone 3D, which is FARO's program. And I was really happy to find out that you easily can. So here's a quick look at that same Honda Civic point cloud; we just dropped it into Zone 3D. Of course, Zone 3D is really good at handling point clouds, being from FARO. The colorization and everything came in immediately. So the format that we're exporting, well, that you'll get from the Lightpoint exemplar database, will plop right into Zone 3D. And then if you just create a symbol with that point cloud, and I'll show you how to do that at a later date, then you can animate that point cloud. It's bogging my computer down a little bit here, but I'll just press play, and hopefully you can see that moving along smoothly on your screen. All right, I'm going to close that out, because it's slowing my machine down and I don't want to waste any of your time.
(08:52):
All right, so another thing that we do a lot is crush analyses with point clouds. In a situation where you have damage that you're trying to model, well, that you're trying to measure so that you can figure out the speeds of the vehicles at impact, or the delta-V, we'll bring the point cloud into CloudCompare. Generally speaking, I don't have an exemplar for this one yet, this is I think a Chevy Equinox, but what I would do is bring the damaged model into CloudCompare, and then I would bring the exemplar into CloudCompare as well. Then there are a couple of tools in here where you can align the point clouds. There's this tool right here, which, if you hover your mouse over it, says, "Align two clouds by picking at least four equivalent point pairs." So you just select a couple points, well, three to four really is what it's looking for, and then you can add as many as you want to bolster that fit. Just pick three mutual points or so on the damaged model and then the exemplar, and it'll align the two clouds on top of each other. From there you can do more fine-tuned analysis in an automated fashion in CloudCompare.
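Under the hood, that point-pair tool is solving for the best-fit rigid transform between the picked correspondences. If you ever want to reproduce or check that math yourself, here's a minimal numpy version of the classic SVD (Kabsch) solution, with placeholder point pairs rather than real picks:

```python
# Best-fit rigid transform (rotation + translation) from corresponding point
# pairs, the same math a point-pair alignment tool solves. The picked points
# below are placeholders.
import numpy as np

src = np.array([[1.2, 0.4, 0.9],      # points picked on the exemplar cloud
                [4.8, 0.5, 1.0],
                [4.6, 5.9, 1.1],
                [1.1, 5.8, 0.8]])
dst = np.array([[10.3, 2.1, 0.9],     # the same features on the damaged cloud
                [13.6, 3.9, 1.0],
                [10.8, 8.8, 1.1],
                [7.5, 7.0, 0.8]])

src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
R = Vt.T @ D @ U.T
t = dst.mean(0) - R @ src.mean(0)

aligned = src @ R.T + t
rms = np.sqrt(np.mean(np.sum((aligned - dst) ** 2, axis=1)))
print("RMS alignment error:", round(rms, 4))
```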
(10:04):
But to help with the crush analysis and kind of make it more manageable and easy to deal with, I will click on the point cloud and then I'll take a slice right at the height that I care about. When you have the exemplar in here as well, you can take a slice that comes out of both the exemplar and the damage model simultaneously. So I'll just give you a peek at what that's going to look like.
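The slice operation itself is conceptually simple: keep only the points whose height falls within a thin band around the elevation you care about. CloudCompare handles this with its cross-section tool; the sketch below just shows the underlying filter, with a placeholder file name, slice height, and band thickness.

```python
# The cross-section/slice operation boils down to keeping points whose height
# falls inside a thin band. File name, slice height, and band thickness are
# illustrative; in the video this is done with CloudCompare's slice tool.
import numpy as np

cloud = np.loadtxt("equinox_damaged.xyz")[:, :3]   # N x 3, Z up, feet
slice_height_ft = 1.6                              # e.g., bumper/sill height
half_thickness_ft = 0.05

band = np.abs(cloud[:, 2] - slice_height_ft) < half_thickness_ft
slice_pts = cloud[band]
np.savetxt("equinox_slice.xyz", slice_pts)
print(f"{slice_pts.shape[0]} points in the slice")
```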
(10:32):
All right, so now that's just the slice, and then we go down to the top view and we can see, looking at the left side, that's about what it looked like before, and now it's crushed in like this. So again, if the exemplar were in here as well, we would take the slice at the same time and then we could put it right up against this damage profile. And then we can export from here into a CAD program. That's generally what I do: I'll export the entire point cloud, but I'll also export just this slice and bring it into a 3D CAD program like Rhino. Then I can use the Polyline tool to trace this damage, and the Polyline tool again to trace the exemplar profile, and I can take my measurements from there. So it's a really good way to measure crush, and you can do it very efficiently if you know some tips and tricks like that. When I found this slice tool, it made things a lot easier, and then I'll generally project that crush profile down onto a plane as well, just to make taking the C measurements a lot easier.
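Once both slices are projected onto a plane, the C measurements are just the gap between the exemplar profile and the damaged profile at evenly spaced stations across the damage width. Here's a rough numpy sketch of that measurement; the profile files, coordinate convention, and six-station spacing are illustrative assumptions, not a substitute for tracing the profiles in CAD.

```python
# Crush (C) measurements from a damaged slice and an exemplar slice that have
# both been projected onto the same plane. Files and conventions are
# placeholders: column 0 runs across the damage, column 1 is fore/aft depth.
import numpy as np

damaged = np.loadtxt("equinox_slice_damaged.txt")    # K x 2: (y, x) on the plane
exemplar = np.loadtxt("equinox_slice_exemplar.txt")  # same coordinate frame

# np.interp needs the across-damage coordinate to be increasing.
damaged = damaged[np.argsort(damaged[:, 0])]
exemplar = exemplar[np.argsort(exemplar[:, 0])]

# Six evenly spaced stations across the damage width (C1..C6 convention).
y_stations = np.linspace(damaged[:, 0].min(), damaged[:, 0].max(), 6)
x_damaged = np.interp(y_stations, damaged[:, 0], damaged[:, 1])
x_exemplar = np.interp(y_stations, exemplar[:, 0], exemplar[:, 1])

crush = x_exemplar - x_damaged      # positive values = inward deformation
for i, c in enumerate(crush, start=1):
    print(f"C{i}: {c:.2f} ft")
```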
(11:36):
All right, well, I hope that was useful. If there are things you're using point clouds for that I didn't touch upon here, shoot me an email or put something in the comments. I'd love to see or hear about it. Thanks for checking it out, and talk to you soon.