Full Tutorial: Establishing Vehicle Speed from Surveillance Video

A detailed tutorial showing how to establish the speed of a vehicle from surveillance video using an exemplar point cloud, PhotoModeler, and CloudCompare. A brief overview of this process can be found here.


A rough transcript of the video can be found below.

(00:00:09):

Hi, this is Ben Molnar. I'm a Product Engineer here at Lightpoint Scientific. I will be showing you how to use our new vehicle point clouds to perform reverse projection photogrammetry, specifically as it relates to surveillance video. Starting here, in iNPUT-ACE, we'll be looking at the actual video recorded using a GoPro. The GoPro was positioned off the roadway here, high up in the air, to simulate what you would get from a surveillance camera within a case. As you can see here, we have a frame number timestamp, and on my other screen I have all the specifics from iNPUT-ACE: the frame rate, the frame count, the file hash, and so on and so forth.

(00:00:54):

But for the purposes of this video, this is all we will need. We'll just watch the video one time through. You'll see me coming through in a blue Tesla in this direction. The first step is going to be to select photographs for a few different purposes. The first thing we care about is selecting a photograph that is relatively blank, so no car is covering up any control points; this photograph will be used to calibrate the camera in PhotoModeler. So we want to get a frame just like this, where the view is completely unobstructed and all the control points are visible. With that being said, I will export this frame as a BMP file.

(00:01:42):

Generally, the way that I organize my frames is I'll have a folder called Exported Frames within my folder structure for the project, and I will label each frame with the frame number as reported by the video software, in this case iNPUT-ACE. You can see here, I'll call this 528, and since this is for calibration only, I will note that in the file name right here: 528, for calibration only. Save. Now the next thing we're going to want to do is get the frames for the reverse projection analysis. Generally speaking, you'll want to pick these strategically based on what you care the most about. For instance, if you were to have a collision in this intersection here and you have some pre-impact braking, you will want to get the first frame where the Tesla is visible, then try to identify where the Tesla started braking, and then use those two frames to get a pre-braking speed between them.

(00:02:54):

Here, I was the operator and it was a controlled test. I was going at a constant speed through the entire pass, so the frames we select are not as important as they would be in a case where you have pre-impact braking. With that being said, we are going to use intra-frames (I-frames) to ensure that there are no predicted pixels within the frame. The first thing we're going to do is pick the frame we want. I'm going to use this frame here. It's 421 and it occurs at 14.014 seconds into the video. We'll do the same thing we did for the calibration frame: 421. This one, I won't label with anything other than the frame number. Then, I will go forward 30 frames, which is one second, and save that one.
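As a side note, the frame arithmetic here is just division by the nominal frame rate. The sketch below is illustrative and assumes an even 30 fps; the helper name is mine, and a tool like iNPUT-ACE may report slightly different timestamps if the true rate is 29.97 fps.

```python
# Frame-number-to-time arithmetic for a nominally 30 fps video.
# Frames 421/451/481 are this tutorial's analysis frames
# (two one-second, 30-frame intervals).
FPS = 30.0

def frame_to_seconds(frame_number, fps=FPS):
    """Elapsed time from the start of the video to a given frame."""
    return frame_number / fps

frames = [421, 451, 481]
times = [frame_to_seconds(f) for f in frames]
intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
print(intervals)  # each interval is 30 frames, i.e. about 1.0 s
```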

(00:04:11):

Then, we're going to go 30 more frames, one more second, and save that one as well. Okay, so now that all of the frames are saved, we no longer need iNPUT-ACE. Just a little disclaimer here: you are going to want to either do a full forensic video analysis to ensure that this video is in fact usable for this methodology, or consult with a forensic video analyst who can verify that for you. For the purposes of this video, it's a 30 frame per second video captured with a GoPro. It's relatively straightforward: we don't have any variable frame rate issues or anything along those lines, so we will just move forward with the actual photogrammetry now.

(00:05:08):

Now we have PhotoModeler Premium open, which is what we'll be using for this project. I highly recommend that you use PhotoModeler Premium as well, because you are able to load point clouds as well as meshes in Premium, which you cannot do in PhotoModeler Standard, and that helps a lot when analyzing the fit. For the reverse projection methodology we're going to be doing, you need PhotoModeler Premium. Here we'll click on start a new project. It'll be a manually marked project, and then you're going to select a file. It doesn't matter which of the screenshots you use necessarily, but because we're going to jump right into calibrating the camera, you will want to use the calibration screenshot or image that you exported from the video.

(00:06:09):

We're going to open that one and click on next. Now this dialogue box is going to come up asking what camera you want to apply to the project. If you were able to calibrate the camera in some way, shape, or form, you would apply that camera here. However, for most of us, we are going to be using an unknown camera and solving for it via control points. Now, you're going to open up this image, and here's the frame that we exported for calibration. The first step will be to start labeling the control points that we are going to want to use. Generally, I pick them in any order that I see them in here, but then I'll follow that same order in CloudCompare. Keeping that in mind, you're going to want to focus on one area at a time, so that you're not moving around your point cloud too much in CloudCompare.

(00:07:15):

I'm going to speed this up and pick the points from here. Okay, so here are the points I have picked. You can see them here in this image. A few things to note when you are picking points: it's good to spread the points out as much as you're able to across the frame. You don't want all of your control points in one corner or another. Another important thing is the number of points. The goal is to get as many points as you can; however, there is no magic number of points that will make it a good solution or not. Generally, I like to start with at least 20 to 25. If you can get up to 30, that's great, but it really just depends on the site that you're working on.

(00:08:57):

One note of caution: make sure that when you are selecting points, you're not selecting something that's easily movable or that may have been moved between the date of the collision and the date that you were able to get out for an inspection. One example of that could be this sign here that says Casa Nostra open. This was put up during COVID, so if for whatever reason you weren't able to get out to the site within a reasonable amount of time after the collision, you will want to check your scan data before you select this as a point and ensure that it definitely has not been moved or shifted by the wind or anything along those lines.

(00:09:43):

I like to stick to things that are relatively permanent, whether it's buildings or lane lines. Once again, you'll want to make sure, via third-party aerial imagery or Google Street View, that the road has not been repainted between the date of the collision and the date that you were able to get out and scan the site. If you were able to scan the site shortly after, this warning is a bit unnecessary; however, I would always be sure that you're checking for these things as you're selecting control points. One more thing you'll find once you are in CloudCompare and actually selecting the points: you might find that one of the points you selected in PhotoModeler is not available in your scan data, and that's okay. What you'll do is just delete that point from your PhotoModeler project and move on.

(00:10:36):

You can also use this review/renumber points tool, which will keep your points in numerical order. With that being said, now I'm going to open up the scan data in CloudCompare. Here is the scan data in CloudCompare. Nothing too special about this; it was done with a FARO M-70 and a FARO S-350. The data looks very good: it's very clean and healthy. One thing to note is you'll want your PhotoModeler project with the numbered control points open on either a second screen or the other half of your screen in order to facilitate the picking order within CloudCompare. I will be putting it on my second screen, as you can see in the video there. Now I'm going to select the points in CloudCompare, and just to keep the video moving, I will be speeding this up. As you can see, I picked all of my points in CloudCompare.

(00:14:21):

There are just a couple warnings I have for you as you're doing this. The first is to ensure that you're using the right perspective and, if necessary, that you have set the right rotation center for your perspective. It can be detrimental to your project if you select the wrong point, thinking that you're looking at something but not realizing that there's a tree between you and the target point that you're trying to select. Be very careful there. If you're ever in doubt or need reassurance, just rotate around and make sure that you got the right point as you're selecting them. It can definitely save you a headache later when you're trying to quality control your photogrammetry project. With that being said, the next step would be to save the point selections to a CSV file. If you're using CloudCompare, I usually select local index, XYZ.

(00:15:23):

I'll select that there. Then, generally, I will navigate to the same folder that I have all of my files in, I will call this Control Points, and then ensure that it is a CSV. Once that is saved, you can minimize this and bring up your PhotoModeler project again. The first step is going to be to assign these coordinates to the points you marked. In order to do that, you will need to import your geometry. In PhotoModeler you'll pull up imports and coordinates, select the import button, and then you're going to control the solution with these XYZ locations: you'll select using known XYZ from a file, positions only. Then, you'll go to the CSV file you just saved from CloudCompare or any other point cloud manipulation software. Click open, set this to feet, or whatever units you keep your point clouds in, and then click next.
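To make the "local index, X, Y, Z" layout concrete, here is a minimal sketch of writing and reading such a control-point CSV in Python. The file name and coordinate values are hypothetical; in the actual workflow, PhotoModeler parses this file for you.

```python
import csv

# Hypothetical control points in the CloudCompare export layout:
# local index, X, Y, Z (one point per row, comma separated).
rows = [(0, 12.41, -3.88, 0.52), (1, 15.07, -2.19, 0.49)]
with open("control_points.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

def read_control_points(path):
    """Return {index: (x, y, z)} from an 'index,X,Y,Z' CSV."""
    points = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            points[int(row[0])] = tuple(float(v) for v in row[1:4])
    return points

pts = read_control_points("control_points.csv")
print(pts[1])  # → (15.07, -2.19, 0.49)
```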

(00:16:36):

You'll get this dialogue box. You'll make sure it's XYZ, comma separated; these should just be the defaults. You'll click okay, and now here are the XYZ coordinates of the points that you picked in CloudCompare. The next step will be to start assigning them using this activate control mark and assign mode. I usually select number one, assign it, and then you can just go in order, thanks to the order we used in CloudCompare. Then you can right click, hit next point, two; right click, next point, three. Once you have five points, it will solve, but it won't necessarily be the best solution. What I'm going to do here is stop at... One thing you can see here is I accidentally skipped a point. The way you can fix that is to select each of these, un-assign each of them, and then go back and ensure you have it right.

(00:17:54):

It's always good to keep a keen eye on that as you're selecting them. I will just redo those really quickly. One exercise we can do to check the solution now, after 10 points, is to solve the camera profile without the other 16 points that we've selected. We're going to do that, and we are not going to solve for distortion, just to see how well it will do with the fit. If we process the project and do not select include camera optimization, we can process this and you'll see a residual of 2.41 and a focal length of 4.84. It did solve; it didn't have any sort of error or anything like that. What we can do now to see how accurate that fit actually is, is to bring our point cloud into PhotoModeler and check the fit of the solution.

(00:19:15):

In order to do that, you'll need to subsample your total site point cloud down to a size that PhotoModeler can handle without choking. Usually, I've found that that's right around a gigabyte. I know that for PTS files, generally speaking, 25 million points is a good number. I'll do a random subsample to 25 million, and that should bring the file down to right around one gigabyte. That's going to subsample now. The next step will be to save it and then import it into PhotoModeler to check the fit. Now that that point cloud is saved, we'll go back to PhotoModeler and import the point cloud. In order to do that, you'll hit the same button you did to bring in the CSV: you'll hit add 3D data without transforming the project, and click on a point cloud or mesh file.
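The one-gigabyte rule of thumb is easy to sanity check. Assuming roughly 40 bytes per ASCII PTS line, which is my assumption for illustration and not a format guarantee, 25 million points comes out at about a gigabyte:

```python
# Back-of-envelope size of an ASCII PTS file: one text line per point
# (X Y Z, intensity, R G B). The 40-byte average line length is an
# assumption, not part of the PTS format.
BYTES_PER_LINE = 40

def estimated_pts_size_gb(n_points, bytes_per_line=BYTES_PER_LINE):
    return n_points * bytes_per_line / 1e9

print(estimated_pts_size_gb(25_000_000))  # → 1.0
```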

(00:20:30):

You'll click on these three dots and then go to where you saved your site point cloud. I recommend having a source files folder in the folder that you're doing the photogrammetry in. This helps you stay organized, and when you look back at a photogrammetry package that you did one or two years ago, you know exactly which point clouds were used in getting to the solution. If you click open there, then next, you'll get this dialogue box here. If you do have scalar values, ensure that you are ignoring them. Generally, you'll want to know how you saved the cloud out and what format you saved it in. One way to check, at least in CloudCompare, is to look at the way the format is laid out. In the PTS order, it'll be point, then scalar, then your RGB, and then your normal.

(00:21:33):

If you do the ASC, it'll be point, then color, then scalar, then normal. We saved it in a PTS format, so the scalar will come before the color; that's always helpful to know. Now, in the PhotoModeler dialogue box, you'll see that the scalar value is being ignored, and then you have your red, green, and blue. That should be good. Make sure that the separator is set to space. I believe these are all default values, so you shouldn't have any issues there. Then you'll hit okay. Now, this is going to take a minute to load. The smaller the point cloud, the faster this will load; we'll get into that in a little bit when we're doing the actual reverse projection methodology for the vehicle solution.
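The column orders described above can be captured in a small parser sketch. The field names are mine, and real files vary (normals are often absent), so treat this as an illustration of why the scalar/color order matters rather than a general PTS/ASC reader:

```python
# Column orders as described in the tutorial:
#   PTS: X Y Z  scalar  R G B  (normals, if present, follow)
#   ASC: X Y Z  R G B  scalar  (normals, if present, follow)
def parse_point_line(line, fmt="pts"):
    v = line.split()
    xyz = tuple(float(c) for c in v[0:3])
    if fmt == "pts":
        scalar = float(v[3])
        rgb = tuple(int(c) for c in v[4:7])
    elif fmt == "asc":
        rgb = tuple(int(c) for c in v[3:6])
        scalar = float(v[6])
    else:
        raise ValueError(f"unknown format: {fmt}")
    return xyz, scalar, rgb

print(parse_point_line("1.5 2.0 0.3 -512 200 100 50", "pts"))
# → ((1.5, 2.0, 0.3), -512.0, (200, 100, 50))
```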

(00:22:23):

What I've found is that when you're doing the iterations for those vehicles, you'll want to subsample the exemplar vehicle point cloud that we deliver to you down to a very small number, only what you need in order to see whether the fit is correct. Usually this is either 500,000 or a million points, depending on the car; either way, it helps to speed things up. You can see how slowly this dialogue box goes when you have 25 million points. It is certainly helpful, when you're iterating over and over again, to subsample the vehicle point cloud down to a relatively reasonable value. Okay, now that the point cloud has loaded, we can start to examine the fit of our camera solution.

(00:23:13):

In order to do that, you will come over to your visibility dialogue here on the left side of PhotoModeler, or wherever you have that dialogue box. You'll select the point clouds button, and then you'll have a transparency bar here, which you can slide all the way from solid to clear, and you'll be able to control the point size as well. If the point size is a little bit overwhelming, you can turn it down; or if you need to zoom in and look at something in particular, you might want to turn that value up and make sure that you're looking at the point cloud and not the photo underneath it. Generally speaking, I like to keep it at one or two when I'm looking at an overall view, and then probably between three and 10, depending on the image, to ensure that you're not looking at the image below it when you zoom in.

(00:24:14):

A couple of interesting things to note here. If we turn the point cloud on and off, you'll see that the fit is actually remarkably good considering we only have 10 control points, which are on the left side of the photograph for the most part, other than the ninth and 10th points here. Everything seems to line up relatively well. If we turn the point size up here and look at some of the fine details, we see a lot of relatively good alignment considering we only have 10 control points and we haven't solved for distortion. That is a bit surprising.

(00:25:01):

It could be due to the camera that we used, the GoPro in linear mode, but I would highly suggest that you don't stop at just 10: get as many control points as you can, spread across the photo to the best of your ability, to ensure the best camera solution and account for distortion, which we are not currently doing. Before we move on and assign the rest of them, let's just take a look at the camera profile here. If you'd like to, pause and look; this is what we're getting so far without assigning the rest of the control points and accounting for our distortion characteristics. As you can see, the focal length is 4.8 and we have zeroes for all the lens distortion characteristics. We will now turn this point cloud off and continue assigning the rest of our control points to ensure that we have a properly solved camera.

(00:26:11):

We'll start at 11 here and begin assigning. Once again, you can see that if you're not careful, you can make a mistake here. Just be extra careful when you're selecting points that you're getting the one you intend to get, and not either a different point or an accidentally selected pixel nearby. Okay, so now that I've selected all the points, one thing I like to do is just look at the point number and the assigned-to column and make sure that they all match up properly. If there's a mismatch, you'll want to un-assign it, like I did earlier, and reassign it to the correct point number. Now that they're all correct, we will rerun the processing.

(00:27:24):

This time, we will select include camera optimization. As far as distortion goes, that's really a judgment call that's up to you. It depends on how diverse your points are across the frame and how much distortion you have in the photograph. If you have a really wide-angle fisheye surveillance camera, you'll probably want to account for all the distortion characteristics that you possibly can. For the sake of this project, we're only going to select all of the Ks and solve from there. Now it's re-solved. I will show you that you can turn off the inverse camera at this point. The only caveat is that if you do want to add points later, you have to make sure, via the dialogue box (go to properties, selected photos), that the inverse camera solves for something.

(00:28:26):

If you are going to add more points later, you'll want to make sure that this is selected, that focal length is selected, and that anything else you're interested in is selected. We'll hit cancel here, and now we will take a look at the fit again. Before I do that, we'll look at the camera profile to compare to last time. The focal length changes only a little bit. If you are interested, you can take a screenshot of my earlier camera profile and compare the two. You'll see that now we have the lens distortion characteristics, K1 through K3, solved for. If you had a really high-distortion camera and plenty of control, you could solve for P1 and P2 as well. We'll hit okay here and check out the fit, see if anything's changed. I suspect, based on how good the fit was last time, that nothing will change too drastically.

(00:29:26):

Once again, we can turn these point sizes down a little bit and look at the overall fit before we start zooming in on anything. One way you can do this is to flicker it on and off and look at a few different objects as you do that, whether it's this arrow, a line here, a line over here, this arc here, or the sign, which is a little bit hard to see because the point cloud data came out a little sparse there. Or you can use this slider bar and slowly fade it in and out. When it comes to making exhibits for trial to back up your photogrammetry analysis, this is a very good exhibit to produce for the jury and/or the attorneys involved. It really helps them understand the photogrammetry solution, and it's one of the reasons that we think PhotoModeler Premium ends up being worth the extra cost.

(00:30:34):

Once again, if you wanted to, you could now zoom in on very particular things. You could turn up the point size just to help you identify them, and then turn it on and off and ensure that everything lines up well. Overall, I'm quite happy with this fit. If you were not happy with the fit and wanted to get your residual down and/or ensure that the fit is correct, you would troubleshoot by looking through your residuals. You can open up the point table and the quality table and look at where your highest residuals are. If you show the mark point with the highest residual and zoom in, you then have to make a judgment call as to whether that point actually needs to be moved; one thing to help you do that is to go back to CloudCompare and ensure that you got the right point.

(00:31:37):

I'll add here that if you notice any really large residuals, anything over 10, it's definitely possible that you accidentally selected the wrong point, which plays into the perspective issue I talked about earlier. When you're in CloudCompare selecting points, it's very important to make sure that, A, you have the right perspective, and B, after you select the point, you go back and ensure that you got the point you're interested in. Now that we know this fit is correct, the next step is to begin locating the vehicle at the various frames we selected earlier. With that being said, you'll want to turn your site point cloud off; in fact, you can deactivate the site point cloud in the project to help you move along a little bit faster. One thing to note here: it's very good to save frequently.
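The over-10 rule of thumb lends itself to a quick screening pass. This is a hypothetical sketch (PhotoModeler exposes residuals through its quality tables, not through Python); it just illustrates the triage logic:

```python
# Flag control points whose marking residual (in pixels) exceeds a
# threshold, worst first, as candidates for a mis-picked point.
RESIDUAL_THRESHOLD = 10.0  # pixels; the tutorial's rule of thumb

def suspicious_points(residuals, threshold=RESIDUAL_THRESHOLD):
    """residuals: {point_id: residual_px} -> ids sorted worst-first."""
    bad = [(r, pid) for pid, r in residuals.items() if r > threshold]
    return [pid for r, pid in sorted(bad, reverse=True)]

print(suspicious_points({1: 0.8, 7: 14.2, 12: 2.4, 19: 31.0}))  # → [19, 7]
```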

(00:32:37):

PhotoModeler is relatively good at saving automatically via project backups, but it would be very unfortunate to have the solution correct and then have to go back and redo all of your work. I will save it now. Save. Then, it's definitely a good idea to Ctrl+S relatively frequently. What you may have seen earlier is that it failed to autosave; that tends to happen when you start to choke PhotoModeler with a lot of point clouds and the file size starts to get large. The autosave begins to take too long, to the point where it starts to bog down the system, so I generally turn the autosave off for that reason. However, it is generally still autosaving in some form via project backups, which is comforting. Still, I would highly recommend saving every few minutes just to ensure you don't lose anything.

(00:33:41):

Saving does take a little bit longer with the point clouds loaded in, though; that is one caveat. All right, the next step will be to select the first photograph in which we want to locate the target vehicle. The first step in doing that is to pick the frame you want to start with. Now, sometimes it's best to start with the first frame, or the frame you care the most about. I tend to start with the frame where the target vehicle is as close as possible to the camera, to ensure that I'm getting a very good solution, and then use that to help me interpolate the previous positions of the Tesla. We're going to start with the last frame, which is 481. Just to reiterate how I did that: you will open up the properties of selected photos, select the image file name, replace photo, and retain the orientation.

(00:34:45):

Then you will go to the location of the file, open it, and press okay, and now you'll see this is a new frame in which the Tesla is visible. One thing to keep in mind: if the camera is moving whatsoever, you cannot simply replace the photo and retain the orientation. This is only applicable for projects in which the camera stays completely still. If you have video of video, where a police officer, first responder, or business owner recorded the surveillance system with their phone and there are stabilization issues, you will want to be careful to ensure that all of your control points stayed where they should. One way to do that is to zoom in and ensure the corner of the sign is still relatively close to where your mark 10 is. If it moved, you'll see the corner of the sign here and 10 here.

(00:35:49):

Same thing for 11: the corner of the sign down here would be displaced an equal amount, generally speaking. Be very careful, if you have video of video or a moving camera, not to just replace the photo and retain the orientation. With that disclaimer, we'll start getting into it. The next step is going to be to locate where this Tesla is, and we can do that in point cloud land. The way we're going to do that is to open up our site point cloud in CloudCompare. We will turn the pick point list off, so that it's not in the way. The next step will be to load up our exemplar vehicle point cloud. You can see I have source files here, and I've dragged this Tesla Model 3 exemplar point cloud from our database into the source files, so I know which point cloud I started with. I will import that into CloudCompare.

(00:36:55):

We'll see when it comes in what the exact number of points is, but it's about 50 million, which is relatively high. You'll see it's two gigabytes. Obviously, when you're doing crush analysis and/or damage analysis, or taking measurements from the point cloud, it's best to have as many points as possible. As far as reverse projection photogrammetry goes, like I said earlier, we're going to want to subsample that down to a reasonable size before we begin iterating on its location. We will do that in a second, once this is loaded. Okay, so the point cloud is now loaded. Now, what we are going to do is turn off our site point cloud for a minute. Just to be clear about our vehicle point cloud: this is an exemplar, but because this is the actual vehicle we used for testing, it's technically the subject vehicle; in your case, it will probably be an exemplar vehicle point cloud.

(00:38:09):

Now that this is loaded up and ready to go, the first thing we'll want to do is subsample it down to a size that PhotoModeler can handle, so it won't drive you crazy when you are iterating on the location. You can see in the corner here that it has 50,933,000 points at the moment. That's great for damage analysis, taking measurements, and overall aesthetics; however, it's not great for lots of reverse projection iterations. What we're going to do is subsample this down pretty small for now: we're going to go with one million points. You can see that it does that relatively quickly. Then I'll generally take a look at it and see if that's okay for doing reverse projection.

(00:39:10):

I think this may actually be a bit too sparse. For this car, because it's got a pretty dense interior, what I'm going to do is subsample it down to two million and see if that will work. Okay, so that's a little bit better. You can see some of the details down here. You can definitely see the front, you can see the wheels pretty well, and you can see the back side relatively well. There are other ways to do this in order to see the exterior better: you could cut out the interior to help decrease the point count. But for now, this should work for the reverse projection. Now that this is subsampled to a reasonable value, we can bring up our site point cloud in CloudCompare.
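Conceptually, CloudCompare's random subsample just keeps a random subset of rows from the point array. A minimal NumPy sketch of the same operation, with a toy-sized cloud standing in for the 50-million-point scan:

```python
import numpy as np

def random_subsample(points, target, seed=0):
    """Keep `target` randomly chosen rows of an (N, k) point array."""
    if len(points) <= target:
        return points
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=target, replace=False)
    return points[idx]

cloud = np.random.rand(10_000, 3)       # stand-in for a full scan
small = random_subsample(cloud, 2_000)  # like subsampling 50M -> 2M
print(small.shape)  # → (2000, 3)
```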

(00:40:10):

Then, what you will want to do is move the exemplar vehicle point cloud in the X and Y directions until it is approximately where you think it should be. Now, obviously, this is relatively vague, so what I like to do is have my PhotoModeler project up and pick a couple of key points. One is the end of this double yellow line here: the rear left wheel is a few feet in front of that, and a few feet inside of it, from what we can tell. I generally use that as a guide. Now, we will go back to CloudCompare and position it accordingly. Okay. The next step will be to cut out the area of road that we're going to be dealing with, so that you can ensure that all four tires are on the road.

(00:41:26):

In order to do that, we will zoom out just a little bit and then cut just the section of roadway we think we're going to be dealing with. If you don't cut enough, or you cut too much, it's not a big deal; we can fix that as needed. That will be good. Okay, so now that we have just the stretch of road, we can set our rotation center where we need it, so that we can get the right perspective. Come down to a profile view of the roadway and the vehicle, and then bring the vehicle down in just the Z direction until the wheels are touching. You're going to want to iterate on this until you feel relatively confident that all four wheels are touching and that the orientation of the car makes sense.

(00:42:45):

One thing to keep in mind here: one of the weaknesses of this methodology is that the vehicle point cloud does not account for suspension differences. If you have heavy braking and the vehicle is in a bit of a nose dive, you can do reverse projection to account for that, but it is a little tricky. What you'll want to do is cut the wheels out, rotate the body downwards, and then combine the wheels back in. That might be warranted on bigger cases, but for now we're going to assume that those differences are negligible in this case. We see that the back wheels are touching here; the front are not. What I'm going to do is go through the rotations. This can be tricky, and it depends on how the whole project is set up, but I like to try both axes and see how that works. For this case it's pretty nice, because the Y rotation lines up nicely with pitch.

(00:43:49):

You can see that there. Then I'll move it down again, and generally what I look for is for the wheels to be at least touching, if not through the ground just a hair, to account for slight suspension differences. Keep in mind that when the vehicle was scanned, the suspension reflected normal loading; if it's not accelerating or decelerating significantly, the suspension should be about right if you just have the wheels touching the ground as they would be under normal driving conditions. I'm pretty confident with that. Now, I'll go to a profile view this way and ensure that the roll is correct and that any roadway grading or crowning is accounted for. That looks good. We see this wheel's touching and this wheel's touching, so I think we can begin our reverse projection iteration process. Once you have the wheels touching the ground appropriately and the vehicle in your first guess for its position, you'll want to save it.
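The drop-and-pitch adjustment is a rigid transform: a rotation about the lateral axis plus a vertical translation. The sketch below assumes Y is the pitch axis and Z is up, as in this project; the angle and offset values are hypothetical, since in practice you nudge them interactively in CloudCompare until all four wheels touch:

```python
import numpy as np

def pitch_and_drop(points, pitch_deg, dz):
    """Rotate an (N, 3) cloud about the Y axis, then translate in Z."""
    a = np.radians(pitch_deg)
    ry = np.array([[ np.cos(a), 0.0, np.sin(a)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(a), 0.0, np.cos(a)]])
    return points @ ry.T + np.array([0.0, 0.0, dz])

# Toy front/rear body points; with this convention a positive angle
# pitches the nose (positive X) down. Values are illustrative only.
car = np.array([[2.0, 0.9, 0.7], [-2.0, 0.9, 0.7]])
moved = pitch_and_drop(car, pitch_deg=0.5, dz=-0.03)
```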

(00:45:06):

What I like to do here is keep a folder named for the vehicle make, and then locations; for this, it'll be Tesla Locations. Then I will title each file with the frame number, so that I can keep track of which point cloud is for which frame. We'll call that 481.PTS and save. Now, we will go back to PhotoModeler, and from here it's just a process of iteration. We will load this the same way we loaded our site point cloud: add 3D data without transforming, a point cloud or mesh file, three dots right there, Tesla Locations, and then 481. Then, you will load this up. All of these settings should be correct. If you have a miscoloration here, that means you had the scalar in the wrong position. One way around that is to ignore the scalar in CloudCompare; then, when you save, there won't be a scalar column. That can save some time and frustration.

(00:46:16):

You can see it loaded up relatively quickly, especially compared to the site point cloud. Now, you do the same thing we did for the site point cloud: turn point clouds on and observe the fit. What you want to do now is definitely turn the point size down to a reasonable value. What I like to do here is flicker it on and off a couple of times and observe what movements need to be made. One thing I can tell very clearly is that in our first guess, the vehicle was a little bit too far forward. That's one thing to keep in mind. As far as rotation, it's hard to tell yet. What I like to do is get everything matched up in one axis and then, if rotation needs to be changed after that, that's fine. Now we know we can move it in just the X direction, back just a couple of feet.

(00:47:23):

Now, unfortunately, CloudCompare doesn't have a great way to move a cloud numerically. There are ways to do it, but I think they're more of a hassle than a help, which is a bit unfortunate. Other software packages do allow you to move things in a more precise, numerical fashion, mainly CAD packages. We like to use Rhinoceros when we're trying to make very fine moves. However, because Rhinoceros doesn't export in a format that PhotoModeler can read with color, we like to do the movements in CloudCompare, which cuts out the need for extra software. With that being said, you'll hit OK, then save again, and from here it is essentially just an iterative process. For the sake of the video, we will speed this up until we get the fit right. Before we do so, one thing you'll need to know is that in order to delete the point cloud, you will want to remove it from the project using this red X button here. Then, you'll want to reimport.

(00:48:44):

You are able to import two point clouds with the same name into PhotoModeler, but I suggest you don't have too many called 481, because you'll lose track of which is the correct one. I like to keep it to one or two at a time with the same name. Okay, so you can see here that the fit is definitely adequate as far as this tutorial is concerned. Obviously, for casework, we would spend some more time and really ensure that all of the lines and the tires match completely. You can see there's a tiny bit of shadowing here that you could potentially correct for in a case analysis, but for the purposes of this tutorial, this will give us a pretty accurate speed, within a couple of feet. With that being said, we're going to move on to the next frame.

(00:50:21):

The way you'll do that, just like before: go to properties of selected photos, retain the orientation, and select the next frame in line. We'll do the one from one second prior, which is 451, 30 frames before. Here's the frame. Obviously, the previous position is no longer important to us, so we will turn that point cloud off. Then, the next step will be to go back into CloudCompare. The easiest way to do this is to take the point cloud of the previous frame and clone it, using this clone button here. Then, you will be able to just move this one back. Once again, we will take a look at where this occurs. You can see the front of the vehicle is behind this first dash line, so we'll use that as a reference. In CloudCompare, we'll move it to just behind the front of the line here.

(00:51:35):

Now, once you have that, you can turn the previous frame back on and use it as a reference to make sure the car is traveling in a contextually straight line: a line that makes sense based on what you see in the video and that makes sense with the roadway. Now that we have the car positioned approximately right, we will save this as 451 to our file location: Tesla Locations, 451.PTS, save, OK. Now, we'll follow the same process where we import the point cloud into PhotoModeler and continue to iterate until we have the position correct. I'll speed this process up, once again, to save some time, and I'll pick it back up once the fit is correct.

(00:54:33):

Okay, so as you can see, that took a little more iteration than the first frame, but the fit at this point is definitely acceptable once again. Just a caveat: this is definitely acceptable for the purposes of this tutorial, but on a case, you may want to spend some more time and really match all angles of the vehicle. You might want to add in some more downward pitch, but for the purposes of gathering speed and showing the process, this is definitely acceptable. One more thing to mention: the exhibits that you can create with this are great. We'll go over that a little at the end as well. Moving on to the next frame, we'll do the exact same process: properties of selected photos, select one more second prior, which will be 30 frames prior, 421. This will be the last frame that we'll be analyzing.

(00:55:29):

Once again, you'll want to turn off the point cloud for 451 and then pick a reference point. One, two, three, four, five, six. The vehicle's just behind the sixth dash line, so we'll go back into CloudCompare. We will clone the last one, once again, and then move it back to where we believe it is. One, two, three, four, five, six, just behind there. You will want to start by making sure that it is correct in the Z axis. You can see that the tires are through the ground there. If you begin to try to match the location while it is incorrect in the Z axis and the tires are not touching the ground, you will drive yourself crazy trying to match something that's impossible to match. It's always good to start by making sure the tires are where they need to be. You can see here that those tires look approximately correct. Now, we will begin iterating once again. Once again, I'll speed this up to keep the video short, and I'll pick you back up once the fit is correct. Okay, so at this point, this fit is also adequate for the purposes of this tutorial.

(00:58:44):

Once again, if you wanted to correct any of the shadowing here or fine-tune this, I would highly recommend that for casework. But this will give us a relatively accurate speed and position for the time being. Now that all three vehicle positions are done, the next step will be to get the location of each vehicle and calculate the speed from that. I will go into that process now. Then, set up columns for the XYZ coordinates of each vehicle point cloud, the distance, and the speed. Now, it's important to keep track of units here. All of our units for this project were in feet. If you're using meters, make sure you make that adjustment. These will all be in feet, which means the distance between the vehicle point clouds will also be in feet and your speed will be in feet per second. Then, I generally make another column for miles per hour. We did 421, 451, and 481. Now, the next step will be to come into your video editor, once again, we're using iNPUT-ACE, and find the timestamp for each frame. 481 is at 16.016 seconds. Make sure you are staying organized and putting these in the right rows.

(01:00:22):

For 451, it should be approximately one second before, but we're going to record the exact timestamp just in case there's any sort of variable frame rate. We have 15.015, so that's almost exactly one second. Then, once again, we will record 421; it says 14.014. Now, for getting the XYZ coordinates of the vehicle point cloud, there are many ways to do that. You can use a CAD program, or you can use CloudCompare. The way I like to do it is use CloudCompare, since the clouds are all already open from our iteration process, and use the box center values right here. Because we cloned the point cloud, the bounding box is the same for each of those point clouds, but the centers will have different global coordinates. We'll start with 421 and get the XYZ. What I like to do is open Excel and CloudCompare split screen, so that I can ensure that I'm getting the right values here.
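
The box center value being read off here can be sketched as the midpoint of the cloud's axis-aligned bounding box. This is a minimal illustration of that idea, not CloudCompare's actual code, and the toy coordinates are made up for the example:

```python
import numpy as np

def box_center(points):
    """Axis-aligned bounding-box center of an N x 3 point array,
    i.e. the midpoint between the min and max corners."""
    pts = np.asarray(points, dtype=float)
    return (pts.min(axis=0) + pts.max(axis=0)) / 2.0

# Toy cloud: the box center is the midpoint of the extreme corners,
# not the centroid, so uneven point density does not bias it.
cloud = [(0.0, 0.0, 0.0), (2.0, 4.0, 6.0), (1.0, 1.0, 1.0)]
print(box_center(cloud))  # [1. 2. 3.]
```

Because a clone shares the source cloud's geometry, cloned clouds moved to different positions report different global box centers, which is what makes them usable as per-frame vehicle positions.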

(01:01:45):

For 421, we have 399.738, -145.066, -3.99654. We'll do the same thing for 451: 458.458, -145.08, -4.31908. Now for 481, we have 517.378, -144.608, -4.44236. Once you have those recorded, you can come back and focus just on your Excel file. What I like to do is use the second row to calculate the distance between the first and the second frame, the third row for the second and the third, and so on for however many frames you have analyzed. I type in the distance formula manually, so it'll be X minus X, quantity squared, plus Y minus Y, quantity squared, plus Z minus Z, quantity squared, and then the square root of that whole quantity. That will give you a distance here. Now, it's important to note that it will give you lots of decimals based on the math here. Obviously, there is some error in reverse projection, so reporting many significant figures is not really a good idea. I tend to stick to three significant figures, which here is one place after the decimal.
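
The formula spelled out above is just the 3D Euclidean distance. As a sketch, here it is in Python rather than Excel, using the box-center coordinates read off in this tutorial:

```python
import math

def dist3d(p, q):
    """3D Euclidean distance: sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Box-center coordinates (in feet) recorded for each analyzed frame
p421 = (399.738, -145.066, -3.99654)
p451 = (458.458, -145.080, -4.31908)
p481 = (517.378, -144.608, -4.44236)

print(round(dist3d(p421, p451), 1))  # 58.7 ft travelled between frames 421 and 451
print(round(dist3d(p451, p481), 1))  # 58.9 ft travelled between frames 451 and 481
```

Rounding to one decimal place here mirrors the advice above about not reporting more significant figures than the reverse projection supports.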

(01:03:47):

Now, to get speeds, we'll do the distance divided by time. Obviously it's about one second, but just to be thorough, I will use the actual values that we got from iNPUT-ACE. You'll be able to drag this down as well. Then for miles per hour, we will just convert the feet per second. Here are the speeds that we have from our reverse projection analysis. One thing to note: during this test, I was trying to hold 40 miles per hour, and I set the cruise control on the vehicle to 40 to assist me in doing that. The fact that we got speeds of 40.0 and 40.1 for these two frames instills some confidence. We also had a VBOX Sport in the vehicle during the testing. Here is the speed trace from the testing that we did. You can see this dash line here is 40 miles per hour. The cruise control held it at just under 40 miles per hour. You can see it ranges generally from about 39 to 39.5 for all of these values.
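
The speed step can be sketched end to end with the box centers (feet) and iNPUT-ACE timestamps (seconds) recorded earlier; the only extra piece is the feet-per-second to miles-per-hour factor of 3600/5280:

```python
import math

FPS_TO_MPH = 3600.0 / 5280.0  # 1 ft/s is about 0.682 mph

def speed_mph(p1, t1, p2, t2):
    """Average speed between two 3D positions (feet) at two timestamps (seconds), in mph."""
    dist_ft = math.dist(p1, p2)              # 3D Euclidean distance in feet
    return dist_ft / (t2 - t1) * FPS_TO_MPH  # ft/s converted to mph

# Values read off during this tutorial
p421, t421 = (399.738, -145.066, -3.99654), 14.014
p451, t451 = (458.458, -145.080, -4.31908), 15.015
p481, t481 = (517.378, -144.608, -4.44236), 16.016

print(round(speed_mph(p421, t421, p451, t451), 1))  # 40.0 mph
print(round(speed_mph(p451, t451, p481, t481), 1))  # 40.1 mph
```

Using the exact timestamps rather than assuming a one-second gap is what guards against variable-frame-rate video, as noted above.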

(01:05:07):

You can see the speeds down here. Once again, just to reiterate, the speeds that we got from the reverse projection and the iterative process using the exemplar vehicle point cloud are accurate, and that certainly validates the analysis that we did. To wrap this up, just to clarify, this process was done using PhotoModeler Premium, CloudCompare, and iNPUT-ACE. However, this process can be altered to match whatever software package you have access to. There are many ways to do reverse projection, and there are many photogrammetry packages available. By no means do you have to use all of the software that I did; this is just the process that we are currently using.

(01:06:07):

One of the things that we do like about PhotoModeler Premium is its ability to show the point clouds, including the camera distortion characteristics, so you can create nice exhibits showing the fit of your camera solution using the site point cloud. A lot of times we will do screen recordings or output images showing the fit, fading it in and out. We've had great feedback from both clients and juries that this helps them really understand what the photogrammetry solution is doing. What's nice is that you can do the exact same thing with the vehicle point clouds for any of the given frames: you can create nice exhibits showing the fit of the vehicle fading in and out. Some of the advantages to using our exemplar vehicle point clouds are that they are created using LiDAR.

(01:07:10):

They're scientifically accurate, so you know that the reverse projection you're doing is based on a sound scientific model rather than an artistic rendering. Obviously, the aesthetics of a point cloud are not quite as nice as an artistic or rendered mesh. However, the accuracy that you get from using a point cloud over a mesh is certainly helpful. We've found that when we have tried to do this exact same process with meshes, there is more error, in that the mesh is not perfectly geometrically accurate at all times. If you are going to use a mesh, you'll want to scale it to something like one of our exemplar vehicle point clouds, Expert AutoStats, or some scientific measurements of the vehicle itself.

(01:08:06):

As far as other things to stay tuned for relating to this process: I'm currently working on a technical publication, which I'm hoping to get published with SAE, that validates the process I've shown here. Clearly, we validated it for this test, but I have several different videos and several different approach angles for the subject vehicle, and I will be publishing that very shortly. Stay tuned for that. Thanks for watching. I hope you learned something, and I'm open to any questions or concerns relating to this process and the benefits of using our new exemplar vehicle point clouds. Thanks once again, and I hope to speak with you soon.