MARK CROUCH | VIDEO ANALYSIS
Lou sits down with Mark Crouch, Head of Collision Investigations at FCIR, to discuss the omnipresence of video evidence and how it's designed to trick you, tools of the trade, and the future of video analysis.
You can also find an audio only version on your favorite podcast platform.
A rough transcript can be found below.
Links from the Show:
Timeline of Topics:
00:02:47 - Similarities and differences in collision reconstruction between the US and the UK
00:23:41 - Mark's background
00:28:24 - ITAI Membership
00:40:16 - Mark's introduction to video analysis & the development of his book
00:51:58 - Evolution of video analysis: frequency, diversity of sources, and autonomous vehicles
01:03:16 - Frame health
01:25:14 - Mark's video analysis toolkit
01:30:03 - Frame timing and position
01:41:05 - Recommendation for macroblock analysis
01:47:46 - Codecs
02:01:27 - Video of video
02:08:43 - Changes in video analysis over the last 15 years
02:09:50 - Current developments in the field of video analysis
02:14:54 - The future of video analysis
Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.
Lou (00:00:17):
This episode is brought to you by Lightpoint, of which I'm the Principal Engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analyses. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, it takes hours of searching for the perfect model, awkward conversations with dealers, and usually some cash to grease the wheels. Then back at the office, it takes a couple more hours to stitch and clean the data, and that eats up manpower and adds a lot to the bottom line of your invoice. Save yourself the headache so you can spend more time on what really matters, the analysis. Lightpoint has already measured most vehicles with the top of the line scanner, Leica's RTC360, so no one in the community has to do it again. The exemplar point cloud is delivered in .PTS format, includes the interior, and is fully cleaned and ready to drop into your favorite program such as CloudCompare, 3dsMax, Rhino, Virtual CRASH, PC-Crash, among others.
(00:01:11):
Head over to lightpointdata.com/datadriven to check out the database and receive 15% off your first order. That's lightpointdata.com/datadriven.
(00:01:25):
Alright, my guest today is Mark Crouch. Mark is Head of Investigations at FCIR, a collision investigation and reconstruction firm in South Croydon, England. Being a Yankee, I think that's essentially London. Prior to forming FCIR, Mark worked for the Metropolitan Police in London as a forensic collision investigator for eight years, where he conducted hundreds of traffic collision investigations. He holds a master's degree in applied physics from the University of London, and his work in the field of collision investigation has led to him achieving chartered physicist status from the Institute of Physics and becoming a chartered forensic practitioner. He is a member of the Institute of Traffic Accident Investigators and was recently elected as the organization's chairman. In 2017, Mark authored an in-depth book, Video Analysis in Collision Reconstruction, with colleague Steven Cash, and has recently released its second edition.
(00:02:20):
From across the pond, thanks for taking the time to sit down today, Mark, and have a good conversation with a Yankee like me. And I think we'll start a little bit with that, because in speaking with some colleagues and prepping for this conversation, like we were talking about before we started recording, it seems like there are a lot of similarities between our work here in the United States and over in Europe, but I think there are probably a lot of differences as well. So I'd love to hear you just kind of talk about the ecosystem, civil, criminal, when you're hired, how often you end up having to testify, whether you're writing reports consistently, and just kind of give us a feel for the environment over there if you could.
Mark (00:03:01):
Yeah, absolutely. So as you say, there are two strands that we have here, the criminal side and the civil side of the system. Both within the same judicial system, both have varying rules governing them. But broadly speaking, from a forensic scientist point of view, the work that you do in them is pretty similar because the science is the science. You just might be writing a report with a particularly different title on it or something like that. But the analysis work that you do is pretty much the same.
(00:03:33):
In terms of the collisions in the UK, the police go out to that: specialist police officers, collision investigators turn up. And typically in the UK, in nearly all cases, the police will do the data grab at the scene and also do the analysis, with part of the wider team doing the whole investigation about, perhaps, the driver and whether they were drinking, and all of those kinds of things, and with the forensic collision investigators doing your vehicle speeds and vehicle damage, and all the stuff that we're used to. And then that gets put forward to the Crown Prosecution Service over here, who makes a decision. So somebody else makes a decision about whether a driver's going to be charged or not.
(00:04:15):
Slightly different in some of the other areas in Europe, in that the police will go to the scene, but they might outsource an investigation to a forensic practitioner from a private company. But typically in the UK, that work's done by the police.
(00:04:33):
As it then goes through the system, it will typically be looked at by what we loosely call a defense expert here. But to be absolutely clear, "defense expert" speaks to which side is instructing you, rather than your particular duty. Because over here, just like with you, independent experts are absolutely independent, have to be down the middle. Duty is to the court.
(00:05:02):
So any report that the police writes will typically be looked at by a private practitioner who will, hopefully, if everything's gone well, essentially agree with the conclusions. There'll always be little things, maybe little variances in the way that a particular piece of evidence is favored based on experience. But if the job's been done well, they'll just agree with what the police did.
(00:05:28):
And it may then be the case that the defense report never gets given to the police officer. They never see it because ultimately, it might not be that helpful for the particular defendant. So the last thing that the defendant wants to do is have two experts telling them that they've done something very badly wrong. They'd rather just take their battle up against one of them. So that's the criminal side.
(00:05:55):
When those two reports are together, the two experts should get together and complete something that's called a joint expert report or a joint statement. And the idea behind that particular document is to narrow the issues that the court has to deal with. So they will sit down and list the areas in which they agree and which they disagree. That's helpful in criminal court, but it probably plays even more strongly on the civil side, which I'll talk about. Because on the criminal side, just like you guys, all of this evidence has to be given to a jury. So it doesn't matter whether all the experts agree on everything, it still needs to be rehearsed in the court and put in front of a jury.
(00:06:42):
If we talk about the civil side now, well, typically in the UK, there isn't a jury. Civil trials are heard by a judge who makes the decision. And what that helps to do, that particular document, is to really streamline the court process. So if the experts agree on nearly everything, well the judge doesn't really rehear that evidence because it's already agreed, and therefore, you can shave one or two days off of a trial.
(00:07:13):
To give you the comparator between the two, before we go and rehearse what the civil side looks like over here: a criminal trial for... We would have the offense of causing death by dangerous driving, which is our highest driving offense over here, if we were to park an offense like murder where the vehicle's used as a weapon. If you were driving down the road really badly, the highest offense in the UK would be causing death by dangerous driving. That trial would often be two weeks, maybe three weeks, played out in court. The similar kind of crash, but heard as a civil trial, you'd probably do in two, three, maybe four days depending on the level of agreement between the experts.
(00:07:59):
So civil, I spend most of my work in civil.
Lou (00:08:03):
Okay.
Mark (00:08:04):
And very similar to you guys, this is less about working out whether you can get over the threshold of guilty or not guilty. There are two verdicts over here, and it's very black and white. It's a very binary decision that a jury has to make: is he guilty or not guilty?
(00:08:23):
In civil, well, we're talking about apportioning the blame. Now we are talking about, well, yeah, they were very bad, but were they 100% bad or 90% bad or 80% bad? And the amount of money that's being talked about in these cases over here, typically a catastrophic loss case would be in the order of 10 million pounds, which is, depending on what the current exchange rate is, what, $12 million, $13 million, something like that would be the amount of money that they are talking about. And we would then be literally divvying up that pot of money according to where the liabilities sit. So 10 million pounds, 70/30, well, that would only be 7 million pounds paid by the insurance rather than the whole 10 million.
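The apportionment arithmetic Mark describes is simple to sketch. The function name and figures below are purely illustrative, not part of any actual case:

```python
def apportion(pot: float, liability_share: float) -> float:
    """Return the amount payable given a liability share between 0.0 and 1.0."""
    return pot * liability_share

# A 10 million pound catastrophic-loss case apportioned 70/30:
pot = 10_000_000
payable = apportion(pot, 0.70)
print(payable)  # 7000000.0, so the insurer pays 7 million rather than the full 10
```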
(00:09:11):
So in civil, they tend to do a lot deeper digging around the facts because there's a bigger ticket associated with it. And sadly, when there are insurers involved, as we all know, well, it's a game about money, unfortunately. Not that I see anywhere near that kind of money. That would be nice. But sadly-
Lou (00:09:34):
You don't get 20% of the ask?
Mark (00:09:37):
No, no, sadly not. Although that would definitely call into question the impartiality of an expert, wouldn't it?
Lou (00:09:44):
I think it would.
Mark (00:09:45):
If you got paid off according to the result, that would definitely cause some significant questions.
Lou (00:09:52):
Yeah, but if the case is big enough, they're probably not afraid to spend 50 to 100,000 pounds to try to figure out exactly how things happened.
Mark (00:09:59):
Exactly that. Exactly that.
Lou (00:10:01):
Yeah. Okay. It sounds like there are a lot of similarities between the States and Europe on that front, as far as figuring out how at fault a certain party was and then divvying up the pot accordingly. So I don't do any criminal work, but it sounds like there's a lot of similarities there too.
(00:10:24):
One of the big differences is that meeting of the minds, getting the experts together. So I'd love to hear more about that process because I've had so many cases where I'm like, "If I could just sit down at a conference table with the other expert and have a conversation, then I don't think we'd be having to have a lot of these fights in front of the jury or the judge." So what does that meeting look like? Is it similar to a deposition where there's a court reporter and stuff, or?
Mark (00:10:52):
No. So you guys don't have that situation where the experts get together?
Lou (00:10:57):
It's so strange. It would literally be inappropriate for me to talk to a colleague about my analysis without the attorneys... They wouldn't even... It would be strange. And it's crazy because, of course, a lot of times I'm going up against friends who I respect and I'm just like, "It would be great to talk about the facts, and say, 'This is how I'm seeing it, convince me that I'm wrong, or vice versa.' " And I think we could have productive conversations there, but it's inappropriate.
Mark (00:11:25):
Great. Yeah. So this is a really, really useful document. I didn't realize that doesn't happen with you guys. In nearly every case, that will be instructed, be that criminal or civil. And our reports are already submitted, they're already exchanged. They're already, if you like, in front of the court, even though we're not quite at trial stage yet. So our reports are in and locked in. We can't do much to change them. It's not a discussion before we put in our analysis.
(00:11:54):
But then the courts will direct, with a specific timeline, that we are to discuss the matter and produce a list of areas of agreement and disagreement. This is a conversation between the experts. There is a route where the lawyers, in our case the solicitors, your attorneys, could be involved in that meeting, but that's very rare. The idea behind it is it's expert to expert. Nobody else is involved. To the point that when I start talking to my counterpart, I can't talk to my side, my legal side, the attorneys or the solicitors on my case, until we've completed that document, because it would be wholly inappropriate for them to have influence on me while I was in those discussions.
(00:12:43):
Now, how do they go? Well, it can really vary, as you say. Sometimes you've got two reports. It all depends really where you start. You can sometimes exchange analysis with somebody and go, "Well, they're the same then, aren't they?" And you're still directed by the court to sit together and produce areas of agreement and disagreement. And you're like, "Well, you could just read our reports and save yourself a load of trouble. Because even if you had just read our conclusions, you'd see we're saying the same things."
(00:13:16):
Sometimes you get into a situation where there are differences in the analysis, but ultimately, they're immaterial in terms of the conclusion. Had they been traveling at the speed limit, the pedestrian would always have walked clear, even though our approach speed is five or 10 miles an hour out. The material conclusions you reach are the same. So you do spend a very short period of time discussing where the differences are and why. If you can't reach an agreement between you by going, "Yeah, but you've got that, Mark, I don't know, two meters longer than it actually was. Here, look, I've got the better measurements." And the other person goes, "Oh, in which case then that's right. Yeah, the speeds become this," or whatever.
(00:14:02):
And then the third time that you have is when you are miles away with each other.
Lou (00:14:08):
Those might be the awkward conversations that might make the system just a little bit more difficult to deal with.
Mark (00:14:15):
Yes, it can be. It can be. And some of those discussions really test your professionalism at times. Particularly, how do I put this delicately on a recorded podcast, if you are dealing with one of the practitioners who takes their duties as an independent expert very lightly, then you have more of a struggle.
Lou (00:14:44):
Morally flexible, by the way, is how we generally term it.
Mark (00:14:47):
Is that how you do it? Orally flexible. I can't-
Lou (00:14:50):
Morally. No, morally, yeah.
Mark (00:14:52):
Morally flexible.
Lou (00:14:53):
Yeah, exactly.
Mark (00:14:54):
Well, yeah, I think we're talking about the same kind of people, aren't we? And then you have a situation that becomes... There's a real skill in writing those because the judge will have that document in front of them, and you have to put enough work into the areas of disagreement that it will come back to bite them. If they are playing a bit fast and loose with the truth, that's the document they're going to be held to and cross-examined on. So you have to do quite a lot of work explaining why you hold your position and why they hold theirs, in ultimately a document that's going to be used in cross-examination against them.
Lou (00:15:39):
I love it.
Mark (00:15:40):
But most of the time, most of the time, if you come up against an expert that's done the analysis properly, these aren't particularly difficult things to write. And sometimes the real skill in writing that document is keeping it short, because sometimes as experts, we like to be quite verbose, don't we? And write down lots of stuff and all of the geeky bits and pieces. But ultimately, if you agree, that's all the court wants to know. Do they need to spend a lot of time speaking to two of the experts when ultimately they agree on something? They almost give the court a fast pass that says, "This is what the experts are saying," but the disagreements, now we're going to get stuck into those.
(00:16:20):
So it's quite an important document. And like you say, sometimes it saves the court a lot of hassle just by having the two experts in a room together and discussing the case.
Lou (00:16:33):
Yeah, I imagine too. I don't want to hang on this too long, but it's super interesting to me because I imagine sometimes you go in there and you're like, "Well, it's a sliding motorcycle. I was using 0.48 g's with a standard deviation of 1.3." And then the other guy's like, "Well, I got a paper that's perfect for this case," just because they did 10 slides with this exact motorcycle, or something crazy like that. And you're like, "Oh, okay, that's awesome. Thanks for bringing that up. Let's use that one. I think that's a better idea."
Mark (00:17:00):
Exactly that.
Lou (00:17:02):
And that would be great because every once in a while, I mean there's so much literature out there, every once in a while I get another report or another analysis from an expert and I'm like, "Oh, wow, that is a more on point study and I'd like to integrate that."
Mark (00:17:14):
Absolutely. And occasionally, although it shouldn't happen, and we should identify it, sometimes the other expert's seen more exhibits than you have. And ideally, you should be working from the same information, but it doesn't always happen like that. And ultimately, if somebody walks up to court with a better set of, I don't know, photographs or measurements, or something, that they got from the scene, well, our duty is to independently analyze the facts that we have in front of us. And if they turn up with better measurements, well then we work on those.
Lou (00:17:49):
Yeah, I totally agree. There's one quote that I heard recently, I don't know who said it, and I'll probably get it wrong, but you'll understand the concept. And it was something to the effect of somebody blasting somebody for changing their mind. And the response from the person was, "Well, when I'm presented with new information, I change my mind. What do you do, sir?" And I thought that was just perfect. It's like, that's what I should be doing. You're saying that it's a bad practice, but of course it's the right thing to do when you're presented with new information, so.
Mark (00:18:20):
It's awkward. It's an awkward and uncomfortable time. But just bear in mind, our duty's to the court. I don't know whether it's expressly written in the States, but in terms of the rules, the Criminal Procedure Rules and Civil Procedure Rules that we have over here make it very, very clear that the expert's duty's to the court. And that's it. If you turn up with new information, my duty is to the court, not to them.
Lou (00:18:52):
Yeah, exactly. And that's the way that I look at it too, is my duty is to the truth, to figuring out what happened, and you're hiring me to figure that out. You're not hiring me to give you an answer that helps your case out.
Mark (00:19:05):
Absolutely.
Lou (00:19:06):
And people who are hiring me for the latter will only hire me once because I am not going to play that game.
Mark (00:19:15):
Yeah. And it normally breaks down halfway along the process, so we don't even do the full job.
Lou (00:19:25):
And how does writing that report work? Are you actually interacting? Do you write it during the meeting or is there back and forth during the meeting it gets written?
Mark (00:19:35):
Well, there's no set protocol, really. And again, it depends on how far away you are to start with. Typically, there's an unwritten rule in the UK that the claimant's experts will do the first draft, but that's only really an informal sort of agreement between experts. But again, with workloads and things, sometimes that changes. But, generally speaking, you'll have a discussion. Somebody will go away and do a draft. You could write it together; nothing stops you doing that. And then often there's an exchange of emails, and depending on how close you are with that exchange of emails, maybe a quick phone call or a quick Skype call or whatever, just a chat over some of the refinements about how it's going to be written. It's often easier if there are no material disagreements. If there are lots of disagreements, well, you're both trying to express your position of why you don't agree with the other person. So it could be 10, 12 drafts if it had to be.
Lou (00:20:50):
That sounds expensive.
Mark (00:20:54):
So if you looked at the document on its own, yes, that can be quite expensive. But if you think about the saving to the court process, it is so valuable because if you save, I don't know, two days at trial for that, you've got all of your attorney fees, all of your court fees, it's cheap. It's cheap.
Lou (00:21:14):
Yeah.
Mark (00:21:15):
So it depends. It depends, really.
Lou (00:21:18):
This might blow your mind, and you may already be aware of it, but in California where the majority of my work is, there are no reports. So our first disclosure of our opinions is really at deposition. So recently, they enacted a rule where you have to provide your file, at least, to opposing counsel three days prior to your deposition. And for a lot of experts, that includes an opinion sheet, but just very brief summary of their opinions or maybe not at all. So the first time they're hearing them is at deposition, and that's where basically all of the back and forth has to happen between the experts, if it's possible. You say, "Well, how do you disagree with Mr. John Doe?" And you verbalize those disagreements. But a lot of the time, especially if you're a plaintiff expert, you'll be getting your deposition taken prior to even hearing what the other expert says. So you can't even comment on it. So your first comments come up either via cross-examination through your client or during your direct examination. It's interesting.
Mark (00:22:23):
No, that's very different. That's very different. We know what the other expert's written in their report, what they've studied, what they've done. We've had this discussion before. And actually one of the things that we do at trial, because lots of civil cases settle before the actual trial, but when we get to trial, we're providing our counsel, our attorneys with questions, "Ask him this, I'll ask her that. How do they align those two things?" So at trial, we are working quite hard because we're saying, "You need to find out their position on this. You need to know what they're going to say on this." And that's during live evidence. So no, we definitely have it in advance.
Lou (00:23:07):
I love it. As for writing the report: my career started in Massachusetts, and we had to write a report for every case. It varies state by state here; in most states you have to write a report. California is one of those where you don't. And that was part of the appeal of moving here, quite honestly, because most of my work is analytical at this point. I don't have to write a report every time. Although, as I'm sure you know, there are a lot of benefits to sitting down and putting all your thoughts on paper and working through everything. It helps you clarify everything. But very, very interesting.
(00:23:41):
And how did you start down this process to begin with? How did you become part of the Metropolitan Police and in their accident investigation unit, if that's the proper terminology?
Mark (00:23:52):
Yeah, yeah. It was then our collision investigation unit, or road death investigation unit. We have a few different names for it in the UK, but essentially the team that go out to fatal car crashes. So I left university back in 2008 with my shiny degree certificate. And up until that point, I had always wanted to go into the world of banking to make loads of money and have a wonderful life.
Lou (00:24:22):
It sounds nice.
Mark (00:24:23):
And all of that. Yes, wouldn't that have been nice? And sometimes I wonder whether I made the right choice. So I applied. The Metropolitan Police were advertising for a role that was typically done by police officers, and they were taking a couple of civilian staff into those roles to see whether it could be done by a civilian. It's not something that they're doing a lot of now. So I think I managed to break the system, but-
Lou (00:24:53):
Bad idea. Yeah. Whoa, whoa, whoa.
Mark (00:24:54):
Yeah, exactly.
Lou (00:24:54):
No more civilians. Yeah.
Mark (00:24:59):
This was my wild card. I don't particularly know why I applied. I had a degree in applied physics, and it sounded applied physics-y at the time, not really knowing much. Probably my entire knowledge of roads policing and traffic policing was what I'd seen on the telly. I didn't really know anything about this. I applied to it as my wild card. And the more and more I found out about it, the more it was, "Well, that's quite interesting. It's a practical use of what I've done." And the rest is kind of history. The more I did, the more I enjoyed it, and I caught the bug very much so. When you speak to a lot of people that do things like this, it just captures you. And I'm still fascinated by collision investigation now. So yeah, it was the shiny thing out the corner of my eye that has turned into a career.
Lou (00:25:57):
That's pretty much how it started for me as well. I didn't know about it, and I was introduced to it by one of my undergrad professors. And once I got introduced to it, I was like, "Well, this seems like tons of fun." It's putting a lot of what I like together in one package.
(00:26:11):
And that first role with the Metropolitan Police, that was an accident investigator, like straight out of the gate?
Mark (00:26:18):
Yeah.
Lou (00:26:18):
Cool.
Mark (00:26:18):
Straight out the box. Yup. Straight out the box.
Lou (00:26:21):
That's awesome.
Mark (00:26:24):
And obviously there was training, and courses, and mentoring, and things, but yeah, the only thing I did at that time was collision investigation.
Lou (00:26:31):
So then you stuck there for eight years, obviously got a lot of exposure to a lot of, I'm sure, brutal crash scenes, and learned how to interpret evidence. And boots on the ground has got to be invaluable. From the private side, it's very rare that I'm able to put myself on scene. I'm usually there years afterwards.
Mark (00:26:54):
Yeah, exactly.
Lou (00:26:55):
And then you started your own firm eight years later, so that was about seven years ago at this point?
Mark (00:27:04):
Yeah. I was fortunate enough to be able to start exploring this idea while I was still in the police. So I had an overlap of a couple of years. And so 2014 was when I came up with this idea of, wouldn't it be good if I found a way to do this privately? And I spoke to a couple of different companies about going to work for them, and there was just a little bit of me that was, certainly not speaking ill of them because they're good companies, they have good reputations, but there was always something with them that was like, "Yeah, but wouldn't it be better if we did this?" But I couldn't do that here. Or, "Wouldn't you want to do it this way?" But couldn't quite do it there. And then well, long and short of it, you reach the position where you go, "If I want to do this, I'm going to have to do it myself," so I did.
Lou (00:28:03):
That's awesome. That's similar for me. We joke around here that we're running a pirate ship because we do a lot of things that, if we were at some big company, they'd be like, "There's no chance you're doing that." And I wouldn't have it any other way. It works out really, really well. And it sounds like you're in a similar position.
Mark (00:28:22):
Yeah, yeah.
Lou (00:28:24):
One of the organizations that's really intriguing to me, and you might have seen, I don't know if it came across your desk now that you're the chairman, but last night at 10 o'clock my time, I signed up for ITAI to become an affiliate member. Well, not a member, an affiliate. I don't know how you would term it, but there's a few different-
Mark (00:28:41):
Yeah. Yep.
Lou (00:28:42):
Member. Okay. So there are a few different levels there I see. You can be an associate, an affiliate, and then a member. So I'd love to hear you just talk a little bit about ITAI, and then also the different levels of membership. And it sounds like once you become a full member, there's a lot of vetting that goes on there. You have to have certain education and prove your competency. Sounds like it's similar to ACTAR in the States, where it's a bit of an accreditation of sorts, or at least a stamp of approval.
Mark (00:29:14):
Yeah, so exactly that, really. We're the UK body, although we do have coverage around the world, for collision investigators in the UK and wider afield. And what we try and do... It was born in the very late eighties, but realistically the early nineties. It was a gathering of like-minded people. It was the idea to get people who were previously out there practicing, previously out there doing things, into a situation where they could share ideas. That's all it was. An idea of sharing ideas and perhaps getting to the point where...
(00:30:03):
... they might be able to write some papers, to try and raise the standard, professionalizing the industry if you like. That kind of idea.
Lou (00:30:16):
Yeah.
Mark (00:30:18):
So there were lots of meetings, and out of those meetings there was some shared knowledge, and out of the shared knowledge came a paper and a journal. Our journal, comedically known as Impact, has a load of different papers, and they can be really wide-ranging, from human factors to digital data on vehicles to road design to coefficients of friction. There's no real limit to what's in there. If it's related to collision investigation, then it plays. So that's been from the early nineties, so almost 33 years' worth of papers and publications, which is really good. And it is an idea to get like-minded people together, but also to recognize their standard and their ability and have some kind of vetting. We're not a regulator in the UK, but we do set the standards for people who want to join. And you can start at a relatively low level. So for example, an affiliate could be anybody that has an interest in collision investigation.
Lou (00:31:33):
Yeah, I like that wording. I was trying to select what I was going to do and I was like, okay, well, I don't want to submit an application or anything. Not yet, anyway. So that wording was there, interested in collision investigation. I was like, well, that's me and I'd love to participate in some of what's going on with the body and then read the publications. And I imagine that's what that gets.
Mark (00:31:53):
Exactly that. Exactly that. And actually, if there are people here that are thinking that that is a really nice way of getting into the organization, it's relatively inexpensive and it gets you access to so much material. So if anybody was thinking about that, that would definitely be a good route in.
Lou (00:32:15):
That's one of my bones to pick, I guess, with our industry on a worldwide level: there are a lot of provincial publications that don't get spread between the countries at times. You guys might have the perfect publication for what I need on a certain case, but it's just not in the ether here in the States, so nobody knows about it unless you have somebody who is cross-pollinating. And for us, I think Wade Bartlett's been a big part of that. Colin Glynn has come over to WREX and spread some of that knowledge, so I'm excited about that.
Mark (00:32:49):
Exactly that. We work very closely with EVU, which is the European crash teams, or crash body, I should say. So we work quite closely with those, but there's certainly a lot more that could be done between us and ACTAR, for example, just this sharing of knowledge. At the end of the day, it's a science. It is a discipline. We have to share the knowledge, otherwise we'll all just sit over there each knowing our own little thing.
Lou (00:33:17):
Exactly.
Mark (00:33:17):
But it doesn't actually help us investigate how people died at the end of the day. So the more that we can knowledge share, share ideas and like any kind of research really, nobody does the whole bit of research themselves. You know, you do a little bit and then somebody else takes that and moves it a little bit further and then somebody else picks it up again and goes a bit further. That's how we learn. So that share of knowledge, really important. Yeah, slightly digress there, but yeah.
Lou (00:33:45):
No, that's cool.
Mark (00:33:47):
So, back to the membership grades. So we then have the associate member. I'll step away from the associate very slightly and talk about members first, because it puts the associate into a little bit more context. So, full member: we're talking about reconstructionists, so people who go and reconstruct collisions. And to be a member, there are various different entry requirements, but having some of your work peer-checked and assessed to check that you're competent is one of those elements. But that's aimed at reconstructionists. There are lots of people in our industry who don't do reconstructions but are very competent at what they do and play a crucial role. People that do vehicle examinations, for example. Not every collision investigator in the UK can also go and take a vehicle apart or examine it forensically. And so there are other people who would be heavily involved in those kinds of things who would be better suited to the associate role. Those that do exclusively human factors, for example, but don't actually reconstruct a collision, again, that would be the associate role.
Lou (00:34:55):
Gotcha. I love it. So where are you guys getting the majority of your training? So here in the States, it would be like Northwestern University has a great program, IPTM has a great program, and then there's all sorts of just people like you who would come and teach video analysis or somebody like me would teach motorcycle recon. What is the source there?
Mark (00:35:18):
So, most of the training will be done for police officers. We used to have a qualification that was run by the City and Guilds Institute, which is an awarding body, and I think that stopped probably about 15 years ago now. It got taken over by a university, De Montfort University, and they offer a series of different levels of qualification, from a kind of A level if you wanted to map it across, which for us is what your 17-, 18-year-old school leaver gets. So pre-university, that would be the qualification they get. So end of college, I think, for you guys. End of high school, sorry, for you guys.
Lou (00:36:05):
Exactly.
Mark (00:36:06):
But pre-college or university, but all the way up to degree level. So, there are four different levels, and people can do those modules and achieve a bachelor's qualification. So that would be the route if you wanted to get the formal academic degree. And then there's a number of other different training courses. As you say, those modular kind of bolt-on bits. And I think what's been really good, not wanting to hark back to COVID days, but with everybody offering these specialist courses having to move them online, it's been much easier to do that sharing of knowledge internationally. Yeah, sure, you have to deal with time zones and be a little bit sleepy, and some of us are getting up for breakfast while towards the end of the day other people are cracking open a beer. It's all just a bit weird to watch the [inaudible 00:37:03], but it's really helped in terms of sharing that knowledge internationally.
(00:37:13):
Nothing's quite as good as a classroom, I have to say. Being on the receiving end of training and also delivering it, that classroom feel for me is better. And the discussions you have with people at the end of the day in the bar are really helpful. So you lose that bit doing it online, but what you can do is get training that you probably wouldn't realistically have access to unless you've got the budget to jump on a plane and fancy a working holiday for a couple of weeks.
Lou (00:37:41):
Yeah, exactly. And I think that's how we originally started up our dialogue is I saw an advertisement somewhere for your five-day video class, which was geared specifically towards video analysis in collision reconstruction, and I had never seen a class like that before. So I texted Sam, who runs everything here, and I said, "Hey, look at this class. Get me in." And she said, "You know that that's going to be from 10:00 PM to 5:00 AM every day." And I was like, "Oh man, okay, good point. Okay, let's figure out another way." And then I reached out to you and I think ... So, I think the demand's going to be huge for it here in the States just because the majority of the video analysis classes that are available now, they're not really geared toward reconstructionists.
Mark (00:38:27):
No, exactly that, exactly that. Things like LEVA for example, great, great course by the way. I've done LEVA IV. Good courses. Would thoroughly recommend them, but they don't really deal with the vehicle bit. And that's where we're working, right?
Lou (00:38:45):
Yep. Yeah, so one of my colleagues here took LEVA one and two and he's like, "It's great. Really well-run. I don't plan on ever really using any of that." It's three and four that he's going to be using more of, it sounds like. Whereas your entire class seems like it's really beneficial for the collision reconstructionist. So I know you and I have been chatting a little bit. I plan to meet with John Steiner and grab breakfast pretty shortly. He runs Mecanica here and he has a big training facility, and I think that's where we're going to hope to get you. And I don't know, we'll probably sell it out at like 40 seats or something like that, and I'm sure that we'll fill that up and have a good time. So I appreciate you offering to do that, and you're probably going to be that bleary-eyed guy in front of 40 Yankees trying to stay awake.
Mark (00:39:39):
Complaining about jet lag. Yeah.
Lou (00:39:41):
Yes, exactly.
Mark (00:39:43):
I mean, you don't need to see clearly to do video analysis or anything like that. You could always do [inaudible 00:39:48].
Lou (00:39:49):
No, not at all. Not at all. Hey, the good news is with that accent, you'll sound way smarter than any of us anyway, so it doesn't even matter if you're speaking intelligibly. And so, video ... It sounds like in digging a little bit into your background, and like I was saying before we were recording, that's one of my favorite parts of the podcast is I get to spend three or four hours just looking into Mark Crouch or whoever I'm interviewing that day. But-
Mark (00:40:15):
Scary. Scary
Lou (00:40:16):
Yeah. All sorts of things popped up ... joke. It sounds like the video analyst thing kind of ... it made you do it. In other words, you were being presented with video analyses so often that you were like, "Well, I've got to dive deep into this, learn all this stuff, make sure I know what I'm doing." And then out of that, that's my assumption, I'd love to hear you talk about it, came the book.
Mark (00:40:46):
Yeah, exactly like that. I sometimes describe it as video analysis found me rather than the other way around. I certainly never set out to do video analysis. I didn't walk into collision investigation and go, "I really want to do this video thing." It just wasn't like that at all. Sure, certainly through my degree we'd done some machine visiony stuff, so I knew about optics and cameras and how computers do things and those kinds of things, but you could argue that about any of the physics I learned, about how things hit each other and bounce off each other. So it wasn't necessarily me specializing in that route. But I found myself in London in the Metropolitan Police, and actually London was at the time divided up into five collision investigation units, each covering a section.
(00:41:39):
And the section that I covered was bang in the middle. So I was at the Central Traffic Garage at the time, and as we can all imagine, that is in the middle of London. So cameras capturing collisions, even in sort of 2008, there was loads of it. There was absolutely loads of it. And there were a few pretty basic techniques that people had learned organically. Positioning a vehicle, for example: well, if you could see where it was on the ground, because it drove over some white paint or a manhole cover or a stop line or something, you could position the vehicle, and if you do that twice, you get a distance and everything's good. So as long as the crash you had conformed to that very tight set of rules and the vehicle just so happened to be driving over the right parts of the road, you could do something with it.
(00:42:34):
What that meant, conversely, was we had a load of video that didn't conform to that and it was sitting in drawers. You know, not literally. It was sitting in evidence bags in the property store, but essentially as investigators we were putting it down going, "Well, you can't do anything with that." And that sat really uncomfortably with me. You've got a window of a prescribed size capturing an area. All I need to do is work out where it is in that picture and the time it took to go between the two, and then you can calculate a speed. In many respects, video analysis of CCTV is one of the easiest things to do, because you need two positions and a time. And one of the basic calculations we do in collision investigation is an average speed. You know, distance over time. And that's all video analysis is. Well, that part is.
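The "two positions and a time" idea described above reduces to a one-line calculation. A minimal sketch in Python, where the positions, frame counts, and the 25 fps rate are illustrative assumptions rather than figures from a real case:

```python
# Average speed from two known positions and the frame count between them.
# All numbers below are hypothetical, for illustration only.

def average_speed_mps(distance_m: float, frames_elapsed: int, fps: float) -> float:
    """Average speed = distance / time, with time derived from the frame rate."""
    elapsed_s = frames_elapsed / fps
    return distance_m / elapsed_s

# Vehicle crosses a stop line in one frame and a manhole cover 25 frames later,
# the two marks being 11.1 m apart, on a UK 25 fps system:
speed = average_speed_mps(11.1, 25, 25.0)  # 25 frames at 25 fps = 1.0 s
print(round(speed, 1), "m/s")              # 11.1 m/s, roughly 40 km/h
```

This is only the easy case Mark describes: it assumes every frame was actually recorded at the nominal rate, which is exactly the assumption the later discussion of frame timing is about checking.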
(00:43:34):
So, I started with a very basic concept of I just need a distance and I just need a time and then we can do stuff with it. What happened out of that was actually, there are a few things that you can use if you think about how light travels or how a camera records, or if you were able to somehow line up this 2D image against a 3D world, and then you could pretty much just point to where it was on the screen. And you can take these concepts and just start playing with them. And that's what I did. I didn't have a particular goal. I certainly didn't sit down to set out with a proper scientific method of going, well, this is what we're going to test, this is the hypothesis, this is how we're going to design the experiment. None of that. So I'm a very bad scientist. I can think of some of my lecturers at university just pulling their hair out with me. I didn't. I just played with it. I just played with it and saw what could be done.
(00:44:33):
And on the back of that, if we fast forward a few years, myself and Steve came up with a few techniques that worked, and they really did work. So we set out ... well, we never set out to write a book. There's going to be a theme through this podcast: the things I do, I never really set out to do. I just kind of do them, and then take them to the next thing, and then wait until I stop or get bored, which hasn't happened yet. We wanted to put together a manual, and we still describe it as more of a manual than a book. We wanted to put these techniques together as a training manual for colleagues. And what came out of the back of that was, well, we actually had a few of them, and if you put them together and you wrap a cover around it, you can call it a book. So, that's really what we did. But if you read it and go through it, it's written very much as a training manual. It feels more like a training manual than it does a book. And that's deliberate, because it's meant to guide people through some of the concepts: what you can use, when you can use them, and how you can get some really good results. So that was how it was born, really. Completely unintentionally.
Lou (00:45:47):
Yeah. And kind of like we were talking about earlier, when you're writing a report for casework, it forces you to put down what you know and realize what you don't know, and then you can start to fill in those gaps. There's nothing that keeps your brain as organized as the written word. So, I appreciate you writing it. And then teaching, I imagine this is kind of the foundation of ... I have the book here, this is kind of the foundation of the class you teach. And I have found in my own practice that writing what I know about motorcycle collision reconstruction and then teaching it, you're exposed to a whole bunch of other people that have different methodologies and might take issue with something you're saying or add onto it, and then you just become better and better at it, having launched yourself headlong into that field.
Mark (00:46:40):
Yeah, for two reasons. Well, three actually. The first reason is that first edition was written in 2017, five years ago. It's why we've had to update it, really, because things change. It's a developing technology. And the second reason is you kind of always learn from students. You don't have the monopoly on good ideas. You listen to the way that people encounter things, and that's kind of how you keep your finger on the pulse, particularly in the civil world where we're getting things a couple of years later, often in cases. You know, you could argue that, well, the content that I'm exposed to is two years old now. Whereas if you teach people that are very much so going out there at live scenes, well, you can pick up on what the trends are.
(00:47:27):
And thirdly, coming back to the fact that it was written as a training manual, well, you understand how people think, the problems that they're having, and what it's actually like for them to take that book and use it. So you're always tweaking it, and like any teaching material, the teaching material has almost driven the second edition, because we've made so many changes to the content over five years that, well, now we've got another training manual that we can wrap a cover round and pass on. Like any area of research, I think it's a living, breathing thing. It's not a case of, "Right, I've written the book, I'll put it down now, and that's great, because for the next 30 years if you always apply those techniques everything's going to be fine." It doesn't work like that.
Lou (00:48:17):
Yeah, no, I totally agree. And we have a recorded version of my motorcycle class, and I learn new stuff so often that we're having to record specific modules over and over again. And like okay, well, the EDR module, motorcycle EDR, I knew that that was something that's going to essentially have to be updated every year. Some of the things, you know, if you're teaching mechanical engineering or something like that, you're pretty much good since a long time ago. But in this field, like you said, everything's evolving. And what are some of the biggest ... You don't have to go over everything, and I'm putting you on the spot here, but what are some of the most notable changes between the first edition and the second edition, kind of highlighting the evolution of the field and your interpretation of it?
Mark (00:49:03):
Yeah. So, there were a few things that have changed. In terms of the content of the book, we've put two new sections in ... over a couple of sections, but they kind of bookend some of the content that was there. The front end has a lot more video theory in it, because what we found was that whilst the positioning techniques are good, there was still a lack of understanding of how a camera works, how compression works, how an image is stored, ultimately how a piece of video footage is designed to trick you. So we put some of that understanding in, with a little bit more wrapping and some of the geeky theory stuff. Of everything we found in the documents you can read, there's some brilliant stuff, very technical, very good, but is it accessible to a collision investigator who has got to learn video stuff? You've got to be able to do video to be a collision investigator nowadays, but video theory doesn't come naturally. So, how do we break down the key concepts of the theory and put those in?
(00:50:19):
At the tail end of the book, we've written a little bit of guidance on how to give evidence. Now, this could be slightly controversial and I'm kind of waiting to see what the feedback is, but we spent a lot of time on it, because we see people give evidence all the time. Some people give really good evidence, some people not so much. And it's more about tips and tricks to take people who may not have had a lot of court experience through the process, so that hopefully they're giving evidence better at the end. And I've been very fortunate that a high court judge has helped me co-write that particular section. So it's not just me telling you how I think you should give evidence. I've written it with a judge who hears it, and there are a few of his little bugbears in there.
(00:51:11):
So, those are the bits that are added on that are slightly different. What are the developments in terms of technology? Two, really. One is dash cams, or vehicle-borne cameras. They only appear very, very fleetingly in the first edition, but now we've put more sections on that because it is just becoming more and more prevalent. So, that certainly appears in there. And some additional software techniques that we can use with more 3D scanning, with some software assistance. We've tried to add a couple more techniques in there that are a little more software-based. So it's just updating it, really.
Lou (00:51:58):
Yeah. One thing that was interesting to me, and I'd love to get your thought on the dash cams, obviously they're a lot more popular now. A lot of cars are driving around with them as bolt-ons, and then a lot of cars are driving around with them now as OEM equipment. They're installed by the manufacturer either for pre-impact detection or autonomous operations of some sort. So, have you been getting ... Well, I guess I'll ask two questions in one because this is something else I wanted to get to is just how the sources of videos have just blown up over the past several years. In 2008 when you were in London, I imagine the majority of the cameras that you had access to were still CCTV surveillance type stuff. Then you start getting everybody strapping a GoPro to their head or dash cams and things like that. So how have you seen the evolution of just the frequency of actually getting video, the diversion of ... not diversion, diversity of sources, and then are autonomous vehicles giving you videos?
Mark (00:53:09):
Yeah, so dealing with those in reverse: autonomous vehicles are still breaking through over here. But yes, we've looked at some video used for the purpose of autonomous operation, and there are all kinds of issues with timings and things like that, because the time intervals they use can be all over the place and it's very difficult to work out whether the metadata's accurate in terms of calculating speed. So, the autonomous vehicle cameras, and I'm explicitly saying the cameras used for autonomous driving, are an area that I need to look at in great detail, because they're going to be around and I'm not sure how reliable they are at this stage. If it's a camera that's designed for the job, whether it's OEM or something that you buy yourself aftermarket and stick in, they're generally better. There are a few on the market, very cheap ones, where you don't necessarily know their origin. They can be problematic sometimes. But generally, if it's a camera fitted to a vehicle whose purpose is to capture the road for a crash or whatever, you can typically do things with those.
Lou (00:54:28):
Okay.
Mark (00:54:29):
So, has there been a bit of a diversification in the videos that I deal with? What hasn't changed is that most shops, pubs, clubs, or evening venues will have something fitted to the outside of them. Quality has probably been upgraded, in that you can get some nice sort of HD five-megapixel cameras on the outside of buildings now. So, the frequency of those is probably about the same, but with slightly higher quality.
Lou (00:55:02):
They've always wanted to capture that hooligan activity, and the hooligan activity still persists.
Mark (00:55:08):
Yeah. And certainly in London there are all kinds of licensing requirements, and I'm sure it must be the same for some of you guys in various states: if you want to get an alcohol license, for example in London, you need to have a certain type of CCTV system. Not talking about having to buy this brand, but it has to cover entrances and things like that as part of the licensing requirements. So, always going to get those. The two areas, or two slash three areas, that we've seen a vast increase in are domestic properties. So residential properties, where things like Ring cameras, Nest cameras, doorbell cameras are fitted to the fronts of people's houses, and variations on those themes. So Nest, Ring and a couple of other operators do, relatively inexpensively, other wifi cameras that you can stick up. So people would have a doorbell and maybe something else covering their front drive, for example. So we've seen a huge increase in the number ... I don't know the numbers, but from experience a significant increase in CCTV footage that is coming from residential properties.
Lou (00:56:20):
Yeah, who would've predicted that one, 15 years ago, that we'd be getting a lot of video from a doorbell?
Mark (00:56:27):
Who'd have predicted, you know, that you'd tell somebody to drop a parcel off when you're at work just because they rang your doorbell? So there's lots of that. And again, there's generally a bit of a theme with CCTV systems, although there's always one that breaks the rule: the more money that's spent on the system, generally speaking, the better or easier it is to work with. The cheaper ones have issues with timing or don't quite have the resolution. So you always tend to work harder with a cheaper system than you do with a more expensive one. But that being said, these expensive systems aren't that expensive anymore. They're relatively cheap. I think a doorbell over here is, I don't know, about 80 pounds, $100 or whatever, and the stick-up cameras go from half that price to about the same price. So they're around.
(00:57:24):
Dash cameras have just exploded. I'm trying to think whether I ever dealt with one in, say, the first five years of my career. So 2008 to '12, '13, something like that. I just can't think of using them. And now in my casework, everywhere. And the same goes for, as you said, cyclists, motorcyclists, because a GoPro will strap to the handlebars, strap to the top of your helmet or whatever. There's lots of that. There's lots of that as well.
Lou (00:57:56):
Yeah, I appreciate it when they document the crash like that, especially with the GoPros because now they have GPS data on board. They have accelerometers on board. So essentially they're riding with the data acquisition system now, including video. So it's better than if they had a VBOX on, unless it's a video VBOX. That's basically what they're riding around with now.
Mark (00:58:18):
Yeah, because there are actually quite a few studies, particularly with the GoPro 10s and 11s, and you always have the issue of what's happening to the rider's head if they've got it on the helmet, or on the handlebars what's happening through the front suspension, and dealing with that jitter in the Z axis. But generally speaking, for things like speeds, it's an incredible source of data, because as you say, the X and Y coordinates of the device are pretty good if you smooth the data a little bit. Sure, you have to be careful what you're using, but yeah, brilliant for getting a speed on the approach.
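A light smoothing pass of the kind Mark mentions for GPS-derived speeds can be sketched as a centred moving average. The window size and the sample values are illustrative assumptions, not a method recommended on the show:

```python
# Centred moving-average smoothing for noisy GPS-derived speed samples,
# e.g. from an action camera's telemetry. Window size is an arbitrary choice.

def moving_average(samples, window=5):
    """Smooth a sequence of speed samples with a centred moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)                 # clamp the window at the edges
        hi = min(len(samples), i + half + 1)
        chunk = samples[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical noisy approach speeds in m/s:
raw = [14.8, 15.3, 14.9, 15.6, 15.1, 15.4, 15.0]
print([round(v, 2) for v in moving_average(raw)])
```

In real casework the smoothing method and window would need to be justified, since over-smoothing can hide genuine braking or acceleration.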
Lou (00:58:59):
Yeah, I love that that's out there now, and it makes me think of EDR because here in America, I think that might be a big difference between our venues in America. Basically every car that rolls off the assembly line right now has an EDR and we can access the vast majority of them with the Bosch tool. That's not true from what I understand over there.
Mark (00:59:25):
What is true is they'll all be fitted with an airbag control module. All of them will have the data and be storing it. There are just various restrictions on some of the vehicles that we can access, because they're still locked down by the manufacturer. Now, that is getting a lot better, and I think the pressure's coming on the manufacturers, because the sort of VW Group, which is our big one here, Volkswagen, those kind of people have just said, "No, we are going to share that data now." So for us, the really difficult vehicles are sort of BMWs and Mercedes, so kind of the luxury end of the market.
(01:00:02):
... probably because they don't want to tell their customers that they're going to give up their data should they be driving like an idiot and crash. Whereas yours is all locked down in legislation. We were making some real inroads in terms of effectively adopting the American legislation into EU law. And then, without getting too political, in the UK we decided that we didn't want to be part of EU law anymore. So we decided to leave that.
Lou (01:00:32):
I heard about that. Yeah, I heard.
Mark (01:00:36):
And things like that cause difficulties because it's still not clear what's going to be done. And also the time that got taken up by both sides, EU and the UK in that process meant that other bits of legislation that may well have been adopted pretty quickly like EDR just haven't really gone anywhere in the last five years. Which is a bit of a shame. It's a bit of a shame because it-
Lou (01:01:02):
It really is. Yeah.
Mark (01:01:03):
That is gold.
Lou (01:01:06):
That is the best. I always tell people the best reconstruction, unless it's on video, doesn't compare to your average download nowadays. I'll never be able to tell you with scientific certainty what the driver was doing five seconds pre-impact unless I have video or EDR. Yeah. So let me just make sure that I understand that. Say a GM car gets into a crash tomorrow, airbags pop, the data is there. In other words, a GM car that is built for the European market has an airbag control module, of course. And within that module it has an EDR, and the data's sitting there, but for legal reasons you can't acquire it.
Mark (01:01:43):
Yep. So we plug in the same Bosch kit, exactly the same, plug it in, but put a UK VIN in it and the computer says no, won't give you the data.
Lou (01:01:54):
So could you spoof it?
Mark (01:01:56):
You can spoof it.
Lou (01:01:57):
Okay.
Mark (01:01:57):
But not all of them. Again, the higher end of the market is wise to the fact that spoofing is just the easy workaround for us, so they tend not to allow it. So some of them you can get data out of, some of them not, and I would always try it, and I think it will eventually come for us. But yeah, it's very frustrating, particularly when manufacturers go, "No, that data isn't stored." And it is stored. I know it's got to be stored. "Yeah, no, we don't keep it."
Lou (01:02:29):
I imagine that if we have a follow-up podcast in 2033, that's going to be a thing of the past and you'll look back on this time and you'll be like, "I can't believe that was happening."
Mark (01:02:42):
Yeah, I think that's true. I think that's true because it's so silly. It's so silly.
Lou (01:02:47):
Yeah. It really is. I mean, at least here, you either have to get the owner's consent, whoever owns the car at the time of the investigation, they have to sign something or they have to record a statement to say you have authorization, or if it's a criminal matter, then a judge has to issue a warrant. So it's not like, willy-nilly, anybody can just go grab the data. It has to be some person of authority or who has permission. I don't see any harm in it.
(01:03:16):
Okay, so going to some of the more technical side of video stuff, we were talking a bit about it, like you were saying with your speed analysis at the beginning you're just like, "All right, well if I know when the frames are written and I know where the vehicle is at each frame, then I can get the average speed in that duration." Seems like there's a third component, which is video health. In other words, are those pixels that you're seeing tricking you in any way? And I like the way you put it, they're designed to trick you. And I remember when I first started learning about surveillance and I was just like, "Wait a second, what's going on here? Some of these are predicted pixels? They didn't actually... That's not what was visible at the time? I thought they were all I frames."
(01:03:59):
But I wanted to go through each one of those sections in a little bit of detail. Granted, your course is a five-day course, so it's not like we're going to get through everything but to talk about position, timing and then the health of the pixels.
Mark (01:04:14):
Yeah, sure.
Lou (01:04:15):
And starting with timing, maybe if that works for you, it's just like how do you figure out when this frame was written? And it seems like there's some pretty tricky techniques if it's not a straightforward case.
Mark (01:04:29):
So fortunately in the UK, most cameras work to a base rate of 25 frames a second, and that's when they'll be going out to grab an image. And most of the time in the UK, there isn't really a significant enough delay between when it goes out to grab an image and when it actually records it, or when it first starts recording it. But we'll talk about how, if we think about how our vehicle progresses through a piece of footage, we can actually do a check at the end to make sure those assumptions that we've made are true or not. So starting with the base rate of 25 frames a second over here, you guys would use 30, but the principle's the same. That would give a sort of quantization to the images of about 0.04 of a second between them.
(01:05:24):
But it won't always capture every image. It might not be going out for an image every 40 milliseconds. What it might be doing is going for every other one, or getting that data but choosing not to record it, because we've set our NVR or DVR so that we just want maximum storage. We want to keep our footage for three weeks and therefore we are not going to go out and get every single frame. But we start from a premise that a fixed system should be trying to obey some kind of rules. Of course, if we make an assumption like that, we've got to come back and make sure it actually does, because you could go down some pretty interesting assumption tunnels and get some terrible speed [inaudible 01:06:16].
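The base-rate arithmetic here is simple but worth writing down, because the "keep every other frame" storage setting Mark describes changes the interval you must use in any speed calculation. A sketch, with the drop pattern as an assumed example:

```python
# Frame-interval arithmetic for a CCTV system with a fixed base rate.
# UK systems typically use a 25 fps base rate; US systems typically 30 fps.

BASE_FPS = 25.0
base_interval = 1.0 / BASE_FPS            # 0.04 s between captured images

# If the DVR stores only every other frame to stretch its storage,
# the effective rate halves and the interval between stored images doubles:
keep_every = 2
effective_fps = BASE_FPS / keep_every     # 12.5 stored frames per second
effective_interval = base_interval * keep_every

print(base_interval, effective_fps, effective_interval)
```

Using the 0.04 s base interval when the system was really storing images 0.08 s apart would double every calculated speed, which is why the assumption has to be verified against the footage itself.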
Lou (01:06:16):
That car was going 200k. Yeah, exactly.
Mark (01:06:19):
Oh, you'd be surprised at some of the things I've seen. Well, you probably wouldn't. But we do this all of the time in any area. If we start from the coefficient of friction of a sliding motorcycle, for example, unless we've measured it, we're going to start by saying, "Okay, it's fair, we'll go for a 0.3 or something here." But what you would always do is go back and say, "Well, does that make sense for the speed that I know it started braking at and the damage I've got at impact? Does it make sense?" We do exactly this all day in collision investigation across various different disciplines. Video's the same.
(01:06:56):
Video's just one cog and it needs to make sense with everything else you've got. If you've calculated a speed of 120 miles an hour one second before impact, and all you've got is a broken wing mirror and a bit of cracked fairing, well, that's wrong then, isn't it? Those kinds of things. So we think of video as one piece in the puzzle, because everybody immediately starts thinking that it is the be-all and end-all, and it's just a tool.
(01:07:28):
But anyway, I've got slightly sidetracked there. So timing, we are trying to look for conventions. Now it may be the case that we will have longer frames and shorter frames and those kinds of things going on. But if you plot a vehicle moving through a series of images, you would see if there was a longer frame or a shorter frame, because we know that cars can't instantly accelerate or instantly brake unless they hit something. Even through an acceleration or a brake, the movement of the vehicle should be reasonably constant. So if you see any jumps in the footage, well, that was a longer time period, and then you have to go and try and work out, well, does that obey a convention? If I've got 15 frames a second, that means I'm going to have a short-long pattern, for example, or those kinds of things.
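The plotting check Mark describes can be sketched numerically: per-frame displacement should change smoothly, so a sudden jump flags a longer (or shorter) frame rather than real acceleration. A Python sketch with hypothetical positions:

```python
# Flag frames whose inter-frame displacement jumps, suggesting a longer
# (or shorter) real time interval. Positions are hypothetical, in metres,
# plotted frame by frame as Mark describes.

def displacement_outliers(positions, ratio=1.5):
    """Return indices of steps whose displacement differs from the
    median step by more than the given ratio."""
    steps = [b - a for a, b in zip(positions, positions[1:])]
    ordered = sorted(steps)
    median = ordered[len(ordered) // 2]
    return [i for i, s in enumerate(steps)
            if s > median * ratio or s < median / ratio]

# Near-constant speed, but one step covers twice the distance: likely a
# dropped frame, not a burst of acceleration.
pos = [0.0, 1.0, 2.0, 3.0, 5.0, 6.0, 7.0]
print(displacement_outliers(pos))  # [3] -> the 2.0 m jump
```

The flagged step then has to be reconciled with the system's convention (a short-long pattern, a dropped frame, and so on).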
(01:08:21):
But we are still just making those assumptions at the moment. We are making an educated guess according to a predefined convention. There is a risk there of course that we are just trying to force what we see on the screen into our preconceived idea, which is always dangerous. So there are different ways to check it. The metadata can help, although you have to be careful with metadata because sometimes the metadata can be very wrong because it's just doing an average, it's just calculating.
(01:08:50):
So you need to be a little bit careful about how you use that. And the final way would be using a timing device, something like a light board. I think Axon, formerly iNPUT-ACE, have got one, and we designed one over here in the UK. They work slightly differently, but the general principle is the same. You position something in front of the camera whose lights flash or iterate at a certain rate; then, by looking at it image by image and performing a calculation on which lights are on and which lights are off, you can calculate the time that's elapsed.
(01:09:27):
Now, happy days if that is nice and regular and you have a predictable pattern. Even if it's not regular in the sense that every single image is separated by the same period of time, that's okay, that's great. Even if it isn't, you can predict the pattern and the timing interval; you just need to find out where that jump is, or where the change of pattern is, and plotting a vehicle through gives you that.
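The light-board principle can be illustrated with a toy encoding. Real boards (Axon's, the FCIR design) encode time their own ways; here a hypothetical board simply displays a binary counter that increments every millisecond:

```python
# Decode elapsed time between two frames from a hypothetical light board
# whose lights display a binary counter incrementing every millisecond.
# Real boards work differently; this only illustrates the principle.

TICK = 0.001  # one count per millisecond (assumed for this sketch)

def board_count(lights):
    """Convert a tuple of on/off lights (most significant first) to a count."""
    value = 0
    for lit in lights:
        value = (value << 1) | (1 if lit else 0)
    return value

def elapsed(lights_a, lights_b, tick=TICK):
    """Elapsed seconds between two frames from their light patterns."""
    return (board_count(lights_b) - board_count(lights_a)) * tick

frame_1 = (0, 0, 1, 0, 1, 0, 0, 0)   # counter reads 40
frame_2 = (0, 1, 0, 1, 0, 0, 0, 0)   # counter reads 80
print(elapsed(frame_1, frame_2))      # ~0.04 s, consistent with 25 fps
```

Reading the board image by image gives a measured interval for every frame pair, which is exactly what the pattern analysis needs.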
Lou (01:09:53):
That's awesome. I love that idea of you're talking about taking... So obviously like you said, if it's pinging on 25 frames a second, that's easy. Anybody can figure that one out and do their analysis. But when you have some sort of variable frame rate, from what I understand from CCTV, it's essentially managing resources. So it's going to ping one camera, then the next, then the next, and then that one and that loop might not be a consistent duration. So you're going to get funky frame timing.
(01:10:23):
From what I understand, correct me if I'm wrong, I'm going to go on a little bit of a talk here, but sometimes the metadata will tell you exactly when that frame was written. Maybe that's true, maybe it's not true. But if you have a light board and you put it up in front of that camera and you plot out, I don't know how many you can do, but I imagine with some of these computational programs that are available now or hopefully will be available in the future, you can plot out a hundred frames, 200 frames, 300 frames, see what that pattern looks like.
Mark (01:10:49):
Thousands.
Lou (01:10:50):
Thousands. Okay. Yeah. So then you can figure out, okay, well what is my average frame rate? What's my max, what's my min, what's my standard deviation? Is there a pattern? Can I tell if it's within the half a second it's going to always do X, Y or Z? And it's a bit of pattern recognition.
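Lou's list of statistics is straightforward to compute once the intervals have been measured. A sketch with hypothetical intervals, using Python's statistics module:

```python
# Summarise measured frame intervals (seconds): average rate, min, max,
# and spread. The intervals are hypothetical.
import statistics

intervals = [0.040, 0.041, 0.039, 0.080, 0.040, 0.040, 0.041, 0.039]

mean_dt = statistics.mean(intervals)
print(f"mean interval : {mean_dt:.4f} s")
print(f"mean rate     : {1.0 / mean_dt:.1f} fps")
print(f"min / max     : {min(intervals):.3f} / {max(intervals):.3f} s")
print(f"std deviation : {statistics.stdev(intervals):.4f} s")
```

Note how a single long frame drags the mean rate below the nominal 25 fps, which is why the pattern, not just the average, matters.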
Mark (01:11:06):
All you're looking to do is for a predictable pattern that you can apply to your instant footage. That's all you were looking for. Something that you can predict that that's happening, something that you know what it is, you can quantify and predict it and then you can apply it.
Lou (01:11:22):
That's fantastic. And so does the metadata at times tell you exactly when... So I know the metadata will say, "Hey, this video is 25 frames a second." But sometimes, using iNPUT-ACE or Amped FIVE or something like that, can you say, "This frame was specifically written at this time, down to the millisecond"?
Mark (01:11:37):
You have to be careful. So some can. The presentation timestamp, PTS time, which some people may be more familiar with. That can be generated from a couple of different places. It can come direct from the camera or from the NVR, depending on what the actual system is, and it might be telling you exactly when it went out for that frame, or exactly when it got it.
(01:12:07):
It can also be a case of, well, I don't really know what it is, but I know I'm 15 frames a second, so I'm just going to take one second and divide it by 15. That causes you some problems, but you would see it if you plotted a vehicle, because you would see a jump. Or it could go, "Well, I know that this bit of footage is one minute 47 seconds long, and I know that I have 223 frames or whatever," and then it just divides one by the other and gives that as your average time.
(01:12:43):
So you have to be really careful. Metadata can be either really helpful or it can cause you all kinds of problems, but anything we do in collision investigation, you always want to go back and check it, don't you? You never blindly follow something and take a number that somebody popped up on a screen for you, punch it into your calculator and sit back going, "Well my work here is done." Yeah, you might come unstuck at some point.
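Mark's duration-divided-by-frame-count trap is easy to show in miniature. With hypothetical capture times containing one long gap, the "averaged" metadata timestamps carry error on almost every frame:

```python
# An "average" timestamp track can hide a long frame. Hypothetical
# capture times for a nominally 15 fps system, with one long gap.

actual_pts = [0.000, 0.067, 0.133, 0.200, 0.400, 0.467]

# A lazy encoder might just divide total duration by frame count:
duration = actual_pts[-1] - actual_pts[0]
naive_dt = duration / (len(actual_pts) - 1)
naive_pts = [i * naive_dt for i in range(len(actual_pts))]

for a, n in zip(actual_pts, naive_pts):
    print(f"actual {a:.3f}  metadata {n:.3f}  error {a - n:+.3f} s")
# Every frame except the ends carries timing error: a speed calculated
# across the long gap from the metadata alone would be badly wrong.
```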
Lou (01:13:09):
Yeah, exactly. And more trickery, like you were saying before: we're getting tricked sometimes as far as what the frame is showing us, and some of those pixels are current and some of them are not. And you can also get tricked by the metadata. It's like, yeah, it confidently tells you when that frame was written, and apparently that's not a given.
Mark (01:13:30):
No, no, not always. But check it.
Lou (01:13:33):
And obviously for anybody listening along who's doing reconstruction, if you don't know when the frame was written, then you can't calculate speed accurately, and that's what we're after the vast majority of the time: speed.
Mark (01:13:46):
And it's also a really interesting thing talking about the civil side as well. It's often more challenging, or more work, to work out what your tolerances in a calculation are or can be. Whether you're going to pick your 85th percentile, or go for min-max, or one standard deviation, however you choose to present it is fine. But it's often more work to work out what your deviation is, what your tolerances in your calculation are, than it is actually working out the middle value, your median value, the one you think is most likely. That bit is often the easy bit; the trickier bit is going, "Well, what actually is my confidence here?"
Lou (01:14:32):
Yeah, I agree. And so that process that I was mentioning is just spit balling, but is that what you're doing? You're measuring frame rate for a thousand, 2000 frames, looking at the average and the standard deviation and implementing that into your error analysis?
Mark (01:14:49):
And sometimes you don't need to do that many. The number that you need to do is dependent on how regular it is. If it is like absolute clockwork, I don't know, 100, 200, pick a number, ultimately until you are satisfied that you can stand in the witness box and say, "No, the camera was doing this."
(01:15:09):
So what does that number mean to you? For me, if this camera is all over the place and I'm going to do some kind of statistical analysis, and I want to see whether there's a Gaussian distribution on this and we are all going to get stuck into the mathematical weeds, well, I'm going to want a lot, and typically that's going to be about four figures. But as we look at some of the machine vision stuff that's coming, you can have a situation where it's relatively easy to read these light boards, because in the correct setup, and they've been designed in certain ways, it's quite easy for a computer to detect on and off in the right circumstances. So actually you can look through a big data set quite quickly to see what you're getting.
Lou (01:15:55):
That's cool. Yeah, it doesn't have to be done by hand if you're setting a light board up with a specific system that knows what it's looking for.
Mark (01:16:03):
Doesn't have to be. Sometimes there's a confidence in doing a good selection by hand, so that you can trust the machine process, so that you can almost validate that technique. So if somebody asks you the question in the box, "But how do you know the computer was doing it properly?", you go, "Well, I manually sampled 350 of them and they were all spot on," and those kinds of things. It's just how you feel comfortable presenting that evidence, really.
Lou (01:16:32):
Yeah, I think that's a really important point too. It's like, well, how much of this work do you have to do so that you can sleep at night, so that you are comfortable with your analysis? There's a book that I love, Zen and the Art of Motorcycle Maintenance by Robert Pirsig, and he said, "In a technical field, peace of mind is not just one part of it, it's the whole thing." And I think there's a lot of truth to that. It's like, yeah, I've got to feel good when I'm done with my analysis, and until I feel good with my analysis, till I've found that peace of mind, I'm not done. There's still work to be done.
Mark (01:17:07):
Ultimately what do we want? We don't want the expert on the other side turning up having found something that we didn't. That's ultimately what it is, professionally and scientifically. That's the worst possible situation, isn't it? That your counterpart found something that you didn't.
Lou (01:17:27):
I'll probably die 10 years earlier than I would've otherwise in that situation, some heart attack or something. Yeah, that is the nightmare. So then the other uncertainty comes from, of course we know the frame timing now, to get speed we have to know position. I'd love to hear about your toolkit there. And then any uncertainty there has to be mathematically paired with the uncertainty, with the frame timing to come up with an ultimate uncertainty with respect to speed.
Mark (01:17:55):
And theoretically, depending on what you are dealing with, there is uncertainty in the believability of the pixels, if you like. What's going on under the hood of, say, the compression algorithm? Do you have to apply something else there? But yeah, ultimately, your error in speed, because there are only two elements that go into it, distance and time, your error in speed is made up entirely of your error in distance and time. So we're doing some groundbreaking stuff on this podcast, aren't we? Really, really [inaudible 01:18:26] stuff.
Lou (01:18:27):
Yeah, exactly. Like, "Wow, 2023, they finally found out what feet per second it is. It's feet in seconds."
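Since speed error comes entirely from distance and time error, a simple min/max bound can be put on any single calculation. A sketch with hypothetical measurement tolerances:

```python
# Speed error comes entirely from distance and time error, as Mark says.
# A min/max bound sketch; the distances and tolerances are hypothetical.

MS_TO_MPH = 2.2369  # metres per second to miles per hour

def speed_bounds(d, d_tol, t, t_tol):
    """Return (min, nominal, max) speed in m/s for a distance of
    d +/- d_tol metres covered in t +/- t_tol seconds."""
    v_min = (d - d_tol) / (t + t_tol)   # shortest distance, longest time
    v_max = (d + d_tol) / (t - t_tol)   # longest distance, shortest time
    return v_min, d / t, v_max

lo, mid, hi = speed_bounds(d=14.2, d_tol=0.3, t=0.52, t_tol=0.01)
print(f"{lo * MS_TO_MPH:.1f} / {mid * MS_TO_MPH:.1f} / {hi * MS_TO_MPH:.1f} mph")
```

Pairing the worst cases of both measurements like this is the conservative way to combine the two uncertainty sources into one speed range.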
Mark (01:18:35):
So distance. So what we're actually doing is we are doing positions and then measuring a distance. So how can we get that? Well, it depends on what you're dealing with. I quite like working out distances because you have a bigger toolkit to play with and there are some rules. There are some times when something would apply and other things wouldn't. Or you need to take a step like correcting for radial distortion before you can position. But ultimately, you can choose whatever technique you want to obtain position. Some will be better than others and lots of this is just practice and experience, which one's going to give you a better result here. But there's quite a few. Certainly in the book, there's about six or seven different techniques that you could use and depending on the circumstances you're dealing with.
(01:19:29):
So if it's driving over physical features on the road, you use that one. If it passes behind a lamp post, for example, we just draw a line between the camera and the lamp post and extrapolate it across the road; that's where it's going to be. But we must remember, when we deal with positioning as collision investigators, we have a slight advantage here over, say, a pure video analyst that's looking at this. Because if you extrapolate a line across a road on a diagonal, well, where is it in the lane, for example? Because that will change your distance if you've got a diagonal line across the road. Was it in the gutter on one side? Was it out towards the middle of the road? If you took your collision investigation hat off, well, those would be the tolerances you would have to report if you were just purely a video analyst.
(01:20:24):
But we happen to know that a second later we've got perfectly straight wheels into a T-bone collision. Well, then we can just work out roughly where it was in the road. Sure, there's going to be a tolerance, but it's not gutter to middle of the road anymore, is it? We can refine those, and that helps us, because we take our video analyst hat and we also wear our collision investigation hat, and that's quite helpful.
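The lamp-post sightline technique reduces to plane geometry: extend the camera-to-post line until it crosses the lane the vehicle occupied. A Python sketch in plan view, with invented coordinates, showing how much the answer moves between the gutter and the lane centre (Mark's point about refining the lane position):

```python
# Mark's lamp-post technique in 2D plan view: draw a line from the camera
# through the lamp post and extrapolate it into the road. Coordinates are
# hypothetical, in metres, with the road running along the x-axis.

def sightline_position(camera, post, lane_y):
    """Where the camera-to-post sightline crosses the line y = lane_y."""
    (cx, cy), (px, py) = camera, post
    t = (lane_y - cy) / (py - cy)     # parameter along the sightline
    return cx + t * (px - cx)

camera = (0.0, 12.0)   # camera mounted 12 m from the road edge
post = (20.0, 8.0)     # lamp post between camera and carriageway

# Without the collision-investigation hat, the tolerance is gutter to
# lane centre; with it (straight wheels into a T-bone a second later),
# the lane position, and so the distance, can be refined.
for label, lane_y in [("gutter", 2.0), ("lane centre", 4.0)]:
    print(f"{label}: x = {sightline_position(camera, post, lane_y):.1f} m")
```

With these numbers the crossing point moves by 10 m between the two lane assumptions, which is exactly why the lane position matters on a diagonal sightline.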
(01:20:50):
And then we get those two positions, and how do we measure between them? We can do that in a couple of different ways. We can physically go there: measuring wheel, tape measure, whatever your particular weapon of choice is. Or, something that's becoming a lot more popular for us, and I spend a lot of time dealing with, 3D laser scans, point clouds, however you get those, whether you take the photogrammetry approach or a TLS, a laser scanner, approach. Just look at some of the stuff Eugene Liscio's put out on those; some really good resources about how you use that. But the 3D environment is becoming the thing at the moment, because we don't need to shut roads, or, as a private practitioner, I don't need to dance in traffic, which is generally better.
Lou (01:21:40):
Yeah, it is. And that's what I have found myself doing. One of the things that blows my hair back, I fell down the photogrammetry rabbit hole in a similar way that you fell down the video analysis rabbit hole. And my favorite current technique for establishing the position of a vehicle on the roadway is to go out there, laser scan the whole thing, bring it back, bring it into PhotoModeler and then I can train PhotoModeler and say, "All right, here are the 3D coordinates of all of these pixels." And I'll take as many as I can 30, 40 if I can. Sometimes it's five or 10, but at times I can get 30 or 40 then I can account for the distortion field.
(01:22:19):
PhotoModeler will correct for the distortion, it'll figure out the focal length, it'll know where the camera is, and I can compare its calculated position of the camera to my scanned position of the camera. Now you can bring in a point cloud of a car and overlay it on top of the pixels that relate to that car, and you're like, "That's where it is."
(01:22:41):
So-
Mark (01:22:41):
That's one of my favorites as well. And one of the reasons why I quite like that is it gives you a nice visual deliverable at the end. You might go on to use that point cloud with a car, positioning it to do, I don't know, driver's view or something. Or you might want to get the point where the brake lights came on and then you could wind back a little bit and with a PRT on it and work out when it all started happening. So it takes you further for the rest of your analysis, but that can be quite time-consuming, that approach as well. And if it's passing behind lamp posts and things, well there's a really quick easy win if you know where the camera is and where the lamp post is and you have a good model of the road.
(01:23:24):
So it's just about picking the right tool for the job, really, like everything we do. But I, like you, quite like that one because it's quite visual, and if you get into the situation where you need to explain it to a court, because it's pictures, you can stage people through it: now I've lined it up, and then I put a car in it, and then I took the picture away, and now we can... It's a really easy way of communicating your evidence as well. So yeah, that is one of my favorites.
Lou (01:23:55):
I completely agree. That's one of the big advantages to using that methodology: when you go to present your evidence, it's very easy for the jury to understand what you did. You can show them a fading video of the point cloud or the mesh coming over the pixels that relate to that car and then create a nice demonstrative. The other thing I like about that methodology, and I don't know how long we've been doing exactly this, but PhotoModeler Premium came out and they handle point clouds really well now. So I can bring point clouds in, account for the distortion of the image, and now I can look at how my point cloud aligns with those pixels. And if it's great in the middle but way off over to the right, then I know I'm not doing a great job of accounting for distortion yet, and more control will help. Or maybe I can never get the distortion properly accounted for out there, but then I have to understand my positions could be a little bit skewed out there, and I shouldn't weigh that portion of the analysis as much.
Mark (01:24:52):
Yeah, exactly. That actually comes back to what we said a few minutes ago: sometimes working out your tolerance is the hard bit, and there are times in any analysis when you get to the point of going, "Well, that's just too unstable. I might as well just be guessing if I wanted to put that; it's nothing more than a guess at that point." Yeah.
Lou (01:25:14):
What does your toolkit look like? And I agree there's different tools for the different jobs, and not everybody has the ability to pull out a hundred-thousand-dollar laser scanner and $4,000 photogrammetry packages and 3D modelers and spend 40 hours. We understand that some of the law enforcement agencies especially, they have way too much work to be able to do that, so they're just lining up the car with points on the road. When you are doing those more advanced analyses, what does your toolkit look like? Software, hardware, and the combination?
Mark (01:25:46):
So we use a RIEGL TLS scanner, a RIEGL laser scanner. I find them really good for road surfaces, but any scanner will do this; you just need to know your kit and how you get the best from it. We use Amped FIVE for lots of the video handling and distortion removal. That helps us because we do other stuff with video as well, so we have it because of the toolkit it gives us. But again, you can do that with other things. And for the reverse projection and the lining up of things, we use PC-Crash.
(01:26:26):
Again, there are different models. You've got things like AnalyzerPro, or photogrammetry, or drone-based stuff instead of an expensive 3D laser scanner. And correcting distortion, well, you can do that in Photoshop if you know what you're doing, or there are actually a number of freeware programs that would do it. So it's not necessarily about how much money you spend on the bits of kit, it's about understanding what you do and how it works best for you. Yes, I'm in a very lucky position that we do lots of video work, so you might expect us to have some more of the top-end video tools, sure. But what I wouldn't want people to go away with is saying, "Well, unless I have a hundred-thousand-dollar laser scanner and however much these bits of software are, I couldn't do it." Because that's simply not the case.
Lou (01:27:22):
Yeah, I think that's a good point. In distortion correction, like you said, there's some free things out there, and it's all about reaching a level of hopeful perfection. Sometimes we calibrate exemplar cameras, and does it have the exact same distortion characteristics as the subject camera? Probably not, but it's really close. It's the best we can do. If I can do a field calibration where I have 30 or 40 visible 3D control points, great, I'm going to do that. But sometimes-
Mark (01:27:55):
Or, put something up in front of the camera that takes up enough of it that's got grids on it, lines. So the light board that we have is actually a checkerboard, and if you get that close enough to the camera, well, then it becomes a bit easier to correct for distortion, because you know that this is a series of squares that are all straight and lined up. There are many different ways. I think one of the things I still love about collision investigation as a whole is that no two jobs are the same. Sometimes as a practitioner, yeah, of course you're going to go to jobs that are very, very similar to ones you've dealt with, although that can be the death of an investigation if you think you've seen it before. But yeah, still that thrill of going, "Well, this is a bit different, isn't it? How am I going to deal with this then?" is the fun bit, why we get out of bed in the morning.
Lou (01:28:42):
It really is. And I think one of the things that we're seeing more of, at least here in the states is body worn video. And that presents its own unique challenges for accounting for distortion because it's so extreme. What's your experience been there?
Mark (01:28:57):
Yeah, so you've got two things. Well, you've got three. You've got massive distortion. You've got a camera that, just by the nature of the way they're worn, whilst they're meant to be nicely projecting forward, is going to be at a weird rotation and a weird pitch that's just fundamentally unhelpful to what you need, because the bit that you want to capture is always the bit where somebody's turned. And depending on which ones you have, they're also running different compression and smoothing algorithms and things underneath, which just makes it extra exciting for you. But you work with what you've got, and you can probably do something with nearly every bit of footage. You just need to be careful how confident you are, or how confident the court can be, in your findings and calculations.
Lou (01:29:55):
Yeah, that makes sense. And we've done different things with those. It's a little bit harder to grab an exemplar version of those and calibrate that.
(01:30:03):
... bit harder to grab an exemplar version of those and calibrate that, but if you have enough control in the background. And then, like you said, you're doing a common sense check at the end. Okay, my photogrammetry analysis says that the tire mark is here. Is that consistent with the adjacent left turn arrow? Does it look to be about two feet to the left of the turn arrow? And you were talking about the compression and the smoothing, and I think that brings us to the third thing we've talked about, frame timing and then position. Then you have I-frames, P-frames, B-frames, different codecs doing different compression methodologies, and how... I guess it would be good to probably... Not everybody's going to know what those things mean. So if you could kind of introduce an I, a P, and a B-frame, and then how codecs affect the way that you approach an analysis.
Mark (01:30:55):
Yep. There's a real temptation with an image, particularly with CCTV image, to think that it is a series of photographs, if you like, completely captured images, 25 frames a second. There are 25 completely captured images, and they're all new, and every single detail in each of those individual photographs is new and correctly captured.
Lou (01:31:23):
I remember the day I lost... I was disillusioned and was like, "Oh my gosh, that's not true? Come on."
Mark (01:31:28):
That's not true. Hate to break it to you. That is not what happens in video, simply because if we did that and captured new color data for every single pixel in every image in a video, our files would just be so big we couldn't do anything with them. So what we do is we cheat with some of those pixels. And if we say, "Well, that pixel value hasn't changed much from the previous image," what I'm not going to do is recapture the color, all of the information about that pixel. All I'm going to say is, "Nick it from the image that came first." And when you play this bit of footage back, I've got nothing to show there, but I'm telling you, "Just repeat what was on the previous image."
(01:32:21):
And this can happen in a couple of different ways. We can have frames that only look backwards, only look back to an earlier image. Or, because it's all coming into the camera in sequential order, but when we're going to store it somewhere, well, if we're able to mess around with the order and sort of steal pixel values from an image that is yet to come, we just store them differently on the computer, well, we can make a double saving, can't we? Because we can have some pixels from previously and steal some from the future. And that's one of the ways that it's done. And so we hear these terms: I, P, and B-frame.
(01:33:05):
An I-frame, sometimes called a key frame, is an intra-frame. So that is your photograph. That is a newly captured image, and that's brilliant. All of those images are new and all of those pixels are lovely and new and great. That's what we want. But then we start using P-frames, or predictive frames, which I really hate as a term by the way, but we're kind of stuck with it, because you go into court and tell them that your pixel values [inaudible 01:33:34] are predicted, and see how long you have to answer that question.
Lou (01:33:38):
Yeah. Are you a psychic, Mr. Crouch?
Mark (01:33:41):
And all the genius questions that attorneys make themselves feel brilliant about. But you just have to have a long explanation, a bit like we're going to have here, but slightly more formal [inaudible 01:33:52]. So a P-frame will do that. It will look backwards and say, "Well, is there anything that hasn't changed dramatically from the image before? And can I just steal those?" And a B-frame, a bidirectional frame is one where it looks forwards and backwards.
(01:34:07):
So there is a bit of work that we need to do under the hood to understand, well, are we looking at something that's completely new, or do we just need to be careful that some of those values might have been stolen from a previous image, they're not actually at this moment in time.
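One practical way to see which frames are I, P, or B is FFmpeg's ffprobe with `-show_frames`, which reports each frame's `pict_type` and `pts_time`. A sketch that summarises records of that shape; the sample data here is invented:

```python
# Summarise frame types from ffprobe-style records (ffprobe -show_frames
# reports a pict_type and a pts_time per frame). The sample records below
# are invented for illustration.
from collections import Counter

frames = [
    {"pict_type": "I", "pts_time": 0.00},
    {"pict_type": "P", "pts_time": 0.04},
    {"pict_type": "B", "pts_time": 0.08},
    {"pict_type": "B", "pts_time": 0.12},
    {"pict_type": "P", "pts_time": 0.16},
    {"pict_type": "I", "pts_time": 0.20},
]

counts = Counter(f["pict_type"] for f in frames)
print(dict(counts))  # how much of the clip is freshly captured vs predicted
i_times = [f["pts_time"] for f in frames if f["pict_type"] == "I"]
print(f"key frames at {i_times}: only these are fully fresh images")
```

Knowing where the key frames fall tells the analyst which images can be trusted as wholly new and which may carry borrowed pixels.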
(01:34:26):
Furthermore, just to make things really exciting, and just to blow the minds of everybody that hasn't really thought about this before, I can play one other trick on you if I want to with those pixel values, before we talk about some of the other compression tricks. I don't need to copy them to the same place. Because if I think there's been a little bit of movement, I'll go, "Yeah, I can move that colored pixel, the red door handle for example, and all I'll do in this next image is copy that red pixel, but move it slightly in this one." So it's not simply the case of just going, "Yes, but now I can see it's moved, everything's good." No, not necessarily, because it might have copied it and moved it. Oh, now we're getting very, very, very complicated. Very, very complicated.
Lou (01:35:19):
How inconsiderate? That's very inconsiderate.
Mark (01:35:21):
And now we're sat in this situation where we're going, "Oh, so I can't really trust anything in the image." But I haven't finished yet, because what I'm also going to do is strip a load of the information out of the image, because you only need a few bits and pieces to work out what's going on. The more I can strip out, the smaller my file size is going to be, and therefore that's better. But our brains are far cleverer than we are (I like to separate those two), and we're really good at picking up movement. We need relatively little information to detect that there's movement in an image, or to detect there's changes. So if I can strip out a load of that movement information and leave you just enough that you can detect movement, your brain is going to be happy. If you play it back, you're going to be happy.
(01:36:14):
But that's a problem for an analyst because now not only has that... some of those pixel values don't belong here, they've been copied, they've also been moved to a different place according to an algorithm, not actually where they are, and a load of the information's been pulled out. Well, goodness me. Shall we all just pack up and go home, and let's not bother about doing video analysis because quite frankly, I can't even trust whether that's a car in an image because it could have gone past last week?
(01:36:44):
Well, the trick is not to panic, because there are bits of software that can tell you where those pixels came from and where they've been moved to. And ultimately, if you do a thorough analysis, and we come back to this movement of a vehicle, we know that vehicles can't dramatically change their speed. So we change the way that we do an analysis and forget about going, "Well, I want my tolerances to be really small, so if I pick a big distance and a big time, that reduces my mathematical errors, doesn't it?" True. But what happens if you check by taking a series of real incremental speeds, lots of them? You will see if there's been a movement, because you'll end up with an outlier, and you'll know, if you plot an acceleration line through that, well, no, it hasn't just suddenly pulled 3 g, it just hasn't done that. So there's something going on here, in the time or the distance or something that's happened in the pixels.
(01:37:44):
And when you do that, you actually end up with a sudden ability to start parking this large average speed and start talking about trends. Was it accelerating? Was it decelerating? Do you suddenly see when those brakes came on? Because you might not be able to see the dive of the front of the vehicle, or you're looking at the front of it and can't see the brake lights or whatever. Do you suddenly see where this trend line changes? So we want to try and break a little bit of convention here and start thinking about lots of sequential speed calculations rather than one big one. And that's slightly counterintuitive, because typically we want to keep our errors low, don't we? Big distances, big times, reduces our overall tolerance. No, we can flip our thinking a bit and get a lot more information.
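Mark's sequential-speeds idea can be sketched as a plausibility filter: compute many short incremental speeds, then flag any step whose implied acceleration exceeds what a road vehicle can physically do. All numbers hypothetical:

```python
# Lots of short incremental speeds instead of one big average, then a
# sanity check of the implied accelerations against what a road vehicle
# can do (nothing road-going suddenly pulls 3 g). Data is hypothetical.

G = 9.81  # m/s^2

def implausible_steps(times, positions, max_g=1.2):
    """Return indices where the acceleration implied between successive
    incremental speeds exceeds max_g."""
    speeds = [(p2 - p1) / (t2 - t1)
              for (t1, p1), (t2, p2) in zip(zip(times, positions),
                                            zip(times[1:], positions[1:]))]
    flags = []
    for i in range(1, len(speeds)):
        dt = times[i + 1] - times[i]
        accel = (speeds[i] - speeds[i - 1]) / dt
        if abs(accel) > max_g * G:
            flags.append(i)
    return flags

times = [0.00, 0.04, 0.08, 0.12, 0.16]
positions = [0.0, 1.0, 2.0, 4.0, 5.0]   # one step implies a wild speed jump
print(implausible_steps(times, positions))  # [2, 3]: the jump and its recovery
```

A flagged interval points at a timing or positioning problem (or moved pixels), not at the vehicle suddenly doing something impossible.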
Lou (01:38:36):
That's interesting. I like that. Like you said before, we're kind of cheating the system a little bit. Or I shouldn't say that, but using our skillset and recon to help us with the video analysis. So if we understand vehicle dynamics really well and braking capabilities, there's no way that that Toyota Camry just pulled 2 g's while braking. So something is-
Mark (01:38:56):
Exactly that.
Lou (01:38:56):
... up with my frame timing.
Mark (01:38:58):
Exactly that. Exactly that. And for a long time, and I think we've well and truly knocked this away now, there was this old adage that you can't do a speed calculation from CCTV because of all of those problems. Issues with timing. You don't know when it's recorded. When did it capture it? When did it actually encode it? You're dealing with very small periods of time and distances, and macroblocks and compression algorithms and everything, and just noise, lots of reasons why you can't do it. Actually, when you get down into the real nitty-gritty, you can, and it is very, very effective.
Lou (01:39:39):
So macroblock analysis is what you're talking about where that will help us understand what pixels are new, what pixels are old, what pixels move, and there's a couple tools?
Mark (01:39:48):
So probably a bit of explanation, because I've just dropped the term in there. We've talked about these individual pixels that are copied, and it is true that individual pixels can be copied, but typically what happens is it's blocks of pixels. Usually 16 by 16 pixels, but you could have eight by eight or four by four squares, and they do different things. We won't really get into the weeds of that, but we're actually copying blocks of pixels from images and moving them together.
Lou (01:40:22):
Yeah. So you have to be careful, if you're an analyst and you're looking at a video and you think that you are confident, really confident that that red light is red, where those might be predicted pixels and there wasn't enough of a change because it doesn't consume enough of the frame to tell the system, "Hey, I just went from red to green. I need new pixels here." And they might just be hanging out from before, and a macroblock analysis would help you understand that potentially.
Mark (01:40:50):
Yeah, you can look. So that macroblock analysis, what we said before is we can use a bit of software to tell us whether something's been copied or moved or copied and moved or whatever. So yeah, we can pay a bit of attention to that.
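[Editor's note: to make the macroblock idea concrete, here is a toy, editor-invented comparison in Python. Real forensic tools read motion vectors and block types from the bitstream itself, which is far more reliable than this pixel-domain sketch.]

```python
# Toy macroblock comparison: report 16x16 blocks of the current frame that
# are (near-)identical to the previous frame, i.e. blocks an encoder could
# simply have carried over rather than re-encoded. Frames are plain lists
# of rows of grayscale values here, to keep the sketch dependency-free.

def unchanged_blocks(prev, curr, block=16, tol=2.0):
    """(row, col) block indices where curr is effectively a copy of prev."""
    h, w = len(prev), len(prev[0])
    copies = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = sum(
                abs(prev[r + i][c + j] - curr[r + i][c + j])
                for i in range(block) for j in range(block)
            )
            if diff / (block * block) < tol:  # mean absolute pixel difference
                copies.append((r // block, c // block))
    return copies
```

Blocks that come back "unchanged" across a frame where the scene clearly moved (a stale red light, for instance) are exactly the predicted-pixel risk discussed above.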
Lou (01:41:05):
And the three tools that I know of that are available for that, I'd love to get your take on what tool you're using and what tool you'd recommend. And if you don't want to recommend a tool, I'm not going to put you in the hot seat, but we have FFmpeg, we have iNPUT-ACE, which was recently purchased by Axon, and then we have Amped FIVE. We currently have iNPUT-ACE, but I got my eye on Amped FIVE as well. And it seems like it'd be really smart to learn FFmpeg, but I don't know if I have the bandwidth to do that.
Mark (01:41:33):
Yeah, so that's the thing. Well, what Amped does really well... So those three products, yeah, completely agree. FFmpeg, unless you can get some GUI to go on the front of it, you are looking at command line. And to be honest with you, it's been a little while since I've done that, so no hard questions on coding FFmpeg please, because I might fall from grace quite quickly. But yeah, those two products you talked about, Axon, or iNPUT-ACE as it was previously, and Amped FIVE. Interestingly, both of them essentially drive FFmpeg in the background. So you're still using FFmpeg indirectly, but with a much nicer interface.
(01:42:15):
As I say, I used to use, and still do use, Axon iNPUT-ACE; it's still a tool that I have in my toolkit. I probably use Amped FIVE more now. It's got more tools, it's got more things in it, but as you might expect, the price points are different. So it's just whatever fits. Both of those products, yeah, no issues with using either of them. And FFmpeg. But with FFmpeg you're probably going to have to spend a little bit of time hitting the books to work out how to use it, though there are loads of blogs and tutorials. You would be able to do it if you were that way inclined.
Lou (01:42:58):
Yeah, I started down that rabbit hole and I pulled my parachute. I was like, "I'm good. This is not what I have the capacity for right now." And I guess that brings me to a question I wanted to ask. It's not the perfect segue, but it's like, okay, for firms on the private side, say, or even law enforcement, do you recommend that everybody goes fully down the video rabbit hole just because it's so important, or should they have a baseline understanding and then a colleague that they know they can reach out to? How do you look at that as far as training and expertise? We can't all be experts in everything, but should we all know video enough?
Mark (01:43:40):
So this is a great question and a tricky one to answer. Like any forensic discipline, unless you know what you are doing, and I'm not talking about a really advanced level of doing things, but if some of the terms we have used in this discussion, PTS times, time intervals, things being quantized, macroblocks, I, P, and B-frames, if you don't know these things, my advice would be to find out what they are before you start doing video work. Because if you don't know what they are, there's a risk of you getting your analysis wrong, which is obviously the worst kind of thing, but you can also have a tough time giving evidence, because these are fundamental concepts, and if somebody asks you a question about one of them and your answer in the box is, "Sorry, what's an I-frame?" it's going to end pretty badly for you.
(01:44:43):
So if you don't know what they are, go and find out. There are loads of courses, LEVA courses, those kinds of things. Do some reading; there are some good texts on it. Find out what they are. But I do feel, just with the prevalence of video, unless you are doing collision investigation way out in the sticks where you don't see properties for 20 minutes of driving, in any kind of metropolitan, built-up area it's only going to become a bigger and bigger resource, the video. So yeah, I think it has to be part of the skillset.
(01:45:23):
In the same way, I'd also say that things like EDR data have got to be part of a collision investigator's skillset as far as I'm concerned. Because there's only going to be more of it, and it's such a golden resource; you've got to be over it, I think.
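[Editor's note: for readers who want to see those terms on their own footage, ffprobe, from the FFmpeg project, will list the frame type and presentation timestamp of every frame. The command below is standard ffprobe usage; the parser, however, assumes the compact CSV prints pts_time before pict_type, so verify the field order on your own ffprobe build before relying on it.]

```python
# Inspecting I/P/B frame types and PTS times with ffprobe, e.g.:
#
#   ffprobe -v error -select_streams v:0 -show_frames \
#           -show_entries frame=pict_type,pts_time -of csv=p=0 input.mp4
#
# Assumed output shape (check your ffprobe version): one "pts_time,pict_type"
# line per frame, such as "0.040000,P".

def parse_frames(csv_text):
    """Parse 'pts_time,pict_type' lines into a list of (seconds, type)."""
    frames = []
    for line in csv_text.strip().splitlines():
        pts, ptype = line.split(",")[:2]
        frames.append((float(pts), ptype.strip()))
    return frames

def i_frame_times(frames):
    """PTS times of the I-frames; everything between them is predicted."""
    return [t for t, ptype in frames if ptype == "I"]
```

Sparse I-frames mean most of what you see is prediction, which is exactly when the frame-health questions above matter most.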
Lou (01:45:38):
Yeah, I would agree with that take as well. And I kind of followed a similar path to you just in that I did not plan on going deep down the video rabbit hole because I have enough to worry about already, but it just punched me in the face and it's like, "You have to know how to do this if you want to be a competent recon now."
(01:46:01):
So I remember 2005 or so, a lot of recons threw up their hands, and they said, "Man, this CDR stuff is complex. I got to buy that whole kit. I got to learn that stuff. I'm just going to outsource that when I need it." And I think that for the most part, 95% of the community has gone the other way and just said, "Well, this is so important to these analyses. I need to have that kit. I need to learn from Rick Ruth and Rusty Haight and Brad Muir and be knowledgeable about this." And I tend to agree that that's what you have to do with video to a certain extent. You don't have to be Mark Crouch, but you have to know the basics of video analysis if you want to do some things.
Mark (01:46:40):
Exactly that.
Lou (01:46:40):
And then you can just do the passing point analysis. It's not super complex if you know when the frame was written and what the health of the frame is.
Mark (01:46:49):
Exactly that. And like any area, you can't be excellent at every discipline. You just can't. One of the key things of being an expert is understanding when you've met your limit, or identifying that something isn't quite right here even if you don't know what it is. That's the time to pass stuff on. Absolutely, that's the time to pass stuff on. And you certainly shouldn't feel bad about that at all; the very epitome of being an expert is knowing when you've reached your limit. But to know when you've reached your limit, you have to have a certain level of understanding, because otherwise it's just ignorance, isn't it? So you have to understand these concepts and then recognize when things have gone beyond them. Just understand what it is, understand some of those terms, and then you might find you can actually do a lot more than you thought you could.
Lou (01:47:46):
So the next thing I wanted to talk about is codecs a little bit. And if you could first, I guess, just define what a codec is, and then how does that play into your analysis? Do you show up, see what the codec is and be like, "Oh, okay, this codec is going to require me to do X, Y, and Z"?
Mark (01:48:03):
So codec, is it an acronym? Can you call it an acronym? It stands for coder decoder: coder decoder, codec, C-O-D-E-C. Sometimes called compressor decompressor because of the function that it actually has. What that does... we've talked about this video file being a digital file. Light enters a camera lens, well, we haven't really said that, but light enters a camera lens and falls on a sensor, and then we're digitizing that light on the sensor to try and get out the other end a series of ones and zeros that computers like, so it can sit on a computer.
(01:48:47):
But there's a convention to how we can write those ones and zeros. And we talked earlier a little bit about, well, we're going to do some cool stuff with these pixels and copy some things. And without getting really into the weeds of it, that is essentially what a compressor is doing, that the code bit of codec is doing. It's giving the convention that this bit of footage is going to be written to a computer that's going to take this stream of light and get it into computer language. But in order to then play that back, we need to decompress it. I need to know the way in which I'm meant to read those ones and zeroes back. And so I need to know how it was recorded so that I can drag all of the data back.
(01:49:37):
If you think of it a bit like a French to German dictionary: if I write something down that is in French and I want it in German, I use a French to German dictionary. In order to understand it again, because I don't speak German, I don't speak French either, but let's go with the analogy, in order for me to understand it again in French, I need to use the German to French translation. We park all of the idiosyncrasies of language structure and things like that, but I need to make sure that I'm using the reverse. I need to know what it was translated to and how I get it back again. In that process, nearly always, we are going to drop some detail. A bit like language, some of the sentence structure, some of the nuance, we might lose, but that's okay. We understand that's going to happen in the process, but we need to translate it back in the way that it was written.
(01:50:33):
Now, sometimes if we translate it really, really badly, if we use the wrong decoder, well, it just won't play. Or we'll get absolute gobbledygook out the other end. And we've probably all seen those scenarios on our computers where we've put in a bit of CCTV footage and you get that message that says you don't have the codec, or codec unrecognized. What your computer's actually telling you is, "I don't know how to translate these strings of ones and zeros into a video file that you can see. I don't know how to do it."
(01:51:09):
Now, that is not great for us, but it's not as dangerous as getting the slightly wrong codec. What do I mean, slightly wrong codec? It's either right or wrong. Yeah, that's true. But you can have codecs that will play back some of it, that can translate some of it. And on your screen, up pops a video file that you can play back, so you think you've got it all. But if you've ever had that situation where you play something back and get, I don't know, a certain frame rate, and then you come up against another expert who's got exactly the same file as you and ends up with other images, if that's happened to you, it is almost certainly because you've been using not quite the right codec. And this is a problem for us, because we find out quite late that we've managed to not really get all of the information out of that file.
(01:52:13):
So what can we believe? The bits that it did show me, are they believable? Has it messed around with any of the other data? So making sure that we have the right codec to play back is really, really important. This is where some of the forensic video tools really do earn their money, because if you're using something like VLC that comes with its own codecs, or installing codecs on your computer, which you don't have a lot of control over if you want to use Windows Media Player or whichever particular flavor, it's only going to play that back as best it can, and it's going to make a best guess, based on the file name, at which codec to use. And if you go into Windows Settings, you can say, "For this file name, use this codec to play this back."
(01:53:07):
What if that's the wrong one? What the forensic tools do is essentially ignore the file name and look through how that video file is constructed, all those ones and zeros, all that hex data as it gets converted from binary into hex, and go, "Right, I know, because my libraries tell me, that a file constructed with that structure needs this codec to play all of the footage back," and they will actually use what they've got in their own libraries rather than making a guess. So this codec element, this compression, taking that light through all the digital processes and back again, is what the codec does. So we need to be really careful when we use it.
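[Editor's note: the "read the structure, not the file name" point can be sketched with a few well-documented magic bytes: ftyp for the MP4/MOV family, RIFF/AVI for AVI, the EBML header for Matroska and WebM. Everything else here is editor-invented, and this only identifies the container; it says nothing about which codec the streams inside actually need.]

```python
# Minimal container sniffing by magic bytes rather than file extension.
# This identifies only the container, not the codec of the streams inside;
# forensic tools go much deeper into the bitstream than this.

def sniff_container(data: bytes) -> str:
    if len(data) >= 12 and data[4:8] == b"ftyp":
        return "MP4/MOV family"
    if data[:4] == b"RIFF" and data[8:12] == b"AVI ":
        return "AVI"
    if data[:4] == b"\x1aE\xdf\xa3":
        return "Matroska/WebM (EBML)"
    return "unknown: inspect the hex by hand"
```

Renaming a proprietary DVR export to .mp4 changes nothing here, which is exactly why players that trust the extension can guess the wrong decoder.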
Lou (01:54:00):
And is one of the only ways to really determine if you're using the right codec to use one of these forensic tools?
Mark (01:54:06):
No, there are different ways of doing it. You can play it on different systems. This does come at a bit of a cost, but having a separate machine that doesn't get loads of codecs installed on it, almost a raw system, helps, and playing the file back on a number of machines would be something to check. If I play it back on a couple of different players, do I get the same data? That would be the cheapest way of doing it. It's not infallible by any means, but this is why forensic tools exist, because there is a risk of getting that wrong.
Lou (01:54:47):
And obviously, as the codecs have developed, their goal is primarily to show high quality video at the lowest bit rate. It's like, how much can we compress this and-
Mark (01:54:59):
Absolutely.
Lou (01:55:00):
... still display something beautiful.
Mark (01:55:02):
So it begs the question, well, why don't we just have one codec? Or two or three codecs? Well, and you've hit the nail on the head with that, it's the number of different functions we need. In order to put something on YouTube, I need to get a video that could potentially be three or four gigs down to ideally under 100 megs. We've got a huge compression to do. But we know we're only really going to play that back on a screen. What's the biggest monitor people have got? A 27-, 29-inch monitor? That would be wholly inappropriate if you wanted to broadcast on your 60-inch telly, or even go to the cinema, an IMAX or whatever.
(01:55:45):
The codecs exist almost for the end product, and they are wild and they are varied. There's only really anecdotal evidence, I don't think anybody knows, but you get numbers ranging from 5,000 to 25,000 when you ask people how many codecs there are. The short answer is nobody knows, but there's lots.
Lou (01:56:12):
Yeah. Exactly. More than you could ever keep track of. And two of the really popular ones that I know of, anyway, are AVC, Advanced Video Coding, and HEVC, High Efficiency Video Coding. And I think they're abbreviated H.264 and H.265.
Mark (01:56:35):
Yeah. You see a lot more H.265 now.
Lou (01:56:37):
Yeah. So that's coming up a lot now. And does that change the I, P, and B frames and the motion prediction? Is there a big difference between those, or have you found that they're pretty similar?
Mark (01:56:51):
Yeah, so the reason why H.265 exists, and it supersedes H.264, is to let you compress files more. Think of the cameras that we carry around in our pockets. What's the latest iPhone? A 20-, 24-megapixel camera? And it's recording video at something like 12, or is it 15, megapixels? That's only ever going to get more, because camera manufacturers sell their latest phone on how many megapixels it's got. It's one of the things you'll see. So you're going to get more and more color information, more pixels, more data. And in order to still be able to use it on YouTube, we're going to need to compress more. So it's a different compression protocol, but it compresses larger files more heavily.
Lou (01:57:50):
And your analyses of motion in them seem to remain generally consistent. You go through the same process and it behaves in a similar manner anyway.
Mark (01:58:02):
Yes. Because ultimately, I have two functions as a codec or if I'm designing a codec. What I need to do is I need to give you enough information that makes you feel like you've had a really good experience, and your brain can fill in all the blanks for me, and at the same time, strip out as much information, or as much of the file size as I possibly can.
Lou (01:58:30):
Yeah, it's a bit of a psychological trick. More trickery. I mean, there's a lot of trickery that we've identified so far, and that's probably not the end of it. I suspect you have other artifacts that can trick the brain. And that actually brings up something that I'm interested in for myself when I'm doing a video analysis, and that's motion blur. So if you're looking at a nighttime video and a car goes through, and you might see 12 pixels, the taillight spans 12 pixels, and then you go to the next frame and it spans 12 pixels again, when you're doing the motion analysis there, what's your take on... Do you start both of them at the beginning of the taillight, at the end of the taillight, somewhere in between? Or is there a range you have to account for?
Mark (01:59:13):
So we need to be careful here to understand what a rolling shutter is, that one part of an image is captured at a different time to another part. But if we park that argument for a second and say that this is a global shutter, where you're more likely to get that motion blur, well, you would get that blur-
Lou (01:59:36):
Yeah. Where the whole shutter's going to open at once and it's all going to close at once.
Mark (01:59:40):
Yeah. Which is how you get blur generally. What you want to be doing in that case, as a general rule, is picking the same part, because you know that the first part of the taillight was when the exposure first started.
(02:00:03):
... it doesn't actually have a physical aperture, but when the sensor started recording and the end of it was when the sensor shut off. So as long as you're using the same part, either the beginning part of the exposure or the end part of the exposure, you should generally be okay. What you wouldn't want to be doing is mixing those two. Yeah, you wouldn't want to mix the two because the time interval's different.
Lou (02:00:26):
Yeah.
Mark (02:00:26):
Because you'd be accounting for a frame rate plus an exposure.
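[Editor's note: the size of the error Mark is describing is easy to put numbers on. All figures below are invented for illustration: 25 fps playback, a 10 ms exposure, and a true speed of 20 m/s.]

```python
# Why mixing blur edges matters: measuring from the SAME edge of the light
# streak in consecutive frames spans one frame interval; measuring from the
# leading edge in one frame to the trailing edge in the next spans the frame
# interval PLUS the exposure, so dividing that distance by the frame
# interval alone over-reads the speed.

def speed_kmh(distance_m, interval_s):
    return distance_m / interval_s * 3.6

true_speed = 20.0          # m/s, the vehicle's actual speed
frame_interval = 1 / 25    # 40 ms between frame starts at 25 fps
exposure = 0.010           # 10 ms the sensor is gathering light

same_edge_gap = true_speed * frame_interval                 # 0.8 m measured
mixed_edge_gap = true_speed * (frame_interval + exposure)   # 1.0 m measured

correct = speed_kmh(same_edge_gap, frame_interval)     # nominally 72 km/h
over_read = speed_kmh(mixed_edge_gap, frame_interval)  # nominally 90 km/h
```

A 10 ms exposure against a 40 ms frame interval skews the result by a quarter, which is why the edge you pick has to be consistent from frame to frame.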
Lou (02:00:29):
That was a selfish question because I see that all the time and that's currently what my thought on that was, but I'm glad to hear it corroborated by somebody more expert in video analysis than me.
Mark (02:00:45):
We're assuming that's not done disingenuously. It's not done deliberately to try and get the speed of the vehicle up or down, or to bring the gap closer. It's a lack of understanding, and it comes back to what we said before: if you're not really happy with the concepts, if you don't really understand how an image is captured in its basic sense, that there's an exposure, however that's done, and a fixed period of time that the sensor is capturing, well, you can easily fall into those traps. Not deliberately trying to be wrong, but you go, "Oh, I can see a really nice line here and a really nice line here," and you make that mistake without realizing.
Lou (02:01:27):
Yeah. We mentioned this before recording, and it's something we're seeing a ton of right now: law enforcement officers specifically, and I understand why they do it because it's the easiest thing to do, they show up to the building, the business that has the video, and they don't have a USB drive, they don't know how to export the video, whatever it is, so they whip out their iPhone and video the video, and that's what we get. Then two years later we get the case, and of course that data's not still on that surveillance system, so this is what we have. One, is that the best way to record video? And two, what do we do if that's what we get?
Mark (02:02:11):
So no, that's absolutely not the best way. I would consider that to be the very, very last resort if you can't do anything else. It's better to get that than nothing, sure, but you are very much in that territory: it's your last option if you've exhausted everything else, including seizing the video system. I know that causes difficulties in the U.K. as it does in America, but I'd even put seizing the system above that choice.
(02:02:42):
If you don't know how to download it, okay, grab it on playback just to get the initial capture, but then send somebody who does know what they're doing to it. Don't take that as your evidential recording. You need to think of it not as "I've captured the CCTV," but as though you haven't captured it at all. It'd be the same as using your body camera to record some fingerprints in blood that you saw at a murder scene. You wouldn't go, "Yep, I've got those fingerprints."
Lou (02:03:18):
All done.
Mark (02:03:19):
You just wouldn't do it. It's crazy. But you might go, "Well, I'm just going to quickly capture it now and then send somebody who can," and that's okay. But you'd never sit there and go, "Yeah, got them." The reason you don't want to do that is that there's so much information hidden behind a video file, in the metadata, all that stuff, so the bit that you see on the screen is only really part of the video file. So we need to make sure that when we capture it, we capture it all.
(02:03:50):
Also, when we've got that video file, we can do some cool tips and tricks with it to make it more readily viewable because we've actually recorded those pixel values and we can enhance them in various different ways, which isn't for now, but that allows us to do that.
(02:04:08):
Thirdly, and finally, we've talked about how these compression algorithms, these predictions of frames, these I, P, and B frames and codecs and macroblocks, and we've thrown a load of technical terms around, we can work backwards from those, generally speaking, if we've got the original file, because we can interrogate it to see where those pixels have moved. If you record over the top of it, then when we run any kind of analysis, all we are seeing is what your body camera did to that footage. We're not seeing what the camera did to the incident footage, and you've completely obliterated any potential we had to work out what happened at the scene. So it is my biggest bugbear; you've hit on it. Should you do it? Only if your other option is, "I'm never going to get this footage."
Lou (02:05:09):
Yeah. I'm with you, it's one of my big pet peeves as well. We see it a lot. I have a really big case right now where that's all I have. I reached out to the owner of the system, but of course it's two years later and he's like, "I don't have it. This is what you get." It seems to me that photogrammetrically speaking, we're now looking at two fields of distortion that we're going to have to correct for. I can do that depending on how much control I have so I could probably figure out positions, where is one vehicle with respect to the other vehicle? But frame timing, man, am I ever going to be able to get frame timing accurately pinned? What's your take? Is there anything you can do there? Or is a speed analysis just not feasible at that point?
Mark (02:06:00):
It gets very difficult. I have had some success. Generally speaking, if your secondary recording device is recording at a much higher frame rate than the original footage on playback, about three to one, then you probably have enough to backwards-engineer it. But to be honest, we shouldn't be in the situation where we are doing that. I mean, if you want to spend an hour in the box answering questions just on timing, use your mobile phone to record everything, but we don't want to do that. We really don't want to do that. And actually, you don't know whether you can do anything with it until you've really spent a lot of time working on it. So the short answer to that is, can you do anything with it? Sometimes, if you get-
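[Editor's note: Mark's roughly three-to-one rule of thumb can be sketched with stand-in data. This editor's illustration treats frames as content hashes and ignores everything a real re-recording adds, rolling shutter, auto-exposure, and variable phone frame timing, so it shows the idea only.]

```python
# Back-engineering idea for video of video: if the phone films the screen at
# roughly 3x the original frame rate, each original frame appears in several
# consecutive captured frames. Where consecutive captures differ, the
# underlying video advanced a frame, so the spacing of those transitions
# estimates the ORIGINAL frame interval.

def transition_times(capture_times, frame_ids):
    """Capture timestamps at which the filmed screen changed content."""
    return [
        capture_times[i]
        for i in range(1, len(frame_ids))
        if frame_ids[i] != frame_ids[i - 1]
    ]

def mean_original_interval(changes):
    """Mean spacing between transitions: seconds per original frame."""
    gaps = [b - a for a, b in zip(changes, changes[1:])]
    return sum(gaps) / len(gaps)
```

With real footage you would compare the images themselves rather than hashes, and the recovered interval is only as good as the phone's own frame timing.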
Lou (02:06:58):
Yeah. So the biggest thing we can do is educate the community and let them know that that's bad for us, bad for anybody trying to analyze the collision, and that the best thing they can do is find somebody who can download it, or poke their way through. In my experience, it's generally not that hard to figure out how to export something from the native surveillance system. Maybe you're not exporting the best thing, but at least you're exporting something native to the system.
Mark (02:07:30):
It will still be better than recording it with a mobile device. And depending on the nature of the incident, if you're dealing with somebody's death, well, then you really do want to be sending the specialist to it. It's what they do. You wouldn't send Barbara from the bakery to do blood pattern analysis, so send the right people to it when you can.
(02:07:59):
So yeah, you can go through it... And I have to be slightly careful because somebody will always buck the trend, but it's actually hard to delete footage, properly delete it, because typically you have to go through two menus to delete something. That being said, there will always be somebody who manages to delete footage. But the general rule is: we don't want to be taking video of video. If you've been trained and have the knowledge to get it yourself, do that. If you don't feel comfortable doing that for whatever reason, get somebody who can, because video of video is just an utter night-
Lou (02:08:43):
I think that's important for everybody to hear. All right, so we're going to switch gears a little bit. We've already covered the future and how things have evolved a little bit, we're going to go into what I call a speed round, and I get made fun of for it because nothing I do is that speedy when it comes to having a conversation with somebody like you, but we'll start talking about the future of video analysis. To put that into context a little bit and to start our brains heading down that path, how have your analyses changed over the past 15 years when you started looking at video to now?
Mark (02:09:22):
Yeah, so software, I think, and 3D modeling has been the real change. There was lots of stuff that I used to do about re-attending the scene and repositioning things or looking for lines on a road surface or physical features. 3D laser scanning, being able to convert that 2D picture, which is CCTV, a 2D representation of the 3D world back into 3D has been the biggest change and is brilliant.
Lou (02:09:50):
Yeah, that's true for me as well. I've already voiced my love of photogrammetry using point clouds. It tickles my fancy. I know that I have it right when everything lines up. Great visual feedback there.
(02:10:09):
So what are you most excited about in the field of video analysis right now as far as development or capabilities that you didn't have before?
Mark (02:10:17):
Yeah. So I think we're on this ever-increasing line of quality, better resolution, better time intervals, so we are going to be able to see more and more. What I'm actually most excited about is what we can use the video for, which isn't just about calculating vehicle speeds. There's already some [inaudible 02:10:43], for example, in terms of human factors; we might be able to explore even more in terms of perception-response times, because we can see how people respond to various different things. So the data grab that's going to lead to, in understanding driver behaviors, I think will just continue to grow.
Lou (02:11:01):
Yeah, I'm excited about that too. One of the things that I'm hoping to do is perform naturalistic studies of motorcycle braking rates on their way into impact, because that's a big gap in the literature right now. And now we have this very reliable video out there, and EDRs on Kawasakis. I was just giving a presentation in SoCal yesterday, and one of the officers came up to me at the end, and he said, "I have Kawasaki data," so EDR data from this motorcycle, "and I also have great video of it." And I was like, "Well, that's a perfect study to put into a paper." If we can accumulate 20 of those cases where we can very confidently establish the braking rate of the motorcycle on the way into impact, then we have the best data set available. So I agree, I'm totally excited about that.
Mark (02:11:51):
We know a few bits and pieces. If they lock up the front wheel, we know that kind of thing. But what do riders actually do when they don't lock up the front because they're trying to balance it? How does that look? Is it .5? Is it .6? Do we know? We might know that the front wheel's scrubbing rather than sliding before we go down, with ABS, but what are they actually doing? Why are they doing it? And does everybody hold it at that point? Yeah, absolutely, any of those kinds of things.
Lou (02:12:21):
Yeah, it's a big gap in the literature right now, where we have a lot of controlled testing. We don't have a lot of naturalistic testing, and the naturalistic data that we do have is very strange in that it suggests riders are braking at .35 to .4 g on their way to impact, which is not consistent with what I'm seeing in my casework. So there's a lot more work to be done there.
(02:12:42):
On that front, just a call to the community: if you're listening to this and you have Kawasaki EDR data or really good video of a motorcycle pre-impact that we could use to calculate pre-impact braking rates, and you're able to share it, please shoot me an email. I'd like to incorporate it into an upcoming paper. And that brings me perfectly to my next question: are there any gaps in the literature with respect to video analysis that you're seeing?
Mark (02:13:10):
Yeah. It's going to sound ever so slightly self-serving, isn't it, but that's what we're trying to write. There are lots of texts that cover forensic video analysis for various forensic disciplines, and they're great, and there are loads of things out there for understanding how video works in computers and cameras. What we don't have a lot of is how we then apply that video knowledge to collision investigation, those kinds of things. And the more people that join in and write and do papers on this stuff, because it's lonely out here on your own sometimes, the more people that get involved in our industry, the better.
Lou (02:14:05):
Yeah, I agree. Like I was saying before, there are not a lot of collision reconstructionists who, at least publicly, do a lot of work with video analysis and make that information available. So I agree, I'd love to see it. That's why I was ecstatic to see that your class exists, and I'm really looking forward to sitting down and taking it. So for those listening, hopefully in 2023, I don't know, Mark, if we're going to make that happen, but probably in 2023 we're going to hope to get Mark to Southern California for a five-day live class.
Mark (02:14:40):
I only come out when it's sunny.
Lou (02:14:41):
Yeah. Well, you should be good because we just used up all the rain that must exist in Southern California the past three months. I don't know how it's still cold and rainy here, but I feel like I'm in Seattle.
(02:14:54):
Are there any tools that are in your current kit, or in a lot of reconstructionists' or video analysts' kits now, that you don't think will be necessary in 5 to 10 years or won't be there?
Mark (02:15:08):
Yeah. I think not necessarily a piece of kit, but the need to re-attend a scene to position objects. I mean, there might be the very rare occasion, but to go back or physically walk down the road taking meter measurements and seeing what a pole or a meter looks like, or trying to position a car back, I think that will go. We're probably not a million miles away from that going pretty soon just because of the things that 3D laser scanning brings to the party, or any kind of photogrammetry actually in that sense, any kind of point cloud generation.
Lou (02:15:48):
Yeah. How ubiquitous are laser scanners over there right now for law enforcement and private?
Mark (02:15:53):
Yeah. Nearly every police force has one. In fact, I think they probably all do have those and deploy those at scenes, although there are still total stations being used quite regularly, which is a bit of a shame because I think the laser scanner will... Unless you've got a road surface that's covered in snow or is under two inches of water, the laser scanner's pretty much going to get you more information in those circumstances.
(02:16:28):
So really common for police. In private practice, not so much. There are probably only a few of us that have TLS solutions. But things like photogrammetry, as in creating a point cloud from a series of photographs, is becoming more common, as are LiDAR-based iPhone-type solutions. But anyway, I mean there are various papers on the accuracies of those, and of course a really expensive TLS will win every time, but the other ones are probably pretty good for our practice. So have a look at some of those things because they are usable and they do work.
Lou (02:17:22):
Yeah, I agree. I think that you can get yourself a pretty reasonably priced drone and pair it up with something like RealityCapture from Epic and make a really nice point cloud and a really nice ortho. If you have all the money in the world, Pix4D does a great job too, or more money I should say. If you have all the money in the world I guess you're buying an RTC or a RIEGL or something.
(02:17:46):
And then the inverse of that question, what tool do you think will be in everybody's kit in 5 to 10 years that they might not necessarily have now?
Mark (02:17:58):
Yeah, I think that's the answer. I think it will be a way of generating a point cloud because it is over here just catching on, people using sort of GoPros on poles or drones or whatever. Yeah, I think I'd be fairly confident that that would be in everybody's toolkit in the next, well, five years.
Lou (02:18:26):
It's just getting easier and easier. I mean, I had a conversation with Anthony Cornetto a couple days back, and he's a big photogrammetry 3D modeling guy, and he's pulling out the stereo camera assembly, an array that's just in one assembly, and you could just walk around and use that as a photogrammetry tool to map anything.
Mark (02:18:46):
It's a stereoscopic sort of-
Lou (02:18:48):
Exactly. If you put two of those on the outskirts of your pickup truck, well, I know you don't have a lot of pickup trucks there, but in America that's all we do, pickup trucks everywhere, but they're like eight feet wide. You drive down a lane and you have a couple cameras on the sides of the windshield, and can we create a great point cloud just from that, just drive through real quickly? If you're at 200 frames a second or something, which is not too farfetched now with the GoPros, it seems possible. So it'll be really interesting to see how that develops, if you could just drive down the roadway and map everything out, not put yourself in harm's way.
(02:19:28):
So AI, obviously a hot topic right now with ChatGPT and all that, hasn't really been very practical until recently, it doesn't seem, but video analysis seems like a spot where AI could really dig in and start analyzing some of those patterns you were talking about earlier with frame rates and macroblock analysis, which I guess already could be perceived as some sort of AI, but how do you see AI affecting video analysis, if at all?
Mark (02:19:57):
Yeah. This is a tricky one because obviously we are never going to stop AI rolling out, but there's a number of different papers... It's almost like it's split the forensic community, hasn't it, AI? You seem to have to be in one camp or the other that this is totally inappropriate to allow a computer to do a forensic investigation or this is going to change the world.
(02:20:21):
I think there will be areas where it will be very useful in video analysis. I think calculating a vehicle speed will be still not AI, as in it won't be computer-learned. It may well be computer vision, as in you give it some dimensions of the road and it calculates the speed from the pixel values, but that's not AI because it's not learning. I think things like understanding frame rates of cameras, building that database of, "I know what this camera is and I can then predict its frame rate from all of the other information that we've got from cameras of its type." Things like vehicle ID, perhaps it's already rolling in terms of facial ID, another hot topic. But those kinds of things, for fail-to-stop collisions, hit-and-runs where cars flee the scene and you've got to ID a vehicle, I think probably there. But nobody really knows, so we don't quite know what's going to happen with AI. But video does seem to be an area where it would play. Digital data is what it loves, and video is digital data.
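The computer-vision speed calculation Mark describes, using known road dimensions to turn pixel displacement into speed, can be sketched in a few lines. This is a deliberately simplified illustration under assumed values; the function name and numbers are hypothetical, and a real analysis would also have to handle perspective, lens distortion, and variable frame timing.

```python
# Minimal sketch of a pixel-displacement speed calculation: a known
# road dimension (e.g. a lane marking of known length) establishes
# scale, and a vehicle's pixel movement between frames gives speed.
# All names and values here are illustrative assumptions.

def speed_from_pixels(pixel_disp: float,
                      known_length_m: float,
                      known_length_px: float,
                      frame_interval_s: float) -> float:
    """Speed in m/s from pixel displacement over one frame interval."""
    metres_per_pixel = known_length_m / known_length_px
    return pixel_disp * metres_per_pixel / frame_interval_s


# A 3 m lane marking spans 60 px in the image; the car moves 10 px
# between frames captured at 25 fps (0.04 s apart):
v = speed_from_pixels(10.0, 3.0, 60.0, 0.04)
print(round(v, 1))  # 12.5 m/s
```

The scale factor only holds where the image plane and the measured road feature share the same geometry, which is why real tools solve for the full camera projection rather than a single metres-per-pixel ratio.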
Lou (02:21:32):
Yeah. That's a really cool idea you're saying with vehicle identification in hit and runs, it's like a CAPTCHA. If you could have a human say, "Oh, that's a 1988 Oldsmobile," and you do that enough times and the computer's like, "Okay, I know what they look like now," and that would be huge. Those are really tragic cases when a pedestrian's hit or something and they flee the scene and nobody ever knows who did it. It's brutal.
Mark (02:22:00):
And it also fills into sort of the brief, you need something done very quickly. That's why AI works, isn't it? Because you need to do something much, much quicker than a human would ever do it. And I think that fits because you've got to trace that vehicle as quickly as you can. So that for me, I probably lean towards that way, but we're all guessing. It's going to be an exciting time.
Lou (02:22:25):
Yeah, exactly. Speaking of guessing, the next question is based on autonomous vehicles. It's funny, if you go back in history and look at futurists' predictions about when autonomous vehicles are going to take over, everybody's been extremely wrong so far and it's been a much more challenging project than people estimated it would be. But I wanted to ask you a question specifically about autonomous vehicles in video. I know you're already talking about every vehicle... You get a Tesla and it's got, I don't know, five or six cameras on there. Can we get access to that? Sometimes if the driver has a USB in there and they have the dashcam feature activated, we can. How do you see that playing out in 10 years? Are we going to have access to all of that video from every vehicle that has them?
Mark (02:23:11):
Yeah. So I think this is a very, very similar argument to EDRs that we were talking about earlier. There is data on that vehicle that would be helpful. Sure, we need to analyze it. Sure, we need to be careful with it. But it's data on a vehicle that could help dramatically. And I think there has to, there just has to be some work done around getting hold of that data, and I think EDR has already set the precedent to that. Yes, okay, there'll probably be some things about, well, it captures people's faces and it presents a slightly different sort of personal data element, but that for me is just so analogous to EDR data that it's got to, hasn't it?
Lou (02:24:01):
Yeah, I tend to agree. I really hope it does, because there are some questions you just can't answer without information like that, and if that data's sitting there while we're prevented from performing a complete analysis, it seems tragic.
Mark (02:24:18):
The only thing that I've seen... I have seen camera footage from a Tesla, and the timing issue is just all over the place. And I think there's a distinction here, as we said, between a camera used for the autonomous function and a camera used to capture the outside world in a traditional camera sense. If it's being used as part of the vehicle system, whether that sits on conventional [inaudible 02:24:52] or something like a Tesla has a different communication protocol for the autonomous function, well then the usual issues play I think with, well, what's [inaudible 02:25:02] doing? What are we doing with timing? What is the signal that's going round? Is there a CAN high, CAN low for these kinds of things? Does it prioritize data on it? Loads of questions, but a very exciting time.
Lou (02:25:15):
Do you think there's going to be any other sources of video that pop up in the next decade that we haven't really put our finger on?
Mark (02:25:20):
The only thing that could potentially do that would be some kind of aerial surveillance, I think, where maybe drones are permanently in the sky. Personally, I think that's a little bit too sci-fi for the actual likelihood. More and more cameras, more and more people capturing things on camera phones, more and more dash cameras, cameras on vehicles, so I think that's the way that it would go. But the only gap I think would be some aerial view, but we'll see. We'll see.
Lou (02:25:53):
Yeah, exactly. One of the things that I always wonder about is all the satellite imagery we have now: will there come a point when they're all capturing video and we could watch any intersection at any time, historically or currently? That would be nuts.
Mark (02:26:10):
That would be. Probably not in my lifetime, but I might eat my words.
Lou (02:26:18):
Yeah, we'll probably all be killed off by AI by then anyway.
(02:26:25):
So in 15 years from now, we're looking at 2038, what do you think a typical video analysis is going to look like? Is it going to be pretty much what it is now or are there going to be different tools, different techniques? What do you think?
Mark (02:26:41):
Yeah. I think I sort of alluded to it slightly earlier on the AI question. I think machine vision will help. I think we will give it the parameters, as in the road sizes, and then effectively let it calculate the pixel changes, the pixel displacements, and it would do it without our involvement. I think that that is coming. Yeah, there'd be various controls like anything that automates, but yeah, machine vision, I think, for speed calculations.
Lou (02:27:18):
So the human comes in, we provide the judgment, we provide some of the foundational information, and it cranks through and does all the work for us and saves us the 40 hours or whatever we're spending right now performing those analyses. I would love that. I'm happy to spend time to use my judgment and experience and help a case along, but the tedious, repetitive work, I would love to pass that off to a machine.
(02:27:45):
Well, yeah, so we've gone two and a half hours in, and I wanted to start to wrap things up. I really appreciate you taking the time, especially, I know what... It's pretty late for you there right now. I think it's going to be nine o-
Mark (02:27:59):
Quarter past nine.
Lou (02:28:00):
Yeah. I couldn't tell. You looked bright-eyed and bushy-tailed.
(02:28:06):
So where do people find you if they want to reach out? What's the best way to follow you and see what you're up to?
Mark (02:28:11):
Yeah, so LinkedIn. I'm on LinkedIn, and FCIR is on LinkedIn as the full name, Forensic Collision Investigation Reconstruction Limited. And our website, www.fci.co.uk, or just reach out. I'm normally around talking at something or other, people can't get rid of me at the moment, it's great. But yeah, just ask questions. As you've probably gathered through this podcast, I love talking about video. In fact, you can't get me to stop. My wife thinks I'm great fun at dinner parties. But yeah, no, always happy to talk if there are questions, things you come across, anything that's slightly odd, something that you need a bit of a steer on, please, please reach out.
Lou (02:28:57):
Awesome, thanks Mark. And then this is the book, I have a copy here at the office. And then Mark is publishing the second edition, so I'm going to have to get my hands on that and see what else has been updated other than what we discussed. But thanks again, Mark.