JEFF MUTTART | HUMAN FACTORS

Lou sits down with Jeff Muttart to discuss driver response, autonomous vehicles, training novices, and the evolution of recon.

You can also find an audio-only version on your favorite podcast platform.

A rough transcript can be found below.



Timeline of Topics:

00:01:22 The origin of a data junkie

00:13:02 The 16-year-old driver

00:23:52 Moving from the police department to academia

00:31:36 Jeff’s first success

00:40:07 The PRT of automated vehicles

00:52:14 Why are ped fatalities on the rise even with the increase in ADAS-equipped vehicles?

01:04:42 Analyzing the SHRP-2 data

01:17:53 Where are humans most vulnerable?

01:29:27 Access to video

01:39:39 Age as a factor

01:54:30 Jeff’s current toolkit

02:12:55 Autonomous vehicles

02:28:33 Predictions for the future

02:36:27 Motorcyclist PRT research

02:44:58 Applying the literature

02:53:22 What tool will be gone in 5-10 years?

02:57:02 Driver Research Institute and the future


Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.

Lou (00:00:19):

This episode is brought to you by Lightpoint, of which I'm the Principal Engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analyses. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, you know it takes hours of searching for the perfect model, awkward conversations with dealers, and usually some cash to grease the wheels. Then back at the office, it takes a couple more hours to stitch and clean the data, and that eats up manpower and adds a lot to the bottom line of your invoice. Save yourself the headache so you can spend more time on what really matters, the analysis. Lightpoint has already measured most vehicles with the top-of-the-line scanner, Leica's RTC360, so no one in the community has to do it again. The exemplar point cloud is delivered in PTS format, includes the interior, and is fully cleaned and ready to drop into your favorite program, such as CloudCompare, 3ds Max, Rhino, Virtual CRASH, PC-Crash, among others. Head over to lightpointdata.com/data-driven to check out the database and receive 15% off your first order. That's lightpointdata.com/data-driven.

(00:01:22):

All right. My guest today is Dr. Jeffrey Muttart. Jeff Muttart began his career in crash investigation and reconstruction as an accident reconstructionist for the Groton, Connecticut Police Department in 1985. Since then, he's earned a master's degree in experimental psychology from the University of Hartford and a PhD in industrial engineering and operations research from the University of Massachusetts. At both universities, his research interest was related to drivers' response behaviors, which we'll get a lot into. For the past 30 years, he has compiled and conducted scientific research to determine what caused drivers to respond as they did. He has authored more than 70 technical book chapters and scientific studies on traffic safety topics and has been a sought-after speaker, giving more than 200 lectures throughout the world. He has also earned several awards for his research and contributions to driver safety, and his opinions have been frequently sought by reputable sources such as automobile manufacturers, government safety agencies, and standards committees. So thanks for taking the time out of your day... I'm sure you're slammed on a day-to-day basis... to spend some time with us.

Jeff (00:02:34):

Good to talk to you.

Lou (00:02:36):

So I thought we'd start at your beginning, which seems to have predicted your current career a little bit. You and I, we've talked about this in the past. We're both data junkies. It's what makes us happy. It blows our hair back. But I'm not sure that my obsession for data manifested as early as yours.

Jeff (00:02:58):

So it took you 30 seconds to do a hair joke.

Lou (00:03:01):

Yeah, exactly. I figured yeah, and there probably will be more. So I didn't really get obsessed with data until later on in my career, but from what I understand, it really took a hold of you early to the point where your mother identified that you were a nerd when you were very young.

Jeff (00:03:21):

Yeah. Well, my high school baseball teammates, I annoyed them as well, because they would make fun of me for the fact that I'd know my batting average before I even got to first base or before I even got back to the dugout after the at-bat, or I'd know how my ERA changed when a player got another hit off me.

Lou (00:03:48):

Yeah. So were you documenting that somehow or was that just all in your head?

Jeff (00:03:52):

In my head, yeah. Well, and yes, even though we played seven-inning games, I always calculated my ERA based on a nine-inning game. That would lower it.

Lou (00:04:04):

Yeah. That's what the pros are doing. Oh. Okay. I gotcha. Yeah, and it doesn't shock me that you can keep that information straight in your head. Anybody who's ever taken one of your classes, it seems pretty clear that one of your super strengths is really memorizing the research. And we've talked a little bit about that in the past, and I know you said your memory is not great for everything, but when it comes to the things that really matter to you and to your career, they just stick.

Jeff (00:04:31):

You know what it is? It's a big story to me, how drivers behave. When I teach, when I describe it, to me, I'm describing a story of what affects drivers and likely what's going on in their minds. We don't know what's going on in their minds, but we can see how they're behaving and how different things change that behavior. And it creates a story, and it makes it easier when I read another study to add that study to my story. And so I think that's a little bit what helps. I'm not memorizing studies. I'm taking studies and adding them to my story.

Lou (00:05:25):

Yeah, I got you. And it seems to me that you are able to also memorize simultaneously the quantitative values. In other words, the average response time in this situation was 1.4 with a standard deviation of 0.5 or something.

Jeff (00:05:41):

Well, that comes from just repeating it over and over again. So for example, a couple of new studies came out. As a matter of fact, we haven't even discussed this, but a couple of motorcycle studies came out this past year. And so I put it into my course materials, but then we have the advanced class, and then we have the software class, and then we have the introduction class. And then we have the book, and then we have the software. And I keep repeating these numbers over and over again in my head and in our spreadsheets and in our data, and it makes it easier to remember.

Lou (00:06:27):

Yeah. I've noticed that as well. I think that my skills and ability to recall studies have grown substantially since I started teaching them to other colleagues. That's been a big help. So-

Jeff (00:06:44):

And that's what I tell everybody that takes a class: the first time they take one of our classes, sometimes it can be daunting. And I say to them, "Just hang on. It grows. It grows." And once you have that one or two studies, and if you are a data junkie, it becomes 6 and 10. Then next thing you know, it's dozens over time.

Lou (00:07:16):

And in my experience, that is one of the things that really elevates your ability to accurately reconstruct a crash, or in your case, analyze human response, is familiarity with the research and the ability to call on the appropriate research when necessary. It's having the study that really hits the nail on the head makes such a big difference for analyzing a specific case.

Jeff (00:07:43):

Well, each crash gives us another... That's the other thing. I was always the youngest one in the back of the classroom, and at some point, I ended up being one of the older guys in the front of the classroom. And I really don't know when that happened. It just did. It just seems like yesterday that I was the young guy in the back of the classroom. But having years of experience in this field... They always say professional athletes have an advantage if they started at a younger age. And I think there is some advantage to that, because I started so young in this field. The first fatal crash I was at, I was 23 years old, and that was 1983. And 40 years-

Lou (00:08:48):

From what I recall, you got your bachelor's in economics and then went straight to law enforcement. And you started the accident investigation unit or reconstruction unit at Groton PD. Is that right?

Jeff (00:09:03):

Yeah. Well, it even started before then at the police academy, second day of the police academy. Day one, I called my wife, and it was the regular hazing of the new recruits. And I went, "I don't know. I don't know."

Lou (00:09:22):

This might not be for me.

Jeff (00:09:23):

"I don't know." Yeah. And day two, first class, it was 7:00 in the morning, first class. It was crash reconstruction, crash investigation. And I called my wife that night and I said, "I know what I want to do the rest of my life."

Lou (00:09:42):

That's amazing.

Jeff (00:09:45):

And then I just annoyed everybody at the Groton Town Police until they sent me to every training class that I could take, and I took classes on vacation, and it resonated with me.

Lou (00:10:01):

And the combination of the data and the mathematics, the analytics, do you recall what it was exactly that drew you in so quickly?

Jeff (00:10:11):

You know what? There were so many things. Number one, I got a murder case right after getting out of the police academy, and I got a fatal crash right out of... I was one of those guys with the black cloud over me. So no murders in the town of Groton for 10 years, and I'm out of the police academy, and three months on, I get a murder that I go to. But the murder, of course, gets ripped away from that line officer. But I get a fatal crash to investigate about a week earlier than that, and it's mine to keep. And I was like, "Oh. I can work-"

Lou (00:11:02):

Big responsibility, but also exciting.

Jeff (00:11:03):

Big responsibility. Yeah. It's the same thing. A person died and they're going to allow me to explore this case, to investigate the case. And so that was one thing that was very enticing to me. The other thing is that it was so much data and so much math, that just like you were saying, my mother noticed that. And all the neighbors' mothers also would say, "Jeff, can you just play without keeping statistics on everything?" But it's what I've always done. I just love statistics, whether it be sports or playing or calculating. I just always loved... Because it gives you insight into behavior. And early on, even when I was 9 and 10 years old, I could see that having statistics gave you insight into behavior. And that's really what I find so fascinating with driver behavior statistics, is you can see what drives behavior and driver response times and driver response choices.

Lou (00:12:35):

Yeah, I always found that really interesting too. And like you and I have discussed in the past, you're establishing that baseline with the stats and the research. And then in a particular case, if you are able to quantify the response time, if you have enough evidence to do that, then you can compare them to that baseline and make some sort of observation or calculation, I guess, that helps everybody understand what happened.

Jeff (00:13:02):

There's even more now with the advent of the eye-tracking equipment where we can predict with pretty good certainty where an experienced driver is going to glance next. And we cannot predict with any certainty where the 16-year-old driver's going to glance next. Right?

Lou (00:13:29):

Yep. I remember that. I've taken your class, and there were people looking at planes when they're entering an intersection.

Jeff (00:13:36):

Yeah. A 16-year-old's glance pattern is like a random number generator. It's like you just don't know. And it amazes me when still to this day, driver training manuals say drivers should scan. And the only drivers that scan are 16-year-old drivers or drivers who aren't paying attention. Because if you're scanning, you're not predicting. You're not anticipating. Drivers who are experienced, and usually over age 25, by the time they've gotten to age 25, they are anticipating who is going to interfere with their driving next. And so their glance patterns reflect that. And you can see just by their glances what they're thinking. They don't trust that guy. Oh, they don't trust... Oh, look at where they're looking now. They don't like that. You can see the pattern of glances. And when you see... Drivers over and over and over again will look to the same areas. You can see we do develop certain behaviors, certain anticipatory behaviors when we drive. And it's fun to see how similar we all are in many ways as well.

Lou (00:15:10):

And can you bring those novice drivers closer to the 25-year-old, or all the way to them? And if so, how quickly can you do that? In other words, could you work with a 16-year-old for a week and get them pretty close to a 25-year-old if they're heeding your advice?

Jeff (00:15:27):

Here's the thing. You can't make a 16-year-old 25, but we can try to make them close. And if we give them enough information, if we see what experienced drivers, what they do... Where do they put their foot? Do they come off the throttle? When do they come off the throttle? When do they move in their lane? Where do they look at what point in time? And if we can look at the teen drivers or commercial drivers or whoever we're looking at and see where they're looking, and if we can fill in the gap... Where are the experienced drivers looking who do not crash? And then where are the novice drivers looking who tend to have an average of one crash a year? And if we can teach that teen to be better at glancing as if they're a 25-year-old, their performance improves. Now, in the research we've done, we've had very good success in getting the novice drivers to behave as an experienced driver has. But in real life, I would say if we can get close, that would be a nice goal in real life.

Lou (00:16:59):

Yeah. Because they don't have that same mental maturity. They don't have the experiences. They don't have the same judgment level. There's just a lot. Like you're saying, you can help them make that search pattern and that's going to help, but you're not turning them into an experienced driver.

Jeff (00:17:13):

Correct. But there's been good research. The University of Massachusetts has worked on the RAPT, the risk awareness and perception training, and this is the only driver training that I'm aware of that has been validated and shown to actually be associated with reduced crash risk, even in teen males, who are the most susceptible to crashes.

Lou (00:17:49):

Yeah. I remember being one. Yeah.

Jeff (00:17:57):

Yeah. It was tested out in California, and it was found that drivers who received the placebo training had the typical crash risk. Those who had the RAPT training had reduced crash risk. And that was some of the research I was doing at the University of Massachusetts: I was working on the mitigation training, the tie-in with the RAPT training. So RAPT was risk awareness and perception training, teaching drivers to glance better, more efficiently.

(00:18:34):

And then my part of the training that I developed was, all right, after you anticipate the hazard, where should your foot be? Should it still be on the throttle or should it be off the throttle? Should it be on the brake, or should it be on the brake hard? And so we were looking into the risk mitigation training. And what really motivated me towards that training program is that a bunch of friends on the police department had a charity event, a scramble golf tournament. Now, as you can probably imagine, I don't get to golf too often. And so I decided to go down to the driving range and get some training, and I just didn't want to suck. I just didn't want to be really bad.

Lou (00:19:27):

I've been there. Yeah. I've done that same thing,

Jeff (00:19:29):

And I figure, "Hey, if they can use just a couple of my balls, I win in the golf tournament there." And so I go down there and he says, "Well, come back every Thursday night at 8:00 for eight weeks." I'm like, "Wait, you're going to spend 45 minutes with me on eight nights to hit a stationary ball." And he's telling me where to put my feet, where to put my legs, where to put my elbow, how to hold the club. It occurred to me that we're not giving our novice teen drivers anything like this kind of training, and they're driving a 3,500-pound weapon. And so-

Lou (00:20:17):

And everything is moving around them. The ball is not stationary.

Jeff (00:20:20):

Right. My research in the driver training realm was to identify specifically where do experienced drivers come off the throttle? Where do experienced drivers hit the brakes? When do experienced drivers hit the brakes hard? And where do they swerve? How do they swerve? And to teach the teen drivers, to teach younger drivers the proper way to brake, the proper way to steer, rather than having them learn by trial and error for the first nine years of their -

Lou (00:21:07):

Exactly. And the consequences are so grave. I mean, that's one of the biggest killers, if not the biggest. I mean, you probably know better than I. I'm not a safety expert, but between 16 and 24 years old, I think it's one of the most common ways to pass, unfortunately.

Jeff (00:21:24):

Well, graduated licensing has saved almost as many lives, and some studies would suggest as many lives as the seatbelt. And so we can see we can make inroads with driver training and techniques for reducing crashes. So I know they have that vision zero, that someday cars won't crash. That's a very high endeavor, high goal that I don't think we're going to get anytime soon, but it's a nice goal.

Lou (00:22:09):

When I had my kids in 2013, I probably envisioned at that point that they would be driving along in a nearly fully autonomous vehicle that was unlikely to crash. Now that we're at 2023, and they're 10, so they're only six years away from driving themselves, I'm starting to become more and more terrified. And I bring home some of my casework and I don't show them gory photos or anything like that, but I tell them about what happened and the misjudgments that led to that crash occurring in the first place. So if this research from UMass Amherst could come out, I think every parent in the country would appreciate it. It's-

Jeff (00:22:48):

Well, it's out right now. It's available online. I believe the state of Wisconsin is using that training. The state of California is using the training, and some parts of Canada. There might be a few more as well that are using the risk awareness and perception training. And so it's encouraging to see that there are improvements. There's a lot more driver training improvement that is needed... For example, in the commercial vehicle area, that's woefully outdated training published in 1952. So there's a lot more that needs to be done. But it's exciting to see that a lot of people are looking into better techniques for training drivers.

Lou (00:23:52):

Yeah. And a little bit later on, I do want to talk about the fatality statistics and how those have really gotten worse and worse over the past couple of years. But I want to go back a little bit just to your background, because that shift that you made from law enforcement, from collision reconstruction, to torturing yourself with a master's and a PhD in seeking out the UMass program, is obviously not a common one. Very rare. And you also made the shift from collision reconstruction to human factors specifically and continued to narrow your focus. So was that a gradual process or did a switch flip at some point? What sent you back to academia?

Jeff (00:24:40):

In 1993, well, like I said, I loved crash reconstruction. I think one of the reasons I left police work, or the primary reason I left police work, is to do it full-time. And then in 1993, the Supreme Court's Daubert versus Merrell Dow Pharmaceuticals ruling comes out. And it says that we as experts have to be able to report our error rate. We have to be able to explain what method we used, how we applied that method properly to the facts of our case, and we have to be able to report the error rate of our analysis. And I started going through my crash reconstruction and my measuring tools. They give you the error rate right in the owner's manual. And then you can do, whether it be Monte Carlo analysis or finite difference analysis, or just standard deviations, you can take care of error rate with your speed calculations. But the number that I didn't have an error rate for was what we were taught: use a 1.5-second perception response time.
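The Monte Carlo approach Jeff mentions can be sketched for a basic speed-from-skid calculation. The drag factor and skid length uncertainties below are illustrative assumptions, not values from any case or study:

```python
import math
import random
import statistics

G = 32.2  # gravitational acceleration, ft/s^2

def skid_speed_mph(f, d):
    """Speed from a skid to rest: v = sqrt(2 * f * g * d), converted to mph."""
    fps = math.sqrt(2.0 * f * G * d)
    return fps / 1.4667  # ft/s -> mph

random.seed(42)
# Hypothetical input uncertainties: drag factor 0.70 +/- 0.05,
# skid length 80 ft +/- 2 ft, both treated as normally distributed.
speeds = [
    skid_speed_mph(random.gauss(0.70, 0.05), random.gauss(80.0, 2.0))
    for _ in range(100_000)
]

mean_mph = statistics.mean(speeds)
sd_mph = statistics.pstdev(speeds)
print(f"speed = {mean_mph:.1f} mph +/- {sd_mph:.1f} mph (1 sigma)")
```

Propagating the input uncertainties this way yields a speed estimate with a defensible spread, which is exactly the kind of error rate Daubert asks experts to be able to report.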

Lou (00:26:15):

And that's right, right? That's what we should do?

Jeff (00:26:20):

Yeah, yeah, yeah.

Lou (00:26:23):

We'll get back into that. I didn't mean to derail you.

Jeff (00:26:28):

And so clearly the crash reconstruction community didn't get the memo from the very first reaction time study, conducted in 1868. We knew a different stimulus leads to a different response, which leads to a different response time. And going back to that 1.5, what event does that relate to? What event is that for? When is the starting point of that? When is the ending point of that? And if you can't answer when it starts, when it ends, or what crash type it's for, then you should not be using it, because you're just making stuff up. And so that became clear to me, that we didn't know the starting point. We didn't know the ending point. We didn't know what crash type it was for. And according to Daubert, we couldn't cite the error rate of that number.

(00:27:31):

So if 1.5 was the average, what's the range of normal drivers? At what point does a driver become unreasonable? At 1.6, at 1.7, at 1.8? And if you don't know the standard deviation, if you don't know the distribution of response times, you have no idea the value of a 1.6.

Lou (00:27:55):

Yeah. You're flying blind.

Jeff (00:27:59):

So that became obsessive, almost like, "Oh, my God. I got to get the answer to this before I get killed and embarrassed on the stand." And I just started collecting studies. Like some people collect coins, I collected studies, and that was in '93. And I just kept collecting studies, and now we're way over 1,000 perception response time studies that have been published. And we just keep collecting them. And by doing that and breaking it up, and how was each study done, and what were the conditions, and what was the response, we can see there's obvious trends that crash type, the response scenario, drives what the number is. So drivers respond very differently in a cut-off than a head-on. I think we all knew that.

(00:29:08):

But when you see the numbers and you categorize them by crash type, you can clearly see that response times have a trend, and that trend is very simple. The comparative probability of the event determines what the response time is. So think of this. What is more probable, an intersection path intrusion in daytime or a mid-block path intrusion in daytime?

Lou (00:29:42):

Yeah. Intersection, for sure. Unless I'm 16. I might say something else.

Jeff (00:29:47):

So if the comparative probability of a conflict is greater at an intersection, then driver response times have been faster at intersections than at mid-block locations. So if you think of this, then at nighttime: is nighttime response time slightly longer because it's nighttime, or because it's less probable? And so that's not a really clear answer. We know response time is about a 10th of a second longer at night than daytime. And so some people say, "Wow, a 10th of a second longer. Oh, my God. It's nighttime." Well, understand we've got to define terms. And if we're measuring response time from a point where the hazard is easily identifiable, conspicuous, discernible, well, why is it going to be so much longer at night? So I ask everybody, do you respond a lot longer at night to a yellow traffic signal than at daytime? And they say, "No." I go, "Well, why not?" "Because it's conspicuous." Exactly.

Lou (00:31:02):

Yeah. And that seems one of the bigger challenges with nighttime, is what is conspicuous? What is detectable or discernible?

Jeff (00:31:10):

Right. And once something exceeds a recognition threshold, where a driver can know the true character and the true location of that hazard and the true path of that hazard, then we can start the clock on perception response time. But not until then can we start the clock on perception response time.

Lou (00:31:36):

Yeah. Your master's thesis, which I know became an SAE 2003 paper, I'm pretty sure... was that your first attempt, and potentially even the whole community's first attempt, at quantifying the inputs that affected perception response time and what things actually mattered? Like a statistical analysis of, well, what is going to change a human's response time?

Jeff (00:32:06):

That was my first success, I should say. There were-

Lou (00:32:14):

A lot of attempts.

Jeff (00:32:19):

Well, I followed Edison's advice that every failure is another step closer to the answer. And I have led a blessed life with some great teachers. And so one of my professors at the University of Hartford, I'm running the stats by him. I'm showing him my huge database of all studies, and I had a study by Gazis, response time to traffic signals, and a study by Olson, response time to a yellow piece of foam in the road, and a study by Neil-

(00:33:03):

A study by Neil Lerner from Westat, of a barrel being rolled into the path of a car. And Elizabeth Mazzae from NHTSA, she and Dan McGehee from Iowa did studies together. I had them all in one database, and she had vehicle path intrusion studies. Dr. Breher looks at my database and he says, "My god, Jeff, you're averaging avocados and elephants."

Lou (00:33:40):

I never heard that one.

Jeff (00:33:41):

I have no effing idea what you're talking about. That was my thought. But now I look at it and I see the crash reconstruction community and still, I just peer reviewed a paper not even a month ago, where somebody took the SHRP-2 data, took all the different studies, all the avocados and the elephants, and put them in one pile and tried to average them. And I'm thinking, "Well, that's a worthless number."

Lou (00:34:17):

Was it a lead vehicle? Was it a path intrusion? Is it a cut-off? Is that what you mean by separating out a bit?

Jeff (00:34:25):

Right. So he concluded that the average response time was somewhere near 1.66 seconds across all the SHRP-2 data, all the naturalistic driver research that was done throughout the US from 2012 to 2015. There were 3,500 drivers that agreed to allow us to monitor their driving, and that data was processed by Virginia Tech. And so here's a problem when you lump it all together: what's more common, a cut-off or a head-on?

Lou (00:35:10):

Cut-off. Got to be.

Jeff (00:35:11):

Cut-off, right? So in fact, there are 540 cut-off near crashes and crashes in the SHRP-2.

Lou (00:35:20):

And those, from what I learned from your teachings, your response time to those is going to be a lot quicker than a head on.

Jeff (00:35:26):

That's the fastest response time event. There are 17 head-on events, and so when you average all of them... With the head-on events, if you have a time to contact of eight seconds in a head-on, your response time might be six and a half. Head-on events are just the most uncertain event. Remember what I said before: probability, or uncertainty, drives response time. What's the most uncertain event a driver could face? A head-on, right? You don't know what he's going to do. You don't know what you're going to do. You don't know the outcome of that, and it's a very low probability event. So the uncertainty is infinite in a head-on, and therefore response time is nearly infinite in a head-on.
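The "avocados and elephants" problem with pooling crash types can be shown with a tiny weighted average. The counts loosely echo the conversation (540 cut-offs versus 17 head-ons); the per-type mean response times are illustrative placeholders, not actual SHRP-2 statistics:

```python
# Hypothetical per-crash-type summaries: many fast cut-off responses,
# few slow head-on responses.
events = {
    "cut-off": {"n": 540, "mean_prt_s": 1.0},
    "head-on": {"n": 17,  "mean_prt_s": 6.5},
}

total_n = sum(e["n"] for e in events.values())
pooled = sum(e["n"] * e["mean_prt_s"] for e in events.values()) / total_n

print(f"pooled average = {pooled:.2f} s")  # dominated by the common event
for name, e in events.items():
    print(f"{name}: {e['mean_prt_s']:.1f} s (n={e['n']})")
```

Under these assumptions the pooled average lands near 1.2 seconds, which describes neither event well, and badly misrepresents a head-on case: that is why averaging across crash types produces what Jeff calls a worthless number.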

Lou (00:36:22):

Yeah, if you know it's going to happen, you're about to get into a crash, your response is going to be quicker and more decisive in what you choose to do as opposed to that area I imagine in a head-on, where you have time to think, are they going to go back to their lane? Are they going to stay in my lane? What's going on? Should I try to stop? Should I try to swerve to the left, to the right? What do I have for room on the side of the road?

Jeff (00:36:46):

Yeah. Well, and here's the thing. If you've got a head-on vehicle, are you going to spend time to look over to the side of the road, see what's over there? Or are you just going to go, "There's something over there. I'm not going over there"? And so that's the other thing we've found in our head-on studies. I put a tuft of grass over on the right. Drivers still didn't go over there, a tuft of grass. If they had looked over there, they would've easily seen that it's a tuft of grass. I put a utility pole one time, I put a mailbox one time, one thing-

Lou (00:37:21):

Simulation study.

Jeff (00:37:22):

Yeah, in simulator studies. Now if there was nothing over there, almost everybody went right. If I had a nice smooth transition that went from pavement to grass, just about everybody swerves right if somebody's coming from their left, but if I-

Lou (00:37:41):

How often is that true in the real world? Do you always have a guardrail or a telephone pole or buildings or something?

Jeff (00:37:47):

Right, and so that's the other thing. We put a guardrail over there, no one goes right, no one, and they went left. Now that's great, going left, as long as the other guy doesn't correct. It's scary to see some manufacturers and some standards institutes are still lumping all events together and then saying, "Well, our standard, our reaction time approach, is good across all crash events." And it really isn't, because here's the other really scary thing. When we look at the SHRP-2 data, the naturalistic data, there are 540 cut-off events and 269 rear-end events, something like that, something near 270, and 235 intersection events, where an intruder flies in through the intersection. Those are higher probability events. The average response times for those events tend to be 1.3 seconds or less for perception response time. And when we look at those, we have a lot of near crashes, very few crashes. But when we looked at the low probability events, like a stopped vehicle on a highway, U-turns, sudden unintended accelerations, backing events, there's very high response time, very high crash risk.

(00:39:53):

And so anybody that's looking at any database and not considering the low probability events is not looking at crash risk, not looking at the serious crash risk. And I saw that we might be talking about automated vehicles. That's one of the problems with saying our automated vehicle is as good as a human. Well, it's as good as a human does in near crashes. Does it do as well as a human in crashes, in crash events? And sometimes, having looked at this, the answer is yeah, sometimes an automated vehicle can think faster than a human can. And sometimes humans just have more common sense than a computer does.

Lou (00:40:49):

Yeah, exactly. They've seen more situations that they've been able to intelligently digest and put into their data bank.

Jeff (00:40:58):

Now I'm criticizing automated vehicles. Really, the way they've progressed in the last five years even is remarkable. But there's still a couple crash types that are very concerning and it doesn't seem like they're really looking at the crash data to try to solve those problems.

Lou (00:41:22):

Interesting. And I do want to talk about ADAS, the advanced driver assistance systems, and autonomous vehicles, because it seems like that is going to affect our work. I'm sure it's already affecting a lot of your work. As I mentioned before, I think a lot of us thought that fully autonomous vehicles, Level 4 or Level 5, would be available to us by now. They're not really, but we do have a lot of ADAS systems, and they are, I imagine, rearing their head in a lot of the cases you're seeing, where hopefully the EDR data is detailed enough to tell you, and I'll be curious because you work a lot more of these types of crashes than I do, who initiated the avoidance maneuver, and did it do a better job than the human would've done? And there's also a lot of research related to that, so I tossed a lot at you there. But what is your practical experience with ADAS at this point, and how is that research affecting you, and maybe even the research that you're thinking of conducting?

Jeff (00:42:42):

Well, when you have an automated vehicle involved in a crash, likely it's going to be Level 2 automation. That means lane centering and adaptive cruise control are active in the vehicle when it's in the crash. Now, while some vehicles might identify when it was a vehicle-initiated action, many that I've seen don't indicate whether it was a vehicle-initiated action or a driver-initiated action. So you see brakes on at 0.4 seconds before impact, and the driver tells you, I never saw the pedestrian before I hit him, or I never saw the hazard before I hit him. Now here's the problem with that statement. You know as well as I do, we've heard that statement before and then seen 40 feet of skidding pre-impact.

Lou (00:43:45):

Exactly.

Jeff (00:43:46):

And so you get what the driver gives you. That's a cautionary tale that should remind us to use a proper methodology, because we are as biased as our witnesses are if we're not using a proper method. So I hear a lot of crash reconstructionists say, "Well, witnesses are unreliable." And I say, "And so are experts if they're not using a proper method." That's the conundrum there. I don't know what to do. He said he didn't see it. The court is going to treat that like fact, and if you as an expert don't accept that as fact, you're rolling the rock up the hill without any success.

Lou (00:44:40):

I mean, how many times have I been to depo and they say, "Mr. Peck, your reconstruction is inconsistent with the driver's testimony." And I'm like, "It always is." And anytime, like you said, to your point, when we do get black box data and we have information with respect to the vehicle's behavior that is objective and we compare that to the testimony, it so rarely aligns. Every once in a while. But that one specifically that you mentioned, I see all the time.

Jeff (00:45:07):

It's funny you say that. In my career, it seems like every time I see the reconstruction and the evidence, more often than not I can see why the witness said what they said. So I guess I've had pretty good luck in my career. Not too often do you get somebody that's just completely contrary to the evidence.

Lou (00:45:45):

The one that you mentioned specifically, I think I see almost universally where they say, I didn't have time to brake. I didn't have time to do anything, and when I get video or I get black box data, they usually do. They usually have done something pre-impact.

Jeff (00:46:01):

Yeah. So going back to that AV case, that's when the human factors person or the automated driving expert really has to work with the crash reconstructionist. You know how they always say, take the download, but reconstruct the crash. And that's another reason why you should reconstruct your crashes even if you have an event data recorder report, because sometimes the evidence can tease out that it had to be the vehicle, or it had to be the human, that made that response. And there have been a few cases that I just looked at and said, "I have no idea who did the braking here."

Lou (00:47:06):

And is that a question that you're being asked a lot at this point?

Jeff (00:47:10):

Well, every case I have. For example, when I teach crash investigators, I will tell them, if I'm consulting on a case or if I'm just helping them out, one of the first questions I'm going to ask is, how long before impact did the driver start swerving or braking? How long before impact did the driver's maneuver begin? Because what we want to do is have that classical scientific approach of comparing an experimental sample, what this driver did, with a baseline sample of what drivers have done in research, in the same response time task, in the same crash type. How have other drivers behaved in that same situation?

(00:48:10):

The only way you can do that is if you know what your driver did. So if you don't know what he did, you can't compare that to what others have done. But you know the question's coming: how do you know he responded at all, unless you know who caused the braking at 0.4? And I say 0.4 because when there's automated emergency braking, many times that's around the time it's going to kick in. So if they have AEB in the vehicle and it kicks in, it's probably going to be in that 0.4 seconds before impact timeframe. But for many crash events, the 85th percentile responder is somewhere near that 0.4 seconds before impact too.
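The comparison Jeff describes, one driver's maneuver onset against a baseline sample for the same crash type, amounts to a percentile lookup. A minimal sketch of that idea, where all the numbers are made up for illustration and do not come from any published dataset:

```python
# Hypothetical baseline perception-response times (seconds) for one crash
# type, standing in for a published research sample.
baseline_prt = [0.8, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.9, 2.3, 2.8]

def percentile_of(value, sample):
    """Percentage of the baseline sample that responded at or before `value`."""
    return 100.0 * sum(1 for x in sample if x <= value) / len(sample)

# Hypothetical case: maneuver began 1.5 s after hazard onset.
observed_prt = 1.5
print(percentile_of(observed_prt, baseline_prt))  # 60.0: 6 of 10 baseline drivers were faster
```

The same lookup only makes sense, as Jeff notes, if you actually know when this driver's maneuver began, which is why the vehicle-versus-driver braking question matters.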

Lou (00:49:17):

Yeah. So if you see the response well before that 0.4, it suggests to you that the driver was probably involved.

Jeff (00:49:24):

Yes. Yeah.

Lou (00:49:25):

Okay. That's interesting. So I haven't gotten too deep on any of these ADAS cases yet, but Part 563, as far as I know, does not require them to report that, and obviously they don't. And maybe that's something that we should look into and talk to NHTSA about, but it seems like Part 563 should mandate that if you have some autonomous system, there's a flag in the black box data, the EDR, that says, hey, we did something here, so that it'd be easier for all of us to understand. And then we could go back to that huge data set and figure out how much the systems are actually helping.

Jeff (00:50:03):

Well, we know some systems do do that, and then some other systems might do that but not report it.

Lou (00:50:13):

Okay. Because they're not mandated to, so they're essentially reporting what they have to. Okay, that's really interesting. It seems like that is going to be omnipresent before long. And again, I think that a lot of us thought we'd be just reconstructing fully autonomous crashes at this point, but it seems the more likely scenario is that in five to ten years, almost every crash will involve a car that has some assistance and might've played a role in the response.

Jeff (00:50:48):

Well, one manufacturer said to me that there's generally a 15-year gap between new technology and when younger drivers get the vehicle. And since younger drivers are much more likely to crash than experienced drivers, if we want to see the 16- to 24-year-old drivers in more advanced vehicles, that's usually going to be about a 15-year delay.

Lou (00:51:26):

You can't go buying your kid a $60,000 car, which nowadays is where you're at if you want something that's got all the bells and whistles safety-wise, 50, 60 grand probably. Although I will say I have a Toyota Tundra, and I think Toyota made some sort of pledge that by 2022 all their cars were going to have at least some ADAS on board. And that is helpful. And that brings me to my next point, which is interesting, or next question. It's not, I don't have a point. I'm very curious to see if you do.

Jeff (00:52:00):

I'm used to having conversations with you where you have no point.

Lou (00:52:03):

Exactly. That's pretty much every conversation. Except this time there's no beer, so there's no excuse for me having no point. So when we go from 2020 to 2021, we're seeing these huge spikes in fatalities. It's like 10%, I think it went up, and then ped impacts, pedestrian impacts for those not familiar with the lingo, went up 13%. And when you look at that combined with the ADAS systems, it's this strange disparity that doesn't make a lot of sense to me. Shouldn't we be getting better at not hitting pedestrians when a lot of these cars are equipped with pedestrian detection and pre-impact braking? Do you know what's going on there, or do you have some sort of speculation?

Jeff (00:52:53):

Well, the pedestrian problem is a mostly urban problem. We don't see ped crashes increasing too much in suburban and rural areas. Well, I take that back. Here's the problem with ped crashes: arterial roads. 80 to 85% of all pedestrians are going to be hit on an arterial road. So Main Street USA kills pedestrians, that's the tagline here. So if you have a 45-mile-an-hour speed limit road, and it's a main connector road between two other connector roads, an arterial, pedestrians are very highly likely to be struck on that road.

(00:53:45):

So we know where they're happening, and maybe we can do a better job of lighting the roads, a better job of funneling pedestrians into certain areas. We see, for example, that child pedestrians are likely to be struck in a place where they don't have a protected place to play, where there are multi-family dwellings, and so the road happens to be in their playground. We call it a road; five-, six-, and seven-year-olds call it their playground.

Lou (00:54:32):

Yeah, it reminds me of Wayne's World when they're playing street hockey in the street and it's "Car, game off," and then there's balls rolling, I imagine into the roadway and there're kids chasing balls and things.

Jeff (00:54:43):

And that could be a natural thing for their behavior in how they treat roads. And so perhaps build a park right next to that area to get them off the road, or perhaps put in restrictions. For example, I was in New York City during the Christmas season, and I noticed that they don't let you cross, you just can't jaywalk in some areas. They have fences and walls so that you can only cross in certain areas. And that's going to improve a driver's expectancy. If the driver faces a lower probability of a pedestrian in an area, his response time is longer than at an intersection. So we just look at the math. If we can have everybody cross in a certain location at a certain time, then probability goes up, response time goes down, and crash risk likely goes down.

Lou (00:56:01):

That makes sense. Back to what you were talking about before, it's that expectancy term. At night, are you expecting somebody to cross in the middle of a block where there's no crosswalk? No. So your response time is going to be longer, and the odds of hitting that pedestrian, unfortunately, are increased. The car doesn't see them.

(00:56:23):

So maybe we had a 13% increase from '20 to '21, but if we didn't have the driver assist systems, it would've been even worse. And why the increase in the first place? Do you think that distraction has something to do with it? I know a lot of your research has been analyzing how drivers respond differently when they're engaged in some sort of cell phone task. Do you think we're still seeing increases associated with just the availability and potentially addiction to shooting off texts and whatnot?

Jeff (00:57:00):

Well, we do see in the pedestrians, there are quite a few of them who are also engaged in cell phone use. And so it's not uncommon to have a crash involving a driver who's intoxicated, a pedestrian who's intoxicated, a driver who's on the cell phone, and a pedestrian who's on the cell phone. And the two, unsurprisingly, sometimes meet. And so now you say, okay, A has done a lot of things wrong and B has done a lot of things wrong, but who has done the most things wrong?

Lou (00:57:44):

Wow, I didn't even consider that as I was putting this data together and just looking at it. I didn't even think of that, but just walking around the airport or any city street, that makes a ton of sense.

Jeff (00:57:58):

Well, we know, for example, that one study concluded texting walkers have a 17.9% slower walking speed. So just like when we're driving down the road and somebody's traveling 63 in the high-speed lane, and we go by them and sure enough they're on their cell phone doing something, well, we see the same thing when we're walking on a sidewalk. And I know Swaroop Dinakar has his funny videos of people texting and falling into holes or running into utility poles. So yes, we know they're more distracted, but we also know they walk slower.

Lou (00:59:00):

Because of that mental burden associated with trying to accomplish whatever task they're trying to accomplish on their phone.

Jeff (00:59:05):

You nailed it, right on the head. That's exactly what we believe is going on: they're reducing the workload by moving slower. They've already got cognitive resources going to the cell phone task, and they only have so much left for the walking task, or for the driving task. So that's why one of the symptoms of texting while driving is driving slower.

Lou (00:59:42):

That's a really interesting analogy. And now I'm thinking, okay, well, how does this tie in with COVID? Because that's where we saw a lot of the spikes. And is there a chance that a lot of people just got more used to consistently interacting with their phone during that period because they were quarantined, and then they get out?

Jeff (01:00:00):

That's some of it. But you know what? Let's go back to the one thing, Occam's Razor. The obvious thing is, we've always known that the further away from normal traffic speed you travel, the higher your crash risk. And so if those drivers on the empty COVID-era roads were traveling much faster than all other traffic on the road, we know that increases crash risk. And it's a U-shaped function.

(01:00:36):

So if we graph it, when you go much slower than average, you increase crash risk by an astronomical amount. If you go much faster, the further away you get from normal, you increase crash risk by an astronomical amount. So if you go five, ten over ... Five over might not even increase crash risk, but ten over, you're going to increase it maybe one or two times the normal rate. But when you're 40 miles an hour faster, well, now it's an exponentially greater crash risk.
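The U-shaped relationship Jeff describes (often called the Solomon curve) can be sketched as a toy model. The exponential form and the 15-mph scale factor here are illustrative assumptions, not values fitted to crash data; the point is only the shape: risk is lowest near the average speed and climbs steeply in both directions.

```python
import math

def relative_crash_risk(deviation_mph):
    """Toy U-shaped model: relative risk vs. driving at the average traffic speed.

    Risk is 1.0 at zero deviation and grows symmetrically with deviation.
    The 15-mph scale factor is an assumed, illustrative constant.
    """
    return math.exp((deviation_mph / 15.0) ** 2)

print(round(relative_crash_risk(10), 2))   # 1.56: roughly "one or two times the normal rate"
print(relative_crash_risk(40) > 1000)      # True: 40 over is a far greater multiple of risk
print(relative_crash_risk(-20) > relative_crash_risk(10))  # True: much slower is risky too
```

A real fitted curve is asymmetric and depends on road type, but even this sketch reproduces the qualitative claim: small deviations barely move risk, large ones dominate it.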

Lou (01:01:20):

Then back to the distraction. How do we solve that? Because it does not seem like we as human beings are able to overpower the social engineering associated with a lot of these platforms. We will text while we're driving. We will scroll through Facebook when we're stuck in a traffic jam or something like that. Have you seen any notable efforts? It seems like there are two potential things you can do here: one, just limit phone use via some technology, or two, have the car be there for you.

(01:02:00):

I have some experience with the latter. I had a Tesla Model Y for a while and it was great. I'm driving from here in Southern California to Arizona or something, and I'm on a really straight road and there's nobody around me and I trust the system, so I have it start to "autopilot," and I could fire off a couple texts, or I'll just save it. Even just switching a song on Spotify or something like that, if I had that little cocoon of the Tesla to just take over for 30 seconds, it felt great and I could do some things. And when I hop in another car that doesn't have any of those systems, I hate to admit it, but sometimes I'm still doing those things. I'm switching songs, and it just doesn't feel as safe. So removing the cell phone entirely seems very tough to do. Maybe the car can help. What's your take on that tension?

Jeff (01:02:51):

Well, you know what? Seatbelt usage didn't catch on until we started teaching it to kids in second grade, and perhaps that's the same approach: this has to be part of training growing up, that it's not about you. We have seen that for families of somebody who was driving and killed somebody, there's a lot of stress in their life, as well as for the family who lost somebody.

(01:03:42):

And I'm not comparing, I'm just saying there's a lot more stress in having a family member involved in a fatal crash than not. And so it's just terrible for everybody that's involved in a crash, and I wouldn't wish it on anybody. If we can somehow teach how horrible it is for families to experience some of the things that we see, I think that's the only thing we can offer: some of the things that we've experienced.

Lou (01:04:41):

In the SHRP-2 data, which maybe we should give a little bit more background on, because it's so phenomenal. And I'd be curious to hear, I imagine you have some funny stories related to that too, but the SHRP-2 data, can you just give everybody a little bit of background about who led that project, exactly what it is, what instrumentation was on the vehicles, and then, I imagine it's a gold mine for you, how has it affected your understanding of what's going on on the roads?

Jeff (01:05:12):

Well, let's give you a little background to this. Virginia Tech did a pilot study, publishing a paper in 2003, where they had drivers travel 20 minutes to and from work. They put a data acquisition system in these drivers' vehicles and just collected data for a two-week period.

(01:05:54):

And at the end of two weeks, they just looked at a couple things in the data. For example, how many times the driver came upon and passed a slower-moving lead vehicle. And they found there were 295 events, and the average driver made so many mirror glances and so many glances forward and so many glances to the left and right, and that they closed to within about 124 feet of the lead vehicle before they started changing lanes. And so clearly, they said, "Wow, we can get a lot of data from this."

(01:06:23):

So they then performed what they called the 100-Car Study. Over the course of a one-year period, they equipped 100 vehicles with accelerometers, GPS speed, and network speed. Network speed is like streaming the OBD-II, streaming the CDR report. Plus a forward camera and a camera on the driver's face and hands. They let these drivers drive over the course of one year and collected the data. At the end of one year, there were approximately 69 crashes, 28 police-reportable crashes, and 741 near-crashes. And a near-crash is defined as somebody basically locking up the brakes or yanking the steering wheel in an emergency response, but for the grace of God or for some other reason, they didn't crash.

(01:07:32):

And so Virginia Tech saw all this data, and the logical next step is ... The data was beautiful, but the one criticism that could be made is, well, that's great, those were all drivers in Northern Virginia. Would drivers in Seattle and Tampa and Buffalo, and in Indiana, and in the Carolinas respond similarly if we tested them? And so they more than doubled down on the 100-Car Study. The follow-up was part of the Strategic Highway Research Program, the second Strategic Highway Research Program, SHRP-2. They equipped 3,500 vehicles involving more than 3,500 drivers and collected data for three years. And so there have been more than 1,000 crashes, more than 228 severe crashes, and more than 3,000 near-crashes.

(01:09:00):

And so Swaroop Dinakar and myself decided in 2015, when data collection for that dataset completed, to sign a user agreement with Virginia Tech to process this data. I'm on the expert committee for this. And we thought we'd publish these data quickly. We had the data in 2015; by 2017, we'll have six or eight studies done.

Lou (01:09:45):

Yeah. It's always easy at the beginning.

Jeff (01:09:49):

A lot of data. Well, put it like this. We've had universities say they wanted to be a part of the data collection with us, and we told them what's involved. There's no easy way. You've got to do frame-by-frame analysis on every one of those 4,600 events. We have a huge Excel database, E-X-C-E-L, the Microsoft Excel database, and we have to extract the data from that spreadsheet. And for each event, we're talking at least one, probably closer to four, hours of data processing.

(01:10:46):

So we've been extracting that data since 2015. We're almost done with every crash type. We ate the elephant one bite at a time, so we handled it by crash type. We took all the intersection path intrusions, then all the mid-block path intrusions, then all the lead vehicle crashes. And then we saw other crash types. U-turns aren't categorized by Virginia Tech, so we then had to look for the U-turns, and then head-ons. Sometimes head-ons were categorized as something else. And so you have to just spend the time, extract the data, and categorize it.

(01:11:37):

And so we're almost all done. We just have a couple crash types left to report. And so we see how the drivers have behaved in real life. And I can report, based on everything we have seen so far, that the simulator research, the high-fidelity simulator research is alive and well, that we do not see any differences between the simulator data and the real-life driver response time data when it comes to driver behaviors.

Lou (01:12:15):

That's fantastic.

Jeff (01:12:15):

Now, there might be some differences in the way we collect the data in a simulator versus in a naturalistic study. But for the driver behaviors, I see both. Simulator studies are more precise; we can exactly target the behavior we want to measure. Naturalistic studies, we get what we get, but it's real life, right? And so when you have both, and they're both telling us the same thing ... Studies done in China, in Taiwan, in Japan, in Europe, in the US, simulator studies, naturalistic studies, it really doesn't matter. They all say the same thing if we account for crash type and methodology. And that is just so cool to me.

Lou (01:13:20):

It doesn't matter what your culture is, doesn't matter what your ethnicity is. You are human, and we can predict how you're going to respond to certain stimuli.

Jeff (01:13:27):

And it's based on where the crash event happened. Intersection path intrusion, mid-block path intrusion, stopped vehicle on a highway, or a following-closely-behind-a-platoon rear-ender. If we put them in categories, studies done all over the world come to the same results.

Lou (01:14:00):

That's cool. And you might remember this as well. Maybe a decade ago, I came to the HPL, the Human Performance Lab at UMass Amherst.

Jeff (01:14:06):

Without your wallet, yeah.

Lou (01:14:07):

Oh, yeah. I might have forgotten my wallet. Thanks for the lunch, by the way. That wasn't part of the payment for being a subject, but I did let you poke and prod me and put me into the simulator study. And it was really interesting to see how that whole process went down. I mean, we had the eye-tracking system. Back then, I don't know if it's still the case, but you had to calibrate it so that you could get a really good idea of exactly what I was looking at. And I think I behaved a little bit abnormally compared to the rest of your subjects, if I remember correctly. I will allow you to talk about it. I will not call the IRB.

Jeff (01:14:52):

Yeah. We can't talk about human subjects, but there might have been one or two that ... Maybe one or two subjects, male subjects, as they aged, their testosterone level dropped just a little bit and they'd become more normal. And that was my hope of one or two of those subjects. And so it does appear that one or two of those subjects that we tested have matured very nicely.

Lou (01:15:31):

That's what my doctor tells me as well. That was an impressive setup you have over there. And doing all this research, looking at SHRP-2, looking at the simulator studies, I mean, you probably looked at more research than anybody. Where are humans most vulnerable? So when we're driving, what situations, when they're presented to us, are we most likely to get in trouble with?

Jeff (01:16:00):

Well, here's the thing. God made us good at a lot of things, but just like ... Remember Volvo back in, I don't know, the '90s had a seat issue. Make your seat too stiff, you give people whiplash. Make your seat not stiff enough, they break their neck in a high-speed crash. Then Ford in the 1970s, I think it was, with the A-pillar issue. Make your A-pillar too big and you reduce your search area. Make it too small, and you crush your roof. So we always have that give and take.

(01:16:44):

Well, the same thing with how we were designed. God put our eyes close together, so, hey, we're really good at detecting lateral motion across our visual field, but not too good at detecting motion in depth, right? Eyes are two and a half inches apart. We can judge movement this way, but not as well this way.

(01:17:12):

So really, there are two things we really aren't designed to do. The other thing is, God gave cats a tapetum. A tapetum is the reflector in the back of the eye. So when light goes into the eye of a nocturnal animal, it flashes off the tapetum and back, and that echo basically amplifies the amount of light and allows nocturnal animals to see much better in darkness. We didn't get one of those, right? Apparently, that shelf was empty. Maybe it was during a COVID period, with the toilet paper.

Lou (01:18:05):

Yeah, yeah, exactly. I notice when I stub my toe on the bed in the middle of the night that I do not have that device on board.

Jeff (01:18:14):

Right. So no tapetum. We're not nocturnal, and we need light. If we don't get light, we don't do too well. So those are the two areas where we don't do well. Just like an airbag is called a supplemental restraint system. It's not an "I got you" restraint system. It's a "hey, you wear your seatbelt and I can help you out a little bit" restraint system. Same thing with a mitigation device in vehicles: how about we work on supplementing what drivers already do well? There are a lot of things we're okay at, like detecting path intrusions, cutoffs. We respond very fast in the cutoff. I doubt an automated vehicle is going to improve on our ability to respond to somebody cutting us off, because we're already pretty fast in most instances. But finding a dark pedestrian at night, that's not our forte. On a dark, unlit road at night, that's a difficult task for a driver.

Lou (01:19:38):

It's easy for a car to handle with infrared camera or lidar or something.

Jeff (01:19:43):

Well, yeah, with lidar. Not with a camera system, because the camera system's probably not going to be any better than an eye. But if you have lidar or some kind of system like that that can detect movement, that would be encouraging. The other thing we don't do well is judge closing speed. And we can see in the owner's manuals of several manufacturers that this vehicle doesn't avoid stationary vehicles. So if you're really one of those slick guys that found some way to put a clip on the steering wheel of your Tesla so you can go no-hands, well, then you're also a Darwin Award winner, because you're so smart you found a way to kill yourself even when the car was trying to save you. Think of it this way. If your vehicle is detecting hazards with a camera system, are you aware of a camera that's better than the human eye right now?

Lou (01:21:08):

No.

Jeff (01:21:09):

I'm not. So, judging closing speed: if closing at 65 miles an hour, the average driver is going to detect the rate that they're closing on a stopped vehicle on a highway somewhere around 100 meters away, 340 feet, something like that. That's where the average driver goes, "Oh my God." So if that's the first point a human can detect it, is a camera system going to be that discerning that far back out in free-flow traffic, with one vehicle out there that's stopped? I'm not aware of an automated vehicle that can avoid that stopped vehicle on a highway right now.
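A back-of-envelope check on that "around 100 meters" figure: a common human-factors model says drivers first perceive closure when the hazard's angular expansion rate ("looming") exceeds a threshold on the order of 0.003 to 0.005 rad/s. The 0.005 rad/s threshold and 1.8 m vehicle width below are assumed values for the sketch, not figures from the interview.

```python
import math

VEHICLE_WIDTH_M = 1.8       # assumed rear width of the stopped vehicle
LOOMING_THRESHOLD = 0.005   # rad/s, assumed angular-expansion detection threshold

def detection_distance(closing_speed_mph):
    """Distance at which the looming rate w*v/d^2 first reaches the threshold."""
    v = closing_speed_mph * 0.44704  # mph -> m/s
    # Solve w*v/d^2 = threshold for d.
    return math.sqrt(VEHICLE_WIDTH_M * v / LOOMING_THRESHOLD)

print(round(detection_distance(65)))  # 102 (meters), consistent with "around 100 meters"
```

Under these assumed constants, a 65-mph closing speed gives a first-detection distance of roughly 100 m, matching the figure Jeff cites; a lower threshold or wider vehicle pushes the distance out somewhat.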

Lou (01:22:05):

Yeah, they're still going to hit it. They might get the brakes on pre-impact, and that's what the manual of my car says. It says, "Hey, we might be able to reduce the severity of that crash, but we're not going to avoid it." And my truck is equipped with radar. So if you get the cameras farther apart, maybe they have a better ability to detect closing speed, but they also have to determine whether that's a car or a bridge abutment or something like that. And we're probably better at that than the cameras and the AI. So a combination of systems, maybe lidar, radar, and cameras, is probably the best, but then, and I'm not an autonomous design engineer, it also has to be attached to really sound logic and sound software.

Jeff (01:23:00):

Well, leading us to sound logic. Back a few years ago, it was fairly popular to look into headway warning systems in vehicles. And the thought was, well, if you don't follow closely behind, then you're going to reduce crash risk. Well, when the authors say that, they're not citing the research, because in actuality, if we look at the crash data, you are more likely to be in a crash following farther than three seconds than if you are following closer than three seconds. So think of that. We've been told our whole lives that following closely leads to crashes. And perhaps. Perhaps it does. I'm not condoning following closely. That reduces-

Lou (01:24:02):

I almost got out of some arguments with my wife just there. I guess I'm not winning that yet.

Jeff (01:24:07):

You're still reducing your available resources by following closely, but that's not leading to crashes. That's leading to near-crashes. So yeah, you will piss off your wife, you will lock up the brakes, and you'll probably avoid the crash. Everybody goes home angry, but everybody goes home. The crashes-

Lou (01:24:34):

Because it's so easy to detect that change in closing speed from that distance.

Jeff (01:24:38):

You nailed it. Think of it. There's a higher probability when you're at a closer distance. And so now you're following one second behind, the brake lights go on, you get an immediate change in following distance, an immediate change in the visual size of the vehicle ahead of you. You have everything but a sign going up saying, "Stop, stupid." And so that event is a lot of information, very little uncertainty, so we respond quickly and we avoid most of those events.

(01:25:13):

But when you're following more than three seconds behind and coming upon a stopped vehicle, or a vehicle traveling less than 10 miles an hour, that leads to deaths. In fact, when we average the speeds of all fatal crashes in the United States, I think people would be stunned to know that the average speed of a driver who got struck at five, six, or seven o'clock ... In other words, got struck in the rear, their average speed in a fatal crash was 12 miles an hour.

Lou (01:26:00):

Wow.

Jeff (01:26:00):

The average speed of the car that struck them was 58 miles an hour.

Lou (01:26:06):

Oh, wow. So I mean, that indicates highway stuff, right?

Jeff (01:26:10):

So when we look at the Fatality Analysis Reporting System, our US government statistics for 2019 and 2020 that just came out ... I've spent quite a bit of time with that data. We see that if you're on a speed limit of 25, 30, 35, and 40 miles an hour and you're involved in a rear-end crash, on those speed limits the following driver is almost always over the speed limit and striking the vehicle ahead. He is more likely to be unlicensed, and he's more likely to be driving an older car. In other words, he's likely to be a younger driver. And we know teens are overrepresented in those crashes.

(01:27:08):

When we look at 50, particularly 55, 60, 65, 70, and 75 mile-an-hour and 80 mile-an-hour roads, those drivers tend to be traveling less than the posted speed limit, are typically licensed drivers, typically experienced drivers. And it's telling us these are drivers that have fewer crashes, fewer driving issues, have better driving records, and they're facing a human limitation.

(01:27:41):

So in the US, 4.5 people are going to die today traveling on a 55 to 70 mile-an-hour speed limit road, striking a vehicle traveling less than 10 miles an hour, or stopped. 4.5 deaths today, and tomorrow, and the next day, and the next day.

Lou (01:28:09):

That's brutal.

Jeff (01:28:10):

And it's telling us there's a human limitation. There's a huge problem here. And I'd like to call for more research into this crash type, particularly for automated vehicles. And if your system doesn't reach out that far yet, well, maybe you should start looking into how we can reach out farther. When I say reach out farther, I mean detect earlier. And so if it's truly a supplemental system, this is something humans are having a problem with, so it's something I'd like technology to try to look into more.
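
A back-of-the-envelope sketch shows why "detect earlier" matters so much here. Using the average speeds Jeff cites (58 mph striking, 12 mph struck) with an assumed 2.5-second perception-response time and 0.7 g braking (both illustration values, not from the episode), the closing driver needs on the order of a few hundred feet of clear recognition distance:

```python
# Back-of-the-envelope for "reach out farther, detect earlier": how much
# clear recognition distance does the striking driver need to erase the
# closing speed before contact? Assumed inputs: 2.5 s perception-response
# time and 0.7 g braking, applied to 58 mph striking / 12 mph struck.

G = 9.81            # gravitational acceleration, m/s^2
MPH_TO_MPS = 0.44704
FT_PER_M = 3.28084

def required_detection_range_m(v_strike_mph, v_lead_mph,
                               prt_s=2.5, decel_g=0.7):
    """Gap needed: distance traveled during PRT plus braking distance
    on the closing speed (lead vehicle assumed at constant speed)."""
    v_rel = (v_strike_mph - v_lead_mph) * MPH_TO_MPS
    return v_rel * prt_s + v_rel ** 2 / (2 * decel_g * G)

gap_m = required_detection_range_m(58, 12)
print(f"~{gap_m:.0f} m (~{gap_m * FT_PER_M:.0f} ft) of recognition distance")
```

Under these assumptions the answer comes out around 80 meters, roughly 270 feet, all of which has to be covered by recognition, not mere visibility, of a slow or stopped vehicle ahead.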

Lou (01:28:54):

Yeah, that's the perfect area to supplement. Like you said, if you can combine the strengths of the human with the strengths of the machine, then you have a much more robust, safe system, and we can hopefully start bringing these stats the other way.

Jeff (01:29:07):

Exactly. So the two areas that I'd love automated vehicles to look more into are nighttime, and stopped and very slow vehicles, because stopped and very slow vehicles account for nearly 80%, certainly over 70%, of all rear-end crashes.

Lou (01:29:26):

So I wanted to get into some of the tech and trends, and where you start seeing things going. And one of the things that I've noticed in my casework, and I suspect you've noticed it as well, is the ubiquity of video, whether it's a dash cam or a surveillance system or a Ring doorbell camera, just getting video of a lot of crashes at this point. And has that allowed you to have a more detailed and conclusive analysis? How has that affected your analysis, I guess, is a better way to ask it?

Jeff (01:30:02):

Well, you know what? That was really one of the reasons why I chose human factors over a mechanical engineering degree, because I could see, even back in the '90s, that there's more video coming out. I knew they were going to be putting event data recorders in vehicles. And so I thought back then, "Well, we're going to just download the vehicle. It's going to tell us the speed of the vehicle, and then you're done with your recon. And you get a video. The video clearly shows you how the crash happened, and you're done."

(01:30:55):

And now, I look at video and go, "Oh my God," because I know every case, you're going to get cross-examined on frame 16. Five seconds in, frame 16, notice this. Is that really a tail light? Or based on this frame, don't you see this or that? And so I know going in on every one of those cases that you really got to break it down. You got to break down that video. And so it's actually more time in the video cases. And sometimes, somebody will call up and ... Hey, I got a video of a crash, so it should be easy for you. It's like, no, that makes it more difficult for us. But like everybody says, you get out what you put in. And for the same reason, I think it really advanced our knowledge of driver behavior, because now we get so much more information.

(01:32:18):

And I think back when ... Remember I was saying about my master's thesis? I started that research in '95. And thanks to a lot of good professors on my committee, I finally came out with mathematical models for driver response time because I categorized them, but I had to learn how to do that. And the only way to learn rules for behavior is to get a lot of data to see violations of the rules.

(01:33:03):

And that's how I learned. Well, we know drivers perform in this certain way. And then you get another study or you get a video and you say, "That's an exception to the rule." Then you get another video, then you get another video. Then maybe somebody publishes a study. And normally, you'd look at the study and go, "It's an okay study," but now you look at it and ... Wow, that conforms to the couple of videos I've seen. And then the next thing you know, I have another study, and now you got a new rule. And if you are not worried about being right, then you can always end up finding more things. So I don't mind being wrong as long as I'm doing everything I can to find the right answer. And I think that's the way that I've always looked at things.

(01:34:17):

So video really gives us tremendous amount of data, tremendous amount of information. So we have the SHRP-2 data, and then we have dash cam video of more than half the cases that we get offered. And it's just tremendous. So we have the three prongs. So we see the consulting cases. It helps us design our research questions, which helps us in our teaching, which helps us help crash investigators who then help us by bringing us more crazy events.

(01:35:13):

And so every time I teach a class, somebody in the class will come up to me and go, "Jeff, what do you call this one?" And sometimes you go, "Wow, that is cool." That's sort of like a duck-billed platypus. You got a little bit of this, but it lays eggs. And it helps me teach, because it helps me define even better. What is a path intrusion? What is a lead vehicle event? When do you start the clock? And so the more I teach, the more I gather information, the better I get at defining when perception response time starts, what triggers it, what causes an emergency response over a non-emergency response? What is the response choice of drivers?

(01:36:12):

And the more data you have, the more you say, "Well, it's usually this, except when you have that." One perfect example is a friend of mine, Jeff Hickman, formerly at Virginia Tech and now he's in the private sector. He's doing human factors forensics. And he did a paper looking at the SHRP-2 data, and he concluded that not many drivers swerved in the SHRP-2 database. And that's essentially what we found back in 2015. Swerving is not a real common response.

(01:37:00):

But then when we look at the high-speed rear-enders, which SHRP-2 didn't have quite as many of but dash cam videos do, we see that nearly 70% of the events at 55 to 70 miles an hour involve swerving. And so it allows us to explore even more data sets. So yes, we only get data from crashes, but the SHRP-2 data gives us data from non-crashes and crashes. So again, just like with simulator studies, simulator studies help us understand real life. Real life helps us understand simulated results. And then near-crashes help us understand crashes, and video evidence helps us understand the other research. It just all feeds in. Like I said to you in the beginning, it feeds into that story. And it's beautiful data, I think. It just all resonates when you put it all together.

Lou (01:38:18):

Once a junkie, always a junkie. Yeah. And I've seen that too, with the video on the motorcycle stuff. It makes me really happy to be able to quantify a rider's braking capability as they approach the impact, because there was a 100-motorcycle study conducted by MSF and Virginia Tech. It is very limited. It's been analyzed by Williams, and published. But 50% of the cases I'm getting now, I'm probably in a similar ratio as you are.

(01:38:53):

I am getting video that is detailed enough where I can quantify via photogrammetry and video analysis what the speed of the rider was, frame by frame, essentially. So I can quantify braking rate, and it is fantastic. And in the cases I'm working so far, the numbers seem to be high. Now, granted, it's a very small sample size, but that's something I'm hoping to publish with the help of the black boxes that are now being installed on Kawasakis too. And I imagine that helps you, because there's two things with the black boxes. One, obviously they're nearly ubiquitous now, and I imagine that with that data sometimes you can quantify perception response time where you wouldn't have been able to otherwise. And then the other thing just to throw in there is I see that there's a notice of proposed rulemaking by NHTSA to now capture 20 seconds of pre-impact data. And I hope that that comes to fruition. But how has the five seconds that we generally do get now helped you with pre-impact data? And would 20 be even better?

Jeff (01:40:00):

Well, yes it would because now one of the things we've seen with emergency response time data is many times, and I'm going to repeat this twice, age has not been a factor.

Lou (01:40:18):

Which everybody you talk to who's a layman thinks it would. And I understand why.

Jeff (01:40:23):

I've even had judges look at me and put the glasses down and say, "You can't tell me age isn't a factor. I know because I'm older." Well, it depends how you define terms. If you start the clock when you don't see something and have it tick off until you do see something, well yeah, it's going to be longer. So older folks and younger drivers have problems finding problems. Their problem is hazard anticipation and slow cognitive... When you age, you slow cognitively. So where do they crash? Busy intersections. Where's the best place to put an older driver with slower cognitive resources? Fast-moving state highways in Florida. There we go. It's our version of putting old people on an ice floe.

(01:41:34):

When you're going left and going right, your eyes go left and right when you're age 70 and above, but your brain does not. So your eyes go left, your brain then catches up, your eyes go right, your brain catches up, and so they're going to have difficulty. Whenever they have to look left and right quickly, they're going to have difficulty with that. But that's not our perception reaction type problem. Now, if there's something there they have to respond to, driving is a neck-up task, not a neck-down task. So 18-year-olds, they're probably great neck down, but neck up, no.

Lou (01:42:22):

No, not yet.

Jeff (01:42:23):

Not yet. So people said, well, an 18-year-old could respond much faster than an 80-year-old. Oh, really? Do they know what to look for? So hazard anticipation, going back to that RAPT training, risk awareness and perception training developed by Don Fisher and his group at the University of Massachusetts Human Performance Laboratory. We know that we can train drivers to anticipate hazards better. So how do you anticipate a hazard? Well, number one, look at it. If you're looking at the hazard, you're anticipating it. And then do you have a mitigation to go with that? So do you come off the throttle? Do you move, or is there a need to mitigate in any way other than just look at it? And 90% of the time, just looking at it is going to help you mitigate the hazard, so you're anticipating what your next hazard is. If we can look back 20 seconds before the event, we get better hazard anticipation information. Now we can better identify: what are the speed choices of a 40-year-old driver who has no crash history in the time period 10, 9, 8, 7, 6, 5, 4 seconds before impact? And what are the teen driver's speed choices in that same time period? I think what we'd find when we collect more data back there is that if...

(01:44:14):

Well, let's just compare the routine 40-year-old driver with the routine 17-year-old driver on a straight highway with a 45 mile an hour speed limit. The experienced driver on that road will likely be traveling about one mile an hour faster than the average 17-year-old. And they're probably both slightly above the speed limit, but the experienced driver is probably about one mile an hour faster. And that's generally what I found in some of the studies I've done. But then we have some kind of hazard, a recognizable hazard, present. So there's nothing to lock up the brakes for, but just something that looks a little uneasy up ahead.

Lou (01:45:12):

Squirrely, yeah.

Jeff (01:45:23):

Squirrely up ahead. Maybe a pedestrian walking along the fog line up ahead. So the experienced driver is likely going to come off the throttle for that. Likely give a little anticipation, might move slightly in his lane for somebody standing on the fog line when he's in the right lane. But a novice driver, if he does mitigate at all, it's likely going to be very late in the event. And he might not at all. He might say, "Well, I have the right to do this. I'm in the middle of my lane." They're very right-and-wrong oriented. And that's what we see sometimes in the studies that we've done. That's what we see in psychological development.

(01:46:13):

Kohlberg, the famous psychologist, called kids in their teens the good boy, good girl orientation. So in other words, teens view themselves as good or bad, never halfway.

Lou (01:46:25):

Kids.

Jeff (01:46:37):

I'm either the most popular or I'm not popular. I'm either the funniest or I'm not funny. So thinking back to high school, I think we all thought that way. If you weren't the most popular, you were not popular. Where you might've been a 90th percentile, that's pretty cool. But you don't view yourself as a 90th percentile. You view yourself as, oh yeah, I'm one of the...

Lou (01:47:08):

It's funny that you say that. My mom growing up would always say, "You might be right, but you're going to be dead right," with respect to the road. I remember that, and now certainly I see things more as, well, I don't care who's right or wrong, I just want to make sure that there's no incident.

Jeff (01:47:26):

Well, I think I told you one time about one of the studies I did where we were testing novice drivers at nine events that we know are the nine most likely crash events for teen drivers. And I set it up so if you gave me a little mitigation, in other words, if you looked towards the hazard and reduced your speed just slightly, you never got the emergency response time event. So if you gave me mitigation, you got no immediate crash hazard. When I ran experienced drivers through these nine events, it wasn't uncommon that somebody got out of the car, looked at me and said, "Jeff, I thought that you were going to give me the big whammy." They got in the simulator all ready, ready to do well. And particularly when I had crash investigators, because I wanted experienced drivers who drove for a living. I wanted exemplary drivers with no crash history to be my template for what a good driver is. So I had taxi drivers, bus drivers, anybody that drives more than 15,000 miles in a year, generally drives as part of their job, and had no crashes in the previous year. What did they do? Then I got the teens in there. The experienced drivers got out of the car and they said, "I didn't get any whammies." And I said, "Yeah, I gave you nine, but you gave me the ounce of prevention and nothing materialized."

(01:49:23):

But I had one young woman who was in this study, she crashed four times, and she got out of the vehicle and she started yelling at me that I was getting my kicks out of making her crash. And what really was insightful for me with her was that many teens ... And Matt Romoser, a colleague of mine, did a study, he and Willem Vlakveld from SWOV, the Dutch road safety institute over in Europe. They did a study to look into whether, if a teen driver crashes, he learns from the crash. And in some studies, we found that that's true. In my studies, I found that if they crashed early in the study, they got better as the study progressed, because they figured it out: oh, a vehicle can come from where I don't see it. Oh, I don't have to necessarily see the vehicle to have a hazard.

Lou (01:51:00):

Just an environment or a situation.

Jeff (01:51:00):

So for example, the left turn across path, opposite direction. In the SHRP-2, there were, I think, something like 269 events where a vehicle came out from behind an obstacle. So that's a common crash or near-crash type. But still, you have to have a recognizable hazard there. For example, what we did is we had a big truck blocking the view of the intersection, and the driver had a green ball traffic signal and clear sailing based on what they see. But if they anticipate, they can know: well, why is the truck stopped in the left lane? If he's going to turn left, just turn left. But if he's stopping, why is he stopping? Well, experienced drivers gave us the ounce of prevention. They slowed down. Sure enough, a car turned in front of them from behind that obstacle. Experienced drivers usually didn't crash. Teen drivers usually crashed. They went through that intersection at speed and they said, "Well, I had a green light. I was right."

Lou (01:52:14):

Yeah, I was right. I was right.

Jeff (01:52:16):

I'm right. Again, going back to Kohlberg's moral stages, right and wrong, I'm either right or I'm wrong. And when we show them that, they go, oh yeah, so somebody can come. Okay, I've got to make sure the intersection is clear before I enter. And so it's teaching them a rule. So now they face a different scenario, and maybe it's not a left turn across path, but they know they can't see around a vehicle, and maybe they give us an ounce of prevention and nothing materializes.

Lou (01:52:55):

It sounds like it would be great. Obviously it's probably not financially practical, but to get a lot of teens into simulators and teach them that way by presenting them with real situations that are perceived to end in a crash and they can learn quickly from it.

Jeff (01:53:11):

Well, the research suggests that has been very effective. If we can get teen drivers in a simulator for 10 to 20 hours, their crash rate does drop. The only problem is we can't really expect every driving school throughout the country to get a driving simulator. So what's the next best thing? And that's essentially what guided our research towards RAPT, the risk awareness and perception training. And then the research I did on ACT, the ACT program, the mitigation program. And it's more of a video game, getting teens to play a video game where they learn, through clicking on the screen, where to look, where to put your foot, and where to move in your lane if that's necessary.

Lou (01:54:08):

Yeah, no, I love it. And I absolutely would. My kids are already addicted to video games, so if I can have them playing one that is going to potentially save their life someday, I'm game for that. So with respect to tools right now: I imagine when you started your career and you had to do nighttime photography with a film camera, it was a nightmare. But what tools are you using now to document sites and vehicles? Scanners, drones, fancy cameras? What tools are in your kit for that?

Jeff (01:54:41):

Well, there's a couple of things, but I'll say there's one that really stands out for us. And well, of course, we use the contrast gradient to help document our photographs. But the Sony Alpha 7S II, and the Sony Alpha 7S III that just came out last year, is a game changer. It's the best investment I've made in my career. If you are a poor photographer, it'll make you a good one. If you're a good photographer, it'll make you a great one at night. And so with me, I'm very interested in documenting how the site looked to me in a fair and accurate way. With that and the contrast gradient, I think it really does a great job. And I think to myself, oh my God, it was just so hard before, it was just...

(01:56:02):

You always had a choice. You could choose a photograph that looked really grainy, but the lights didn't glow. Or you could have big blooms, big blooms of light, and a nice photograph, and so you had to pick and choose. But now that camera is so light sensitive that I can drive down the highway, and we do, sometimes we're left to driving down the highway at 65 miles an hour or 70 miles an hour with the camera mounted to our vehicle and taking video and getting... We can stop frame and zoom in and see gouge marks and skid marks, and it's just beautiful.

Lou (01:57:01):

That's amazing.

Jeff (01:57:05):

It's only 12 megapixels. So if you're doing fine detail, your daytime camera, your 24-megapixel camera, is going to be better than this 12. But the strength of this is, we do a lot of nighttime investigation, so we love the high speed. With film you'd have 100-speed film and 400-speed film. And then Canon and Nikon came out with 6,400-speed ISO. Well, it's not uncommon that we have it set over 100,000 ISO.

Lou (01:58:02):

Oh, wow.

Jeff (01:58:04):

And so I'll set the ISO to 102,400, set the shutter speed to less than 1/400th of a second, and drive down the highway. And sometimes I've been able to take photographs of vehicles coming right at me, shooting right into the headlight of a vehicle coming at me. And yeah, you need speed for that. You need your camera to be fast and light sensitive, otherwise you're going to have big blooms and nothing else. Big white light and a lot of black stuff. This allows you to do that. And so we've been very, very pleased with that. We're very pleased with the light meters now, whether it be the handheld reflector meter or the Konica Minolta spot luminance meter, or the other light meters we have. We probably spend, goodness, about $4,000 a year just in certifying light meters. Not even the purchase of them, just certifying all the light meters we have. And it makes it really, really fast to get the lighting of a headlight. So we have a series of light meters; in one click we can get 10 readings at once.

Lou (01:59:44):

Yeah. I remember mapping headlights with you probably 10 or 15 years ago in some cold New England day, outside in a big parking lot. And we just had one on a total station stick, and we'd go out and map out the 0.3, I guess it was.

Jeff (02:00:00):

I remember that. And the night we were out mapping headlights, you and I, I think we probably spent two and a half hours per vehicle, something like that. I think we were out there at least four hours for two vehicles. Now we could probably map a headlight in high beam, low beam, and one headlight out in something like 20 minutes.

Lou (02:00:33):

Oh, wow. So you've really refined that process.

Jeff (02:00:36):

You want to refine the process, hire somebody who grew up in India and have them come out and map headlights with you in February in New England.

Lou (02:00:49):

Yeah. We are going to find a better way to do this. This is ridiculous. Yeah, I'm cold.

Jeff (02:00:53):

And the next day, Swaroop Dinakar had a whole setup sitting next to my desk, and I went, "So is this the new way we're going to be mapping headlights?" And he said, "Yes." And I was like, "All right."

Lou (02:01:05):

It only costs 10,000 extra dollars, but...

(02:01:09):

I've been tangentially involved in a lot of these human factors analyses, and seeing experts like you mapping the effects of overhead lamps and just ambient lighting and the headlights of a car. And that data obviously means something to you. The numbers mean something to you, but then you have to portray that information to a jury and help them understand what it means, so it's about taking the right photograph. So what tactics are you currently using to present the information to a jury? I imagine they have to see it for the most part. And if that's the case, how are you doing that?

Jeff (02:01:45):

That's always a difficult task. Some crashes call to be reenacted and some just can't be. And so that's always the problem we face with that. Sometimes the best we can do is draw diagrams and say, "Here's how the math worked." Here's the wonderful thing, though: light is additive. And that's a simple concept. For example, if you put your light meter...

Lou (02:02:41):

Oh, you got one.

Jeff (02:02:41):

I had the feeling that at some point we're going to talk about a light meter.

Lou (02:02:45):

That's funny.

Jeff (02:02:46):

So this is one of the less expensive light meters. This is the Extech LT300. And so here's the external probe, and you can get this NIST certified for about $300, but I can get it online for about $140 without a certification. And you know what? Good bang for the buck. We tend to use the Konica Minolta. They're more expensive, but a lot of times this will do the job for you.

(02:03:25):

So if I get my light meter here, if I turn this on, I can get a reading here: how much light is coming at me? I have lights aimed at my face so I don't look as old as I am. If I turn it away, if I turn it more down, you can see the reading goes down, and if I turn it more towards the light, the reading goes up. So you can get the amount of light from a streetlight, and then you can get the amount of light from a headlight, and you can add it up: is it enough light to illuminate that pedestrian? You can do it graphically. I've seen some people do just wonderful jobs drawing graphs and showing, when the car is this far away, there's X amount of light here. When it's this far away, there's X plus 10 lux. You can just add light to what light is there, wherever the pedestrian was or the object. That's usually the easiest way to explain it.
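
The "light is additive" idea is easy to sketch in a few lines. The illuminance values below simply sum; the headlight term uses a point-source, inverse-square approximation, and both the streetlight reading and the beam intensity are assumed illustration values, not measurements from any case:

```python
# Light is additive: illuminance (lux) from independent sources simply
# sums at a point. The headlight contribution below uses a point-source,
# inverse-square approximation (E = I / d^2), a simplification of real
# beam patterns.

def headlight_lux(intensity_cd: float, distance_m: float) -> float:
    """Illuminance at distance_m from a source of intensity_cd candela."""
    return intensity_cd / distance_m ** 2

STREETLIGHT_LUX = 2.0      # assumed ambient reading at the pedestrian
BEAM_CD = 20_000.0         # assumed beam intensity toward the pedestrian

for d in (120, 60, 30):
    total = STREETLIGHT_LUX + headlight_lux(BEAM_CD, d)
    print(f"car at {d:3d} m: {total:5.1f} lux on the pedestrian")
```

The graphical presentation Jeff describes is essentially this table plotted against distance: a fixed ambient baseline with the headlight contribution stacked on top as the car approaches.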

(02:04:52):

One thing that we caution when it comes to nighttime is that light is only one measure. We really have to consider the acronym.

(02:05:04):

CLAPS: contrast, lighting, anticipation, pattern, and size. And remember probability and uncertainty. If you have a satellite falling from the sky, that's a really uncertain event, and it might be a white satellite with a light on it, and you're going to hit it full speed.

Lou (02:05:21):

Yeah, I remember that tennis ball study that you did a long time ago, where you hung the tennis ball with retro-reflective tape right in the driver's path. And everybody hit it, right? I think everybody hit it. It's very detectable, but it doesn't make any sense to you. There's nothing to tell you to stop.

Jeff (02:05:38):

And the surprising thing is, the people that hit it, we asked them, "Did you see it before you hit it?" And more than 70% said, "Yes, I saw it before I hit it." And so you say, "Why?" And they go, "I thought it was attached to something off the road." Which you look at and you say, "How could you think something right in front of your face is attached to something off the road?" And that's one of the biases that proper methodology will eliminate. A lot of post-impact investigators might look and say, "Oh my goodness, I can see that light from 692 feet away," which is what our subjects could see that light from, 692 feet away, but they all hit it. And so there's a difference between visible and recognizable. Now, I sent you a sentence last night.

Lou (02:06:39):

Yes, I saw that. I hearted it because it was fantastic.

Jeff (02:06:39):

How many Fs?

Lou (02:06:47):

How many F's? Yeah. "Finished files are the result of years of scientific study combined with the experience of years." So how many F's are in there is the question. 1, 2, 3. I say 3.

Jeff (02:07:10):

Okay.

Lou (02:07:12):

Failed.

Jeff (02:07:13):

Is that your final answer? I'll give you second chance.

Lou (02:07:17):

Okay. Finished, files, two, are the result... Oh, interesting. I didn't count the "of"s, as far as I can tell.

Jeff (02:07:26):

There we go, 1. 2.

Lou (02:07:28):

So what is up with that? You set me up for podcast glory here.

Jeff (02:07:35):

So that's a common psychological thing. So read that sentence again. Read that sentence again.

Lou (02:07:45):

Finished files are the result of years of scientific study combined with the experience of years. So did I miss three F's then during my first count?

Jeff (02:07:55):

Yeah. And so now, as post-impact investigators, we look and we go, there's six F's out there. How did he miss six F's? Well, we are programmed to look for certain things in certain areas, and if we have a violation of our expectation, if we have greater uncertainty or lower probability, our response suffers. And so of that acronym, CLAPS, contrast, lighting, anticipation, pattern, and size, pattern is the big P. And so if something is different than the pattern... So for example, animals with camouflage fur or skin, that pattern allows them to evolve. And now we have a pedestrian wearing all black clothing and carrying a flashlight. Let me ask you, is that a pattern?

Lou (02:09:04):

No.

Jeff (02:09:04):

And the answer is no. Next: pedestrian wearing all black clothing, carrying a white bag, a big white grocery bag. Now, I hear a lot of times, well, it's a big white grocery bag, you'd see the white grocery bag. And I'd ask them, "Well, I'm driving down the road, and let's assume I see a rectangular white object floating down the road. Do I know what it is? Do I know where it is?" And this is one of the things that people neglect. A driver has to know where it is, and if you don't know what it is, it's impossible to know where it is, because it could be a big, big white thing far, far away, or a small white thing very near-

Lou (02:10:03):

Which that tennis ball study-

Jeff (02:10:05):

Or a flashlight. So if we don't know what it is, it's nearly impossible to locate where it is. And if we can't locate it in our path... When's the last time? If you are truly driving with a hair trigger and you are really anticipating as best you can, and highly alert, when's the last time you got bug-eyed, white-knuckled, locked up the wheels, smoked the tires, only to learn that it was a false positive, only to learn that there was nothing there to respond to?

Lou (02:10:47):

It's never happened.

Jeff (02:10:48):

It doesn't happen. And the reason it doesn't happen is because the driver has to be relatively sure of what they see before they're willing to lock up the brakes. And if we understand that that is the way we drive, that's what an ordinary driver does. No matter what state you look at, whether it's Texas and Illinois that say ordinary driver, or I think Connecticut says prudent driver, or reasonable driver. Other states use some kind of term like that: ordinary, reasonable, prudent driver. If this is what ordinary, reasonable, and prudent drivers do, then that's the standard of care that we should hold drivers to.

(02:11:37):

That's the numbers we were comparing our driver to. Right? Now, can auto manufacturers and road designers improve on that? Well, yeah, sure they can, but can we improve the human? Well...

Lou (02:11:55):

It doesn't seem like it's happening. Not for a million years or so. Yeah.

Jeff (02:12:00):

Yeah. Well, we'll have to evolve a tapetum or something to see at night.

Lou (02:12:04):

Yeah, that would help with my nighttime woes as well.

Jeff (02:12:08):

Or blue eyes.

Lou (02:12:10):

Really? So people with blue eyes can see...

Jeff (02:12:12):

Yeah, one blew this way and one blew that way.

Lou (02:12:16):

I figured it must be a setup. I'm like, I'm not a biologist by any stretch of the imagination.

Jeff (02:12:21):

So every time I see this one character on SpongeBob that has eyes out here, I think, he would not hit a stopped vehicle on a highway because he has eyes out here, right?

Lou (02:12:37):

Yeah.

Jeff (02:12:38):

But for us...

Lou (02:12:41):

Yeah. Unless you can move those, you can't detect that swell rate, that closing speed accurately.

(02:12:50):

That's interesting.

Jeff (02:12:51):

Not in time to avoid the crash in many instances.

Lou (02:12:55):

So that's where we need that machine help. Like we were saying earlier, I love that idea of combining the strengths of the human with the strengths of what machines are capable of. And it seems like we're heading down that path with ADAS, and it's becoming omnipresent and not that expensive. You can get it on a brand new Corolla, you know?

Jeff (02:13:16):

Sometimes. Sometimes. But then other times I'll sit through a conference of human factors people and safety people and automated vehicle people, and they'll spend half the conference discussing how a car can better communicate and curtsy to a pedestrian. And so I ask, how many crashes does that lead to?

Lou (02:13:47):

Yeah.

Jeff (02:13:48):

So there's a lot of research now to get an automated vehicle to behave like a human. And so as humans, we look at pedestrians, sometimes we can make eye contact. We go, "Gotcha. Okay. Yeah, go ahead." But how many times do we investigate crashes where the pedestrian says, "I looked him right in the eye and then I started crossing and he hit me."

Lou (02:14:15):

Yeah, no, it's usually I looked in the cockpit and I saw that they were going the other way, but I went anyway.

Jeff (02:14:23):

Yeah. Or we get both or I saw the pedestrian and he looked like he was going to stop. So I kept going. Right?

Lou (02:14:29):

Yeah.

Jeff (02:14:30):

And so we know there's miscommunication among drivers, but sometimes I hear researchers act as if drivers got it and automated drivers should get it. And they're looking at these jockeying events that, if they do lead to crashes, are very low delta-V events and not likely to seriously injure somebody.

(02:15:02):

And so look at the gap acceptance research: for most pedestrians, if the car is more than six seconds away, he's going to cross. If the car is less than five seconds away, he's probably not going to cross. And so there is no communication behavior back at that distance. There is no "I see you" six seconds away.
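
The gap-acceptance rule described here can be sketched as a short function. The six- and five-second cutoffs are the values stated in the discussion; treating the five-to-six-second band as "uncertain," and the unit-conversion helper with its example numbers, are illustrative assumptions rather than findings from the research.

```python
def pedestrian_crossing_decision(gap_seconds: float) -> str:
    """Rough gap-acceptance rule from the discussion: pedestrians generally
    accept gaps over ~6 s and reject gaps under ~5 s. Modeling the 5-6 s
    band as 'uncertain' is an assumption for this sketch."""
    if gap_seconds > 6.0:
        return "likely crosses"
    if gap_seconds < 5.0:
        return "likely waits"
    return "uncertain"

def gap_seconds(distance_ft: float, speed_mph: float) -> float:
    """Time gap between an approaching car and the crossing point."""
    FT_PER_S_PER_MPH = 1.466  # 1 mph is about 1.466 ft/s
    return distance_ft / (speed_mph * FT_PER_S_PER_MPH)

# Hypothetical numbers: a car 300 ft away at 30 mph is roughly 6.8 s out,
# so under this rule the pedestrian likely crosses.
gap = gap_seconds(300.0, 30.0)
print(round(gap, 1), pedestrian_crossing_decision(gap))
```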

Lou (02:15:27):

Yeah.

Jeff (02:15:27):

So we see some researchers are researching what they think is important rather than looking at what the crash data tells us is the problem. And I think we should first look at the crash data.

(02:15:47):

And so I really praise some researchers... There are some great researchers doing some great research, and they're looking at the crash data and saying, "How do we fix this problem?" And that approach really excites me, because, going back to before, data tells us a story. Data tells us behavior. Real-life crash data is like scouting the enemy. And so if we want to better attack the enemy...

(02:16:30):

If I were to tell you that all crashes, all attacks, are going to come from your front wall, well, why would you put an equal number of troops on all four walls?

Lou (02:16:48):

Yeah.

Jeff (02:16:48):

Right? So why would you put your research into all four walls? If you know, for example, that 80% of all pedestrian fatalities occur on arterial roads, why would you then do a research study on a residential road? Right?

Lou (02:17:06):

Yeah. And we're all subject to that in all of our work, where you're very efficiently pursuing something that's not the problem. It's like, "Well, I worked all day on this and I did a great job." "But actually that didn't matter at all. You should have been working on this instead." And identifying what "this" is, is very important before you start marching down that path.

Jeff (02:17:29):

Same thing with rear-end crashes. When more than 70% of your crashes involve a lead vehicle that's traveling less than 10 miles an hour, why would more than 90% of your research be two vehicles traveling the same speed?

Lou (02:17:48):

Yeah, yeah. And those are only going to be minor. Yeah, minor incidents.

Jeff (02:17:52):

Yeah. 90% of all the human factors rear-end crash studies involve two vehicles traveling the same speed, which doesn't really lead to crashes. And as we know, if it does, it's a very low change of velocity and likely very low speed and likely no injury if there is a crash. And somebody could even argue that those crashes improve the gross national product, because it employs auto repair shops and it employs taxi drivers and it employs...

(02:18:29):

So I'm concerned with all events where I can't press Control-Z. In other words, if I can't do an undo, if I can't make everybody back to a hundred percent in a month or so, then I'm concerned about that event and...

Lou (02:18:50):

And it's worth your effort.

Jeff (02:18:53):

That's where my effort is going to be focused on.

Lou (02:19:00):

I love it. I think that's really important.

(02:19:01):

So switching gears, a little bit more of a pointed question. In attending these conferences and rubbing elbows with a lot of autonomous vehicle researchers, do you have a take right now, and it seems like we're all going to be wrong when we make this estimate, on when fully autonomous vehicles will be the majority of cars on the road?

Jeff (02:19:22):

Good question. My personal opinion, not based on any research...

Lou (02:19:28):

It's all feeling it seems at this point.

Jeff (02:19:30):

Yeah. And I'd say the manufacturers could give you a better answer if they are allowed to be honest. But I view partial automation very much like Volvo did. For example, one of their researchers, he's now at Waymo, Trent Victor, gave a presentation and he said, "The automated vehicle issue is the pop-up problem." So anybody that follows baseball or watches baseball knows that every once in a while, the pitcher will throw the ball and the hitter won't hit it well, and it'll go straight up. The ball will go straight up in the air, and it's so easy to catch that it creates a dilemma, and the pitcher looks at the catcher and the pitcher looks at the first baseman.

(02:20:39):

And the first baseman says, "You got it." And the pitcher goes, "No, I think you got it." And then the pitcher goes, "No, I got it." And then the first baseman says, "No, I got it." And then they go, "No, you got it. I got it." And then they both watch the ball as it lands on the ground, right?

Lou (02:21:00):

Yeah. Seen that.

Jeff (02:21:00):

And so Trent Victor referred to automated vehicles as the pop-up problem. And it's even worse, because the better the vehicle is at earning your trust, the worse you are going to do when it fails. So when that automated vehicle feature shuts off and then something transpires, right? If it shut off, it shut off for a reason, probably because you're going around a curve, or because now the road lines aren't there anymore, or the degree of difficulty just got higher for your car, and it probably got higher for you as well.

(02:21:55):

So now you're entering a more complex area and the car just dumps you back to manual mode. And the more engrossed you were in whatever you were doing when you were in automated mode, the slower you're going to be to come out of that. And so that's the problem, is the better the car, the slower the driver.

Lou (02:22:21):

Yeah, that makes sense. They're trusting the system and they're kind of checked out.

Jeff (02:22:25):

Right. Now... And this is why some manufacturers, I've noticed there's a couple manufacturers that came out with vehicles in the last couple of years that used to have lane centering. Now they have lane keeping. Now, basically, why would a manufacturer take away a feature? And the reason they'd take away the feature is they saw their drivers trusting it too much. And so now rather than having lane centering where you go, "Yoo-hoo, I got no hands," now you have lane keeping. So now your vehicle bounces between lines. And so now your car's telling you, "Dude, you got to be on your steering wheel, otherwise you're not in control."

Lou (02:23:19):

Yeah.

Jeff (02:23:20):

So unless you're going to just bounce between lines and trust the system when there's no reason to trust it, right? You're more likely going to keep your hands on the wheel there, and you're going to say, "Okay, that's a nice feature," because it's supplementing the driver. It's telling you, "You drive the car, we'll help you stay in your lane." And so that's one feature.

(02:23:54):

I know there's one vehicle that did that. I think there's been a couple models that went away from lane centering to lane keeping. And I think it's likely due to that reason to let the driver know you're in command. I've heard some engineers from companies suggest that they think the best approach is to, if you're going to attack a hill, conquer it. If you are going to take control of a vehicle, take control of the vehicle, it's either the driver's in charge or the car's in charge.

(02:24:39):

And so driverless vehicles... Some people, well, one or two manufacturers, have gone right to, you know what? They're not even interested in the levels of automation. They're just looking into driverless vehicles, because they've just flat out deemed that they can't get there from here going partway.

Lou (02:25:06):

That handoff is too cumbersome. And that makes sense: you either supplement and help them in a time of crisis, or drive the car fully. But if you're going to ask them to take over in an emergency situation, the research, it sounds like, has been pretty clear so far that that is not going to end well quite often.

Jeff (02:25:29):

Well, you know what? I don't want to say it doesn't end well quite often, but there are instances that it might not end well. Right?

Lou (02:25:42):

Okay.

Jeff (02:25:42):

Rare instances, but there are still instances. Everybody should be striving to get a little bit better all the time. But I'm sure you've experienced the same thing. A vehicle with a level of automation five years from now is a completely different vehicle than a vehicle with that level of automation today.

Lou (02:26:08):

Yeah. Yeah, I think that's going to be interesting to watch, that evolution, and see how far they go. And I know some companies have done it pretty well. Tesla obviously has Full Self-Driving out now, and if you're on Twitter, scrolling the internet, you see people driving from San Francisco to LA without ever intervening, and that seems to be going well at times. And then you have GM with their Super Cruise, and they're like, we're going to map out every road, and once we get that road mapped out, then you can use the system. And that seems to be working pretty well as well. But once you get into the more free-for-all and are asking for full autonomy on any street, with a lot of autonomous vehicles interacting with each other, it's obviously been a very difficult task to handle, and I think everybody thought it'd be a little bit easier than it was at the onset.

Jeff (02:27:05):

Well, and then what about the owner that has a Cadillac with Super Cruise and then they go buy a Tesla? Both manufacturers have a very, very different approach to how they handle that. And so Cadillac has the approach that they're going to be the schoolmarm and they're going to make sure you obey the rules. And they have their ruler and they might slap your knuckles every once in a while: "Put your hands on that steering wheel, young man, otherwise we're going to turn off on you." And they're constantly... If you don't keep your hands on the steering wheel, you're going to get a warning. And if you're a little overtired, you're going to get a warning. So I was doing a reenactment, probably two o'clock in the morning, the typical reenactment, and I think probably driving to the airport, and I get the warning that, "You should pull over and take a nap."

Lou (02:28:25):

Oh, wow.

Jeff (02:28:25):

And it's like, I'm not tired.

Lou (02:28:29):

I swear.

Jeff (02:28:30):

I don't understand why this went off on me.

Lou (02:28:33):

Oh, man. On that front, what's going to change in the next five years with the autonomous vehicles? How do you see the industry as a whole, not just autonomous vehicles, but collision investigation changing in the next five to 10 years? How do you think that that evolution is going to continue?

Jeff (02:28:57):

Well, I came in at a beautiful time. I feel like I learned with the profession as it grew. There wasn't as much to know when I started in crash reconstruction. Now it's far more sophisticated than it was, and it's only going to get more sophisticated going forward. And so I just see the need for a team approach, and we're starting to see it now more in litigation as well.

(02:29:51):

Early in my career, it wasn't uncommon that it'd be me, one expert, against one expert. And now it's not uncommon that there's one automated vehicle expert, and one expert that downloaded the vehicle and imaged the vehicle, and another expert that looks at biomechanics, and another that looks at accident reconstruction, and another that looks at human factors. And I would say almost half the cases we have, there's teams of experts, multiple experts in the case. And I don't see it getting any simpler than that, because there's just a lot of information in our field. And now we got people going out mapping vehicles that we can compare. This Lightpoint company.

Lou (02:31:00):

Heard of it. Yeah.

Jeff (02:31:01):

And then somebody else going out and teaching how we can do photogrammetric analysis.

Lou (02:31:09):

Heard of that too.

Jeff (02:31:12):

And I sit through that class and I say, "Yeah, I'll hire Lou if I..." I look at that and I go, "Okay, I've reached my bandwidth." You know what? I think actually the growth of our industry has helped me. It's encouraged me even more to stay in my lane. And I do what I do all the time. I kiddingly say all the time, my goal is to be like Kentucky Fried Chicken. I do one thing.

Lou (02:32:00):

I think that's... Yeah. That's so insightful, and it's very consistent with the way that I'm currently running my consulting practice, as I'm sure you know, where I only do motorcycle collisions now unless it's a photogrammetry project. Those are my two specialties. But should I be the guy that goes and downloads a 2022 Freightliner, or images it, which, like you're saying, is the correct term, we'll get in trouble for saying the word download nowadays, but should I be the guy to go download that Freightliner? No, I will probably miss some small subtlety that somebody who's made an entire career of just downloading heavy trucks will pick up, and it might be useful to the case. So like you said, I've learned to stay in my lane, and it's a multidisciplinary effort at this point. I've never really thought about it in the way that you just mentioned it, but it's so true.

(02:33:02):

With how sophisticated it is, who has the mental bandwidth to be good at all of that? Everything that goes into a current reconstruction, very few people.

Jeff (02:33:15):

Well, and you know what? I've seen some police agencies have multidisciplinary teams, like a community approach, where a group of four or five police agencies get together and form a crash reconstruction unit, and they'll have one person doing the mapping and another person doing the recon, the math, and another person doing the human factors end of things. And I see that being an approach that's effective in many different ways.

(02:33:53):

Number one, it's effective on the budgets, in that police department A only has to fund maybe two or three guys, and yet they have a whole team that can come out. And so I've seen some very successful teams. Now, some state police agencies just have the bandwidth. Larger agencies like the California Highway Patrol, the Wisconsin State Patrol, the Michigan State Police, they have numbers and they can form their own multidisciplinary teams. But for smaller police agencies, I really like the idea, and I've seen some good work out of some of the agencies where they get together and one or two guys are specialized in each different area, and each department has two or three guys that are trained, and then they keep them focused on what they do well.

(02:35:00):

And that's worked out very well for some agencies, both financially, so it's not a burden on the budget, and also in the quality of the work.

Lou (02:35:19):

Yeah. And it's great in this industry. There's some things that I try to stay hands-off on, like heavy vehicle downloads, because it's so sophisticated and so specialized, but as a reconstructionist, I have to download a 2020 Camry, which generally is not very difficult, but at times things can get a little tricky. And I feel very fortunate to have colleagues that are willing to help out, Rick Ruth, Rusty Haight, Brad Muir, people who specialize in EDRs, where I can call them up and get trained up on this little specialty within the discipline very quickly just by reaching out to somebody like that.

(02:36:00):

And that allows me to at least remain broad enough to do what I think is essential and then pass off everything else. I'm not touching biomechanics, I'm not touching heavy trucks, I'm not touching human factors, especially at nighttime. I'll do rider stuff because of all the work you and I have done together and published, but I am with you. It's rare that I'm the only guy on the expert team. It's tough to do that nowadays.

Jeff (02:36:27):

Yeah. And off-topic a little bit, I don't know if you've seen the Kondapalli study.

Lou (02:36:36):

I haven't. No. Is that new motorcycle study?

Jeff (02:36:39):

Yeah.

Lou (02:36:40):

Wow.

Jeff (02:36:41):

Yeah. And they looked into response time of several different events, real life. Riders, two-wheelers out there, and dummies being pulled out into their path.

Lou (02:36:57):

Oh, hey, that sounds familiar.

Jeff (02:37:00):

Yeah.

Lou (02:37:00):

That was some research that Jeff and I did. I'm wondering, I thought we did the research in 2009 and then it got published subsequently, that motorcycle behavior study.

Jeff (02:37:12):

It was published in 2011 and 2014, right? And then we met in Massachusetts in 2015. Right.

Lou (02:37:27):

That's right.

Jeff (02:37:28):

And then that got published in 2017, I think.

Lou (02:37:31):

Yeah. And that was, we won't go too deep into it since we're three hours in. But that was analyzing the response of drivers and riders, so people who both drove vehicles and rode motorcycles. And we sent them around this course to see how they would respond to certain stimuli and if it was different, whether they were on a motorcycle or driving a car. And that was the beginning for us.

Jeff (02:37:57):

And to clarify, when you say course, it was a course where we told them to drive through a town, right? Open roads.

Lou (02:38:05):

Yep.

Jeff (02:38:07):

And you know what? That was very insightful in a lot of ways, particularly when we were starting to... Again, the data directs you to better research questions for your next study. And so we did the study out in Arizona, and there wasn't a lot of traffic out there, and then we did it in Massachusetts and it was a fair amount, normal traffic in the area, and we were starting to see different data. And then we started discussing, "Well, why would we get different data? Why different results?" And then we started testing riding alone versus riding in a group. And...

Lou (02:38:57):

That's right.

Jeff (02:39:00):

I think that really told us that our riders in Arizona were told to follow a lead vehicle. And so they were riding very much like a rider in a group, trusting the lead driver to do some of the work. And then in Massachusetts, we told them, "No, here's the route. Go out on your own."

Lou (02:39:25):

Well, that was always a little bit nerve-wracking. Equipping their motorcycle with a $30,000 data acquisition system. They're under your watch and you're just counting the seconds until they get back.

Jeff (02:39:37):

Yeah. Yeah.

Lou (02:39:38):

But that was great. Great data.

Jeff (02:39:40):

It was great. There's still a lot of data there.

Lou (02:39:47):

There is. And so one of the questions I don't think I've asked you yet, but is on my list, and this is a call for anybody who's watching this. First of all, like Jeff is saying, there's so much data in that dataset, and I think we'd both be totally willing to share it with somebody who wants to pore over it.

(02:40:03):

But one of the things that we don't even have in the literature now from my end, is the average acceleration rate of motorcyclists. What is a typical acceleration profile of a motorcyclist? And that data's sitting right there, VBOX data with video, and I just haven't had time to look at it. Wade hasn't had time to look at it. You haven't had time to look at it.

Jeff (02:40:22):

Well, I did look at the acceleration rate of the left turn across path.

Lou (02:40:27):

Of the riders.

Jeff (02:40:29):

So the riders and drivers. So if you recall, they were on, I think, Amity Road turning onto Lincoln, when they were turning. Where we had the Sasquatch cam.

Lou (02:40:47):

Oh, yes, I remember that. Yeah. And that was a pretty tough turn where they had to get across quickly if they wanted to reduce any injury chances.

Jeff (02:40:51):

Acceleration was identical. Cars and riders.

Lou (02:40:54):

Okay, that's interesting. And that's consistent with my thought is it's a human element. Nobody is accelerating as quickly as they possibly can every time they take off on a motorcycle.

Jeff (02:41:06):

Yeah, I'm sorry, that was Route 9, turning on to Lincoln. Right.

Lou (02:41:12):

Okay.

Jeff (02:41:13):

And then they drove up Lincoln and they got to Lincoln and Amity, and that's where that big hedge was on the right-hand side. And there was a sight obstruction. And so we were looking into particularly the glance behaviors there. And so again, in the first second of acceleration, riders and drivers were identical. In the second second of acceleration, riders gained about two miles an hour on the drivers. In seconds three, four, and five, again, identical accelerations. And so yes, riders did accelerate faster, but not in every phase of the acceleration.

(02:42:00):

So think of it this way: when you're starting out, I don't care if you're in a rickshaw or on a motorcycle or a bicycle or in a car, you still have to make sure it's clear to go. And so you still have that edging out and time to accelerate, that first second or so. And then after that is where you have the maximum acceleration, and that's where riders beat drivers. But then after that, you're reaching your desired speed. So it's a behavior again. Where you have driving behaviors, we don't see much of a difference at all. Where we see acceleration behaviors, that's where you're going to likely see riders go a little faster.
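
The phase-by-phase comparison described here can be sketched numerically. Only the roughly two-mile-an-hour gain riders showed in the second second comes from the discussion; every other per-second gain in the sketch is an invented placeholder, used just to illustrate the shape of the two speed profiles.

```python
# Per-second speed gains (mph) from a stop, seconds 1 through 5.
# The +2 mph rider edge in second 2 reflects the finding described above;
# all other values are illustrative assumptions.
driver_gain_mph = [5.0, 6.0, 6.0, 5.0, 4.0]
rider_gain_mph  = [5.0, 8.0, 6.0, 5.0, 4.0]

def speed_profile(gains):
    """Cumulative speed (mph) at the end of each second, starting from rest."""
    speeds, v = [], 0.0
    for g in gains:
        v += g
        speeds.append(v)
    return speeds

drivers = speed_profile(driver_gain_mph)
riders = speed_profile(rider_gain_mph)

# After second 2 the rider holds a constant ~2 mph lead: the difference
# comes from one phase of harder acceleration, not the whole launch.
for t, (d, r) in enumerate(zip(drivers, riders), start=1):
    print(f"t={t}s driver={d:.0f} mph rider={r:.0f} mph lead={r - d:.0f}")
```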

(02:42:57):

I think I looked... We had that first intersection. They came from our meeting location, the parking lot, and they went down a hill and they got to a traffic signal and then they turned left at that signal. I looked at the acceleration there, and then I looked at the acceleration at Lincoln and Amity with the obstruction, and then I looked at the turn in, and that's all I saw. That's the only time I saw riders were different than drivers.

Lou (02:43:35):

So was that published?

Jeff (02:43:37):

No.

Lou (02:43:38):

Oh, okay. So the data exists. It just needs to be written up. I mean, the data's been analyzed. It just needs to be written up.

Jeff (02:43:45):

Well, basically you got me thinking. One time we had a conversation and I decided to look at a couple of the intersections to see... We were concerned about the data and whether we could calculate it, did we have the data to do it? We certainly have the data to do it. And I was wondering if there was a significant difference. And it does appear that there will be when you have something other than the left turn across path.

Lou (02:44:27):

Yeah. Yeah. That's cool. I'd love to. Let me know. I've got plenty of free time, as do you, I'm sure. So let's get together on that one.

Jeff (02:44:37):

So yeah, basically what I was doing is I was looking at a pilot.

(02:44:41):

Just seeing if it's feasible-

Lou (02:44:43):

Yeah, see if it was worth it.

Jeff (02:44:44):

Yeah, and I expected that we were going to get back in touch, and you and Wade were talking about doing it. I'm telling you now, the feasibility, yes, it's feasible.

Lou (02:44:58):

Okay. Yeah, we got to get that together. On that front, are there any other gaps in the literature that you currently see that should be addressed, where other researchers' help could be valuable?

Jeff (02:45:16):

Oh, goodness gracious. Almost every class I teach, somebody brings up something. I can't think of any instance right now, but sit through one of my classes and I guarantee you somebody's going to go, "Hey Jeff, what's the average response time for this crazy situation here?" Sometimes I just have to say, "Well, there isn't any research on that," but fortunately that's happening less all the time, because there's been some really beautiful research that, like I said, all sings the same tune.

(02:46:10):

When I see current researchers trying to model driver response times, lumping all different events together as if they're one, or speculating that, "Well, response time is based upon from when the driver could recognize to when something else can happen, and if it's a surprise situation or a non-surprise," in that sentence are four subjective words that you can't quantify. I'm going, "All right, great." Somebody with hindsight bias is going to go look at some video, and they're going to categorize what a response time is from some arbitrary starting point to some arbitrary ending point, and put very different events together and use that as a comparison.

(02:47:14):

It frustrates me, because there's data out there for, I would say, just about every crash type. We can see how drivers have responded in certainly more than 90%, probably 95%, of all crash types; somebody's done a response time study on it. There's data out there. The question is, are you too lazy to get it?

Lou (02:47:50):

Are you analyzing it properly? Yeah, that lump-sum thing sounds like a problem. Then, like you mentioned before, the ambiguity with when to start the clock. That's so important. That's one of the biggest things that I'm always looking at when I'm utilizing the models from your research: okay, in this collision type, where are the researchers starting the clock? If it's a left turn across path, is it the initiation of the leftward movement? If you're coming into a roadway, is it when the car passes the stop line, or is it when the car enters the intersection? Where you start that changes everything.

(02:48:29):

It's really important, and I've seen in some of the research people mess around with that, so that it becomes very difficult to compare to other research.

Jeff (02:48:39):

That's exactly it, is they're messing around and not offering an expert opinion. When somebody takes my class and somebody asks me to be an expert, I take it very seriously, and I take it as, no one wants to know what Jeff Muttart thinks, they want to know what Jeff Muttart knows. I view that as a big difference. What I know is what the research says drivers did in that situation. What I think I know could be biased.

(02:49:22):

Who cares what I think? It's how drivers have responded. It's what I know that's important. I hear quite often experts say, "Well, some experts can be biased." And I say, "Well, that expert could be you. If you're in this profession and you are giving your personal opinion then that expert's you."

Lou (02:49:49):

It's like, I don't want to hear your opinion, I want to see your data. That's the way I feel about that. Hopefully by the end, when I'm done, nothing here is my opinion; it's all just the data applied to this crash.

Jeff (02:50:01):

Right. Well, and I'm sure you've heard me say before where I kiddingly say, "Blame somebody else for everything." What I mean by that is cite the unbiased research, not personal opinion. Don't tell me that you think the response time is blank, don't tell me that you think response time changes due to age, don't tell me that you think response time would change because "this is a lot different than what the research would be." If you don't know the research, how do you know it's different than what the research would be?

(02:50:38):

Somebody said to me, when I was discussing the problem, that sometimes experts are very willing to accept the low-hanging fruit of a 1.5-second response time, an arbitrary number, except when it's somebody in their family involved. Then they want the right answer, they want everything. That's happened a couple of times, where I know experts have used just arbitrary numbers, and then all of a sudden it's a case that they're really interested in. All of a sudden, they want the research. My advice to all of us, including myself, is to treat every crash like you want to get it right and it's really important to get it right, because it's somebody's family. If we handle it that way, then we're going to give what we know, not what we think.

(02:51:50):

I think that will go a long way to improve the situation. The saying is, "Data gives everybody justice." Somebody once asked me, "Do you try to be fair in your analysis?" It's like, fair is way too stressful for me. I don't know what fair is. Is it fair that these people got in a crash? It's not fair. I can't deal with fair. All I can deal with is, is it right? Did I follow the right procedure, and am I giving good data? That's all we can do. We just got to be a cog in the system.

Lou (02:52:48):

Yeah, I love it. Follow the scientific method, and apply the data that is available, and admit, like you're saying in the classes, where there are holes in the data or the current literature and you just can't answer that part of the question.

(02:53:08):

A little bit more of an easy question here. Is there a tool that's in your kit right now that you don't think will be there anymore in 5 to 10 years? Something you're using now that the industry is going to evolve away?

Jeff (02:53:22):

Well, you know what, I can imagine some crash reconstruction programs might become outdated as we get more information and more data. If the images become more sophisticated, then maybe some become outdated, or some have to move to being supplemental programs rather than replacement programs for crash reconstruction. I think of the years that I did SMAC analyses. You find yourself at 4:00 in the morning. You started at 6:00 in the evening, and it's now 4:00 in the morning, and you're running your 48th iteration: I changed the friction between the two metals, and now I changed the tire friction, and now I changed this. You're trying to get the vehicle to land on the final resting position.

(02:54:50):

I think that kind of analysis has become outdated, especially now that we often already know the answers and have to prove that the answer was given to us in a proper way.

Lou (02:55:11):

From EDR data, and then confirming that via either momentum calcs or a simulation. It's a good point, because I know PC-Crash has built this optimizer where you can give it a range for each variable and just say, "It has to stick within this. They hit like this and they end up over there; tell me what their speeds were." As AI becomes more and more commonplace and advanced, I wonder if there's an integration coming between the simulation platforms and AI, where we can leave at 7:00 PM instead of 4:00 AM, combining our judgment and its power to create a simulation much more efficiently.
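The bounded-optimizer idea described here can be sketched as a random search: each variable gets an allowed range, and the search stays inside those bounds while minimizing the mismatch with the physical evidence. This is a minimal illustration, not PC-Crash's actual optimizer or API; the error function and parameter names are invented for the example.

```python
import math
import random

def rest_position_error(impact_speed, departure_angle, target):
    """Toy error metric: distance between a predicted rest position
    (from invented kinematics) and the measured one."""
    x = impact_speed * math.cos(math.radians(departure_angle))
    y = impact_speed * math.sin(math.radians(departure_angle))
    return math.hypot(x - target[0], y - target[1])

def bounded_random_search(bounds, target, iterations=5000, seed=0):
    """Sample parameters uniformly within their allowed ranges and
    keep the combination that best reproduces the evidence."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(iterations):
        speed = rng.uniform(*bounds["impact_speed"])
        angle = rng.uniform(*bounds["departure_angle"])
        err = rest_position_error(speed, angle, target)
        if err < best_err:
            best_params, best_err = (speed, angle), err
    return best_params, best_err
```

A real optimizer would use something smarter than uniform sampling, but the contract is the same one Lou describes: constrain each variable, then let the machine iterate overnight instead of the analyst.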

Jeff (02:56:04):

That goes back to the field becoming more sophisticated and moving with the evidence that we have. There's PC-Crash, and then HVE. I look at how each one of those has evolved from the first iteration of those programs. They're nothing now like what they were when they started.

Lou (02:56:43):

They've really advanced. Now with point clouds and photogrammetry and meshes and building all sorts of photorealistic environments within the simulation platforms at times, it's amazing.

(02:57:02):

Wrapping things up, I knew that you and I would talk forever because this is just like a normal dinner for us, except we were forced to stay on track a little bit more than we might if we're out having a beer. But just to wrap things up a little bit, I'd like to talk about what you're excited about for the future. I know you just changed the name of your company and you're doing a rebranding and you have a lot of changes going on over there that sound super exciting. So I'd love to hear a bit about where you're heading and what's blowing your hair back.

Jeff (02:57:40):

What we are excited about for the future is that, because we now have data for most crash types, we can look more specifically into what drivers need for good road design. What do drivers need to respond well? How do we decrease drivers' uncertainty? Think of it this way: say I ask you to slap the table every time my finger goes up. If my finger goes up exactly once every second, on the second, at one, two, three, four, five, six seconds, you can get your response time down to zero, because at 100% probability you can learn to match my timing. But imagine now you're a driver, and the average driver faces a near crash once every 10 years. Now what's your probability? Now you see the finger and you go, "What was that finger for again?"

(02:59:06):

That's exactly what happened in the Johansson and Rumar study, the study that AASHTO is relying upon for the road standard. They asked drivers, "Hey, would you mind being part of an unalerted driver study?" But they're still called unalerted, even though they know they're in a study. We're going to put a buzzer in your vehicle, but they're "unalerted." We want you to brake when you hear the buzzer, but they're "unalerted." The buzzer goes off, and they don't brake.

(02:59:45):

Oh, let's come back. Let's throw out those first three times that you guys didn't do well; let's practice. They know the stimulus, they know the response, they know it's a buzzer, and they drive down the road. It's just a question of when the buzzer goes off. Then they respond to the buzzer after practicing. The researchers then determined that when drivers responded to the buzzer without knowing when it would come, they responded 35% slower. They said, "Well, 35% more than two seconds, that's 2.7 seconds. Let's make it a 2.5-second standard." That's what your standard is based upon: drivers responding to a buzzer they knew was coming, knowing what to do about it, after getting to practice. That's the only study cited in the current AASHTO standard. It's not a comforting feeling that our entire road system is designed on a study of drivers who knew everything that was coming.
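The arithmetic Jeff quotes works out like this, using only the numbers from the discussion, roughly a two-second response time inflated by 35% for drivers who did not know the buzzer was coming:

```python
alerted_prt = 2.0        # seconds, practiced drivers expecting the buzzer
surprise_penalty = 0.35  # unalerted drivers responded ~35% slower
unalerted_prt = alerted_prt * (1 + surprise_penalty)
print(unalerted_prt)     # about 2.7 seconds, trimmed to the 2.5 s design standard
```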

(03:01:10):

What we want to do going forward is encourage auto manufacturers to at least look at the data we have. Don't speculate. Don't write a paper suggesting, "Well, when the driver perceives..." You don't know when a driver perceives. You can download the CDR, you can download the ECM, you can download an EDR, but you can't download the EEG. We don't have that feature, so we don't know when the driver perceived. We don't know when the driver's neurons fired. Wouldn't it be better if we had some scientific baseline, like forensic scientists are required to have, using a classical scientific approach of comparing what your driver did to what other drivers have done? That's what we strive to do. We strive to be that data source for crash investigators and for safety people.

(03:02:39):

We don't consider ourselves safety people, we consider ourselves data people. Our job is to help safety people. We collect information. I think our goal is to collect every driver response study that's been conducted and distill that and create a software program called Response, where they can put in the crash type they're interested in, and we give them the results of the studies for that crash type, how drivers have responded, how drivers have reacted, how drivers have accelerated, depending on what their question is. We just came out with our book for crash investigators.

Lou (03:03:31):

Yeah, I got the old one. I think this is the third edition.

Jeff (03:03:35):

This is the fourth.

Lou (03:03:36):

I was going to say, this is really helpful because ... Oh, okay, there's the new one. I think this one's autographed by you, so I would prefer if you could send me an autographed version. This is really handy for me, and I imagine for a lot of people who don't have your background. With this version, I can very quickly find the literature that helps me understand how I would expect the driver to respond in a certain situation and where the clock should be started.

Jeff (03:04:04):

Well, I even tried to do a little better job in this book of taking you through the steps of a classical scientific approach. We also have a word search at the back of the book and a lot more data. For every crash type, I give the studies, I give the common response time for that crash type and the studies it's based upon and what each study said. I view myself as sort of like a Cyrano de Bergerac or something like this. I'm not good enough looking to get the princess, but I can help you get the princess. I can tell you ...

(03:05:05):

When you're testifying, my goal is to be in your ear, giving you advice on what research has been done and what that research said, so you can tell the court what you know and not what you think. My goal is data. I've said it before: data does everybody justice. The more science that gets into the courtroom, the better, and the same goes for automated driving. Automated driving companies are claiming that their vehicle is better than a human driver, but they don't know what a human does. We do know what a human does. If you really want to know what a human does, take a look at the data in the book, take a look at the data in the software, and you can see what humans do.

Lou (03:06:11):

Where do people go to find you in that data?

Jeff (03:06:17):

Well, we're the Driver Research Institute. You can go to driverresearchinstitute.com, call (860) 861-1418, or email info@driverresearchinstitute.com. We've helped quite a few crash investigators, safety professionals, and automobile manufacturers. We get calls from quite different locations, and we're more than willing to help people who want to know what drivers have done in different situations.

Lou (03:07:14):

As evidenced by your willingness to donate three and a half hours of your time to talk to us when I know you're extremely busy. I really appreciate you taking the time. It's been a fantastic conversation. I learned a lot and I'm going to have to listen back to this and take some notes and grab some things to apply to my own recons, that's for sure.

Jeff (03:07:26):

Well, you know what? It's been fun Lou. Like I say, I am very proud of the way you've developed and you are very special in this field, and it's pretty cool to just kick back. You know what? I was saying, "Why do I feel so comfortable with some people?" You know what? You always feel comfortable with the kids you grew up with, your neighborhood kids. But the other thing is you always feel comfortable with the people you did research with. When you've spent day and night working together and then crunching numbers together-

Lou (03:08:23):

On an ironing board as a desk.

Jeff (03:08:29):

In hotel rooms and ironing boards and a lot of bad food and late nights in hotel rooms and stuff. You get what you put in. When you put in a lot of work, you tend to develop bonds with the people you put in a lot of work with. I do look back fondly as to the studies we've done together. I thought we did good work and I think we learned a lot.

Lou (03:09:10):

Absolutely, yeah. The respect and pride is mutual. Not that I had anything to do with the development of your career, but you absolutely did have something to do with the development of my career. I was thinking back to it when I took your class, I must've been like 26 or 27 when I said, "Hey, do you want to do some research together?" Who the heck says "Yes" to a 26 or 27 year old to do some big research together? So I appreciate that.

Jeff (03:09:37):

If you recall, my first answer was "Hell no."

Lou (03:09:40):

Oh, okay. I blocked that part out.

Jeff (03:09:43):

Yeah, it's like, "Oh my God, I'm so busy right now." It's like, "I don't know." Motorcycle study? Then you impressed me because you went, "No, look it. I can develop this automated system that triggers an emergency response for drivers." I went, "Oh, that's pretty cool."

Lou (03:10:08):

That's right. I remember that. That was before I had kids when I had the time to actually build something like that and that turned out really good. I remember showing up to the research and you had a backup methodology because you're like, "Well, just in case yours doesn't work," because it was like remote, wireless and for back then it was pretty darn tricky.

Jeff (03:10:30):

I thought so. By the way, like I told you right from the beginning, was I right or was I wrong? You lose data in studies, and you want to do everything you can not to lose data, because subjects are valuable, so you usually have a backup.

(03:10:54):

All right, this is my number one way. The last study we had, we had those arm bands that you had.

Lou (03:11:03):

Zero data loss on that study, which is one of my proudest professional accomplishments.

Jeff (03:11:08):

Zero data loss in that last study. For a live study, that's spectacular.

Lou (03:11:25):

All hail data. For a data junkie, you need to maintain it all.

(03:11:19):

Well, until next time, hopefully we'll have a chance to do this again. I really appreciate you taking the time and I'm sure I'll talk to you shortly.

Jeff (03:11:28):

All right.

Lou (03:11:28):

All right, thanks Jeff.

Jeff (03:11:29):

Okay, thanks Lou.

Lou (03:11:31):

Hey everyone, one more thing before you get back to business, and that is my weekly bite-sized email, To the Point. Would you like to get an email from me every Friday discussing a single tool, paper, method, or update in the community? Past topics have covered Toyota's vehicle control history (including a coverage chart), ADAS (Advanced Driver Assistance Systems), Tesla vehicle data reports, free video analysis tools, and handheld scanners. If that sounds enjoyable and useful, head to lightpointdata.com/tothepoint to get the very next one.