ALAN MOORE | AUTONOMOUS VEHICLES
Lou sits down with Alan Moore to discuss autonomous vehicles, ADAS, the battle between safety and distraction, getting to level 4, and advice on learning.
You can also find an audio only version on your favorite podcast platform.
A rough transcript can be found below.
Timeline of Topics:
00:01:23 Alan’s background
00:03:26 What is ADAS and how much should reconstructionists know about it right now?
00:12:25 The omnipresence of electronic data
00:12:26 Levels of autonomy
00:18:06 Do we need LiDAR for Level 3?
00:28:29 Facilitating autonomous driving with V2X communication
00:38:00 Fleet corroboration
00:41:14 Autonomous systems and distracted humans
00:50:06 Diving down the “ADAS knowledge” rabbit hole
00:55:32 ADAS intervention and the event data recorder
01:02:19 Testing ADAS for research, classes, and cases
01:21:07 Will autonomous vehicles kill recon?
01:29:32 How to keep up with changes and developments in our field
Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.
Lou (00:00:19):
This episode is brought to you by Lightpoint, of which I'm the Principal Engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analyses. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, you know it takes hours of searching for the perfect model, awkward conversations with dealers, and usually some cash to grease the wheels. Then back at the office, it takes a couple more hours to stitch and clean the data, and that eats up manpower and adds a lot to the bottom line of your invoice. Save yourself the headache so you can spend more time on what really matters: the analysis. Lightpoint has already measured most vehicles with a top-of-the-line scanner, Leica's RTC360, so no one in the community has to do it again. The exemplar point cloud is delivered in PTS format, includes the interior, and is fully cleaned and ready to drop into your favorite program, such as CloudCompare, 3ds Max, Rhino, Virtual CRASH, or PC-Crash, among others.
(00:01:13):
Head over to lightpointdata.com/datadriven to check out the database and receive 15% off your first order. That's lightpointdata.com/datadriven. All right, my guest today is Alan Moore. Mr. Moore is a mechanical engineer and principal of AB Moore Forensic Engineering. He specializes in vehicle accident reconstruction, vehicle design analysis, and mechanical engineering consulting. He's the developer and instructor of SAE's course, Accident Reconstruction, The Autonomous Vehicle, and ADAS. Mr. Moore holds a bachelor of science in mechanical engineering from Michigan State University and an MBA from the University of Florida. He's a licensed professional engineer, a board certified forensic engineer, and an ACTAR accredited accident reconstructionist. His experience includes nearly three decades of accident reconstruction and automotive engineering, having previously worked as a design engineer at Ford. Alan also serves as a high performance driving coach for aspiring race car drivers through the Porsche Club of America. Thanks for coming, Alan. That sounds fun. The instructing.
Alan (00:02:18):
Yeah, that is good, fun, scary sometimes, but good fun.
Lou (00:02:21):
Yeah, when you're not the one with the steering wheel, I imagine that yeah, it could be tough, like teaching your 16-year old how to drive or something at times.
Alan (00:02:29):
Almost more scary sometimes, but I've got a pretty high risk tolerance, so that helps.
Lou (00:02:34):
Okay. You're an autonomous vehicle expert. This is what we're going to be talking about mostly today. You teach SAE's course. I've heard great things. One of my engineers has been there and I'm hoping to take it soon. So I was wondering, as an icebreaker, what car are you currently driving? And I'm hoping you're going to tell me it's a 1965 Mustang or something like that.
Alan (00:02:53):
Nope. Nope, it is not.
Lou (00:02:55):
Okay. A Tesla then?
Alan (00:02:58):
I have a Tesla Model S, yep.
Lou (00:03:00):
Oh, cool. That's awesome. Yeah, I had the Model Y for a really long time and enjoyed it. Got rid of it when I sent my kids to private school and I'm hoping to maybe pick up another one before too long. See how that goes.
Alan (00:03:14):
Well, you might want to wait until they figure out how to keep the steering wheels on because they just had a recall on the Model Y, the steering wheel's falling off.
Lou (00:03:20):
It seems like an important component on a vehicle that is not Level 5.
Alan (00:03:25):
You'd think so. You'd think so. Exactly.
Lou (00:03:26):
And I thought we'd kind of start there with some of the definitions because Level 5, you and I know what that means, but probably a lot of other people don't. But even before we get to the levels of autonomy, so to speak, let's talk about ADAS in general. That seems to be more popular right now, and it's kind of a stepping stone to things like Level 5. What is ADAS and how's it going so far?
Alan (00:03:52):
Yeah, so ADAS is the acronym for Advanced Driver Assistance Systems. Once we all got used to calling it ADAS, SAE changed their terminology to just be ADS, or Automated Driving Systems. I'm set with ADAS, my class is named for it, I'm just sticking with ADAS. But the idea of ADAS is it's anything that helps the driver monitor the roadway and control the vehicle. That can be a full-time thing, like a self-driving system, but it's much more common to have it as an intervention system, like a collision avoidance system. So that's roughly what ADAS is.
Lou (00:04:30):
Yeah, and I remember I have a 2018 Toyota Tundra now that I drive, and when I was going to buy that, Toyota was like, "Hey, by 20...", I think 2020, they said, "Every vehicle that we pump out is going to have some sort of autonomous braking system, collision avoidance, lane monitoring, and things like that." So the majority of cars, it seems, even the mid-level cars that are being produced today, have some sort of, I don't want to say "ADAS system," but some sort of ADAS, I guess, is the right way to say it.
Alan (00:05:01):
Yeah, that's absolutely correct. So of passenger vehicles in the US, more than 90% of the vehicles sold today have some ADAS system on them, usually collision warning and brake assist. Toyota was the first to lead that charge. In 2019, 90% of their vehicles were equipped with that, so they were the first to really put it across their fleet. Another way to look at that is if you think of the adoption curve of how quickly vehicles come into the fleet, it's fairly slow, but by my math and research done by others, by about next year I anticipate that half of our two-vehicle crashes will have some type of ADAS componentry involved. So in half of our two-vehicle crashes, someone's going to ask, "Hey, why didn't this work, or did it work?" So I started teaching this class four or five years ago, and then it was kind of, who cares? We're not seeing this. It's on a couple of cars. Well, all of a sudden now it really matters.
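Alan's estimate that half of two-vehicle crashes will soon involve ADAS follows from a simple probability argument. As a hedged illustration (the independence assumption and the 29% penetration figure below are mine, not from the show): if a fraction p of crash-involved vehicles carries some ADAS feature, the chance that at least one of the two vehicles in a crash is equipped is 1 - (1 - p)^2.

```python
def share_of_two_vehicle_crashes_with_adas(p: float) -> float:
    """Probability that at least one of two independently drawn vehicles has ADAS."""
    return 1.0 - (1.0 - p) ** 2

# Fleet penetration under a third already puts ADAS in about half of crashes
print(round(share_of_two_vehicle_crashes_with_adas(0.29), 3))  # 0.496
```

This is why the involvement rate in crashes runs well ahead of fleet penetration: the equipped vehicle only has to be one of the two.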
Lou (00:06:00):
I remember hearing you when you started the class, say, "Hey, let's get ahead of the curve." If we're just learning about it now, are we still getting ahead of the curve or are we kind of catching up? What's your take on that?
Alan (00:06:10):
So if you haven't heard of it until today, you've got some catching up to do. I've been talking about this now for about four or five years, so people have heard bits and pieces of what I have to say on the subject, so I expect your clients today are going to be asking you what this stuff is and why it worked or didn't work. And one of my nightmares as a forensic expert is to be asked something I don't know about. I always want to know more about it before I hear from a client that they need to know what it is. So this is a last chance to get up to speed on this. Definitely.
Lou (00:06:44):
And I think that fear is what makes some of the good people really good in recon, is sitting in the chair and not knowing the answer to a very important question, keeps us up at night and makes us do the extra legwork.
Alan (00:06:57):
Yes.
Lou (00:06:58):
So I definitely appreciate having people like you around that are diving into it to lead the charge and help us understand what we need to know about these systems so that we can do what we need to do.
Alan (00:07:11):
Well, and there's two sides to it for an expert. One is understanding what it all is, and part two is deciding what to do about it. Just because you know what ADAS is doesn't mean you need to go get a master's degree in it and know everything about it. You just need to decide, for your practice, how much you're going to work with it and at what point you're going to tell your client, "Hey, we need to hand this off to someone else. We need to bring in someone who specializes in it." But I think the option of putting your head in the sand and not learning anything about it just isn't a viable option. So at what point, or to what level, do you want to learn about this field is the question that everybody has to answer for themselves.
Lou (00:07:49):
Yeah, and that's a really potent question I think in talking a little bit about how much each reconstructionist needs to know, and it actually reminds me a little bit of when the CDR tool became available, and I remember a lot of reconstructionists jumping back and being like, "Well, that's too much for me to handle. I'm not going to do that." And it's like, well, you have to if you want to be a relevant reconstructionist. And it seems to me there's some analogy here between autonomous vehicles and EDR where you have to know enough at least, you might not have to know everything. You don't have to be Alan Moore, but you have to have some foundation.
Alan (00:08:27):
And I've got a pretty good minimum criteria for that. I think everybody in our field needs to know what ADAS is, what the different pieces of it are, and they need to know how to figure out if it's on a vehicle. So if someone calls and says, "Hey, does this VW have collision warning? Does it have pedestrian detection? Does it have lane departure warning?", we should all be able to answer that question. Now the next question of should that have prevented this crash? That can go into the weeds pretty quickly sometimes. So I would expect a lot of people are going to say, "You know what, that's not my thing, but I know these other people are really good at that. Let's call one of them." And I think that's a good place to draw the line: what are all the technologies, and was it equipped on this vehicle?
(00:09:14):
And then from there you can decide how much you want to get involved with it. One way to think of it is like human factors: we all need to understand that reaction times exist. A lot of the time it's 1 to 1.8 seconds, roughly, depending on the scenario. But from there it can go into the weeds really, really quickly, and I think all of us get to a point where we've got to refer it to a human factors expert. And so I think that ADAS is the same way. You definitely have to know the basics. What is it, and was it equipped? Those are the two big ones.
Lou (00:09:45):
Yeah, I think on a lot of these forensic cases, the way that I picture it in my head is the accident reconstructionist is the quarterback. They're the one taking the call from the client. They're the one who is often figuring out what the landscape of the case is, and then they're figuring out what other experts should potentially be brought on board and guiding the client to figure out who to hire and who to help. And this seems no different.
Alan (00:10:09):
It's funny you say that. I do the same. I've never thought of it as quarterbacking, but anytime there's another engineer, another expert who is better at something, or even if they're just less expensive than me, I'll try to farm out a piece of the work. Speed analysis from video, for example: if it's at all complex, I farm it out to Adam Cybanski. I still do it myself, but his method is excellent, so I want his method to be on my cases. And lots of things like that I'll farm out to other people if they're better or less expensive.
(00:10:41):
And then I'll still do, what I always do though is I still do the basics of it myself. So if someone asks, I can say, "Yes, I did it," and I had this other method done as well. I've got two ways of looking at it. And in our field, I think it's always good to do things more than one way. How did you calculate speed? Oh, I used the crush method. Now we're going to have an hour discussion in my deposition on the strengths and weaknesses of the crush method. But if I use the crush method and EDR and speed from a video analysis, all of a sudden the attorney's going to give up asking me about speed because there's no way to argue all three. So multiple methods I think is good.
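The multiple-methods point Alan makes can be sketched numerically: treat each method (crush, EDR, video) as producing a speed range and report the band where they all agree. The ranges below are invented for illustration; this is a toy sketch, not a reconstruction protocol.

```python
def combined_range(ranges):
    """Intersection of (low, high) speed ranges; None if they don't all overlap."""
    low = max(lo for lo, _ in ranges)
    high = min(hi for _, hi in ranges)
    return (low, high) if low <= high else None

# Hypothetical estimates from three independent methods, in mph
estimates = {
    "crush": (38.0, 46.0),
    "EDR": (41.0, 44.0),
    "video": (40.0, 45.0),
}
print(combined_range(estimates.values()))  # (41.0, 44.0): all three agree on 41-44 mph
```

The narrower combined band is also why cross-examination on any single method loses its force: attacking the crush analysis alone doesn't move an answer that EDR and video independently support.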
Lou (00:11:20):
I totally agree, and that's exactly how I run things, especially on motorcycle crashes where the impact speed determination methods end up with a pretty notable range. But if I can do three of them or four of them and they're all saying the same thing, then I can walk in a lot more confidently. And like you said, the cross-examination is inversely proportional to the number of methodologies you're using to develop speed.
Alan (00:11:43):
Yes, that's absolutely true.
Lou (00:11:44):
That's the Moore equation, I think that's called.
Alan (00:11:48):
It's a new Moore's Law. I got to say, motorcycle speed determination has gotten a lot easier since the last WREX conference and the paper you guys did on it. I made Excel calculations based on that. I use that all the time. I think that is just wonderful.
Lou (00:12:02):
Okay.
Alan (00:12:02):
It gives us something, 0.2 based on real world modern data. I think that's great.
Lou (00:12:07):
Good. I'm glad. I'm glad you're finding it useful. I definitely appreciate that. And then simulation is the other big one for motorcycle speed determination. And back to one of your sentiments that I've heard you speak about before, and we were talking about a little bit before recording: the omnipresence of electronic data and how important that is to us. The reason that's huge in motorcycle recon now is we get the collision pulse on the target vehicle most of the time, and if we get that collision pulse and we know how much the motorcycle and the rider weigh, then we're in good shape for estimating impact speed.
(00:12:42):
And now I'm jumping ahead a little bit, so I do want to talk about that, the electronic data in the CDR report, what you get from Bosch and what you might get from vehicle control history and other things. But let's take a step back real quick and just kind of define the levels of autonomy. It's a spectrum and we don't necessarily need to go through each one, but there are five levels, or six depending on whether you count zero as a level, which is absolutely nothing. But if you could walk us through those levels briefly.
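The collision-pulse idea Lou describes rests on conservation of momentum: the impulse the struck car records as its delta-V is the same impulse, with opposite sign, applied to the motorcycle and rider. A rough sketch with made-up numbers follows; a real analysis handles impact direction, restitution, and post-impact motion, so treat this only as the core relationship.

```python
def motorcycle_delta_v(car_mass_kg: float, car_delta_v_ms: float,
                       mc_rider_mass_kg: float) -> float:
    """Speed change of motorcycle + rider implied by the car's recorded delta-V,
    assuming a collinear, planar momentum exchange."""
    return car_mass_kg * car_delta_v_ms / mc_rider_mass_kg

# 1800 kg car records a 4 m/s delta-V; motorcycle plus rider total 300 kg
print(motorcycle_delta_v(1800.0, 4.0, 300.0))  # 24.0 m/s speed change
```

The mass ratio is the lever here: because the motorcycle is several times lighter than the car, a modest recorded delta-V on the target vehicle implies a large speed change on the motorcycle side.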
Alan (00:13:15):
So I actually like to start at the top. So Levels 4 and 5, for most of us, are almost the same. The difference is small. Levels 4 and 5, you don't need a steering wheel, fully self-driving, not Tesla self-driving, but completely autonomous. And the thing about Levels 4 and 5 is if anything goes wrong, the vehicle finds its minimal risk condition, it finds its own safe place, which generally means pulling over and stopping. That's a key thing about Levels 4 and 5. You don't need a steering wheel, you don't need to be awake or sober. And if anything goes wrong, the car will pull over by itself and stop. Level 3 is partway there. Level 3, let's see, who just came out with a Level 3 vehicle?
Lou (00:14:04):
Oh, Mercedes Drive Pilot. Yeah.
Alan (00:14:06):
Thank you. Yep. I just saw the article on that. So that right now is the only Level 3 vehicle available in the US. Level 3 says you can read a book, put on makeup, read the news on your phone, we'll drive for you, but if anything goes wrong, you've got to take over within about 10 seconds or so. So that's the idea of Level 3. If anything goes wrong, it's your fault, is one way to look at it. The driver has to be able to take over. If anything goes wrong and the vehicle has a problem, it's going to stop in the lane with the hazard lights on, and that's not a minimal risk condition, that's requiring the driver to take over. So what most of us think of as self-driving, or ADAS on cars we can actually drive, is Level 2.
(00:14:52):
That's where Tesla Autopilot and GM Super Cruise are, where it can monitor speed of traffic ahead of us, monitor lane position, and it can control our steering and speed on a single roadway. So it can take away the steering and the speed control, which when you think about it, is a pretty major portion of the effort of driving and it's actually really relaxing when you take that away from the driver.
(00:15:17):
The problem is it's really relaxing. So what do people do? They either fall asleep or they get distracted by something else. So we're going to give you this great technology, but tell you, "Oh, wait, no, no, no, no, no. You've got to stay up and pay attention. You've got to keep your eyes on the road and your hand on the steering wheel." So it's kind of like we give this to a driver, but then we take it back, and if anything goes wrong, what does a manufacturer do? Blame the driver for not paying attention. So that's where Level 2 is, and then Level 1 is either steering control or speed control. Basically that would be an adaptive cruise control without steering assist. So that's a rough breakdown of the levels.
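Alan's walkthrough of the levels, condensed into a lookup. The wording is a paraphrase of his descriptions in this conversation, not the official SAE J3016 text.

```python
# SAE driving-automation levels as described in the conversation above
SAE_LEVELS = {
    0: "No automation",
    1: "Steering OR speed control, e.g. adaptive cruise without steering assist",
    2: "Steering AND speed control on a single roadway; driver must watch the "
       "road (Tesla Autopilot, GM Super Cruise)",
    3: "Vehicle drives within strict limits; driver must take over within ~10 s "
       "(Mercedes Drive Pilot)",
    4: "No driver needed; vehicle reaches a minimal risk condition on its own",
    5: "Level 4 capability, but in all environments, even white-out snow",
}

for level, summary in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {summary}")
```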
Lou (00:15:59):
And the Model Y I had was obviously Level 2, and I found myself falling exactly into that trap that you were talking about. When I first got it, I was really apprehensive about putting it into that autonomous mode where it's controlling steering and throttle and brakes. I was just super vigilant, looking out for anything that would make the system behave inappropriately. And then eventually I'm driving to Arizona and I'm on I-10 and it's super straight and there's nothing to do, and I just double-click the stalk and then try to fire off some texts or change the song or something like that. And then while I had it, there are over-the-air updates as you know, and they started monitoring the driver, watching your eyes and telling you, "Hey, you're not paying attention to the road, so we're going to turn off the Autopilot system." And then I found myself frustrated. I'm like, "Come on, you should be able to handle this. Let me take a little bit of a break here and do what I need to do."
Alan (00:16:58):
So what's neat about that, so they're using a camera to monitor the driver, but they're not just monitoring the driver. Believe it or not, they're watching your cell phone. So when you have your cell phone in the view of the camera, the camera is estimating the angle of it based on its reflection and whether you're looking at it or not. So if I'm holding my cell phone like this, I'll get a warning pretty quickly in a Tesla. If I'm holding my cell phone like this to show it to the passenger, it won't respond as quickly.
(00:17:27):
It's pretty impressive what they're doing with that driver monitoring. But driver monitoring matters a lot because people either get relaxed and distracted or they use the ADAS system as a benefit to allow them to do something else. A secondary task is what we call it. But on a Level 2 system, you can't do a secondary task. You've got to be watching the road ahead. And when you think about how much training airplane pilots get to use their autopilot, which is much simpler than a Level 2 system, and we give this to consumers and say, "Have at it. Out on the road you go." Obviously issues are going to happen. There's no question about that.
Lou (00:18:06):
Yeah, read the owner's manual, you'll figure it out. Yeah, that is a little wild. So the Mercedes system that just came out, that Drive Pilot, are they using LiDAR? Do we need LiDAR to get to Level 3 at this stage of the game?
Alan (00:18:21):
It's a good way to do it, yeah. So the Mercedes, from what I read, I haven't been in one yet, is using cameras, radar, and LiDAR, which is the most conservative way to do it. But the problem with it, and Audi ran into the same problem when they tried to release a Level 3 system in the US three or four years ago, is that the risk for the manufacturer is so high with a Level 3 system that they really have to box in its use. So like Audi's system, the Mercedes system says: limited-access expressway, with guardrails, with a lead vehicle that your car can follow, less than 40 miles per hour.
Lou (00:19:01):
My 10-year old could do that. Yeah.
Alan (00:19:03):
Exactly. So I had a really long drive on Saturday in traffic where that actually would've been useful, but my Tesla does that already. I just have to stay awake. So the environment where that works is just so limited that it's not going to provide a whole lot of safety benefit. Now, as time goes on and they get some more fleet learning, as I understand it, they're going to raise the maximum speed of it, and once they get up to, say, 65 or 70 miles an hour, then it has real utility. But for now its use is pretty limited. The GM Super Cruise is somewhat similar, although it doesn't have that speed limit on it. Still, the use of it's pretty limited.
(00:19:45):
But because of that, you won't see an article in the news about a Cadillac causing some mayhem with Super Cruise enabled like you see with Teslas. So one manufacturer is much more risk tolerant in where they allow their system to be used, so more mayhem occurs. Another manufacturer is extremely conservative in where they allow the system to be used, so not much mayhem happens, but drivers don't get the safety benefit of it either. Which is better? Ten years from now we'll look back and, in hindsight, it'll be obvious. Right now the answer is not as obvious as some people might think it is.
Lou (00:20:22):
Yeah. And so Tesla has been one of those companies that's really vocal about trying to progress without the need for LiDAR without implementing LiDAR. And I think for somewhat obvious reasons, it's ugly, it's expensive, it's vulnerable, it's sticking out on the end of the car and can be thrashed by a million different things. But what are the sensor groups that all the OEMs at this point are using, just to help people get an understanding of what sensors are driving these autonomous systems?
Alan (00:20:53):
So for the vehicles we have on the road today, for ADAS systems and vehicles that we drive, it's cameras and radar. That's basically all we're using. For fully autonomous development vehicles, or the Mercedes Level 3 vehicle that came out, they're almost all using cameras, radar, and LiDAR. And that makes it easy to spot a fully autonomous test vehicle: there's a bunch of junk on the roof because they have LiDAR units all the way around it. So how that's going to work out for Tesla, we'll see. For any technical person who vocally states, "I am not going to use this technology today," three years from now that decision should be reconsidered. Is that decision being reconsidered as technology changes? I'm not so sure. But when I see Audi and Mercedes and even, I believe, Bentley integrate LiDAR into the grille, where it's a limited view width, but it's still LiDAR, it's still useful, and the cost has come down so much, why not use it? By the time you've put all the technology and the cost into it, why not use a tool that's providing beneficial info? So we'll see.
Lou (00:22:11):
Yeah, that makes me think of the iPhones now, where this little sensor here is LiDAR, and this is, I'm pretty sure, manufactured by Sony, and they just had an announcement that they're coming out with a new longer-range LiDAR device. So this is not like what you and I are used to, using a terrestrial scanner with a spinning mirror or something like that, this big bulky thing. If you could put 10 of these around the car and it costs you a thousand dollars or $500 to install them, that information is huge. Photogrammetry is obviously extremely powerful, but it seems like it can't handle everything. Obviously there's a benefit from adding LiDAR in that synergy.
Alan (00:22:52):
Right, right. Yeah. A real limit of photogrammetry in a vehicle is, and I've not seen it yet because our roads are flat, but Tesla's full self-driving system has, in some models, a stereo camera system facing forward, but the side views are all basically monocular. Well, if you're trying to judge traffic and there are slopes, big hills, San Francisco-style hills, its distance estimation can't be as accurate. And I've been curious how they handle that, and I've not messed with it. We don't have hills in Florida, so I don't get to see it.
(00:23:26):
But yeah, that would be one of many applications where LiDAR would be better. You don't have that limitation with LiDAR. And so not only is the cost of the sensor dropping rapidly, but the computing power needed to analyze the data is cheaper. But also our techniques for processing this data have advanced so rapidly. Who would've thought that we could do a 3D LiDAR scan and register it on a cell phone? Even five years ago, who would've thought we could do that with LiDAR? And now we all do it. So yeah, this stuff advances at such a rapid rate, it's a race to keep up with it.
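The stereo-versus-monocular range point Alan raises comes down to the disparity relation: a rectified stereo pair recovers depth as Z = f * B / d (focal length in pixels, baseline between the cameras, pixel disparity), while a single side camera has to infer range from appearance, which is where unmodeled slopes can bite. The values below are illustrative, not any manufacturer's specs.

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 0.2 m baseline, 10 px measured disparity
print(stereo_depth_m(1000.0, 0.2, 10.0))  # 20.0 m to the target
```

Because depth varies with the inverse of disparity, a one-pixel matching error at long range shifts the estimate far more than the same error up close, one reason LiDAR's direct time-of-flight ranging is attractive.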
Lou (00:24:05):
Yeah, it really is. And speaking of the sensors, so Waymo has all of the sensors essentially on board. Looking at those vehicles, I think they're in LA and San Francisco now, where they're available, and you could just order one like it's a Lyft or an Uber, and there's no driver in this car-
Alan (00:24:21):
In Arizona too. Flagstaff, I think.
Lou (00:24:24):
Okay, so that's full Level 5 then, there's not even a driver in the seat?
Alan (00:24:28):
Technically it's probably Level 4. So the difference between Level 4 and Level 5 is Level 5 can work in all environments. So you could have a white-out blizzard snowstorm or a heavy rainstorm with flooding, I guess, and it'll still work. Well, that's not very realistic. So Level 4 is really the highest we're going to see, so those are most likely Level 4 systems.
Lou (00:24:51):
That makes sense. And that does sound completely unrealistic because even I have a hard time driving in a white-out storm.
Alan (00:24:57):
Exactly.
Lou (00:24:57):
I used to live in Massachusetts, I had that skill at one point, but now that I'm in Southern California, even the rain terrifies me. So speaking of Apple, they've got obviously this cool array of cameras they're doing photogrammetry with, and they have the LiDAR they're using for augmented reality and things. Where are they in all this? You don't really hear, at least me as a layman, I don't hear much about Apple in this game. Where are they at, and what do you expect their role might be in the future?
Alan (00:25:26):
I don't expect anything from Apple at this point. They had an autonomous vehicle division for a while working on something, it was very vague, but I think a good way to estimate what they were doing is by looking at a company called comma.ai. And that's kind of a neat story. Comma.ai is a company started by George Hotz, H-O-T-Z. He's one of these just brilliant masterminds. I think he was the first person to hack an iPhone, and I believe he was a teenager when he did it. He's a very, very sharp guy. His idea is, I don't want to have to buy a car to get this technology. I want to have a device that I can plug into a car that gives me the technology I want. So basically he came up with a device, it's called the comma, and you literally can buy this today and put it in, I think you can put it in your Tundra, actually.
(00:26:18):
And it interfaces with the CAN network, takes control of throttle, brakes, steering, camera, well, not camera, but radar, and gives you a Level 2 system that's pretty similar to a Tesla Autopilot from, say, four or five years ago. The beauty of that is it solves the rental car problem. When I fly to another city and get some other car, it's going to have some mix. Why is this Nissan beeping at me and telling me the radar is obstructed? Why in the Chevy Tahoe is one seat cushion vibrating at me? I don't know what all this stuff is. But if I can take my cell phone, or George Hotz's comma three, and plug it into the car, now I have the interface I expect, the features I expect, the performance I expect, in any given car.
(00:27:04):
That's what comma.ai has developed. And I think Apple saw the same, wanted to develop it, and realized, one, it's very complex, and two, this other company's already doing it. But I think that's where Apple is going, because to me, it doesn't make sense for Apple to get into the business of making cars. By today's standards, the car part is pretty easy compared to a lot of the other technology that has to go into it.
Lou (00:27:27):
Yeah, that technology, it's like supercomputing power onboard, it seems. Correct me if I'm wrong there. But then you need to also be developing machine learning tools, artificial intelligence, potential vehicle-to-vehicle communication, potential vehicle-to-infrastructure communication. There's just a whole plethora of problems. And like you said, the car, we mastered that a long time ago. Granted, they keep getting a little bit better and a little bit faster and a little bit smoother, but the rest of that stuff is pretty darn new.
Alan (00:27:57):
Oh, yeah. Think about how much tires or automotive paint or passive restraints have...
(00:28:03):
... improved in the last 20 years. Yeah, they have, they've gotten better, but think about how much image classification, which autonomy relies greatly on, has advanced. That almost didn't exist 20 years ago and now we can do it on our cellphones; traffic signals can do it now. So there have been so many advances in those areas that it overshadows the rest of the automobile from an ADAS standpoint.
Lou (00:28:29):
And so speaking of V2V and V2I, V2V being vehicle-to-vehicle communication, V2I being vehicle-to-infrastructure communication to facilitate autonomous driving, where are we on that now and where do you see that going? Is it necessary to continue along this path and get to Level 4 or 5?
Alan (00:28:49):
Yeah, so I call it V2X, vehicle-to-anything communication; it's V2V and V2I and so on. That's not really my specialty, but I think it's kind of the wild west right now. Everybody was focusing on DSRC, which is a short-range radio, and Cadillac and Mercedes actually put it into production, but since theirs were the only vehicles on the road using it, it wasn't that helpful, right? It's like when telephones first came out: if you were the only one with a telephone, it didn't do you much good. Well, since then, it appears that 5G is going to replace DSRC, so now all the companies developing this technology have to move everything to 5G. And there's a lot of promise with it in crash reduction, but there's a lot of challenges to it as well, and I really don't have a good feel for where that's going to end up.
(00:29:45):
There's some really neat Jetsons-style scenarios where I'm approaching a green light at night, another car is going to run a red light, and either that car or the traffic signal makes an assessment that he's not going to stop at that red, and they warn me to stop on my green light to avoid an impact. Think of what that could do to reduce crashes, but the moving parts that have to fit together to make that happen are difficult. Another one is in a crowded city: a pedestrian's about to step off the sidewalk into the roadway. If something can warn me before the pedestrian even leaves the sidewalk, I might be able to avoid that impact. The technology is all there to do that, but just to get all the pieces in place and to get the timing to work out is really difficult. So I don't know where that's going to end up. Right now, we can interface traffic signals with vehicles, which has some minor benefit. We can interface vehicles with infrastructure, like OnStar, with some pretty major benefits; there have been some real hero stories with that.
(00:30:51):
But yeah, where that's going to end up, I don't know. But the biggest problem I wanted to solve is the loss-of-visibility problem. We get this in Florida, where you have a combination of fog and smoke on an expressway, and people don't understand how debilitating that is; witnesses describe it as having a white blanket across your windshield. So imagine you're driving at 70 miles an hour and literally somebody puts a white blanket on your windshield. What do you do? Do you slam on the brakes? Well, you might get rear-ended. Do you keep going? You're probably going to hit someone else who stopped. But if V2X communication can warn everybody, hey, slow down, there's limited visibility ahead, we could prevent these 50-car pile-ups that we hear about sometimes, and that would be a major benefit of V2X in my opinion.
Lou (00:31:44):
Yeah, and it seems like you're saying there's a lot of moving parts there to bring that all together and there would have to just be this monumental cooperation between OEMs and governments and municipalities.
Alan (00:31:56):
Yeah. So with the obstructed visibility, that's one neat application, because you don't need everybody involved. Let's say one out of five vehicles gets the message, slow down ahead; Waze could do that today if it responds quickly enough. If you get one out of five vehicles slowing down, then as soon as the other drivers start seeing brake lights, most of them slow down anyway. And then you can get the whole convoy of traffic slowed down before they get into the fog bank. So that's a great application of it 'cause not everybody has to have it. Whereas in the scenario where a pedestrian steps into the road, how do you warn the driver soon enough? Those two parties both have to have all the equipment and be engaged with each other, and that's pretty unlikely. So that's why I like the obstructed roadway scenario.
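The "not everybody needs the alert" point can be sketched as a toy model (my own illustration, not from the show): if even one leading vehicle gets the V2X message and slows, brake lights cascade the slowdown back through the convoy. The propagation rule and the 1-in-5 figure are just illustrative assumptions.

```python
# Toy model of the obstructed-visibility scenario: only some vehicles
# receive a V2X "slow down" alert, but any driver who sees brake lights
# on the vehicle ahead slows down too, so the reaction cascades.

def propagate_slowdown(has_alert):
    """has_alert: list of bools, front of convoy first.
    Returns which vehicles end up slowing down."""
    slowed = list(has_alert)              # alerted vehicles brake on their own
    for i in range(1, len(slowed)):       # sweep front-to-back
        if slowed[i - 1]:                 # brake lights ahead
            slowed[i] = True
    return slowed

# 1-in-5 equipped convoy: only every fifth car gets the alert directly,
# yet the whole convoy ends up slowed before the fog bank.
convoy = [True, False, False, False, False] * 2
print(propagate_slowdown(convoy))
```

The interesting property is that coverage only matters at the front: one equipped vehicle ahead of you is enough.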
Lou (00:32:44):
Yeah. It sounds like, as you said, it's totally feasible, and we're already kind of doing it in some regard. I was in a rental car the other day, I don't even remember where I was, somewhere near San Francisco, and I got a warning: pothole in the road. A hundred yards later, a warning that there's some object in the road. It helps put me on alert and just makes sure that I'm paying attention so I don't crash into something.
Alan (00:33:10):
So there you go. That's V2X, that's it working and that was probably through your cellphone, not even through the vehicle, I'm guessing.
Lou (00:33:17):
That's right. Yeah, and it helps me also avoid police too. So if I'm going 85 in a 65-
Alan (00:33:22):
Oh, that's true.
Lou (00:33:23):
I could just slow down a little bit, not that I ever would go 85 in a 65, that's ridiculous. So the cooperation, I wonder about that. This is just me kind of spit-balling. First of all, are we ever going to get to Level 4, where the majority of the vehicles on the road are Level 4? And if we do, is it going to require some cooperation between the manufacturers? Are they going to have to get together and say, this is a problem too big to solve on our own, so why would we try to solve it on our own? We have talented engineers at all of these places, let's get together and solve the problem.
Alan (00:33:59):
I think that ADAS is the biggest example of collaboration in the auto industry ever, throughout its history. The amount of standardization that's been going on, I find really impressive, especially since most of it is happening without regulation. The federal regulators have intentionally stayed out of this as much as they can to encourage innovation. As soon as you put a rule on it, like Part 563, you standardize something, which is good, but you limit a lot of other things that could be better. And so yeah, we do have a lot of collaboration going on, and collaboration is good for things like V2X; it can't work at all otherwise. But just like drivers need to be able to operate as independent agents, cars need to be able to operate as independent agents.
(00:34:51):
What if you don't have a cell connection? What if you don't have an internet connection? Will your car stop on the side of the road? Of course not, you want it to keep going. So the biggest question in my mind is, to get to Level 4, what do we need from the roads? Because right now we have Level 4 vehicles, that technology is out there, but it depends on good operators, as we'll call them, so trained operators somewhere in the mix. And that's just going to be a challenge no matter how we look at it, but it also requires good roads. In my class, in a lot of the accidents I show, the roadway was a contributor. Not that the road had to be designed to allow Level 4 driving, but quality of lane markings is a great example. If you have poor-quality lane markings, your Level 2 vehicle won't drive as well. And you can go state by state and look at differences in roadway maintenance and see similar differences in Level 2 performance, just based on lane markings.
(00:35:55):
So what will it take to get to Level 4? It's probably going to take some level of dedicated roadways. We don't need it now because it's small companies doing development on it, but by the time we release it to the mass public, I would not be surprised if we end up with some level of dedicated roadways for it.
Lou (00:36:12):
And that was one thing that I was always really careful of when I was driving around the Tesla: I'd double-click the stalk and be driving along in my "autopilot," and anytime I saw funky road markings, or the sun was potentially in a position where it might limit the contrast between the lane lines and the roadway itself, I would either keep a sharp eye on things or just take it off. As an operator of a Level 2 vehicle, I think it's really important to understand the limitations of the machine. Like you say, we don't get any training on that. That's something that you and I might have some idea about, but a regular soccer mom is probably not going to understand exactly what the sensors are looking for, so it becomes a little bit cumbersome.
Alan (00:36:59):
Well, and that cumbersome area is where we live. When a Level 2 system works fine, we never hear about it. As Jeff Muttart says, everybody goes home and nothing happens. When a system reaches a limitation, that's when we get called and asked, what happened? Why didn't this work? And so yeah, a lot of ADAS-involved crashes involve an ADAS system limitation. A great example is a pedestrian impact at night. Right now, very few manufacturers' systems have any ability to handle a pedestrian at night, to warn the driver or avoid the impact. So that's a simple limitation. But in bright daylight, with a brightly lit pedestrian showing very clear biofidelic motion, where we see the legs moving, most cars sold today are amazing in their ability to at least warn the driver, if not brake to avoid that pedestrian, at certain speeds.
Lou (00:38:00):
And one thing that I found interesting as I was prepping for this podcast, and maybe you can answer this and maybe you can't, is the corroboration of a fleet. Say all the Teslas out there are monitoring a bunch of situations, and when they see something noteworthy, they're sending that information back to the mothership, and it's helping to inform the further development of their system via machine learning. That's the way I understand it, as somewhat of a layman. Is that accurate, and if so, is there anybody else doing that too?
Alan (00:38:34):
It's accurate, and I actually have some really cool examples of that in use. I'm not sure if any other manufacturer is doing that; I'm not sure if any other manufacturer even has the capability of doing it right now, but Tesla excels at it. There have been points in time where every Tesla on the road was running Autopilot in a shadow mode, where it's kind of running a simulation on real-time data and reporting issues that came up back to Tesla.
(00:39:05):
But I have two really neat examples of that in use. When Tesla was working on their full self-driving system, there's one scenario they ran into where, I've never seen it before, but you have a stop sign, and below it, it says except for right turn. So you have to stop unless you're going to turn right; I think the right turn is kind of a separate lane almost. For them to figure out how to handle that, they needed, one, a bunch of images of stop except for right turn so they could train their image classifier, and two, they had to have a bunch of examples of how to handle it. So they basically had, not every, but most Teslas on the road start looking for that sign. And now Tesla has the largest database in the world of images of signs that say stop except for right turn. And once they had that, then they could train their models on what to do in that scenario.
(00:39:58):
But another example is just in regard to the performance of collision warning and automatic braking. How often does it work? How well does it work? We get called when it didn't work, when a crash happens and someone says, hey, why didn't my automatic braking stop this crash? Well, Tesla gets data on near-misses, events that didn't happen. And according to Andrej Karpathy, the former head of their autonomy division, Teslas prevent hundreds of pedestrian crashes per day. We never hear about them, but the system triggers and either warns the driver or autonomously brakes.
(00:40:39):
So given that a lot of times we get called when it doesn't prevent the crash, it's really useful to know when it did. In fact, a major trucking company has asked me, how come we're having all these crashes? We spent all this money on this technology, why are we still having these crashes? Well, thankfully, the system that they have in place allows us to see other events that it did prevent. I can go in there and say, here's our crash, but here are the last nine events it did prevent; let's look at those and get an idea of how well this system actually worked. Because when everybody goes home, we don't hear about it. But that's a success for ADAS if it was involved.
Lou (00:41:14):
Yeah, exactly. We're cherry-picking what we're looking at, to a degree. And that brings up a really good point, and I think I've heard you talk about this elsewhere before. Pedestrian fatalities from 2020 to 2021 went up 13%, and those are big numbers. I don't know the exact count, but the delta is indeed 13%, and that is with the advent and the assistance of all these advanced systems. So imagine what that would look like without them. I talked a little bit about this with Muttart, and I'd love to get your take as well. I said, "Jeff, why do we have this increase when we have these better vehicles?" And he said, "One thing that you have to consider, Lou, is these pedestrians are on their phone now. So not only do you have a distracted driver at times, you have a distracted walker, and they're pegging each other. If we didn't have the autonomous systems," this seems to be the take, but I'm not the expert, "that 13% would be dramatically higher."
Alan (00:42:17):
Oh, absolutely. There was a great, I think it was actually an insurance company ad, but it might've been just a public safety message, and the message was, if you can't walk while texting, what makes you think you can drive while texting? And it showed people trying to walk around a city while texting, and they'd walk into stop signs, fall off ledges, fall down stairs, do all sorts of crazy things just trying to walk. And yeah, Jeff makes a good point. So now we have both parties distracted, and pedestrians can be even more focused on the phone; their gaze time at their phone can be longer than a driver's. So I could definitely see that.
(00:42:55):
But I think of it as two steps forward, one step back. We have all this great technology to prevent crashes, but then we have all these drivers doing secondary tasks. We don't always know what it was after the crash, but a secondary task is usually something on their phone. So one technology is making driving safer, the other is making it less safe, and they almost balance out, which is kind of unfortunate. And then when you give a driver a Level 2 system, so many drivers think, hey, great, this is my time to read the news, I can read a book, I can do whatever I want. They don't even understand the message that no, that's not what Level 2 is. Level 2 is keep looking ahead. You're still watching the roadway.
Lou (00:43:39):
Yeah, we got so close. Hopefully we even that out, and I don't know how, I don't think anybody does or else it'd already be done, but to prevent drivers from that secondary task, from that distraction... it's obviously some sort of addiction. Just drive down the highway here in Southern California, and I suspect it's true where you are as well: take a random peek at the guy next to you, and the odds of him being on his phone are like 25% to 50% at least. Most of the time when I look over, somebody's on their phone, and I'm just like, what are you doing?
Alan (00:44:14):
Yeah. And with an ADAS system, there are two ways to try to prevent that. One, the simplest way, is looking at steering wheel torque. If you can measure some torque in the steering wheel, that suggests there's a hand on it attached to a person who's awake, and maybe they're paying attention, but obviously that's a poor proxy for driver distraction. The best way to do it is with a driver-facing camera, where you can watch the driver's eyes, and some cars actually watch the angle of the cellphone.
(00:44:42):
But there's a third thing I've noticed, in Teslas. I don't know if other manufacturers do this, but it's an attempt to issue warnings based on hazard level. So for example, on a straight, flat expressway with no other traffic around, an Autopilot or Full Self-Driving system is more tolerant of lapses of attention. But if you're approaching a traffic light at high speed, or a railroad crossing, or a change from a divided highway to a non-divided highway, I've noticed Autopilot tends to be a lot more particular about the driver's attention level, because the risk is higher in those environments. So yeah-
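The hazard-scaled monitoring described above can be sketched roughly as follows. This is purely my own illustration of the idea, not Tesla's actual logic; no manufacturer publishes these tolerances, and every number here is invented.

```python
# Toy model of hazard-scaled attention monitoring: the tolerated
# "eyes off the road" interval shrinks as scene risk rises.

def allowed_glance_s(hazard):
    """hazard: 0.0 (straight empty expressway) to 1.0 (complex junction)."""
    base_s, floor_s = 5.0, 0.5        # invented tuning values
    return max(base_s * (1.0 - hazard), floor_s)

print(allowed_glance_s(0.1))   # relaxed on the open expressway
print(allowed_glance_s(0.9))   # strict approaching a railroad crossing
```

The design point is simply that distraction tolerance is a function of scene risk rather than a constant, which matches the behavior Alan reports observing.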
Lou (00:45:29):
I love that.
Alan (00:45:29):
And then Ford is really good at trying to measure driver drowsiness using only steering wheel torque, which is a neat thing. So if some Fords anticipate that a driver is getting drowsy or lazy or sleepy, you'll get this coffee cup warning on the dash saying, hey, you might want to take a break. So there are things like that the technology in a car can do to help, but it's still a battle between improving safety and letting drivers be more distracted, and that battle's going to continue until we get to at least Level 3 in the majority of vehicles, and that's a good ways out from here.
Lou (00:46:06):
Yeah, it does seem to be, and that brings up an interesting concept, which is new to me, this idea, you've probably already thought about it, that maybe one of the most beneficial things these advanced driver systems can do is just tell the driver, hey, knock it off, stop looking at your phone, I see you looking at your phone. And I guess they don't want to be too intrusive, because people won't go buy their car if that's happening, but if you can, like you're saying with the Tesla driver-facing camera, if it's looking at the driver and saying, I can see you looking at your phone, dummy, stop it, then maybe we can pair these technologies and simultaneously bring crash rates down from our record highs.
Alan (00:46:51):
I'd like to think that's true, but I'm not as optimistic about human behavior as you are. I have a friend who's one of those people who just won't wear a seatbelt, and every time he gets in my vehicle, I'll start driving, and I don't even hassle him at first anymore. I'll start driving and the chime will be going off, not quite continuously, but pretty close, even though he's in the passenger seat. He's so immune to it, he doesn't even notice. It's not until a few minutes later that he thinks, oh, that's right, in a minute I'm going to have to hear Alan hassle me about this too, and he's going to pull over and stop; it's easier just to put the belt on. But the chime, which aggravates me to no end, though I never drive without a seatbelt, has zero effect on him.
(00:47:35):
And I drove a pickup truck, an F-250, recently, and it looks like Ford copied General Motors' backseat reminder. On a GM vehicle, if you had opened a back door, then when you turn off the car, it gives you a chime: hey, check the backseat in case you left a kid or a pet back there. Well, this F-250, it went off every time I turned the truck off, even if a back door hadn't opened, and it was terribly aggravating. If there was a way to turn it off, I couldn't find it. So attempting to change human performance without aggravating most drivers, that's a challenge, that's really challenging.
Lou (00:48:17):
That does seem like one of those things that might have to be mandated via legislation, just because if the auto manufacturers don't all have to do it, then people will just drive the cars that don't do it.
Alan (00:48:27):
Well, true. But there is one way to handle that. I think every Level 2 system I've driven, if you don't have your seatbelt on, it won't work, and if you take your seatbelt off, it'll disengage. So if you want the value of Level 2, you have to wear a seatbelt. We're not quite there yet, but we shortly will be at the point where, on every new vehicle with Level 2, if you want it to work, you have to wear your seatbelt and you can't be looking at your phone. Those are things a Model Y can do now, and that'll be more common in newer vehicles. So that's a more reasonable way to do it, I think. If you want this benefit, you can't drive distracted, and if you want to drive distracted, you have to turn off this benefit, and then it's even harder to drive distracted. At least then we're not doing the two steps forward, one step back kind of thing.
Lou (00:49:18):
Yeah, I hope we can find that balance, because I would love to meet back up with you in five years, in 2028, and look at the crash rates from 2021 and 2022 and just be like, that was ridiculous, I'm glad we made some progress. And on the crashes, the cases where the autonomous system didn't do great, so kind of moving toward the practicality section now: we get a case, the average recon gets a case, you get a case, and there are suspicions. Say my 2018 Tundra plows into the back of a car on the highway at 55 miles an hour, and we're like, okay, I know that vehicle had some pre-impact braking capabilities and it had collision detection capabilities. Why didn't that work? It doesn't matter which car specifically, but what does that process look like? How should recons be thinking about just starting the process of diving down the rabbit hole to try to understand if some advanced system should have been participating in the event?
Alan (00:50:18):
Yeah. So that's the point that, like I said, is on the other side of the line. Every expert should be able to say, your Tundra had a collision warning and automatic braking system, and here's how I can tell, and all of us should be able to do that. The next step, why didn't it avoid this crash? Some of us are going to dive in and learn everything about that. Others in our field are going to say, you know what, that's for somebody else. But the quickest way to look at that is, what limitations might apply? If the crash is in the dark, especially with a pedestrian, the camera is not all that useful. If there's a high closing speed, over about 50 miles an hour, very, very few systems can automatically brake in time to avoid the crash, and very few can warn the driver in time for the driver to brake. So that's a limitation.
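The rules of thumb above lend themselves to a first-pass screen. This sketch encodes them as a checklist; the 50 mph closing-speed figure comes from the discussion, while the note wording and the function itself are my own shorthand, not any published criteria.

```python
# First-pass screen: which common ADAS limitations might apply to a crash?

def limitation_screen(closing_speed_mph, dark, pedestrian, unusual_target):
    """Return a list of limitation notes that merit a closer look."""
    notes = []
    if dark and pedestrian:
        notes.append("camera limited: pedestrian in darkness")
    if closing_speed_mph > 50:
        notes.append("closing speed above typical AEB/FCW envelope")
    if unusual_target:
        notes.append("unusual radar/visual signature may defeat classifier")
    return notes

# Night pedestrian impact at a 55 mph closing speed flags two limitations.
print(limitation_screen(55, dark=True, pedestrian=True, unusual_target=False))
```

An empty result doesn't mean the system should have worked, only that none of these coarse screens fired.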
(00:51:15):
If the target has an unusual radar reflection or an unusual appearance, say an equipment trailer that's a mix of a bunch of different things with different radar reflections, and the taillights are in different places and the license plate is in the wrong place and it has two wheels on the right side and one wheel on the left side or something, the radar is going to be confused by that. It's going to see a lot of noise. The camera is going to be confused by that too. It's not going to classify it as a vehicle, and the car may try to literally just drive right through it. To us, that seems crazy. Why would it happen? But it's important to remember that image classifiers, which are what the camera uses, don't think the way we do. They are faster and more accurate than we are, but only on something they're trained to look at.
(00:52:03):
So an image classifier can usually pick out a motorcycle faster and more accurately than we can in a crowd of vehicles, but a Can-Am, a three-wheeled motorcycle or what was another one, I can't remember the name of it, how well is it-
Lou (00:52:19):
A Spyder?
Alan (00:52:20):
No, not the Spyder, it's a third one, real funky looking, like a retro style in the front. They're not very common. But even the Spyder is a good example, 'cause there's only one wheel in the back, yet it's the width of a car. How is an image classifier going to handle that? Who knows? It depends on how it was trained, and who knows what the angle of the sun was at that moment. There can be lots of reasons why it doesn't work. So you see this in the owner's manual: the owner's manual lists more reasons why these systems won't work than reasons why they will. And yeah, they're trying to cover their bases and all that, but it's true. There are lots of reasons why they won't work.
Lou (00:52:59):
And that's what I was thinking as you were talking about that. It seems like the owner's manual is probably the recon's best friend at the beginning. It's like, okay, maybe even before you go do your inspection, I just got this new case and it involves a newer vehicle; let me just, as a first step, go to the owner's manual and figure out what this thing might be equipped with, and then I can look for the appropriate sensors once I get there to confirm what it has and whether it's equipped.
Alan (00:53:23):
Yeah. So that's actually, to me, step two. Step one is look at the vehicle: if there's a camera at the top of the windshield, it has some type of ADAS in it. If it's a Nissan with no camera, it probably still has a radar. So that's step one. But step two, you're right, is to look at the owner's manual, the build sheet, the sales brochure for that year: what did this vehicle come equipped with? And thankfully this isn't the case so much in new vehicles, but in, say, 2018, you could buy a Tahoe with probably three different levels of ADAS. You could have a collision warning that only worked in a narrow speed range, you could have collision warning and automatic braking that worked in a broader speed range, and then you could have collision warning, automatic braking, and adaptive cruise control with yet another speed range.
(00:54:16):
Well, the hardware in the vehicle looks the same, the camera's the same, the radar could be different, but you can't really tell by looking at it. So yeah, what trim level is this in that model year? What could you get with that trim level? If, say, the lane keep assist doesn't work below 38 miles per hour and this guy was doing 30, well, the lane keep assist wasn't going to help him, right? The adaptive cruise control, if it's not a full-speed system, might slow down to 9 miles per hour, chime, and then turn off and let the vehicle just roll into the stopped car in front of it. That's my favorite.
Lou (00:54:56):
That's one of my favorite things, yeah. My Tundra does that at 25. It's just like you're on your own buddy and it's pretty abrupt. It's not like immediate, but it chimes and then two seconds later, it's me.
Alan (00:55:06):
So someday someone's going to get this crash and I want to hear about it where there's this amazing event that can only be avoided by an automatic braking system. It slams on the brakes, comes to an almost complete stop to avoid the stopped object, and then releases the brakes and rolls into it at 8 miles an hour. So it did all this amazing stuff to avoid the crash and still hits the target and there's still all the injuries we hear about in those cases, right?
Lou (00:55:32):
Yeah. It's on you from here. And so what does the CDR give us? Most of us are accessing event data via the Bosch kit, the CDR tool. How often do those downloads tell us if there was some sort of vehicle intervention?
Alan (00:55:51):
Unfortunately, not very often. The data fields have been in there since 2013-2014 for some manufacturers. If you look at a Chrysler vehicle...
(00:56:03):
It'll show a whole bunch of fields for tracking a target object, whether the adaptive cruise was turned on, and all this stuff, but most of the time they're empty. They're just not populated, because that vehicle didn't have that capability or it wasn't turned on at the time. So up until, say, two years ago, very rarely did I find anything useful in the CDR data related to an ADAS event. But that's changing rapidly now, especially on GM vehicles. But the toughest question to answer, and this depends entirely on the manufacturer, is if we see braking, is that the vehicle braking or the driver braking? Audi is one example; they'll tell you which it is. Some manufacturers leave you out there to guess; you have no idea. And then with others, you can infer that there was an automatic braking event because you don't see the brake pedal depressed, but you see a rapid deceleration.
(00:57:03):
Sometimes you might see brake pressure without pedal application: there's obviously a deceleration happening, and it must be an automatic braking event, because this vehicle records the foot pedal and the driver didn't step on it. So it can be tough to figure that out. And I've had cases where I've had to say, I don't know who braked, but what I can tell you is that the automatic braking system wouldn't have done more than this. The closing speed, the time to contact, and the rate of deceleration are all what I'd expect from an automatic braking event on this type of vehicle of this age. But I can't tell you if that happened here, or if the driver just coincidentally braked at about the same level.
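The "who braked?" reasoning above can be written down as a hedged decision aid. This is my own sketch of the inference, not a manufacturer's logic: the 0.6 g threshold is an illustrative assumption, and as Alan notes, the answer is often "indeterminate" rather than a clean call.

```python
# Decision aid: attribute a recorded deceleration to the driver or the
# automatic braking system, when the EDR records both decel and pedal state.

def braking_source(decel_g, pedal_applied, reported_source=None):
    if reported_source is not None:
        return reported_source     # some makes (e.g. Audi) report it directly
    if pedal_applied:
        # Pedal was down, so the driver braked; an AEB overlay on top of
        # driver braking can't be ruled out from these two fields alone.
        return "driver braking (automatic braking overlay not excluded)"
    if decel_g >= 0.6:             # strong decel with no pedal input
        return "consistent with automatic braking"
    return "indeterminate"

print(braking_source(0.8, pedal_applied=False))
```

The key caveat, mirrored in the transcript, is that "consistent with" is the strongest claim this data supports on its own.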
Lou (00:57:45):
Yeah. Go ahead.
Alan (00:57:48):
I was going to say, and that's a key thing to remember in this field: if I'm looking at a crash involving your 2018 Tundra and I want to go out and recreate it, I don't want to use a 2019 Tundra or a 2020, and if yours is an SR5 trim, I don't want to use a Limited trim. I need to use that model year, maybe even near that production month, and I want to use that trim level. And even if I do that, in a Tesla, because of fleet learning, I may still get a different response. There was a big Tesla crash a couple years ago in Florida where, when we went to do a reenactment of it, we had to accept the fact that we were testing a different vehicle now, because the vehicle had a year and a half of newer software updates and fleet learning, and all we can do is try to extrapolate to what was available at that time.
Lou (00:58:37):
That sounds super challenging. I just thought of that as you were speaking about it; I'm like, oh my gosh, if I'm trying to test a Tesla that's getting an update a week or whatever, then you're looking at totally different algorithms than were in place a year and a half ago. And one solution to that, which is completely impractical, is that we have all this documented: somebody tested a Tesla that week of 2022, and they know how it behaved then, but that's not going to happen. I've heard you talk a little bit about the importance of the community developing this level of knowledge, like we have with other disciplines, say, crash analysis or EDR analysis.
(00:59:18):
I can almost always call Rick Ruth and say, I have a 2019 Ford that's doing this, and he's like, oh yeah, Bobby from Michigan saw something similar. But we're not there yet with autonomous functions. And some of these things you can go test; it seems like testing is a big part of it. You have to go test that vehicle, but in some cases, you're going to have to just raise the white flag and be like, this might not be the same configuration as it was at the time.
Alan (00:59:43):
Right, right. The good thing about it is most of the cases we're getting now are crash or no crash. This vehicle hit at 50 miles an hour; could the truck have stopped instead? Well, that's a really big difference. There's no splitting hairs if you're deciding between a 50 mile per hour impact and no impact. Where it would get tough is, let's say, and I've had this one, we have a 12 mile per hour impact and the automatic braking was engaged during that time. If we make one or two changes to this scenario, could it have stopped and not impacted the other vehicle? That's a lot tougher. Now we're starting to split hairs, and now we have to consider things like variability of system performance. But fortunately, those aren't too common. Usually it's a really, really big difference.
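The "crash or no crash" question often starts with simple kinematics: could the vehicle have stopped in the available distance? A minimal sketch, where the 0.8 g full-braking level is an illustrative assumption for dry pavement, not a value from the show:

```python
# Stopping-distance check from standard constant-deceleration kinematics:
# d = v^2 / (2a).

def stopping_distance_ft(speed_mph, decel_g=0.8):
    v = speed_mph * 1.46667          # mph to ft/s
    a = decel_g * 32.174             # g to ft/s^2
    return v * v / (2.0 * a)

def could_have_stopped(speed_mph, available_ft, decel_g=0.8):
    return stopping_distance_ft(speed_mph, decel_g) <= available_ft

print(round(stopping_distance_ft(50), 1))   # feet needed from 50 mph at 0.8 g
print(could_have_stopped(50, 150))          # plenty of room
print(could_have_stopped(50, 80))           # not enough room
```

Of course this ignores perception-reaction and brake ramp-up time, which is exactly why the harder cases need the "boxing in" treatment Alan describes later.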
Lou (01:00:33):
And what is available for ancillary reports, at this point? I know Tesla will supply the owner with certain reports, and we have vehicle control history. If CDR doesn't answer what we want, where else can we look?
Alan (01:00:45):
So within the CDR, we get the ASCM data from some General Motors vehicles. Other manufacturers offer bits and pieces within the CDR data. On Toyotas, we get the vehicle control history with the images, which is improving dramatically; it's amazing what we can get out of that. With Teslas, we get the EDR, which doesn't tell us anything about ADAS, and then sometimes we can get log data, which gives us a mix of different things, not always useful. And then on trucks with Bendix or WABCO or Detroit Assurance, we can get a mix of different data, and just like truck engine data, a lot of times there are problems with it that we have to try to handle or assess. So it's a wild west of what's out there and how we can get it.
(01:01:34):
But what I find really frustrating is there's a lot more data out there we can't get. For example, every windshield camera I've disassembled has a flash memory chip in it. Are there images stored in there? Can we get to them? Who knows? That's beyond my capability. But that's what started me down my mantra: we are surrounded by electronic data; we just have to find it, validate it, and put it to our use. So there's data in the vehicles that we're still not getting and not seeing, and how do we get to that? In some cases, I think we have to admit it's there but it's not feasible to get it for this case. But the more we work together as a field to get ahold of some of this data, the more we can do with it, obviously.
Lou (01:02:19):
And one big part of developing this group knowledge, it seems, is testing. How often do you find yourself performing tests, either for research or as part of your class? I saw that one of your classes, and maybe this is always happening, was run at IIHS, so you had the ability to perform tests there, it seems, anyway. How often are you doing tests for research, for classes, and for cases?
Alan (01:02:42):
I'm doing testing for research constantly. Basically, any scenario I can create, I'm trying to test. For training, we just added that to my SAE class at the last session. Every time I do the class, it'll be two days, and there'll be about a half day of testing where we put people in the vehicles, let them experience the system in use, and try to download data out of it. And that's been a really big thing. Using it on cases can be tough for three reasons: I've got to be able to recreate the environment and the vehicles, I've got to recreate the crash configuration, and if the target vehicle is moving, the complexity goes through the roof. Let's say, and I've had this case, I have the tractor trailer going 70, the lead vehicle going 20, and a horrendous impact results.
(01:03:35):
Well, how do I test that? Even with a deformable barrier, there's still going to be a fair amount of damage at the end of it. So those scenarios can be pretty tough. Frankly, they lend themselves a little bit better to computer simulation, but that's only good if we can validate the simulator. So what I end up doing instead, a lot of times, is boxing in what actually happened. I can't tell you what actually happened in this case, but I can tell you, it's not over here, and it's not over here. It's somewhere below this and somewhere above that. So our actual crash, I can put it in a box, and hopefully, within that box, I can say, here's what would've been different had this worked or not worked, or whatever had happened. So that's what we end up doing. There's a great paper on this with airbag deployment thresholds, where the idea is how do you reverse-engineer something without having to actually reverse-engineer the whole thing, just the parts we care about?
(01:04:34):
How do you box in what it is? I think it was Axon who did the paper, where they were trying to box in deployment thresholds for a certain type of vehicle in a certain kind of crash, and they basically admitted, we can't tell you exactly what it is for this car, but we can put some limits around it. And that's what I try to do with testing. I was going to say, it can be challenging because I not only have to match the image that the camera's looking for, but I have to match the radar reflection as well.
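The "boxing in" idea described here can be sketched in a few lines of Python. This is purely illustrative: the delta-V numbers and the idea of bracketing a deployment threshold from deploy/no-deploy observations are invented for the example, not taken from the paper mentioned.

```python
# Bracketing an unknown threshold from test outcomes, in the spirit of the
# "boxing in" approach: we can't say exactly where the threshold is, but we
# can say it's above every non-deployment and at or below every deployment.
# All values are hypothetical.

def bracket_threshold(tests):
    """Given (delta_v_mph, deployed) observations, return the tightest
    (lower, upper) bounds the data supports for the deployment threshold."""
    no_deploy = [dv for dv, deployed in tests if not deployed]
    deploy = [dv for dv, deployed in tests if deployed]
    lower = max(no_deploy) if no_deploy else None  # threshold is above these
    upper = min(deploy) if deploy else None        # and at or below these
    return lower, upper

tests = [(6, False), (9, False), (15, True), (22, True)]
lo, hi = bracket_threshold(tests)
# The threshold lies somewhere in (9, 15] mph for this (made-up) data set.
```

The same bracketing logic applies to any black-box behavior: each test that triggers the system lowers the upper bound, and each test that doesn't raises the lower bound.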
(01:05:03):
And radars are looking for a 3D reflection. If you look across the radar path, it doesn't want to see just a flat reflection of a vehicle. It wants to see a tailgate, a rear axle, a transmission cross member, a front axle, something like the back of the cab. So it wants to see noise that is roughly the length of a vehicle. If you only give it a flat plane, some radar systems will ignore it. So that limits the proxies, the exemplars, that we can use in testing.
Lou (01:05:33):
It sounds super challenging. And I think that's a good analogy, too, for the way I've been thinking about this, similar to what you're talking about with the airbag thresholds, where you get a call from some attorney and they say, well, the airbag should have popped in this, can you tell me? And it's like, I could tell you if it was commanded or not, but beyond that it's a matter of evaluating the algorithm that was designed for this vehicle specifically, and that's not me. And do you have the qualifications to go up against the manufacturer or the developer of that algorithm and say, I know better than you, who have been developing these things for decades? And it seems like there might be an analogy to that with autonomous vehicles, where if somebody wants to go against Tesla for their algorithm, it's tough to evaluate, A, and B, who has the credentials to say they know better than what the manufacturers are doing?
Alan (01:06:36):
I mean, if you look at plaintiff cases against manufacturers, which is a type of case that I generally don't take, if you try to jump into the weeds and say, hey, if you had tuned this parameter a little differently, we wouldn't have this crash. If you had made it like this, we wouldn't have this crash. Well, a manufacturer can probably list a whole bunch of reasons why they did it the way they did, or why they didn't do it another way. But I think from a plaintiff claim standpoint, it's probably easier to take a step back and say, hey, look, you sold this system, you advertised it as being able to do this and it didn't. So now you want to have a bunch of excuses on why it didn't. No, the fact is you sold it to do this, and now a consumer's injured because it didn't. I think from a plaintiff's standpoint, that's probably the best way to make a claim rather than getting into the weeds on it.
Lou (01:07:27):
And it seems like that's what Tesla has been up against recently, because obviously, they are pretty vocal about what their cars are capable of, and they used terms like Autopilot, so the plaintiff attorneys have latched onto that. Although, I don't know if they've been successful yet. And there was a recent case here in L.A. for unintended acceleration that the plaintiff attorney just lost. They, obviously, have really good data. If you want to come after Tesla and claim that there was unintended acceleration, you better show that the accelerator pedal was not pressed and the brake was, and good luck.
Alan (01:08:00):
I would hope you would ask for that data first. That's a good place to start.
Lou (01:08:04):
Exactly. So do you ever test on a component level? In other words, figure out who makes the radar module, grab that, see what the capabilities are, read the owner's manual, or is that not necessary, at this stage, the way you see it?
Alan (01:08:19):
I've tried to do some things with that, and what I'm finding is that the future of our field is going to end up being a lot of software engineering because trying to do anything with components or trying to do anything with algorithms, it dives really quickly into software engineering. And I'm finding that's challenging to work with. But even if I get a component, the component can be configured different ways. So let's say I'm looking at the limitations of a radar sensor, it can be configured to output only raw data, or it can be configured to output targets. Most manufacturers just want the target. What is a moving target? Where is it? What's the size of the radar reflection? What's its lateral speed? That kind of thing. Just give me that.
(01:09:09):
For a number of years, Tesla Autopilot didn't use that at all. It used the raw data coming out of the radar, which most manufacturers don't. So if I'm going to do component testing, how was it configured in the vehicle? It can get challenging to work with, but I think we're going to see more of that in the future. And there are some papers from other companies where they've looked at components, and I think it's been useful. But to apply that to a specific case, it's challenging. It's definitely challenging.
Lou (01:09:37):
You have that big gap, like you're mentioning, you don't know how that signal is being processed, and unless you can get to it downstream of processing, then it might not be telling you anything.
Alan (01:09:47):
And a lot of times, how the algorithm uses it can be hard to understand. A Toyota is a great example because we can get so much data out of it. If you look at, say, the target number or the target classification, a lot of times, it doesn't make sense. It's not a repeatable value if you do the same test over and over again. So why it's doing what it's doing, or why the manufacturer has chosen to do it that way, is not always apparent. So what I've done, so far, is look primarily at the end performance rather than trying to get into how it gets there. Treat the system as a black box, give it some inputs, and ask: what are the outputs? Is it doing what we expect it to do? At this point, that's a safer way to look at it, I think. Hopefully, that'll change in the future.
Lou (01:10:35):
And that was one of my questions, is what does fingerprinting ADAS performance mean? And I suspect that's what you were just talking about, but maybe I'm wrong.
Alan (01:10:42):
So fingerprinting is something I've been working on, and Detroit Assurance data is really good for this because every time we get data from a VRDU, a Video Radar Decision Unit, it gives us, I think it's 10 events. Well, nine of them are going to be unrelated, but hey, that's still data we can look at. And so something that we really need to be able to fingerprint or correlate is time to contact versus speed. So if I'm going to drive at a fixed barrier at 10 miles an hour, I know pretty well at what point I'm going to get a warning and at what point I'm going to get automatic braking, and it'll vary for different manufacturers, but I know where it's going to be. At 30 miles an hour, it should be a bit earlier. At 70 miles an hour, it might be the same as 30, which doesn't make any sense.
(01:11:31):
It might be much earlier, which would be good, or it might not happen at all. So can we come up with a graph of time to contact versus speed for different classes of vehicles? That'd be really good for answering the question, why didn't this system work? Oh, well, it's because your time to contact at your speed was way outside of the bounds of where these systems work in most vehicles. So that's what I'm hoping to put together. And it's hard to get because you have to have that data of near misses, which we can get from Toyotas and Detroit Assurance systems, but not a lot of other places. The manufacturers have some of that data from their testing, but not from the real world. And there's a big difference between testing and real world.
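The time-to-contact relationship described here boils down to a simple ratio: gap divided by closing speed. A minimal sketch follows; the warning and braking thresholds are invented placeholders for illustration, not values from any actual manufacturer's system.

```python
def time_to_contact(gap_m, closing_speed_mps):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing, so no contact predicted
    return gap_m / closing_speed_mps

# Hypothetical fingerprint for one vehicle class: warn at 2.5 s, brake at 1.2 s.
WARN_TTC = 2.5
BRAKE_TTC = 1.2

def system_response(gap_m, closing_speed_mps):
    """Illustrative ADAS response as a function of time to contact."""
    ttc = time_to_contact(gap_m, closing_speed_mps)
    if ttc <= BRAKE_TTC:
        return "automatic braking"
    if ttc <= WARN_TTC:
        return "forward collision warning"
    return "no action"
```

Fingerprinting a real system would mean plotting the TTC at which warnings and braking actually fire across many recorded events and speeds, and seeing whether those thresholds hold constant or shift with speed, as discussed above.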
(01:12:20):
A great demonstration of that was the BMW X1 that... I forget if it was IIHS or I think it was AAA that tested it for pedestrian detection using a standard pedestrian detection test method, using a standard mannequin. Everything was the same they do on every other vehicle. It didn't avoid it once. It hit the pedestrian like 32 times. Well, I don't think that the BMW system sucks. I think it was designed or tuned for some other scenario, maybe real life pedestrians, not a mannequin. Another example is there's a type of deformable barrier called the ADAC barrier. It's great because it's fairly cheap, it's deformable, it's easy to hit. The radar signature of it looks nothing like a regular vehicle.
(01:13:13):
So going back to that 3D radar reflection I talked about, manufacturers now have to say, okay, we need to be able to pick out a motorcycle and a bicycle and a truck and a car and a boat on a trailer, oh, and make sure we can pick up that ADAC barrier so we don't end up on the cover of Consumer Reports for hitting one and not knowing it. So what a headache for a manufacturer, right?
Lou (01:13:35):
Yeah. That's not what they're actually looking for.
Alan (01:13:39):
Yeah, because that doesn't exist in the real world, but that's what they test on.
Lou (01:13:43):
So we should be using better barriers, it sounds like. The ADAC might not be an appropriate methodology or a tool to use.
Alan (01:13:50):
So it's a great barrier for testing. There's a lot of neat things you can do with it, but the problem with testing any system where AI is involved is that you can end up giving the system the answers to the test. You overfocus on a certain scenario and it starts to ignore other scenarios. So that's the risk with it. But certainly, it's wise to use a variety of targets. And I know manufacturers try to do that. There was a study in Australia by Volvo looking at kangaroos, to make sure they could track kangaroos, because you might have 30 kangaroos on the side of the road that really aren't a hazard, they're just eating. But you might have two that want to run across the road. Well, you've got to be able to identify those two, but select out the other 28. I mean, that's an extreme example of trying to identify unusual targets.
Lou (01:14:42):
That is a unique down under problem right there, for sure. So we don't have a lot of that in the States.
Alan (01:14:48):
But it does lead to what we do have a lot of, which is, edge cases. The entire problem with autonomy can come down to training humans to use it and edge cases because no matter what we do, we can't ever get out of edge cases. A kid on a hoverboard looks like a pedestrian, but the legs don't move like a pedestrian. It's not going the speed of a typical pedestrian. But to us, the hazard is the same. To your autonomous system, does it think of it the same way or does it put it in the category of a motorcycle or a bus stop? Who knows? Until you present it with that edge case, you don't know. I've got a great example in my class of an edge case where the hazards in the road are a chicken and a woman in an electric wheelchair with a broom chasing the chicken. You can't-
Lou (01:15:44):
They didn't program that in? Come on.
Alan (01:15:47):
But it was a Waymo vehicle that maneuvered through this scenario, and it did exactly what a human would do, it stopped when the hazards were in front of it, when the hazards were on the other side of the road, it tried to creep through the environment and then it stopped again when the chicken came back in front of it and it took pictures the whole way, which is exactly what humans would do. So that's pretty impressive. That's a success for autonomy.
Lou (01:16:10):
That totally is. Oh, man. I want to buy some of these test devices, by the way. It just sounds fun, to at least get my feet wet. Is that something that somebody can do who doesn't specialize in it, or are they too expensive?
Alan (01:16:24):
So I'll give you three examples, and one of them is a warning. But the first example is: go on Amazon and buy a cardboard cutout of your favorite character or politician or actor. In vehicles newer than about, I think, 2020, so I don't think your Tundra will do this, the pedestrian detection relies mostly on image classification and ignores radar data. So lots of new cars will stop for a cardboard cutout if it looks like a person and if the wind doesn't blow it over. So that-
Lou (01:16:59):
And buy somebody you want to hit, I imagine, buy some-
Alan (01:17:01):
Exactly.
Lou (01:17:01):
... figure that you hate.
Alan (01:17:04):
Mine is James Bond. You'll see it at WREX. I'll have James Bond there. And I'll get a couple others in case someone runs over Mr. Bond. So that's the easiest way to do it. If you want to use a car shaped target, that's more difficult. You have to have an image, so a vinyl graphic image stuck to something deformable. It can either be a piece of foam, or mine is actually bonded to a radar reflective fabric that I can then stretch out on an awning. I can attach it to an inflatable soccer goal or something like that, or you can make it a thick piece of foam with metal in it to be the reflectors. And some vehicles will respond to those and some won't because they don't have that 3D reflection I was talking about. So my Tesla, the adaptive cruise control will stop for that, but the collision warning will ignore it and blow right through it. Same vehicle-
Lou (01:18:03):
What does your hood look like, Alan?
Alan (01:18:06):
A couple of chips on it. So two different performances from the same vehicle. But now, for my warning, there were two cases, well, I think there have now been three, in Florida and Georgia, where customers at a dealership said, hey, what is this stuff? How does it work? And the salesman said, I'll show you. One of them aimed at a tree and hit the gas, and he hit the damn tree.
Lou (01:18:33):
Oh, man. That is-
Alan (01:18:34):
Another one aimed at a stopped car, hit the gas, drove into the car. And then there's a third one I use, actually, in my class, where it actually hits a group of pedestrians. So the problem is, if you're going to do it, make sure it's something you're okay hitting, and make sure it has a radar reflection, and unless it's the pedestrian scenario I talked about, make sure it looks like a vehicle. But the first time I did this with a vehicle image, the vehicle image I found didn't have a license plate on it. And I wasn't getting a very good response. And I thought, oh, there's no license plate on there, so I printed an image of a license plate, not metal reflective, just the image of it, stuck it on there. And the performance of ADAS systems probably doubled, just from adding a license plate. So odd little things like that could make a difference. But don't practice with anything you're not willing to hit. It's kind of the bottom-
Lou (01:19:31):
So I shouldn't put my motorcycle in front of my newly acquired Tesla that I'm hoping to get before too long, and just try to plow right into it.
Alan (01:19:40):
I don't recommend it, I don't recommend it. At WREX, you will see me walk in front of them and you'll see times when it works and when it doesn't work. But I've practiced at that. I'm a trained professional. Don't try that at home.
Lou (01:19:52):
Yeah, make sure you tell WREX's insurance company that so that they write the policy.
Alan (01:19:58):
Exactly, exactly.
Lou (01:19:58):
I didn't mean to cover this, but it's a bit of a selfish question. Tesla's been getting a little bit of heat recently for pegging motorcyclists. Is there an inflatable device that can be used for testing motorcycle detection for the autonomous systems?
Alan (01:20:14):
Not that I've seen, not that I've seen.
Lou (01:20:17):
I guess Lightpoint's got another mission, then we have to build one.
Alan (01:20:20):
There you go, there you go. You know what's interesting, because some of the testing I've done suggested that cameras and radar are really good at picking out motorcycles, surprisingly good at picking them out. So when I heard that Tesla has hit a couple of them, it surprised me a little bit, because I've had the opposite experience, where they picked them up very quickly. But I'm doing this in a stopped scenario. Can they pick them up at speed and assess closing speed and lateral offset? That, I don't know. But fire trucks worry me more. I don't know how many fire trucks Teslas need to run into, but it's been a lot. So that's a little more concerning to me.
Lou (01:21:00):
Yeah. I guess somebody gets hurt either way, but in that case, it's definitely the Tesla driver taking the beating. So the next section I was hoping to talk about is the future, and it'll be via my roundabout questions, but one thing I've heard you talk about a bit is whether or not autonomous vehicles will kill recon. And we all thought EDR might kill recon in the early 2000s. We're like, well, that's the end of us. We got downloads. What do they need us for? How do you look at the potential for autonomous vehicles crushing us in our industry?
Alan (01:21:35):
I don't think that'll happen at all. What I think will happen is it will dramatically change our industry. I think 20 years from now, we won't carry tape measures anymore. We won't measure tread depth or tire pressures anymore. But we'll all be experts at reading code and looking at validation studies within software. So I think our work is going to move more toward software engineering. When you think about it, 20 years ago it would've seemed crazy that we would even consider looking at hex data, and now sometimes we have to look at that stuff.
(01:22:15):
So that's the biggest change I see happening, is a shift. And for some people, they're going to feel like it doesn't fit them, and they'll have to either choose to change or maybe limit what kind of work that they do. But it'll make opportunities for other people as well who have the software background, but maybe not the automotive background to come into the field. So it's definitely going to shift, but it's not... If EDR didn't kill our business, if anything, it grew it. ADAS isn't going to kill it either. It'll grow it but change it.
Lou (01:22:46):
That's how I looked at video too. I think a lot of people are like, well, video's everywhere now, so we don't need the recons. But if your practice is anything like mine, when I get a video, it just means that my bill's going to be bigger. It's going to be more work because you can figure things out on a more detailed level, but it requires a lot more work to do that.
Alan (01:23:06):
Well, I guess that's a good analogy. So video, usually, allows us to do a better quality job than we could do without it. Sometimes it just adds noise, but usually, it's a better quality result. And ADAS will be the same. In the future, once the ways we get data from it are more standardized, we will be able to do a better quality job because of it than we can do today. So I look forward to that.
Lou (01:23:32):
So how do you consider your role? So people like you and me, I'm not quite as advanced as you age-wise, chronologically speaking.
Alan (01:23:42):
Advanced, I like that. I'm advanced. That's what I am.
Lou (01:23:46):
You are very advanced, Alan. So do you plan, personally, on diving into that? How does somebody... okay, if I'm entering recon and I'm 25, it seems inevitable, I have to be focusing on the software side. But I'm a mechanical engineer. I'm in my forties.
(01:24:04):
Who knows how long until I have to retire? Do I have to jump into the software side? Are you going to jump into the software side? Or can we get a buddy and say, "Hey, I need you to help me with my recons now?"
Alan (01:24:11):
I think it's a mix. I remember, a couple years ago, I had a problem where I found a programming error. I think it was a CAT engine, where something about the timing was clearly a programming error. I could identify it, but I didn't know what it meant. And I have an old friend who's an engineer at Apple, a software engineer at Apple, and I sent it to him and he said, "Oh, this is a two's complement problem," and sent me a link to what it was, an explanation of how it worked, a solution that likely would allow me to correct it and still work with it. I had no idea that existed.
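A two's complement problem of the kind described here typically looks like this: a raw field decoded as an unsigned integer when the value is actually signed, producing wildly wrong readings. A minimal sketch, with the field width and example values chosen purely for illustration:

```python
def to_signed(raw, bits=16):
    """Reinterpret an unsigned two's-complement field as a signed integer."""
    if raw >= 1 << (bits - 1):      # high bit set means the value is negative
        return raw - (1 << bits)
    return raw

# A 16-bit timing offset logged as 65535 counts is really -1 count when the
# field is signed: decoded as unsigned, the timeline jumps absurdly; decoded
# as signed two's complement, it's a small backward step and makes sense.
```

This is the sort of fix that lets you keep working with otherwise "impossible" logged values instead of discarding them.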
(01:24:48):
But he knew immediately when I told him what the problem was. The same guy, when I needed to do a low-pass Butterworth filter in Excel and couldn't find a clean way to do it, wrote me one in five minutes. So that literally is exactly the kind of thing he does for a living, but he doesn't know how a car works, right? So being able to go to him is really useful.
(01:25:10):
But on the other hand, I find it really frustrating that there's more I could do if I had a software development background. And I've thought about getting more into that field. And then I remembered, oh, that's right. I hate coding. Can't stand it.
Lou (01:25:24):
Yeah. There's a reason we're not software engineers. Yeah.
Alan (01:25:27):
Exactly. But you know, you look at the stuff Jeremy Daly does. I mean, he does coding. But I don't think that's, at the end of the day, what he's trying to do. He's trying to reverse engineer a lot of electronics and software, but not actually create it. If I had what he has, I would be thrilled. I would like to have the knowledge and capability Jeremy Daly has and put that into my work, where I can grab a radar and get data out of it and do testing with just that component. And I'm moving that way, but it's fairly slow. Right?
(01:25:59):
But part of what started me down this road is every couple years I get bored of left turn crashes. Oh, this time it was the red car turning left, not the blue car turning left. Oh, that'll make a big difference. This is exciting now. Well, it just gets boring after a while. Right? So whenever my work gets boring, I start messing around with things to make it interesting. What can I do? What new methodology can I experiment with? What can I learn to make this interesting?
(01:26:26):
And in 2015, I bought the new Ford F-150 when it came out, which was one of the earlier vehicles that had the whole suite of ADAS stuff on it. And I was fascinated by it. And I just had to learn everything I could about it. And that's how it started. So a big motivator to me is to keep me interested, to keep me excited about this work. And to me, this is exciting. It's probably not to everybody, but to me it is. So what really drives my motivation is having questions I need to answer and curiosity to inspire me. So that's kind of what governs how much I learn about it.
(01:27:02):
But one of my mountain biking buddies is doing a Master's Degree in Electrical Engineering right now, and is an absolute nut about Teslas. Well, we were on the side of a mountain bike trail yesterday and someone was making fun of us because we were talking more than biking. Well, to me, that's part of how I not only enjoy what I do, but how I learn more about it. Right? I learn things from him that I wouldn't find on my own.
(01:27:25):
So, to me, we're fortunate if our work overlaps with our personal lives such that we enjoy what we do and there's not much distinction. For me, to smash up vehicles is fun. I don't have to get paid to do that. So we're fortunate in that way, right? Not everybody has that, but that makes it easier to learn things as well. For someone else, if this topic is dry or boring or unappealing for some way, yeah, it's not going to be as much fun to learn about it. They're probably not going to learn as much. But that's okay. That's different for everybody.
Lou (01:28:01):
Yeah. And then you will need that buddy. You'll have to pair up. It'll be a multidisciplinary effort. Maybe they can call you, they can call an electrical engineer. And like you said, for human factors, most of us can answer what is the response time to a left-turning vehicle in the middle of the day, where contrast isn't an issue. Most of us can do that and most of us should be able to do that. But if it's going to get more detailed, phone a friend or dive down that rabbit hole really deep. And if it's something that blows your hair back, then that's you. And if it's not, then you're going to need to know who to call.
Alan (01:28:33):
Exactly. Yeah. And that's a different decision for everybody. For me, it's not surprising. I do get called from people looking for help, and I like the fact that then I'm just doing the ADAS part. Someone else has already done the reconstruction. I don't have to worry about that. I can focus just on this nugget, this part that I'm so fascinated by. And so, I kind of like that.
Lou (01:28:54):
Yeah. And we have a lot of similarities career-wise, we were talking about before recording, and I think we've always found that we had a lot in common. And that's the way my career has developed too, where I'm just searching for exciting things, new things to learn, things that can help me perform analyses that are more sophisticated, more detailed. And if I find something, even if it doesn't look profitable at first, I don't care. I'm diving down that rabbit hole because that looks like a lot of fun.
Alan (01:29:22):
Oh, yeah. Absolutely.
Lou (01:29:22):
And generally, they will align. If you are excited about it and you see that it personally has value, then generally there are other people who are going to find value in it, too.
Alan (01:29:31):
Mm-hmm. Mm-hmm.
Lou (01:29:32):
So on that note, seems like continuing education always has obviously been important in recon, especially with EDR and video analysis and things that are changing quickly. But this, it brings it to a whole new level. Things are changing so quickly. If you really even want that base level of knowledge, it seems like you're going to have to pay attention and go seek out some classes.
Alan (01:29:57):
Oh, absolutely. Yeah. It's difficult to even keep up with. So with the SAE class, there's a test at the end of the class, and one of the questions was, true or false, are there any Level 3 production vehicles available in the US? Well, the answer was false until last week.
Lou (01:30:14):
Yeah. Right.
Alan (01:30:15):
I mean, that's just one example. But literally, these things change weekly. Right? So it's tough for even me to keep up on it.
(01:30:23):
And of course, it's that question that keeps good people in our field awake at night. What if I'm wrong? What if I miss something? What if I didn't get something? What if everybody else knows that Teslas can't identify motorcycles, and I'm the only one who didn't know that? What if something like that happens?
(01:30:39):
Well, I think that paranoia is part of what drives us to keep learning, to make sure that nobody knows more than we do. But it is a significant challenge trying to stay on top of it, especially when the same vehicle, one model year to another, can have dramatically different behavior. It can improve significantly. The technology we have went from pretty much not being able to detect pedestrians at all in, say, 2017 production vehicles, and I'm kind of generalizing here, to where I'm willing to walk in front of most 2021 production vehicles below 15 miles an hour. I mean, that's huge.
Lou (01:31:21):
Yes.
Alan (01:31:21):
That's a huge difference in just four years.
Lou (01:31:26):
Yeah.
Alan (01:31:26):
And how much did tires or engines or seat belts or airbags change in those four years? Yeah, they improve, but incrementally, right?
Lou (01:31:34):
Yeah. And I know, in speaking with Rick Ruth, who teaches EDR classes, and then I teach a motorcycle class, one of the biggest benefits of being an instructor like that is forming the communication, the relationship, I should say, with the community, because they ask you questions when they see weird things. And you can tackle them together, and then you end up on the forefront.
(01:31:55):
So I know you're teaching the SAE class. I suspect you're learning a lot from all of the attendees who come with casework, A. And then, B, I'd just like to hear you talk a little bit about the course. I saw the SAE listing, and you made an awesome trailer for the course. And it looked like you were doing testing at the IIHS facilities there. Could you just fill us in a little bit about the class? There's one in May, it looks like. I'm hoping to attend.
Alan (01:32:20):
Yes. Yeah. So the class, I started maybe five years ago. It was a one-day class, not just on autonomy or ADAS, but reconstruction of crashes using it. And there's another really good SAE class just on autonomy itself. But mine focuses on the reconstruction of accidents involving ADAS.
(01:32:40):
And it took me a couple of years to get the content all worked out. But this past year, we engaged with IIHS to use their facility to do not only the training, but also the onsite testing. And I've got to tell you, that was a highlight of my career. We show up there, we get a tour of the whole place. They ran a live crash test, their small overlap test, for us. They ran the pedestrian mannequin, which moves across the road. Anytime you have a moving target and a moving vehicle, the complexity is a lot higher. They did that right there for us. And then they set up stationary targets for us to drive into. We could download them.
(01:33:19):
And then we're back in the classroom, where we're all sitting around a round table. It's not me up in the front. We're all kind of sitting together, which creates more informal conversation. There were TV screens all the way around us so anyone could look in any direction and see what I'm showing. And we're talking about this stuff. And I just stopped talking for a second and stared out the window, the big window, the whole length of the room.
(01:33:43):
And as I stopped, people started turning around. Well, what was coming in was a transporter full of Rivians, the R1T and the R1S, which I don't think were even available for sale yet. They were early production units coming to IIHS for their testing, for rating and so on. I hadn't even seen the R1S. And here, 11 of them come in, and they're going to leave in a dumpster by the time they're done.
Lou (01:34:07):
Yeah.
Alan (01:34:07):
Because they've got to crush them when they're done. So the whole thing was just amazing.
(01:34:12):
Well, now that COVID travel restrictions have lightened up, the companies that support IIHS are filling up their travel schedule. So we can't use that facility right now, but we're probably going to be using the Mcity Autonomous Vehicle Test Track in Ann Arbor, near Detroit, starting in May. I'll be up there in two weeks to go do a tour of it. But it looks like we'll use that from now on, and all the same kind of stuff, just a different track. And one neat thing about that is they have some V2X capability there that we can mess with, and they have a Level 4 shuttle that we can ride in. So some other neat things there.
(01:34:51):
Because what I really want people to do when they come to my class is not listen to some old man talk. I want them to experience things and see it for themselves. Because to me, the best way to teach someone isn't to tell them something, but to show them a phenomenon and let them decide for themselves what happened.
(01:35:08):
Here it is in action. Here it is working. Here it is not working, sometimes. You decide for yourself, don't wait for me to tell you what it means. Decide for yourself what to do with that, and how to kind of package this up for your own understanding.
(01:35:21):
And I remember, in the tour of IIHS, a friend of mine who was in the class, he looked kind of bored. And I remember thinking, how could he be bored in this amazing place? And then I look back again, and he had his cell phone up just doing a pano picture of the entire facility all the way around. And I realized he wasn't bored. He was amazed by the place we were in. And it was a cool place. And doing it at Mcity is going to be pretty amazing as well. So it's a lot of fun. I really enjoy it.
Lou (01:35:55):
Yeah, it's great. It sounds like it's a great pairing of resources between your knowledge and everything you bring to the table, and then SAE's power to bring people to the table and contribute with facilities and venues.
(01:36:09):
And that might be a good way for a lot of people to, like you're saying, if you don't have that spark, if this doesn't excite you, then the odds of you having the information and the knowledge required to reconstruct one of these things fully by yourself are low. You have to be excited about it. You have to dive into it. And probably one of the best first places to do that is to go take your class, see what's out there, see some of this stuff happen. If it triggers something in you and you want to learn more, do it. If you don't, then figure out the contingency plan, which is at least know enough to be able to quarterback it and bring somebody else onboard.
Alan (01:36:47):
Yeah. Yeah. I mean, you literally just bookended my class. One of the first slides is here's accident reconstruction. Here's all these overlapping specialties, like human factors. How much overlap do you want to have in your field? That's one of the first slides.
(01:37:02):
One of the last slides is, if you like this, here are some other places you can go to learn more about it. So that kind of gives people time to think about what they want to do with it. And some, I'm sure, dive right in. Some probably are like, "Hey. Nah, I'm going to call someone else when these come up."
(01:37:19):
And both are fine. But it's just I'm a big fan of thinking ahead. Decide before you get that call, where someone asks you what this stuff is, decide before you get the call, how you're going to handle it. Are you going to take it on yourself? Take on some of it and hand it off? Please don't say, "Oh, I don't know anything about that stuff." No, we're all smart. We can learn about this. So it's well worth learning something about it.
Lou (01:37:44):
Yeah. I think, and one of the things, if you talk to reconstructionists, the ubiquitous concern is that we're going to be dinosaurs. And in speaking with you, I think it's pretty clear, no, you absolutely do have to learn about ADAS in autonomous vehicles. But you don't have to be the expert.
(01:38:04):
There are two distinct paths. The third path doesn't exist. You can't not know anything.
Alan (01:38:10):
Right.
Lou (01:38:10):
Not an option anymore.
Alan (01:38:11):
Yeah.
Lou (01:38:12):
But the two paths that do exist, if you want to stay in this industry, are learn enough about it so you can quarterback it or be that person yourself. And you have to pick one of those paths if you want to hang with recon in 2030.
Alan (01:38:25):
Oh, absolutely. Absolutely. I mean, an extreme example, I got a call just a couple years ago, not 15 years ago, just maybe two or three years ago, from a client who said, "Hey, Alan. I've hired this reconstruction expert in another state. He can handle the crash, but he can't do the truck download. He doesn't do truck downloads at all. Can you come do it?"
(01:38:46):
And I said, "I can. Here are some less expensive options, but here's a thought to consider. If your reconstruction expert won't touch truck downloads at all and doesn't have a way to get it done for you, you probably shouldn't use that expert. You probably shouldn't use them at all, you know?"
(01:39:03):
Because there are some truck downloads, a Mack and Volvo download, that I can't do, but I can tell my client, "I'll coordinate it for you. I'll get it done. Here's the other party I use and here's why." So I take care of it all. But to tell a client, "Oh, I don't do that." No, this is part of our field. Just like truck downloads, we need to have a way of managing this, right?
Lou (01:39:25):
Yeah, exactly. It shows a lack of awareness of the community and involvement with the community. I'm with you. I know my boundaries. There's a bunch of stuff I don't know how to do, but I think that's important. Part of our job as a recon, like I've said earlier, is helping a client select the appropriate experts for that job.
Alan (01:39:46):
Yeah. And thankfully, most people in our field feel the same way.
(01:39:48):
"Hey, you know what? That's not in my specialty, but I know someone who can do it. I'll help you set that up." That's the right answer.
Lou (01:39:55):
Yeah. And that's where the majority of my work comes from. I'm a motorcycle expert, that's all I do now for reconstruction work. And a lot of times people understand, hey, that's a little bit over my head. Give Lou a call. And so, I appreciate it. And then I do the same thing with a bunch of other things, including heavy truck downloads. If the truck is not annihilated, I'll do the DLC downloads, the Detroit Diesels and the Cummins. Those are pretty easy. I learned from Jeremy. And I'll take that guy's intelligence too, by the way, if his knowledge is up there for availability.
Alan (01:40:34):
You're right. Right?
Lou (01:40:34):
So you were saying, we're going to be trading our tape measure for a laptop, kind of downloading things, understanding the software, that's going to be more important than going out there and measuring a tire mark or something like that.
(01:40:48):
And then we kind of talked about the omnipresence of data, CDR, vehicle control history, whatever the heck is going on in those dash cams that we might eventually get. And then Tesla has the vehicle data report that's either out or coming out, which you'll be able to get. What does recon look like in a decade? Is it primarily interpreting electronic data and video, with fewer boots on the ground?
Alan (01:41:16):
Yeah. And I'd say, actually, that describes the majority of my work today. I actually like crashes I can reconstruct without seeing the vehicles or the site at all, because that means I have really good quality evidence. Right? So, yeah, I do that quite a bit now. And what happens with that is we have to interpret all this data.
(01:41:38):
So take Tesla log data, it's not made for us. It's not made for our use. It will tell us things like adaptive cruise speed is set at 82 miles an hour. Well, wait, this is a neighborhood street. It can't be 82. Was he trying to do 82? Well, no, he wasn't. That's just the last set speed it had and it propagates through the data, right?
(01:42:02):
So there's a lot of things like that in electronic data we have to be able to pick out, that whoever coded that report and the data behind it isn't doing what we're doing. They don't really care what we do. So we've got to pick out what matters and what's relevant, and be able to weed out the things that aren't relevant and explain why they're not relevant. No, this driver was not trying to do 82 miles an hour on a neighborhood street. That's just the last data point that was held in the log. But I actually do that a lot. I mean, between dash cams, telemetry and EDR data, we can get a really good picture of what happens in a crash now, in addition to everything we normally do with the vehicles in the roadway. And I think that's great. There's a lot we can do there.
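The stale set-speed problem Alan describes can be illustrated with a small sketch. This is a hypothetical example, not Tesla's actual log format: the field names (`speed_mph`, `acc_set_speed_mph`, `acc_active`) and the staleness heuristic are assumptions made up for illustration. The idea is simply to flag rows where the logged cruise set speed is far from the measured speed while adaptive cruise is inactive, suggesting a carried-over value rather than an active driver setting.

```python
# Hypothetical sketch of weeding out stale "set speed" values in a telemetry log.
# Field names and log structure are illustrative assumptions, not a real format.

def flag_stale_set_speed(rows, tolerance_mph=15):
    """Mark rows where the logged cruise set speed is likely a stale
    carry-over (e.g., a highway setting persisting onto a neighborhood
    street) rather than an active driver setting."""
    flagged = []
    for row in rows:
        # If ACC is off and the set speed is far from the measured speed,
        # treat the set speed as a leftover from an earlier drive segment.
        stale = (not row["acc_active"] and
                 abs(row["acc_set_speed_mph"] - row["speed_mph"]) > tolerance_mph)
        flagged.append({**row, "set_speed_stale": stale})
    return flagged

log = [
    {"speed_mph": 28, "acc_set_speed_mph": 82, "acc_active": False},  # neighborhood street
    {"speed_mph": 80, "acc_set_speed_mph": 82, "acc_active": True},   # highway, ACC engaged
]
result = flag_stale_set_speed(log)
# result[0]["set_speed_stale"] is True: the 82 mph value is a leftover,
# not evidence the driver was trying to do 82 on a neighborhood street.
```

In a real analysis the relevant fields and their update behavior would come from the manufacturer's data documentation; the point is the reasoning step, cross-checking a logged value against context before treating it as driver intent.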
Lou (01:42:48):
Yeah. And it's all evolving so quickly. Kind of like we talked about, there's no current textbook for this. And if there was, you'd have to rewrite it every single week.
Alan (01:42:57):
Yes.
Lou (01:42:57):
So this is going to be something that we all have to keep our finger on the pulse of, and keep your head up there, and maybe take your class every couple years. Find a class, form relationships with people who are keeping apprised of the situation, and ask them what's going on in a specific case. There's just a lot there.
(01:43:21):
And so we have WREX coming up, obviously, in a couple months and you got your presentation there. So hopefully, you'll be informing 1,200 people about where we're at and what they should do. I just powered right through. It looks like you wanted to say something, so I'll give you that opportunity. But then I'd love to hear about WREX and your presentation there.
Alan (01:43:41):
Oh, yeah. So I was just thinking about how to learn things, how to use continuing education to stay up in our field. And I was thinking, early in my career, I thought that reconstruction conferences were just to learn something and stop. Okay, I'll take an EDR class, I'll take a CDR conference presentation, and then I'm good. That's all I need to know.
(01:44:01):
What I didn't realize is that most of the people doing presentations, they're progressing their knowledge, they're advancing what they know and what they share. So it's really a way, if you're early in the field, it's a way to learn what's out there. For most of us, it is a way to stay up with what's changing and what's new. Right?
(01:44:18):
And so that's the real value of a conference, is don't go take a five-day class necessarily just to learn what's new, come to these conferences and learn the bits and pieces that are new. And so that's one of the real strengths of it to me.
(01:44:33):
In that way also, I mean, that's how you and I met, it was I think at the WREX conference or maybe one before that. That's how you get to know people and who's good in your field that you want to go to for different things. Right? That's one way you find them.
Lou (01:44:46):
Yeah. They've been invaluable in my career. Like you said, forming relationships just unintentionally, you find people that you get along with really well and you have overlapping interests, but not necessarily the same skillset. And when you put the two skill sets together, you can accomplish things that otherwise wouldn't have been accomplished.
(01:45:05):
And then, like you said, you get that exposure. It's such a crazy community in that there is so much you have to know and so much to be aware of. And it's really difficult to do that without getting out there, talking to people, going to conferences, being part of a forum, and then at least you get the exposure. You're not going to learn autonomous vehicle reconstruction by listening to Alan Moore for two hours, but you'll know what you need to know and you can start getting out there and exploring it, and figuring out if it's something that interests you.
(01:45:34):
And that is huge. And you can see, the people that don't go out and attend conferences and become part of the community, you can see gaps in their skillset that are not necessarily there for people that are staying in touch.
Alan (01:45:48):
Yeah. Yeah. I mean, one way to think of my presentation at WREX is I want to give people a toolbox, not a physical toolbox, but kind of an intellectual toolbox. Here's what's out there and here's what you can do with it.
(01:46:01):
So if you buy a bunch of tools, you may never use every tool, but you know you've got a full set of metric sockets. If you need a 17 or a 12 millimeter socket, you've got it in there. Right? My goal in speaking at conferences is to give people a toolbox. So when they hear Lane Departure Warning or Lane Keep Assist, or why everybody turns a certain feature off in their cars because it's really annoying, I want people to know what those are. So when they hear it later, they're like, "Oh, wait. Wait, wait. Hang on. No, I heard about that. Let me go back and look at the notes. Let me go back and look at the PDF when he talked about this."
(01:46:34):
So people kind of know what's out there. They may not know every piece of it or how to use all of it, but at least they'll know what's out there. And I think that's important.
Lou (01:46:41):
Yeah, I totally agree. Get that first exposure. So how do people stay in touch with you? How can we continually monitor what you are thinking about so we know what we should be thinking about? How do people find you?
Alan (01:46:55):
So email is the best way, and then conference presentations. And then, you know, the INCR news group or user group is obviously huge. I try and stay on that, but I go in and out of it, especially now because I'm traveling quite a bit. I'll disappear from it for a while, but usually I try and check it. Whenever there's something ADAS-related, I try and jump in there and get on it, because it's a good way to share ideas and share awareness.
Lou (01:47:24):
I totally agree. That's been a huge asset in my career, just in that you have access to people like you, if you're checking anyway, and you can just post up and say, "Hey, I have this question. This is the case I'm working." Someone there will know about a paper, or about a methodology, or about who to contact. So for those that aren't a part of that, I think you have to be nominated, but feel free to reach out to me and I'd be happy to nominate you if appropriate.
Alan (01:47:49):
Mm-hmm. Oh, yeah. Yeah. The one unfortunate thing about that, though, this happened just about a month ago. Someone had emailed me about a Tesla case they were working on. They needed help on it, and I don't remember the details, but it was a fascinating case. The log data had some really good stuff on it. And he sent it to me as I was heading off to travel somewhere. I thought, okay, I'm going to get to this when I get back. Well, I came back and everything happened at once. Six months later, I'm like, "I never got back to you on that, did I?"
Lou (01:48:17):
Oh, yeah.
Alan (01:48:18):
And the case, whatever it was, the case ended, it was over. There was nothing more to be done. And it was a really cool example of, I forget what now. So that's the unfortunate thing, is it's not always the most efficient because our time draws us elsewhere. Right? So that's kind of the one downside to it.
Lou (01:48:33):
I tell Sam here sometimes when I come in, and I do try, but if I responded to every email in my inbox, I would never do my job.
Alan (01:48:41):
Yeah. You'll never get done.
Lou (01:48:42):
And my family will have no money. Well, thanks so much, Alan. I really appreciate you spending a couple hours to talk about this stuff. I think it's really important for people to hear about. It's like, listen, folks, there's no textbook for it. So keep up with people like Alan. Take classes and figure out what your tolerance for learning all of this stuff is.
Alan (01:49:01):
Absolutely. Good. Well, thank you for putting this together. I enjoyed it. I'll be curious to see where it goes.
Lou (01:49:06):
Cool. Awesome. Thanks, Alan.
Alan (01:49:07):
All right. Thanks, Lou. Take care.
Lou (01:49:10):
Hey, everyone. One more thing before we get back to business, and that is my weekly bite-sized email, To the Point.
(01:49:16):
Would you like to get an email from me every Friday discussing a single tool, paper, method, or update in the community? Past topics have covered Toyota's vehicle control history, including a coverage chart, ADAS, that's Advanced Driver Assistance Systems, Tesla vehicle data reports, free video analysis tools and handheld scanners. If that sounds enjoyable and useful, head to lightpointdata.com/tothepoint to get the very next one.