Michigan Minds: When will cars drive themselves?
EXPERT ADVISORY
In this episode of the Michigan Minds podcast, Henry Liu—director of Mcity and the Center for Connected and Automated Transportation, and a professor of civil and environmental engineering at U-M’s College of Engineering—gives an overview of the state of autonomous vehicles, whether you’re wondering what the holdup is for cars that no longer need steering wheels, or eyeing offerings like Tesla’s Full Self-Driving, GM’s Super Cruise or Ford’s BlueCruise.
Kate McAlpine:
Welcome to the Michigan Minds Podcast, where we explore the wealth of knowledge from faculty experts at the University of Michigan. I’m Kate McAlpine, engineering news editor for the Michigan News Office. I want to welcome Henry Liu, director of Mcity and a professor of civil and environmental engineering, who will tell us about the state of autonomous vehicles. Professor Liu’s recent research has explored ways to train them on rare, dangerous, real-world situations more quickly.
Welcome, Professor Liu. To start off, could you give us a brief overview of your expertise in autonomous vehicles?
Henry Liu:
Well, my research mostly focuses on testing and evaluation of autonomous vehicles, and we do training of autonomous vehicles as well. My other area of expertise is connected vehicles, meaning vehicles that can communicate with each other, and how we can utilize that for various applications.
Kate McAlpine:
On the one hand, my colleagues who’ve been following autonomous vehicles closely in the news know how much slower full autonomy has been to arrive than was originally promised. What have the technological roadblocks been?
Henry Liu:
The major challenges to autonomous vehicles are mainly twofold, in terms of safety. The first I call the curse of dimensionality. What that truly means is that the driving environment for autonomous vehicles is very complex. If you think about the combinations of different weather conditions, different roadway infrastructure and different users, and the fact that different users have different behaviors, there are many, many different situations an autonomous vehicle has to be able to navigate through, and that’s very difficult. The good news is that with the development of deep learning techniques, that barrier seemed to be falling. That’s also why, somewhere around 2016 or 2017, many of the car companies announced that they would be mass-producing autonomous vehicles by 2020.
Now it’s 2024, and we don’t have any commercially available autonomous vehicles for the average consumer yet, mainly due to the second major challenge, which I call the curse of rarity. What that really means is that there are a large number of rare cases related to the safety of autonomous vehicles that these vehicles have to be able to handle. These rare safety-critical cases could happen in any situation: on freeways, on urban arterials, on sidewalks and so on. Just to give you one example: say it’s around Halloween, and there are pedestrians wearing costumes walking on the street. The autonomous vehicle’s perception system could misclassify those pedestrians because they are wearing, let’s say, a dinosaur costume, and then it’s difficult to recognize that this is really a pedestrian. That misclassification could create dangerous situations for these autonomous vehicles to handle.
There could be many, many of these types of situations, and it’s very difficult to collect a sufficient amount of data to train the machine-learning models. That’s the difficulty people sometimes call corner cases, and we call it the curse of rarity. I think we still have some distance to go to be able to handle that.
Kate McAlpine:
You’ve been working on the curse of rarity. Can you tell us a little bit about that?
Henry Liu:
Yes. The probability of some of these safety-critical events is very low, and there’s also a compounding effect: the autonomous vehicle still has to be able to navigate through a complex driving environment. There are multiple dimensions, meaning there are many, many different situations that could generate a safety-critical event. I just gave you one example, the Halloween event. Another could be different weather conditions, which create difficulty for autonomous vehicles as well. And sometimes crowded pedestrian situations are also very difficult for autonomous vehicles to navigate through. What I’m really saying is that with this complex driving environment and the low probability of safety-critical events, it’s very hard for machine-learning models to get enough data to learn from. The majority of the data are not safety-critical, and autonomous vehicles can handle those types of situations already. For these low-probability safety-critical events, there’s not enough data.
Kate McAlpine:
Now, you developed some software for faster training. Can you tell us a little bit about that?
Henry Liu:
We basically separate the training data set into two types of data. One is normal driving situations; the other is safety-critical events. Then, putting it all together, we learn from these safety-critical situations: some of them are successes that avoid the accident, and some of them are failures that result in an accident. We learn from both successes and failures to train the machine-learning model to handle these types of situations. It actually turns out that learning from success is much more important than learning from failures. Although as human beings we learn from failures, learning from success is more important here because you have a lot more data to learn from, since many of these situations are near-misses. Actual accidents have an even lower probability, so the data for those is even smaller.
Kate McAlpine:
Okay. That’s really interesting. So I understand that there’s a roundabout a little south of town?
Henry Liu:
It’s the most dangerous roundabout in Washtenaw County.
Kate McAlpine:
But you put sensors there, right?
Henry Liu:
Yes.
Kate McAlpine:
And what did you see?
Henry Liu:
The reason that roundabout is quite dangerous is that it’s a two-lane roundabout, and there are generally two rules for a two-lane roundabout. One is that if you approach the roundabout in the outer lane, you can turn right and you can go through, but you should not turn left. The other general rule is that when you approach the roundabout, regardless of whether it’s one lane or two lanes, you have to yield to traffic in both lanes. These are the two things that people always forget, and sometimes they create dangerous situations.
What we did at that roundabout is add sensors at each of the corners of the intersection. We have cameras there that look down on the intersection. We can identify the vehicles in terms of their locations, the size of each vehicle and things like that, record the trajectories of these vehicles, and also identify accidents that occur at the intersection and, more importantly, near-misses that occur at the intersection. This gives us enough data that we can use to train our machine-learning models to help autonomous vehicles navigate through this roundabout.
Kate McAlpine:
So the autonomous vehicles won’t make our mistakes at that roundabout. One of the challenges that I think Mcity is looking at is certification. How do you prove that your AV is safe enough?
Henry Liu:
Right.
Kate McAlpine:
Can you tell us about that?
Henry Liu:
Mcity has developed what we call the Mcity Autonomous Vehicle Safety Assessment Program. We have two tests. Part one is what we call the driver licensing test. This is very similar to the test human drivers take when they get a driver’s license. What we are trying to do is test what we call basic behavior competency for autonomous vehicles: for example, whether this vehicle will be able to follow other cars, whether this vehicle can merge onto the freeway mainline, whether this vehicle can make a left turn during the permissive time period, and whether this vehicle can navigate through the roundabout, the two-lane roundabout we discussed.
Part two of the safety assessment program is what we call the driving intelligence test, and that’s a more comprehensive test of autonomous vehicle safety performance. We’re trying to compare autonomous vehicle safety performance against human drivers on average. In this case we would like to know, for an autonomous vehicle, what the accident rate will be, meaning how many miles in general driving situations this autonomous vehicle goes before it incurs an accident, and we compare that to average human drivers. Obviously, for autonomous vehicles to be deployed at large scale, we expect their safety performance in terms of accident rate to be much better than that of average human drivers. So this is the two-part test for autonomous vehicles we have developed.
Kate McAlpine:
Okay. What will it take to get that implemented? It doesn’t seem like there’s a ton of oversight for autonomous vehicles right now.
Henry Liu:
At the moment, there’s no consensus in the professional community on how we test and evaluate autonomous vehicles. And autonomous vehicle testing itself is actually quite complex. Just think about human-driven vehicles: there are federal guidelines on the safety performance of these vehicles, and we also have a driving test and a written test for drivers. In this case it’s a driverless vehicle, so we need to test both. We need to ensure both that the driving functions of this vehicle satisfy the standard in terms of functional safety, and that the intelligence of this vehicle, like a human driver’s, is better than that of average drivers.
At the moment, particularly for the second part, the intelligence or behavior competency of these autonomous vehicles, there’s no consensus yet. So what we’re trying to do at Mcity, as a research center at the University of Michigan, is provide a framework for how these vehicles should be tested. Obviously we are not a regulatory body; we cannot certify any vehicles. But we can provide the safety assessment framework so that regulatory bodies can adopt it in the future.
Kate McAlpine:
So while we’re not seeing full autonomy in cars that we ride around in, we are seeing it in bike lanes with delivery vehicles, and also from U of M startups such as May Mobility, which has deployed some buses, I believe.
Henry Liu:
These are shuttles. I always say smaller shuttles or vans.
Kate McAlpine:
Oh, shuttle buses. Right.
Henry Liu:
Smaller shuttles, yeah.
Kate McAlpine:
Okay.
Henry Liu:
It’s closer to a passenger vehicle, but you can have multiple people riding it. Yeah.
Kate McAlpine:
How is that going?
Henry Liu:
May Mobility is doing really well. We just had Ed Olson, the May Mobility CEO, deliver a keynote at our CCAT Global Symposium last week. May Mobility is currently deploying not only in the US but also in Japan, so there are multiple deployments. I think one of the big milestones for them last year is that they started a deployment without a safety driver. That deployment is in Sun City, Arizona, right now.
Kate McAlpine:
Wow. And how about the bike lane vehicles? How’s that going?
Henry Liu:
So the bike lane vehicles, those are the delivery robots. Is that-
Kate McAlpine:
Yes, the delivery robots. I think Refraction AI is another U of M startup.
Henry Liu:
Yep. Refraction AI is another U of M startup. I think they have offices in Ann Arbor and also Austin, Texas, and they are testing and deploying these robots as well. Because they are low-speed, and because the mass of these robots is smaller, they usually do not create safety hazards, so it’s a little bit easier compared to autonomous vehicles transporting people.
Kate McAlpine:
So I’m kind of at the other end from my colleagues who are surprised that full autonomy is not here yet, because it’s not just Tesla anymore saying that you can take your hands off the wheel. I think both Ford and GM have similar offerings. How do you think about that? How ready are humans for that sort of vigilance without full engagement, and how is that as a step toward full autonomy?
Henry Liu:
All of these, including GM’s Super Cruise, Ford’s BlueCruise and also Tesla’s Full Self-Driving, so-called FSD, are currently what we call level two vehicles. What that really means is that they are advanced driver assistance systems. They are not full self-driving at level four autonomy. One important differentiator between full autonomy and advanced driver assistance systems is who takes responsibility, and for the three types of vehicles I mentioned, the human driver takes full responsibility for the vehicle. The difference among these vehicles, I think, is that Tesla FSD has a much larger operational design domain, which really means it can operate almost everywhere. Super Cruise and BlueCruise right now have limitations on where they can be used, mainly on freeways.
Kate McAlpine:
Okay. So even though it sounds flashy, we’re really looking at an advanced driver assistance system.
Henry Liu:
It’s still an advanced driver assistance system. But I would say the current development of AI, particularly large models and foundation models, and the recent advancement of Tesla’s so-called end-to-end model for autonomous vehicles, really has the promise to resolve some of the curse-of-rarity issues in the next few years.
Kate McAlpine:
All right.
Henry Liu:
So it’s coming.
Kate McAlpine:
It’s coming.
Henry Liu:
Yeah.
Kate McAlpine:
Thinking of those as advanced driver assistance systems, sometimes they get bad press because there are crashes and they’re high-profile. However, advanced driver assistance systems, for the most part, have been good for safety, right? Can you tell us a little bit about that?
Henry Liu:
There are a number of studies. Many of these studies are conducted by the car manufacturers themselves, for example Tesla, GM and Ford. They all have their own studies on safety performance when people use these types of features, and the latest report I have seen from Tesla shows that the accident rate is much lower than that of human drivers when FSD is being used. There are controversial arguments about these safety performance studies because they are limited to certain types of situations, so whether it’s fair to compare with average human driver performance is questionable. But their studies do show that the safety performance is much better.
Kate McAlpine:
Is there anything we should have covered that we aren’t talking about?
Henry Liu:
One thing I do want to say is that industry investment in autonomous vehicle technology really started around 2008 or 2009, when the Google X vehicle was developed. Investment peaked roughly around 2016, and interest in autonomous vehicles hit its lowest point last year, particularly after Cruise Automation had that terrible accident in San Francisco. But given the current advancement in AI, and the deployments from Waymo and also Tesla’s FSD, I’m very optimistic. My prediction is that in the next five to 10 years, we will see commercially available autonomous vehicles on the road.
Kate McAlpine:
Wow. That’d be amazing.
Henry Liu:
It’s truly amazing.
Kate McAlpine:
All right. Well, thank you so much, Professor Liu.
Henry Liu:
Thank you.
Kate McAlpine:
Thank you for listening to this episode of Michigan Minds, produced by Michigan News, a division of the university’s Office of the Vice President for Communications.
Contact: [email protected]