Autonomous-driving disruption: Technology, use cases, and opportunities

As autonomous-driving technology advances, new transportation use cases will emerge, largely driven by factors such as what is transported, the type of vehicle ownership, and where the vehicle operates. Use cases drive business models, value chains, and strategic decisions. Asutosh Padhi, a senior partner in McKinsey’s Chicago office, and Philipp Kampshoff, a partner in the Houston office, share the McKinsey Center for Future Mobility’s perspective on how the most prominent autonomous-driving use cases are developing, to help executives navigate and stay ahead of upcoming changes.

When will autonomous-driving technology be market ready?

Asutosh Padhi: Our expectation is that true Level 5 autonomy is about ten-plus years away. But we are likely to see geofenced applications of autonomous vehicles (AVs) in the next three to five years. The progress on the hardware has actually been very significant. The cost of LIDARs [light detection and ranging sensors], for example, has dropped by a factor of ten over the last five years. Similarly, the amount of computational capacity that GPUs [graphics processing units] can provide has gone up dramatically.

There are still two challenges that remain. The first is object detection and categorization: the ability of the car to recognize, for example, a pedestrian in all the forms a pedestrian can take, such as a pedestrian pushing a stroller, carrying an umbrella, or carrying a plant, or a pedestrian who simply doesn’t look like a typical pedestrian. The second challenge is decision making. When humans drive, they send each other a lot of subtle signals, about right of way and so on, and an autonomous car often can’t pick those up. So I think the combination of those two, and in particular the edge-case question, means it will take significant time to teach the car how to recognize and deal with edge cases.
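
To make the object-detection challenge concrete, here is a minimal sketch in Python of how a perception stack might triage detections by classifier confidence. The Detection record, the threshold value, and the example scores are our own illustrative assumptions, not a description of any production system.

    from dataclasses import dataclass

    # Hypothetical detection record; a real perception stack would derive
    # these from camera/LIDAR fusion rather than hand-written values.
    @dataclass
    class Detection:
        label: str          # best-guess class, e.g. "pedestrian"
        confidence: float   # classifier confidence in [0, 1]

    CONFIDENCE_FLOOR = 0.90  # illustrative threshold, not a production value

    def triage(detections):
        """Split detections into ones the planner can act on and
        low-confidence cases that call for conservative handling."""
        actionable, uncertain = [], []
        for d in detections:
            (actionable if d.confidence >= CONFIDENCE_FLOOR else uncertain).append(d)
        return actionable, uncertain

    # A pedestrian carrying a large plant may score low because the
    # silhouette no longer matches typical training examples of "pedestrian".
    frame = [Detection("pedestrian", 0.97),
             Detection("pedestrian", 0.55)]   # occluded by a plant
    actionable, uncertain = triage(frame)
    print([d.label for d in actionable])   # ['pedestrian']
    print([d.label for d in uncertain])    # ['pedestrian']

The low-confidence detections are exactly the edge cases Padhi describes: the car has seen something, but not something it can confidently categorize and act on.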

Philipp Kampshoff: That’s really the reason why we don’t see autonomous vehicles already readily available as robo-taxis driving around. The decision making is done by the neural net in the car, at least for the most part. You can train the neural net relatively quickly to accurately assess 95 percent of the situations, but it takes a lot more training, and a lot more driven miles, to get the neural net to 99 percent correctness.
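
A back-of-the-envelope sketch of why those last few percentage points are so expensive: if we assume, purely for illustration, that the net must see a scenario some fixed number of times before it handles it reliably, then the miles required scale inversely with how often the scenario occurs on the road.

    # Illustrative arithmetic only; the "examples_needed" assumption is ours.
    def miles_to_learn(scenario_rate_per_mile, examples_needed=1_000):
        """Miles of driving needed to collect enough examples of a scenario."""
        return examples_needed / scenario_rate_per_mile

    # A common scenario, seen about once every 10 miles:
    print(f"{miles_to_learn(1 / 10):,.0f} miles")        # 10,000 miles
    # A rare edge case, seen about once every 100,000 miles:
    print(f"{miles_to_learn(1 / 100_000):,.0f} miles")   # 100,000,000 miles

Under this toy model, a scenario that is 10,000 times rarer needs 10,000 times more driving to learn, which is why the long tail dominates the timeline.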

Edge cases are where the problem is. If you take a regular human driver, on average he has an accident roughly every 165,000 miles. That means that in 99 percent of cases, humans get it right when they drive themselves. However, regulators will require an autonomous car to be much safer than current human behavior. So it will come down to handling these edge cases. Take, for example, the four-way stop sign where, these days, hardly anybody really comes to a full stop: do you have the autonomous vehicle be the only one that behaves correctly and sits there forever, waiting for everybody else to stop all the way? Another example is a construction site where the autonomous vehicle is approaching a red traffic light, but a construction worker is waving people and cars through. How does the autonomous vehicle know that it can ignore the red traffic light? These are the kinds of edge cases that have to be overcome before autonomous vehicles reach mass adoption.
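
The construction-site example boils down to a precedence rule: an authorized human directing traffic overrides the posted light. The rule itself is easy to write down, as in the sketch below; the hard part is the perception problem of deciding that the waving figure really is an authorized traffic director. The names and signals here are hypothetical.

    from enum import Enum, auto

    class Light(Enum):
        RED = auto()
        GREEN = auto()

    def may_proceed(light: Light, human_director_waving_through: bool) -> bool:
        """Encode the precedence rule from the example: a human directing
        traffic overrides the posted light. Deciding whether the waving
        figure is such a person is the hard perception question."""
        if human_director_waving_through:
            return True
        return light == Light.GREEN

    print(may_proceed(Light.RED, human_director_waving_through=False))  # False: wait
    print(may_proceed(Light.RED, human_director_waving_through=True))   # True: proceed

Getting the boolean input right is the entire challenge: a system that misreads a bystander’s gesture as authoritative would run red lights, while one that never trusts a human director would stall at every construction site.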

Read more: https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/autonomous-driving-disruption-technology-use-cases-and-opportunities