Autonomous Car Crashes: Who — or What — Is to Blame?
The promise of driverless vehicle technology to reduce road fatalities hangs in the balance as never before. Two recent deaths involving Uber and Tesla vehicles using autonomous driving systems have intensified the safety debate to a point that threatens to significantly delay, or even derail, adoption of the technology.
Uber has temporarily halted tests of self-driving cars after the latest crash, as have Toyota and graphics chip maker Nvidia, whose artificial intelligence technology helps power driverless cars. Arizona, where the Uber crash occurred, has banned the company from testing its driverless cars in the state. Even before the latest crashes, California had introduced a permit process for autonomous vehicles with elaborate requirements.
The publicly available information on the two accidents does not clearly place the blame on either human error or the technology. On the night of March 18, Elaine Herzberg was crossing a six-lane road in Tempe, Arizona, pushing a bicycle, when she was fatally struck by a Volvo SUV that had been modified to use driverless technology. In what is believed to be the first pedestrian fatality involving autonomous vehicle technology, the SUV's sensors failed to detect Herzberg in time to slow the vehicle from its 38 mph speed, and police video showed that the safety driver in the car was apparently distracted.