Driverless cars are not ready for the road – as two recent deaths have shown

If the car experiences something in real life that has not been covered in training, how the car will react is anyone’s guess

Ashley Nunes
Sunday 15 April 2018 14:49 BST

Britain’s efforts to roll out self-driving technology continue. London’s Gatwick Airport recently announced plans to deploy autonomous vehicles on the airfield. The goal? To help local workers move around more seamlessly. Similar trials are already underway in Greenwich and East Croydon.

These efforts come in the wake of two accidents linked to self-driving technology. Walter Huang was killed in California last month after his Tesla Model X crashed into a motorway barrier and burst into flames. The car was, at the time of the accident, operating on Autopilot. Days earlier, Elaine Herzberg was killed in Arizona after being struck by a Volvo SUV that was also using autonomous features. She became the first pedestrian known to have been killed by an autonomous vehicle.

Who deserves the blame has been a source of speculation. Some say the driver. Huang, for example, didn’t have his hands on the steering wheel in the seconds before the crash, despite being required to do so. His death has also been blamed, in part, on the condition of the road: highway safety barriers, designed to absorb the impact of a crash, were purportedly in less-than-ideal condition.

However, Huang and Herzberg’s deaths have mostly been pinned on the immaturity of self-driving technology. These systems are, we are told, not quite ready for prime time. The cameras, sensors and radars, crucial to the success of driverless cars, require much more testing before they can be considered safe.

Once the technology is perfected, commentators say, congestion will ease, emissions will fall and cities will become more liveable as machines increasingly take charge of the wheel. Eventually, humans will be removed from the driving process entirely. At least that’s what we’re told.

The reality, however, is very different.

For one thing, self-driving technology relies on so-called “deep learning” algorithms. A car’s computer is loaded with large amounts of training data covering different traffic scenarios. The computer then predicts, based on those scenarios, how to respond to current road conditions. This makes similarity between the training data and the real world paramount. If the car experiences something in real life that has not been covered in training, how the car will react is anyone’s guess.
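To see why, consider a toy sketch in Python. It is not how any production system works: a simple nearest-neighbour rule stands in for a learned driving policy, and every scenario, label and number below is hypothetical. What it illustrates is the point above. Inside the training data, the model behaves sensibly; outside it, it still returns an answer, with no warning that it is guessing.

import numpy as np

# Hypothetical training "scenarios": (distance to obstacle in metres,
# obstacle speed in metres per second), each labelled with an action.
training_scenarios = np.array([
    [50.0, 0.0],   # stationary car far ahead
    [10.0, 0.0],   # stationary car close ahead
    [30.0, 1.5],   # pedestrian crossing slowly
])
training_actions = ["coast", "brake", "brake"]

def predict_action(scenario):
    # Nearest-neighbour stand-in for a learned policy: respond the way
    # the most similar training scenario was labelled.
    distances = np.linalg.norm(training_scenarios - scenario, axis=1)
    return training_actions[int(np.argmin(distances))]

# Close to the training data, the sensible answer comes back.
print(predict_action(np.array([12.0, 0.0])))    # -> "brake"

# Unlike anything in training (a fast-moving obstacle far away), the model
# still answers confidently -- here "coast" -- with no signal that it is
# merely extrapolating.
print(predict_action(np.array([200.0, 20.0])))  # -> "coast"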

Driverless cars are also complex – so complex in fact that they require tens of millions of lines of software code. More code means more functions for consumers, such as collision warning, voice control and live traffic updates. But it also means more opportunity for systems to interact in unexpected ways – ways that affect passenger safety. In these cases, more code becomes a liability – not an asset.

This makes the oversight of self-driving technology imperative. Someone must be able to act if automation behaves inappropriately or if it simply fails. These instances may be rare.

Waymo, Google’s self-driving car company, covered over 350,000 miles autonomously in California last year. Human intervention was required a mere 63 times. That’s impressive considering how complex a task driving is. But intervention was still required. A human driver had to intervene 63 times to keep the vehicle safe. Could the same outcome be guaranteed if human oversight was absent?
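For scale, a quick back-of-the-envelope calculation using the figures quoted above (Waymo’s official California filing reports slightly different exact totals) gives the implied rate:

miles_driven = 350_000   # approximate total quoted above
interventions = 63       # interventions quoted above

print(f"One intervention roughly every {miles_driven / interventions:,.0f} miles")
# -> One intervention roughly every 5,556 miles

Thousands of miles between interventions is a strong record. But it is a record achieved with a human in the loop.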

Few people, if any, would say yes.

Automation means human involvement may dwindle, but it will not disappear. Society will not allow it. Nor should we. While self-driving technology has improved over the years, it is, like all technology, imperfect. Huang and Herzberg’s deaths are the most extreme examples of this. But there are other, more “moderate” examples of the technology struggling with mundane tasks like detecting stop signs, cyclists and pedestrians on the road.

The word “driverless” has increasingly become synonymous with “humanless”. Society will apparently be better off when robots whisk us around. This may be true. But this vision demands a guarantee of technological perfection – a guarantee car manufacturers are yet to give.

Until that happens, expect human drivers to stick around.

Ashley Nunes is a research scientist at the Massachusetts Institute of Technology
