The recent self-driving car mishaps have prompted genuine concerns about the safety of autonomous vehicles. But calls for banning the technology are misplaced.
Sunday, March 18, 2018, brought the focus sharply back to the safety of autonomous cars. The negative publicity blitz began when an Uber self-driving SUV, on one of its trial runs, struck a pedestrian who was walking her bicycle across the road in dark conditions at about 10 pm in Tempe, Arizona, US. The accident has many questioning the safety of self-driving technology.
Although there is no doubt the accident and the pedestrian's subsequent death are deeply unfortunate, a calm perspective is needed to understand what went wrong that night.
News reports have emerged alleging that Uber had disabled the collision-avoidance software installed in the self-driving Volvo vehicle, leading to the collision as the pedestrian crossed the road. Aptiv PLC, the company that manufactures the collision-avoidance technology, clarified that the Volvo XC90's standard advanced driver-assistance system "has nothing to do" with the autonomous driving system of Uber's test vehicle. Whatever the case, it is clear that the failure to have proper safety interlocks in place, a human lapse, is to blame at all levels.
Another important aspect remains the safety driver, Rafaela Vasquez, who should have acted as backup and stepped in the moment something seemed amiss. Unfortunately, human negligence was again the culprit, a fact corroborated by crucial video footage of the crash captured by the car's dashboard camera and released by Arizona state police. As the interior view indicates, Vasquez is looking down repeatedly, most likely distracted by her mobile phone. When she suddenly looks up, she is shocked to see the pedestrian right in front of the vehicle. By then it is too late: the SUV hits the pedestrian a split second later.
Ideally, the Uber's sensors should have detected the pedestrian immediately and mitigated the collision by braking or steering away instantly. If a glitch prevented the autonomous software from sighting the pedestrian or reacting, the human safety driver should have instantly seized control to mitigate the situation. Unfortunately, none of the above happened, since several basic safety interlocks in the system were apparently missing. The failure of the vehicle's sensors is evident, as it showed no signs of slowing down or stopping at all.
Robust systems required
In today's scenario, as driving technology transitions from human drivers to autonomous vehicles, it is imperative to have a sensor (Novus Aware) monitoring the human driver in the loop. It may be noted that Hi-Tech Robotic's Autonomous Driving Software (Novus Drive) takes inputs from both the external perception sensor (Novus Pilot) and the human sensor (Novus Aware), and then generates the behaviour the autonomous vehicle will exhibit, including critical safety elements.
Our Uber accident analysis
Hi-Tech Robotic Systemz used the low-resolution footage to run its deep-learning algorithm, which also powers its perception sensor Novus Pilot. This algorithm not only detected the pedestrian crossing the dark road, but did so 1.1 seconds before the moment of impact, something Uber's gamut of sensors could not capture at all. Since the vehicle was said to be moving at about 40 mph and did not slow down before impact, the detection distance works out to around 18.33 metres before the collision. This left enough time and distance for emergency braking or another evasive manoeuvre to be applied, either by the autonomous software or by the human safety driver.
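The detection-distance figure above is simple kinematics: distance equals speed multiplied by the detection lead time. A minimal sketch of the arithmetic, noting that the article's 18.33-metre figure is consistent with treating 40 mph as roughly 60 km/h (about 16.67 m/s):

```python
def detection_distance_m(speed_kmph: float, lead_time_s: float) -> float:
    """Distance (metres) the vehicle covers during the detection lead time."""
    speed_mps = speed_kmph * 1000 / 3600  # convert km/h to m/s
    return speed_mps * lead_time_s

# 40 mph rounded to ~60 km/h, with the pedestrian detected 1.1 s before impact
distance = detection_distance_m(60, 1.1)
print(round(distance, 2))  # → 18.33
```

At the exact conversion (40 mph ≈ 64.4 km/h) the distance would be closer to 19.7 metres; either way, the lead time is the quantity that matters for an evasive manoeuvre.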
When the video feed was tested with the Novus Aware software (a human sensor for autonomous vehicles) to check Rafaela Vasquez's inattentiveness and distraction score, it found she was inattentive for more than five seconds before the vehicle hit the pedestrian.
The typical human reaction time in such a scenario is 0.7 seconds. Since the Novus Aware software issues a warning within a second, that leaves a more than adequate margin of three-plus seconds for the safety driver to react. But this safety interlock was missing in Uber's autonomous vehicle, which is a serious concern.
Given the extremely dynamic driving scenarios in India, Hi-Tech Robotic has, I believe, been well placed to capture more edge and extreme cases than companies in the West. Consequently, it has been possible to train Hi-Tech Robotic's deep neural networks on more unusual but plausible settings, putting its sensors in a better position to handle them.
Moreover, in the Uber case, Vasquez had been inattentive repeatedly, just as in the Tesla Autopilot crash of 2016, when the autonomous vehicle's sensors were confounded by the side of a truck and missed it completely. In the resulting crash, the inattentive Tesla driver was unfortunately killed, paying a heavy price for his lapse. Both cases reiterate that human-monitoring sensors are indispensable to the safe introduction of autonomous vehicles into mainstream public life.
Finally, banning autonomous vehicles will not address the problem of high accident rates, because human drivers are more prone to rash, error-strewn driving, which triggers countless fatalities in India and worldwide. Robust autonomous technology therefore needs to be promoted to help minimise accidents on roads across the world.
Anuj Kapuria is visiting faculty, Carnegie Mellon University; co-chair of Robotics Society of India; core member of Taskforce for Artificial Intelligence; and founder & CEO, The Hi-Tech Robotic Systemz Ltd