When it comes to self-driving cars, a common question comes to mind: how safe are they? Who will save humans from runaway autonomous cars? But on the flip side, there is another question: how will we save these autonomous cars from human errors? In one of the worst accidents in which Google's self-driving vehicles have been involved, a Lexus embedded with Google's technology was hit by another vehicle that ran a red light. The incident happened in Mountain View, California. There were reportedly no injuries. The other car was an Interstate Batteries van.
Reportedly, the van was at fault. Google released a statement saying that the traffic light had been green for at least six seconds before the company's car entered the intersection. This comes in the aftermath of several accidents involving Tesla cars running in Autopilot mode, one of which resulted in a fatality. Those accidents indicate that the detection systems in such vehicles are still not robust enough to understand and respond to their environment. But there is little Google's software could have done when another car runs a red light.
Accidents like these bring various kinds of problems to the fore. No matter how far companies push the technology, it remains an undeniable fact that driverless cars will still be sharing the road with flawed human drivers. So the moot question is: whose responsibility is it? Is it the human's fault for making flawed judgements? Or is it the machine's, for failing to anticipate and avoid human error?