Auto(nomous) Accident

Google’s self-driving car ramming into a bus proves programmes so far are just as fallible as people

Updated: March 2, 2016 12:34 AM

One of Google’s autonomous vehicles (AVs)—self-driving cars—crashed into a bus in California. This was not the first time a Google AV was involved in an accident, but it was the first accident caused by one. AVs have been involved in only a handful of crashes so far—most of them minor “fender benders”—and each time, other road users were at fault. The company accepting “partial” responsibility—it claims the human user of the AV, too, underestimated the likelihood of the crash and did not override the self-drive mode—is bound to stoke public anxiety over how successfully machine-learning can erase the anomalies of human reflexes in chaotic traffic.

Google, according to a report by the BBC, is now busy teaching its AVs that bigger vehicles, like buses, are less likely to yield to smaller ones. But how we view machine-learning and artificial intelligence (AI) will depend a lot on how the California department of motor vehicles treats this particular case. The US National Highway Traffic Safety Administration had only recently told Google that it would likely give the self-driving computer the same legal treatment as a human driver—something the company viewed as a breakthrough for the entire AI universe. But the AV-maker admitting only “partial” responsibility on the machine’s part—implying a human user is ultimately responsible—will perhaps set off a discussion on how AI remains just as fallible as human intelligence.
