
Driverless-car technology is evolving rapidly, with new innovations promising to make autonomous driving safer and more efficient. Researchers at MIT have recently developed a system that, by analysing maps and visual data, enables driverless vehicles to navigate routes in new and complex environments. The idea behind the system is to give such vehicles human-like reasoning capabilities.
The system can detect mismatches between its maps and actual road conditions, and correct the vehicle's course by determining whether its position, sensors, or mapping is at fault. It learns steering behaviour by observing the patterns of human drivers as they navigate the roads of a small area, using only the feed from a video camera together with a simple GPS map, according to the researchers.
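The article does not describe how the mismatch detection works internally; as a rough, purely illustrative sketch of the general idea — comparing what the map predicts against what the cameras observe and flagging disagreement — one could imagine something like the following. The curvature representation, tolerance, and function name are all assumptions, not details from the research.

```python
import numpy as np

def mismatch_flags(map_curvature, observed_curvature, tol=0.05):
    """Flag timesteps where the road curvature the map predicts and the
    curvature the cameras observe disagree beyond a tolerance.
    This is a crude stand-in for a map/perception consistency check,
    not the researchers' actual method."""
    map_curvature = np.asarray(map_curvature, dtype=float)
    observed_curvature = np.asarray(observed_curvature, dtype=float)
    return np.abs(map_curvature - observed_curvature) > tol

# e.g. the map says the road bends (0.2) but the cameras see it as straight (0.0):
flags = mismatch_flags([0.0, 0.2, 0.1], [0.01, 0.0, 0.12])
print(flags)  # only the second timestep disagrees
```

A flagged disagreement would then trigger the kind of diagnosis the article mentions: deciding whether the vehicle's position estimate, its sensors, or the map itself is wrong.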
Daniela Rus of the Massachusetts Institute of Technology (MIT) in the US said the objective behind the new system is an autonomous navigation system that can drive in new environments. Just like human drivers, driverless cars find it difficult to navigate unfamiliar roads. What human drivers do is match the area around them against what they see on the navigation screen to determine their current location and, from there, the route to their destination.
Alexander Amini from MIT says the system does not need to be trained on each road beforehand: one can simply download a new map for the vehicle to navigate areas it has never seen before. To train the system, a human operator first drove a driverless-capable Toyota Prius equipped with several cameras and a basic GPS navigation system. This allowed the system to collect a large amount of data from local suburban streets, covering a variety of road structures and obstacles.
When the system was then deployed autonomously, it successfully navigated the car along a preplanned path in a different, forested area designated for autonomous-vehicle tests. The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, it watches and learns how to steer from a human driver, according to a paper presented at the International Conference on Robotics and Automation in Montreal, Canada. The CNN correlates steering wheel rotations with the road curvatures it observes through its cameras and an inputted map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries, the researchers said.
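The paper's actual network is far larger and trained on real driving data; purely to illustrate the shape of the idea — a CNN that takes a camera frame plus a map patch and produces a distribution over steering commands — here is a minimal NumPy sketch with random, untrained weights. All sizes, names, and the three-command vocabulary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image x with kernel w."""
    kh, kw = w.shape
    h, wd = x.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative steering "commands" the network could choose between.
COMMANDS = ["straight", "left", "right"]

# Random weights stand in for what training on human driving would produce.
kernel = rng.standard_normal((3, 3))
head = rng.standard_normal((len(COMMANDS), 2))  # 2 pooled features -> 3 commands

def steering_distribution(camera, map_patch):
    """Toy forward pass: conv + ReLU on each input, global mean pool,
    then a linear head producing a probability over steering commands."""
    feats = []
    for img in (camera, map_patch):
        activated = np.maximum(conv2d(img, kernel), 0.0)  # conv + ReLU
        feats.append(activated.mean())                    # global mean pool
    logits = head @ np.array(feats)
    return softmax(logits)

camera = rng.random((32, 32))      # stand-in for a camera frame
map_patch = rng.random((32, 32))   # stand-in for a rendered map crop
p = steering_distribution(camera, map_patch)
print(COMMANDS[int(np.argmax(p))], p)
```

In the real system, training would adjust the weights so that the command with the highest probability matches the human driver's steering in each situation; this sketch only shows the input-to-distribution plumbing.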
Inputs: PTI