The recognition of Artificial Intelligence's (AI's) potential impact on business has driven a surge in demand for qualified talent, a demand that neither user industries nor software development firms can easily meet. This has produced an exciting new trend: efforts to give AI more teeth by building automated tools that support the AI engine itself, instead of depending entirely on humans to train and build such engines. Tech giants such as Microsoft, Amazon and Google, along with several start-ups, are racing to develop tools that reduce both cost and dependence on AI experts. These tools are expected to help companies take advantage of the enormous data at their disposal and embed significantly more AI into their systems without necessarily having to invest in building their own AI capabilities.
Currently, robots used in manufacturing or logistics are trained for specific tasks; they are not trained for complex tasks such as sorting, or to use curiosity to solve a problem. However, a robot recently developed at the University of California, Berkeley, was trained by viewing hundreds of purely digital objects and can now distinguish items it never saw in that digital data set. Learning from simulated models, machines are thus learning to apply that knowledge in the real workplace. The interesting approach common to all this research is that instead of trying to program the robots' behaviour directly, the focus is on setting up an environment in which the robots can find ways of learning to do things on their own. For now, the ability to learn on one's own remains the quintessential factor differentiating humans from AI tools.
While these attempts aim at machines emulating humans, developing the ability to learn rather than learning a specific task, so that they can handle a complex array of tasks, humans will be required to closely monitor how machines learn and slowly shift them from 'what to learn' to 'how to learn'. The focus, therefore, is increasingly on software built around neural networks, which enable machines to recognise patterns.
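The pattern recognition described above can be illustrated with a minimal sketch (an illustrative example, not a system from the article): a tiny neural network, written in plain Python with NumPy, that learns the XOR pattern from examples rather than from explicitly programmed rules.

```python
import numpy as np

# Training examples: inputs and the XOR pattern we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# One hidden layer of 8 units; weights start random, learning shapes them.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update: nudge weights to reduce the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Predicted XOR outputs after training.
predictions = (out > 0.5).astype(int).ravel()
print(predictions.tolist())
```

No rule for XOR is ever written into the code; the network arrives at it purely by adjusting weights to fit the examples, which is the sense in which such software "learns to recognise patterns".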
The coming together of neural networks and AI is being felt in the art and music scene as well. Google's Project Magenta aims to teach machines to create original music, sketches, videos and jokes, and thereby find new ways to communicate. Just as Facebook identifies faces in online photos or Android phones recognise voice commands, machines are now being taught to recognise musical notes, or the objects and shades in a sketch, and to produce new forms of music or pictures.
With users increasingly expressing concern about the use of their personal data, and companies too becoming protective of their data, the future development of AI will depend on broader distribution of, and access to, data and algorithms. This could mean applying blockchains so that AI networks can access the required datasets with no single stakeholder, including the platform provider, controlling the data or algorithms, taking AI to the next plane of possibilities. Yet even as AI becomes smarter and more impactful, as the celebrated historian Yuval Noah Harari observes, AI can be expected over time to develop higher levels of intelligence and solve more complex problems, but it cannot be expected to develop consciousness, i.e., the ability to feel things. Humans will therefore continue to play an important role, although their roles will keep getting redefined with every stride that AI takes.
- The writer is chairperson, Global Talent Track, a corporate training solutions company