Why cautious adoption of Artificial Intelligence is unrealistic; even Elon Musk, Satya Nadella have opposite views

Since no one really knows the outcome of artificial intelligence (AI), and even tech moguls like Elon Musk and Satya Nadella have completely disparate views, it is not surprising that everyone wants to hedge their bets.

Published: November 3, 2017 5:50 AM
(Image: Reuters)

Since no one really knows the outcome of artificial intelligence (AI), and even tech moguls like Elon Musk and Satya Nadella have completely disparate views, it is not surprising that everyone wants to hedge their bets. The Financial Stability Board (FSB), which coordinates financial regulation across G-20 nations, has called for cautious adoption of AI and machine learning in banking and insurance. Apart from the possibility of a massive loss of jobs, the FSB may fear that the AI invasion will create something it cannot understand, and therefore may not be in a position to regulate. Quite apart from the fact that some form of AI is already being used (what else is algo-trading?), the be-cautious approach doesn't factor in how businesses work. Businesses are built on exploiting inefficiencies in the current system and, if all goes according to plan, AI has the best chance of being able to do this. Algo-trading, for instance, is able to spot, and exploit, arbitrage opportunities that human beings can't.
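To make that last point concrete, here is a minimal, purely illustrative sketch of the kind of cross-venue spread check an algo-trading system might run far faster than any human trader; the venue names, quotes and fee rate are all invented for the example and do not describe any real system.

```python
# Toy sketch only: checks whether buying at the cheapest ask on one venue and
# selling at the richest bid on another is still profitable after fees.
# All names, prices and the fee assumption are hypothetical.

def find_arbitrage(bid_by_venue, ask_by_venue, fee_rate=0.0001):
    """Return (buy_venue, sell_venue, net_edge) if a profitable spread exists, else None."""
    buy_venue = min(ask_by_venue, key=ask_by_venue.get)    # cheapest place to buy
    sell_venue = max(bid_by_venue, key=bid_by_venue.get)   # richest place to sell
    buy_cost = ask_by_venue[buy_venue] * (1 + fee_rate)
    sell_proceeds = bid_by_venue[sell_venue] * (1 - fee_rate)
    edge = sell_proceeds - buy_cost
    return (buy_venue, sell_venue, edge) if edge > 0 else None

if __name__ == "__main__":
    asks = {"venue_a": 100.02, "venue_b": 100.17}  # hypothetical quotes
    bids = {"venue_a": 100.00, "venue_b": 100.15}
    print(find_arbitrage(bids, asks))  # ('venue_a', 'venue_b', ~0.11)
```

A human can do this arithmetic too; the point is that software does it continuously, across every instrument and venue, which is where the advantage the column describes comes from.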

If traders feel, for instance, that deep learning allows AI to process more information than human beings can, the person or firm that does this first, and best, will be the winner. If bankers feel AI can better identify the people and companies they should be lending to, they will try it. In a situation where everyone is going to be working on AI-based solutions, the FSB needs to understand where AI is going, identify possible pitfall areas and scenarios, and design regulation to take care of these, including working on early warning signals for potential problems. Asking firms to be cautious in their adoption of AI is akin to the FSB burying its head in the sand, since firms are going to adopt AI if it offers an advantage. A rough illustration of the lending idea follows below.
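As a purely illustrative sketch of that lending idea, the toy model below scores made-up applicant data with an off-the-shelf classifier; the feature names, numbers and library choice (scikit-learn) are assumptions for the example, not a description of what any bank actually runs.

```python
# Toy sketch only: a tiny default-risk model on invented applicant features,
# showing the kind of pattern-scoring lenders might automate.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: annual income, existing debt, years of credit history (all made up)
X = np.array([[12, 2, 8], [5, 4, 1], [20, 1, 12], [7, 6, 2], [15, 3, 10], [4, 5, 1]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted in this toy dataset

model = LogisticRegression().fit(X, y)
applicant = np.array([[9, 2, 5]])
print("estimated default probability:", model.predict_proba(applicant)[0, 1])
```

A real system would use far richer data and stronger models, which is exactly why regulators would need to understand what such models are doing rather than simply asking firms to hold back.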
