Artificial Intelligence (AI) has become a ubiquitous part of our daily lives. It is altering how humans interact with their environment, especially with machines. However, like every great discovery throughout history, AI comes with ethical considerations that must be addressed if it is to benefit society as a whole. We are still only scratching the surface of what AI can do and of its potential impact. For example, we recently saw a case of AI chatbots autonomously creating and communicating in a language of their own, unknown to their creators. When we are dealing with such intelligent systems, it becomes extremely important for us to understand what we are creating and to treat it with the utmost respect and caution.
Researchers, policymakers, business leaders, and academics must work together under a set of universal guiding principles to ensure that the development and use of AI-based technology are thoughtful and trusted by everyone to be in their best interest. We need to create solutions that reflect a strong commitment to ethical principles in order to encourage the widespread adoption of this technology. In my view, six core ethical principles should guide this work: fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability.
AI systems should treat everyone in a fair and balanced manner and should not discriminate between groups of people. Because these systems are designed by humans and learn from human-generated data, there is a real risk that our conscious or unconscious biases will be transferred to the machine. This will inevitably lead to all future operations conducted by the machine being biased. It is therefore imperative that we continue to develop analytical techniques to detect and address potential unfairness by systematically assessing the data used to train AI systems.
Accuracy is another major concern in building automated systems, and the involvement of domain experts in the design and operation of AI systems is crucial to ensuring it. Consider this example: an AI system designed to help decide whether to hospitalise patients with pneumonia "learned" that people with asthma have a lower rate of mortality from pneumonia than the general population. This was counter-intuitive, and while the correlation was accurate, the system did not recognise that the lower mortality rate was attributable to asthma patients receiving faster and more comprehensive care than other patients, precisely because they are at greater risk. Fortunately, the researchers noticed that the AI system had drawn a wrong inference and corrected it. This highlights the critical role that subject matter experts must play in observing and evaluating AI systems to ensure accuracy.
For any AI system to operate responsibly, the people who design it should be accountable for how it functions.
An ethics body should be set up within and across organisations to ensure that AI is being used responsibly. Governments, too, have an important role to play in promoting the responsible use of AI and should collaborate with private players. Strict, universally enforceable regulations will be key to ensuring that AI is used for good and not with malicious intent.
For the majority of us, AI systems are still new and unfamiliar. As with anything that humans do not fully understand, some of us approach this technology with a healthy amount of scepticism, and even fear of adopting it. Trust is an integral part of any collaboration, and it is up to us as technologists to design and operate within a clear set of parameters and to build robust feedback mechanisms for reporting performance issues. AI systems must comply with privacy laws to ensure transparency about the nature of the personal data being collected, how it is used, and where it is stored. These elements will foster trust and build people's confidence in using AI systems in various aspects of their lives.
We are still not in a position to foresee the future of AI with any certainty. We generally adapt to technologies that ease our lives, but in our constant endeavour to discover the new, we should always have a line that we will not cross. We must work within an ethical framework for AI in order to establish a human-centric approach that, in the long run, will positively impact society.
The writer is general manager, Artificial Intelligence & Research, Microsoft India.