Artificial Intelligence tools can read and interpret data in ways that influence humans and nudge them towards actions that may be beneficial or harmful to humankind. This makes the role of ethics in designing AI solutions important. Autonomous vehicles, for instance, could reduce accidents caused by human error; autonomous weapons, however, could unleash wars and killings on a scale far beyond what humans would have desired. Bias in decision making, intended or unintended manipulation, misuse of the insights derived, intrusion into private lives, surveillance practices, copyright issues, the likelihood of security breaches and the lack of transparency in how AI models are built are some of the resulting dilemmas.
Is AI making people, businesses and countries with access to it richer while those without grow poorer? Can humans remain in control of an AI system once it has absorbed all of humanity's intelligence? The ethical concern around AI is not just a moral dilemma; it centres on the social responsibility of a business towards its customers, employees and society.
An example of AI bringing disrepute to a business is Amazon's decision to scrap its AI-driven recruitment tool after it was found to be biased against women. Cambridge Analytica had to fold its business over the scandal around influencing voters in the US election with personalised content. Organisations are beginning to recognise the loss of customer faith and trust that irresponsible AI solutions could cause. Examples of social media's influence on elections have stirred several governments to act against such companies or to put stringent controls in place. These steps offer only partial protection, and the fundamental questions about influence on seemingly insignificant matters remain unanswered. Left unaddressed, such influence could transform the value systems and cultures of communities and the world.
Hence the design of AI systems needs to treat ethics as a core element of the solution, and it is essential to draw up a code of conduct for designing them. The key tenet of Asimov's laws of robotics is that automated systems should not harm humans, nor, by refusing to act, allow harm to come to humans. Complex algorithms and correlations derived from huge datasets make it difficult to establish even the origins and building blocks of AI models. It is therefore important to ensure that AI systems record the detailed steps involved in the development process and the types of data used to train the models.
Since most AI models are trained on publicly available data sets, they are likely to absorb society's hidden biases. Data sets that are inclusive should therefore be used to build the model, and people from diverse backgrounds and cultures should be part of the team building the AI solution. Companies should also reconsider their dependence on AI tools for hiring and keep the process human-driven. AI tools may not be able to account for gender sensitivity or diversity, factors that could be important, for example, in a hospital treating patients.
Another important dimension concerns the environment. AI tools require huge cloud infrastructure and considerable electricity, which means a greater environmental impact. At the same time, with smart use of AI systems, directed at real problems such as reducing carbon footprints and power requirements, the world could become a better place to live in.
In conclusion, awareness of social responsibility and long-term reputational risk must be factored into the design of AI systems, along with constant reviews to identify and correct pitfalls.
The writer is executive chairperson, Global Talent Track, a corporate training solutions company