Ethical AI seems to have only lately become an important research question for artificial intelligence (AI) developers. This shift has come about as AI deployment in the real world produced shocking unintended consequences because ethical challenges had not been anticipated. So, last year, the organisers of the Neural Information Processing Systems (NeurIPS) conference set up an ethics board to screen papers for potential biases. Companies are still having trouble navigating the complex terrain of ethical AI. Google, for instance, was recently flayed by its own employees and outsiders over its handling of two AI ethics researchers who had reportedly been facing pressure to censor research findings. This, when the company had had to apologise only last year after its Vision AI showed indications of bias, classifying a thermometer held by a dark-skinned hand as a gun while terming it a monocular when held by a light-skinned hand. In 2015, an algorithm used by Amazon for hiring favoured men over women. Researchers studying COMPAS—AI used by lower courts in the US to determine an offender’s chances of committing a crime—determined that it was more likely to find against an African-American defendant.
Some companies have taken a moral stand—IBM, for instance, won’t allow use of its AI for facial recognition in policing in the US—but many are lining up to claim the spaces vacated by such firms. Yandex, a Russian company, has gained notoriety for building an image-search database with little regard for privacy. Thus, ethical standards need to move beyond the purview of mere self-regulation to some form of government control. The US Algorithmic Accountability Bill, introduced in 2019, requires companies leveraging AI to correct biases in their algorithms, fixes liabilities and penalties on them, and sets bias-correction standards.
In India, the police have started using facial recognition technology (FRT), which uses elements of machine learning and AI. A report by the Internet Freedom Foundation’s Project Panoptic talks of 32 FRT systems being installed in the country at an outlay of ₹1,063 crore, even though, in 2018, the Delhi Police counsel had told the Delhi High Court that FRT’s success rate was a mere 2%. A year later, the ministry of women and child development pegged this at below 1% and said it could not even distinguish between a boy and a girl. Against this backdrop, NITI Aayog’s 2020 draft on Responsible AI can be a good start on ethical AI regulation. The draft recommends setting up an oversight body, borrowing from jurisdictions like the US, the UK and Singapore. While it states that self-regulation will be the best way forward, it recommends sector-specific regulation so that an insurance company and a police department are not subject to the same rules. India also must consider making data providers and companies deploying AI responsible for ensuring privacy and removing biases.