In 2015, it was reported that an algorithm used by Amazon for hiring favoured men over women. Since the algorithm was trained on résumé submissions from the preceding decade, which came predominantly from men, it automatically acquired a bias. An analysis published in ProPublica a year later showed that instances of bias were not limited to gender. Researchers studying COMPAS, an AI tool used by lower courts in the US to predict the likelihood of an offender committing another crime, found that it was biased against African American defendants. A study conducted in October last year showed similar problems: the healthcare algorithm it examined favoured providing more care to Caucasians than to African Americans. Such biases would seem inherent in AI, necessitating some form of ethical training.
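The mechanism behind such cases is straightforward: a model that simply generalises from historical outcomes will reproduce whatever skew those outcomes contain. The toy sketch below (entirely hypothetical, not Amazon's actual system; the figures are invented for illustration) shows a "model" that learns hiring rates from a male-dominated historical record and, as a result, scores the majority group higher.

```python
# Toy illustration of bias inherited from training data (hypothetical numbers).
# Historical records: (gender, was_hired). Past hires skew heavily male,
# mirroring the decade of male-dominated submissions described above.
history = (
    [("M", True)] * 80 + [("M", False)] * 50 +
    [("F", True)] * 20 + [("F", False)] * 50
)

def learned_hire_rate(gender):
    """P(hired | gender) as 'learned' purely from the historical record."""
    total = sum(1 for g, _ in history if g == gender)
    hired = sum(1 for g, h in history if g == gender and h)
    return hired / total

# The model generalises past outcomes - and so inherits their bias:
# men score roughly 0.62, women roughly 0.29, with no explicit rule
# about gender ever being written down.
print(round(learned_hire_rate("M"), 2))
print(round(learned_hire_rate("F"), 2))
```

No one coded a preference for men; the disparity emerges entirely from the data, which is why auditing training data matters as much as auditing the algorithm itself.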
On Tuesday, NITI Aayog, two years after it released a paper on AI, released a draft document for discussion on creating and enforcing Responsible AI mechanisms. While India has been gung-ho about the adoption of AI/ML, the discussion on ethical AI has been more or less muted; NITI Aayog’s paper may thus be a good starting point. While the draft recommends setting up an oversight body, taking examples from jurisdictions such as the US, the UK and Singapore, it also considers self-regulation the best way forward. More importantly, it seems aware that a ‘one-size-fits-all’ approach may not work, and so recommends sector-specific regulation alongside a blanket regulation for the use of AI. However, some issues may fall outside the domain of regulation, such as black-boxing, where even an AI system’s developers cannot fully explain its decisions; and unless India allows unrestricted research in the field, it will be difficult to prevent human biases from influencing AI products. Roping in ethicists and social researchers is certainly part of the solution, but the government must also consider that certain aspects may only be resolved by technological advances.