By Munir Mohammed & V Sridhar
Identification is a method for reliably connecting information to individuals. In the US, the Social Security Card and the associated number have long been used to identify an individual for tax, social security and other purposes. Identification enables one to verify identity, i.e. that the person accessing her records is indeed the owner of the account or the subject of the records. If the identification method can be tied to a “flesh and blood” human being through biometric information, it has serious privacy implications. Digital ID projects, conceptualised back in the 1980s in the UK and the US, never took off because of their privacy ramifications. In India, however, Aadhaar tags digitally unique IDs (UIDs) of individuals to their biometric information. This sparked a debate on identification and the associated privacy issues, culminating in a Supreme Court judgment (Puttaswamy vs Union of India, 2018) and the drafting of the data protection Bill.
Governments proclaim the advantages of digital ID projects: improving the efficiency of distributing government financial subsidies under various schemes, including the needy, and eliminating fraud through de-duplication. Studies, however, show that such national ID projects have shortcomings of their own, namely the exclusion of those in need due to (1) technical failure of digital systems, (2) administrative failure in field-level implementation, (3) governance failure due to political ideologies, (4) privacy intrusions at various levels, and (5) weak enforcement of laws and regulations. Privacy advocates argue that identification demeans individuals by reducing them to a number, as with the Social Security Number or the Aadhaar number, or to bodily characteristics when biometric information is used.
Apart from national IDs, identification in some form underpins the provision of goods and services in the digital economy. Internet, telecom and digital platform firms collect, process, disseminate and monetise user data, most often with users implicitly, and often uninformedly, accepting cryptic consent terms. Identifying an individual’s usage patterns enables first-degree price discrimination, in which each person is treated as a “market” of one. E-commerce firms collect browsing behaviour to provide personalised recommendations, ads and coupons. This data is also used for purposes other than those intended, such as sale to third parties without users’ explicit consent. That leads to a breach of trust, as in the Facebook-Cambridge Analytica scandal.
Further, AI and ML algorithms feed on big data, now abundantly available thanks to digitisation, as raw material for learning and decision-making. This has prompted debate around the ethical harvesting of personal data; building secure and safe systems that protect human dignity; data quality; and the design of privacy-preserving, ethically aligned digital systems. Globally, regulators have been lenient on tech companies so as to nurture innovation and entrepreneurship. But armed with evidence of some digital firms abusing their market power in the use of personal data, regulators have started enacting policies that curb its inappropriate use.
The Institute of Electrical and Electronics Engineers (IEEE) has a global initiative to examine the ethical aspects of intelligent and autonomous systems. It has resulted in a document called “Ethically Aligned Design”, with perspectives from over a hundred experts. The IEEE Standards Association has identified key areas for standardisation, leading to the IEEE P7000 series of standards and a certification process for such systems.
As digital technologies pervade our lives, effective dialogue is needed among academia, researchers, industry, government and civil society to promote global initiatives and build concurrence on the complex issues discussed in this article.
Mohammed is with IEEE India and Sridhar is professor, IIIT Bangalore