By Ameen Jauhar
In February 2020, the EU published a whitepaper and commenced its ambitious journey towards the world's first comprehensive legislation regulating artificial intelligence (AI). Last week, in a historic session, the European Parliament voted to adopt the text of the proposed EU AI Act. The legislation takes a risk-based approach to AI regulation, categorising actual or potential harms into four classes: unacceptable risk (prohibited systems), high risk (strictly regulated systems), limited risk (mainly requiring transparency from developers and deployers), and minimal risk (unregulated).
Interestingly, a key technology that has dogged the debates around the EU AI Act is facial recognition technology (FRT). Initially, the EU considered a five-year moratorium on the deployment of FRT systems so that adequate regulation could be devised. This stemmed from mounting evidence of biased, discriminatory, and inaccurate outcomes, as well as broader concerns around mass surveillance associated with FRT. However, the whitepaper tempered this to strict regulation of FRT rather than an outright ban or moratorium. This language was carried into the initial text of the EU AI Act tabled in the European Parliament in 2021, which classified FRT, especially its use by law enforcement, as high-risk. Though deemed a compromise between permitting exceptional, necessary use and subjecting that use to strict compliance mandates, this position was significantly altered in the text adopted on June 14. Real-time biometric surveillance is now prohibited in any public space: an unprecedented partial ban on FRT, unheard of in any other jurisdiction.
The adoption of the compromise text and the initiation of the trilogue between the three arms of the EU come at a time when India is also formulating substantive legislation for tech regulation. The minister of state for electronics and IT, Rajeev Chandrasekhar, has repeatedly insisted that an omnibus legislation, the Digital India Act (DIA), will overhaul the archaic provisions of the existing Information Technology Act. The DIA will, inter alia, introduce principle-based regulation of AI systems in India, aiming to balance innovation and India's lucrative start-up ecosystem with the liberties and rights of digital nagriks (citizens).
To accomplish this balancing act, it is pertinent to ask how the DIA is likely to deal with more contentious technologies like FRT. FRT is already ubiquitous in the Indian public sector, from airport check-ins and transit to the disbursal of pensions and welfare services. Its most contentious use is in law enforcement and intelligence gathering, a concern that resonates with debates around biometric surveillance globally. A significant body of literature, including a discussion paper by NITI Aayog, lists the myriad concerns with India's current use of FRT systems: it is opaque, unregulated, and implemented solely by executive fiat.
As in the EU, North America, and elsewhere, there have been vociferous calls to stop the development and deployment of FRT systems in India. A key example is an ongoing PIL before the Telangana High Court, challenging the local police's use of FRT during the pandemic and seeking a writ against its unregulated use. The petitioner raised legitimate concerns around the legality of FRT's use. It follows that it is important to determine whether FRT can even be deployed in India, and if so, under what circumstances.
While an EU-style ban cannot theoretically be ruled out, it is unlikely. In fairness, two things must be considered. First, the fundamental rights to privacy and to freedom of speech and expression guaranteed under the Indian Constitution are not absolute. In the Puttaswamy judgment, the majority of the bench stipulated as much, confirming that in exceptional and exigent circumstances the state can impinge on individual privacy. Second, not all FRT applications pose the same risk. For instance, DigiYatra, piloted by the civil aviation ministry last December at a select few airports, is presently a voluntary service that affords paperless transit through airports for domestic and international passengers. The programme hinges on this opt-in subscription and offers alternatives for passengers who do not want to use the app.
The problem, however, emerges with non-voluntary FRT systems, which are presently deployed in over 25 states and five Union territories across India. Tested against Puttaswamy's three-fold requirements of legality, legitimate aim, and proportionality, such deployment may well pursue a legitimate state interest (say, preserving law and order), but it lacks legality (it is not backed by any statute) and is arguably disproportionate, given the absence of checks and balances to ensure risk mitigation.
Over the last year, several commentators, including me, have argued that India should chart its own course of legislation and regulation for AI systems, one aligned with our constitutional norms and legal system and mindful of our sociopolitical and economic realities. Regulating the use of FRT systems can prove an excellent opportunity for the Indian government to demonstrate its proclaimed objective of balancing innovation with individual rights and liberties. Such regulation must comply with our constitutional standards (per Puttaswamy) and implement the responsible AI principles set out by NITI Aayog. Failing to do so will reinforce the anaemic approach to tech policy and regulation that has, unfortunately, long been a hallmark of Indian lawmaking and policy processes.
The writer is a senior resident fellow, applied law and tech research, Vidhi Centre for Legal Policy. Views are personal