Facial analysis tech can come only after a data privacy law
As India looks to expand the deployment of facial recognition technology (FRT) for law-keeping, it must take a cue or two from the global experience; a report by the Internet Freedom Foundation's Project Panoptic talks of 32 FRT systems being installed in the country, at an outlay of ₹1,063 crore. It must also put in place the legal framework to allow such technology to be used without overstepping its remit.

A European Commission white paper from earlier this year calls for a three- to five-year freeze on the deployment of technology such as FRT, over misuse concerns and possible technological shortcomings; though the European Data Protection Supervisor is yet to order any such action, heads of this regulatory office have publicly endorsed the position. The San Francisco city council announced a freeze on the technology last year because the human-generated data used to train it had made it biased against people of colour; an MIT study found that facial-analysis software had a high error rate, ranging from 21% to 47%, for darker-skinned women. Not just governments, even some of the largest conglomerates have been mulling a halt to their FRT operations. IBM announced in June this year that it would exit the facial recognition business completely. After vigorously defending its software last year, Amazon, as per a Wired report, said in October that it was imposing a "one-year moratorium" on police use of Rekognition.

While Union home minister Amit Shah has claimed that the Delhi Police was able to identify nearly 1,900 people involved in the Delhi riots using FRT along with driving licence, voter ID and other official data, the fact is that many problems need resolving before FRT and the like can be safely deployed for policing; in 2018, the Delhi Police counsel had told the Delhi High Court that the success rate of FRT was a mere 2%.
A year later, the ministry of women and child development pegged this at below 1% and said the technology could not even distinguish between a boy and a girl.
This is not to dismiss the promise such technologies hold for law-keeping; indeed, once FRT achieves high levels of accuracy, it can not only serve as a surgical crime-fighting tool, tapping into the Crime and Criminal Tracking Network and Systems database, but also be put to positive uses such as tracing missing persons. But, before that, to ensure that AI-aided public surveillance is not misused, the government has to bring in the required data/digital privacy protection laws. A Brookings paper recommends an independent body, or use of the court system, for approving FRT requests. However, the Justice Srikrishna committee report, as this newspaper has pointed out before, underscores the near-impossibility of such oversight; despite an anti-abuse procedure governing phone-tapping by authorities, the review committee has to examine nearly 15,000-18,000 interception orders at every meeting! The draft data protection law envisages alerts for users when someone tries to access their data from, say, an FRT database, but the government has been kept out of the purview of this provision. Without establishing trust in FRT, any attempt at such data gathering and analysis, especially in public places, will always meet criticism and challenge by litigation.