Facial recognition technology, once futuristic, is increasingly being incorporated into our everyday lives. Both governmental and private-sector organisations are integrating Facial Recognition (FR) into products and services to create significant benefits for consumers. Recently, the National Crime Records Bureau (NCRB) released a Request for Proposal (RFP) to procure an ambitious National Automated Facial Recognition System (AFRS).
Companies are using FR to make it easier for users to organise their photos or tag them by name. Though FR offers numerous potential benefits, it is important that these entities use the technology responsibly, respecting and protecting consumers' privacy. Technology and legal experts urge that governmental agencies take the lead here, since private players remain focused on improving product margins.
So what is Facial Recognition?
In principle, any FR system is expected to capture a photo or video of a face (looking straight ahead or in profile) of an individual, alone or in a group, so as to allow specialised software to undertake facial-geometry analysis based on factors like the distance between the eyes, the distance from forehead to chin, etc. A facial profile is thus translated into a facial signature of a person in the form of a mathematical representation. “This facial signature is finally compared with an existing known facial dataset. This process is known as 1:n matching (where n equals the number of face signatures in a database) and results in a ‘match’ or ‘no-match’ decision when the face print captured matches any image in the system database”, explains Artificial Intelligence and C4I expert, Milind Kulshreshtha.
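The 1:n matching process described above can be sketched in code. The sketch below is purely illustrative: real FR systems derive facial signatures from facial geometry via a trained neural network, whereas here the signatures are made-up number lists compared by cosine similarity, and the threshold value is an assumption for demonstration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two facial-signature vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_to_n_match(probe, database, threshold=0.6):
    """1:n matching: score one captured signature (probe) against all
    n enrolled signatures; return the index of the best match, or
    None for a 'no-match' decision."""
    scores = [cosine(probe, signature) for signature in database]
    best = max(range(len(scores)), key=scores.__getitem__)
    # 'match' only if the best score clears the decision threshold
    return best if scores[best] >= threshold else None

# Toy database of n = 3 hypothetical facial signatures
db = [[0.9, 0.1, 0.4],
      [0.2, 0.8, 0.5],
      [0.5, 0.5, 0.7]]
probe = [0.88, 0.12, 0.41]            # closely resembles the first entry
print(one_to_n_match(probe, db))      # → 0 (a 'match' against entry 0)
```

The decision threshold is the key tuning knob: set too low, it declares spurious matches; set too high, it misses genuine ones.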
On the AFRS RFP, Kulshreshtha says, “The term ‘Artificial Intelligence’ in AFRS RFP is only implied and has an optional reference for implementation using Neural Network (RFP Para 2.3 Serial 4 refers). Today, the popularity and success of FR is by default AI-based and ‘training’ of Artificial Neural Network is a mandatory technical requirement for a successful system. The ISO 19794-5/ICAO requirements in RFP for online identity verification pertaining to matching a selfie to a low-quality photo on a passport may not completely have relevance for AI-based AFRS.”
FR accuracy can vary significantly with factors like profile angle, gender, camera quality, lighting, distance, database size and algorithm. The FR software itself can be flawed, producing high ‘false positive’ error rates. People’s physical characteristics also change over time: they may grow a beard, gain weight, start wearing spectacles or lose their hair. “Thus, when FR technology is used by law enforcement agencies, the FR system may not give a definitive match, and the chances of an innocent person being mistakenly interrogated or tracked may lead to harassment, humiliation etc”, he observes.
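The ‘false positive’ risk the expert describes can be made concrete: loosening the match threshold makes the system more willing to declare a match, which inflates wrong matches against innocent people. A minimal sketch, using made-up similarity scores rather than real FR output:

```python
# Hypothetical similarity scores an FR engine might produce when
# comparing a probe face against people who are NOT the subject.
impostor_scores = [0.30, 0.45, 0.55, 0.62, 0.71]

def false_positives(scores, threshold):
    # Every non-subject score at or above the threshold is a wrong 'match'
    return sum(score >= threshold for score in scores)

# A strict threshold avoids wrong matches; a loose one multiplies them
print(false_positives(impostor_scores, 0.75))  # → 0
print(false_positives(impostor_scores, 0.50))  # → 3
```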
Similarly, a wrong match is a possibility with the planned AFRS, at least until a well-trained FR engine emerges, which may take years. The stage at which the FR neural network is considered ‘trained’ is an issue that also needs to be addressed explicitly as a system parameter before public use, the AI and C4I expert adds.
According to him, “Cloud computing and high-end GPU servers within India for neural-network data storage and processing are going to be the backbone of the AFRS infrastructure. Due to data-security issues, this has to be compulsorily set up in India, since a governmental agency like NCRB cannot be sending and storing Indian citizens’ data sets on a foreign server (like AWS, Azure etc.).”
Every RFP is followed by a techno-commercial offer based on the technology required; it is therefore always a good option to define the technology (like AI) and data-safety parameters so as to avoid budget overruns during the implementation phase. “It is well known that while a 90% accuracy rate may sound good in general, it is an unacceptable risk in AI-based systems, especially when the end result pertains to legal work. Hence, critical systems like AFRS may first be evolved as an R&D tool before NCRB is handed a well-trained, tested and approved system for operations,” he opines.