Although gadgets were driving technology adoption even before the lockdown, smart technologies have been brought into focus by the pandemic. More companies rely on artificial intelligence and machine learning algorithms to improve efficiency and keep workspaces functional. Increasingly, traditional gadgets are being paired with new algorithms to find workable solutions.
Warehouses run by Amazon have started using smart technologies to alert authorities whenever people breach social distancing guidelines. For this, Amazon uses artificial intelligence to draw a six-foot radius around every employee; once an employee breaches that perimeter, the system alerts them to maintain social distancing.
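Amazon has not published how its system works, but the core check it describes, flagging any pair of people closer than six feet, can be sketched in a few lines. The function name and the assumption that detections have already been mapped to floor coordinates in feet are illustrative, not Amazon's actual pipeline.

```python
import math

SIX_FEET = 6.0  # distancing threshold, in feet

def distancing_breaches(positions):
    """Return index pairs of people standing closer than six feet.

    `positions` is a list of (x, y) floor coordinates in feet, as a
    camera pipeline might estimate after detecting each person.
    """
    breaches = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) < SIX_FEET:
                breaches.append((i, j))
    return breaches

# Persons 0 and 1 are five feet apart (a 3-4-5 triangle); person 2 is far away.
print(distancing_breaches([(0, 0), (4, 3), (20, 20)]))  # [(0, 1)]
```

A production system would run this on every video frame and debounce the alert so a momentary pass-by is not flagged.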
Other companies are using tags to create six-foot boundaries and decrease the chance of catching the infection.
When we think about such innovations, Western countries usually come to mind, but India has had a spate of meaningful innovations over the last few years. Start-ups like Niramai and Staqu have worked on developing temperature screening devices, some of which are installed at airports so that authorities can monitor swathes of people to detect potential Covid-19 cases.
However, such screening also raises privacy concerns. How far should technologies be allowed to permeate our lives? Is it ethical for employers to track every movement of their employees? In some cases, technology has proven a boon for maintaining health, security and safety standards. But all of this is so new that its larger implications are yet to be determined.
In the US, technology companies are wary of partnering with police. They have found an inherent bias in how the technology is used: algorithms, in some cases, are more likely to flag African-Americans as perpetrators than Caucasians. Companies like IBM have stopped offering their facial recognition programmes for government use.
India has also been in the midst of such controversy. Last year, the Union home minister told Parliament that the government was using government IDs, like driving licences, voter ID cards and passports, to identify perpetrators in the Delhi riots. Given the scale of destruction, some might say it was warranted, but what if the government starts using such means to quell even peaceful protests?
However, that is just one end of the spectrum. On the other end are companies like Staqu, which has been helping police solve crimes with its well-known AI platform, Jarvis. Atul Rai, CEO and co-founder, explains that the company started its operations in nine districts and has since expanded. It has aided the UP police and the Punjab police, which earlier this year bagged the Crime and Criminal Tracking Network and Systems (CCTNS) award for developing smarter systems. The company has digitised criminal records, so whenever a crime is committed, the police can use its database to identify criminals from camera footage.
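Staqu has not disclosed Jarvis's internals, but matching a face from camera footage against a digitised records database is typically done by comparing embedding vectors. The sketch below uses cosine similarity with a hypothetical threshold and made-up record IDs; it illustrates the general technique, not Staqu's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, records, threshold=0.9):
    """Return the record ID whose stored embedding best matches `query`,
    or None if no candidate clears the similarity threshold."""
    best_id, best_score = None, threshold
    for record_id, embedding in records.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id

# Toy database: record ID -> face embedding (real systems use ~512 dimensions).
records = {"record_17": [0.9, 0.1, 0.4], "record_42": [0.1, 0.9, 0.2]}
print(best_match([0.88, 0.12, 0.41], records))  # record_17
```

The threshold is what separates "possible match for human review" from "no match", which is also where the false-positive concerns discussed above arise.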
Rai says the software's identification accuracy is impressive: 99.7% for person detection and 95% for activity detection.
Senior IPS officer SK Bhagat of the UP police, who is currently IG Vigilance and was involved in Staqu's integration as IG Crime, explains how the process has developed. He says that while the police earlier used photographs of photographs, with low accuracy, it eventually developed systems where each police person could click photographs and upload them to the portal. He adds, however, that the process was foolproof, as verification would be completed at the district bureau level. Rai says each upload has to pass three levels of checks before it is finally added to the system.
Nilabh Kishore, another senior IPS officer, currently an IG in Punjab, details the thorough step-wise procedure while recounting stories of how Staqu has been used to arrest criminals. He explains how the app is downloaded on the smartphones of all police personnel so that they can easily access its features.
However, it is difficult to determine the quantum of requests received and rejected at the district bureau. What both officers allude to is the concern for data safety and privacy.
Staqu’s contribution extends beyond helping police nab criminals. Its technology is used by residential complexes, housing societies and businesses to make monitoring easier. “It is not easy for one person to monitor 100 screens and determine what is happening; this is where our AI steps in. Say, if a car is not allowed on a society’s premises, our AI will detect this and immediately flag it to the security company. Similarly, if an unknown vehicle enters a manufacturing company’s premises, our system can detect that easily,” Rai illustrates.
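The flagging step Rai describes reduces, after number-plate recognition, to checking detections against an allow-list. The function and plate numbers below are hypothetical, a minimal sketch of that final check rather than Staqu's actual system.

```python
def flag_vehicles(detected_plates, allowed_plates):
    """Return number plates seen on camera that are not on the
    premises' allow-list, preserving detection order."""
    return [plate for plate in detected_plates if plate not in allowed_plates]

allowed = {"DL01AB1234", "UP16CD5678"}  # residents' registered vehicles
print(flag_vehicles(["DL01AB1234", "HR26EF9012"], allowed))  # ['HR26EF9012']
```

The hard part in practice is the recognition upstream; once plates are text, the alerting logic itself is this simple.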
The company has four modules: security, safety, Covid-19 and visual analytics. In the security module, the company provides theft protection: if anyone tries to break a lock or fiddle with it, the system immediately raises an alarm. In safety, Rai says, Staqu caters to restaurants, checking whether food has been prepared as per standards and whether people are washing their hands regularly. It can also detect whether people are wearing gloves or a mask. Here, Staqu creates a personal identifier for each employee, say a different coloured cap or apron, and keeps track of how many times a person has washed her hands. It can also detect if a person is working at a station without a mask.
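Once each employee has a visual identifier, the hand-washing check amounts to counting detected events per identifier and flagging shortfalls. The sketch below assumes a hypothetical per-shift policy of four washes and uses cap colour as the identifier, as in Rai's example; the event format is invented for illustration.

```python
from collections import Counter

REQUIRED_WASHES = 4  # hypothetical per-shift hygiene policy

def flag_low_wash_counts(events, staff):
    """Tally detected hand-wash events per employee identifier
    (e.g. cap colour) and return identifiers below the required count."""
    counts = Counter(employee for employee, event in events if event == "hand_wash")
    return sorted(s for s in staff if counts[s] < REQUIRED_WASHES)

events = [("red_cap", "hand_wash")] * 4 + [("blue_cap", "hand_wash")] * 2
print(flag_low_wash_counts(events, ["red_cap", "blue_cap"]))  # ['blue_cap']
```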
The company is now launching new products in the market. “We are also doing audio analysis now, which entails person recognition. And, we are turning Jarvis into a talking assistant as well,” Rai says. So, instead of manually checking how many people are wearing a mask, a business can simply put the question to Jarvis, the way one does with Alexa or Siri.
The only issue with these new technologies is data privacy. What if the service leaks data? And how much data-sharing should be allowed between businesses and the company providing such services?
Thus, the government will have to devise mechanisms that make it easier for users to track their data. The account aggregator model, to which the RBI recently gave its nod, is one approach that can be adopted. Under it, companies cannot use data without user approval; a company has to specify why it wants the data and in how many days it will delete it from its servers.
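The consent rule described above, no data use without an approval that records purpose and a deletion deadline, can be modelled directly in code. This is a simplified illustration of the principle, not the RBI's actual account aggregator specification; the class and field names are invented.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Consent:
    user: str
    purpose: str            # why the company wants the data
    granted_on: date
    delete_after_days: int  # deadline by which the data must be purged

def can_access(consent, user, today):
    """Data may be used only under an explicit consent from the right
    user, and only until the declared deletion deadline has passed."""
    if consent is None or consent.user != user:
        return False
    deadline = consent.granted_on + timedelta(days=consent.delete_after_days)
    return today <= deadline

consent = Consent("user_a", "loan eligibility check", date(2021, 1, 1), 30)
print(can_access(consent, "user_a", date(2021, 1, 15)))  # True: within window
print(can_access(consent, "user_a", date(2021, 3, 1)))   # False: past deadline
```

Making the purpose and deadline part of the consent record is what lets a user, or a regulator, audit whether data was held longer than promised.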
In instances of heightened surveillance and facial recognition, such a model assumes greater importance. The government, too, should be required to seek users' approval before using their data.
While Staqu assures complete privacy, and senior officials also swear by it, other players will have to pivot their models once the new data laws come into force.