At the risk of sounding clichéd, we are living in interesting times.
Here are three disparate news pieces that will help join the dots. An unlikely tweet by the Assam Police kicked up a social media maelstrom recently: “Be mindful of what you share about your child on social media,” the message said. It was accompanied by pictures of four unsuspecting children, each overlaid with a message along the lines of “don’t trade their privacy for social media attention”, “children are not social media trophies” and “snapshots of innocence, stolen by the internet”.
The penny dropped. A quick Google search threw up some interesting statistics. It is estimated that by 2030, nearly two-thirds of identity-theft cases involving young people will have resulted from “sharenting”, the tendency among young parents to share photographs and video reels of their children on the social media platforms they inhabit. Various studies also show that the average five-year-old has already had about 1,500 pictures shared by their parents on random platforms, without the child knowing what hit them, let alone giving consent.
It is not that every parent who shares fun reels of their kids online sees their child as prime monetisable “content”. More likely, these parents never even considered the lurking threat: that children whose videos they post online could fall victim to identity theft or abuse at some point in the foreseeable future.
That’s what the Assam Police was drawing attention to: the potential hazards of over-sharing your life stories online. Not that the issue is peculiar to Indian parents. Experts say sharenting has seen a huge spike since the Covid-19 pandemic years; the Journal of Consumer Affairs notes that the pandemic hastened the phenomenon by forcing interactions to move online.
This episode sits on the problem side of the ledger.
Contrast that with another development, this one reported by the Uttar Pradesh Police. Last month, they said that advanced AI-based face recognition software had helped them nab “solvers”, who used unfair means during government-job examinations. Those arrested were dummy candidates who appeared in place of real candidates in various public examinations held across the state.
As anyone who cares would know by now, such rackets are not new, and over time these gangs have become extremely tech-savvy. These days they are known to use advanced image-editing tools to modify examinees’ admit cards in a manner that easily dupes invigilators and exam-centre guards. They use AI tools to merge the faces of a real examinee and his designated solver so that the photograph on the admit card appears identical to the solver’s. Only a pro can spot the manipulation. It is a deep-rooted racket the police had been trying to bust for years.
Time for some tit for tat. Things fell into place when the department got access to advanced AI-based facial recognition software. In effect, AI became one more weapon in their arsenal, over and above lathis and guns.
Now the third event. Media reports say Getty Images, the go-to destination for stock images, has filed a case against Stability AI, the creator of the open-source AI art generator Stable Diffusion, alleging that the company copied more than 10 million of its images to train its AI model “without permission… or compensation”. In effect, the stock photography company accused the startup of infringing both its copyright and its trademark protections.
It’s like this: AI art tools need photographs, artwork and pictures as training material, and they end up digging that material out of the web, often without the original creator’s knowledge or consent. The Getty lawsuit is the latest chapter in the ongoing legal tussle between creators of AI-based art generators on the one hand and original rights-holders on the other. Aaron Moss, a copyright lawyer at Greenberg Glusker, says the focus of Getty’s complaint is where it should be: the input stage, where copyrighted images are ingested to train the model. “This will be a fascinating fair use battle,” he tweeted.
A battle indeed, but not a losing one by any stretch of the imagination.
Coming back to where we started. A powerful commercial by Deutsche Telekom, created by the agency adam&eveBerlin, hits the nail on the head. It uses a deepfaked older avatar of “Ella”, who confronts her parents with the possible consequences of the sharenting they indulged in when she was a little girl.
Plainly put, AI is used here to draw attention to the risks it itself poses. The message is clear: yes, there are opportunities, but there is also an urgent need for responsible handling of personal data in the digital world. Once you post your silly moments on a digital platform, they are out there for all to see, manipulate and abuse.
You cannot be too careful.