Tech giant Google has joined companies such as Adobe, Intel, Microsoft and Sony, among others, to develop technical standards that will help users trace the origin of digital content, including content generated using artificial intelligence (AI) technologies.
The company has joined the Coalition for Content Provenance and Authenticity (C2PA), under which it will help develop credentials for digital content. These credentials act as a kind of label, carrying information such as when and how a piece of content (a photograph, video, text or audio clip) was created, where it first originated, who owns it, what tools were used to create it, and how many edits have been made to it, among other key details.
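To make the idea concrete, the kind of record a content credential carries can be sketched as a simple data structure. The snippet below is a minimal illustration in Python; the field names are hypothetical placeholders, not the actual C2PA manifest schema, which defines a cryptographically signed format.

```python
# A minimal, hypothetical sketch of the information a content credential
# records. Field names are illustrative placeholders; the real C2PA
# specification defines a cryptographically signed manifest format.
from dataclasses import dataclass, field


@dataclass
class ContentCredential:
    created_at: str          # when the content was created
    creation_tool: str       # e.g. a camera model or an AI image generator
    origin: str              # where the content first originated
    owner: str               # who owns the content
    edits: list[str] = field(default_factory=list)  # history of edits applied


credential = ContentCredential(
    created_at="2024-02-08T10:30:00Z",
    creation_tool="example-ai-image-generator",
    origin="publisher.example.com",
    owner="Example News Agency",
    edits=["cropped", "colour-corrected"],
)
print(credential)
```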
In 2021, companies including Adobe, Arm, Intel, Microsoft and Truepic co-founded the C2PA to develop open technical standards for authenticating digital content online. The effort works in tandem with Adobe's Content Authenticity Initiative (CAI), started in 2019 to address the issue of traceability and help combat the threat of misinformation for consumers.
The development is significant because the recent rapid deployment of AI technologies has been accompanied by a rise in misinformation, deepfakes and other user harms.
Further, with elections in 50 countries this year, it is essential that people have access to transparency-based solutions such as content credentials, on the basis of which they can verify content before trusting it.
“At Google, a critical part of our responsible approach to AI involves working with others in the industry to help increase transparency around digital content,” said Laurie Richardson, vice president of trust and safety at Google.
“It builds on our work in this space – including Google DeepMind’s SynthID, Search’s About this Image and YouTube’s labels denoting content that is altered or synthetic – to provide important context to people, helping them make more informed decisions,” Richardson added.
“In the critical context of this year’s global elections where the threat of misinformation looms larger than ever, the urgency to increase trust in the digital ecosystem has never been more pressing,” said Dana Rao, general counsel and chief trust officer at Adobe and co-founder of the C2PA.
According to Rao, Google’s membership in C2PA will help accelerate adoption of Content Credentials everywhere, from content creation to consumption.
On February 6, Meta also announced that it is working with industry partners on common technical standards for identifying AI content, including video and audio. In the coming months, the company will label AI-generated images that users post to Facebook, Instagram and Threads.
Meta’s president of global affairs Nick Clegg, said, “we’re building industry-leading tools that can identify invisible markers at scale – specifically, the ‘AI generated’ information in the C2PA and IPTC (International Press Telecommunications Council) technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools”.
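As a rough illustration of what checking for such a marker involves, the sketch below naively scans an image file's raw bytes for the IPTC "trainedAlgorithmicMedia" digital-source-type URI, which embedded XMP metadata can carry. This is a toy check under that assumption, not how production systems work: real tooling parses the metadata properly and cryptographically verifies C2PA manifests.

```python
# Naive sketch: scan a file's raw bytes for the IPTC digital-source-type
# URI that marks AI-generated media in embedded XMP metadata. Real systems
# parse the metadata structure and verify signed C2PA manifests instead.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"


def looks_ai_labelled(path: str) -> bool:
    """Return True if the file appears to carry the IPTC AI-media marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()


if __name__ == "__main__":
    # "example.jpg" is a hypothetical file path for illustration only.
    print(looks_ai_labelled("example.jpg"))
```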
In India too, the government is looking at regulating AI through the prism of user harm, without hindering innovation. While pitching for the responsible use of AI, the government is initially looking at regulations to curb the spread of deepfakes via social media platforms such as X, Facebook and Instagram. Broader regulation at the technology level would be addressed in the upcoming Digital India Act, according to government officials.
Last year, the government issued multiple advisories to social media companies to take down deepfake and misinformation content from their platforms.
Under the IT rules, companies are mandated to remove such content within 36 hours of receiving a report from either a user or a government authority. Failure to comply invokes Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC). Non-compliance could also cost online platforms their safe harbour protection under Section 79(1) of the Information Technology Act, 2000.
In December, MeitY also asked the platforms to send regular reminders to their users not to upload, transmit or host prohibited content. The companies were asked to inform users about such content at first registration, through regular reminders, at every login, and while uploading or sharing information on the platform.