Digital watermarking of AI explained: What it is, what it does, and why it’s a big deal

OpenAI has a method up its sleeve that can identify any kind of AI-generated content


Since the launch of ChatGPT, the debate over the pros and cons of AI-generated content has continued. Amid this, OpenAI has reportedly developed a method that can identify AI-generated content. But there’s a catch. According to the Wall Street Journal, OpenAI has kept the tool secret, and it is believed the new tool could pose a threat to its own AI chatbot, ChatGPT.

OpenAI has a method up its sleeve that can reportedly identify any kind of AI-generated content. Be it an image, text or even AI-edited original content, the rumoured tool can identify it all. But OpenAI might not be ready to launch it yet.

Decoding AI watermarking

OpenAI is working on what is described as ‘anti-cheating’ technology, which has yet to be launched publicly. According to the report, OpenAI’s new tool will be able to detect AI-generated content using this ‘anti-cheating’ technology, which could help curb the misuse of AI. It would do so by identifying an ‘AI watermark’, which is usually invisible to the human eye.

But what is AI watermarking and how does it work? AI watermarking can be defined as the process of embedding a ‘recognisable, unique signal into the output of an artificial intelligence model’. That output could be text, an image or any other form of AI-generated content. This signal, known as a watermark, can then be detected by algorithms designed to scan for it, though not every detection tool has that capability.
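To make the general idea concrete, here is a deliberately simple toy example in Python. It is not any vendor’s actual scheme, and not the model-level watermark described next; it just shows the embed-and-scan pattern: hide a marker made of zero-width characters (invisible to a human reader) in a piece of text, and let a matching scanner look for it.

```python
# Toy illustration of the embed-and-scan idea behind watermarking (not a real scheme):
# hide an invisible, machine-detectable signal in text, then scan for it.

ZERO_WIDTH_TAG = "\u200b\u200c\u200b"  # hypothetical marker built from zero-width characters

def embed_watermark(text: str) -> str:
    """Append the invisible marker to the text."""
    return text + ZERO_WIDTH_TAG

def detect_watermark(text: str) -> bool:
    """Scan the text for the invisible marker."""
    return ZERO_WIDTH_TAG in text

original = "This paragraph looks perfectly ordinary to a human reader."
marked = embed_watermark(original)

print(marked == original)           # False: the two strings differ
print(len(marked) - len(original))  # 3 invisible characters were added
print(detect_watermark(marked))     # True: a scanner built for the signal finds it
print(detect_watermark(original))   # False: unmarked text passes clean
```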

Here, OpenAI’s new ‘anti-cheating’ technology comes into play. The Wall Street Journal explained that OpenAI’s anti-cheating tool can change the way ChatGPT ‘selects words or word fragments (tokens) to generate text or any form of content’. This modification embeds a subtle pattern, the watermark, into the generated text. The pattern can be recognised by OpenAI’s detection technology, which can then flag a document or section as having been generated by ChatGPT.
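OpenAI has not published how its tool biases token selection, so the sketch below uses a well-known approach from the research literature instead (a ‘green list’ watermark in the style of Kirchenbauer et al.): each previous token seeds a pseudorandom split of the vocabulary, the sampler nudges the model towards the ‘green’ half, and a detector later counts how often the text landed on that half. The toy vocabulary, bias strength and helper names here are assumptions for illustration only.

```python
import hashlib
import random

# Illustrative "green list" text watermark, in the style of research-literature
# schemes (e.g. Kirchenbauer et al.). This is NOT OpenAI's unpublished method;
# it only shows how biasing token choice leaves a detectable statistical pattern.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug",
         "quickly", "slowly", "and", "then", "jumped", "slept"]
GREEN_FRACTION = 0.5   # share of the vocabulary marked "green" at each step (assumed)
GREEN_BIAS = 4.0       # how strongly sampling favours green tokens (assumed)

def green_list(prev_token: str) -> set:
    """Pseudorandomly split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Stand-in for a language model: sample tokens, nudged towards the green list."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        # A real model would adjust logits; here we simply weight green tokens more.
        weights = [GREEN_BIAS if t in greens else 1.0 for t in VOCAB]
        tokens.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return tokens

def green_hit_rate(tokens: list) -> float:
    """Detector: recompute each step's green list and count how often it was hit."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)

rng = random.Random(1)
watermarked = generate_watermarked(200)
unmarked = [rng.choice(VOCAB) for _ in range(200)]

print(f"green-token rate, watermarked text: {green_hit_rate(watermarked):.2f}")  # roughly 0.8
print(f"green-token rate, ordinary text:    {green_hit_rate(unmarked):.2f}")     # close to 0.5
```

Because only the sampling step changes, the watermarked text still reads naturally to a human, while the skewed green-token count gives a detector something concrete to measure.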

Through this method, OpenAI could stop AI-generated content from being passed off as original work, a practice often tied to fraudulent activity. However, the “decision to keep the anti-cheating tool under wraps is because it has a few risk factors and is complex,” an OpenAI spokesperson told the Wall Street Journal. The spokesperson added that, given those complexities, a launch might affect a broader ecosystem beyond OpenAI.

The way ahead

Early reports suggest that OpenAI’s tools have been used in many fraudulent activities. According to the Center for Democracy and Technology, a technology policy non-profit, about 59 per cent of middle- and high-school teachers said that some students had used AI to help with schoolwork.

Further rumours suggest that the watermarking technique could be 99.9 per cent effective. However, the method approaches that level of accuracy only when ChatGPT generates a substantial amount of new text, which gives the detector enough material to identify AI-generated content reliably.
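That caveat about needing a substantial amount of new text follows from the statistics of detection: the watermark is a bias counted over many tokens, so longer outputs let the count stand out more clearly from chance. The back-of-the-envelope Python calculation below assumes the illustrative green-list scheme sketched above (a 50 per cent green list and an 80 per cent green-token rate in watermarked text); the figures are assumptions, not OpenAI’s numbers.

```python
import math

# Rough illustration of why detection confidence grows with text length.
# Assumes the illustrative green-list scheme above: ordinary text hits the
# green list ~50% of the time, watermarked text ~80% (assumed values).

BASELINE_RATE = 0.5      # expected green-token rate for ordinary text
WATERMARKED_RATE = 0.8   # assumed green-token rate for watermarked text

def detection_z_score(num_tokens: int) -> float:
    """z-score of the observed green-token count against the chance-level baseline."""
    observed = WATERMARKED_RATE * num_tokens
    expected = BASELINE_RATE * num_tokens
    std_dev = math.sqrt(num_tokens * BASELINE_RATE * (1 - BASELINE_RATE))
    return (observed - expected) / std_dev

for n in (10, 50, 200, 1000):
    print(f"{n:>5} tokens -> z = {detection_z_score(n):4.1f}")
# 10 tokens gives z of about 1.9 (could easily be chance);
# 1000 tokens gives z of about 19 (essentially impossible by chance).
```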

This article was first uploaded on August 7, 2024, at 4:01 pm.