AI labelling rules to ease burden on routine edits

Technology companies and content creators are set to get significant compliance relief under the forthcoming artificial intelligence labelling regime, with the government preparing to explicitly exclude routine, good-faith edits from mandatory disclosure requirements, according to people familiar with the discussions.

Industry concerns that even basic enhancements could be swept into the definition of synthetically generated information are expected to be addressed when the final rules are notified later this month. The changes are likely to narrow the scope of labelling obligations, easing operational and legal risks for platforms, creators and enterprises that use AI tools for everyday functions.

Distinguishing Minor Enhancements from AI Generation

Officials said the final framework will draw a clearer distinction between content that is fully generated using artificial intelligence and minor modifications where AI tools are deployed only to improve quality. Routine actions such as adjusting brightness or colour in photographs, cleaning up audio, applying filters, using auto-correct or making basic visual refinements will not be treated as AI-generated content requiring disclosure. In such cases, labelling will be optional and left to user discretion.

The draft rules had defined synthetically generated content as any information created or altered using a computer or algorithm in a way that appears real. Industry players had flagged that the wording was broad enough to capture even benign edits, potentially imposing onerous compliance obligations across a wide range of digital activity.

Government officials said this feedback had been taken into account during the review. They said the intent of the rules was not to penalise routine creative or technical processes but to bring transparency to content that is materially generated or manipulated using AI in a way that could mislead audiences.

Creative works such as feature films, certified dramas and other content approved by statutory authorities are also expected to remain outside the labelling mandate. Educational content, however, will require disclosure where AI-generated elements are involved, people aware of the matter said.

Balancing Transparency and Artistic Integrity

Concerns raised by artists and creators that AI labels could dilute audience perception of their work have also been considered. Officials said the rules do not require permanent, intrusive tagging that dominates user attention. The objective, they said, is transparency rather than stigmatisation of AI-assisted creation.

At the same time, the government is expected to retain a strict stance on content that is entirely synthetically generated, particularly deepfakes. Such photos and videos will have to carry permanent, non-removable labels that are prominently displayed. For visual content, the label will need to cover at least 10% of the screen area, while audio content will require an audible disclosure during the initial portion of the clip.
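As a hypothetical illustration of the reported 10% visual-coverage requirement, the check a platform might perform reduces to simple area arithmetic. The function and parameter names below are assumptions for the sketch, not part of any official specification.

```python
def label_meets_coverage(label_w: int, label_h: int,
                         screen_w: int, screen_h: int,
                         min_fraction: float = 0.10) -> bool:
    """Return True if the label's area is at least min_fraction
    (10% by default, per the reported rule) of the screen area."""
    label_area = label_w * label_h
    screen_area = screen_w * screen_h
    return label_area >= min_fraction * screen_area

# A 400x300 label on a 1920x1080 screen covers ~5.8% -> fails the threshold
print(label_meets_coverage(400, 300, 1920, 1080))  # False
# A 640x360 label covers ~11.1% -> meets the threshold
print(label_meets_coverage(640, 360, 1920, 1080))  # True
```

In practice a platform would also have to account for overlays, aspect ratios and the audible-disclosure rule for audio clips, none of which this sketch attempts to model.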

To avoid regulatory overreach, the obligations will apply only to content that is publicly shared or published on social media platforms. Private or unpublished material will remain outside the scope of enforcement.

The AI labelling framework is being introduced through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Once notified, it will mark the first time that synthetically generated information is formally defined under the IT law.

This article was first uploaded on January 9, 2026, at 9:31 pm.