Criticism has mounted over Grok’s new ‘edit image’ feature that some X users have exploited to digitally undress people. It is a major setback to X’s efforts to monetise the responses section on Grok. It was only last year that X introduced advertising on Grok to grow its revenue base. Since then, the platform has tom-tommed its ability to boost brand discovery through Grok. Given the graphic nature of the recent abuse, should advertisers hold back till the platform fixes the niggling content moderation issue? Importantly, can brands afford to ignore a platform that has over 24 million active users from India? Experts weigh in.


‘The risk-reward equation has changed with Grok’

Prabhu Ram, VP, Industry Research Group, Cyber Media Research

X remains a globally significant social platform for primary conversations, with hundreds of millions of users and strong real‑time reach and engagement for advertisers. With the introduction of Grok, however, the risk‑reward equation has changed. While Grok does enhance discovery, it also introduces brand‑safety vulnerabilities that advertisers cannot ignore.

Given the inherent value of X, advertisers should treat Grok as a high‑risk experimental surface, not a default media channel. Usage should be limited to tightly scoped pilots with capped budgets, restricted geographies, and clearly defined exit criteria, while flagship brand equity campaigns continue to run in more controlled environments. This approach requires the strictest available brand‑safety settings on X, aggressive blocklists and negative keyword strategies, coordination with third‑party verification partners, and opting out of sensitive categories such as politics, adult content, and polarising news wherever possible.

Minimising reputational risk requires governance, not just media controls. Grok‑assisted content should not go live without a human‑in‑the‑loop review process; AI can support ideation and testing, but brands must retain editorial accountability. Advertisers should pre‑build incident response playbooks covering monitoring, escalation, pausing spend, and crisis communication if brand assets appear alongside problematic Grok outputs. Internally, Grok should be classified as a ‘heightened risk’ environment requiring senior‑level approval before activation.

Ultimately, responsibility also sits with X. To make Grok viable for mainstream advertiser budgets, the platform must implement stronger safeguards at the model level, including robust nudity and child‑safety filters, protections against non‑consensual intimate imagery, and fast, visible reporting and takedown mechanisms for brands and users. X should introduce transparent, auditable brand‑safety tiers specific to Grok, offer clear opt‑outs from all Grok‑adjacent inventory, and enable independent verification of performance and incident rates. By pairing these structural enhancements with consistent, credible communication to stakeholders, X can position Grok as a premium, high‑intent discovery surface, instead of a potential reputational liability that advertisers must constantly manage.

‘Watershed moment for digital advertising ethics’

Siddhant Sethi, AI specialist

The Grok controversy exposes a fundamental breach of trust that should alarm every brand on X. When a platform’s AI tool enables non-consensual image manipulation, including targeting minors, without guardrails, it creates an environment where brand safety is impossible. Any brand discovery through Grok is meaningless when your content can be weaponised into explicit material.

There is no optimisation strategy for a broken system. Brands must conduct immediate risk audits of their X presence. Until the platform implements stringent safeguards, brands should go silent on X, not just for self-preservation, but as active protest. Continued advertising signals tacit acceptance of a platform that enables harassment.

Elon Musk’s performative, joking acknowledgment of this crisis only compounds the problem. This isn’t a laughing matter. It is digital assault enabled at scale. X must disable Grok’s image morphing capability entirely until proper consent mechanisms exist. Implement mandatory watermarking like Gemini. Establish swift permanent bans for violators. Create brand safety controls allowing complete opt-out from Grok interactions. Most critically, issue a serious, non-performative statement outlining specific technical measures and timelines.

Advertiser confidence cannot be won through jokes or half-measures. X needs transparent reporting: violation numbers, account actions taken, and prevention mechanisms deployed. Advertisers need assurance that their brand assets won’t be morphed into explicit content targeting harassment victims.

The truth is uncomfortable: by advertising on X currently, brands implicitly endorse a platform enabling digital violations. At this juncture, silence is the only ethical stance for brands to take until X demonstrates genuine commitment to safety, prioritising it over viral engagement. Brand integrity demands nothing less.

This is a watershed moment for digital advertising ethics. Brands that choose short-term reach over long-term reputation will find themselves answering uncomfortable questions when this controversy inevitably intensifies. The choice for advertisers is clear. Stand with victims demanding accountability, or stand with a platform that treats their violation as entertainment. There is no neutral ground here.