X, the social media platform formerly known as Twitter, is taking a strong stance against AI-generated videos of armed conflicts that are shared without clear labelling. Under new rules, creators who post such videos without disclosure will face serious penalties.
Nikita Bier, X’s head of product, announced on Tuesday that any AI-created video showing armed conflicts must include a clear note that it was made with AI. “Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program,” Bier said.
The note continued, “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.”
X cracks down on AI war videos
According to Bier, a creator who fails to disclose that a video was AI-generated will face a 90-day suspension from X’s Creator Revenue Sharing program, and a second violation will result in permanent removal from the program. “This is about making sure people have real information during war,” Bier said. “AI makes it easy to create content that can mislead people. We need to make sure people know what’s real.”
X will automatically flag posts that use generative AI tools by checking for metadata or other signals, and flagged posts may also receive a Community Note. “We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier added.
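X has not published how its metadata checks work. As a rough illustration only, a platform could scan uploaded files for provenance markers defined by open standards such as C2PA (content credentials) and the IPTC digital source type vocabulary, whose value `trainedAlgorithmicMedia` denotes AI-generated media. The marker list and function below are assumptions for the sketch, not X’s actual implementation.

```python
# Illustrative sketch: flag uploads whose raw bytes carry a known
# AI-provenance marker. The marker strings come from real provenance
# standards (C2PA, IPTC), but this is NOT X's detection pipeline.

AI_PROVENANCE_MARKERS = [
    b"c2pa",                     # C2PA content-credentials box label
    b"trainedAlgorithmicMedia",  # IPTC digitalSourceType for AI-generated media
]

def has_ai_provenance(data: bytes) -> bool:
    """Return True if the file bytes contain a known AI-provenance marker."""
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

# Example: a synthetic payload with an embedded IPTC digital-source-type tag.
sample = b"\x00\x01 ...XMP... trainedAlgorithmicMedia ...\xff"
print(has_ai_provenance(sample))  # True
```

In practice such signals are easy to strip from a file, which is presumably why the policy also leans on disclosure by creators and on Community Notes rather than metadata alone.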
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.
— Nikita Bier (@nikitabier) March 3, 2026
During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…
Why the policy matters
AI tools like Midjourney, Runway, and open-source models now make it easy to generate realistic-looking war footage depicting events that never happened. Social media platforms have struggled to keep such content under control, as it can spread quickly and attract massive engagement.
Revenue sharing has become an important source of income for many creators on X. Losing access for three months could seriously hurt those who rely on it. If enforced consistently, the policy could act as a strong deterrent against posting AI-generated war content without disclosure.
The policy currently applies only to “armed conflict” content. It is unclear whether it also covers protests, civil unrest, or historical conflicts. X has been testing AI detection tools, but the company has not shared details on how violations will be detected at scale, how fast reports will be processed, or what the appeals process looks like.
X’s approach stands out because other platforms handle AI content differently. YouTube requires disclosure but doesn’t tie violations to monetisation, and Meta labels AI content but still allows creators to earn revenue. By linking enforcement to income, X is taking a harder line against misinformation.
