YouTube has updated its internal moderation policies to permit some content that partially violates its rules to remain online if it is considered important for public understanding, according to a report by The New York Times. The policy shift, introduced in December 2023, signals a recalibration of how the platform balances harm prevention against freedom of expression, particularly on complex or contentious topics.
Under the revised guidelines, content reviewers are now directed to leave up videos unless more than 50% of the material breaches YouTube’s policies. This marks a departure from the previous rule, which set the threshold at 25%. The updated standard is particularly relevant to videos addressing subjects like elections, identity, gender, race, immigration, and social ideologies.
In addition to the higher threshold for removal, moderators are now asked to weigh whether a video’s value in supporting free speech might outweigh its potential for harm. If so, they are instructed to escalate the case for further review rather than remove the content outright. This process falls under YouTube’s established exception categories for educational, documentary, scientific, and artistic content, collectively referred to as the EDSA framework.
“YouTube’s Community Guidelines are regularly reviewed and adjusted to reflect the evolving nature of the platform,” spokesperson Nicole Bell told The Verge. She emphasized that the change applies to a limited portion of content and is meant to prevent overly broad enforcement, citing as an example the need to avoid removing an entire lengthy news podcast because of one brief clip that violates the rules. The development builds on an earlier decision to allow content from political candidates to stay online even when it runs afoul of moderation policies, provided it contributes to public awareness, a carve-out that was especially relevant in the context of the 2024 U.S. elections.
The move aligns with broader changes across the social media landscape. Meta, for instance, has scaled back its enforcement against misinformation and hate speech, ending its third-party fact-checking program and shifting to user-driven corrections, a model similar to the one used by X (formerly Twitter). YouTube had previously taken a stricter approach, particularly during the Covid-19 pandemic and the Trump presidency, when it aggressively removed misinformation about vaccines and election outcomes.
