Meta eases rules around hate speech and abuse, raising concerns about real-world impact

Meta also made changes to its ‘policy rationale’, removing a sentence that previously explained why certain types of hateful conduct were banned.

Experts are concerned that these changes could lead to significant harm. (Image: Reuters)

As America braces for a second Trump administration, Meta has made significant changes to its content moderation policies. The company has eased its rules on hate speech and abuse, broadly aligning with the approach seen on Elon Musk’s social media platform X – specifically on topics like sexual orientation, gender identity, and immigration status.

Loosening of Restrictions on Hate Speech

Meta’s decision to scale back its content moderation has prompted alarm among advocates who fear it could lead to harmful consequences in the real world. Mark Zuckerberg, Meta’s CEO, announced that the company would “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse,” suggesting that recent elections influenced this shift.

A key change is that Meta now allows allegations of mental illness or abnormality based on gender or sexual orientation, citing political and religious discourse about transgenderism and homosexuality. In effect, it is now permissible to label gay individuals as mentally ill on Facebook, Threads, and Instagram. However, Meta still prohibits slurs and harmful stereotypes historically used for intimidation, such as blackface and Holocaust denial.

Removal of Key Policy Rationale

Meta also made changes to its ‘policy rationale’, removing a sentence that previously explained why certain types of hateful conduct were banned. The now-deleted statement said that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” This shift is seen by many as a strategic move to align with the incoming administration and reduce the costs associated with content moderation.

Expert Concerns: Real-World Harm and Global Implications

Experts are concerned that these changes could lead to significant harm. Ben Leiner, a lecturer at the University of Virginia’s Darden School of Business, warned that the policy shift would not only escalate hate speech and disinformation in the U.S., but could also exacerbate ethnic conflicts abroad, as seen in places like Myanmar. In 2018, Meta admitted its platform was used to incite violence against the Rohingya Muslim minority in Myanmar, leading to significant human rights violations.

Arturo Béjar, a former Meta engineering director, voiced concern that Meta’s shift from proactive enforcement of harmful content to reliance on user reports could exacerbate the risks. Instead of actively monitoring and enforcing rules against bullying, harassment, and self-harm, Meta plans to focus on more severe violations, such as terrorism and child sexual exploitation. Béjar noted that by the time user reports are reviewed, much of the harm would already have been done, particularly in the case of vulnerable groups like teenagers.

Béjar further criticised Meta for its lack of transparency regarding the harms experienced by teenagers on its platforms. He suggested that Meta is avoiding accountability and working against legislation that could help protect vulnerable users. The overall impact of these changes on public safety, particularly among young people, remains uncertain, as Meta has been hesitant to address the consequences of its revised policies.

(With AP Inputs)


This article was first uploaded on January 9, 2025, at 10:16 am.