Meta, the company behind Facebook and Instagram, is planning a major change in how it checks new features and updates for risks. Instead of relying mostly on people, Meta wants to let artificial intelligence (AI) handle up to 90% of these checks.
This information comes from internal documents seen by NPR. For the past ten years, Meta has used human-led reviews to assess whether changes could harm users, for example by violating their privacy, endangering children, or spreading harmful content.
Under the new plan, Meta’s product teams will fill out a questionnaire about their update. An AI system will then respond quickly with either an approval or a list of conditions the team must meet before launching. The teams themselves will be responsible for confirming that they have met those conditions.
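Meta has not described how the system works internally, but a questionnaire-driven triage flow of this kind can be sketched in a few lines. The sketch below is purely illustrative: the questionnaire fields, risk categories, and decision rules are assumptions for the sake of the example and do not reflect Meta's actual tooling.

```python
# Hypothetical sketch of an automated risk-review triage step.
# All field names, risk categories, and rules are illustrative
# assumptions, not Meta's actual internal system.
from dataclasses import dataclass, field

HIGH_RISK_AREAS = {"youth_safety", "privacy", "violent_content"}

@dataclass
class Questionnaire:
    feature_name: str
    data_collected: list[str]    # e.g. ["location", "contacts"]
    affected_areas: set[str]     # e.g. {"privacy"}
    targets_minors: bool = False

@dataclass
class ReviewDecision:
    approved: bool
    conditions: list[str] = field(default_factory=list)
    escalate_to_human: bool = False

def triage(q: Questionnaire) -> ReviewDecision:
    # Sensitive areas go to human experts, matching Meta's stated
    # policy of keeping people on complex or high-risk reviews.
    if q.targets_minors or (q.affected_areas & HIGH_RISK_AREAS):
        return ReviewDecision(approved=False, escalate_to_human=True)
    conditions = []
    if "location" in q.data_collected:
        conditions.append("Add an in-product location-data disclosure.")
    # Low-risk changes get an instant approval, possibly with
    # conditions the product team must self-certify before launch.
    return ReviewDecision(approved=True, conditions=conditions)

decision = triage(Questionnaire(
    feature_name="story_reactions_v2",
    data_collected=["location"],
    affected_areas={"engagement"},
))
print(decision)
```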
Meta says this will help developers move faster and focus more on innovation. It claims that only simple or low-risk decisions will be handled by AI, while human experts will still review complex or sensitive issues.
Concerns over less human oversight
However, internal documents and employee comments show that even high-risk areas, such as AI safety, youth protection, and violent content, may be reviewed by AI.
Meta says it still audits AI decisions and makes exceptions for Europe, where laws require stricter oversight. For users in the EU, risk reviews will continue to be handled by teams based in Ireland.
Part of a bigger AI push at Meta
This shift is part of Meta’s wider plan to use AI across the company. CEO Mark Zuckerberg recently said Meta’s AI agents will soon write most of the company’s code, and that these tools can already debug code and outperform many developers.
Meta is also building special AI tools to help with research and product development.
Other tech companies are making similar moves. Google CEO Sundar Pichai has said about 30% of the company’s code is now AI-written, and OpenAI CEO Sam Altman has said some companies are already at 50%.
Still, Meta’s timing has raised questions. The changes come soon after it ended its fact-checking program and loosened rules on hate speech. Critics worry that Meta is removing safety checks to move faster, possibly putting users at risk.