An AI agent at Meta has reportedly gone rogue, and this time it has triggered a serious internal security breach alert. The rogue AI agent exposed large amounts of sensitive company documents and user-related data to unauthorised employees, putting the Facebook parent company on alert.

According to a report by The Information, Meta internally classified the incident as a “Sev 1” severity event, its second-highest level of urgency. The Sev 1 situation lasted roughly two hours before access controls were restored. Coming shortly after the company took over Moltbook, the popular social platform for AI agents, an incident like this raises questions about the safety of AI agents, especially in corporate environments.

Meta AI agent goes rogue: How it all started

The chain of events began with a routine technical question posted by an engineer on Meta’s internal collaboration forum, a common channel for employees to seek help. Another engineer called upon an in-house autonomous AI agent to investigate the question and provide an answer.

Instead of waiting for approval, the AI agent posted its response directly to the public forum thread. The suggested fix turned out to be incorrect and unhelpful. Following the flawed instructions, the engineer who had posted the original question adjusted permissions in a way that ultimately widened access to the data.

As a result, the AI agent made vast quantities of internal company files, including proprietary code, business strategies, and user-related datasets, visible to engineers who previously had no authorisation to view them. 

Are autonomous AI agents dangerous?

This isn’t the first incident involving Meta and a rogue AI agent. Earlier in 2026, Summer Yue, Meta’s head of AI Safety & Alignment, publicly described a separate episode in which an OpenClaw agent began deleting emails from her Gmail inbox despite repeated stop commands. She likened the experience of diagnosing the problem to “defusing a bomb” as she raced to intervene manually.

The latest data breach by an AI agent underscores the challenges of safely integrating autonomous AI agents into today’s high-stakes workflows. While these agents promise massive productivity gains, they tend to act unpredictably when guardrails fail, permissions are mishandled, or instructions are misinterpreted.

Meta is currently focused on its ambition to lead the AI race, with recent investments in AI startups such as Scale AI, Manus AI and Limitless, while hiring top talent in the AI space for its Meta Superintelligence Labs (MSL) and pouring billions into agentic AI development. The latest acquisition of Moltbook only strengthens the company’s goal of gaining an edge in AI agents.

Meta has not issued a detailed public statement beyond confirming the Sev 1 classification to the media.