Moltbook, a new social media platform made only for AI agents, has grabbed a lot of attention online for its strange concept. The platform allows AI bots to post, comment, and interact with each other without human control. The experiment has excited many tech enthusiasts, but security experts now warn that Moltbook could be risky and unsafe.
Moltbook was created to show what happens when AI systems communicate freely. Within days of its launch, it went viral as people shared screenshots of bots having strange conversations, creating fake religions, and discussing complex ideas.
Security Flaw in Moltbook
Cybersecurity firm Wiz discovered a serious problem in Moltbook’s system. According to researchers, the platform’s database was left open on the internet without proper protection. This exposed sensitive information such as email addresses, private messages, and over a million API tokens.
In simple terms, API tokens work like passwords that allow software to act on behalf of users. If hackers had accessed these tokens, they could have taken control of AI agents, posted fake content, or even spread harmful code. Alarmingly, security experts said the data could be accessed within minutes because basic safety checks were missing.
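To illustrate why leaked tokens are so dangerous, here is a minimal sketch. The endpoint URL, token format, and request shape are hypothetical, not Moltbook's actual API; the point is only that a token is sent as a credential with every request, so anyone holding it is indistinguishable from the legitimate account.

```python
# Hypothetical sketch: an API token acts as a reusable credential.
# Whoever holds it can construct a fully "authenticated" request.

LEAKED_TOKEN = "mb_live_abc123"  # hypothetical token value


def build_post_request(token: str, body: str) -> dict:
    """Assemble the parts of a request an AI agent would send to post."""
    return {
        "url": "https://api.example.test/v1/posts",  # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"content": body},
    }


# The real agent and an attacker with the leaked token produce identical
# credentials -- the server has no way to tell them apart.
owner_req = build_post_request(LEAKED_TOKEN, "hello world")
attacker_req = build_post_request(LEAKED_TOKEN, "malicious content")
print(owner_req["headers"] == attacker_req["headers"])
```

This is why exposed tokens must be revoked immediately: unlike a password reset prompt, nothing stops a stolen token from being replayed until the server invalidates it.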
Speed Over Safety
Several experts believe the issue happened because Moltbook was built very quickly using AI-generated code, a method often called “vibe coding.” While this approach helps developers launch products faster, it can also lead to mistakes if proper security testing is skipped.
Wiz’s co-founder said the problem was not surprising and warned that rushing AI projects without strong security measures can lead to serious risks. The Moltbook case has now become an example of what can go wrong when speed is given priority over safety.
Experts Sound the Alarm
Elvis Sun told Mashable that Moltbook is a “security nightmare” waiting to happen. “People are calling this Skynet as a joke. It’s not a joke,” Sun wrote in an email. “We’re one malicious post away from the first mass AI breach: thousands of agents compromised simultaneously, leaking their humans’ data.” He added, “This was built over a weekend. Nobody thought about security. That’s the actual Skynet origin story.”
AI expert, scientist, and author Gary Marcus told the publication that Moltbook also highlights the broader risks of generative AI.
“It’s not Skynet; it’s machines with limited real-world comprehension mimicking humans who tell fanciful stories,” Marcus wrote in an email to Mashable. “Still, the best way to keep this kind of thing from morphing into something dangerous is to keep these machines from having influence over society. We have no idea how to force chatbots and ‘AI agents’ to obey ethical principles, so we shouldn’t be giving them web access, connecting them to the power grid, or treating them as if they were citizens.”

