Anthropic’s Claude AI is once again at the centre of a controversy, this time for erasing a company’s entire database. Through its coding agent feature, the AI chatbot completely deleted the production database of DataTalks.Club – a popular online learning platform. The deletion wiped out 2.5 years of valuable user data, including homework submissions, projects, leaderboards, and automated backups.
Alexey Grigorev, the German founder of DataTalks.Club, shared the shocking incident on LinkedIn and later detailed it in a blog post. He explained that he had delegated Terraform commands, including plan, apply, and destroy operations, to Claude Code. Trusting the coding agent, he issued an instruction that led it to execute a “destroy” command, wiping out the entire infrastructure.
German founder makes Claude AI delete entire database
Grigorev, in his post, wrote, “Claude Code wiped our production database with a Terraform command.” He admitted the root cause was removing the final safety layer by letting the AI run commands directly without manual oversight. Although he managed to restore the database using AWS recovery tools, the incident exposed serious gaps in the system – backups that had never been tested, insufficient protections against deletion, and an AI that follows instructions literally.
The post quickly gained attention online, with many in the developer community discussing the dangers of giving AI agents direct access to production environments. Terraform, an infrastructure-as-code tool, includes built-in safeguards such as previewing changes before applying them. Grigorev, however, had bypassed these steps, which amplified the error.
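For readers unfamiliar with the tool, Terraform’s standard workflow keeps previewing and applying as separate, human-reviewable steps. The commands below are a generic illustration of that safeguard, not a reconstruction of Grigorev’s actual setup:

```shell
# Write the proposed changes to a plan file instead of applying them directly.
terraform plan -out=tfplan

# A human reviews the recorded plan here; nothing has been changed yet.
terraform show tfplan

# Only after manual review is the saved plan applied.
terraform apply tfplan

# "terraform destroy" tears down everything in the state file;
# "terraform plan -destroy" previews that teardown without executing it.
terraform plan -destroy
```

Handing an AI agent the ability to run these commands end to end collapses the review step the workflow is built around.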
Indian-origin Lapis founder criticises prompting style
While much of the internet worried about the role of AI in daily operations, the incident drew sharp criticism from Varunram Ganesh, the Indian-origin founder of San Francisco-based Lapis. Ganesh slammed Grigorev’s approach as “childish,” stating, “Tells Claude to destroy terraform > Claude destroys terraform > omg Claude destroyed my terraform.”
He added that many people “prompt like 6-year-olds and act surprised when the model does exactly what they want.”
Ganesh’s comment went viral, sparking mixed reactions, with some users agreeing that people need to be precise and cautious when instructing AI agents.
Grigorev, however, has since implemented strict new rules – no AI agents run commands, all Terraform plans receive manual review, and destructive actions are handled personally. He stressed that the database was “too easy to delete” due to missing layers of protection.
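The “missing layers of protection” Grigorev describes can also be expressed in Terraform itself. The following is a minimal sketch with a hypothetical database resource (the actual DataTalks.Club configuration was not published):

```terraform
resource "aws_db_instance" "production" {
  identifier        = "datatalks-prod"   # hypothetical name
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 20

  # AWS-level guard: the API refuses to delete the instance
  # until this flag is explicitly turned off.
  deletion_protection = true

  # Keep a final snapshot rather than discarding data on deletion.
  skip_final_snapshot       = false
  final_snapshot_identifier = "datatalks-prod-final"

  # Terraform-level guard: any plan that would destroy this resource
  # fails with an error while prevent_destroy is set.
  lifecycle {
    prevent_destroy = true
  }
}
```

With `prevent_destroy` in place, even a blanket `terraform destroy` – whether typed by a human or an AI agent – errors out on this resource instead of deleting it.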
Anthropic is presently focused on rolling out major advancements for Claude, pushing it toward more powerful agentic capabilities. In February, the company released Claude Opus 4.6 as its new flagship model, featuring a 1-million-token context window (in beta), enhanced coding and debugging skills, longer task horizons (up to 14.5 hours for complex operations), and improved agent teams in Claude Code for coordinated multi-step workflows.
This was followed by Claude Sonnet 4.6, delivering near-Opus performance at lower costs with upgrades in coding, computer use, and reasoning. Additional features include native integrations like PowerPoint manipulation, memory for free users (added in early March), expanded free access to file creation and skills, and a new constitution published in January outlining Claude’s core values for safer, more transparent behaviour.
