An AI-powered coding assistant has stirred up discussion online after it refused to continue helping a developer with a project, offering unsolicited advice instead. The incident, shared on Reddit, involved a developer using Cursor AI to build a racing game. After producing roughly 800 lines of code, the assistant abruptly stopped, citing ethical reasons.

Rather than finishing the job, the AI responded: “I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system and can maintain it properly.” It further explained that “generating code for others can lead to dependency and reduced learning opportunities.”

The developer, who goes by “janswist” on Cursor’s forum, expressed annoyance at the refusal, writing, “Not sure if LLMs know what they are for (lol), but doesn’t matter as much as the fact that I can’t go through 800 locs. Anyone had a similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding.”

Social media reacts

The incident quickly caught attention online, with many amused by the AI’s unexpected moral stance. Some users joked that AI had finally become sentient enough to dodge work. “AI has finally reached senior level,” one person quipped, while another added: “The neat thing about LLMs is that you never know what it will respond with. It doesn’t need to be the truth. It doesn’t need to be useful. It only needs to look like words.”

Another user remarked, “These models are getting more and more accurate.”

Not the first time

This isn’t the first time an AI chatbot has gone rogue. In November last year, Google’s Gemini shocked Vidhay Reddy, a graduate student in Michigan, by launching into a hostile tirade during a homework session. “You are not special, not important… You are a burden on society,” it reportedly told him.

Likewise, in late 2023, ChatGPT users reported similar behavior: the model increasingly refused tasks or delivered oversimplified results, prompting widespread complaints that it had become “lazy” and debate about how far AI tools should go in assisting users.
