A viral LinkedIn post by Alex Banks lit up feeds with a staggering claim: one power user, @jumperz on X, consumed $27,000 worth of computing power on Anthropic’s Claude while paying just $200 a month.

The post framed it as proof that AI business models are broken, that Anthropic “burns 70 cents of every dollar it brings in,” and that the industry is heading for a reckoning.

The post spread fast and ignited a debate about how exposed flat-rate AI subscriptions are to this kind of heavy use.

The $27,000 number is inflated by design

We went back to @jumperz’s X account and found that he, too, was frustrated by the outage.

If the actual compute cost is approximately 10% of the API price, then @jumperz’s 1.1 billion tokens cost Anthropic closer to $2,700, not $27,000. Still a loss on a $200 subscription. But a very different story from the one that went viral.

In fact, Anthropic’s own data suggests the average Claude Code developer uses about $6 per day in API-equivalent compute. At the estimated 10% real cost, that is roughly 60 cents a day to serve. Against a $20 monthly subscription, the average user is profitable on inference alone. The company’s losses come from the massive fixed costs of staying on the frontier, not from serving your chat messages.
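The arithmetic behind these claims is simple enough to sketch. The snippet below applies the article's assumed 10% cost ratio to both the viral $27,000 figure and the $6-per-day average; the ratio itself is an estimate, not a figure Anthropic has confirmed.

```python
# Back-of-envelope unit economics using the article's figures.
# Assumption: true inference cost is roughly 10% of the API list price.
COST_RATIO = 0.10

def real_cost(api_equivalent_usd: float) -> float:
    """Estimated cost to serve, given API-equivalent usage in dollars."""
    return api_equivalent_usd * COST_RATIO

# The viral case: $27,000 in API-equivalent usage on a $200/month plan.
heavy_user_cost = real_cost(27_000)   # ~$2,700: still a loss, but 10x smaller

# The average Claude Code developer: ~$6/day in API-equivalent compute.
avg_monthly_cost = real_cost(6) * 30  # ~$18/month to serve

# Against a $20 monthly subscription, the average user covers inference.
print(heavy_user_cost, avg_monthly_cost, avg_monthly_cost < 20)
```

Under these assumptions the heavy user is a roughly $2,500 monthly loss, while the average subscriber is marginally profitable on inference alone, which is the article's point: the viral number measured list price, not cost.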

So why is Claude actually limiting usage?

Anthropic itself acknowledged that “people are hitting usage limits in Claude Code way faster than expected” and called it “the top priority for the team.” Some users reported that rolling back to an earlier version of Claude Code eased the problem.

For the AI giant, tightening peak-hour limits, introducing weekly caps, blocking third-party exploits, and nudging heavy users toward metered pricing is not the desperate flailing of a company on the verge of collapse.

In late February 2026, OpenAI signed a contract with the US Department of Defense, which sparked the QuitGPT movement. The movement attracted 2.5 million participants, and Claude shot to number one on the US App Store. Anthropic’s web traffic jumped over 30% in a single month.

Then came Claude Code, agentic workflows, computer use, and autonomous task execution. Developers began running 6 to 15 Claude Code instances in parallel, each sending full codebase context with every prompt.

In March 2026, Anthropic rolled out a one-million token context window for its latest models. This meant each interaction could consume dramatically more compute. 

From March 13 to March 28, Anthropic doubled usage limits during off-peak hours. When the promotion expired, many users experienced what felt like a dramatic cut. In reality, limits were reverting to baseline, or slightly below it.