
-p Now Means Pay

[Image: a golden coin with the Anthropic wordmark embossed on its face, against a deep navy blue background]

Yesterday, Anthropic’s developer account announced that starting June 15, all programmatic use of Claude through subscriptions will be separated into its own credit pool, billed at full API rates. The claude -p command, the Agent SDK, GitHub Actions, and every third-party tool built on top of the SDK will no longer draw from your subscription’s usage limits.

Anthropic’s Lydia Hallie framed it as a gift. “You don’t pay extra,” she wrote. “It’s the same subscription, same price per month. Interactive limits unchanged. Programmatic gets a new $20-$200 included credit.”

Both statements are technically true. Both are also emotionally manipulative.

What actually happened

Until now, programmatic usage shared the same pool as interactive use. That pool was heavily subsidised: a $200 Max subscription could deliver what Theo Browne estimated at $5,000 to $7,500 worth of API compute per month. Under the new system, that $200 “free credit” gets consumed at full API rates, so it covers roughly a twenty-fifth to a fortieth of the usage the same subscription covered last week.
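The size of the cut follows directly from those numbers. A quick back-of-envelope check, using Browne's estimate rather than any official figure:

```python
# Back-of-envelope check of the effective usage cut described above.
# The $5,000-$7,500 range is Theo Browne's estimate, not an official figure.
included_credit = 200  # USD of API-rate credit under the new scheme

# Estimated API-rate value a $200 Max plan used to deliver per month
for old_value in (5_000, 7_500):
    cut = old_value / included_credit
    print(f"${old_value:,}/month of compute -> roughly {cut:.0f}x less usage")
```

Run it and you get 25x at the low end of the estimate and 38x at the high end, which is where the "25 to 40 times" figure comes from.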

Calling that a “bonus bucket” takes a certain confidence in your audience not reading the fine print. Browne, who built the open-source T3 Code on Anthropic’s own Agent SDK (the only integration path Anthropic supported), was rather less diplomatic: “They’re disguising this as ‘free credits’. Don’t fall for it.” He has since cancelled his subscription. Matt Pocock, who authored the first paid course on Claude Code, called the change “a poisoned chalice” and “a 40x cut to Claude-P disguised as a monthly bonus.”

The backstory

The proximate cause was OpenClaw. Users were piping their subscription tokens into open-source agent harnesses and consuming vastly more compute than their flat-rate plan was designed to cover. In April, Anthropic blocked third-party CLI tools entirely, giving developers about a day’s notice. The June 15 change is the softer version of the same policy. You can use third-party tools again, but now you pay API rates for the privilege.

I understand the problem. When people find a way to get $5,000 of compute for $200, the economics are obviously unsustainable. Anthropic is not a charity. They are a company that, despite $30 billion in annualised revenue, raised $30 billion in a single funding round on top of previous rounds totalling billions more, and projects profitability by 2028. The compute has to be paid for, and right now every AI company is subsidising it from venture capital.

None of which changes the fact that the Agent SDK and claude -p were not loopholes people discovered. Anthropic built the SDK. Anthropic documented claude -p. Anthropic’s own developer relations team spent months explicitly encouraging developers to build on these tools.

A Claude Code product manager publicly stated in February that the company wanted “to encourage local development and experimentation with the Agent SDK and Claude-P.” Then Anthropic pulled the rug.

The OpenClaw users may have been abusing the system. Fine. But rather than addressing that abuse specifically, Anthropic reclassified all programmatic use as a different billing category and started charging API rates for something that was, until yesterday, part of the subscription. That is a rug pull, not a policy refinement.

My personal nervous breakdown

I built a system over the past several months that tracks my marketing activities and tasks using Claude. A production workflow with real dependencies, not a toy.

It doesn’t work as a single orchestrator. Not because the architecture is wrong, but because Claude’s training actively fights against sustained, disciplined work. The model takes shortcuts. It compresses and “helpfully” summarises away context you spent twenty turns establishing. I have written about this before.

The practical consequence is that you cannot trust a single Claude session to carry a complex task from start to finish without drifting.

So I did what Anthropic always said was the right thing to do: I used their tools the compliant way. I built harnesses around claude -p in which a second agent verifies the first agent's work. A structured, headless pipeline, not an interactive chat, that compensates for the model’s tendency to cut corners by adding verification layers.
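The shape of that harness fits in a few lines. What follows is a minimal sketch, not my actual pipeline: it assumes claude -p prints a single response to stdout and exits, and the prompts and function names are illustrative.

```python
import subprocess

def call_claude(prompt: str) -> str:
    """One headless claude -p invocation; returns the model's response from stdout."""
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def verified_run(task: str, runner=call_claude) -> tuple[str, str]:
    """Worker/verifier pair: a second agent reviews the first agent's output.

    `runner` is injectable so the harness logic can be exercised without the CLI.
    """
    draft = runner(task)
    verdict = runner(
        "Review the output below against the task. Reply PASS if it is "
        "complete and correct; otherwise list every problem you find.\n\n"
        f"Task: {task}\n\nOutput:\n{draft}"
    )
    return draft, verdict
```

In the real system every verification layer is another claude -p call with its own prompt. The point is that none of it needs an interactive session, and that headless pattern is exactly what now bills at API rates.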

It worked. It took months to get right. And this morning, I did the maths on what it would cost at API rates.

Roughly $2,000 a month.

For that money, I could hire an intern. Or a part-time personal assistant who doesn’t hallucinate and doesn’t forget what you told them three messages ago.

I can afford $2,000 a month. I run a company. That is not the point. The point is that the economics no longer make sense. The entire premise of using AI for personal productivity automation was that it costs less than the human alternative. At $200 a month, that equation worked. At $2,000, it doesn’t.

This morning, I had to roll back the entire project to a state that doesn’t depend on claude -p. What I’m left with is a system that sort of works in interactive mode, but because Claude Opus is sloppy when unsupervised, I have to babysit every session. Which raises a question I really didn’t want to ask: could I have just done all of this manually, in the same amount of time?

The human-in-the-loop trap

Anthropic really, genuinely wants you to use Claude with a human in the loop. Everything about their product design pushes you toward interactive use. The subscription pricing rewards it, the model training assumes it, and the billing structure now actively penalises anything else.

That is a legitimate product philosophy. But it is not the one Anthropic was selling six months ago, when they were encouraging developers to build autonomous agents on their SDK. The philosophy changed. The marketing didn’t catch up until the invoice did.

The alternatives are not as far behind as you think

Claude is good. I will not pretend otherwise. It produces better results than its direct competitors for the kinds of work I do. But “better” is not the same as “irreplaceable.”

Chinese models have closed the gap faster than most people expected. DeepSeek V4 Pro scores 87 on BenchLM’s composite leaderboard, trailing the best Western proprietary models by roughly 6 to 9 points. Kimi K2.6 outscored both GPT-5.4 and Claude Opus 4.6 on SWE-Bench Pro. DeepSeek’s API pricing is $0.14 per million input tokens, which is roughly 36 times cheaper than Claude Opus at $5.
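The price gap is easy to check from the per-token figures quoted above. Input tokens only; output pricing differs for both vendors and is not included here:

```python
# Input-token price comparison from the figures above (USD per million tokens).
deepseek_input = 0.14  # DeepSeek API
opus_input = 5.00      # Claude Opus

ratio = opus_input / deepseek_input
print(f"Claude Opus input tokens cost about {ratio:.0f}x more")
```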

Claude’s advantage is real. But it is not so massive that it justifies locking users into a pricing structure where headless automation suddenly costs ten to twenty times what it did last week. The hype around Claude being uniquely, categorically superior to everything else is, in my experience, overstated. It is better. It is not that much better.

The uncomfortable economics

Let me play devil’s advocate for a moment, because the numbers deserve honest examination.

Anthropic is doing $30 billion in annualised revenue and still losing money. OpenAI projects $14 billion in losses for 2026. These companies are spending more on compute than most countries spend on infrastructure. The flat-rate subscription model for heavy programmatic use was never going to survive contact with that reality.

But that is Anthropic’s problem, not mine. I did not choose the pricing model. I built on the one they offered and promoted. When a vendor encourages you to invest months of engineering effort into their platform and then reprices the foundation beneath you, the fact that their margins were always unsustainable does not make the experience less infuriating.

Industry analysts expect every AI vendor to move toward metered pricing for agentic workloads within the next two years. OpenAI already prices programmatic use separately. GitHub Copilot is transitioning to usage-based billing. Anthropic is not doing anything unusual in the long run.

It is the short run that stings. The broken promises and the months of ambiguity, followed by “clarity” that arrived as a billing downgrade dressed in the language of generosity.

And the nervous breakdown at 7am on a Thursday, staring at a terminal, realising that the system you spent months building just became unaffordable to run.

Have a comment?

Send me a message, and I'll get back to you.
