
Be careful what you name your markdown files...
AI Summary
The speaker expresses extreme frustration and disappointment with Anthropic, particularly regarding their Claude Code product and billing practices. This is not the first time the speaker has criticized Anthropic, but recent events have pushed their frustration to new heights, highlighting what they perceive as a profound level of incompetence and disdain for engineers and users.
Previously, Anthropic was criticized for billing users differently based on the system prompt used in Claude Code, specifically to discourage the use of third-party tools like OpenClaw. The speaker acknowledges that there might be some financial motivation for this, as running OpenClaw through a separate service can incur significant inference costs. However, they argue that users should not be penalized or charged extra for using their subscription with different tools, especially if they haven't exhausted their allotted usage.
The situation has escalated, with Anthropic now allegedly billing users based on file names in their Git commits. A specific incident is cited where a user, subscribed to the $200/month Max 20x plan for Claude Code, was billed an additional $200 without using any of their allocated usage. This happened because a recent Git commit message contained the term "Hermes MD." The speaker finds this "genuinely, hilariously, and pathetically" incompetent.
The Max 20x plan is described as very generous: used continuously, it can deliver up to $2,000 worth of inference for $200/month, roughly a 10x discount against API prices. Anthropic's API charges $5 per million input tokens and $25 per million output tokens for their flagship Opus 4.5 model. The speaker suggests that Anthropic dislikes users approaching these limits, especially through automated tools, because heavy usage erodes the margin earned from subscribers who pay but barely use the service.
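The 10x figure can be sanity-checked with simple arithmetic. The prices below are the summary's stated Opus rates; the monthly token mix is an illustrative assumption, not measured data:

```python
# Sketch: compare API-metered cost against the Max 20x subscription price.
# Prices are the summary's figures; the token volumes are illustrative.
INPUT_PRICE_PER_M = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per million output tokens
SUBSCRIPTION = 200.00       # USD per month, Max 20x plan

def api_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Cost in USD of a month's usage if billed at raw API prices."""
    return input_tokens_m * INPUT_PRICE_PER_M + output_tokens_m * OUTPUT_PRICE_PER_M

# A heavy agentic user might push e.g. 300M input / 20M output tokens a month.
monthly = api_cost(300, 20)        # 300*5 + 20*25 = 2000 USD at API rates
discount = monthly / SUBSCRIPTION  # 10.0 -- the "10x discount" the speaker cites
```

At that usage level the subscriber receives ten dollars of API-priced inference for every dollar paid, which is exactly the margin pressure the speaker says Anthropic is reacting to.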
The speaker attempts to "steelman" Anthropic's argument, suggesting that caching might be a legitimate concern. In LLM serving, prompt caching saves the model's computed state for a prompt prefix (such as a long chat history) so that subsequent requests sharing that prefix don't recompute it, reducing compute costs and improving response times. Without a cache hit, every input token must be reprocessed on each request, which is why input tokens are expensive. If a third-party tool mishandles caching, it could drive up Anthropic's compute costs. Claude Code, hypothetically, should cache better than a generic harness like OpenClaw because it's tuned for Anthropic's specific servers.
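The economics described above can be illustrated with a toy prefix cache. This is a deliberately simplified model of server-side KV caching; the class, the word-based "token" count, and the cost accounting are invented for illustration:

```python
import hashlib

class ToyPromptCache:
    """Toy model of prompt-prefix caching: pay full recompute cost for a
    prefix once, then near-zero cost for later requests reusing that prefix."""

    def __init__(self):
        self._store = {}  # prefix hash -> precomputed "state" (stand-in for a KV cache)

    def process(self, prefix: str, suffix: str) -> int:
        """Return how many tokens actually had to be recomputed.
        Tokens are approximated here as whitespace-separated words."""
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self._store:
            recomputed = len(suffix.split())                    # hit: only new tokens
        else:
            recomputed = len((prefix + " " + suffix).split())   # miss: everything
            self._store[key] = "kv-state"                       # write the cache entry
        return recomputed

cache = ToyPromptCache()
history = "system prompt plus a long chat history " * 50
first = cache.process(history, "new user turn")   # miss: full recompute
second = cache.process(history, "another turn")   # hit: only the new turn
```

The gap between `first` and `second` is the entire value proposition of prompt caching, and it only materializes when the harness resends a byte-identical prefix before the cache entry expires.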
However, the speaker contends that Anthropic's own caching systems are deeply flawed. They point out that Anthropic recently and quietly reduced their cache time-to-live from one hour to five minutes. Furthermore, the speaker reveals that their own service, T3 Chat, which uses Anthropic models and processes billions of tokens monthly, runs up absurdly high bills, roughly half of which comes from "prompt cache writes": Anthropic charges for writing to the cache, unlike Google or OpenAI, who (per the speaker) cache for free. The speaker's team even stopped caching Anthropic models entirely, and the bill didn't meaningfully change, suggesting Anthropic's caching is both ineffective and costly. This directly undercuts Anthropic's claim that third-party harnesses are problematic because they cache poorly.
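Paid cache writes change the break-even math. The multipliers below follow Anthropic's documented pricing model (a cache write billed at roughly 1.25x the base input rate, cache reads at roughly 0.1x), but treat the exact factors and token counts as assumptions for this sketch:

```python
# Sketch: when does paid cache writing pay off?
# Assumed multipliers: cache write ~1.25x base input price, cache read ~0.1x.
BASE = 5.00        # USD per million input tokens (the summary's Opus figure)
WRITE_MULT = 1.25  # premium paid to create a cache entry
READ_MULT = 0.10   # discounted rate for tokens served from cache

def cost_without_cache(prefix_m: float, requests: int) -> float:
    """All requests pay full input price for the shared prefix."""
    return prefix_m * BASE * requests

def cost_with_cache(prefix_m: float, requests: int) -> float:
    """One paid write, then discounted reads -- valid only if every request
    lands inside the cache TTL; otherwise the write is paid again."""
    return prefix_m * BASE * WRITE_MULT + prefix_m * BASE * READ_MULT * (requests - 1)

# A 1M-token prefix hit by 10 requests inside the TTL:
no_cache = cost_without_cache(1, 10)  # 50.0 USD
cached = cost_with_cache(1, 10)       # 6.25 + 4.5 = 10.75 USD
```

The catch is the TTL: with a five-minute window instead of an hour, any agent loop slower than the TTL keeps re-paying the 1.25x write premium without ever collecting the 0.1x reads, which is consistent with the speaker's report that disabling caching barely changed their bill.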
The core of the recent issue stems from Anthropic's efforts to "ban" or discourage third-party tools by detecting specific strings in the system prompt. Claude Code offers a `-p` flag that lets users pass prompts programmatically, which is what enables integration with other tools like OpenClaw; the same interface also allows appending to the system prompt. Anthropic, in response, began penalizing users whose system prompts contained terms like "OpenClaw". The penalty manifests as an "API error: invalid request" or, if "extra usage" is enabled, as direct billing for requests that should be covered by the subscription, even when usage limits haven't been reached.
The speaker demonstrates this by showing that merely having a recent Git commit message containing a specific string (e.g., "openclaw inbound meta v1 thing") is enough to trigger this punitive billing, even without any changes to the system prompt or actual usage of a third-party tool. This happens because Claude Code automatically pulls recent Git history into its system prompt to aid the model. Anthropic's "third-party harness detection" then scans this internal system prompt.
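Based on that description, the failure mode can be reconstructed as a sketch. Everything here is speculative: the function names, the trigger list, and the prompt assembly are invented for illustration and are not Anthropic's actual code:

```python
# Speculative reconstruction of the described bug: a harness blocklist is
# scanned against the *assembled* system prompt, which includes recent git
# history -- so an innocent commit message alone can trip detection.
HARNESS_TRIGGERS = ["openclaw", "hermes.md"]  # hypothetical blocklist

def build_system_prompt(base: str, recent_commits: list[str]) -> str:
    """Claude Code reportedly injects recent git history to give the model context."""
    return base + "\nRecent commits:\n" + "\n".join(recent_commits)

def third_party_harness_detected(system_prompt: str) -> bool:
    """Naive case-insensitive substring scan -- the class of check being criticized."""
    lowered = system_prompt.lower()
    return any(trigger in lowered for trigger in HARNESS_TRIGGERS)

# User runs Claude Code directly; no third-party tool is involved anywhere:
prompt = build_system_prompt(
    "You are Claude Code.",
    ["fix CI flake", "add HERMES.md system prompt spec"],  # innocent commit message
)
flagged = third_party_harness_detected(prompt)  # True: user is billed as a harness user
```

The sketch makes the design flaw concrete: the detector has no way to distinguish "user is running a third-party harness" from "user's repository merely mentions one", because both end up as substrings of the same assembled prompt.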
The specific case with "Hermes MD" involved a user who had this string in their commit history. "Hermes Agent" is a new open-source alternative to OpenClaw, and "HERMES.md" is a system-prompt spec file used in Hermes Agent projects. Despite using Claude Code directly, the user was erroneously billed $200 because the "Hermes MD" string in their commit history triggered Anthropic's detection mechanism while their "extra usage" billing was enabled.
Anthropic's response to this specific incident, through a representative named Thor, acknowledged it as a "bug with the third-party harness detection" combining poorly with how Git status is pulled into the system prompt. They offered a refund and credits. While the response itself was deemed "human" and "totally fine," the speaker finds the existence of such a bug "egregious." They argue that certain classes of bugs indicate a fundamental flaw in the underlying approach and necessitate a complete re-evaluation of the software's design, rather than just a fix. The speaker believes that Anthropic's intense desire to lock users into Claude Code, driven by a "hatred of third-party harnesses," is causing the product itself to rot.
The speaker extends their criticism to the entire Anthropic engineering culture, describing it as a "toxic cesspit" and "a cult." They attribute this to the CEO, Dario, whom they believe has a deep disdain for software developers. The speaker claims Dario left OpenAI because he disliked working under engineers (Sam Altman and Greg Brockman). This disdain, the speaker argues, trickles down, causing engineers at Anthropic to dislike their users, leading to the current problematic product and practices. The speaker asserts that engineers at Anthropic are seen as "pawns" who exist only to enable fundraising for researchers.
The speaker also highlights Anthropic's unreliable API billing services, citing an incident in which automatic re-billing of T3 Chat's account failed on Anthropic's side, leaving users unable to use Anthropic models and shown erroneous billing-error messages even though it was Anthropic's own system that failed to process the payment.
In conclusion, the speaker declares that their restraint and attempts at being "nice" are over. They challenge Anthropic employees, particularly those working on Claude Code, to reflect on their choices, suggesting they are either knowingly participating in a "fucked up system" for financial gain or are "drinking the Kool-Aid." The speaker, who spends $40,000 a month on Anthropic services, contrasts this with OpenAI, where they spend less and get better service, better support, and a willingness to listen to feedback and fix issues. The speaker feels that Anthropic now actively ignores their feedback out of perceived animosity, and closes with a direct, expletive-laden condemnation of the company.