
I promise this isn't a joke
AI Summary
This podcast episode, titled "The Theo Ben Podcast Network," focuses on the recent controversies surrounding Anthropic, particularly the string of issues dubbed "Anthropic Week." The hosts, Theo and Ben, walk through a series of mistakes made by Anthropic, including a Claude Code source code leak, subscription debacles, and poor communication strategies.
Theo recounts how he quickly downloaded the leaked source code and ran a GPT 5.4 extra-high setup against it, resulting in a working version of Claude Code, which he found subpar for actual work but amusing for a screenshot. He expresses frustration with Anthropic's perceived inability to admit fault, suggesting its actions may be driven by a desire to avoid validating his previous criticisms. Ben adds that Anthropic's communication strategy seems built on an outdated assumption of universal positive sentiment, a delusion that persists among some employees despite the shift in public opinion.
The discussion highlights Anthropic's declining standing in the developer world, contrasting it with OpenAI's recent efforts to be more cooperative and responsive to feedback. Theo shares an anecdote about a friend at OpenAI proactively offering to clarify that Theo wasn't paid by OpenAI, illustrating OpenAI's focus on positive optics and its responsiveness to criticism. They note that while other labs, particularly the Chinese ones, are making an effort to be pleasant to work with, the frontier models (GPT 5.4 and Anthropic's Opus) still dominate, making OpenAI and Anthropic the most relevant players.
The hosts then walk through the timeline of Anthropic's recent blunders. It began with a rate-limit crunch: Anthropic reduced subscription rate limits during peak hours without adequate prior announcement, leaving many users unable to make effective use of their subscriptions. The lack of official communication forced users to rely on unofficial sources like Theo and Ben for information, and the hosts point out the absurdity of needing to follow individual developers rather than official channels for critical updates.
Next, they discuss the alleged bug in Claude Code that caused cache misses, leading to increased token usage and higher costs for users. They explain how prompt caching works in language models: the prior message history is processed once and cached as a prefix, so subsequent requests only pay for the new tokens. Any change early in that history, such as a timestamp embedded in the system prompt, invalidates the cached prefix and forces a full, more expensive recomputation. They also note that Anthropic is unusual in charging for cache writes, unlike OpenAI, which offers automatic caching without such charges.
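The cache-busting mechanism described above can be illustrated with a toy model. This is a hypothetical sketch, not Anthropic's actual caching implementation: it treats the cache as a set of hashed prompt prefixes, so a hit means that leading stretch of the conversation was already processed, and any change to the very first tokens (like a timestamp in the system prompt) guarantees zero hits.

```typescript
// Toy model of prompt prefix caching (an illustration, not any provider's real system).
import { createHash } from "node:crypto";

const cache = new Set<string>();

function hashPrefix(messages: string[]): string {
  return createHash("sha256").update(messages.join("\u0000")).digest("hex");
}

// Process a conversation: each prefix already in the cache is "free";
// each new prefix is a miss that must be (re)computed and paid for.
function processConversation(messages: string[]): { hits: number; misses: number } {
  let hits = 0;
  let misses = 0;
  for (let i = 1; i <= messages.length; i++) {
    const key = hashPrefix(messages.slice(0, i));
    if (cache.has(key)) {
      hits++;
    } else {
      misses++;
      cache.add(key);
    }
  }
  return { hits, misses };
}

// Stable system prompt: turn 2 re-sends turn 1's history, and the shared prefix hits.
const stable = "system: You are a coding agent.";
processConversation([stable, "user: fix the bug"]); // turn 1: all misses, now cached
const t2 = processConversation([stable, "user: fix the bug", "assistant: done", "user: add tests"]);
console.log(t2); // only the new tail misses

// A timestamp in the system prompt changes the first tokens, so no prefix ever matches.
const stamped = (t: number) => `system: You are a coding agent. Time: ${t}`;
processConversation([stamped(1), "user: fix the bug"]);
const t4 = processConversation([stamped(2), "user: fix the bug", "assistant: done", "user: add tests"]);
console.log(t4); // zero hits: the entire history is recomputed, and billed, every turn
```

Under this model, the bill scales with misses, which is why a single volatile token at the top of the prompt is so costly.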
The conversation moves to the Claude Code source code leak, which Theo and Ben attribute to Anthropic's poor engineering practices. They reveal that every publication of Claude Code to npm has been done manually from a team member's machine rather than through a continuous integration (CI) pipeline. This manual process, combined with a failure to properly clean build directories between builds, led to source maps being accidentally included in the published package, effectively exposing the entire source code. They contrast this with their own small team's disciplined CI-based publishing, emphasizing Anthropic's apparent laziness despite its larger team and experienced developers.
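The failure mode the hosts describe, stale `.map` files riding along into a published package, is exactly the kind of thing a CI gate catches. Below is a minimal sketch of such a pre-publish check (a hypothetical script, not Anthropic's or the hosts' tooling): it scans the build output for source maps and refuses to publish if any are found.

```typescript
// Hypothetical pre-publish guard: fail CI if source maps would ship in the npm package.
import { mkdirSync, readdirSync, rmSync, statSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Recursively collect any .map files under a directory.
function findSourceMaps(dir: string): string[] {
  const found: string[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) found.push(...findSourceMaps(full));
    else if (entry.endsWith(".map")) found.push(full);
  }
  return found;
}

// Demo: simulate a build directory that was never cleaned between builds.
rmSync("demo-dist", { recursive: true, force: true });
mkdirSync("demo-dist");
writeFileSync(join("demo-dist", "cli.js"), "console.log('hi');\n");
writeFileSync(join("demo-dist", "cli.js.map"), "{}\n"); // leftover map = the whole source

const offenders = findSourceMaps("demo-dist");
const safeToPublish = offenders.length === 0; // in CI, a false here would exit non-zero
console.log(offenders, safeToPublish);
```

A `files` allowlist in `package.json` (or a clean build in CI before `npm publish`) would prevent the same class of leak; the point is that the check runs on a machine nobody can forget to clean.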
Following the leak, Anthropic announced, with less than 24 hours' notice, that subscriptions could no longer be used for anything other than its official Claude Code tools. Theo and Ben speculate that this decision was driven by a GPU shortage and a desire to consolidate usage within Anthropic's own tools, potentially to subsidize its own product and entice former employees Boris and Cat back to the company. They argue that the initial decision to heavily subsidize Claude Code subscriptions was a mistake, a marketing expense aimed primarily at retaining talent, and that all the subsequent "optics disasters" stem from that misstep.
They also critique Anthropic's inconsistent and unclear communication regarding the use of its agent SDK and custom wrappers. Despite an initial, seemingly clear confirmation from an Anthropic employee that custom wrappers for local use were allowed, subsequent clarifications have introduced ambiguity, leaving developers like Matt Pocock, who built an entire course around Claude Code, in limbo about whether their projects are permitted under Anthropic's terms.
The discussion then turns to the broader implications of Anthropic's actions, particularly its aggressive use of Digital Millennium Copyright Act (DMCA) takedowns. Theo recounts receiving a false DMCA strike on GitHub for a one-line change to the public Claude Code repository, not the leaked source code. He notes that Anthropic has become the most prolific copyright striker on GitHub, responsible for some 8,100 invalid repository takedowns. He attributes this to sloppiness, possibly agents filing strikes without manual review, and highlights the severe personal and professional consequences of false DMCA strikes, especially on platforms like YouTube, where three strikes can mean a permanent loss of income.
Finally, Ben introduces "Pi," a minimalist coding-agent harness that he finds superior to Claude Code because of its simplicity and extensibility. Unlike Claude Code, which sends tens of thousands of tokens to the API with every request due to its overloaded system prompt and numerous tools, Pi ships only four basic tool calls and a concise system prompt. Ben praises Pi's ability to be customized by the agent itself and its self-healing capabilities. He contrasts this with the token-hungry nature of Claude Code, arguing that "less tools is better" and that excessive token usage, even with context windows as large as Opus's 1 million tokens, makes agents perform worse.
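The "four small tools, tiny system prompt" design can be sketched in a few lines. The episode summary doesn't enumerate Pi's actual tools or API, so the tool names here (read, write, list, bash) and the dispatch format are assumptions for illustration; the point is how little per-request overhead such a harness carries.

```typescript
// Sketch of a minimalist agent harness in the spirit of Pi (tool names and the
// `name: args` calling convention are assumptions, not Pi's actual interface).
type Tool = { name: string; description: string; run: (args: string) => string };

const tools: Tool[] = [
  { name: "read",  description: "read a file",         run: (p) => `<contents of ${p}>` },
  { name: "write", description: "write a file",        run: (p) => `wrote ${p}` },
  { name: "list",  description: "list a directory",    run: (p) => `<entries of ${p}>` },
  { name: "bash",  description: "run a shell command", run: (c) => `<output of ${c}>` },
];

// The entire system prompt: one instruction line plus one line per tool.
const systemPrompt = [
  "You are a coding agent. Call a tool by replying `name: args`.",
  ...tools.map((t) => `- ${t.name}: ${t.description}`),
].join("\n");

// Route a model reply of the form "name: args" to the matching tool.
function dispatch(reply: string): string {
  const i = reply.indexOf(":");
  const name = reply.slice(0, i).trim();
  const args = reply.slice(i + 1).trim();
  const tool = tools.find((t) => t.name === name);
  return tool ? tool.run(args) : `unknown tool: ${name}`;
}

// Rough per-request overhead, using the common ~4-characters-per-token heuristic:
const overheadTokens = Math.ceil(systemPrompt.length / 4);
console.log(overheadTokens, dispatch("read: src/index.ts"));
```

With four one-line tool descriptions, the fixed overhead is on the order of tens of tokens per request, versus the tens of thousands the hosts attribute to Claude Code, which is the concrete content of Ben's "less tools is better" claim.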
The hosts conclude by voicing concern about the lack of competition for OpenAI and hoping that Anthropic will improve. They note OpenAI's strategy of learning from others' mistakes and being pleasant to work with, contrasting it with Anthropic's perceived arrogance and unwillingness to listen to feedback. They close by emphasizing the importance of open-source initiatives and OpenAI's generous approach of providing free inference to open-source developers, a strategy Theo claims to have advocated for.