
Anthropic thinks they're Apple. They're actually hypocrites.
AI Summary
The speaker criticizes Anthropic for its recent actions, specifically for advertising an iMessage plugin for Claude Code, which directly violates Apple's terms of service. This is highlighted as a significant act of hypocrisy, given Anthropic's own strict and often unclear terms of service for its products, and its recent legal actions against other developers for similar perceived violations.
The core of the issue is Claude Code and its subscription model. Claude Code is Anthropic's command-line coding tool for interacting with its AI models. While the Claude Code application itself is free, users pay for a subscription that grants access to the underlying models through specific endpoints. Anthropic's terms of service state that these subscription-based OAuth tokens are "intended exclusively for Claude Code and Claude.ai" and that "using OAuth tokens that are obtained through those methods and through those accounts in any other product, tool, or service... is not permitted and constitutes a violation of the consumer terms of service."
This policy prevents users from using their paid Claude Code subscriptions with alternative interfaces, or "harnesses," such as opencode, VS Code, or OpenClaw, even when users prefer those interfaces. The speaker argues that subscribers are primarily paying for usage of the AI models, not for the Claude Code interface itself, which is free. By restricting where subscribers can access the models, Anthropic locks users into its own ecosystem and shuts out competing developers who might offer better user experiences.
The speaker underscores the hypocrisy by contrasting Anthropic's iMessage plugin with its recent enforcement against opencode. Just a week and a half earlier, opencode was forced to remove a plugin that let users reach Claude Code's subscription endpoint through its own interface; Anthropic requested the removal directly, citing a violation of its terms. The new iMessage plugin, the speaker points out, does exactly the same thing: it accesses a service (iMessage) through an unauthorized interface, against Apple's terms of service.
Apple's terms of service for iMessage are very clear: iMessage is "intended for communicating with family and friends and is not for conducting commercial activities or disseminating unwanted messages." Furthermore, users agree not to "copy, decompile, reverse engineer, disassemble, attempt to derive the source code of, decrypt, modify or create derivative works of the Apple software or any services provided by the Apple software or any part thereof." An iMessage plugin for Claude Code would necessarily involve reverse engineering and unauthorized access to Apple's servers, directly violating these terms.
The speaker draws a parallel between Anthropic's desire to control access to its AI models and Apple's control over iMessage. Both companies want users to interact with their services through their designated applications. However, the speaker notes a crucial distinction: Anthropic is actively punishing others for doing what it is now promoting for its own product. This makes Anthropic "hypocritical bastards."
The speaker also highlights the difficulty faced by developers like Matt PCO, who have spent weeks trying to get clear guidance from Anthropic on whether they may build open-source wrappers for Claude Code that let users apply their existing subscriptions. Anthropic's vague, non-committal responses are read as a deliberate strategy: by keeping the rules ambiguous, the company retains arbitrary control and can cut off users or developers whenever it sees fit.
The speaker concedes that companies have the right to build products as they see fit and to set their own terms of service, but argues that the hypocrisy here is egregious: Anthropic is actively promoting a product that violates another company's terms (Apple's) while simultaneously sending legal threats to developers who use Anthropic's own services in ways it disapproves of, even when those uses are often more reasonable for users.
In conclusion, the speaker expresses outrage at Anthropic's "absurd level of hypocrisy." They suggest that Anthropic's actions are more restrictive and anti-user than even Apple's, which is a company often criticized for its "walled garden" approach. The speaker hopes that Apple will take action against Anthropic for violating its terms, seeing it as a deserved consequence for Anthropic's hypocritical behavior. The speaker encourages users to support companies that are more transparent and user-friendly, such as OpenAI, GitHub, and Kilo, which allow users to utilize their subscriptions and models across various interfaces and tools, unlike Anthropic.