
Louis loses mind on Anthropic speedrun towards enshittification
AI Summary
The speaker, Louis Rossmann, discusses Anthropic, the company behind the AI model Claude, and its recent controversial practices. He highlights Claude's popularity, particularly Claude Code, which is useful for tasks ranging from software development to driver editing. Anthropic gained significant goodwill a few months prior for its stance against the Department of Defense's requests for mass surveillance of citizens and the use of AI in fully autonomous weapons. This positioned Anthropic as an "ethical AI" company, in contrast with OpenAI's decision to work with the DoD.
However, Rossmann criticizes Anthropic for seemingly abandoning this ethical image. He recounts an incident in which users were allegedly billed extra when Anthropic's system disliked how they were using the service, such as running BitTorrent or having certain file types, like Hermes files, in their projects. When users pointed this out, Anthropic reportedly refused refunds, acknowledging a "billing misrouting issue" while still not compensating the affected customers. Rossmann suggested chargebacks as a response.
He then draws a parallel to the thermostat company Tado, which attempted to introduce a monthly fee for its app, citing increased server costs. Many users protested, since these features had previously been included with their purchased thermostat. Tado described it as a "test" and stated that affected users would still have free access, a claim Rossmann disputes. He argues that Tado was testing users' willingness to pay for previously free features, using the prompt for credit card information as a gauge of that willingness. When caught, Tado, like many companies, cited routine marketing research and customer feedback.
Applying this logic to Claude, Rossmann discusses a tweet suggesting that $20 Claude Pro users would soon lose access to the Opus models, Anthropic's most advanced, unless they purchased extra usage. Anthropic dismissed this as an outdated support article, but Rossmann questions why such an article would remain on their website if it were irrelevant. He points out that Anthropic removed Claude Code access for some users on the $20 plan, even though it was previously available. Anthropic admitted to a "small test" on 2% of Pro user signups, affecting only certain subscribers, to gauge willingness to pay more for the same service.
Rossmann strongly condemns this practice of "testing" users to determine how much money can be extracted from them. He contrasts it with his own business, where he might offer discounts to customers facing hardship. He argues that Anthropic's actions are deceptive and damage its reputation, especially given its prior "ethical AI" branding. Instead of sneakily billing users or running experiments on them, he suggests, Anthropic should be transparent about pricing changes or feature limitations.
He reflects on how such behavior would have destroyed his own business's goodwill if he had acted similarly after gaining media attention. He criticizes Anthropic for "speedrunning" the process of eroding customer trust, turning a significant PR advantage into a major setback within months. He believes that whoever is making these decisions at Anthropic is making a grave business error, even from a purely strategic standpoint.
Rossmann concludes by stating that he is canceling his own Anthropic subscriptions over these practices. He emphasizes the difference between genuine customer support and manipulative testing, and encourages viewers to share their thoughts on this pricing and testing methodology. He reiterates that using deceptive tactics to extract more money from customers, especially after claiming to be an ethical company, is a fundamental betrayal of trust and a poor business strategy. Even if companies cannot fire the individuals responsible for such decisions, he suggests, they should be moved to roles where they can do no further damage.