
We all fell for it…
AI Summary
The speaker discusses the impact of AI in coding, acknowledging its significant productivity gains while expressing concerns about potential skill atrophy among developers. They highlight the paradox of AI coding tools: while they can generate thousands of lines of code daily, reliance on them may lead to a decrease in core coding abilities and critical thinking.
The speaker shares their personal experience, noting that they now spend less time writing code directly, instead focusing on prompt engineering and data analysis for AI agents. While they've improved in areas like Git and SSH, they also feel some coding skills atrophying, leading to a tendency to "roll the slots" by re-running AI prompts when something doesn't work, hoping for a correct output.
A sponsored segment for Browserbase is introduced, emphasizing its utility for AI agents that frequently make cURL requests to access web content. Browserbase's new fetch feature and ability to spin up real cloud browsers address common failures agents face when encountering JavaScript-dependent pages, CAPTCHAs, or firewalls. It also offers enhanced search capabilities, providing agents with better context.
Returning to the main topic, the speaker reviews an article by Lars Fay titled "Agentic Coding is a Trap," which focuses on "cognitive debt" rather than "technical debt." The speaker finds this angle particularly interesting, as they believe AI is excellent at solving technical debt by automating tedious tasks like lint rule updates or package migrations that previously required immense manual effort. However, cognitive debt, the loss of deeper understanding and critical thinking, is a more pressing concern.
The article critiques the industry's hype around "inspect-driven development," where humans orchestrate and review AI-generated code without engaging in the implementation details. The speaker questions the meticulousness of plans developers claim to create if they aren't actively monitoring the agent's execution.
Key trade-offs of coding agents are identified:
1. **Increased system complexity** due to AI's non-determinism.
2. **Atrophying of skills** across a broad developer population.
3. **Vendor lock-in** for individuals and teams, with cloud outages causing standstills.
4. **Fluctuating and increasing costs** for AI tools, contrasting with fixed employee costs.
The speaker challenges the notion of consistently increasing AI costs. While overall token usage might rise due to increased AI adoption, the cost per unit of intelligence is decreasing. They cite data from Artificial Analysis showing that newer, more intelligent models like GPT-5.5 Medium offer the same intelligence as older, more expensive models at less than half the price. So while raw token counts may go up, the value derived per dollar spent on intelligence is improving significantly. They acknowledge, however, that companies still face legitimate cost increases on their bills as developers use more AI tools.
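The arithmetic behind "cost per unit of intelligence" can be sketched as follows. The numbers here are hypothetical placeholders, not figures from the video or from Artificial Analysis; the point is the calculation, not the values.

```python
# Hypothetical numbers, NOT figures from the video: the point is the
# arithmetic of "cost per unit of intelligence", not the specific values.
def cost_per_point(usd_per_1m_tokens: float, intelligence_score: float) -> float:
    """Dollars per benchmark point, per million tokens."""
    return usd_per_1m_tokens / intelligence_score

# An older model and a newer model with the same benchmark score:
older = cost_per_point(10.00, 60)  # older model: $10 per 1M tokens
newer = cost_per_point(4.00, 60)   # newer model: $4 per 1M tokens, same score

# Even if total token usage rises, spend per unit of intelligence falls:
print(round(newer / older, 2))  # 0.4, i.e. less than half the cost at equal capability
```

This is why rising token counts and falling cost-per-intelligence can both be true at once.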
The article emphasizes that only skilled developers with critical thinking and architectural understanding can spot issues in thousands of lines of generated code. Ironically, AI tools have been shown to negatively impact these very skills. The concept of cognitive debt is further explored, with mentions of Simon Willison and Martin Fowler describing developers getting lost in their projects and losing the connection between decisions, intent, and code.
The speaker offers a "hot take": if developers weren't already experiencing some level of cognitive debt, they weren't shipping fast enough. They relate this to their own experience of rapidly building and moving on from projects, often forgetting the details, which they attribute to their "vibe coder" approach long before AI. This ability to quickly build and understand systems at a high level, even if the specifics were forgotten, was once a unique and valuable skill. Now, AI allows many more people to operate this way, but without the foundational understanding, leading to problems when things break.
A critical point raised is that AI disincentivizes learning the "building blocks" of code. The immediate gratification of AI solving a problem bypasses the "pain" of feeling dumb and learning complex documentation or debugging. This is likened to learning to skateboard, where the initial pain and feeling of incompetence deter most people. The speaker notes a personal desire to feel dumb again and intentionally seeks out new, unfamiliar languages like Rust to force themselves to learn. They argue that a developer's willingness to "say no" to AI and push through the discomfort of learning is now a superpower.
The article addresses the argument that AI is "just another abstraction," similar to how assembly evolved into C or C++ into Python/JavaScript. The speaker initially thought this might be true but now believes natural language is fundamentally different. Previous abstractions, while reducing the need to know lower layers, still incentivized engineers to understand at least one layer above and below their primary focus. However, AI, through natural language prompts, creates a higher level of ambiguity rather than a clear abstraction layer.
Reddit posts illustrate the growing concern: developers with years of experience report feeling insecure, losing skills, and even having their companies enforce AI-only development, leading to brain fog and a decline in professional competence. The speaker contrasts this with their own experience as a team lead, where AI helps them maintain system understanding and guide their team, rather than replacing their foundational knowledge. They argue that less experienced engineers, without the prior "years of friction" to build deep understanding, are more susceptible to becoming addicted to the "slot machine" and struggling when AI fails.
The speaker also highlights the growing gap between great and less-great engineers, with AI accelerating this divide. Juniors learning with AI can ship fast but struggle to debug code they didn't write. The speaker advocates for using AI in debugging, sharing a personal anecdote of quickly resolving an outage using AI to analyze logs and generate SQL queries, but emphasizes that this only works when one already understands the system.
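The log-analysis workflow described above might look something like this in practice. The schema and data are invented for illustration; the speaker's actual incident, logs, and queries are not shown in the summary.

```python
# Hypothetical reconstruction of the kind of ad-hoc query the speaker
# describes: load request logs into SQLite and ask where errors cluster.
# Table schema and rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (endpoint TEXT, status INTEGER)")
conn.executemany("INSERT INTO logs VALUES (?, ?)", [
    ("/api/users", 200), ("/api/users", 500),
    ("/api/orders", 500), ("/api/orders", 500),
])

# Group 5xx responses by endpoint to find the source of the outage:
rows = conn.execute("""
    SELECT endpoint, COUNT(*) AS errors
    FROM logs
    WHERE status >= 500
    GROUP BY endpoint
    ORDER BY errors DESC
""").fetchall()
print(rows)  # [('/api/orders', 2), ('/api/users', 1)]
```

An AI can generate queries like this quickly, but interpreting the result (knowing that `/api/orders` matters and why) still depends on already understanding the system, which is the speaker's point.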
The article distinguishes current AI impacts from previous technological shifts, noting that past fears about new languages or compilers were speculative, whereas AI's negative effects on cognitive skills are already observable. The speaker agrees with the article's point that the natural progression of expertise, where senior engineers gained wisdom through decades of hands-on coding, is now threatened if we abdicate the friction of writing, problem-solving, and debugging. They express concern about the lack of a new wave of genuinely skilled senior engineers if this trend continues.
The speaker acknowledges their own unique background, having stepped into leadership and architectural roles early, forcing them to build strong system understanding and debugging skills. They believe these skills *can* still be built, but the incentives to do so are diminishing. They revisit their past stance on fundamental CS knowledge ("fundamentals don't matter until they do"), worrying that AI will prevent developers from ever encountering the friction that necessitates learning these fundamentals.
A key concern is that AI will exacerbate the problems of "bad devs" who avoid learning new things. These developers, who previously resisted adopting beneficial tools like React Query due to the perceived effort of learning, now have AI as a lever to bypass even more fundamental understanding. Conversely, the speaker holds hope that genuinely curious and motivated developers will leverage AI to accelerate their learning and growth.
The article notes that current trends are moving junior developers into high-level workflows requiring skills that senior engineers took decades to acquire, without the necessary foundation. The speaker adds that historically, maintaining large codebases required a certain level of intelligence and competence, a "friction" barrier. AI lowers this barrier, allowing less experienced individuals to manage complex systems they don't truly understand, leading to problems when things break.
Open-source maintainers are cited as an example of individuals outperforming average developers in the AI era, as they are accustomed to dealing with diverse code and learning from various contributions. Simon Willison, co-creator of Django, with 30 years of experience, also reports losing a firm mental model of his applications, making new features harder to reason about. This highlights the "paradox of supervision": effectively supervising AI requires the very coding skills that AI usage can atrophy.
The article then discusses how LLMs "accelerate the wrong parts" of coding. It argues that the industry didn't necessarily need faster code generation, especially of code that isn't fully understood or reviewable. A good developer's priority list, before AI, might have put understanding first, then adherence to standards, then conciseness, and only then turnaround time. The speaker disagrees with this universal list, emphasizing that "good dev" is subjective and can mean different things (e.g., meticulous testing, understanding users, novel solutions, or efficient code). However, they agree that AI often inverts the list: it prioritizes speed, and forcing speed lowers accuracy.
The speaker shares their "Theo method" of planning in code: starting with a minimal viable implementation to learn before writing a detailed spec. They find that this hands-on coding process is essential for effective planning. They agree with Dax (creator of opencode) that typing code is often how developers figure out what to do. While AI can help with this iterative process, the speaker cautions that LLMs can fill ambiguities with assumptions or hallucinations, leading to more review and token burn.
Regarding vendor lock-in, the speaker argues that it's often a "competence failure." They advocate for using tools like T3 Code that allow switching between multiple AI models and providers, ensuring resilience even if one service is down. They also challenge the idea that token costs are unpredictable, asserting that while costs fluctuate, the cost per unit of intelligence is decreasing. They see vendor lock-in as a developer's responsibility to diversify tools, not an inherent flaw of AI.
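The "diversify your tools" argument amounts to putting an abstraction in front of any single provider. A minimal sketch, assuming hypothetical provider callables rather than any real SDK:

```python
# Sketch of avoiding vendor lock-in: wrap several providers behind one
# function and fall back when one is down. The provider callables here
# are hypothetical stand-ins, not real SDK clients.
from typing import Callable

def complete(prompt: str, providers: list[tuple[str, Callable[[str], str]]]) -> str:
    failures = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            failures.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(failures))

def flaky(prompt: str) -> str:
    raise ConnectionError("503 service unavailable")  # simulate an outage

def healthy(prompt: str) -> str:
    return f"answer to: {prompt}"

# The first provider is down; the call transparently falls through to the second:
result = complete("summarize this log", [("provider-a", flaky), ("provider-b", healthy)])
print(result)  # answer to: summarize this log
```

With this shape, a cloud outage at one vendor degrades to a retry rather than a standstill, which is the resilience the speaker is describing.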
In conclusion, the article advocates for demoting AI's role from primary code generator to a secondary process. It suggests leveraging LLMs for brainstorming and planning, while maintaining active engagement in implementation, delegating "as needed." The speaker agrees with this approach, using AI for investigation and pseudo-code generation, and for ad-hoc tasks where code quality is less critical.
A crucial insight from the speaker is distinguishing between code that runs thousands of times (requiring deep understanding and quality) and one-off code (where AI can quickly generate a solution without extensive human effort). AI makes it valuable to write code for tasks that previously weren't worth the manual effort, like personal calculators or data analysis scripts.
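The "one-off code" category the speaker describes looks something like this: a disposable script that answers one question and is then thrown away. The data and column names are invented for illustration.

```python
# The kind of throwaway script the speaker means: run once, answer a
# question, delete. Data and field names are invented for illustration.
from collections import Counter

expenses = [
    {"category": "saas", "usd": 120.0},
    {"category": "saas", "usd": 80.0},
    {"category": "travel", "usd": 300.0},
]

totals = Counter()
for row in expenses:
    totals[row["category"]] += row["usd"]

# Quick, disposable answer: no tests, no error handling, by design.
print(totals.most_common())  # [('travel', 300.0), ('saas', 200.0)]
```

For code like this, AI generation is nearly pure upside; for code that runs thousands of times in production, the understanding trade-offs discussed above dominate.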
The article ends by stressing that while AI offers real productivity gains, they come at a real cost to understanding. You cannot understand code without engaging with it, and disengaging leads to loss of understanding, making one a less capable orchestrator. The speaker fully agrees with Jeremy Howard's quote: "People who go all in on AI agents now are guaranteeing their own obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent." The ultimate message is to use AI to advance thinking and learning, not to replace it, to avoid long-term career damage.