
What an AI-designed car looks like | The Vergecast
AI Summary
The Vergecast discusses the evolving landscape of video podcasts, AI's role in car design, and the latest developments in the AI business, including the rivalry between Anthropic's Claude Code and OpenAI's Codex and the Department of Defense's engagement with AI companies.
David Pierce, the host, opens by reflecting on the shift of podcasts to video, a trend he finds challenging but necessary due to the increasing consumption of content through short clips on social media feeds. This shift has influenced his home office setup, requiring him to sit further back from the camera to allow for better cropping in video clips. He humorously describes his new, somewhat makeshift, dual-desk arrangement, emphasizing the efforts made to produce video-first and audio-first content simultaneously.
The episode features two main segments. First, Tim Stevens, a freelance tech and automotive journalist, joins to discuss the impact of AI on car design and manufacturing. Second, Hayden Field, The Verge’s senior AI reporter, covers current events in the AI business, including the rivalry between AI models and job loss concerns.
Tim Stevens begins by explaining the traditional car design process, which is surprisingly long and intricate, often taking five to six years from initial sketch to production. This process involves multiple stages: designers start with sketches, which evolve into digital and 3D models. Physical clay models, both small and full-scale, are created for visual assessment and wind tunnel testing. Engineering and virtual crash testing follow before a car enters production. This lengthy cycle makes it difficult for manufacturers to predict market trends five or six years in advance, leading to challenges in adapting to rapidly changing consumer preferences, such as the fluctuating demand for EVs or the shift away from touch-focused interfaces back to physical buttons.
AI is being introduced to significantly shorten this development timeline. Instead of replacing human creativity, AI is primarily used to enhance efficiency. For instance, GM is employing AI to convert sketches into 3D models in minutes, a task that traditionally takes a designer several weeks. AI is also being used in generative design for improving aerodynamics and in battery chemistry engineering, where it can quickly test numerous material compositions for cathodes and anodes, optimizing factors like charging speed, battery life, and temperature sensitivity without the need for extensive physical prototyping.
Stevens highlights the manual nature of traditional testing, such as repeatedly building and wind-tunnel testing slightly different components. Computational fluid dynamics (CFD) has offered some improvement by simulating wind tunnel runs, but these simulations still require supercomputers and specialized training, and a full testing campaign can take weeks. AI surrogate models, from companies like Neural Concept, aim to cut individual CFD runs from hours to minutes, further accelerating the iterative design process.
The discussion touches on the artistic aspect of car design, particularly the creation of full-size clay models. Stevens notes that while 3D milling machines can now produce these models, significant hand-tuning and painting are still required. He expresses concern that while AI automates tedious tasks, it might also eliminate entry-level design roles, making it harder for new graduates to gain experience and climb the career ladder. This echoes concerns in software development and other industries where AI is taking over foundational tasks.
The conversation then shifts to the role of software in modern cars, which are increasingly referred to as "software-defined vehicles." This trend means that even basic functions like turn signals are now controlled by software, leading to a massive increase in code, integration efforts, and new cybersecurity regulations. AI is seen as a crucial tool for tasks like documentation, automated unit testing, and ensuring regular updates and patches, which are essential for maintaining cybersecurity and compliance. These are tasks typically assigned to junior developers, again raising concerns about the entry-level job market.
Regarding regulatory issues, Stevens explains that AI's impact on real-world crash and emissions testing is minimal, as physical tests remain mandatory. However, AI can assist in meeting new software-related regulations, such as ensuring continuous cybersecurity updates throughout a vehicle's life.
If AI successfully shortens the car development cycle to three years, Stevens believes it could make cars more affordable by reducing R&D costs. More importantly, it would allow manufacturers to keep pace with rapid global market changes, bringing cars to market that align more closely with current trends and consumer desires. However, he also worries about a potential homogenization of design, where AI might lead to generic, algorithmically generated cars that lack the "big bets" or iconic designs that define a brand. He hopes AI will empower designers to be more provocative and experimental, using the tools to quickly validate bold ideas rather than becoming conservative.
Finally, Stevens provides an update on the "Slate Truck," an electric vehicle concept that gained attention for its extreme minimalism and low price point. The truck, initially projected to cost around $25,000 (effectively $18,000-$19,000 with rebates), aimed to attract DIY enthusiasts willing to customize it themselves. However, the loss of federal EV rebates and the current market's lower enthusiasm for EVs have made its mid-$20,000 price less compelling compared to competitors like the Ford Maverick XL, which offers more features for a slightly higher price. Despite recent funding and a new CEO (a former Amazon executive), the Slate Truck faces challenges in its market positioning, though Stevens remains optimistic about its potential to foster a strong DIY community.
After a break, Hayden Field discusses the competition between Anthropic's Claude Code and OpenAI's Codex. While Claude Code is widely beloved in the AI industry, Codex is gaining traction, backed by significant marketing efforts from OpenAI. Field notes that Claude Code is starting to face some backlash, a common phenomenon for market leaders. She highlights a strategic shift by AI companies: having initially focused on consumer-facing chatbots, they are now pivoting to developer tools like Claude Code and Codex, aiming to build "everything apps" on a foundation of productivity and efficiency. Field believes this approach, centered on money-making backend tools and enterprise solutions, is more likely to succeed than starting with general chatbots.
Field then provides a "vibe check" on OpenAI, noting that while the atmosphere is slightly better than a month ago, it's still not great. Positive developments include Codex's momentum and OpenAI's favorable position in the lawsuit with Elon Musk. OpenAI is actively engaged in a PR campaign to rebrand its public perception, emphasizing principles like democratization and empowerment.
The conversation delves into the "doomerism" surrounding AI and the shift in rhetoric from AI companies. Sam Altman of OpenAI, in particular, has adopted a more optimistic "Pollyannaish" vision of AI leading to universal prosperity. However, Field is skeptical that this positive spin will resonate with the public, who are increasingly concerned about job displacement and the widening wealth gap. She points to recent layoffs in software engineering and broken promises about AI replacing creative jobs, suggesting that people are more interested in addressing the fallout of AI rather than hearing overly optimistic predictions.
The discussion moves to Anthropic's relationship with the Department of Defense (DoD). Anthropic initially distinguished itself by refusing an "any lawful use" clause, limiting how the DoD could use its models. However, the Pentagon recently struck a deal with seven other AI companies—OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection—granting them "any lawful use" and allowing their tools on classified networks. This deal is seen as a significant move against Anthropic's previously unique position. Despite this, Field indicates that the DoD still highly values Anthropic's models, particularly "Mythos," for cybersecurity, and Anthropic is actively working to re-engage with the government through new hires and lobbying efforts.
Regarding Mythos, Anthropic's powerful cybersecurity model, Field states that while its capabilities are shrouded in mystery, it's not something to be "terrified of." Its unique ability to autonomously flag gaps and vulnerabilities in critical systems makes it a significant tool for defense. She anticipates that similar, or even better, models will be open-sourced by other labs within the next year.
Finally, the hosts discuss the concept of Artificial General Intelligence (AGI). Field suggests that AGI is "dying a slow, gradual death" as its definition remains ambiguous and companies are increasingly creating their own terms like "human-centered AI." She welcomes this shift, arguing that the focus on AGI as a singular, transformative moment has been problematic, creating unrealistic expectations of an overnight, world-altering event. Moving away from the AGI narrative allows for more rational conversations about AI as an evolving technology that incrementally improves and impacts society, rather than a sudden, catastrophic "singularity." This also shifts attention from a distant, opaque future to the immediate consequences and ethical considerations of AI's current development.
The episode concludes with a hotline question from Paul, asking whether the massive investment in AI and subsequent layoffs are genuinely driven by ROI and effectiveness calculations, or merely by FOMO (fear of missing out) and a convenient excuse for post-pandemic overhiring. Field believes it's a mix of both. While AI can genuinely boost productivity, some studies suggest that users often overestimate their own efficiency gains. She also notes a trend where remaining employees become overworked, having to use AI tools to cover the jobs of laid-off colleagues, which still requires significant time and effort. This often leads to a cycle of layoffs followed by re-hiring, a pattern seen repeatedly in recent decades. Field concludes that while essential backend AI integrations are not FOMO-driven and are necessary for large enterprises, companies still lean into the glamorous, short-term "fly high and burn out quickly" aspects of AI to attract investor attention.