
Daniela Amodei, Co-Founder and President of Anthropic: Building AI the Right Way
AI Summary
Daniela, a co-founder of Anthropic, shared insights into her career path, the founding of Anthropic, AI safety, job displacement, and the future of AI.
Her career journey was unplanned, driven by curiosity, a desire for impact, and an interest in making the world fairer. Graduating in 2009 with a literature degree, she initially worked in international development and global health, aiming to ensure everyone had access to basic necessities. This early experience provided a foundation for building things of consequence and purpose. She then worked on Capitol Hill and on a political campaign before joining Stripe when it was a small startup. These experiences, particularly at Stripe, prepared her for the technology industry, fostering a generalist mindset and a willingness to learn across disciplines.
Her entry into AI began in 2018 when she joined OpenAI, a small research lab. Despite her non-technical background, she learned the language of neural networks and transformers through curiosity, asking questions, and leveraging her experience working with engineers at Stripe, as well as growing up with a physicist sibling, Dario, who is also her co-founder at Anthropic. She emphasized the importance of understanding one's comparative advantage and how to fit into a broader ecosystem, bringing interpersonal skills and curiosity to the table.
In December 2020, Daniela, Dario, and five other co-founders left OpenAI to start Anthropic. The decision was driven by a shared vision to prioritize safety and responsibility in AI development. They wanted to create an organization where these values were at the forefront, leading them to incorporate as a public benefit corporation. This structure allowed them to be a commercial entity while ensuring they developed AI in a responsible way. Their collective experience at OpenAI, working on both capabilities and safety, solidified their desire to build a new company with this specific focus.
On co-founding, Daniela highlighted the importance of strong interpersonal relationships and a shared vision. She noted that her long history of working through conflict with her brother Dario, and the pre-existing relationships and reporting structures with their other co-founders, provided a solid foundation. Crucially, all co-founders had a consistent understanding of what they were trying to build, avoiding situations where they might have fundamentally different goals. Her advice for aspiring co-founders included testing compatibility through shared experiences, like going on vacation together, to see if their working relationship would be sustainable and enjoyable.
Daniela defined AI safety as taking radical responsibility for the technology being developed. She drew an analogy to social media companies, noting that early developers likely didn't intend to cause harm but optimized for metrics like rapid scale growth without fully considering unintended negative externalities. In AI, they have the privilege of learning from past mistakes and proactively thinking through potential harms. For Anthropic, safety encompasses preventing the misuse of their technology for developing chemical and biological weapons or cyber warfare, as well as addressing user wellness, child safety, misinformation, and election integrity. She emphasized that they build upon the work of previous safety and security teams in other consequential technology companies.
Addressing the tension between AI safety and generating revenue, Daniela stated that for most businesses, safety and commercial success are aligned. Businesses generally do not want unsafe models that hallucinate or produce harmful outputs, making safety a good business decision. However, as AI capabilities rapidly advance, the tension arises from the time needed to fully understand and mitigate serious risks. This can lead to difficult decisions, like withholding a powerful model (Project Glasswing) from customers until sufficient safety work is completed, despite customer demand. In these instances, the company's mission to develop AI responsibly guides their actions.
Regarding job displacement, Daniela acknowledged that AI will change the types of jobs available. While some jobs may be replaced, particularly in areas like customer service, current data suggests AI primarily acts as a complement to human skills, enabling work rather than replacing it. She predicted that many future jobs will "rhyme" with existing ones but will not be identical. For software developers, AI might reduce the amount of code written, but it will expand their roles in areas like collaboration with product managers and customers.
To prepare society for these changes, Daniela stressed the need for humility, transparency, and research from AI companies. Anthropic publishes an economic index to help people understand current AI usage and future trends. She also called for creativity and experimentation in rethinking the connection between work, meaning, and social life in an AI-driven world. Finally, she emphasized that AI's impact on jobs will become a significant social and political issue, requiring broader discussions involving government, civil society, and universities to shape a desirable future.
On AI adoption outside the Silicon Valley bubble, Daniela noted that while AI is a dominant topic within the tech community, it's not universally embraced or understood. Adoption currently skews towards college-educated men in higher-income demographics, and global distribution is unequal. Interestingly, developing countries show more optimism about AI, viewing it as a potential equalizing force, while higher-income countries express more anxiety about disruption. She believes the "race" for AI adoption has just begun, leaving ample opportunity to positively shape its use, development, access, and inherent values.
When considering what humans risk losing by delegating too much to AI, Daniela referred to a large qualitative study by Anthropic. While AI enables people to achieve things they never thought possible (e.g., building a website), some users expressed a feeling of not needing to engage their brains. This "turn your brain off" mentality, where people blindly trust AI outputs without critical thinking, is a source of anxiety. She advocated for "learning mode" applications of AI, where tools act as patient tutors that help users get unstuck and expand their knowledge, rather than simply providing answers in a way that can feel like cheating.
In an AI-driven world, Daniela believes human skills like social interaction, creativity, and empathy will become more important and highly valued. As AI handles more day-to-day productive work, the unique human desire to connect with, learn from, and understand each other will be amplified. She used the example of medicine: while AI will excel at diagnosis, the "bedside manner" of doctors—their ability to empathize and build relationships—will become five times more crucial, as it positively impacts patient outcomes.
Personally, Daniela is excited about AI's potential as a management coach and as a tool to support overwhelmed parents. She uses Claude to identify patterns in her direct reports' performance reviews over several years, spotting areas for development that might otherwise be missed. Claude also provides candid feedback on her own performance. As a parent, she found Claude immensely helpful with potty training, providing empathetic, actionable, and measured advice compared to often anxiety-inducing online searches.
Her advice for the next generation of AI leaders is twofold: first, follow your passion. The "burning feeling" that something needs to exist in the world is crucial for persevering through difficult times. Second, embrace the idea that business and doing good are not in tension. She sees a growing trend of founders combining innovation with social impact, believing that a desire to do good strongly correlates with doing well.
Addressing a question about an "AI bubble," Daniela expressed concern about the high capital expenditure required for AI development. Training models is incredibly expensive, requiring significant compute power that must be purchased far in advance, representing a substantial bet on future returns. While Anthropic and OpenAI are bullish and generating impressive revenue, she noted this capital-intensive nature inherently carries risks, and the industry's success is ultimately a calculated bet.
On government regulation, Daniela advocated for a nuanced discussion, fearing politicization. She believes sensible regulation will be necessary for AI, given its unique nature, and that previous technologies could have benefited from more oversight. However, she also stressed the need for companies to have room to innovate. Her ideal scenario involves technology companies and regulators working hand-in-hand: companies provide insights into potential abuses and risks, while regulators provide enforceable frameworks. The goal is to develop amazing new technologies while implementing common-sense regulations to protect people.
Regarding individual privacy when AI accesses sensitive personal data, Daniela stated that companies bear the primary responsibility to use and protect data with care. She noted that people have highly personal conversations with AI tools, more so than on social media, which demands greater responsibility from tech companies. She cited Anthropic's decision not to put ads in Claude as partly driven by this belief. From a personal perspective, she advised healthy skepticism, especially for medical questions. While AI can be a helpful guide, like a "friend who is a really good doctor," it should not be blindly trusted for medical advice, and professional consultation is always necessary.
In a rapid-fire session, Daniela said she would major in literature again if she went back to college because she loves reading. Her favorite thing about working with her brother Dario is their deep mutual understanding, allowing them to communicate directly and resolve conflicts effectively. Her least favorite aspect is the need to intentionally separate their personal and professional relationships, ensuring they nurture their sibling bond outside of work. She recommended "The Guns of August" as a book from her office library, highlighting its study of individual personalities and events leading to World War I. Finally, she revealed that Anthropic had considered many "tragic" bird-themed names like "Sparrow Systems" before settling on Anthropic. Her best advice received was that in moments of life-altering decisions, one often already knows the right answer.