
Technophobia turns violent: who armed the anger?
AI Summary
This episode of Silicon Carnet discusses two attacks on Sam Altman in San Francisco within 48 hours: a Molotov cocktail and gunshots fired at his home. While some online applauded these acts, the CEO of OpenAI called for calm. That the attackers were all under 26 points to a generational divide: Gen Z's enthusiasm for AI is collapsing, and the share of workers worldwide who fear losing their jobs has risen from 28% to 40% in two years. The panel analyzes this fear alongside Altman's paradox of publishing a quasi-socialist manifesto while lobbying against regulation.
The attacks on Altman are seen as more than isolated incidents, signaling a deeper societal concern. A significant 64% of Americans believe AI will destroy jobs, and public perception of AI is now worse than that of the immigration agency ICE. This change is happening in mere weeks, leaving little time for individuals to adapt, disrupting career plans and creating immense insecurity. The rapid advancement of AI, from being poor at coding a year ago to coding better and faster than many humans now, is deeply destabilizing.
One panelist noted that even top AI researchers, earning packages of $100 million annually, fear their skills could become obsolete in two years. While some compare this to historical shifts like the advent of the automobile replacing carriage drivers and blacksmiths, the current situation is different: it's the designers of the "car" (AI) who are worried, and "we" are the "horse."
This negative public perception could lead to violence, as seen with Altman, and ultimately, to AI being banned or heavily regulated if a large majority of the population opposes it. Politicians, eager for re-election, would likely capitalize on such sentiment.
A more nuanced view suggests that while the US market, being deregulated and overcapitalized, might face significant job displacement due to AI, Europe could experience different outcomes. Europe, being technologically behind and often undercapitalized, might find an opportunity for full employment by leveraging AI to catch up, particularly for developers. This perspective argues against viewing the AI revolution solely through an American lens, suggesting Europe could harness AI to create sovereignty and address its technological deficit.
However, the rapid shift in perspectives—from fear of job loss to potential full employment—highlights the uncertainty surrounding AI's impact. While not everyone will be jobless in six months, the attacks on Altman are symbolic and reveal a crisis of AI's legitimacy. This crisis has been fueled by AI leaders themselves, who have, for years, propagated apocalyptic narratives about AI. This narrative has led to widespread concern, including a petition signed by 3,000 researchers and academics in France declaring themselves "conscientious objectors" to AI.
Sam Altman is seen as a symbol of AI, and the attacks against him are perceived as attempts to target this symbol rather than the individual. His past actions, such as declaring GPT-2 "too powerful" and later releasing more powerful versions, coupled with controversies like misleading his board, firing security personnel, and engaging in defense contracts despite initial opposition, have made him a lightning rod for criticism. While violence is condemned, some online comments suggest he "deserved" it due to these perceived missteps. The panel argues that Silicon Valley has lived in a bubble, failing to recognize the growing disconnect and anger among the general population, which is now manifesting in violent acts.
Altman's response to the first attack, a photo of him, his husband, and their baby, failed to calm the situation, as a second attack followed. This occurred shortly after a critical New Yorker investigation questioned his trustworthiness regarding AI's future, a stark contrast to a laudatory article about him a decade prior. The media's role in shaping public opinion is also scrutinized, with some outlets publishing highly charged articles against AI figures like Peter Thiel.
The root of this public anger, particularly among Gen Z (aged 20-25), is attributed not just to media narratives but to a profound societal transformation. Unlike climate change, which has a distant impact, AI directly threatens jobs in the short term (6-18 months), intensifying personal anxiety and leading to more radical behavior. For Gen Z, burdened by student debt and facing an uncertain job market, the rapid obsolescence of skills due to AI is particularly distressing.
Regarding employment, while AI experts estimate that 39% of jobs might be eliminated, the general public's fear is higher, at 64%. This gap highlights a need for education and adaptation. While some suggest manual labor as an alternative, the rapid pace of technological change means that even new skills can quickly become outdated, leaving many feeling trapped.
The demographic shift, with aging populations and declining birth rates in countries like France and Italy, is ironically seen as a potential "savior" for employment. With fewer young people entering the workforce and a reluctance to take on certain jobs, AI could boost productivity and fill labor gaps, creating new opportunities. However, this requires embracing technology and adapting, rather than resisting it.
A critical issue is the lack of engagement and vision from political leaders, who often fail to understand AI or address public anxieties effectively. Instead of offering comprehensive plans for transition and education, some politicians, like François Ruffin in France, are accused of manipulating public opinion by using AI tools like Claude to echo anti-AI sentiments for electoral gain. This political opportunism, reminiscent of the 1920s and 30s, is seen as dangerous, hindering necessary societal adaptation.
Another significant driver of fear is AI's massive energy consumption. Data centers can consume as much electricity as a city of 50,000-100,000 people, raising environmental concerns among younger generations sensitive to climate change. The rising cost of electricity in the US, partly attributed to AI, is also sparking local rebellions against data center projects.
Some suggest that AI labs should slow down innovation, form cartels, and release models over longer periods to manage societal impact. However, the "elephant in the room" is China, which is aggressively pursuing AI development as a national priority, making it impossible for Western countries to slow down without falling behind.
Four days before the attacks, Sam Altman published a 13-page document titled "Industrial Policy for the Intelligence Age," which unexpectedly acknowledged AI's serious risks: economic disruption, malicious use in cybersecurity and biology, and loss of control over powerful systems. He warned against AI's benefits being concentrated among a few, leading to social inequality.
Altman's proposals include:
1. **Public Sovereign Fund:** A fund to give every citizen a share of AI profits, aiming to distribute the colossal productivity gains beyond just shareholders. While OpenAI's CFO mentioned reserving 20% of its IPO for public ownership, this is deemed insufficient.
2. **Four-Day Work Week:** Encouraging companies to reduce the work week to 32 hours without salary loss, translating productivity gains into higher pay, more free time, or better social benefits.
3. **Employee Empowerment:** Allowing employees to decide how AI is deployed in their companies.
4. **Right to AI:** Making affordable access to AI tools a fundamental, almost constitutional, right.
5. **Tax Reform:** Suggesting increased taxes on capital gains and income, resembling a socialist program.
These proposals are interpreted as Altman's attempt to control the narrative, particularly before OpenAI's IPO, by presenting a more positive, socially conscious image. Some see it as a strategic move to buy social peace at little cost, ensuring he profits regardless of public sentiment. Others, like Elon Musk, genuinely believe in techno-utopian solutions, proposing a high universal basic income ($15,000/month) and trusting that technology will solve its own problems, with redistribution as the primary remedy.
The conversation concludes with a pessimistic outlook from some panelists, noting the anxiety among young people and the perceived dysfunction of the current system. The lack of visionary political leadership to guide society through the AI transition is a major concern, as politicians prioritize short-term electoral gains over long-term societal planning.