
Stanford Leadership Forum 2026: Media and the Disinformation Ecosystem
AI Summary
The panel discussion, moderated by Anat Admati, a professor of finance and economics, examined the complex and often fraught relationship between media, disinformation, and democracy. Admati introduced the topic by referencing Neil Postman's "Amusing Ourselves to Death" (1985), which critiqued the impact of television and talk radio on political discourse, arguing that it bred confusion. The current media ecosystem, particularly with the rise of social media, has escalated these concerns, creating a "disinformation ecosystem." The discussion aimed to explore the harms to individuals and society, and potential solutions, rather than solely celebrating technological advancements.
Guy Rolnik, a clinical professor at the Booth School of Business with extensive media experience, opened the conversation by acknowledging that while we might not be in the "worst of times," the current information ecology presents significant challenges. He revisited Postman's warning that we would "amuse ourselves to death," turning civic discourse into entertainment. Postman contrasted George Orwell's fear of government censorship with Aldous Huxley's prediction, made nearly two decades before Orwell's "Nineteen Eighty-Four," that censorship would be unnecessary because people would be too distracted to care. Rolnik argued that Huxley was largely correct, as we now live in a world of constant digital distraction. He emphasized that the for-profit companies controlling information dissemination lack incentives to prioritize quality information, unlike traditional media, which, despite their flaws, often had business, legal, reputational, and ethical incentives to pursue truth. Rolnik concluded that we are experiencing the "worst epistemic crisis" in generations.
Renée DiResta, a research professor at Georgetown and author of "Invisible Rulers," focused on the shift from top-down to bottom-up opinion formation on social media. While social media empowers more voices and conversations, it also encourages content creators to cater to specific niches rather than broad segments of the public. This leads to "audience capture," where creators risk losing their audience if they deviate from expected perspectives. Content creation is also heavily shaped by algorithms, forcing creators to adapt their style and structure to algorithmic curation. DiResta noted the increasing prevalence of "unconnected content" (e.g., TikTok's model, which she said now accounts for 46% of what users see on Facebook), where algorithms decide what users might serendipitously like, regardless of whom they follow. Users are often unaware of how curated their information environment is, or of how influencers' incentives differ from those of traditional media. DiResta also referenced Daniel Boorstin's "The Image" (1962), highlighting the constant prevalence of "pseudo-events" – small, irrelevant moments amplified into scandals by algorithmic incentive structures.
Alexandra Geese, a Member of the European Parliament from Germany, offered a slightly different perspective on DiResta's "Invisible Rulers." Geese argued that these "rulers" are in fact highly visible, with well-known names and companies in the Bay Area, since they design the incentive structures of social media. She cited Governor Cox of Utah, who described these incentives as the "rage reward and the division dividend." Negative emotions like outrage, hate, and fear are systematically rewarded, making polarizing content more likely to go viral. As a politician, she testified to the strong incentive to create aggressive content for visibility. This boosting of divisive content is detrimental to democracy, which requires a shared understanding of facts and the ability to converse. Geese also raised the question of "modern censorship," where content not designed for the "rage reward" is systematically downranked and thus rendered invisible. She asserted that there are always rules; the crucial question is who makes them.
Admati clarified that the US has an unusually expansive form of free speech: Section 230 of the Communications Decency Act (1996) grants internet companies immunity for user-generated content, unlike traditional media. Lawsuits therefore often focus on "product liability" (e.g., addictive algorithms) rather than on content itself. Europe, with neither a First Amendment equivalent nor a Section 230, approaches these issues differently.
Rolnik reiterated that the core problem isn't content choices, but the system's design, which prioritizes and amplifies content incompatible with democratic civic discourse. He noted a global trend over the last 10-15 years where politics has worsened, coinciding with major algorithm changes by a few dominant companies. These changes, he argued, have "rewired our minds and our brains and who we are," creating societies incompatible with democracy. Rolnik highlighted "individual harms" like scams and frauds, which he claimed are part of the business model. Citing whistleblower Frances Haugen and subsequent leaks, he stated that Meta knew about these issues, with internal complaints about changing political culture and platforms forcing politicians to alter positions. A recent leak suggested $18 billion of Meta's revenue comes from fraud. Rolnik asserted that companies have no incentive to stop this fraud, despite having the means to do so. He concluded that it's time for laws to regulate architectural design, recalling that communication technology has always shaped societies, and historical technological shocks led to societal reactions, guardrails, and institutions. The current shock, however, is controlled by a few companies with immense political power, hindering a societal response.
DiResta described the institutional response, particularly in the US. Unlike Europe, which has laws mandating researcher data access, US access was often granted voluntarily by platforms, especially after 2016, when the extent of foreign interference (e.g., Russia's Internet Research Agency) became clear. Platforms created internal "integrity teams" and transparency tools for researchers and journalists. The Stanford Internet Observatory (SIO), co-founded by Alex Stamos (Facebook's former chief security officer), had open communication channels with platforms to identify foreign interference and influence operations. DiResta noted that propagandists and scammers are always the first to leverage new media shifts, citing SIO's collaboration with Twitter and Facebook to identify Wagner Group influence operations in Libya. These channels, however, were never formalized in regulation. After the US House flipped in 2022 and subpoenas arrived from committees like Jim Jordan's "weaponization committee," the communications ceased due to legal liability, creating a "chilling effect." Platforms responded to the changing political winds by dismantling communication infrastructure and data transparency (e.g., Twitter charging for its API), disintegrating the ability to understand the US information ecosystem.
Geese shared a European example: over Christmas, Elon Musk activated Grok's "spicy mode" on X, which generated 3 million sexualized images of women, and 23,000 of children, in 11 days. Geese, who has worked with victims of deepfake sexualized imagery, emphasized its severe, lasting impact, comparable to physical sexual violence. Europe's Digital Services Act (DSA) mandates mitigating measures for systemic risks such as gender-based violence. The European Commission launched an investigation, but it was reportedly delayed for geopolitical reasons (President Trump's visit and threats regarding Greenland); this delay, Geese stressed, caused irreparable harm to victims. While the investigation is ongoing, Geese noted that the episode spurred the European Parliament to action, leading to a ban on apps whose sole purpose is creating deepfake sexualized images and to required safeguards for general AI image systems, likely to be included in the AI Act.
Admati mentioned the US TAKE IT DOWN Act, signed by Trump, which requires platforms to remove AI-generated deepfake nudity, a rare instance of content-related regulation. She noted the widespread belief that failing to impose constraints or transparency on social media has been a huge mistake.
The panel then discussed actionable steps. Rolnik suggested five policy directions:
1. **Accountability and Responsibility:** Platforms, with immense power, manipulate opinion and control politics. Rolnik cited Reed Hundt, former FCC chairman, who admitted the 1996 Section 230 was a mistake. Platforms are publishers, not neutral conduits, and should be responsible for harms.
2. **No Protection for Bots/Troll Farms:** Implement "know your customer" for users, eliminating coordinated inauthentic behavior networks. Companies with billions in profits can identify and stop bots and fake accounts.
3. **Stop Data Collection for Political Profiling:** Companies collect granular psychological and political profiles on billions, enabling them to "rig the entire political system." This must stop.
4. **Sovereignty:** Platforms operating globally should adhere to local laws, with directors accountable to local legal systems, enabling lawsuits when societies are harmed.
5. **Age-Gating:** Platforms should be age-gated, as seen in Australia and Europe. Rolnik argued that allowing children to be manipulated by addictive, harmful machines, which have clearly impacted mental health, is unacceptable, comparing it to alcohol or driving restrictions. This is the "lowest hanging fruit."
DiResta, pragmatic about US legislative gridlock, focused on non-content-related solutions:
1. **Mandate Transparency:** This is more easily passed and less prone to First Amendment challenges, as transparency (like nutrition labels) is not compelled speech in high-risk areas.
2. **Interoperability:** Enable users to move their profiles, data, and content between platforms. This fosters market-based competition, allowing new platforms to thrive and reducing the dominance of a few companies. She cited the AT Protocol (Bluesky) as an example of enabling community-specific social architectures.
Geese agreed with Desta on interoperability, noting European efforts to build common infrastructure for local app developers, especially for expensive aspects like content moderation. She saw potential for collaboration with traditional media, who could bring large user bases to new, community-based social media platforms, overcoming network effects. Geese also emphasized "freedom of choice" for users, advocating for personalized algorithms based on explicit user preferences (e.g., "I like cats and horses and this political content"), rather than "revealed preferences" derived from click behavior. She suggested allowing users to choose trusted third-party providers (e.g., public broadcasters) to curate their news algorithms. Geese argued that "engagement" as currently defined by platforms exploits biochemical wiring (fear, rage) rather than conscious choice.
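Geese's distinction between curation by "revealed preferences" (engagement signals such as clicks, which reward outrage) and curation by explicit, user-declared preferences can be illustrated with a toy ranking sketch. All of the data, scores, and function names below are hypothetical, invented purely to show how the two modes surface different content:

```python
# Hypothetical posts with an engagement signal (clicks) and an outrage score.
posts = [
    {"topic": "politics", "outrage_score": 0.9, "clicks": 500},
    {"topic": "cats",     "outrage_score": 0.1, "clicks": 120},
    {"topic": "horses",   "outrage_score": 0.2, "clicks": 80},
]

def rank_by_engagement(posts):
    """Revealed-preference mode: optimize predicted engagement.

    Outrage-laden content tends to attract clicks, so it floats to the top.
    """
    return sorted(posts,
                  key=lambda p: p["clicks"] * (1 + p["outrage_score"]),
                  reverse=True)

def rank_by_declared_preferences(posts, liked_topics):
    """Explicit-preference mode: the user states what they want to see
    (e.g., "I like cats and horses"), and matching topics rank first."""
    return sorted(posts,
                  key=lambda p: p["topic"] in liked_topics,
                  reverse=True)

print([p["topic"] for p in rank_by_engagement(posts)])
# -> ['politics', 'cats', 'horses']
print([p["topic"] for p in rank_by_declared_preferences(posts, {"cats", "horses"})])
# -> ['cats', 'horses', 'politics']
```

The same three posts produce opposite feeds: the engagement ranker leads with the high-outrage political post, while the declared-preference ranker leads with what the user actually asked for, which is the kind of "freedom of choice" curation Geese describes.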
During the Q&A, a question arose about China's policy of requiring educational certification for content creators in certain fields (e.g., medicine, education). DiResta said this would never pass in the US, given First Amendment hurdles and the argument that experts can be wrong. She suggested alternative approaches such as elevating expert voices (e.g., Twitter blue-checking frontline physicians during COVID) and using "middleware" or third-party providers to signal expertise and reputability in content curation, without banning participation.
Regarding personal practices for children, Rolnik advised parents to shed guilt, recognizing that 50,000 engineers are working to keep kids on platforms. He stressed that societal intervention is needed, not just individual parenting. He noted the irony that those who run these companies often don't allow their own children on them.
Another question addressed how to push governments or industry to adopt technologies like proof of personhood and unique anonymous user IDs, with "scorecards" for misinformation. DiResta responded that in the US, the First Amendment and Section 230 protect platforms' right to curate as they see fit. She suggested creating incentives through public pressure and by highlighting what new technologies make possible. She also pointed to the FTC's regulation of influencer disclosures for commercial speech, contrasting it with the lack of regulation for political communications, which benefits both parties. Geese agreed that mandating specific tools is hard, even in Europe, but noted a strong public desire for authenticity and real people online, evidenced by cases of political accounts amplified by foreign bots. She advocated strong competition law to allow diverse business models and to give users freedom to choose content and decide whom to trust, rather than letting platforms dictate visibility. Geese highlighted the paradox of companies that claim to bring freedom while contributing to global autocratization, and expressed hope for international cooperation on these issues.