
Invisible Rulers: Information Warfare and Public Trust
AI Summary
The discussion centers on the governance of speech and information warfare, particularly in the context of social media platforms. Renee DiResta, a former Stanford Internet Observatory researcher, describes her accidental entry into the field in 2013, sparked by concerns over vaccination rates and the anti-vaccine movement's online presence. She observed how social media platforms facilitated the amplification of opinions through fake accounts and bots, manipulation that in those early days was still easy to observe.
Her early work involved analyzing the spread of information online, including ISIS's rise in popularity and, later, the Internet Research Agency's influence in the 2016 US election. DiResta led one of two outside teams that analyzed, for the Senate Intelligence Committee, the data sets provided by Facebook, Twitter, and Alphabet. This work led to her involvement with the Stanford Internet Observatory, founded by Alex Stamos.
DiResta emphasizes the importance of focusing on "actors, behaviors, and content" rather than solely on the content itself. Actors refer to the entities behind the content, distinguishing between authentic influencers and those pretending to be part of a community. Behaviors involve manipulating platform affordances for amplification, such as using automated accounts or buying fake engagement. Content refers to the actual message, which becomes particularly salient around events like elections, where platforms implement specific policies against voter suppression or premature victory claims. DiResta primarily focused on state actors, design, and incentives, with some work on elections and COVID-19.
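To make the "actors, behaviors, content" framing concrete, here is a minimal sketch of behavior-level analysis: it flags amplification patterns without examining what the posts say. The account fields, thresholds, and flag names are invented for illustration and are not drawn from DiResta's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_hour: float        # posting tempo
    duplicate_post_ratio: float  # fraction of posts that are copy-paste
    purchased_engagement: bool   # known to buy likes or followers

def behavior_flags(acct: Account) -> list[str]:
    """Flag amplification *behaviors*, independent of message content.
    Thresholds are hypothetical, chosen only to make the example run."""
    flags = []
    if acct.posts_per_hour > 30:          # tempo beyond plausible human use
        flags.append("possible-automation")
    if acct.duplicate_post_ratio > 0.8:   # coordinated copy-paste amplification
        flags.append("copy-paste-amplification")
    if acct.purchased_engagement:
        flags.append("fake-engagement")
    return flags

print(behavior_flags(Account("suspect_4821", posts_per_hour=120,
                             duplicate_post_ratio=0.95,
                             purchased_engagement=True)))
# ['possible-automation', 'copy-paste-amplification', 'fake-engagement']
```

Note that none of these signals depend on whether the underlying message is true or false, which is precisely the point of separating behavior from content.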
She recounts the evolution of her relationship with platforms, from an adversarial stance in the early days to a more collaborative one. Initially, researchers were critical of platforms' lack of transparency regarding issues like the Internet Research Agency's organic reach. This led to calls for public hearings. Over time, platforms developed internal investigation teams that sometimes shared vetted data with researchers, enabling joint investigations. DiResta believes this collaboration, with academics conducting independent analyses and publishing reports, was an ideal working relationship, offering transparency and accountability that unaccountable private power alone could not provide.
DiResta argues against takedowns, except for inauthentic accounts, due to the "Streisand effect," where removing content can make it more interesting and cast it as "forbidden knowledge." She advocates for transparency and user agency, giving users more control over their experience. Her personal theory, articulated in a 2018 article co-authored with Aza Raskin, "Freedom of Speech, Not Freedom of Reach," holds that curation matters more than takedowns. Since every piece of content in a feed is ranked, there is no neutral presentation. This immense power of ranking, she believes, should be devolved to users, or at least exercised transparently with a right to appeal.
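Her observation that "there is no neutral presentation" is easy to see in code: any feed must order items somehow, and the ordering function embeds choices. The sketch below, with hypothetical signals and weights, shows what devolving ranking power to users could look like: the weights become a user-facing setting rather than a hidden platform decision.

```python
from typing import NamedTuple

class Post(NamedTuple):
    text: str
    recency: float      # 0..1, newer is higher
    engagement: float   # 0..1, normalized likes and shares
    from_follow: float  # 1.0 if from a followed account, else 0.0

def rank(posts: list[Post], weights: dict[str, float]) -> list[Post]:
    """Order a feed by a weighted score. Even 'reverse chronological' is a
    ranking: weights = {'recency': 1, 'engagement': 0, 'from_follow': 0}."""
    score = lambda p: (weights["recency"] * p.recency
                       + weights["engagement"] * p.engagement
                       + weights["from_follow"] * p.from_follow)
    return sorted(posts, key=score, reverse=True)

posts = [Post("breaking rumor", recency=0.9, engagement=0.95, from_follow=0.0),
         Post("friend's update", recency=0.5, engagement=0.10, from_follow=1.0)]

# Hypothetical user-chosen setting: de-emphasize raw engagement.
for p in rank(posts, {"recency": 0.5, "engagement": 0.1, "from_follow": 0.4}):
    print(p.text)
# friend's update
# breaking rumor
```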
She also advocates for more platforms, allowing individuals who are moderated or deplatformed to find alternative communities whose rules and values they find palatable. This has led to the emergence of platforms like Parler, Truth Social, and Rumble on the right, and later Bluesky, Threads, and Mastodon for left-leaning users after Elon Musk acquired Twitter (now X).
DiResta highlights the echo chambers that such platform proliferation can produce. She notes that the intentional reframing of moderation as censorship began on the right in 2018 and by 2020 had expanded to cast even content labels as censorship. She believes the question of depolarizing society is larger than tech. From a design standpoint, she suggests "bridging-based recommenders," which prioritize content liked by divergent publics, thereby reducing the amplification of "rage bait" and content from "rage entrepreneurs."
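A toy illustration of the bridging idea, under the simplifying assumption that users are already partitioned into two divergent publics: score each item by its minimum approval rate across the groups, so only content both sides like gets amplified, while rage bait that delights one side and enrages the other scores near zero. The data and scoring rule here are illustrative, not any platform's actual algorithm.

```python
def bridging_score(votes_a: list[int], votes_b: list[int]) -> float:
    """Approval rate within each public (1 = like, 0 = dislike); an item
    scores well only if BOTH divergent groups rate it positively."""
    rate = lambda votes: sum(votes) / len(votes) if votes else 0.0
    return min(rate(votes_a), rate(votes_b))

items = {
    "local news explainer": ([1, 1, 0, 1], [1, 0, 1, 1]),  # liked by both publics
    "rage bait":            ([1, 1, 1, 1], [0, 0, 0, 0]),  # thrills one, enrages the other
}

for name, (group_a, group_b) in items.items():
    print(name, bridging_score(group_a, group_b))
# local news explainer 0.75
# rage bait 0.0
```

Contrast this with an engagement-maximizing score such as the sum of all votes: the rage bait would win, because outrage is still engagement.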
Regarding platforms' responsibility, DiResta observes a period from 2018 to 2022 where platforms publicly articulated moral arguments for their content policies, balancing free expression with the responsibility to provide accurate information on health and elections. However, this moral language diminished rapidly after 2022, following investigations and political pressure. She acknowledges high-profile mistakes, such as the throttling of the Hunter Biden laptop story, which, while a "bad call," was also "Streisanded to all hell" and not as impactful as the media circus suggested.
The political climate shifted significantly after the House flipped in the 2022 midterms, and the new majority's "weaponization committee" began investigating claims of government-directed censorship. DiResta explains how a narrative of mass censorship, originating from a single blog post, was laundered through various media outlets. This led to subpoenas for researchers, including the Stanford Internet Observatory, demanding years of their communications with platforms and the executive branch. The resulting "chilling effect" prompted a profound and nearly instantaneous retreat: the parties involved ceased communicating with one another out of liability concerns.
DiResta criticizes the collective failure to publicly defend the researchers' work against these accusations, particularly the demonstrably false claim that SIO censored 22 million tweets. She emphasizes the importance of requesting corrections and actively combating misinformation, drawing a parallel to the researchers' own advice to election officials not to let rumors ossify.
She discusses the ongoing lawsuits against her and others, which she believes are designed to silence them. She argues that while litigation proceeds slowly, a defense in the court of public opinion is also needed. She points out that congressional reports claiming government censorship often rest on misinterpretations or selective readings of documents, and that LLMs now repeat these reports as fact even when the reports' own extensive appendices contradict them.
Regarding the impact on platforms, DiResta notes that while platforms are generally winning legal cases asserting their First Amendment right to moderate, they have retreated from publicly defending their choices and the ethics behind them. This, she argues, is a response to political pressure from both the left and right.
Looking to the future, DiResta sees the dismantling of collaboration between researchers and election officials as a major concern, with universities becoming wary of the liability associated with election work. On the issue of deepfakes and generative AI, she believes these technologies will further erode public trust by making it harder to discern what is real, creating a "significant degree of distrust." She views AI as an extension of propaganda, capable of diminishing confidence in authentic content and creating convincing fakes. While these technologies have not yet had a "profound and determinative effect" on elections, she notes instances of fake leaked audio affecting votes.
Regarding tools for average citizens to navigate misinformation, DiResta states there are "not very many." She suggests that content provenance tools, which track the origin and editing history of media, could help restore trust in certain outlets. This would involve credentialing "good stuff" rather than constantly detecting and correcting "bad stuff." However, she acknowledges a "weird intermediate period" where access to such tools will be uneven, leading to challenges in discerning truth, particularly in less-resourced areas. Ultimately, she believes people will rely on their "communities of trust" to decide what to believe.
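The provenance idea can be sketched as a hash chain over a file's edit history: each record commits to the hash of the previous one, so tampering anywhere breaks verification from that point forward. This toy is not an implementation of a real standard such as C2PA, which additionally uses cryptographic signatures to bind records to credentialed tools and outlets.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def record_edit(history: list[dict], action: str, media_hash: str) -> list[dict]:
    """Append an edit record that commits to the previous record's hash."""
    prev = _digest(history[-1]) if history else "genesis"
    return history + [{"action": action, "media_hash": media_hash, "prev": prev}]

def verify(history: list[dict]) -> bool:
    """Re-derive each link; a tampered record breaks the chain at the next link."""
    prev = "genesis"
    for record in history:
        if record["prev"] != prev:
            return False
        prev = _digest(record)
    return True

# Hypothetical history for a photo, from capture through one edit.
h = record_edit([], "captured", media_hash="sha256:ab12")
h = record_edit(h, "cropped", media_hash="sha256:cd34")
print(verify(h))               # True
h[0]["action"] = "generated"   # tamper with the origin record
print(verify(h))               # False
```

In DiResta's framing, the payoff is credentialing the "good stuff": an outlet that publishes intact provenance chains gives its audience a positive signal of authenticity, rather than forcing them to detect fakes.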