
Power to Truth: How Big Tech Is Rewriting Reality and Weakening Democracy
AI Summary
Guy Rolnik, a professor at the University of Chicago Booth School of Business and founder of a financial-economics newspaper in Israel, discussed the information ecosystem, media, and social media, with a focus on misinformation. He shared a personal experience in which AI-generated content impersonated him on YouTube and Facebook, promoting fraudulent stock tips. While he initially found it amusing, he quickly realized the gravity of the situation as hundreds of people reached out, some having fallen for the scam and others wanting to join it.
His attempt to report the fraud to Meta's law firm revealed a problematic process. Meta shifted the burden of proof and responsibility onto him, the victim, even though the company was enabling the fraudsters and harming its own users. He described the process as an "ordeal": Meta imposed numerous hurdles for complaints and at one point even logged him out of its platforms. He noted the company's dismissive responses, such as claiming it could not monitor everything on its platforms or that staff "don't work over the weekend," despite Meta being a multi-trillion-dollar company investing billions in AI. Rolnik highlighted the irony of one of the world's most powerful companies claiming it is unable to address impersonation and fraud on its own platforms.
He further explained that Meta's lawyers informed him he could not sue the company in California because of Section 230 of the 1996 Communications Decency Act, which grants platforms immunity for user-generated content. This legal protection, combined with Meta's business model, creates incentives for the company to tolerate fraud against its own users. Rolnik argued that this system is incompatible with democracy: the same incentives and lack of accountability permeate algorithmic design, leading to a breakdown in trust and an absence of corporate governance and regulation.
He asserted that society has ceded control over critical functions to a handful of companies, leading to an epistemic crisis—a total breakdown in the trust and legitimacy of knowledge institutions. The ascent of these institutions, including science, universities, state capacity, and journalism, enabled modern society and liberal democracy. However, five major companies are now destroying this information infrastructure, corrupting and manipulating it, and controlling it for their own interests.
The discussion then turned to the broader implications of AI and cybersecurity. Nicole Perlroth, a cybersecurity expert, identified disinformation as the number-one concern for society, surpassing climate change and cybersecurity itself. Rolnik elaborated that the problem extends beyond misinformation and disinformation: social media and large language models (LLMs) rewire the way people think. Algorithms optimized for engagement and moral outrage rather than truth have created a society full of rage. He noted that younger generations, having grown up with these machines, often do not understand the concept of truth or trust any information, viewing truth as merely a narrative. This creates an electorate susceptible to manipulation, a dictator's dream.
Rolnik cited a New York Times report indicating that Google AI search results systematically contain hallucinations and inaccurate information, with 10% of answers being unreliable. Users, however, often conflate speed with authority and accept these answers without doubt. He emphasized that a small number of individuals inside these opaque companies decide what data to train their models on, thereby shaping the information billions of people receive. He concluded that it is absurd to expect a democracy to thrive while unregulated, concentrated big tech companies control social media, instant messaging, and LLMs. Drawing a parallel to Justice Louis Brandeis's quote about wealth concentration, Rolnik stated that society cannot have both unregulated big tech and democracy.
To address these issues, Rolnik proposed several solutions, cautioning against relying on technology itself to solve problems created by technology. He argued that laws, regulations, and rules are the historical means of solving such societal challenges.
1. **Liability:** Companies with immense power should not be exempt from liability for harm caused on their platforms.
2. **Data Collection:** Regulation is needed on the amount, type, and usage of information collected by these companies, particularly concerning surveillance and targeting.
3. **Know Your Customer (KYC):** Similar to banking regulations, tech platforms should implement KYC policies to prevent anonymous bad actors and bots.
4. **Local Accountability:** Tech companies operating globally should be accountable under local laws, with local executives and directors who can be sued.
5. **Child Protection:** Prohibit children (under 18, or even 21) from using these platforms, as internal company data shows significant harm to their development and well-being. He stressed that allowing children onto these platforms, given the knowingly addictive and manipulative nature of their design, will be viewed in the future as a grave mistake.
Rolnik concluded by asserting that the internal culture of these companies is corrupt, as they are aware of the harm they inflict, especially on children, but prioritize their business model. The discussion underscored the urgent need for societal awareness and robust regulatory frameworks to reclaim control over the information ecosystem and protect democratic values.