
How Bots, Deepfakes and AI Agents Are Forcing a New Internet Identity Layer | Alex Blania on a16z
AI Summary
The concept of "Proof of Human" addresses the increasingly complex challenge of distinguishing human interactions from those of AI agents or bots on the internet. As AI capabilities advance, it becomes crucial to ascertain whether one is interacting with a human, an agent acting on behalf of a human, or merely an autonomous agent. The core problem of "Proof of Human" is ensuring that each individual on a platform has only one account, or a limited number, and remains the sole owner of that account. This requires an initial, privacy-preserving verification and ongoing authentication that the same person controls the account.
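The one-account-per-person requirement can be pictured as an enrollment registry keyed by a privacy-preserving identifier. The sketch below is a deliberately naive stand-in: a salted hash replaces the MPC-based uniqueness check the real system uses, and names like `ProofOfHumanRegistry` and `enroll` are illustrative, not from the project.

```python
import hashlib

class ProofOfHumanRegistry:
    """Naive sketch: at most one credential per unique person."""

    def __init__(self):
        self._enrolled = set()   # identifiers of people already verified

    def enroll(self, biometric_template: bytes):
        # Derive a stable identifier from the biometric template.
        # A real system never hashes raw biometrics like this; uniqueness
        # is checked via MPC over secret-shared iris codes instead.
        identifier = hashlib.sha256(b"registry-salt" + biometric_template).hexdigest()
        if identifier in self._enrolled:
            return None          # this person already holds an account
        self._enrolled.add(identifier)
        return identifier        # credential handle for later re-authentication

registry = ProofOfHumanRegistry()
first = registry.enroll(b"alice-iris-template")
dupe = registry.enroll(b"alice-iris-template")   # second attempt is rejected
print(first is not None, dupe)   # True None
```

The point of the sketch is only the enrollment semantics: the same person presenting twice yields the same identifier and is turned away, while ownership of the returned credential is what ongoing authentication must keep tying back to that person.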
The current online environment, exemplified by platforms like X (formerly Twitter), is inundated with bots. A single human can control thousands or even hundreds of thousands of AI agents, forcing platforms into a constant game of catch-up: they block millions of bots daily, yet this barely scratches the surface. The envisioned future includes a personal AI agent for everyone, acting on the owner's behalf for tasks like posting on social media, but always with the approval of, and a verifiable link back to, its unique human owner.
Proving someone is human is a surprisingly difficult problem, especially given that AI agents can already pass the Turing test and mimic human behavior online. Early approaches included "web of trust" systems, which infer that an account is human from its online behavior and from attestations by already-trusted individuals. This approach was quickly dismissed because AI can replicate any purely digital activity, including creating fake accounts and attesting to other AI entities.
Another approach considered was using government IDs. This was also largely disregarded due to concerns about free speech, loss of anonymity, and the practical limitations of government identity systems, which are not built for global, real-time verification across billions of users in diverse regulatory environments. Even countries with advanced digital infrastructure, like Singapore, represent only a tiny fraction of the global online population.
Biometrics emerged as a more viable, albeit controversial, solution. The initial reaction to biometrics is often negative due to privacy concerns. However, the critical challenge for "Proof of Human" is uniqueness: not just verifying that a person is the same individual who enrolled (one-to-one authentication, like Face ID), but verifying that a new individual has never signed up before (one-to-N authentication, where N is the size of the network). This one-to-N problem demands far more entropy than common biometrics provide; face and fingerprint matching break down beyond tens of millions of users. Iris recognition, which analyzes the fine texture of the iris, the eye's muscular diaphragm, offers sufficient entropy for unique identification at global scale.
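The gap between one-to-one and one-to-N matching can be made concrete with a back-of-the-envelope calculation: if each pairwise comparison has false match rate f, a new enrollee must be compared against all N existing templates, so the chance of at least one false collision is 1 - (1 - f)^N ≈ N·f. A short sketch; the FMR figures below are illustrative assumptions, not measured values for any real system.

```python
def collision_probability(fmr: float, n: int) -> float:
    """P(at least one false match) across n independent comparisons."""
    return 1.0 - (1.0 - fmr) ** n

N = 1_000_000_000                 # target network size: one billion people

# Illustrative per-comparison false match rates (assumptions, not benchmarks):
face_fmr = 1e-6                   # plausible order for face matching at strict thresholds
iris_fmr = 1e-12                  # iris codes carry far more entropy

print(f"face: {collision_probability(face_fmr, N):.3f}")   # ~1.0: dedup breaks down
print(f"iris: {collision_probability(iris_fmr, N):.6f}")   # ~0.001: still workable
```

With a face-level FMR, the expected number of false collisions at a billion users is N·f = 1000, so deduplication is hopeless; with an iris-level FMR it stays well below one, which is the "entropy" argument in numerical form.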
To address the privacy concerns and prevent replay attacks, a custom hardware device called the "Orb" was developed. The Orb uses multiple sensors across the electromagnetic spectrum to prevent spoofing, such as showing a display instead of a real eye. For verification, the Orb checks uniqueness in an anonymous and privacy-preserving manner. It also sends a signed face image to the user's phone for later re-authentication. Re-authentication on newer phones can be done locally against this signed image, but older Android phones, which are more susceptible to deepfake injections into camera streams, may require periodic re-verification with an Orb, perhaps a few times a year.
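The signed-image flow amounts to standard digital-signature verification: the Orb signs a digest of the captured face image, and the phone later checks that signature before trusting a local match. A minimal stdlib-only sketch; because the Python standard library has no asymmetric signing, HMAC with a shared key stands in for the Orb's real public-key signature, and all names here are illustrative.

```python
import hashlib, hmac

ORB_KEY = b"orb-signing-key"      # stand-in: real Orbs would use asymmetric device keys

def orb_sign_image(image: bytes):
    """At Orb verification time: hash the face image and sign the digest."""
    digest = hashlib.sha256(image).digest()
    signature = hmac.new(ORB_KEY, digest, hashlib.sha256).digest()
    return digest, signature

def phone_reauthenticate(stored_digest: bytes, signature: bytes,
                         fresh_capture: bytes) -> bool:
    """Later, on-device: check the Orb's signature, then match the fresh capture."""
    expected = hmac.new(ORB_KEY, stored_digest, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False              # stored reference was not produced by an Orb
    # Placeholder for the actual biometric match against the signed reference;
    # comparing raw hashes only accepts a bit-identical image.
    return hashlib.sha256(fresh_capture).digest() == stored_digest

digest, sig = orb_sign_image(b"face-image-bytes")
print(phone_reauthenticate(digest, sig, b"face-image-bytes"))   # True
print(phone_reauthenticate(digest, sig, b"someone-else"))       # False
```

The design point is that the signature binds the reference image to an Orb session, so local re-authentication on newer phones never needs to contact a server; the deepfake-injection risk on older phones lies in the `fresh_capture` path, which is why periodic re-verification with an Orb is suggested there.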
Privacy is maintained through advanced cryptographic techniques like multi-party computation (MPC) and zero-knowledge proofs (ZKP). When an iris scan is taken, the iris code is broken into multiple pieces and sent to different, independent computers. No single computer or entity holds all the pieces, preventing the reconstruction of an individual's biometric data. These distributed parties then collectively perform a computation to determine uniqueness without ever fully assembling the iris code. A zero-knowledge proof ensures that users can prove their uniqueness to platforms without revealing their identity to the platform or the system itself. This creates a counterintuitive property where biometrics are used, yet anonymity and extreme privacy are preserved.
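The splitting step can be illustrated with XOR-based secret sharing: the iris code is split into shares that each look like uniform random noise, and any proper subset of the shares is statistically independent of the code. The actual uniqueness comparison runs as an interactive MPC protocol over these shares and is not shown; the 3-party split and toy byte-level code below are illustrative assumptions.

```python
import secrets

def split_iris_code(code: bytes, parties: int = 3):
    """XOR secret sharing: split `code` into `parties` shares."""
    shares = [secrets.token_bytes(len(code)) for _ in range(parties - 1)]
    final = list(code)
    for share in shares:                      # final share = code XOR all random shares
        final = [f ^ s for f, s in zip(final, share)]
    shares.append(bytes(final))
    return shares

def reconstruct(shares):
    """XOR of ALL shares recovers the code; any subset reveals nothing."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

iris_code = b"\x1f\x8a\x03\xd4\x77\x21\x9c\x05"   # toy 8-byte code (real ones are ~kilobits)
parts = split_iris_code(iris_code)
print(reconstruct(parts) == iris_code)            # True: all three shares together recover it
print(reconstruct(parts[:2]) == iris_code)        # False (overwhelmingly likely): two shares are noise
```

Each independent operator holds one share, so no single machine can reconstruct the biometric; uniqueness is then decided by a joint computation over the shares, and the zero-knowledge proof lets the user show membership in the verified set without linking that proof to any share or platform account.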
The need for "Proof of Human" extends beyond social media to virtually any online interaction primarily involving humans. Examples include dating apps, where knowing the authenticity of the other person is paramount (Tinder is already piloting a badge for Orb-verified users in Japan). Another crucial area will be video conferencing, as real-time deepfakes become indistinguishable from reality, posing risks for high-value interactions like financial calls where impersonation could lead to massive fraud. Gaming is another sector where players want to ensure they are competing against humans, not superhuman AI, especially in competitive or money-betting scenarios.
Even the entire model for video platforms like YouTube is at risk. With AI generating hyper-scalable content (e.g., hundreds of AI-generated videos per day earning significant income), it becomes critical for platforms to know if content is human-created and if it's being watched by humans or other AIs. Advertisers, in particular, need to know if their ads are reaching human audiences. The creator economy, built on personal relationships between creators and their human supporters, would also be undermined if creators or their audiences were discovered to be bots.
The current state of AI is merely a glimpse of what's to come, with intelligence costs dropping exponentially and AI agent capabilities increasing super-linearly. Within a year or two, AI will be superhuman in many ways, capable of deeply understanding and manipulating human psychology, making it incredibly effective at "programming humans." This makes "Proof of Human" not just useful but essential to identify sophisticated AI-driven propaganda or scams.
The Worldcoin project currently has 18 million verified users and 40 million app users. The immediate focus is on expanding in the US, aiming for 50,000 Orb devices to make verification accessible within 15 minutes for most people. This requires significant investment in device distribution, potentially through partnerships with large retailers or even "Orb on demand" services where a mobile Orb comes to the user. On the platform side, integration with large platforms is expected to drive user adoption. The initial skepticism towards the Orb and the idea of "Proof of Human" has significantly diminished following the widespread impact of ChatGPT and advanced AI models like Claude, making the problem undeniable.
Beyond individual online interactions, "Proof of Human" has profound implications for governance and economic policy. Governments struggle to efficiently distribute money to citizens and combat fraud in social programs. During the COVID-19 stimulus, an estimated $400 billion was stolen, highlighting the need to verify recipients as unique humans. Systems like Social Security and Medicare are riddled with inefficiency and fraud, a problem AI will exacerbate by making fraudulent claims and identity theft (e.g., buying Social Security numbers) massively scalable. A cryptographically strong infrastructure for identifying unique humans, potentially tied to citizenship, is seen as crucial for maintaining democracy and efficiently managing public funds in an AI-dominated future.
While the Orb provides the highest level of verification, other, less accurate methods like "face check" (using phone cameras with MPC for anonymity) and government ID checks (using NFC chips with MPC) are also offered as temporary or supplementary solutions. However, these are acknowledged to be less robust against advanced deepfakes and are primarily seen as transitional until the Orb's widespread adoption. The long-term vision emphasizes the necessity of the Orb-like solution due to its unique ability to provide scalable and secure "Proof of Human."