How Bots, Deepfakes and AI Agents Are Forcing a New Internet Identity Layer | Alex Blania on a16z
The concept of "Proof of Human" addresses the increasingly complex challenge of distinguishing human activity from that of AI agents or bots on the internet. As AI capabilities advance, it becomes crucial to know whether one is interacting with a human, an agent acting on behalf of a human, or a purely autonomous agent. At its core, "Proof of Human" ensures that each individual on a platform holds only one account, or a limited number, and remains the sole owner of that account. This requires an initial, privacy-preserving verification, followed by ongoing authentication that the same person still controls the account.
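The one-account-per-person idea can be sketched in code. This is a minimal, hypothetical illustration (not Worldcoin's actual protocol, which uses zero-knowledge proofs): it assumes some external verification step yields a stable per-person secret, and the registry stores only a one-way "nullifier" hash of it, so duplicates are detectable without the service learning anyone's identity. The class and method names are invented for this sketch.

```python
import hashlib
import secrets

class ProofOfHumanRegistry:
    """Hypothetical sketch: enforce one account per verified person.

    Assumes an external verification step (e.g., a biometric or
    document check) yields a stable, unique per-person secret. Only
    a one-way "nullifier" hash of that secret is stored, so the
    registry can detect duplicate sign-ups without learning who
    anyone is.
    """

    def __init__(self):
        self._nullifiers = set()   # one-way identifiers already seen
        self._sessions = {}        # account_id -> session token

    @staticmethod
    def _nullifier(person_secret: bytes) -> str:
        # One-way hash: the raw secret is never stored.
        return hashlib.sha256(b"proof-of-human:" + person_secret).hexdigest()

    def register(self, person_secret: bytes, account_id: str):
        """Initial verification: succeeds only once per person.

        Returns a session token, or None if this person already
        registered an account.
        """
        n = self._nullifier(person_secret)
        if n in self._nullifiers:
            return None            # duplicate person: reject
        self._nullifiers.add(n)
        token = secrets.token_hex(16)
        self._sessions[account_id] = token
        return token

    def reauthenticate(self, account_id: str, token: str) -> bool:
        """Ongoing check that the same person still controls the account."""
        return self._sessions.get(account_id) == token
```

In a real deployment the duplicate check would be done with zero-knowledge proofs rather than a plain hash set, but the shape is the same: verify once, then repeatedly confirm that the same human is behind the account.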
The current online environment, exemplified by platforms like X (formerly Twitter), is inundated with bots. A single human can control thousands or even hundreds of thousands of AI agents, forcing a constant catch-up game: platforms block millions of bots daily, yet this barely scratches the surface. The envisioned future includes a personal AI agent for everyone, acting on the human's behalf for tasks like posting on social media, but only with the unique human owner's approval and with that owner's identity still linked to the action.
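The owner-approval model described above can be sketched as follows. This is a hypothetical illustration, not a real platform API: it uses an HMAC as a stand-in for a proper digital signature, and the function names (`approve_action`, `publish`) are invented. The point is that an agent's action is only accepted when it carries the verified owner's explicit approval, and the result stays attributed to that human.

```python
import hashlib
import hmac

def approve_action(owner_key: bytes, action: str) -> str:
    """The verified human signs off on one specific agent action."""
    return hmac.new(owner_key, action.encode(), hashlib.sha256).hexdigest()

def publish(owner_id: str, owner_key: bytes, action: str, approval: str):
    """Platform-side check (sketch): accept an agent's action only if
    it carries a valid approval from the account's human owner, and
    attribute the result to that owner's verified identity.

    Returns the published post, or None if the approval is invalid.
    """
    expected = hmac.new(owner_key, action.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, approval):
        return None                        # unapproved agent action: rejected
    return f"{owner_id}: {action}"         # posted, linked to the human owner
```

Because the approval covers the exact action text, a compromised or overeager agent cannot reuse one approval to post something else; each agent action needs the owner's sign-off.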