
Vercel Hacked: A Simple Failure of OAuth Hygiene | THREAT WIRE
AI Summary
This week's cybersecurity news roundup, hosted by Ali Diamond on ThreatWire, covers several significant events and trends.
A major data breach at Vercel, stemming from a security incident at Context.ai, has raised concerns about OAuth token management and shadow IT. Context.ai, a company that builds AI agents for specific industries such as semiconductors and legal, experienced unauthorized AWS access. Context.ai brought in CrowdStrike for remediation, but Vercel, whose employees used Context.ai, was affected without knowing it. The compromise occurred when an attacker used a leaked OAuth token, originally issued to connect a Vercel employee's Google Workspace account to Context.ai, to gain access to Vercel's Google Workspace. Because the token had been granted "allow all" permissions, the attacker could then reach Vercel's environment variables. Vercel's CEO stated that only non-sensitive environment variables were compromised and that a small number of customers were affected, attributing the attack's speed to AI. The host counters that this was primarily a failure of OAuth management and shadow IT on Vercel's part rather than an AI-driven attack, and highlights the need for better employee education on OAuth tokens and the permissions they grant.
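The "allow all" grant described above is the kind of thing a simple scope audit can catch. The sketch below is purely illustrative, assuming a hypothetical inventory of third-party OAuth grants; the app name, scope policy, and `audit_grant` helper are inventions for the example, not Vercel's or Context.ai's actual tooling (the two scope URLs shown are real Google scope strings for full Gmail and full Drive access).

```python
# Hypothetical sketch: flagging over-broad OAuth scopes in a grant inventory.
# The policy set and app names are illustrative assumptions.

# Scopes considered too broad for a third-party integration (illustrative).
OVERLY_BROAD = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
}

def audit_grant(app_name, scopes):
    """Return warnings for scopes that exceed least privilege."""
    warnings = []
    for scope in scopes:
        if scope in OVERLY_BROAD or scope.endswith(".full_access"):
            warnings.append(f"{app_name}: over-broad scope granted: {scope}")
    return warnings

# Example inventory: one integration with a narrow scope and one broad one.
grants = {
    "context-ai-integration": [
        "https://www.googleapis.com/auth/userinfo.email",
        "https://mail.google.com/",
    ],
}

for app, scopes in grants.items():
    for warning in audit_grant(app, scopes):
        print(warning)
```

In practice a review like this would pull live grant data from the Workspace admin API rather than a hard-coded dict, but the point stands: a token scoped to read one mailbox cannot be replayed into an environment-variable leak the way an "allow all" token can.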
The discussion then pivots to the broader implications of AI in cybersecurity and data collection. The host poses a question to the audience: do they consider Context.ai a form of spyware, and do they allow such data collection on their internal systems? This is framed against the rapid emergence of AI companies and questions about their security practices, particularly concerning sensitive systems and keylogging-adjacent information collection.
The topic of age verification, previously discussed, is revisited. Many comments suggest that government mandates for age verification are less about child protection and more about internet censorship and control. A viewer, Adam JL7i, commented that any age verification laws should include stipulations preventing companies from harvesting that data, emphasizing it should be confirmation-only. The host agrees, drawing a parallel to the need for guarantees that AI systems like Context.ai won't sell collected data. The host shares a personal anecdote about forming an LLC and subsequently being inundated with solicitations for business, expressing surprise that the government allowed this level of data sharing. The host also clarifies their previous stance on internet age restrictions, explaining it stems from personal experience with the negative impact of unmitigated internet access on mental health and focus, likening it to addiction.
The episode then highlights advancements in AI cybersecurity capabilities. The AI Security Institute in the UK has tested Claude Mythos, deeming it the most impressive AI model for cybersecurity capabilities they've seen since 2023. Claude Mythos demonstrated significant success in expert-level Capture The Flag (CTF) challenges, outperforming previous models. It also significantly outperformed other models in a complex corporate network attack simulation called "the last ones." OpenAI has also released its own top-of-the-line cybersecurity-focused model.
A major shift in NIST's approach to the National Vulnerability Database (NVD) is reported. Facing an overwhelming influx of Common Vulnerabilities and Exposures (CVEs), NIST is de-prioritizing enrichment of CVE submissions. Enrichment is not stopping entirely; NIST will now prioritize CVEs that appear in CISA's Known Exploited Vulnerabilities (KEV) catalog, affect federal government software, or cover critical software. NIST will also no longer assign official severity scores, relying instead on scores supplied by the CVE Numbering Authority. Backlogged CVEs have been moved to a "not scheduled" category. The host notes that finding a CVE was once treated as a minimum requirement for cybersecurity jobs, and that CVE volume has been rising with or without AI.
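The triage order described above can be sketched as a simple decision function. This is a minimal illustration of the stated rules, not NIST's actual pipeline; the record field names (`federal_software`, `critical_software`) and label strings are assumptions made for the example.

```python
# Hypothetical sketch of the NVD enrichment triage order: KEV-listed CVEs
# first, then federal government software, then critical software, and
# everything else lands in the "not scheduled" backlog.

def triage(cve, kev_ids):
    """Classify a CVE record under the stated prioritization rules."""
    if cve["id"] in kev_ids:
        return "priority: known exploited (KEV)"
    if cve.get("federal_software"):
        return "priority: federal government software"
    if cve.get("critical_software"):
        return "priority: critical software"
    return "not scheduled"  # backlog: no enrichment scheduled

# Example run over a few illustrative records.
kev_catalog = {"CVE-2026-0001"}
for record in [
    {"id": "CVE-2026-0001"},
    {"id": "CVE-2026-0002", "critical_software": True},
    {"id": "CVE-2026-0003"},
]:
    print(record["id"], "->", triage(record, kev_catalog))
```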
The trend of companies moving towards closed-source models due to AI security concerns is also mentioned, with Cal.com announcing its shift from open source to closed source for this reason. The host questions whether this will become a more widespread trend.
Other cybersecurity news includes:
* Zuma releasing a feature to distinguish between humans and AI using iris-scanning technology from Sam Altman's World (formerly Worldcoin).
* Trail of Bits publishing a blog post on how they bypassed Google's zero-knowledge proof for quantum cryptanalysis.
* Meta partnering with PortSwigger to provide Burp Suite Professional licenses to select bug bounty participants.
* Google using Gemini AI to remove over 602 million scam ads.
The host concludes by thanking viewers for watching ThreatWire for the week of April 20th, 2026, and encourages likes, comments, and subscriptions to help the channel reach one million subscribers. They also mention a refresh of the show's look and ask for feedback on it in the comments. The host can be found online at "endingwithali."