Twelve Tech Rivals Unite Behind an AI Too Dangerous to Release
Anthropic's withheld frontier model has found thousands of zero-day flaws across critical software, prompting an unprecedented industry coalition to deploy it under restricted access.
Twelve of the world's largest technology companies — including AWS, Apple, Google, Microsoft and CrowdStrike — have joined a cybersecurity consortium built around an artificial intelligence model that its creator has deemed too dangerous for public release.
Project Glasswing, announced by Anthropic on Monday, will deploy Claude Mythos Preview, an unreleased frontier model, to systematically identify and patch vulnerabilities across widely used software. The model has already discovered thousands of zero-day flaws, including a 27-year-old bug in OpenBSD and a 16-year-old vulnerability in FFmpeg — a multimedia framework embedded in billions of devices — that had evaded five million automated test runs.
Anthropic has committed $100m in usage credits and $4m in donations to open-source foundations including the Linux Foundation and the Apache Software Foundation. All three major cloud providers — AWS, Google Cloud via Vertex AI, and Microsoft's Azure AI Foundry — will offer vetted security teams restricted access to Claude Mythos Preview.
The coalition marks the first time that Amazon, Google and Microsoft have agreed to distribute the same AI model through their competing cloud platforms simultaneously. Cisco, NVIDIA, JPMorganChase, Palo Alto Networks, Broadcom and more than 40 additional organisations have also signed on.
The arrangement reflects a calculation that the cybersecurity threat landscape has shifted faster than any single company can manage. Elia Zaitsev, chief technology officer at CrowdStrike, said the economics of vulnerability exploitation have been fundamentally altered. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI," he said.
Claude Mythos Preview's benchmark performance underscores the gap Anthropic claims to have opened. On CyberGym, a standardised test for AI-driven vulnerability detection, the model scored 83.1 per cent — compared with 66.6 per cent for Anthropic's existing top-tier model, Claude Opus 4.6. On SWE-bench Verified, a software engineering benchmark, it achieved 93.9 per cent against Opus 4.6's 80.8 per cent.
The FFmpeg discovery is particularly striking. The bug had persisted through 16 years of continuous integration testing — automated processes designed precisely to catch such flaws. That an AI model operating outside conventional test frameworks was needed to surface it raises uncomfortable questions about the reliability of existing software assurance methods across the industry.
Among the vulnerabilities found were chained Linux kernel flaws that, when exploited together, enabled full privilege escalation — the kind of attack chain that nation-state actors typically spend months developing manually.
Anthropic has explicitly framed the initiative in geopolitical terms. The company said it is in active discussions with the US government about the model's offensive and defensive capabilities, and named China, Iran, North Korea and Russia as the primary threat actors motivating the project. The decision to withhold Mythos Preview from general availability — even as it is deployed across allied infrastructure — represents a deliberate dual-use calculus: the model is powerful enough to find vulnerabilities at scale, which means it is also powerful enough to create them.
Lee Klarich, chief product officer at Palo Alto Networks, acknowledged the tension directly. "This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities," he said.
To manage these risks, the consortium has proposed an independent third-party governance body to oversee large-scale AI cybersecurity projects. Details of its structure and authority remain undisclosed.
The open-source community stands to benefit disproportionately. Jim Zemlin, executive director of the Linux Foundation, said the initiative addressed a long-standing asymmetry. "In the past, security expertise has been a luxury reserved for organisations with large security teams. Open source maintainers have historically been left to figure out security on their own," he said. The $4m in direct funding, while modest relative to the scale of the problem, represents one of the largest single corporate commitments to open-source security.
The broader implication is structural. If AI models can find critical vulnerabilities that decades of human review and automated testing have missed, the baseline expectation for software security — and the liability framework around it — will need to be rewritten. For now, Anthropic has positioned itself at the centre of that rewrite, holding the most capable tool and deciding who gets to use it.