Decades of Hidden Bugs, Found in Weeks
Anthropic's [Claude](/tools/claude) Mythos Preview found thousands of zero-day vulnerabilities in every major operating system and web browser: bugs that survived decades of human security review and millions of automated tests. Some exploits were developed fully autonomously, with no human help. Project Glasswing: Apple, Google, Microsoft, AWS, NVIDIA, and 40+ orgs get $100M in usage credits to scan critical software defensively before attackers can weaponize the same AI.
On April 7, 2026, Anthropic announced Project Glasswing—an industry consortium bringing together Apple, Google, Microsoft, AWS, Cisco, CrowdStrike, NVIDIA, Palo Alto Networks, JPMorgan Chase, Broadcom, Linux Foundation, and 40+ additional organizations to use Claude Mythos Preview for defensive cybersecurity.
The context: Mythos Preview found thousands of zero-day vulnerabilities (previously unknown flaws) in every major OS and browser. Some bugs had existed for decades, surviving exhaustive human security audits and millions of automated scans. Mythos developed working exploits for many of these vulnerabilities entirely autonomously, without human assistance.
The problem: AI models now surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Ten years after DARPA's Cyber Grand Challenge, frontier AI is competitive with elite security researchers. Without defensive action, offensive AI will proliferate to nation-state actors, criminal groups, and anyone with API access.
The solution: Project Glasswing gives defenders first-mover advantage. Anthropic is committing $100M in Mythos Preview usage credits plus $4M in direct donations to open-source security organizations. Launch partners scan critical infrastructure before attackers can weaponize the same capabilities.
For CISOs, this marks the end of "AI-assisted" cybersecurity and the beginning of AI-native offensive capabilities. The question isn't whether AI will find your zero-days—it's whether defenders or attackers find them first.
What Claude Mythos Preview Can Do
Capabilities demonstrated (official Anthropic disclosures):
Zero-day discovery: Thousands of previously unknown vulnerabilities identified across:
- Every major operating system (Windows, macOS, Linux)
- Every major web browser (Chrome, Safari, Firefox, Edge)
- Critical software infrastructure (unnamed by Anthropic for responsible disclosure)
Autonomous exploitation: Mythos Preview develops working exploits for many vulnerabilities without human guidance. It doesn't just identify bugs—it proves they're exploitable by writing proof-of-concept attacks.
Decade-old bugs found: Some vulnerabilities discovered by Mythos had existed for 10+ years, surviving:
- Continuous human security audits
- Millions of automated security tests
- Bug bounty programs (white-hat hackers incentivized to find bugs)
Why existing defenses failed:
Human limitation: Elite security researchers are rare (estimated <10,000 people globally with the skills to find these bugs). Mythos scales that expertise on demand.
Automated scanner limitation: Traditional tools check for known patterns. Mythos reasons about code semantics, finding novel vulnerabilities that don't match known signatures.
Economic limitation: Manual code review costs $100-$500/hour. Mythos runs 24/7 at API cost (estimated <$1/hour for compute, though Anthropic hasn't disclosed Mythos pricing).
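To make the scanner limitation concrete, here is a toy illustration (not Anthropic's tooling): a signature-based scanner that greps for known-dangerous calls, run against a function whose path-traversal bug lives entirely in data flow, so no signature fires. Finding it requires reasoning about where user input ends up, which is exactly what pattern matching cannot do.

```python
import re

# Toy signature-based scanner: flags lines matching known-dangerous patterns.
SIGNATURES = [r"\beval\(", r"\bos\.system\(", r"\bpickle\.loads\("]

def signature_scan(source: str) -> list[str]:
    """Return source lines that match any known-bad signature."""
    return [line for line in source.splitlines()
            if any(re.search(sig, line) for sig in SIGNATURES)]

# Vulnerable code: no banned API appears, but user input reaches open()
# unvalidated, so a name like "../../etc/passwd" escapes the reports dir.
VULNERABLE = '''
def read_report(user_supplied_name):
    path = "reports/" + user_supplied_name   # no canonicalization
    with open(path) as f:                    # path traversal possible
        return f.read()
'''

print(signature_scan(VULNERABLE))  # → [] — the scanner sees nothing wrong
```

A semantics-aware model flags `read_report` because it tracks the untrusted value into the filesystem call; the regex scanner reports a clean bill of health.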
The Offensive AI Timeline
2024: AI assists human security researchers (autocomplete for exploit dev)
2025: AI competitive with mid-tier security teams (finds bugs humans miss)
2026 (now): AI surpasses all but elite researchers (autonomous zero-day discovery + exploitation)
2027 (projected): AI capabilities proliferate beyond responsible actors → offensive AI available via dark web, nation-state toolkits
Project Glasswing: The Defensive Strategy
Launch partners (disclosed April 7, 2026):
Tech companies: Apple, Google, Microsoft, AWS, Broadcom, NVIDIA
Security vendors: Cisco, CrowdStrike, Palo Alto Networks
Financial: JPMorgan Chase
Open-source: Linux Foundation
Plus: 40+ additional organizations (not publicly disclosed)
What they get:
Access to Claude Mythos Preview: Unreleased frontier model (not available to general public or enterprise customers)
$100M in usage credits: Anthropic covering API costs for scanning critical software
$4M in direct funding: Donations to open-source security organizations for staffing, infrastructure, remediation
Shared intelligence: Anthropic will publish learnings so entire industry benefits (responsible disclosure timeline, vulnerability patterns, defensive best practices)
What they're scanning:
First-party software: Each partner scans their own products (Microsoft scans Windows, Apple scans macOS/iOS, Google scans Chrome/Android, etc.)
Open-source dependencies: Critical infrastructure software maintained by small teams (OpenSSL, curl, Linux kernel components, web servers, databases)
Supply chain software: Third-party libraries and frameworks embedded in enterprise applications
The defensive playbook:
Scan: Run Mythos Preview against codebase (autonomous vulnerability discovery)
Validate: Confirm bugs are real (Mythos develops proof-of-concept exploits to prove exploitability)
Patch: Fix vulnerabilities before public disclosure
Disclose: Coordinate responsible disclosure (give users time to patch before publishing details)
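The four playbook steps can be sketched as a single loop. The `scan` and `validate` callables below are hypothetical stand-ins for a Mythos-class model, not a real API; the point is the ordering: nothing is disclosed until it has been confirmed and patched.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    component: str
    description: str
    status: str = "reported"   # reported → confirmed → patched → disclosed

def run_playbook(codebase, scan, validate, patch, disclose):
    """Scan → validate → patch → disclose, in that order, per finding."""
    findings = scan(codebase)          # 1. autonomous vulnerability discovery
    for f in findings:
        if validate(f):                # 2. proof-of-concept confirms exploitability
            f.status = "confirmed"
            patch(f)                   # 3. fix before any public disclosure
            f.status = "patched"
            disclose(f)                # 4. coordinated responsible disclosure
            f.status = "disclosed"
    return findings
```

In practice each callable would be a service call (model scan, exploit sandbox, patch pipeline, disclosure workflow); the sketch just encodes the playbook's sequencing.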
The race condition: Defenders need to find and patch bugs before attackers discover and weaponize them. Mythos gives defenders months or years of lead time—but only if they act now.
Why Anthropic Is Restricting Access
Mythos Preview is NOT publicly available, a rare choice among frontier AI labs, which typically release models broadly.
Anthropic's reasoning (from official announcement):
"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely."
The dual-use problem: Same capabilities that help defenders find bugs also help attackers. Unlike most AI risks (which are theoretical), offensive AI cybersecurity risk is immediate and well-understood.
Restricted access criteria:
Glasswing partners: 45+ organizations selected for defensive security work
No public API: Mythos Preview will not be available via Claude.ai or enterprise API
No open-source release: Unlike some frontier models (e.g., Meta's Llama), Mythos will remain proprietary
Gated research access: Security researchers may request access for defensive work (application required)
The timeline constraint: Anthropic estimates offensive AI capabilities will proliferate within months (not years). Defensive action must happen now, while access is still restricted to trusted actors.
The CISO Reality Check
What changed on April 7, 2026:
Before: AI-assisted cybersecurity (tools help human researchers find bugs faster)
After: AI-native offensive capabilities (AI autonomously finds bugs humans would never discover)
Threat model update:
Traditional threat actors:
- Nation-state APTs (Advanced Persistent Threats)
- Organized cybercrime groups
- Individual black-hat hackers
New threat actors (enabled by offensive AI):
- Any organization with API access to frontier models (when capabilities proliferate)
- Automated attack frameworks (AI finds zero-days, develops exploits, launches attacks—no human in loop)
- Supply chain attackers (AI scans open-source dependencies for vulnerabilities at scale)
Defense strategy shifts:
Old strategy: Periodic security audits (quarterly or annual code reviews)
New strategy: Continuous AI-powered scanning (Mythos-equivalent capabilities running 24/7)
Old metric: Time to patch known vulnerabilities (CVE response time)
New metric: Time to discover + patch unknown vulnerabilities (zero-day discovery rate)
Old assumption: Undiscovered bugs are safe (security through obscurity)
New reality: Undiscovered bugs are liabilities (offensive AI will find them, question is when)
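The new metric is straightforward to compute once discovery and patch events are logged. A minimal sketch, with illustrative timestamps (not real incident data):

```python
from datetime import datetime, timedelta

def mean_time_to_patch(findings):
    """Mean interval from (AI) discovery to deployed patch.

    findings: list of (discovered_at, patched_at) datetime pairs.
    """
    deltas = [patched - found for found, patched in findings]
    return sum(deltas, timedelta()) / len(deltas)

# Illustrative data: two zero-days found by continuous scanning.
findings = [
    (datetime(2026, 4, 8), datetime(2026, 4, 11)),   # 3 days to patch
    (datetime(2026, 4, 9), datetime(2026, 4, 16)),   # 7 days to patch
]
print(mean_time_to_patch(findings).days)  # → 5
```

Tracking this number over time (and pushing it toward hours, not weeks) is what "zero-day discovery rate" looks like operationally.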
Decision Framework for Enterprise Security Teams
Immediate Actions (Q2 2026)
Inventory critical software:
- First-party applications (custom-built enterprise software)
- Open-source dependencies (libraries, frameworks, tools)
- Third-party integrations (SaaS vendors, APIs, plugins)
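For the open-source dependency leg of that inventory, a minimal starting point is extracting pinned `name==version` pairs from a requirements-style file. A real inventory should come from an SBOM tool (CycloneDX or SPDX output); this sketch only handles the simple pin format:

```python
def parse_pins(requirements_text: str) -> dict[str, str]:
    """Extract name==version pins, ignoring comments and blank lines."""
    pins = {}
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop trailing comments
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()] = version.strip()
    return pins

# Illustrative requirements file content.
sample = """
# core web stack
openssl-wrapper==1.1.1   # decade-old dependency
requests==2.31.0
"""
print(parse_pins(sample))  # → {'openssl-wrapper': '1.1.1', 'requests': '2.31.0'}
```

The resulting map is the input to everything downstream: it tells you which components to scan and which small-team projects you silently depend on.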
Assess Glasswing eligibility:
- Are you a launch partner? (If yes, apply for Mythos Preview access)
- Do you maintain critical infrastructure? (If yes, contact Anthropic for extended access program)
- Can you partner with a Glasswing member? (e.g., AWS customers may get indirect access via AWS security services)
Budget for AI-powered security:
- Mythos-equivalent tools will emerge (CrowdStrike, Palo Alto, Cisco will integrate offensive AI into products)
- Expect 2-5x increase in security tooling costs (AI-powered scanning more expensive than traditional tools)
- ROI justification: Cost of undiscovered zero-day >> cost of AI-powered discovery
Medium-Term Strategy (H2 2026 - 2027)
Shift from reactive to proactive security:
- Traditional: Wait for CVE disclosure, patch known vulnerabilities
- AI-native: Continuous scanning for unknown vulnerabilities, patch before disclosure
Build or buy offensive AI capabilities:
- Build: Train in-house models for code security (requires ML team + compute budget)
- Buy: Subscribe to security vendor products with embedded offensive AI (CrowdStrike Falcon, Palo Alto Prisma, Cisco SecureX)
Establish responsible disclosure program:
- When your AI finds zero-days in third-party software, coordinate disclosure (don't weaponize, don't hoard)
- Contribute to open-source security (defensive ecosystem benefits everyone)
Long-Term Posture (2027+)
Assume offensive AI is ubiquitous:
- Every threat actor will have access to Mythos-equivalent capabilities
- Undiscovered vulnerabilities in your codebase = active liabilities
- Security posture measured by AI-powered scanning cadence (daily, hourly, continuous)
Invest in secure-by-design development:
- AI-powered code review during development (find bugs before production)
- Formal verification tools (prove code correctness, not just test for known bugs)
- Memory-safe languages (Rust, Go) to eliminate entire vulnerability classes
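The first bullet, AI-powered review during development, can be wired into CI as a pre-merge gate. `ai_review` below is a hypothetical stand-in for whichever scanning service is adopted; the sketch only shows the gating policy:

```python
def merge_gate(diff: str, ai_review) -> bool:
    """Return True if the change may merge; False if it must be fixed first.

    ai_review is assumed to return findings as dicts, e.g.
    [{"severity": "high", "msg": "..."}]; this interface is illustrative.
    """
    findings = ai_review(diff)
    blocking = [f for f in findings
                if f["severity"] in ("high", "critical")]
    for f in blocking:
        print(f"BLOCKED: {f['msg']}")
    return not blocking
```

Running this on every pull request, rather than quarterly, is the concrete form of "security shifts left."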
What This Means for 2026 Budgets
For CISOs:
- Offensive AI is no longer theoretical—Mythos proves capabilities exist today
- Budget for AI-powered security tools (2-5x cost increase vs traditional scanners)
- Prioritize Glasswing partnership or equivalent (defender advantage depends on early access)
For CFOs:
- Security spend will increase 20-50% in 2026-2027 (AI-powered scanning required)
- Cost of inaction: Average data breach cost $4.5M (IBM), zero-day exploits drive breach likelihood up 10x
- ROI: $1M invested in AI security avoids $10M-$50M in breach costs
For CIOs:
- Offensive AI changes software development lifecycle (security shifts left, continuous scanning required)
- Expect vendor consolidation (only large security vendors can afford to build Mythos-equivalent capabilities)
- Open-source security becomes critical dependency (Linux Foundation partnership essential)
For procurement teams:
- Evaluate security vendors on offensive AI capabilities (does vendor have Glasswing access? In-house models? Partnership with Anthropic/OpenAI?)
- Demand proof of AI-powered zero-day discovery (vendor marketing ≠ Mythos-level capabilities)
- Budget for continuous scanning subscriptions (not one-time tools)
Sources:
- Anthropic — Official Project Glasswing announcement (April 7, 2026)
- WIRED — Glasswing analysis, 45+ partner organizations
- The New York Times — Mythos cybersecurity capabilities
- TechCrunch — Glasswing launch details
- CNBC — Restricted access rationale
