In a single 48-hour window this week, three of the largest enterprise technology vendors on the planet each launched products aimed at the same problem: the AI agents running inside your organization are unsecured, ungoverned, and multiplying faster than any security team can track.
IBM announced Autonomous Security, a multi-agent cybersecurity service that deploys AI agents to hunt, contain, and remediate threats at machine speed. OpenAI shipped a major Agents SDK update introducing sandboxing and a new harness architecture that isolates agent execution environments. And Okta unveiled its blueprint for the secure agentic enterprise, built around treating every AI agent as an independent identity that must be discovered, authenticated, and governed.
These are not incremental product updates. They represent a convergent market signal: the enterprise AI security stack is being built right now, in real time, and the vendors building it believe the window for getting this right is closing fast.
The data supports their urgency. According to the latest industry surveys, 88% of organizations have already reported confirmed or suspected AI agent security incidents. Only 34% have AI-specific security controls in place. And 80% of IT professionals say they have personally witnessed an AI agent perform an unauthorized or unexpected action in a production environment.
The gap between those numbers is not a governance discussion. It is an active threat surface. And this week, the industry started building toward a response.
The Problem: Agents Are Proliferating Faster Than Controls
To understand why three major vendors converged on the same problem in the same week, you need to understand the velocity of what is happening inside enterprises right now.
Gartner projects that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. That is an eight-fold increase in twelve months. The global agentic AI market has crossed $10 billion. Nearly every large enterprise surveyed — 96%, according to OutSystems — is already running AI agents in some production capacity.
But the security infrastructure has not kept pace. A Dark Reading poll found that 48% of cybersecurity professionals now identify agentic AI as the single most dangerous emerging attack vector, ahead of ransomware, supply chain compromise, and cloud misconfiguration. Bessemer Venture Partners, citing IBM's 2025 Cost of a Data Breach Report, notes that shadow AI breaches cost an average of $4.63 million per incident — $670,000 more than standard breaches — because the blast radius is harder to contain when the organization does not know the agent exists.
The fundamental challenge is architectural. Traditional security controls were designed for a world where humans initiated actions, applications executed them, and logs captured both. AI agents break every assumption in that chain. They initiate actions autonomously. They spawn sub-agents that inherit permissions without explicit grants. They interact with APIs, databases, and external services through tool-calling interfaces that bypass conventional access control models. And they do all of this at machine speed, generating volumes of activity that overwhelm human review.
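The sub-agent problem above is worth making concrete. The fix most identity vendors converge on is attenuated delegation: a spawned sub-agent may hold at most the intersection of the scopes it requests and the scopes its parent already holds, never an implicit superset. The sketch below is illustrative only; the names and structure are hypothetical, not any vendor's API.

```python
# Illustrative only: a minimal attenuated-delegation check. A sub-agent
# never inherits more than the intersection of what it requests and
# what its parent actually holds.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str
    scopes: frozenset[str]
    children: list["AgentIdentity"] = field(default_factory=list)

    def spawn(self, name: str, requested: set[str]) -> "AgentIdentity":
        # Attenuate: grant only scopes the parent itself already holds.
        granted = frozenset(requested) & self.scopes
        child = AgentIdentity(name, granted)
        self.children.append(child)
        return child


root = AgentIdentity("billing-agent", frozenset({"crm:read", "invoices:write"}))
helper = root.spawn("export-helper", {"crm:read", "db:admin"})  # db:admin denied
print(sorted(helper.scopes))  # ['crm:read']
```

The point of the sketch is the default direction: without an explicit rule like this, permission flows downward automatically, and a chain of sub-agents quietly accumulates the union of everything its ancestors could touch.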
McKinsey's internal AI platform, Lilli, demonstrated this attack surface during a controlled red-team exercise when an autonomous agent achieved broad system access in under two hours. That was a friendly test with guardrails. In production, with real adversaries, the timeline compresses further.
This is the context that made this week's announcements inevitable.
IBM Autonomous Security: Fighting Agents With Agents
IBM's approach is the most architecturally ambitious of the three. Autonomous Security is not a dashboard or a policy engine. It is a multi-agent service — a fleet of specialized AI agents designed to operate across an organization's entire security stack, coordinating decision-making, detection, and response at machine speed.
Mark Hughes, IBM's Global Managing Partner of Cybersecurity Services, framed it bluntly: "Frontier models are creating a new category of enterprise threat. AI-powered offense demands AI-powered defense."
The service is built around several core capabilities. IBM's AI agents analyze software exposures and runtime environments to map exploit paths — not just known vulnerabilities, but the chains of weaknesses that an attacker or rogue agent could traverse to escalate privilege or exfiltrate data. They enforce security policies across the full stack, including identity, governance, and risk systems. They detect anomalies in real time and initiate containment with minimal human intervention.
The "vendor-agnostic" positioning is deliberate and important. IBM is betting that enterprises will not consolidate their security tools around a single vendor any time soon — they have too many existing investments, too many point solutions, and too many compliance requirements tied to specific products. Instead, IBM is positioning Autonomous Security as the coordination layer that sits above the existing stack and orchestrates a unified response.
For CISOs, the practical implication is significant. IBM is not asking you to replace your SIEM, your EDR, your CSPM, or your identity provider. It is offering to deploy AI agents that learn how those tools work together — and how they fail together — and then close the gaps between them autonomously.
The complementary piece is a new cybersecurity assessment from IBM Consulting, designed to evaluate enterprise readiness for agentic threats. This is smart sequencing: before you deploy AI-powered defense, you need to understand where your current defenses are blind to AI-powered offense. The assessment maps security gaps, policy weaknesses, AI-specific exposures, and potential exploit paths, providing the baseline that Autonomous Security then acts on.
What IBM is not yet disclosing is pricing, availability timeline, or specific technology partners — all of which will matter enormously when enterprises start evaluating this against their existing MSSP relationships and internal SOC capabilities.
OpenAI Agents SDK: Sandboxing the Build Layer
OpenAI's update targets a different layer of the problem — not the security of agents already in production, but the safety of agents being built.
The core addition is sandboxing: the ability to run agents in isolated compute environments where they can access files, execute code, and use tools only within defined boundaries. Think of it as containerization for AI agents. The agent operates in a silo, and the silo defines what it can see, touch, and modify.
Karan Sharma from OpenAI's product team explained the design philosophy: "This launch, at its core, is about taking our existing Agents SDK and making it so it's compatible with all of these sandbox providers." The SDK integrates with infrastructure partners including Cloudflare, Vercel, E2B, and Modal, and developers can bring their own sandbox implementations.
The second addition is the harness architecture, which OpenAI defines as everything about an agent besides the underlying model — the orchestration logic, tool definitions, memory management, and deployment configuration. By standardizing the harness as a first-class concept, OpenAI is enabling what it calls "long-horizon agents" — complex, multi-step workflows that run over extended periods and need consistent, auditable behavior throughout.
For enterprise development teams, the practical value is straightforward. Before this update, building a production-grade agent on OpenAI's platform required developers to implement their own isolation, their own permission boundaries, and their own execution controls. That meant every team built security differently — or, more commonly, did not build it at all. The SDK update moves security from an afterthought to a default.
The strategic significance is larger. Enterprise customers now account for more than 40% of OpenAI's revenue, a share on track to reach parity with consumer revenue by year-end. OpenAI needs enterprise customers to trust that agents built on its platform will not become liabilities. The sandboxing update is as much a trust-building exercise as it is a technical one.
The current limitation: Python-first, with TypeScript support coming later. For organizations with polyglot development environments, this creates a temporary adoption constraint.
Okta: Treating Every Agent as an Identity
Okta's contribution may be the most conceptually important of the three, because it addresses the foundational question that IBM and OpenAI largely leave implicit: who — or what — is the agent?
The core thesis is that every AI agent should be treated as an independent, identity-bearing entity within your organization's identity fabric. Not as a service account. Not as an API key. Not as an extension of the human who deployed it. As its own identity, with its own lifecycle, its own access policies, and its own kill switch.
The statistics that motivated this approach are damning. Okta's research found that while 88% of organizations report suspected or confirmed AI agent security incidents, only 22% treat their agents as independent identities. The rest rely on shared credentials, inherited permissions, or — in the worst cases — hardcoded API keys that grant broad access with no audit trail.
Okta for AI Agents, launching April 30, 2026, is built around three questions that every CISO should be able to answer and almost none currently can:
Where are my agents? The platform includes shadow agent discovery — automated detection of both sanctioned AI agents deployed through official channels and unsanctioned agents created by employees using consumer AI tools, personal API keys, or unapproved platforms. This is the AI equivalent of shadow IT discovery, except the blast radius of an unmanaged AI agent is orders of magnitude larger than an unmanaged SaaS subscription.
What can they connect to? Okta is extending its Integration Network — which already includes 8,200+ application integrations — to cover AI agent platforms including Boomi, DataRobot, and Google Vertex AI. The Agent Gateway provides a centralized control plane for managing what resources each agent can access, including MCP servers, APIs, databases, and third-party tools.
What can they do? This is where the architecture gets granular. Okta's framework authorizes individual tool calls — not just whether an agent can access a system, but whether it can perform specific operations within that system. Privileged credentials are managed and rotated automatically. And Universal Logout provides an instant kill switch to revoke all agent access simultaneously when an incident is detected.
Ric Smith, Okta's President, summarized the positioning: "Speed is now a given, but security is the differentiator."
For enterprises that have already invested in Okta for human identity management, extending the same framework to non-human identities is a natural and relatively low-friction path. For enterprises on competing identity platforms, Okta is making a competitive land-grab — and the first vendor to establish agent identity as the default standard will own a critical layer of the agentic enterprise stack.
What Gartner Is Telling CISOs
Gartner's contribution to this moment came slightly earlier but provides the analytical framework for understanding what IBM, OpenAI, and Okta are each building toward.
In March 2026, Gartner published its first-ever Market Guide for Guardian Agents — a new product category defined as AI agents that supervise other AI agents, ensuring that their actions align with organizational goals and governance boundaries. The report identifies three mandatory capability areas: AI visibility and traceability, continuous assurance and evaluation, and runtime inspection and enforcement.
The critical recommendation is worth quoting directly: organizations need "a neutral, trusted guardian agent layer with multiple guardian agents performing separate but integrated oversight functions" that "enforces routing across all providers."
Translation: governance cannot remain platform-native. If your IBM agents are governed by IBM, your OpenAI agents by OpenAI, and your Okta agents by Okta, you have rebuilt the same fragmentation problem in a new domain. The guardian layer must be enterprise-owned, sitting above the individual platforms and enforcing consistent policies across all of them.
Gartner also predicted that AI applications will drive 50% of cybersecurity incident response efforts by 2028 — a figure that implies the guardian agent market is not optional infrastructure but load-bearing infrastructure that the entire security program will eventually depend on.
The Three-Layer Stack Taking Shape
Step back from the individual announcements and a coherent architecture becomes visible. What is emerging is a three-layer enterprise AI security stack:
Layer 1 — Build-time safety (OpenAI, Anthropic, Google). Sandboxing, harness isolation, permission boundaries baked into the development SDK. This is where you prevent agents from being built wrong in the first place.
Layer 2 — Identity and access (Okta, Microsoft Entra, CyberArk). Treating agents as first-class identities with lifecycle management, least-privilege access, credential rotation, and kill-switch revocation. This is where you control what agents can do and who they are.
Layer 3 — Runtime detection and response (IBM, Palo Alto, CrowdStrike). Multi-agent security services that monitor, detect, and remediate threats at machine speed across the full security stack. This is where you catch what goes wrong despite layers one and two.
No single vendor covers all three layers today. The enterprises that navigate this transition successfully will be the ones that architect across all three — just as the ones that succeeded in cloud security were the ones that combined DevSecOps, IAM, and CSPM rather than betting on any single tool.
What This Means for Enterprise Security Leaders
If you are a CISO or security architect processing this week's announcements, here is what matters:
The audit question has changed. The question is no longer "do we use AI agents?" — 96% of enterprises do. The question is: "can we enumerate every agent running in our environment, describe what each one can access, and shut any of them down in under sixty seconds?" If the answer is no, that is your Q2 priority.
Identity is the control plane. The single highest-leverage investment you can make right now is extending your identity infrastructure to cover non-human identities — AI agents, service accounts with agent-like behavior, automated workflows, and MCP-connected tools. Okta's April 30 launch will accelerate this market, but the architectural principle applies regardless of vendor.
Build-time controls are not optional. If your development teams are building agents without sandboxing, isolation, or harness-level governance, every agent they deploy adds unmanaged attack surface. The OpenAI SDK update makes this easier for teams on that platform. For teams using other providers, the requirement is the same: demand equivalent controls.
AI-powered defense is no longer theoretical. IBM's Autonomous Security represents a category shift: using multi-agent AI systems to defend against multi-agent AI threats. The asymmetry between human-speed defense and machine-speed offense was always unsustainable. This week, the industry acknowledged it.
Guardian agents will be mandatory. Gartner does not publish Market Guides for categories it considers optional. If your 2026 security roadmap does not include a guardian agent strategy — even if it is an evaluation and pilot phase — you are already behind the curve.
The 48-hour window in which IBM, OpenAI, and Okta each placed their bets on enterprise AI security was not a coincidence. It was a market responding to the same data: agents are everywhere, security is nowhere near sufficient, and the vendors who build the trust layer will own the next decade of enterprise infrastructure.
The question is not whether your organization will adopt this stack. The question is whether you will build it deliberately — or have it imposed on you by the next incident.
Rajesh Beri is Head of AI Engineering at Zscaler and writes about enterprise AI strategy, security, and the technologies reshaping how organizations build and deploy AI systems.
