Most enterprises think they have AI governance. The data says otherwise.
New research from VentureBeat reveals a stunning disconnect: 72% of organizations claim to have two or more "primary" AI platforms, yet nearly a third have no systematic mechanism to detect AI misbehavior until users or audits surface the problem. We're calling this the "governance mirage"—enterprises believe they have control while lacking the accountability structures, security processes, and unified oversight to actually enforce it.
For technical and business leaders, this isn't just a theoretical risk. With AI-driven attacks up 89% year-over-year and breach costs averaging $4.4 million, the illusion of governance is expensive.
The Strategic Paradox: Building Around Vendor Failures
Mass General Brigham (MGB), Massachusetts' largest employer with 90,000 employees, illustrates the contradiction perfectly. Last year, it shut down uncontrolled internal AI proofs of concept that had proliferated across departments. The strategy? Wait for major software vendors (Microsoft, Epic, Workday, ServiceNow) to deliver on their AI roadmaps.
But even with that vendor-first approach, MGB had to build a custom "skin" around Microsoft Copilot to prevent protected health information (PHI) from leaking back to OpenAI. Now fully scaled to support 30,000 users, this workaround represents the ultimate paradox: leveraging vendor AI while building around its security gaps.
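The article doesn't detail how MGB's "skin" works, but a wrapper of this kind typically sits between users and the vendor API and scrubs sensitive fields before a prompt leaves the enterprise boundary. A minimal sketch of that pattern, with purely illustrative regexes (this is not MGB's implementation, and a production filter would use clinical NER and a far broader pattern set):

```python
import re

# Illustrative PHI patterns only; real filters combine many more patterns
# with machine-learned entity recognition.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact_phi(prompt: str) -> str:
    """Replace anything matching a PHI pattern before the prompt is sent upstream."""
    for label, pattern in PHI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt
```

The key design point is that the redaction runs inside the enterprise perimeter, so the vendor model only ever sees the sanitized prompt.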
CTO Nallan "Sri" Sriraman framed the challenge bluntly: "These vendors are all building agents differently. We have to invest in building a control plane that coordinates and orchestrates all of them. The marketplace is still nascent."
For CFOs and business leaders, this means AI vendor contracts don't guarantee security or interoperability—they guarantee integration work.
The Numbers Don't Lie: Confidence vs. Reality
VentureBeat surveyed enterprise companies with 100 or more employees between January and March 2026, focusing on AI orchestration, security, and governance. The findings expose critical gaps:
- 56% say they're "very confident" they'd detect misbehaving AI
- 30% have no systematic detection until damage surfaces
- 29% cite "no single owner or accountable team" as the biggest governance obstacle
When asked who owns AI governance:
- 43% say a central team
- 23% say it's unclear or contested
- 20% say each platform team governs independently
- 6% say no one has formally addressed it
The disconnect is stark: Most believe they have oversight, but accountability structures don't exist.
According to IBM's 2025 Cost of a Data Breach report, telemetry leakage accounts for 34% of GenAI incidents. Shadow AI incidents—unapproved tools brought in by employees—cost an average of $670,000 more than standard breaches.
The Day Two Problem: When the Bill Comes Due
Brian Gracely, Senior Director at Red Hat, warns that enterprises fall into a "day zero trap." Spinning up AI projects is trivially easy with a credit card and API key. "Day two is when the bill comes due," he said at VentureBeat's Boston event.
Red Hat positions its OpenShift AI as a buffer against single-provider lock-in. Without a platform layer, you're "renting a cage"—speed in pilots hides technical debt that becomes obvious when you try to move AI workloads between platforms.
Gracely shared a revealing example: A senior Red Hat leader contributed to an open-source agent project (OpenClaw) during vacation. Within days, major New York banks contacted Red Hat—they'd discovered 10,000+ employees had independently brought agent-based tools into their infrastructure with zero centralized oversight.
For CIOs and security leaders, this is the reality: employees adopt AI tools faster than governance can scale.
The Security Irony: The Fox Guarding the Henhouse
Here's the most jarring finding: enterprises are using the same providers creating AI risk to manage that risk.
Survey respondents ranked "security and permissions" as their #1 criterion (37.1%) for selecting AI orchestration platforms. Yet 26% use OpenAI as their primary security solution—the same provider whose models create the risks they're securing against.
This isn't necessarily a strategic choice—it's often convenience. Enterprises already using Microsoft Azure's Copilot rely on built-in security features because they're integrated. But this creates single-provider dependency precisely when agents gain power to modify documents, call APIs, and access databases.
The risks include:
- Content injection
- Privilege escalation
- Data exfiltration
- Memory poisoning via persistent agent sessions
When your orchestration, session data, and security live inside a provider's proprietary ecosystem, you lose forensic capability. You're clicking "I agree" on whatever the hyperscaler offers.
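One reason independent instrumentation matters: even a crude, provider-agnostic check can flag the most obvious injection attempts before an agent acts on them. A deliberately naive sketch (marker list is illustrative; real defenses layer classifiers, output filtering, and least-privilege tool permissions):

```python
# Naive illustrative heuristic, not a production defense.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now in developer mode",
)

def looks_like_injection(text: str) -> bool:
    """Flag text containing well-known prompt-injection phrases."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

Because this runs outside the provider's ecosystem, the flagged events remain available for forensics even if the provider's own logs are inaccessible.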
The Path Forward: The Search for a Unified Control Plane
MGB's Sriraman argues the industry needs a "Dynatrace for AI"—a central observability platform providing:
- Model drift detection
- Safety prompting visibility
- Agent behavior analytics
- Privilege escalation alerts
- Forensic logging
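No such "Dynatrace for AI" exists yet, but the privilege-escalation piece can be sketched simply: record every agent action as an event in a unified control plane, then diff requested permissions against granted ones. All names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentEvent:
    """One action recorded by a (hypothetical) unified control plane."""
    agent_id: str
    action: str
    requested_scopes: frozenset
    granted_scopes: frozenset

def privilege_escalation_alerts(events):
    """Return (agent, action, excess scopes) wherever an agent reached beyond its grant."""
    alerts = []
    for e in events:
        excess = e.requested_scopes - e.granted_scopes
        if excess:
            alerts.append((e.agent_id, e.action, sorted(excess)))
    return alerts
```

The same event stream would feed drift detection and behavior analytics; the point is that it lives in one place rather than inside each vendor's silo.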
Without unified oversight, enterprises risk sliding into a "swivel chair" world—jumping between siloed AI tools to complete single workflows, reminiscent of early RPA inefficiency.
The survey shows 34.3% of enterprises use a "hybrid control plane": model provider-native solutions (Copilot Studio, OpenAI Assistants) for some workflows, external tools (LangGraph, custom orchestration) for others. Enterprises trust no single provider enough for full control, yet lack engineering capacity to build entirely from scratch.
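In practice, a hybrid control plane reduces to a routing policy: which workflows may use provider-native tooling and which must stay on externally controlled orchestration. A hypothetical policy sketch (the field names and return labels are illustrative, not from the survey):

```python
def route_workflow(workflow: dict) -> str:
    """Hypothetical routing policy: sensitive or regulated workflows stay on
    the externally controlled orchestrator; routine ones use provider-native tooling."""
    if workflow.get("contains_phi") or workflow.get("sensitivity") == "high":
        return "external"          # e.g. LangGraph or custom orchestration
    return "provider-native"       # e.g. Copilot Studio or OpenAI Assistants
```

Making the policy explicit, rather than letting each team choose ad hoc, is what separates a hybrid strategy from platform sprawl.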
Sriraman insists any legitimate control plane must include a kill switch: "We need a big red button. Kill it. Without that, don't put anything in the operational setting." OWASP's security community has formalized this recommendation as part of their 2026 framework for agentic applications.
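The "big red button" Sriraman describes amounts to a central flag that every agent action must check before executing. A minimal sketch of that gate (class and method names are my own, not from OWASP or MGB):

```python
import threading

class KillSwitch:
    """A central 'big red button': once tripped, every guarded agent action is refused."""

    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        """Halt all agent activity, immediately and irreversibly for this process."""
        self._tripped.set()

    def guard(self, action, *args, **kwargs):
        """Run an agent action only if the switch has not been tripped."""
        if self._tripped.is_set():
            raise RuntimeError("kill switch tripped: agent actions halted")
        return action(*args, **kwargs)
```

The hard part in production isn't the flag itself but ensuring every agent path actually routes through it, which is exactly why it belongs in a shared control plane rather than in each vendor's tooling.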
What Leaders Should Do Now
For CIOs and CTOs:
- Audit actual governance accountability—who owns AI security decisions?
- Map your AI platforms—if you have multiple "primary" platforms, you have platform sprawl
- Implement systematic detection before incidents surface via users
- Require kill-switch capability for any production AI deployment
- Build platform layers (OpenShift AI, LangGraph, custom orchestration) to avoid lock-in
For CFOs and Business Leaders:
- Budget for integration and orchestration—vendor AI isn't turnkey
- Track shadow AI costs—employees adopt tools faster than governance scales
- Quantify breach risk—$4.4M average, $670K premium for shadow AI incidents
- Evaluate vendor contracts for interoperability, not just features
- Require single-pane visibility across all AI platforms
For CISOs:
- Don't rely solely on vendor-native security—you're asking the fox to guard the henhouse
- Implement independent security instrumentation
- Monitor telemetry leakage (34% of GenAI incidents)
- Track privilege escalation and session persistence risks
- Build forensic logging outside provider ecosystems
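Forensic logging outside the provider ecosystem is only useful if the log itself is tamper-evident. One common approach, sketched here with hypothetical names, is a hash chain: each entry's hash covers the previous hash, so altering any earlier record breaks every later one:

```python
import hashlib
import json

GENESIS = "0" * 64

class AuditLog:
    """Append-only, hash-chained audit log kept outside any provider ecosystem."""

    def __init__(self):
        self.entries = []
        self._prev = GENESIS

    def append(self, record: dict):
        """Add a record; its hash chains over the previous entry's hash."""
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})
        self._prev = digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry invalidates it."""
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Kept on infrastructure the enterprise controls, such a log preserves forensic capability even when a provider's own telemetry is incomplete or inaccessible.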
The Bottom Line
The "governance mirage" is the belief you can scale AI without deciding who owns control and security.
If you're one of the 72% claiming multiple "primary" platforms, you may not have a strategy—you have a conflict of interest. The winner in the AI infrastructure war won't necessarily be the provider with the best model, but the one that helps enterprises enforce a single version of truth.
Our data suggests enterprises are resisting single-provider lock-in. That resistance needs to become formalized strategy: own your control plane with independent security instrumentation. Don't wait for a vendor to win that role for you.
The marketplace is nascent. Decisions are hard. But governance without accountability isn't governance—it's theater.
Sources:
- VentureBeat AI Impact Research Q1 2026: The AI governance mirage: Why 72% of enterprises don't have the control and security they think they do
- IBM 2025 Cost of a Data Breach: Navigating AI Security Costs
- IBM Shadow Data & AI Report: Hidden Risk: Shadow Data, AI, and Higher Costs
- Wiz Academy AI Security: Generative AI for Cybersecurity
- OWASP Top 10 for Agentic Applications 2026: Security Framework
Related Tools:
- Microsoft Copilot Studio
- OpenAI Assistants
- Red Hat OpenShift AI
- LangGraph
- OpenClaw (open-source agent project)