Why Only 8% of Enterprise AI Projects Actually Deliver ROI

KPMG surveyed 2,110 C-suite leaders and found only 8% see tangible ROI from AI despite $186M average annual spending. The problem isn't technology—it's governance, orchestration, and workforce readiness.

By Rajesh Beri · April 21, 2026 · 6 min read

THE DAILY BRIEF

Enterprise AI · ROI · AI Governance · Digital Adoption · AI Strategy

Companies are spending an average of $186 million annually on AI projects, yet only 8% report seeing tangible return on investment. That's the stark reality from KPMG's Global AI Pulse Q1 2026 survey of 2,110 C-suite leaders across 20 countries. The gap between AI investment and AI outcomes has never been wider—and it's not a technology problem.

The disconnect is structural. While 95% of companies now have an AI strategy and 39% claim they're scaling AI across the enterprise, measurable business value remains elusive for the vast majority. Even more troubling: 77% of employees abandoned their company's AI tools in the past month and returned to manual workflows, according to WalkMe's State of Digital Adoption 2026 report.

The $186 Million Question: Where's the ROI?

KPMG identified just 11% of organizations as "AI leaders"—companies that demonstrate the ability to translate AI investments into measurable outcomes at scale. What separates these leaders from the 89% stuck in pilot purgatory? It's not budget, technology access, or willingness to experiment. The difference comes down to three operational factors: governance structures, orchestration capabilities, and workforce readiness.

The numbers tell the story. Among AI leaders, 82% report meaningful business value from their AI investments. For companies still piloting projects, that number drops to 62%. More critically, AI leaders are 2.5 times more confident in their ability to manage AI-related risks than non-leaders.

"What sets AI leaders apart is that they have a clear link between their AI activity and the business results," said Samantha Gloede, global head of risk services and global trusted AI leader at KPMG International. "They use consistent performance metrics across functions, and they have visibility into impact as systems operate—not just at the end."

The Governance Gap: Multi-Agent Orchestration Requires New Controls

Only 9% of enterprises have orchestrated multiple AI agents across workflows, despite 52% claiming they use AI to automate workflows across functions. This gap reveals the core challenge: most organizations are deploying AI tools in isolated pockets without the governance frameworks needed to coordinate agent decision-making across business functions.

When AI agents operate independently within sales, finance, operations, and customer service, they create value in silos. The exponential gains come from orchestration—when agents share context, coordinate decisions, and execute cross-functional workflows automatically. But orchestration demands governance systems that most enterprises haven't built yet.

"When you start coordinating multiple AI agents across business functions, getting the governance right is both difficult and vital," Gloede explained. "CIOs need to be clear about who owns decisions made by agents because once agents operate across teams, decisions don't sit in one place anymore."

The data backs this up. Among AI leaders, 81% say they have the capabilities and governance to manage AI risk at scale, compared to 63% of non-leaders. Leaders also invest more heavily in compliance systems, cybersecurity controls, and board-level AI expertise—treating governance as an enabler rather than a barrier.

For CIOs, this means building governance ahead of deployment: establishing clear ownership of AI-driven decisions, integrating risk and compliance directly into workflows, and designing adaptive controls that scale with agent proliferation. Real-time monitoring and observability become critical as agents move from task automation to cross-functional orchestration.

The Workforce Reality: Tools Without Training Create Friction, Not Productivity

The average employee loses 7.9 hours every week to software friction—51 working days per year wasted on broken handoffs, workarounds, and tools that don't fit the job. WalkMe's research across 3,750 executives and workers revealed a massive disconnect: 61% of executives trust AI to make operational decisions, while only 9% of workers trust AI for high-impact work. That's nearly a 7x trust gap.
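
The headline figures are easy to sanity-check. A quick sketch, assuming an 8-hour workday and 52 working weeks (the report doesn't state its exact basis, so these are our assumptions):

```python
# Sanity-check the WalkMe friction figures.
# Assumptions (not stated in the report): 8-hour workday, 52 weeks/year.
hours_lost_per_week = 7.9
hours_per_day = 8
weeks_per_year = 52

days_lost_per_year = hours_lost_per_week * weeks_per_year / hours_per_day
print(f"Days lost per employee per year: {days_lost_per_year:.1f}")  # ~51 days

# The executive-vs-worker trust gap
exec_trust, worker_trust = 0.61, 0.09
print(f"Trust gap: {exec_trust / worker_trust:.1f}x")  # ~6.8x, "nearly 7x"
```

Under these assumptions the math checks out: 7.9 hours a week works out to just over 51 working days a year, and 61% versus 9% is a ratio of roughly 6.8.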

The pilot-to-production chasm isn't about technology maturity. It's about people refusing to use tools they don't understand, don't trust, and haven't been trained to integrate into real workflows. Corporate AI training programs focus on completion rates rather than job-specific judgment calls. A recruiter and a finance analyst use the same underlying models in completely different ways, yet most training treats AI adoption as a one-size-fits-all skill.

The companies closing this gap fastest are those measuring AI training effectiveness by what employees stopped doing manually—not by training module completion rates. Leading organizations build role-specific playbooks for high-volume tasks, with worked examples and exact prompts that deliver results. They create internal champions in every function, because workers learn AI from trusted colleagues faster than from any LMS course.

"When you lose 51 days a year per employee to friction, you don't have a productivity problem—you have a design problem," said Ofir Bloch, marketing executive at WalkMe. Multiply 51 days across a workforce of 10,000, and the productivity gains promised to the board start looking like the productivity crisis no one anticipated.

What AI Leaders Do Differently

AI leaders don't treat governance as an afterthought—they build it into how agents are designed and operated from the beginning. This means clear accountability structures, ownership assignments for AI-driven decisions, and control systems embedded in workflows rather than bolted on after deployment.

Three characteristics separate leaders from laggards:

  1. Agent ecosystems over isolated pilots: Leaders orchestrate multi-agent systems that transform business outcomes rather than getting stuck experimenting with individual use cases. They're building "super-agent ecosystems" that coordinate decision-making across sales, operations, finance, and customer service.

  2. Governance systems that scale: Leaders upgrade compliance frameworks, cybersecurity controls, and board-level expertise ahead of AI deployment. They treat governance as a competitive advantage—enabling faster, safer scaling while competitors drown in approval bottlenecks.

  3. Workforce transformation alongside technology: Leaders invest in hands-on learning, sandbox environments, and internal innovation programs that reward employees who develop AI solutions with measurable business impact. They recognize that technology adoption is a people problem, not a training problem.

The Path Forward: Governance, Orchestration, People, in That Order

Enterprises planning to spend $186 million on AI in the next 12 months need to confront an uncomfortable truth: throwing more money at technology won't close the ROI gap. The infrastructure investments are already in place—58% prioritize IT infrastructure spending, and 50% are boosting cybersecurity and data protection.

The bottleneck is organizational. Most companies weren't built to support AI at scale. Their governance structures assume human decision-makers, their training programs optimize for compliance over capability, and their orchestration frameworks don't exist yet.

For CIOs and CFOs making 2026 investment decisions, the priority order is clear:

  1. Build governance frameworks before adding more agents. Define decision ownership, establish cross-functional controls, and implement real-time monitoring systems that track agent behavior and business impact.

  2. Orchestrate existing agents before deploying new ones. The value is in coordination, not proliferation. Nine agents working independently deliver less value than three agents orchestrated across workflows.

  3. Measure workforce friction, not training completion. If your teams are still working manually after AI rollout, your training didn't work. Measure what people stopped doing, not what courses they finished.

The 8% ROI problem isn't a technology problem. It's an operating model problem. And the companies solving it first will own the next decade.

Continue Reading

Looking to implement AI governance frameworks? Learn how leading CIOs are building AI risk management systems that scale.

Struggling with AI adoption across your organization? Discover proven strategies for enterprise AI change management.

Need to justify AI spending to your board? Read our guide on measuring and communicating AI ROI to executives.


Sources:

  • KPMG Global AI Pulse: Q1 2026 Survey
  • WalkMe State of Digital Adoption 2026 Report
  • GovInfoSecurity: "What Enterprise AI Leaders Are Doing Right"
  • Forbes: "Why 77% of Employees Abandoned Enterprise AI Tools"

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe to get enterprise AI insights delivered to your inbox twice weekly.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

