Gartner: AI Winners Spend 4x More on Data, Not Models

Gartner's April 16 report: AI winners invest 4x more in data and governance, not models. Only 39% of tech leaders trust AI ROI. What CIOs must do now.

By Rajesh Beri·April 18, 2026·10 min read

THE DAILY BRIEF

Enterprise AI · Data Strategy · Gartner · AI Governance · CIO · CDAO


Gartner published research on April 16, 2026 that quantifies something every CIO has suspected for two years: the organizations winning with AI are not the ones spending more on models. They are the ones spending up to four times more on the unglamorous data, governance, and change management foundations that sit underneath the models.

For CTOs and CDAOs trying to explain why their generative AI program has produced demos but not revenue, the numbers give you the basis for a defensible budget conversation. For CFOs and boards tired of writing checks for AI initiatives with no measurable outcomes, the report is a map of where your money is probably going wrong — and where to redirect it.

The headline number

Gartner's core finding: organizations that report successful AI initiatives invest up to 4x more (as a percentage of revenue) in foundational capabilities than organizations with poor AI outcomes. Foundational capabilities here are specifically defined as data quality, data governance, AI-ready talent, and change management — not model licenses, not GPUs, not vendor contracts.

The research is anchored to two surveys: 353 data and analytics (D&A) and AI leaders surveyed from November through December 2025, and a separate cohort of 360 IT leaders surveyed in Q2 2025. The samples are global and cross-industry.

The second number is more damning: only 39% of technology leaders are confident that their enterprise's current AI investments will have a positive impact on financial performance. Translation: 61% of tech leaders, the people closest to the implementation, are either unsure or actively skeptical that the current AI spend will pay off.

And a third: only 23% of IT leaders say they are "very confident" in their ability to manage security and governance for generative AI deployments. That's the population running the systems, not the skeptics on the outside.

The upside, when organizations do invest in the foundations: those with the highest maturity of AI-ready D&A capabilities are achieving up to 65% greater business outcomes, measured across revenue growth and cost optimization. That's the ROI gap between winners and losers in concrete terms.

What "AI-ready data" actually means

The phrase "AI-ready data" has been in every vendor deck since early 2024, to the point where it means whatever the vendor selling it needs it to mean. Gartner's Rita Sallam, Distinguished VP Analyst and Gartner Fellow, is more specific. Success requires "new trusted data, context foundations and perceptive intelligence." That maps to a technical stack that most enterprises have not yet built:

  • Trusted data: data with documented lineage, quality scores, and versioning that an agent can reason over without a human in the loop confirming every fact.
  • Context foundations: semantic layers, knowledge graphs, and ontologies that encode what a piece of data means in the business, not just what type it is.
  • Perceptive intelligence: the ability of systems to interpret context across structured and unstructured data and adapt to the business situation rather than the query syntax.
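To make the "trusted data" bullet concrete, here is a minimal sketch of the metadata such an asset would carry so that an agent can decide on its own whether the data is safe to reason over. The field names and thresholds are illustrative assumptions, not part of Gartner's framework or any standard:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "trusted data" asset: lineage, quality
# scoring, and versioning travel with the data itself, so an agent
# can gate its own usage without a human confirming every fact.
@dataclass
class DataAsset:
    name: str
    version: str            # versioned so agents can pin a snapshot
    lineage: list[str]      # documented upstream sources
    quality_score: float    # 0.0-1.0, from automated profiling
    semantic_type: str      # business meaning, not just column type

    def agent_safe(self, min_quality: float = 0.9) -> bool:
        """Usable unattended only if lineage is documented
        and quality is scored above the bar."""
        return bool(self.lineage) and self.quality_score >= min_quality

revenue = DataAsset(
    name="daily_revenue",
    version="2026-04-01",
    lineage=["erp.orders", "crm.accounts"],
    quality_score=0.97,
    semantic_type="recognized_revenue_usd",
)
assert revenue.agent_safe()
```

The design point is that trust is checkable by the consumer, not asserted by the producer.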

The gap between a dashboard-era data warehouse and this stack is substantial. Most Fortune 500 data estates were optimized for BI query performance, not for agent-led reasoning. Column-level lineage exists in pockets; business context lives mostly in Confluence pages and tribal knowledge; and the knowledge graph layer is usually missing entirely.

That's the 4x investment. Rebuilding that stack is expensive, slow, and doesn't produce demo-able outputs for quarterly business reviews. The organizations doing it anyway are the ones Gartner is counting as winners.

The six shifts Sallam names

Gartner frames this as six required shifts for D&A leaders heading into 2027. Each of them has implications worth unpacking for technical and business buyers:

1. Build toward an AI-first approach. The legacy posture is "we have a BI stack and we're adding AI on top." The AI-first posture is "we are building for agents as primary consumers of our data, and humans will use the same substrate." The implication: your data products should be API-first and machine-readable before they're dashboard-first.
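One way to picture "API-first and machine-readable before dashboard-first": the data product's contract is the primary artifact, and both the agent-facing API and the human-facing view derive from it. Everything below (names, endpoint, SLA figures) is an invented illustration:

```python
import json

# Hypothetical data product contract. In an AI-first posture this
# machine-readable descriptor comes first; dashboards are derived views.
data_product = {
    "name": "customer_churn_monthly",
    "owner": "data-platform@example.com",
    "schema": {"month": "date", "segment": "string", "churn_rate": "float"},
    "endpoint": "/api/v1/products/customer_churn_monthly",
    "sla": {"freshness_hours": 24, "availability": 0.999},
}

def serve_to_agent(product: dict) -> str:
    # Agents consume the contract directly as stable, sorted JSON.
    return json.dumps(product, sort_keys=True)

def serve_to_dashboard(product: dict) -> str:
    # The human-readable view is generated from the same substrate.
    hours = product["sla"]["freshness_hours"]
    return f"{product['name']} (fresh within {hours}h)"
```

Both consumers read one source of truth, so the contract cannot drift between them.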

2. Redesign for human-agent collaboration. Sallam notes that high-performing AI teams can be as small as one technical person and one business person working with a fleet of agents. The org-chart implication is significant — the team structure your data org had in 2022, with tiers of analysts supporting senior analysts supporting executives, compresses.

3. Establish context as critical infrastructure. This is the semantic layer / knowledge graph work that most enterprises have been deferring. Gartner is saying the deferral is now the single biggest predictor of AI failure.

4. Scale connected engineering practices. Data engineering, ML engineering, and software engineering have historically lived in separate functions with separate tooling. The shift is to unified platform engineering where the same team owns the pipelines, the model serving, and the agent orchestration.

5. Establish trust-based governance models. Only 23% of IT leaders being "very confident" in GenAI security is a governance crisis, not a tooling gap. Gartner's recommendation is to build governance into the data substrate — access controls, audit trails, policy enforcement — rather than bolting it on at the application layer.
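A toy sketch of what "governance built into the data substrate" means in practice: every read passes through one gate that enforces policy and writes an audit record, including denials. The policies, roles, and dataset names are invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical substrate-level governance: policy check and audit
# trail live at the data layer, not in each application.
POLICIES = {"pii.customers": {"allowed_roles": {"fraud_analyst"}}}
AUDIT_LOG: list[dict] = []

def governed_read(user_role: str, dataset: str) -> bool:
    policy = POLICIES.get(dataset, {"allowed_roles": set()})
    allowed = user_role in policy["allowed_roles"]
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": user_role,
        "dataset": dataset,
        "allowed": allowed,   # denied attempts are logged too
    })
    return allowed
```

Because the gate sits under every consumer, a new agent or app inherits enforcement and auditability for free instead of reimplementing it.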

6. Move beyond ROI to value compounding. Traditional project ROI assumes a one-time delivery and a payback curve. AI capabilities compound — a well-built context layer supports a growing number of use cases at declining marginal cost. The measurement framework has to change.
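The compounding claim in shift #6 is easy to show with back-of-envelope arithmetic. In this sketch (all dollar figures and the cost-decay rate are invented assumptions), the first use case alone looks like a loss against the platform cost, but the portfolio crosses break-even as marginal cost declines:

```python
# Hypothetical numbers illustrating value compounding on a shared
# context layer: each new use case is cheaper than the last.
platform_cost = 1_000_000        # one-time context-layer build
value_per_use_case = 400_000     # annual value each use case returns
first_marginal_cost = 200_000    # cost to stand up use case #1
decay = 0.7                      # each new use case ~30% cheaper

def cumulative_roi(n_use_cases: int) -> float:
    cost = platform_cost
    value = 0.0
    marginal = first_marginal_cost
    for _ in range(n_use_cases):
        cost += marginal
        value += value_per_use_case
        marginal *= decay
    return value / cost

# Judged as a single project, use case #1 fails its business case;
# judged as a platform, the portfolio clears 1.0 and keeps climbing.
assert cumulative_roi(1) < 1.0
assert cumulative_roi(8) > 1.0
```

This is why measuring only the first use case's payback systematically undervalues the foundation spend.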

The most quoted line

Sallam's most quoted line in the report is less technical: "The future is not about replacing humans, but amplifying their ingenuity." That framing matters politically inside large enterprises, because it's the narrative that allows a CIO to request a significant foundation investment without triggering the headcount anxiety that has made AI projects politically fraught.

But read it alongside shift #2 — teams as small as one technical and one business person — and the message is more nuanced. The team structure compresses; the individuals left in those teams are dramatically more leveraged. That is amplification, but it's amplification that changes which jobs exist and which don't.

What this means for enterprise technology buyers

For technical leaders (CIO, CTO, CDAO), the practical implications:

Your vendor selection criteria are probably wrong. Most enterprise RFPs in 2024-2025 evaluated AI vendors on model capability, inference cost, and integration breadth. The Gartner data suggests the more predictive criteria are data governance depth, semantic layer quality, and the vendor's track record on change management and training. Vendors that pitch "just plug in our agent" without asking hard questions about your data substrate are pitching failure.

Your internal budget allocation is probably wrong too. If your AI budget is 80% model / compute / platform licenses and 20% data foundations and change management, you are in the 61% of organizations Gartner is documenting as unlikely to see financial impact. The 4x multiplier suggests the allocation for successful programs looks closer to 40% foundations / 40% talent and change / 20% models and compute.

Governance cannot be downstream anymore. The 23% confidence number means most IT leaders know their governance posture will fail an audit when GenAI moves into regulated workflows. Retrofitting governance after deployment is more expensive and slower than building it into the data layer upfront. The organizations that are winning are the ones that accepted this and rebuilt access, lineage, and policy enforcement at the substrate level.

For business leaders (CFO, CEO, board), the practical implications:

Stop funding AI projects without funding the foundations first. The 39% confidence number from technology leaders is a leading indicator. If the people running the implementation don't believe it will pay off, the odds that the board-level business case holds up are low. The check to write is not the next model license; it's the data governance, quality, and platform engineering investment that makes the next model license productive.

Benchmark your investment mix, not your total. The 4x multiplier is a ratio, not an absolute number. You are not competing with hyperscalers on AI spend. You are competing with your own peers on how much of your AI spend is going into foundations vs. surface features. That's the question to ask at the next board meeting.

Reframe the ROI conversation. Gartner's shift #6 — move beyond ROI to value compounding — is a recognition that the traditional project-based investment case doesn't work for AI capabilities. The alternative is treating the data substrate as a platform investment with expanding returns, measured in how many use cases it supports over time rather than the payback on the first use case.

What the critics will say

Gartner research is Gartner research. The obvious critiques apply: vendor-influenced, US/Europe-centric, based on self-reported survey data, and written from a distinctly enterprise-IT worldview that doesn't map perfectly to AI-native startups or to the open-source / self-hosted side of the market.

A few things make this particular report harder to dismiss than usual:

  • The 353 and 360 sample sizes are large enough for inter-group comparisons rather than just directional claims
  • The split between the 39% confidence number and the 65% outcome gap gives you leading and lagging indicators in the same study
  • The specific shift toward context and semantic layers aligns with what private market signals are showing — TextQL's April 17 Blackstone round, the Snowflake-OpenAI $200M partnership in February, and the Qlik-ServiceNow partnership announced April 13 all point at the same architectural pattern

When Gartner, private capital, and the hyperscaler platform roadmaps all point the same direction in the same quarter, the direction is worth taking seriously even if you discount the Gartner framing.

The action list for this quarter

If you're reading this and running or influencing AI strategy at a large enterprise, the concrete actions implied by the research:

Run an investment mix audit. Take the last four quarters of AI-related spend and categorize it: models/inference, platform/tooling, data foundations (quality, governance, lineage, semantic layer), talent/training, change management. If the first two categories are more than 50% of the total, you are very likely in the 61% Gartner is calling out.
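The audit above reduces to a few lines of arithmetic once spend is categorized. The category names mirror the list in the text; the dollar figures are invented to show an organization in the 80/20 failure pattern:

```python
# Illustrative investment-mix audit. Figures are hypothetical.
spend = {
    "models_inference": 4_200_000,
    "platform_tooling": 1_800_000,
    "data_foundations": 900_000,   # quality, governance, lineage, semantics
    "talent_training": 600_000,
    "change_management": 300_000,
}

def surface_share(spend: dict) -> float:
    """Fraction of total AI spend going to models/compute and
    platform licenses (the first two categories in the audit)."""
    surface = spend["models_inference"] + spend["platform_tooling"]
    return surface / sum(spend.values())

share = surface_share(spend)
if share > 0.5:
    print(f"Warning: {share:.0%} of AI spend is surface, not foundations")
```

With these sample figures the surface share is about 77%, well past the 50% threshold the article flags.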

Run a governance readiness assessment. For your top three GenAI use cases, can you show: who can access what data, what actions the agent took, why it took them, what policies were enforced, and what the audit trail looks like? If the answer is "partially, in different systems," your governance is bolted on, not built in.

Benchmark your D&A maturity honestly. Gartner's own maturity frameworks are one option; a cleaner internal version asks: can a business user get a trusted answer from your data substrate without a human analyst in the loop? For how many questions? At what latency? The honest answer is usually worse than the executive summary admits.

Put a dollar figure on the gap. The 65% business outcome advantage is the actionable number. If your industry peer set is capturing 65% more value from comparable AI investments because they built the foundations, the cost of not closing that gap compounds every year.

The bottom line

The Gartner report is, in some sense, not news. Practitioners have been saying for two years that the bottleneck for enterprise AI is data, governance, and change management, not models. What is new is the quantification: a 4x investment multiplier between winners and losers, a 65% business outcome gap, and a 39% confidence rate among the technology leaders who should be the strongest advocates for their own programs.

For the enterprises that have treated foundational data work as unsexy back-office investment, the strategic implication is uncomfortable: your AI program's ceiling is set by decisions you made about data, not decisions you will make about models. The work to raise that ceiling is in front of you, and the peer set that started on it in 2024 is three years ahead.

The good news: that work is legible, scopeable, and well-understood. The hard news: it takes years, not quarters, and the first honest conversation about it usually happens well after the board has already been promised AI-driven revenue growth for the coming fiscal year.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.


THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for twice-weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Gartner: AI Winners Spend 4x More on Data, Not Models

Photo by Lukas on Pexels

Gartner published research on April 16, 2026 that quantifies something every CIO has suspected for two years: the organizations winning with AI are not the ones spending more on models. They are the ones spending up to four times more on the unglamorous data, governance, and change management foundations that sit underneath the models.

For CTOs and CDAOs trying to explain why their generative AI program has produced demos but not revenue, the numbers give you a defensible budget conversation. For CFOs and boards tired of writing checks for AI initiatives with no measurable outcomes, the report is a map of where your money is probably going wrong — and where to redirect it.

The headline number

Gartner's core finding: organizations that report successful AI initiatives invest up to 4x more (as a percentage of revenue) in foundational capabilities than organizations with poor AI outcomes. Foundational capabilities here are specifically defined as data quality, data governance, AI-ready talent, and change management — not model licenses, not GPUs, not vendor contracts.

The research is anchored to two surveys: 353 data and analytics (D&A) and AI leaders surveyed from November through December 2025, and a separate cohort of 360 IT leaders surveyed in Q2 2025. The samples are global and cross-industry.

The second number is more damning: only 39% of technology leaders are confident that their enterprise's current AI investments will have a positive impact on financial performance. Translation: 61% of tech leaders, the people closest to the implementation, are either unsure or actively skeptical that the current AI spend will pay off.

And a third: only 23% of IT leaders say they are "very confident" in their ability to manage security and governance for generative AI deployments. That's the population running the systems, not the skeptics on the outside.

The upside, when organizations do invest in the foundations: those with the highest maturity of AI-ready D&A capabilities are achieving up to 65% greater business outcomes, measured across revenue growth and cost optimization. That's the ROI gap between winners and losers in concrete terms.

What "AI-ready data" actually means

The phrase "AI-ready data" has been in every vendor deck since early 2024, to the point where it means whatever the vendor selling it needs it to mean. Gartner's Rita Sallam, Distinguished VP Analyst and Gartner Fellow, is more specific. Success requires "new trusted data, context foundations and perceptive intelligence." That maps to a technical stack that most enterprises have not yet built:

  • Trusted data: data with documented lineage, quality scores, and versioning that an agent can reason over without a human in the loop confirming every fact.
  • Context foundations: semantic layers, knowledge graphs, and ontologies that encode what a piece of data means in the business, not just what type it is.
  • Perceptive intelligence: the ability of systems to interpret context across structured and unstructured data and adapt to the business situation rather than the query syntax.

The gap between a dashboard-era data warehouse and this stack is substantial. Most Fortune 500 data estates were optimized for BI query performance, not for agent-led reasoning. Column-level lineage exists in pockets; business context lives mostly in confluence pages and tribal knowledge; the knowledge graph layer is usually missing entirely.

That's the 4x investment. Rebuilding that stack is expensive, slow, and doesn't produce demo-able outputs for quarterly business reviews. The organizations doing it anyway are the ones Gartner is counting as winners.

The six shifts Sallam names

Gartner frames this as six required shifts for D&A leaders heading into 2027. Each of them has implications worth unpacking for technical and business buyers:

1. Build toward an AI-first approach. The legacy posture is "we have a BI stack and we're adding AI on top." The AI-first posture is "we are building for agents as primary consumers of our data, and humans will use the same substrate." The implication: your data products should be API-first and machine-readable before they're dashboard-first.

2. Redesign for human-agent collaboration. Sallam notes that high-performing AI teams can be as small as one technical person and one business person working with a fleet of agents. The org-chart implication is significant — the team structure your data org had in 2022, with tiers of analysts supporting senior analysts supporting executives, compresses.

3. Establish context as critical infrastructure. This is the semantic layer / knowledge graph work that most enterprises have been deferring. Gartner is saying the deferral is now the single biggest predictor of AI failure.

4. Scale connected engineering practices. Data engineering, ML engineering, and software engineering have historically lived in separate functions with separate tooling. The shift is to unified platform engineering where the same team owns the pipelines, the model serving, and the agent orchestration.

5. Establish trust-based governance models. Only 23% of IT leaders being "very confident" in GenAI security is a governance crisis, not a tooling gap. Gartner's recommendation is to build governance into the data substrate — access controls, audit trails, policy enforcement — rather than bolting it on at the application layer.

6. Move beyond ROI to value compounding. Traditional project ROI assumes a one-time delivery and a payback curve. AI capabilities compound — a well-built context layer supports a growing number of use cases at declining marginal cost. The measurement framework has to change.

The most quoted line

Sallam's most quoted line in the report is less technical: "The future is not about replacing humans, but amplifying their ingenuity." That framing matters politically inside large enterprises, because it's the narrative that allows a CIO to request a significant foundation investment without triggering the headcount anxiety that has made AI projects politically fraught.

But read it alongside shift #2 — teams as small as one technical and one business person — and the message is more nuanced. The team structure compresses; the individuals left in those teams are dramatically more leveraged. That is amplification, but it's amplification that changes which jobs exist and which don't.

What this means for enterprise technology buyers

For technical leaders (CIO, CTO, CDAO), the practical implications:

Your vendor selection criteria are probably wrong. Most enterprise RFPs in 2024-2025 evaluated AI vendors on model capability, inference cost, and integration breadth. The Gartner data suggests the more predictive criteria are data governance depth, semantic layer quality, and the vendor's track record on change management and training. Vendors that pitch "just plug in our agent" without asking hard questions about your data substrate are pitching failure.

Your internal budget allocation is probably wrong too. If your AI budget is 80% model / compute / platform licenses and 20% data foundations and change management, you are in the 61% of organizations Gartner is documenting as unlikely to see financial impact. The 4x multiplier suggests the allocation for successful programs looks closer to 40% foundations / 40% talent and change / 20% models and compute.

Governance cannot be downstream anymore. The 23% confidence number means most IT leaders know their governance posture will fail an audit when GenAI moves into regulated workflows. Retrofitting governance after deployment is more expensive and slower than building it into the data layer upfront. The organizations that are winning are the ones that accepted this and rebuilt access, lineage, and policy enforcement at the substrate level.

For business leaders (CFO, CEO, board), the practical implications:

Stop funding AI projects without funding the foundations first. The 39% confidence number from technology leaders is a leading indicator. If the people running the implementation don't believe it will pay off, the odds that the board-level business case holds up are low. The check to write is not the next model license; it's the data governance, quality, and platform engineering investment that makes the next model license productive.

Benchmark your investment mix, not your total. The 4x multiplier is a ratio, not an absolute number. You are not competing with hyperscalers on AI spend. You are competing with your own peers on how much of your AI spend is going into foundations vs. surface features. That's the question to ask at the next board meeting.

Reframe the ROI conversation. Gartner's shift #6 — move beyond ROI to value compounding — is a recognition that the traditional project-based investment case doesn't work for AI capabilities. The alternative is treating the data substrate as a platform investment with expanding returns, measured in how many use cases it supports over time rather than the payback on the first use case.

What the critics will say

Gartner research is Gartner research. The obvious critiques apply: vendor-influenced, US/Europe-centric, based on self-reported survey data, and written from a distinctly enterprise-IT worldview that doesn't map perfectly to AI-native startups or to the open-source / self-hosted side of the market.

A few things make this particular report harder to dismiss than usual:

  • The 353 and 360 sample sizes are large enough for inter-group comparisons rather than just directional claims
  • The split between the 39% confidence number and the 65% outcome gap gives you leading and lagging indicators in the same study
  • The specific shift toward context and semantic layers aligns with what private market signals are showing — TextQL's April 17 Blackstone round, the Snowflake-OpenAI $200M partnership in February, and the Qlik-ServiceNow partnership announced April 13 all point at the same architectural pattern

When Gartner, private capital, and the hyperscaler platform roadmaps all point the same direction in the same quarter, the direction is worth taking seriously even if you discount the Gartner framing.

The action list for this quarter

If you're reading this and running or influencing AI strategy at a large enterprise, the concrete actions implied by the research:

Run an investment mix audit. Take the last four quarters of AI-related spend and categorize it: models/inference, platform/tooling, data foundations (quality, governance, lineage, semantic layer), talent/training, change management. If the first two categories are more than 50% of the total, you are very likely in the 61% Gartner is calling out.

Run a governance readiness assessment. For your top three GenAI use cases, can you show: who can access what data, what actions the agent took, why it took them, what policies were enforced, and what the audit trail looks like? If the answer is "partially, in different systems," your governance is bolted on, not built in.

Benchmark your D&A maturity honestly. Gartner's own maturity frameworks are one option; a cleaner internal version asks: can a business user get a trusted answer from your data substrate without a human analyst in the loop? For how many questions? At what latency? The honest answer is usually worse than the executive summary admits.

Put a dollar figure on the gap. The 65% business outcome advantage is the actionable number. If your industry peer set is capturing 65% more value from comparable AI investments because they built the foundations, the cost of not closing that gap compounds every year.

The bottom line

The Gartner report is, in some sense, not news. Practitioners have been saying for two years that the bottleneck for enterprise AI is data, governance, and change management, not models. What is new is the quantification: a 4x investment multiplier between winners and losers, a 65% business outcome gap, and a 39% confidence rate among the technology leaders who should be the strongest advocates for their own programs.

For the enterprises that have treated foundational data work as unsexy back-office investment, the strategic implication is uncomfortable: your AI program's ceiling is set by decisions you made about data, not decisions you will make about models. The work to raise that ceiling is in front of you, and the peer set that started on it in 2024 is three years ahead.

The good news: that work is legible, scopeable, and well-understood. The hard news: it takes years, not quarters, and the first honest conversation about it usually happens well after the board has already been promised AI-driven revenue growth for the coming fiscal year.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

Share:

THE DAILY BRIEF

Enterprise AIData StrategyGartnerAI GovernanceCIOCDAO

Gartner: AI Winners Spend 4x More on Data, Not Models

Gartner's April 16 report: AI winners invest 4x more in data and governance, not models. Only 39% of tech leaders trust AI ROI. What CIOs must do now.

By Rajesh Beri·April 18, 2026·10 min read

Gartner published research on April 16, 2026 that quantifies something every CIO has suspected for two years: the organizations winning with AI are not the ones spending more on models. They are the ones spending up to four times more on the unglamorous data, governance, and change management foundations that sit underneath the models.

For CTOs and CDAOs trying to explain why their generative AI program has produced demos but not revenue, the numbers give you a defensible budget conversation. For CFOs and boards tired of writing checks for AI initiatives with no measurable outcomes, the report is a map of where your money is probably going wrong — and where to redirect it.

The headline number

Gartner's core finding: organizations that report successful AI initiatives invest up to 4x more (as a percentage of revenue) in foundational capabilities than organizations with poor AI outcomes. Foundational capabilities here are specifically defined as data quality, data governance, AI-ready talent, and change management — not model licenses, not GPUs, not vendor contracts.

The research is anchored to two surveys: 353 data and analytics (D&A) and AI leaders surveyed from November through December 2025, and a separate cohort of 360 IT leaders surveyed in Q2 2025. The samples are global and cross-industry.

The second number is more damning: only 39% of technology leaders are confident that their enterprise's current AI investments will have a positive impact on financial performance. Translation: 61% of tech leaders, the people closest to the implementation, are either unsure or actively skeptical that the current AI spend will pay off.

And a third: only 23% of IT leaders say they are "very confident" in their ability to manage security and governance for generative AI deployments. That's the population running the systems, not the skeptics on the outside.

The upside, when organizations do invest in the foundations: those with the highest maturity of AI-ready D&A capabilities are achieving up to 65% greater business outcomes, measured across revenue growth and cost optimization. That's the ROI gap between winners and losers in concrete terms.

What "AI-ready data" actually means

The phrase "AI-ready data" has been in every vendor deck since early 2024, to the point where it means whatever the vendor selling it needs it to mean. Gartner's Rita Sallam, Distinguished VP Analyst and Gartner Fellow, is more specific. Success requires "new trusted data, context foundations and perceptive intelligence." That maps to a technical stack that most enterprises have not yet built:

  • Trusted data: data with documented lineage, quality scores, and versioning that an agent can reason over without a human in the loop confirming every fact.
  • Context foundations: semantic layers, knowledge graphs, and ontologies that encode what a piece of data means in the business, not just what type it is.
  • Perceptive intelligence: the ability of systems to interpret context across structured and unstructured data and adapt to the business situation rather than the query syntax.

The gap between a dashboard-era data warehouse and this stack is substantial. Most Fortune 500 data estates were optimized for BI query performance, not for agent-led reasoning. Column-level lineage exists in pockets; business context lives mostly in confluence pages and tribal knowledge; the knowledge graph layer is usually missing entirely.

That's the 4x investment. Rebuilding that stack is expensive, slow, and doesn't produce demo-able outputs for quarterly business reviews. The organizations doing it anyway are the ones Gartner is counting as winners.

The six shifts Sallam names

Gartner frames this as six required shifts for D&A leaders heading into 2027. Each of them has implications worth unpacking for technical and business buyers:

1. Build toward an AI-first approach. The legacy posture is "we have a BI stack and we're adding AI on top." The AI-first posture is "we are building for agents as primary consumers of our data, and humans will use the same substrate." The implication: your data products should be API-first and machine-readable before they're dashboard-first.

2. Redesign for human-agent collaboration. Sallam notes that high-performing AI teams can be as small as one technical person and one business person working with a fleet of agents. The org-chart implication is significant — the team structure your data org had in 2022, with tiers of analysts supporting senior analysts supporting executives, compresses.

3. Establish context as critical infrastructure. This is the semantic layer / knowledge graph work that most enterprises have been deferring. Gartner is saying the deferral is now the single biggest predictor of AI failure.

4. Scale connected engineering practices. Data engineering, ML engineering, and software engineering have historically lived in separate functions with separate tooling. The shift is to unified platform engineering where the same team owns the pipelines, the model serving, and the agent orchestration.

5. Establish trust-based governance models. That only 23% of IT leaders are "very confident" in GenAI security is a governance crisis, not a tooling gap. Gartner's recommendation is to build governance into the data substrate — access controls, audit trails, policy enforcement — rather than bolting it on at the application layer.

6. Move beyond ROI to value compounding. Traditional project ROI assumes a one-time delivery and a payback curve. AI capabilities compound — a well-built context layer supports a growing number of use cases at declining marginal cost. The measurement framework has to change.

The most quoted line

Sallam's most quoted line in the report is less technical: "The future is not about replacing humans, but amplifying their ingenuity." That framing matters politically inside large enterprises, because it's the narrative that allows a CIO to request a significant foundation investment without triggering the headcount anxiety that has made AI projects politically fraught.

But read it alongside shift #2 — teams as small as one technical and one business person — and the message is more nuanced. The team structure compresses; the individuals left in those teams are dramatically more leveraged. That is amplification, but it's amplification that changes which jobs exist and which don't.

What this means for enterprise technology buyers

For technical leaders (CIO, CTO, CDAO), the practical implications:

Your vendor selection criteria are probably wrong. Most enterprise RFPs in 2024-2025 evaluated AI vendors on model capability, inference cost, and integration breadth. The Gartner data suggests the more predictive criteria are data governance depth, semantic layer quality, and the vendor's track record on change management and training. Vendors that pitch "just plug in our agent" without asking hard questions about your data substrate are pitching failure.

Your internal budget allocation is probably wrong too. If your AI budget is 80% model / compute / platform licenses and 20% data foundations and change management, you are likely among the 61% whose own technology leaders doubt the spend will produce financial impact. The 4x multiplier suggests the allocation for successful programs looks closer to 40% foundations / 40% talent and change / 20% models and compute.

Governance cannot be downstream anymore. The 23% confidence number means most IT leaders know their governance posture will fail an audit when GenAI moves into regulated workflows. Retrofitting governance after deployment is more expensive and slower than building it into the data layer upfront. The organizations that are winning are the ones that accepted this and rebuilt access, lineage, and policy enforcement at the substrate level.

For business leaders (CFO, CEO, board), the practical implications:

Stop funding AI projects without funding the foundations first. The 39% confidence number from technology leaders is a leading indicator. If the people running the implementation don't believe it will pay off, the odds that the board-level business case holds up are low. The check to write is not the next model license; it's the data governance, quality, and platform engineering investment that makes the next model license productive.

Benchmark your investment mix, not your total. The 4x multiplier is a ratio, not an absolute number. You are not competing with hyperscalers on AI spend. You are competing with your own peers on how much of your AI spend is going into foundations vs. surface features. That's the question to ask at the next board meeting.

Reframe the ROI conversation. Gartner's shift #6 — move beyond ROI to value compounding — is a recognition that the traditional project-based investment case doesn't work for AI capabilities. The alternative is treating the data substrate as a platform investment with expanding returns, measured in how many use cases it supports over time rather than the payback on the first use case.
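To make the contrast concrete, here is a small illustrative sketch (all dollar figures and function names are hypothetical, not from the Gartner report): a traditional one-time project ROI next to a platform whose marginal cost per use case declines as the context layer matures.

```python
def project_roi(investment, annual_return, years):
    """Traditional project case: one use case, fixed investment, linear payback."""
    return annual_return * years - investment

def compounding_value(platform_investment, first_use_case_cost,
                      use_case_value, cost_decline, n_use_cases):
    """Platform case: each new use case is cheaper than the last because it
    reuses the shared substrate; value scales with the number of use cases."""
    total_cost = platform_investment
    cost = first_use_case_cost
    for _ in range(n_use_cases):
        total_cost += cost
        cost *= (1 - cost_decline)  # declining marginal cost per use case
    return use_case_value * n_use_cases - total_cost

# Hypothetical numbers: a $2M project returning $800k/yr over 3 years...
single = project_roi(2_000_000, 800_000, 3)
# ...vs a $2M substrate supporting 8 use cases, each 25% cheaper to add.
platform = compounding_value(2_000_000, 500_000, 600_000, 0.25, 8)
print(single, platform)
```

The point of the sketch is the measurement change Gartner's shift #6 implies: the platform case only looks attractive once you count value across the growing set of use cases, not the payback on the first one.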

What the critics will say

Gartner research is Gartner research. The obvious critiques apply: vendor-influenced, US/Europe-centric, based on self-reported survey data, and written from a distinctly enterprise-IT worldview that doesn't map perfectly to AI-native startups or to the open-source / self-hosted side of the market.

A few things make this particular report harder to dismiss than usual:

  • The 353 and 360 sample sizes are large enough for inter-group comparisons rather than just directional claims
  • The split between the 39% confidence number and the 65% outcome gap gives you leading and lagging indicators in the same study
  • The specific shift toward context and semantic layers aligns with what private market signals are showing — TextQL's April 17 Blackstone round, the Snowflake-OpenAI $200M partnership in February, and the Qlik-ServiceNow partnership announced April 13 all point at the same architectural pattern

When Gartner, private capital, and the hyperscaler platform roadmaps all point the same direction in the same quarter, the direction is worth taking seriously even if you discount the Gartner framing.

The action list for this quarter

If you're reading this and running or influencing AI strategy at a large enterprise, the concrete actions implied by the research:

Run an investment mix audit. Take the last four quarters of AI-related spend and categorize it: models/inference, platform/tooling, data foundations (quality, governance, lineage, semantic layer), talent/training, change management. If the first two categories are more than 50% of the total, you are very likely in the 61% Gartner is calling out.
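The audit itself is simple enough to sketch. A minimal version (category names and spend figures here are hypothetical, for illustration only) buckets four quarters of AI spend and flags the portfolio when models/inference plus platform/tooling exceed half the total:

```python
# Foundation buckets per the report's framing: data, talent, change management.
FOUNDATION_CATEGORIES = {"data_foundations", "talent_training", "change_management"}

def audit_mix(spend: dict) -> dict:
    """Return each category's share of total AI spend and a red flag when
    surface spend (models + platform) exceeds 50%."""
    total = sum(spend.values())
    surface = sum(v for k, v in spend.items() if k not in FOUNDATION_CATEGORIES)
    return {
        "mix": {k: round(v / total, 2) for k, v in spend.items()},
        "surface_share": round(surface / total, 2),
        "at_risk": surface / total > 0.50,
    }

# Hypothetical trailing-four-quarter spend, in $k:
spend = {
    "models_inference": 3_200,
    "platform_tooling": 1_800,
    "data_foundations": 1_500,   # quality, governance, lineage, semantic layer
    "talent_training": 900,
    "change_management": 600,
}
result = audit_mix(spend)
print(result["surface_share"], result["at_risk"])
```

In this hypothetical mix, 62.5% of spend sits in models and platform, so the audit flags the portfolio.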

Run a governance readiness assessment. For your top three GenAI use cases, can you show: who can access what data, what actions the agent took, why it took them, what policies were enforced, and what the audit trail looks like? If the answer is "partially, in different systems," your governance is bolted on, not built in.
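That assessment reduces to an evidence checklist per use case. A toy sketch (evidence-item names are invented here, not Gartner's): score a use case "built-in" only when every item can be produced, and "bolted-on" when the evidence exists but is scattered or incomplete.

```python
# The five evidence items from the assessment above (hypothetical labels):
EVIDENCE = ["data_access_map", "agent_action_log", "decision_rationale",
            "policy_enforcement_record", "unified_audit_trail"]

def readiness(use_case: dict) -> str:
    """Classify a GenAI use case by how much governance evidence it can produce."""
    covered = sum(bool(use_case.get(item, False)) for item in EVIDENCE)
    if covered == len(EVIDENCE):
        return "built-in"
    return "bolted-on" if covered > 0 else "absent"

# Example: a use case that logs agent actions but scatters everything else.
print(readiness({"agent_action_log": True, "data_access_map": True}))
```

Anything short of "built-in" across your top three use cases is the "partially, in different systems" answer the assessment is designed to surface.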

Benchmark your D&A maturity honestly. Gartner's own maturity frameworks are one option; a cleaner internal version asks: can a business user get a trusted answer from your data substrate without a human analyst in the loop? For how many questions? At what latency? The honest answer is usually worse than the executive summary admits.

Put a dollar figure on the gap. The 65% business outcome advantage is the actionable number. If your industry peer set is capturing 65% more value from comparable AI investments because they built the foundations, the cost of not closing that gap compounds every year.

The bottom line

The Gartner report is, in some sense, not news. Practitioners have been saying for two years that the bottleneck for enterprise AI is data, governance, and change management, not models. What is new is the quantification: a 4x investment multiplier between winners and losers, a 65% business outcome gap, and a 39% confidence rate among the technology leaders who should be the strongest advocates for their own programs.

For the enterprises that have treated foundational data work as unsexy back-office investment, the strategic implication is uncomfortable: your AI program's ceiling is set by decisions you made about data, not decisions you will make about models. The work to raise that ceiling is in front of you, and the peer set that started on it in 2024 is two years ahead.

The good news: that work is legible, scopeable, and well-understood. The hard news: it takes years, not quarters, and the first honest conversation about it usually happens well after the board has already been promised AI-driven revenue growth for the coming fiscal year.




LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.
