You've allocated the budget. You've greenlit the pilot. Your team is ready to deploy AI.
Then it stops.
Not because of technical limitations. Not because of lack of talent. It stops because no one trusts the system enough to bet their career on it.
New research from Gong—based on a survey of 2,056 US and UK business leaders—reveals that 58% of companies have stalled AI projects. The reason? A trust deficit that's costing companies their competitive edge.
This isn't a technology problem. It's a transparency problem.
The Trust Barrier Is Real (And Expensive)
Here's what the data shows:
- 58% of companies (US: 63% | UK: 52%) have delayed or cancelled AI projects
- 46% of planned AI investments are currently paused on average
- 75% of leaders feel their organizations are falling behind on realizing AI's benefits
The most telling finding: security concerns are now revenue conversations. Gong Labs analyzed 25 million sales interactions and found that one in four sales calls referenced security, with AI's foundational data and learning mechanisms among the most commonly discussed topics.
When one in four sales calls gets pulled into security objections instead of deal progression, you have a revenue problem, not just a technical one.
The Four Trust Barriers Blocking Adoption
The research identified exactly what's stopping enterprise AI adoption:
| Trust Barrier | Overall | US | UK |
|---|---|---|---|
| Data Privacy & Security | 34% | 36% | 31% |
| Explainability | 30% | 30% | 31% |
| Model Transparency | 28% | 28% | 28% |
| Regulatory Uncertainty | 27% | 27% | 27% |
Data privacy and security tops the list—and rightfully so. CISOs and legal teams aren't being paranoid; they're being prudent. When you can't explain how an AI model uses your customer data, you can't deploy it in production.
Explainability comes second. Business leaders need to articulate to the board why an AI system recommended a specific action. "The algorithm said so" doesn't cut it in a quarterly earnings call.
Model transparency and regulatory uncertainty round out the top four. Both speak to the same core issue: enterprises need guardrails, not just capabilities.
What Leaders Actually Need to Deploy AI
The research asked leaders what assurances would help them confidently adopt AI solutions:
| Assurance | Overall | US | UK |
|---|---|---|---|
| Explainability of AI-derived outputs | 26% | 24% | 27% |
| Ability to articulate AI model guardrails protecting data | 25% | 24% | 25% |
| Security guarantees built into solutions | 23% | 25% | 22% |
| Third-party audits or certification | 23% | 23% | 23% |
| Transparency into how training data is used | 22% | 22% | 22% |
| Transparent model logic | 22% | 22% | 22% |
Notice the pattern: every single item is about transparency and trust, not about raw capabilities.
Your CFO doesn't need a faster model. They need a model they can defend in an audit.
Your CIO doesn't need more features. They need a security certification they can show the board.
Your CMO doesn't need better accuracy. They need to explain why the AI recommended Budget Option A over Budget Option B.
Why This Matters for Your Bottom Line
Let's talk ROI.
If 46% of your planned AI investments are paused, you're not just delaying innovation—you're bleeding competitive advantage while your competitors solve the trust problem.
Consider what's happening at the companies that have broken through the trust barrier:
- Sales teams spend less time addressing security objections and more time closing deals
- Legal and compliance teams can approve AI deployments instead of blocking them
- C-suite executives can make strategic AI bets with confidence instead of cautious pilots
The competitive advantage isn't in having access to AI tools. Everyone has access. The advantage is in deploying them at scale across your organization—and trust is the prerequisite for scale.
How to Break Through the Trust Barrier
Based on the research and conversations with enterprise leaders implementing AI at scale, here's what actually works:
1. Demand Vendor Transparency (Not Just Marketing Claims)
Ask your AI vendors:
- What data sources does your model use?
- Can you show me exactly how our data is protected?
- What happens to customer data if we terminate the contract?
- Can you provide third-party security certifications?
If they can't answer these questions clearly, walk away. Plenty of vendors can.
2. Build Explainability Into Your Evaluation Criteria
Stop evaluating AI tools solely on accuracy metrics. Add these questions:
- Can the system explain why it made a specific recommendation?
- Can our legal team understand the decision logic?
- Can we audit the AI's reasoning six months from now?
Explainability isn't a nice-to-have. It's a deployment requirement.
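To make the audit question concrete, here's a minimal sketch of what "auditable six months from now" can mean in practice: log every AI recommendation with its model version and rationale as an append-only JSON line. The record fields and the `log_recommendation` helper are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIRecommendationRecord:
    """One auditable record per AI-generated recommendation."""
    model_name: str      # which model produced the output
    model_version: str   # pinned so the decision can be revisited later
    inputs_summary: str  # what the model saw (redacted as needed)
    recommendation: str  # what it suggested
    rationale: str       # the explanation surfaced to the user
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_recommendation(record: AIRecommendationRecord,
                       path: str = "ai_audit.jsonl") -> None:
    """Append the record as one JSON line so it can be reviewed months later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The point isn't the specific fields; it's that legal can replay the decision logic without asking engineering to reconstruct it from memory.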
3. Establish Enterprise-Grade Guardrails Before Deployment
Work with your security and compliance teams before selecting an AI vendor, not after. Define:
- Data governance policies specific to AI systems
- Custom redaction capabilities for sensitive information
- Role-based access controls for AI outputs
- Audit trails for all AI-generated recommendations
In highly regulated industries (finance, healthcare, legal), this isn't optional—it's table stakes.
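As one concrete example of a pre-deployment guardrail, here is a minimal redaction sketch. The patterns below are placeholder assumptions for illustration; a regulated deployment would use a vetted PII-detection service, not hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only; real deployments need
# a maintained PII library with far broader coverage.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive spans with placeholders before text leaves your boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

The design choice that matters: redaction runs on your side of the boundary, before anything reaches the vendor's model, so the policy is enforceable regardless of what the vendor promises.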
4. Prioritize Vendor-Led Deployments Over Internal Builds
Here's a buried finding from related research that deserves attention: vendor-led deployments succeed 67% of the time, while internal builds succeed only one-third of the time.
Building your own AI infrastructure sounds appealing to engineering teams. But unless you're Google or Amazon, you're better off partnering with vendors who've already solved the governance, security, and compliance challenges at scale.
Your competitive advantage isn't in building AI infrastructure. It's in deploying AI to solve your specific business problems faster than competitors.
The Real Competitive Edge
"Security and AI trust are no longer back-office conversations; they are revenue conversations," says Chris Peake, Chief Trust Officer at Gong.
He's right. The trust barrier isn't a technical hurdle—it's a strategic differentiator.
Companies that solve for trust will deploy AI at scale. Companies that don't will run endless pilots that never reach production.
The technology is ready. The question is: are you building the trust infrastructure to deploy it?
Because while you're waiting for perfect certainty, your competitors are building trust systems and gaining market share.
What to do this week:
- Audit your stalled AI projects—identify which ones are blocked by trust issues vs. technical issues
- Schedule a session with your security, legal, and compliance teams to define AI governance requirements
- Update your AI vendor evaluation criteria to include transparency, explainability, and third-party certifications
- Calculate the opportunity cost of your paused AI investments (46% on average, per the research)
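That last calculation is simple arithmetic. A rough sketch, where the budget figure and return multiple are placeholder assumptions you'd replace with your own business case (only the 46% pause rate comes from the research):

```python
def paused_opportunity_cost(planned_ai_budget: float,
                            paused_share: float = 0.46,
                            annual_return_multiple: float = 1.5) -> dict:
    """Rough opportunity cost of paused AI investments.

    paused_share defaults to the 46% average from the Gong research;
    the return multiple is a placeholder assumption.
    """
    paused_capital = planned_ai_budget * paused_share
    return {
        "paused_capital": paused_capital,
        "estimated_forgone_annual_return": paused_capital * annual_return_multiple,
    }

# Example: a $10M planned AI budget at the survey-average pause rate
cost = paused_opportunity_cost(10_000_000)
```

Even this crude version is enough to turn "we paused some projects" into a dollar figure your CFO will react to.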
The trust barrier is real. But it's not insurmountable. The companies that crack it first will define the competitive landscape for the next decade.
Sources:
- Gong Research: "Unlocking The Trust Barrier For Enterprise AI" (April 2026)
- Gong Labs analysis of 25 million sales interactions (2025)
- Censuswide survey of 2,056 business leaders (US & UK, January 2026)
Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.