For the first time since the AI race began, more American businesses are paying for Anthropic's Claude than OpenAI's ChatGPT. According to Ramp's AI Index, Anthropic reached 34.4% business adoption in April 2026, overtaking OpenAI at 32.3%. It's a symbolic milestone in a market OpenAI once dominated—and a wake-up call for every CIO, CFO, and CTO evaluating AI vendors.
But the same data that crowns a new leader also exposes a structural problem that could reshape the entire enterprise AI market: the companies winning on adoption are creating budget crises for their own customers.
The Numbers Behind the Crossover
Ramp's index tracks corporate card and bill-paying activity across 50,000+ U.S. businesses, capturing billions of dollars in monthly AI spending. The April 2026 data shows a dramatic reversal:
- Anthropic: 34.4% adoption (+3.8 points month-over-month)
- OpenAI: 32.3% adoption (-2.9 points month-over-month)
- Overall AI adoption: 50.6% of businesses
Anthropic quadrupled its business adoption over the past year. OpenAI grew just 0.3% over the same period. In head-to-head matchups among first-time AI buyers, Anthropic now wins roughly 70% of the time—a complete reversal of 2025 trends.
The driver? Claude Code. Anthropic's agentic AI coding tool has become the fastest-growing product in the company's history. Recent analysis shows Claude Code now authors 4% of all GitHub public commits worldwide—double the percentage from just one month prior.
The Technical Perspective: Why Engineers Choose Claude
For CTOs and VPs of Engineering, the Anthropic surge makes sense. Claude Code delivers tangible productivity gains. Engineers report that the tool handles boilerplate code, refactoring, and even complex architectural decisions with minimal supervision. At companies like Uber, adoption jumped from 32% to 84% of engineers in a matter of months. Roughly 70% of committed code at Uber now originates from AI.
The product works. That's not the problem.
The problem is that it works too well.
When engineers discover a tool that genuinely accelerates their workflow, they use it constantly. And when you're paying per token on a usage-based pricing model, "constant use" translates into escalating costs that finance teams never budgeted for.
The Business Perspective: The Budget Crisis CFOs Didn't See Coming
Here's the part CFOs need to understand: Uber's CTO revealed that the company spent its entire 2026 AI budget in just four months—largely on Claude Code and similar tools. Engineers at Uber report monthly API costs ranging from $500 to $2,000 per person.
Let's run that math for a 500-person engineering team:
- Low end: $250,000/month = $3 million/year
- High end: $1 million/month = $12 million/year
That's not a line item. That's a budget reallocation that requires board-level approval.
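The back-of-envelope math is worth making explicit. A minimal sketch, using the Uber-reported $500-$2,000 per-engineer range above (the 500-person team size is illustrative):

```python
def annual_ai_cost(engineers: int, monthly_cost_per_engineer: float) -> float:
    """Annualized API spend for a team at a given per-engineer monthly rate."""
    return engineers * monthly_cost_per_engineer * 12

# Uber-reported range: $500-$2,000 per engineer per month
low = annual_ai_cost(500, 500)     # $3,000,000/year
high = annual_ai_cost(500, 2_000)  # $12,000,000/year
```

Plug in your own headcount and observed per-engineer spend; the point is that the multiplier compounds quickly.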
And it gets worse: Anthropic's business model incentivizes higher token consumption. The company makes more money when businesses use more expensive models, even when cheaper alternatives would suffice. As Ramp lead economist Ara Kharazian noted in the May 2026 index, "We have never seen a software industry as dynamic, where newcomers can disrupt market leaders in a matter of months."
The same volatility that helped Anthropic overtake OpenAI could work in reverse—especially if enterprises start demanding predictable, seat-based pricing instead of unpredictable token-based billing.
What This Means for Enterprise Decision-Makers
For CIOs and CTOs evaluating AI vendors:
- Budget for runaway adoption. If a tool delivers real productivity gains, engineers will use it far more than your initial estimates. Plan for 3-5x your projected usage.
- Negotiate rate caps. Usage-based pricing creates unlimited upside for vendors and unlimited risk for buyers. Insist on monthly or annual spending caps as part of your contract.
- Monitor usage per team. Some engineering teams will burn through budgets faster than others. Real-time cost visibility prevents surprise invoices.
- Evaluate open-source alternatives. The same Ramp report notes growing interest in cheaper open-source models. If token costs become prohibitive, alternatives like Llama 3 or Mistral may offer better ROI for specific use cases.
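The per-team monitoring and spending-cap advice above can be sketched in a few lines. Everything here is illustrative: the record format, model names, and per-million-token prices are assumptions for the example, not real vendor rates or APIs; swap in whatever your API gateway or vendor export actually provides.

```python
from collections import defaultdict

def cost_by_team(usage_records, price_per_million_tokens):
    """Aggregate month-to-date spend per team from raw usage records.

    usage_records: iterable of (team, model, tokens) tuples.
    price_per_million_tokens: dict mapping model name -> USD per 1M tokens.
    """
    totals = defaultdict(float)
    for team, model, tokens in usage_records:
        totals[team] += tokens / 1_000_000 * price_per_million_tokens[model]
    return dict(totals)

def over_budget(totals, monthly_cap):
    """Return the teams whose month-to-date spend exceeds the cap."""
    return {team: cost for team, cost in totals.items() if cost > monthly_cap}

# Illustrative prices and usage -- not real vendor rates
prices = {"fast-model": 3.0, "frontier-model": 15.0}
records = [
    ("payments", "frontier-model", 40_000_000),
    ("payments", "fast-model", 10_000_000),
    ("search", "fast-model", 5_000_000),
]
totals = cost_by_team(records, prices)        # payments: $630, search: $15
alerts = over_budget(totals, monthly_cap=500) # flags the payments team
```

Even a crude aggregation like this, run daily against exported usage data, catches a runaway team weeks before the invoice does.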
For CFOs and finance leaders:
- AI spending is now a material line item. It's no longer buried in "cloud services" or "software licenses." Create a dedicated AI budget category with monthly tracking.
- Token-based pricing is a hidden liability. Unlike seat-based SaaS (where costs are predictable), token-based pricing scales with usage. That's great for vendors; it's terrible for budget forecasting.
- Demand consumption analytics from vendors. Anthropic, OpenAI, and other AI vendors should provide real-time dashboards showing token consumption, cost per team, and cost per project. If they can't, build it yourself.
The Market Remains Volatile—and That's the Point
Ramp's Kharazian issued a critical warning alongside the April data: "These results should not be construed to suggest Anthropic is the definitive leader in business adoption."
Translation: The same forces that helped Anthropic overtake OpenAI—rapid product innovation, superior technical performance, aggressive go-to-market strategy—could just as easily favor a competitor six months from now. Rising token costs, compute shortages, and growing interest in open-source alternatives could reshape the market yet again.
In fact, OpenAI has already started responding. The company recently announced a new $4 billion enterprise services business to embed Forward Deployed Engineers inside customer organizations—a direct play to win back enterprise accounts frustrated by runaway costs.
The Productivity Paradox: AI Is Everywhere, But Where Are the Results?
Here's the uncomfortable truth buried in the data: AI adoption has reached 50% of businesses, but productivity gains remain elusive.
A recent Gallup survey of 23,717 U.S. employees found that 50% of employed adults now use AI in their roles at least a few times a year. Daily usage jumped to 13%, and 28% report using AI a few times a week or more.
But when Gallup asked employees whether AI has "transformed how work gets done," only about 1 in 10 strongly agreed. Firm-level studies across the U.S., U.K., Germany, and Australia show chief executives reporting minimal broad productivity effects from AI over the past three years.
Why the disconnect?
Because AI is being used at the task level, not the organizational level. Engineers use Claude Code to write functions faster. Sales reps use ChatGPT to draft emails. Finance analysts use AI to summarize reports. But none of that translates into measurable ROI unless organizations fundamentally redesign workflows around AI capabilities.
For business leaders, the takeaway is clear: Spending more on AI doesn't automatically deliver productivity gains. You need to rethink how work gets done—not just bolt AI onto existing processes.
What Should You Do Right Now?
If you're a CIO or CTO:
- Audit your current AI spending. If you don't have real-time visibility, build it this quarter.
- Renegotiate contracts with token-based pricing. Push for caps, discounts at scale, or hybrid models.
- Run pilot programs comparing proprietary models (Claude, GPT-4) against open-source alternatives for specific use cases.
If you're a CFO or finance leader:
- Create a dedicated AI budget category. Track spending monthly, not quarterly.
- Model worst-case scenarios: What happens if AI usage triples in 6 months? Can you afford it?
- Work with procurement to standardize AI vendor evaluation criteria—especially around cost predictability.
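The worst-case scenario in the checklist above ("usage triples in 6 months") is easy to model with compound growth. A minimal sketch; the $250k/month starting point is the low-end team estimate from earlier and is purely illustrative:

```python
def project_spend(current_monthly: float, growth_multiple: float,
                  over_months: int, horizon_months: int) -> list:
    """Project monthly spend under compound growth.

    A growth_multiple reached over over_months implies a per-month
    rate of growth_multiple ** (1 / over_months).
    """
    monthly_rate = growth_multiple ** (1 / over_months)
    return [current_monthly * monthly_rate ** m
            for m in range(1, horizon_months + 1)]

# Scenario: usage triples in 6 months from a $250k/month baseline.
path = project_spend(250_000, growth_multiple=3, over_months=6,
                     horizon_months=6)
# path[-1] is roughly $750k/month after six months,
# i.e. about a $9M/year run rate even if growth then plateaus.
```

Running the same model at 2x and 5x gives finance a band of outcomes to budget against instead of a single optimistic point estimate.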
If you're a business executive (CMO, COO, CRO):
- Don't assume your teams are using AI effectively just because they're using it frequently.
- Measure outcomes, not activity. Are sales cycles shorter? Are customer support tickets resolved faster? Are compliance reviews completed with fewer errors?
- If you can't measure it, you can't manage it—and you definitely shouldn't be spending millions on it.
The Bottom Line
Anthropic's win over OpenAI in business AI adoption is a big deal. It proves that product quality, developer experience, and technical performance can overcome OpenAI's first-mover advantage and consumer brand dominance.
But the real story isn't who's winning the adoption race. The real story is that the current pricing model is unsustainable for enterprise buyers. Token-based billing creates unpredictable costs, budget overruns, and misaligned incentives between vendors and customers.
The next phase of enterprise AI won't be about which model is best. It'll be about which vendor can deliver predictable costs, transparent usage analytics, and measurable ROI.
Right now, neither Anthropic nor OpenAI has solved that problem. The company that does will own the next decade of enterprise AI.
Continue Reading
- OpenAI Drops Seat Fees — What This Means for Enterprise
- Why Enterprise AI Projects Fail (and How to Fix Them)
- The Hidden Cost of AI: Why CFOs Are Pumping the Brakes
About the Author: Rajesh Beri is Head of AI Engineering at a Fortune 500 security company and author of THE DAILY BRIEF, a newsletter for technical and business leaders navigating enterprise AI. Follow on LinkedIn | Follow on Twitter/X
