This morning, May 4, both Anthropic and OpenAI launched their own enterprise AI services firms. On the same day. With overlapping but distinct private-equity consortia. Combined committed capital: $11.5 billion. The frontier model labs are no longer just selling the model. They are now selling the implementation, the engineering team, and the outcome — and they are doing it in direct competition with Accenture, Deloitte, McKinsey, BCG, and (this part is awkward) Bain Capital, which sits on the OpenAI side of the deal.
If you run an enterprise architecture function and your AI roadmap currently routes through one of the Big Three system integrators, the procurement question on your desk has just changed. The vendors who own the model now own the implementation business and want your services budget too. The economics that made this market work for forty years — software margin for the vendor, services margin for the consultant, integration risk for the customer — are about to be restructured.
Here is what each firm shipped, why it is happening now, what the Big Three lose, and the procurement decision every enterprise needs to make in the next 90 days.
What Anthropic Launched
Anthropic, Blackstone, Hellman & Friedman, and Goldman Sachs jointly announced an unnamed AI-native enterprise services firm. The capital structure: Anthropic, Blackstone, and Hellman & Friedman each commit roughly $300 million; Goldman Sachs adds $150 million as a founding investor; additional capital comes from Apollo Global Management, General Atlantic, Leonard Green, GIC, and Sequoia Capital. Total capacity: approximately $1.5 billion.
The structural innovation is the embedded model. Anthropic engineering and partnership resources sit inside the new firm rather than at arm's length. The new firm operates as a standalone entity with direct access to Anthropic's research and product teams, ensuring implementations evolve in lockstep with Claude's capability releases — which now ship monthly or more often. That is a meaningfully different posture from a traditional consulting engagement where the AI vendor is one slide in the deck.
The target is mid-market companies and the portfolio companies of the founding investors. Blackstone alone manages over $1.2 trillion across portfolio companies in real estate, infrastructure, private equity, and credit. Hellman & Friedman, Apollo, General Atlantic, and Leonard Green collectively control hundreds of additional portfolio companies across healthcare, financial services, manufacturing, and consumer. Goldman Sachs Asset Management adds another channel. The firm has, on day one, a captive deployment surface measured in thousands of mid-cap and large-cap companies whose owners have direct economic incentive to install Claude.
Krishna Rao, Anthropic's CFO, framed it bluntly: "Enterprise demand for Claude is significantly outpacing any single delivery model." Translation: Anthropic cannot scale its own forward-deployed engineering function fast enough, the existing systems-integrator channel is too slow and too thinly Claude-trained, and the LP capital sitting in PE asset managers is the fastest path to engineering capacity at scale.
What OpenAI Launched
A few hours later, OpenAI finalized The Deployment Company — a $10 billion joint venture with TPG (anchor), Brookfield Asset Management, Bain Capital, Advent International, Goanna Capital, and 14 additional unnamed investors.
The capital structure is more aggressive than Anthropic's. OpenAI itself commits up to $1.5 billion: $500 million in equity at close plus a $1 billion option to deepen its position. The PE consortium commits roughly $4 billion over a five-year window, with the remainder structured through follow-on capacity. The kicker: OpenAI is guaranteeing the PE backers a 17.5 percent annual return over five years. That is a structured-finance commitment, not a venture bet. It is also a bet OpenAI is willing to make because the underlying services revenue — at typical enterprise consulting margins applied to OpenAI's existing 4-million-weekly-Codex-user base alone — can support that return without strain if the deployment volume materializes.
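A back-of-the-envelope check shows what that guarantee commits OpenAI to. The sketch below assumes the 17.5 percent compounds annually on the roughly $4 billion consortium commitment; the actual waterfall and payment mechanics are not public, so treat every line as an illustration of scale, not deal terms.

```python
# Illustrative only: what a 17.5% guaranteed annual return implies,
# assuming (not confirmed in the announcement) annual compounding
# on the ~$4B the PE consortium commits over the window.
PRINCIPAL = 4_000_000_000   # ~$4B PE commitment (per the terms above)
RATE = 0.175                # guaranteed annual return
YEARS = 5

value = PRINCIPAL
for year in range(1, YEARS + 1):
    value *= 1 + RATE
    print(f"Year {year}: ${value / 1e9:.2f}B")

# Under these assumptions the guarantee roughly 2.2x's the consortium's
# money in five years; JV services revenue has to underwrite that gap.
```

Even on this simplified view, OpenAI is promising to turn $4 billion into something near $9 billion of value in five years, which only pencils if deployment volume materializes at the scale the Codex user base suggests.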
Governance reflects the same logic. OpenAI retains super-voting shares for strategic control while financial sponsors receive income-oriented economics. Sam Altman keeps the steering wheel; the PE firms get the cash flows. That structure tells you exactly what each side believes the venture is worth: OpenAI thinks the strategic optionality of owning enterprise distribution is the prize; the PE firms think the cash yield is the prize. They are both right, which is why the deal works.
The Deployment Company will embed OpenAI engineering teams inside client organizations on a forward-deployed basis — the Palantir model, applied at scale. Initial sector focus: healthcare, logistics, manufacturing, and financial services. The customer set explicitly includes the portfolio companies of the PE backers, the same captive distribution play Anthropic is making but at roughly 6.7x the scale of committed capital.
Why Both, Why Today
The simultaneous launch is not coincidental. Both firms have been negotiating these structures for months (OpenAI's was first reported as in talks on March 16; Anthropic's emerged in late April), and each clearly timed its public announcement with the other's in view. Three forces drove the convergence:
One: The frontier-model differentiation gap is closing. GPT-5.5, Claude Opus 4.7, Gemini 3.1 Ultra, and DeepSeek V4 cluster within a few percentage points on most enterprise-relevant benchmarks. When the model layer commoditizes — which it has — the durable margin in AI shifts to whoever owns the implementation surface. Both firms read the same data and arrived at the same conclusion: services revenue is now the moat the model alone cannot defend.
Two: The 6:1 ratio. Industry data quoted in the Anthropic announcement is the number every CFO in this market should have memorized: for every dollar enterprises spend on software, they spend roughly six dollars on services. The global enterprise IT services market is approximately $1.5 trillion. AI implementation, training, and integration are projected to be the fastest-growing slice of that pie for the next five years. If Anthropic and OpenAI capture even single-digit percentage points of that downstream services spend, it dwarfs their model-licensing revenue.
Three: PE portfolio company AI urgency. 85 percent of private equity buyers are now factoring AI-enabled finance and operations capabilities into deal valuations. That number was a footnote in the Fortune coverage and is the single most important data point in either announcement. PE firms have hundreds of portfolio companies whose exit valuations now depend on demonstrable AI deployment. The economics of forming a JV with the AI vendor — to get prioritized engineering capacity, guaranteed delivery, and direct model-roadmap access for those portfolio companies — outweigh the marginal cost of the JV equity. Blackstone, Hellman & Friedman, TPG, Apollo, and Brookfield did the math at roughly the same time and arrived at the same answer.
The fourth, unstated force: speed. Both firms watched Microsoft and Google build internal forward-deployment teams over five years and concluded that approach was too slow. The PE-backed JV structure gets to billable engineering capacity in weeks rather than years. That is the only way to absorb the demand wave that is already happening.
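The force-two arithmetic is worth making concrete. The market size and the 6:1 ratio come from the announcement; the capture rate is my illustrative assumption, not a figure from either deal.

```python
# Rough sizing of the services prize, using the figures quoted above
# plus one illustrative assumption (the capture rate).
SERVICES_TAM = 1.5e12        # ~$1.5T global enterprise IT services market
SERVICES_PER_SW_DOLLAR = 6   # ~$6 of services per $1 of software spend

capture_rate = 0.03          # hypothetical: 3% of the services TAM
captured_services = SERVICES_TAM * capture_rate

# Software (model-licensing) spend implied by that services slice at 6:1:
implied_software = captured_services / SERVICES_PER_SW_DOLLAR

print(f"Captured services revenue: ${captured_services / 1e9:.1f}B/yr")
print(f"Implied model-licensing spend: ${implied_software / 1e9:.1f}B/yr")
```

At even a hypothetical 3 percent capture, the services line is $45 billion a year, several times any plausible model-licensing revenue from the same accounts. That multiple is the whole argument for standing up a services arm.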
What the Big Three Lose
Accenture, Deloitte, BCG, McKinsey, and the Indian SI majors (TCS, Infosys, Wipro, HCL) collectively bill upward of $400 billion per year in enterprise services. AI-related implementation work is currently the fastest-growing line item in those P&Ls. Both Anthropic's JV and OpenAI's Deployment Company aim directly at that revenue.
The disintermediation logic is clean. Today, when a Fortune 500 enterprise buys a Claude or GPT enterprise license, it then hires Accenture or Deloitte to do the deployment, integration, change management, and custom agent build-out. The systems integrator captures roughly $5–10 of services revenue per $1 of model-licensing revenue. That is the deal that has worked for forty years across every wave of enterprise software (ERP, CRM, cloud, data warehousing, identity).
The new model collapses the sale. The customer buys a single bundle: model + engineering team + outcome guarantee. The implementing firm has direct access to model-roadmap intelligence the SI does not. The PE-backed structure reduces the customer's payment risk because the PE firms are underwriting the delivery economics. And — critically — the PE-portfolio-company captive market means the new firms have a guaranteed initial pipeline that traditional SIs would have had to compete for.
The Big Three responses will fall into three buckets:
- Deepen vendor partnerships. Accenture has already announced an expanded Anthropic alliance and a similar OpenAI partnership. The defense is to argue that an independent SI brings vendor-neutral judgment, deeper industry expertise, and existing client relationships. This is the most likely public posture and the weakest economic position.
- Acquire boutique AI implementation firms. McKinsey acquired QuantumBlack a decade ago and is now hunting frontier-AI specialists. Expect a wave of consolidation in 2026–2027 as the Big Three buy the AI-native implementation firms that have been growing 100%+ annually since 2024.
- Form their own PE-backed JVs. Less likely on the McKinsey/BCG side because of partnership-equity constraints; more likely on the Accenture/Deloitte side. The structural disadvantage is that the SI does not own the model and cannot guarantee model-roadmap alignment.
The Indian SI majors — TCS, Infosys, Wipro, HCL, Tech Mahindra — face a harder problem. Their cost-arbitrage model depends on labor-intensive integration work that increasingly does not need labor. The forward-deployed-engineer-per-client model that both Anthropic and OpenAI are now scaling does not arbitrage geography; it arbitrages model fluency. Bangalore engineers do not have a structural advantage there.
The awkward case is Bain Capital, which sits on OpenAI's Deployment Company cap table while Bain & Company (the consultancy) competes with that same firm for client work. Bain Capital and Bain & Company are legally separate; the brand association is not. Expect significant internal navigation over the next 18 months.
The CIO Procurement Question
If you are a CIO with an active AI implementation budget, the question on your desk this quarter is:
Do we contract for the AI implementation through the model vendor's PE-backed services arm, through a traditional Big Three SI partnered with the model vendor, or through a vendor-neutral boutique?
Each path has trade-offs:
Vendor's PE-backed arm (Anthropic JV or OpenAI's Deployment Company):
- Pro: Deepest model-roadmap access. Direct line to research teams. Engineering quality is structurally guaranteed by the vendor's reputation. Forward-deployed model is operationally tight.
- Con: Maximum vendor lock-in. The implementation team is structurally incentivized to build on the model that funds them, not the model that fits your stack best in 24 months.
Big Three with vendor partnership (Accenture, Deloitte, BCG, McKinsey):
- Pro: Vendor-neutral judgment, existing client relationships, deeper industry expertise, change-management muscle the AI-native firms do not yet have.
- Con: Weaker model-roadmap intelligence, slower delivery, the historical 6:1 services-to-software cost structure that the new model is explicitly trying to undercut.
Vendor-neutral boutique:
- Pro: Multi-model architecture by design, lower total cost than the Big Three, often deeper technical chops on agentic infrastructure.
- Con: Capacity constraints, single-point-of-failure risk on key engineering hires, less structured methodology for change management.
My read: most enterprises will end up running a portfolio. The vendor-PE-backed firm gets the strategic Claude or GPT-native workloads where model-roadmap alignment matters most. The Big Three retain the regulated-industry transformation work where existing client relationships and change-management capability are load-bearing. The boutiques get the agentic-infrastructure and orchestration work that requires multi-model expertise neither of the others has built yet.
The mistake to avoid: standardizing on one path before you know which workloads belong where. The right answer is to break the AI services portfolio into three buckets and source against each one independently.
The CFO Question
The CFO question is sharper:
Under the new vendor-PE-services-firm pricing model, does our AI total cost of ownership go up or down — and what is the lock-in cost we are accepting in exchange?
The honest answer is "down on day one, up over five years if you do not negotiate carefully." The vendor-backed firms can offer aggressive initial pricing because the PE capital underwrites the burn. They are buying market share. The 17.5 percent guaranteed return OpenAI is paying its PE backers tells you exactly how aggressive the underwriting is, and it tells you something about the pricing pressure those firms will need to exert on customers once those customers are locked in.
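One way to pressure-test the "down on day one, up over five years" claim is a toy TCO model. Every parameter below (the SI baseline, the year-one discount, the escalator) is an assumption I chose for illustration, not a quoted price from any of these firms.

```python
# Toy five-year TCO comparison: a vendor-backed services arm that
# discounts year one and escalates after lock-in, vs. a flat-priced
# traditional SI. All parameters are illustrative assumptions.
SI_ANNUAL = 10.0       # traditional SI: flat $10M/yr (assumed baseline)
ARM_YEAR1 = 6.0        # vendor arm: $6M year-one price (assumed discount)
ARM_ESCALATOR = 0.20   # vendor arm: 20%/yr escalation once locked in (assumed)

si_tco, arm_tco, arm_price = 0.0, 0.0, ARM_YEAR1
for year in range(1, 6):
    si_tco += SI_ANNUAL
    arm_tco += arm_price
    print(f"Year {year}: SI cumulative ${si_tco:.1f}M, "
          f"vendor arm cumulative ${arm_tco:.1f}M")
    arm_price *= 1 + ARM_ESCALATOR

# Under these assumptions the arm's ANNUAL price crosses the SI's in
# year 4; the cumulative advantage starts eroding from there, and an
# uncapped escalator eventually flips the comparison outright.
```

The negotiable variable is the escalator cap. Under these particular assumptions the vendor arm is still cumulatively cheaper at year five, but its annual price has already crossed the SI's, which is exactly the trajectory a CFO should be pricing into the contract.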
The negotiation lever every CFO should bring to these conversations: the willingness to walk to the alternative vendor's services arm. As of today, that lever exists for the first time. Anthropic has a JV; OpenAI has a JV; the implementation contract you sign with one is the contract you can credibly threaten to walk to the other on. That competitive dynamic is genuinely new, and it lasts only as long as both vendors believe the other is a real procurement alternative. Use it.
The 18-Month Forecast
Three things will be true by Q4 2027:
One: At least one Big Three SI will have either acquired a boutique AI implementation firm at a premium valuation or formed a PE-backed JV structure that mirrors the Anthropic/OpenAI model. The defensive logic is too obvious to ignore.
Two: A significant percentage of mid-market enterprise AI implementations — somewhere between 20 and 40 percent — will route through one of the two new vendor-backed firms or their imitators. The PE captive distribution is too efficient a channel to leave to third parties.
Three: The 6:1 services-to-software ratio in enterprise IT will compress. Some of that compression goes to the AI vendors directly through their services arms. Some goes to the model itself, which now does work that previously required human consultants. The total enterprise IT services TAM will not shrink; the slice captured by traditional SIs will.
There is a fourth thing that might be true and that I am less confident about. If both vendor-backed firms succeed and the PE backers see the projected returns, expect a wave of similar JVs across other vendor categories — Salesforce-PE, ServiceNow-PE, Workday-PE — each turning their model or platform into a captive services delivery vehicle for portfolio companies. The vendor-as-consultant pattern, if it works for AI, will not stay limited to AI.
Today, May 4, 2026, was the day the model labs stopped pretending they were just model labs. That repositioning will define the enterprise services market for the next decade. The CIOs and CFOs who restructure their AI procurement model around it in the next 90 days will be operating at a different cost base than the ones who do not.
I would not wait to find out which side of that line you are on.
Rajesh Beri is Head of AI Engineering at Zscaler. Opinions are his own and do not represent Zscaler.
Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.
