OpenAI just launched a $4 billion services company to embed AI engineers directly into enterprise organizations. Two days later, Anthropic announced the same thing. This isn't another partner program announcement. This is a fundamental shift in how AI vendors are going to market—and it changes the build-vs-buy calculation for every CTO and CFO evaluating AI investments.
The message from both companies is clear: buying the API isn't enough anymore. They're coming inside your organization to build the systems for you.
What OpenAI Built (and Why It Matters)
OpenAI's new unit—called the OpenAI Deployment Company (DeployCo)—launched Monday with $4 billion in backing from 19 investment firms, including TPG (lead), Advent, Bain Capital, Brookfield, Goldman Sachs, and SoftBank. Consulting giants Bain & Company, Capgemini, and McKinsey are also investors and partners.
DeployCo's model is built around Forward Deployed Engineers (FDEs)—specialized AI engineers who embed inside customer organizations to redesign workflows, connect models to business systems, and build production-ready AI infrastructure. They're not consultants who hand off a strategy deck and leave. They're engineers who sit with your teams, write code, and stay until the system works in production.
To launch, OpenAI acquired London-based AI consulting firm Tomoro, bringing 150 FDEs with experience deploying AI systems at Tesco, Virgin Atlantic, and Supercell. The acquisition gives DeployCo immediate capacity to start embedding engineers into enterprise customers.
Denise Dresser, OpenAI's chief revenue officer, said the challenge has shifted from "can AI do this?" to "how do we integrate AI into the infrastructure and workflows that actually run our business?" DeployCo is OpenAI's answer: engineers who know the models, understand production systems, and can bridge the gap.
The investment scale is notable. $4 billion isn't a pilot program budget. That's enough capital to acquire multiple services firms, hire hundreds of engineers, and compete directly with traditional systems integrators like Accenture, Deloitte, and IBM.
Anthropic's Countermove: The Same Strategy, Different Execution
Anthropic isn't sitting still. The company had already announced its own services unit in March, backed by a $100 million initial investment and supported by its Claude Partner Network. Like OpenAI, Anthropic is building a services layer to help enterprises deploy Claude into production workflows.
The competitive positioning is instructive. Anthropic emphasizes that Claude is available on all three major cloud providers—AWS, Google Cloud, and Microsoft Azure—while OpenAI is primarily distributed through Azure. For enterprises already committed to a specific cloud vendor, that multi-cloud flexibility matters.
Goldman Sachs is backing both OpenAI's and Anthropic's services companies, which tells you Wall Street sees this as a real market opportunity, not a vendor vanity project.
The timing is strategic. Both companies launched services units within months of each other, signaling they've both identified the same bottleneck: enterprises are willing to buy AI, but they can't deploy it effectively without help. Selling APIs and chat interfaces isn't enough. The real revenue opportunity is in deployment services.
The Traditional Consulting Firms Are Nervous (and Should Be)
This move puts pressure on traditional systems integrators who've built businesses around enterprise transformation. Firms like CGI, Accenture, and Deloitte have argued that their decades of experience with business processes, security, and compliance give them an advantage over "born-in-AI" upstarts.
Russell Goodenough, AI lead for CGI, told CRN that traditional solution providers bring "trust and security" that large enterprises need for AI at scale—and they avoid vendor lock-in by working across multiple AI platforms, not just one.
That's a fair point. If you hire OpenAI's DeployCo, you're getting engineers deeply familiar with GPT models and OpenAI's API roadmap. But you're also locked into OpenAI's ecosystem. If the model pricing changes, if a competitor releases a better model, or if OpenAI's API goes down, you're stuck.
Traditional consultancies position themselves as vendor-neutral advisors who can integrate the best AI models for your use case, migrate you to new platforms as the market evolves, and maintain systems across your existing IT infrastructure.
The counterargument from the AI vendors is speed. Born-in-AI firms claim they can move faster because their engineers already know the models, the APIs, and the deployment patterns. Traditional consultancies are learning AI on the job. OpenAI and Anthropic engineers are building AI systems every day.
For enterprises, this creates a new decision point: Do you hire the AI vendor's in-house team (fast, deep model expertise, vendor lock-in risk) or a traditional consultancy (slower, broader experience, vendor-neutral)?
What This Means for CTOs and Engineering Leaders
If you're a CTO or VP of Engineering evaluating AI deployment, this changes your options:
Option 1: Build In-House
Hire your own AI engineers, buy API access, and build custom systems internally. This gives you full control, no vendor lock-in, and the ability to switch models or providers as the market evolves.
The challenge: AI engineering talent is expensive and scarce. A senior AI engineer with production deployment experience commands $250K-$400K+ in total compensation. If you need a team of 5-10 engineers to build and maintain your AI systems, you're looking at $2M-$4M annually just in salary costs.
You'll also need to staff for security, compliance, data engineering, and integration with your existing systems. The total cost of an internal AI team can easily hit $5M-$10M annually before you've delivered a single production system.
Option 2: Hire a Traditional Consultancy
Engage Accenture, Deloitte, CGI, or another systems integrator to design and deploy AI systems for you. They bring vendor-neutral expertise, broad industry experience, and the ability to integrate AI into complex enterprise workflows.
The challenge: Traditional consultancies bill $300-$500 per hour for AI consulting work. A mid-sized AI deployment project can run $2M-$5M in consulting fees. And because these firms are learning AI deployment alongside their clients, you're often paying for on-the-job training.
Speed is another issue. Traditional consulting engagements move slowly—requirements gathering, architecture design, approval cycles, phased rollouts. If you're competing against a rival who's moving faster with AI, a 12-18 month consulting project might be too slow.
Option 3: Hire OpenAI or Anthropic's Services Team
Bring in Forward Deployed Engineers from the AI vendor directly. They know the models, the APIs, and the deployment patterns. They can move fast because they've built similar systems dozens of times before.
The challenge: Vendor lock-in. If you build your entire AI infrastructure around GPT-4 and OpenAI's API, switching to Claude or Gemini later will be expensive and disruptive. You're betting that OpenAI (or Anthropic) will remain the best model provider for your use case over the next 3-5 years.
Pricing is another unknown. OpenAI hasn't published DeployCo pricing yet, but it's reasonable to assume it will be competitive with traditional consultancies—probably in the $250-$400 per hour range for FDEs. A major deployment project could still cost $1M-$3M.
What This Means for CFOs and Business Leaders
From a CFO's perspective, this is fundamentally a build-vs-buy decision with a new "buy from the AI vendor" option on the table.
The financial calculus depends on three factors:
1. Time to Value
How fast do you need AI systems in production? If you're in a competitive market where rivals are already deploying AI, speed matters. Hiring OpenAI or Anthropic's services team might deliver results 6-12 months faster than building in-house or engaging a traditional consultancy.
2. Total Cost of Ownership
Building in-house costs $5M-$10M annually for a competent AI team, but you own the system and can iterate without ongoing consulting fees. Hiring a consultancy costs $2M-$5M per project, but you pay for each new initiative. Hiring the AI vendor's team probably costs $1M-$3M per project, but with faster delivery.
3. Strategic Risk
Vendor lock-in is a real risk. If you build your entire operation around GPT-4 and OpenAI later reprices its API sharply, your cost structure changes overnight. Building in-house or using a vendor-neutral consultancy gives you the flexibility to switch models as the market evolves.
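The cost comparison above reduces to simple break-even arithmetic. The sketch below uses the article's ballpark ranges taken at their midpoints (these are assumptions for illustration, not quoted prices); the question it answers is how many projects per year you need before an in-house team becomes cheaper than outsourcing.

```python
# Build-vs-buy break-even sketch.
# All dollar figures are midpoints of the article's rough ranges, not quotes.

IN_HOUSE_ANNUAL = 7_500_000          # midpoint of $5M-$10M/year internal team
CONSULTANCY_PER_PROJECT = 3_500_000  # midpoint of $2M-$5M per project
VENDOR_PER_PROJECT = 2_000_000       # midpoint of $1M-$3M per project

def annual_cost(projects_per_year: int, per_project: float) -> float:
    """Total yearly spend if every project is outsourced at a flat per-project rate."""
    return projects_per_year * per_project

def break_even_projects(per_project: float, in_house: float = IN_HOUSE_ANNUAL) -> int:
    """Smallest number of projects per year at which outsourcing
    costs more than running an internal team."""
    n = 1
    while annual_cost(n, per_project) <= in_house:
        n += 1
    return n

if __name__ == "__main__":
    for label, rate in [("traditional consultancy", CONSULTANCY_PER_PROJECT),
                        ("vendor FDE team", VENDOR_PER_PROJECT)]:
        print(f"{label}: outsourcing exceeds in-house cost at "
              f"{break_even_projects(rate)} projects/year")
```

At these midpoints, outsourcing to a consultancy overtakes the in-house budget at three projects a year, and the cheaper vendor team at four; below that volume, per-project buying wins on cost alone, before factoring in time to value and lock-in risk.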
The Real Strategic Question: Is AI a Core Competency?
The deeper question for enterprise leaders is whether AI should be a core competency or an outsourced capability.
If AI is central to your competitive advantage—if it's how you differentiate your product, serve customers better, or operate more efficiently than rivals—you probably need to build in-house. Outsourcing your core competency to a vendor (even a very good one) is strategically risky.
But if AI is a supporting capability—something that improves operations but isn't your core differentiator—buying services from OpenAI, Anthropic, or a traditional consultancy might make sense. You get speed and expertise without the overhead of building and maintaining an internal AI team.
Clayton Christensen's classic distinction applies here: is the technology a sustaining innovation (it helps you do what you already do, better) or a disruptive one (it changes what you do entirely)? If AI is sustaining for your business, outsourcing is defensible. If it's disruptive, build it in-house.
The Partner Ecosystem Is About to Get Very Complicated
One underreported aspect of this announcement is the partner network tension it creates.
OpenAI's Frontier Alliance includes partners like Accenture, Deloitte, and PwC—the same firms OpenAI's DeployCo will now compete against. Anthropic's Claude Partner Network includes similar consulting firms.
How does that work? If Accenture is both a partner and a competitor, which projects does OpenAI hand to Accenture, and which does it keep for DeployCo? The lines are blurry.
For enterprises, this creates confusion. If you call Accenture for AI deployment help, are you getting Accenture's vendor-neutral expertise, or are you getting an Accenture team that's basically reselling OpenAI's services?
Expect channel conflict. OpenAI and Anthropic will try to route high-value, strategic deployments to their own services teams. Partners will push back and demand clear rules of engagement. Some partnerships will dissolve. Others will turn into awkward co-opetition arrangements where the AI vendor and the consultancy are cooperating on some deals and competing on others.
What to Do Now
If you're evaluating AI deployment options, here's what to prioritize:
For CTOs and Engineering Leaders:
- Evaluate internal capability. Can your current team deploy AI systems in production, or do you need external help? Be honest—most teams don't have this experience yet.
- Run a pilot with multiple vendors. Don't commit to a multi-million-dollar services contract until you've seen how the vendor's team actually works with your organization. Run a 3-6 month pilot with OpenAI, Anthropic, and a traditional consultancy. Measure time to deployment, system quality, and knowledge transfer.
- Plan for vendor switching costs. If you go with OpenAI or Anthropic's services team, document integration points and design for portability. You don't want to be locked in forever.
For CFOs and Business Leaders:
- Model the total cost of ownership. Compare building in-house ($5M-$10M/year) vs buying services ($1M-$5M per project) vs hybrid (internal team + external specialists). Factor in time to value and strategic risk.
- Set a decision deadline. AI deployment is moving fast. Waiting 12 months to decide means you're 12 months behind competitors who are already deploying. Set a 90-day deadline to evaluate options and commit to a strategy.
- Track competitive moves. If your direct competitors are deploying AI faster than you are, that's a strategic problem. Don't let analysis paralysis cost you market share.
The Bottom Line
OpenAI and Anthropic are spending billions of dollars to embed engineers inside enterprise organizations because they've realized that selling AI is no longer about selling models—it's about selling deployment expertise.
For enterprises, this creates new options and new complexity. You can build in-house, hire a traditional consultancy, or bring in the AI vendor's team directly. Each option has trade-offs in cost, speed, and strategic risk.
The only wrong answer is doing nothing. Your competitors are deploying AI right now. The question isn't whether to deploy AI—it's who you trust to help you do it.
