After years of Microsoft Azure exclusivity, OpenAI just made its biggest enterprise infrastructure move yet: GPT-5.5, Codex, and Managed Agents are now available on AWS Bedrock.
The announcement, made last week, fundamentally changes the enterprise AI landscape. For the first time, companies with AWS commitments can access OpenAI's frontier models without leaving their existing cloud infrastructure.
This isn't just about model access. It's about breaking vendor lock-in, reducing costs, and giving enterprises the multi-cloud AI strategy they've been demanding.
What Just Changed
OpenAI and AWS announced three new offerings in limited preview:
- OpenAI models on Amazon Bedrock — GPT-5.4 and GPT-5.5 (currently rolling out)
- Codex on Amazon Bedrock — OpenAI's coding agent, now integrated with AWS infrastructure
- Amazon Bedrock Managed Agents powered by OpenAI — Production-ready AI agents with AWS security and governance
All three run through existing Bedrock APIs. Enterprises authenticate with AWS credentials, process inference through AWS infrastructure, and pay through existing AWS commits.
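In practice, "existing Bedrock APIs" means the standard Converse call via boto3. A minimal sketch, assuming a hypothetical `openai.gpt-5.5` model ID (check the Bedrock model catalog for the real identifier in your region):

```python
# Sketch: calling an OpenAI model through Bedrock's standard Converse API.
MODEL_ID = "openai.gpt-5.5"  # hypothetical ID -- check your region's model catalog

def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the payload for the bedrock-runtime Converse API."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def invoke_gpt55(prompt: str) -> str:
    """Send the request through Bedrock (needs AWS credentials; not run here)."""
    import boto3  # authentication comes from the ambient AWS credential chain
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Note what's absent: there's no OpenAI API key anywhere. The boto3 client picks up credentials from the ambient IAM role or credential chain, which is the whole point of the integration.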
Why This Matters for Technical Leaders
Network latency drops. When your data lives in AWS and your AI models run in Azure, every API call crosses cloud boundaries. That means added latency, egress fees, and operational complexity. Running OpenAI models on Bedrock keeps inference in the same AWS regions as your data, cutting out the cross-cloud round trip.
Security posture improves. Instead of managing API keys for multiple cloud providers, teams use AWS IAM roles. Instead of separate compliance frameworks, everything runs under AWS's existing enterprise-grade controls. One security model, one audit trail, one governance framework.
Cost structure simplifies. Companies with AWS Enterprise Discount Programs (EDPs) can now apply those commits to OpenAI model usage. No more fighting procurement to add another cloud provider. No more reconciling bills across Azure and AWS.
Codex integration becomes seamless. For teams already using AWS CodeCommit, CodePipeline, or CodeBuild, Codex on Bedrock means the AI coding agent runs where the code lives. Authenticate once, build in one environment, stay in one ecosystem.
Why This Matters for Business Leaders
Multi-cloud strategy just became viable. Before this, choosing OpenAI meant choosing Azure for inference. Now, CFOs can negotiate better terms with AWS by consolidating AI spend. CIOs can build multi-cloud resilience without operational complexity.
Vendor lock-in risk decreases. Enterprises betting billions on AI transformation don't want single-vendor dependency. This announcement proves OpenAI is committed to multi-cloud distribution, reducing strategic risk for C-suite decision-makers.
Time-to-value accelerates. Companies already running production workloads on AWS don't need to spin up Azure infrastructure to experiment with GPT-5.5. No new contracts, no new security reviews, no new compliance assessments. Just enable Bedrock and start building.
Procurement friction vanishes. Legal, Finance, and Security teams have already vetted AWS. Adding OpenAI models through Bedrock is an infrastructure change, not a new vendor relationship. That's weeks or months of approval cycles eliminated.
The GPT-5.5 Performance Story
OpenAI released GPT-5.5 on April 23, 2026, calling it "our smartest and most intuitive to use model yet." The performance gains matter for enterprise use cases:
- 82.7% accuracy on Terminal-Bench 2.0 (complex command-line workflows)
- 58.6% on SWE-Bench Pro (real-world GitHub issue resolution)
- 84.9% on GDPval (knowledge work across 44 occupations)
- 78.7% on OSWorld-Verified (operating real computer environments)
More importantly, GPT-5.5 uses fewer tokens than GPT-5.4 to complete the same tasks. That's not just faster inference; it's a lower cost per task.
On Artificial Analysis's Coding Index, GPT-5.5 delivers state-of-the-art intelligence at half the cost of competing frontier coding models.
For enterprise teams running thousands of AI-assisted coding tasks per day, that efficiency gain translates to real budget impact.
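To see how token efficiency compounds at that scale, here's a back-of-the-envelope sketch using the Bedrock list prices quoted later in this post ($5/1M input, $30/1M output). The per-task token counts are illustrative, not published figures:

```python
# What "fewer tokens per task" means in dollars, at the Bedrock list prices.
PRICE_IN = 5.00 / 1_000_000    # $ per input token
PRICE_OUT = 30.00 / 1_000_000  # $ per output token

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single model call."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# Illustrative only: assume GPT-5.5 needs ~30% fewer output tokens than
# GPT-5.4 for the same coding task.
cost_gpt54 = task_cost(4_000, 2_000)   # ~$0.080 per task
cost_gpt55 = task_cost(4_000, 1_400)   # ~$0.062 per task
daily_savings = (cost_gpt54 - cost_gpt55) * 10_000  # ~$180/day at 10k tasks
```

Even at these made-up volumes, a few cents per task becomes six figures a year. Run the same arithmetic with your own token logs before taking any vendor's efficiency claim at face value.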
Managed Agents: The Missing Infrastructure Piece
The third part of this announcement—Amazon Bedrock Managed Agents powered by OpenAI—solves a problem most enterprises hit within six months of deploying AI agents: production operations.
Building a proof-of-concept agent is easy. Running it reliably at scale, with proper error handling, logging, monitoring, and fallback logic, is hard.
Bedrock Managed Agents provides:
- OpenAI harness integration — Pre-built infrastructure optimized for GPT-5.5's agentic capabilities
- Faster execution — AWS-optimized serving for lower latency
- Built-in security and governance — IAM roles, VPC controls, CloudTrail logging from day one
- Production-ready deployment — No need to build custom agent orchestration
For teams that have spent months building custom agent frameworks, this is infrastructure they no longer need to maintain.
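If the managed offering follows the pattern of Bedrock's existing agent runtime, invoking one looks roughly like this. The agent and alias IDs are placeholders, and reuse of the current InvokeAgent API for OpenAI-powered agents is our assumption, not something AWS has confirmed:

```python
# Sketch: invoking a managed agent via the existing Bedrock agent runtime.
import uuid

def build_invoke_request(agent_id: str, alias_id: str, text: str) -> dict:
    """Assemble InvokeAgent parameters; agent/alias IDs are placeholders."""
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": str(uuid.uuid4()),  # groups turns into one conversation
        "inputText": text,
    }

def run_agent(text: str) -> str:
    """Call the agent and stitch the streamed chunks (needs AWS credentials)."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    resp = client.invoke_agent(**build_invoke_request("AGENT_ID", "ALIAS_ID", text))
    # InvokeAgent streams its response; concatenate the returned chunks.
    return "".join(
        event["chunk"]["bytes"].decode()
        for event in resp["completion"]
        if "chunk" in event
    )
```

The value proposition is everything this sketch leaves out: retries, tool orchestration, guardrails, and logging all live in the managed layer instead of your codebase.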
The Multi-Cloud AI Market Shift
This announcement signals a broader shift in enterprise AI procurement.
In 2024-2025, choosing an AI model meant choosing a cloud provider. OpenAI = Azure. Anthropic = AWS and Google Cloud. Google Gemini = Google Cloud.
In 2026, that's starting to break down.
- OpenAI now runs on AWS Bedrock (and still on Azure)
- Anthropic runs on AWS Bedrock and Google Vertex AI
- Google Gemini is Google-only (for now)
The trend is clear: model providers want distribution, and cloud providers want to be the infrastructure layer.
For enterprises, this means:
- More negotiating leverage — Play cloud providers against each other on price
- Better operational flexibility — Run workloads where it makes sense, not where the model happens to be
- Reduced migration risk — Multi-cloud model access makes it easier to shift infrastructure without rewriting applications
What to Do Next
For CIOs and CTOs:
- Review your AWS EDP — Can you redirect existing cloud credits to OpenAI model usage?
- Audit egress costs — If you're running OpenAI models from Azure but your data lives in AWS, calculate what you're paying in cross-cloud data transfer
- Test Codex on Bedrock — If your engineering teams use AWS CodeCommit or CodePipeline, pilot Codex integration and measure productivity gains
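For the egress audit above, a rough estimator helps frame the conversation. This sketch assumes AWS's typical ~$0.09/GB internet data-transfer-out rate; your actual rate varies by region and volume tier, so check your own bill:

```python
# Rough cross-cloud egress estimate for the audit above.
EGRESS_PER_GB = 0.09  # assumed AWS data-transfer-out rate; varies by region/tier

def monthly_egress_cost(gb_per_day: float, rate: float = EGRESS_PER_GB) -> float:
    """Approximate monthly egress spend for data shipped out of AWS."""
    return gb_per_day * 30 * rate

# e.g. 50 GB/day of context shipped to an Azure-hosted model: ~$135/month
```

The absolute number is often small, but it's pure waste once the model runs in the same cloud as the data, and it's a concrete line item to bring to the renewal conversation.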
For CFOs and Procurement:
- Renegotiate cloud contracts — Use multi-cloud AI access as leverage in your next AWS renewal
- Consolidate vendors — If you can run OpenAI through AWS Bedrock instead of adding Azure, that's one less vendor relationship to manage
- Model cost benchmarks — GPT-5.5 on Bedrock is priced at $5/1M input tokens and $30/1M output tokens—compare that to your current Azure OpenAI spend
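A quick way to build that benchmark: plug your monthly token volumes into the Bedrock rates above and compare the result to your Azure OpenAI invoice:

```python
# Monthly Bedrock spend at the quoted rates ($5/1M input, $30/1M output).
def monthly_bedrock_spend(input_tokens_m: float, output_tokens_m: float) -> float:
    """Spend in dollars; volumes are in millions of tokens per month."""
    return input_tokens_m * 5.00 + output_tokens_m * 30.00

# e.g. 2,000M input + 400M output tokens/month:
# monthly_bedrock_spend(2_000, 400) -> 22000.0
```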
For Security and Compliance:
- Review IAM policies — Bedrock uses AWS IAM; ensure your existing roles and policies extend to AI model access
- Check data residency requirements — Bedrock runs models in AWS regions; confirm that meets your compliance needs
- Audit logging setup — CloudTrail can now track OpenAI model usage alongside other AWS service calls
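A minimal IAM policy statement for scoping model access might look like the following. The `openai.*` foundation-model ARN pattern is an assumption on our part, so confirm the actual ARNs in your account before relying on it:

```python
# Sketch: an IAM policy statement scoping access to OpenAI models on Bedrock.
# The "openai.*" foundation-model ARN pattern is an assumption.
import json

bedrock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOpenAIOnBedrock",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/openai.*",
        }
    ],
}

print(json.dumps(bedrock_policy, indent=2))
```

Because it's a standard IAM policy, every invocation governed by it shows up in CloudTrail like any other AWS API call, which is exactly the single-audit-trail story the integration promises.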
The Bottom Line
OpenAI on AWS Bedrock isn't just a new deployment option. It's a strategic shift in how enterprises buy, deploy, and operate AI infrastructure.
For years, picking an AI model meant picking a cloud provider. That era is ending.
Now, enterprises can choose the best model for the job and run it on the infrastructure that makes sense for their business—without vendor lock-in, without operational complexity, and without rewriting applications.
If you're an AWS shop betting big on AI, this announcement just gave you a lot more options.
And if you're a CFO negotiating cloud renewals, you just got a lot more leverage.
Key Takeaways:
✅ OpenAI GPT-5.5, Codex, and Managed Agents now available on AWS Bedrock
✅ Ends Microsoft Azure exclusivity for OpenAI models
✅ Enterprises get multi-cloud AI options with AWS security and governance
✅ Reduces egress costs for AWS-based data + OpenAI model usage
✅ Bedrock Managed Agents provide production-ready infrastructure
✅ GPT-5.5 delivers state-of-the-art coding at half the cost of competitors
Sources:
- OpenAI on AWS announcement
- AWS Bedrock OpenAI models
- Introducing GPT-5.5
- Amazon Bedrock Managed Agents
Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.