The quietest $60 million in enterprise AI this year just landed on an orchestration platform most CIOs have never heard of.
On April 23, 2026, Orkes closed a $60 million Series B led by AVP, with participation from new investor Prosperity7 Ventures and returning backers Nexus Venture Partners, Battery Ventures, and Vertex Ventures US. The company runs over one billion workflows a day for customers including JPMorgan Chase, Tesla, LinkedIn, American Express, VMware, Quest Diagnostics, United Wholesale Mortgage, and Woodside Energy.
The round is small by 2026 agentic-AI standards. The signal is not.
Orkes is not a model company. It is not a vector database, a RAG framework, or a copilot. It is the execution layer underneath enterprise AI—the substrate that decides whether an agentic workflow survives contact with production or breaks the moment an LLM call times out, a tool rate-limits, or a downstream service goes down.
That is exactly the capability two-thirds of enterprises cannot build themselves. McKinsey's 2025 research found roughly 66% of companies stuck in AI pilot mode, not because models are weak but because production-grade orchestration is hard. Gartner projects AI software spending will hit $450 billion in 2026. Most of that capital will be wasted on agents that never leave staging.
For CIOs, CTOs, and CFOs evaluating what to actually buy, the Orkes round is a data point on where the enterprise AI stack is consolidating. The orchestration layer is no longer optional.
Why Netflix's Conductor Matters in 2026
Orkes did not start as an AI company. Its engineering core is Conductor, the durable workflow engine Netflix built in 2014 and open-sourced in 2016 to coordinate its own microservices chaos at global streaming scale.
Conductor was built to answer a very specific production question: when you have hundreds of services, thousands of deployments, and workflows that span minutes to days, how do you guarantee that a business process finishes exactly once, recovers from failure, and leaves an auditable trail?
That is also—almost word for word—the question enterprise AI teams are now asking about agents.
Orkes' co-founder and CEO Jeu George was one of the original Conductor maintainers at Netflix. The company's thesis from day one was that agentic workflows are not a new category. They are distributed systems with an LLM somewhere in the call graph. The primitives that made Conductor reliable at Netflix—durable state, idempotent retries, event-driven triggers, human-in-the-loop steps, full observability—are the same ones AI agents need to survive production.
"Developers need orchestration, controls, and visibility to run advanced AI and agentic systems with confidence," George said in the Series B announcement. It is not marketing. It is the entire architectural bet.
The Technical Layer CIOs Should Care About
For CTOs evaluating Orkes against LangGraph, n8n, Temporal, Dagster, or homegrown orchestration, the capability stack is specific.
1. Durable execution, not in-memory state. Conductor persists workflow state to a durable store and resumes from the last successful step on failure. That is the difference between an agent that retries gracefully when GPT-5 returns a 503 and an agent that loses three hours of customer context because a container restarted. Orkes advertises a 99.99% availability SLA on its managed cloud.
2. Polyglot runtime. Workflows can invoke tasks written in Java, Python, Go, C#, JavaScript, or TypeScript. This matters because real enterprise AI stacks are not clean. The LLM call lives in Python. The legacy service is Java. The front-end glue is TypeScript. Conductor does not force a language. It coordinates them.
3. Agent Runtime. Orkes' Agent Runtime blends deterministic workflow stages with LLM-driven decision points. The pattern matters: the platform owns the workflow, the model owns interpretation. That separation is what makes agentic systems auditable. You can look at any run and see which step the agent took, what the LLM returned, and whether a policy gate fired.
4. MCP Gateway. Orkes converts internal APIs into Model Context Protocol tools that any compliant agent—Claude, Gemini, OpenAI-function-calling, LangGraph, your own—can invoke safely. For enterprises with thousands of internal APIs, this removes the single biggest integration barrier to agent deployment. You expose tools once, govern them centrally, and consume them across frameworks.
5. Prompt-to-Workflow. Natural-language workflow generation. A developer describes a process, Orkes drafts a starter workflow graph, and a human edits and deploys. This is the productivity layer that shortens the time from "we want an agent that does X" to "we have an agent that does X in staging."
6. Fine-grained governance. RBAC, audit trails, version control with rollback, step-by-step execution visualization. This is the capability set that separates an enterprise platform from a demo framework.
The design choice that underlies all of it: AI does not decide the workflow. The workflow is defined, versioned, and enforced by the platform. The LLM contributes decisions within pre-authorized steps. That is the architecture pattern that will win regulated enterprises—financial services, healthcare, energy, telecom—because it is the only pattern that survives an audit.
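That workflow-first, model-second pattern is easy to sketch in code. The following is a minimal, hypothetical illustration (not the Orkes SDK—the `DurableWorkflow` and `fake_llm` names are invented for this example): the step graph is fixed by the platform, every completed step is checkpointed to durable storage so a crashed run resumes from the last successful step, and the LLM may only choose from a pre-authorized allow-list, with every run leaving an audit trail.

```python
import json
import os
import tempfile

def fake_llm(prompt, options):
    """Stand-in for a real model call. A production system would call an
    LLM and validate its answer against the allow-list before accepting it."""
    return options[0]

class DurableWorkflow:
    """Hypothetical sketch of workflow-first orchestration: the platform
    owns the step graph; the model only decides inside pre-authorized steps."""

    def __init__(self, steps, state_path):
        self.steps = steps            # ordered (name, fn, allowed_decisions)
        self.state_path = state_path  # durable checkpoint store

    def _load(self):
        if os.path.exists(self.state_path):
            with open(self.state_path) as f:
                return json.load(f)
        return {"done": [], "ctx": {}, "audit": []}

    def _save(self, state):
        with open(self.state_path, "w") as f:
            json.dump(state, f)

    def run(self):
        state = self._load()          # resume from the last checkpoint
        for name, fn, allowed in self.steps:
            if name in state["done"]:
                continue              # completed in a prior run; skip
            if allowed:               # LLM decision point, policy-gated
                choice = fake_llm(f"decide for {name}", allowed)
                if choice not in allowed:
                    raise PermissionError(f"unauthorized decision: {choice}")
                state["ctx"]["decision"] = choice
            state["ctx"] = fn(state["ctx"])
            state["done"].append(name)
            state["audit"].append({"step": name, "ctx": dict(state["ctx"])})
            self._save(state)         # checkpoint after every step
        return state
```

The key design choice is that a restart replays the `done` list instead of re-executing steps, which is what makes retries idempotent and the audit trail complete.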
The Customer List Tells the Story
The customer roster Orkes disclosed around the Series B is the single most useful signal in the announcement.
- JPMorgan Chase — regulated financial services, extreme audit demands
- Tesla — manufacturing and engineering at global scale
- American Express — payments, fraud, compliance
- LinkedIn — social graph scale, personalization pipelines
- Quest Diagnostics — healthcare, HIPAA-regulated workflows
- United Wholesale Mortgage — financial services, loan origination
- Twilio — communications infrastructure provider
- VMware — enterprise software
- Woodside Energy — Australian energy major, industrial operations
- Naveo Commerce — global e-commerce order fulfillment
- Foxtel, Coupang, Swiggy — media and marketplace scale
A customer list that includes JPMorgan Chase, Quest Diagnostics, and Tesla sends a very different credibility signal than one full of AI-native startups. These are companies whose compliance, security, and reliability bars are set by regulators. They do not buy unproven infrastructure.
Naveo Commerce's use case is instructive. The company runs global supply-chain order fulfillment on Orkes with AI agents handling inventory tracking and real-time disruption detection. That is not a chatbot. It is an agentic workflow where a delivery delay in Rotterdam triggers a reroute decision, a customer notification, and a financial reconciliation across three systems—all durably, all observably, all auditably.
That is the production profile. And it is what justifies a Series B from AVP and Prosperity7.
What CFOs Should See in This Round
For finance leaders trying to make sense of AI infrastructure spend, the Orkes round reframes a simple question: where does the orchestration dollar go?
Most enterprise AI budgets in 2024 and 2025 flowed to three line items: model usage (OpenAI, Anthropic, Google), compute (GPU capacity), and professional services (consultants to glue it together). The orchestration and governance layer was assumed to be free—a side effect of whatever framework the engineering team adopted.
That assumption is breaking. The teams actually running AI in production are discovering that orchestration is where the unit economics live:
- Reliability drives cost. An agent that retries correctly on a 429 costs pennies. An agent that fails, dumps context, and forces a full regeneration costs dollars. At a billion workflows a day, the delta is enormous.
- Observability drives capacity planning. You cannot optimize what you cannot measure. Step-level cost attribution—which model call is burning tokens, which tool invocation is slow—is how enterprises stop over-provisioning model budgets.
- Governance drives insurability. Increasingly, cyber-insurance underwriters and internal risk committees ask for audit trails on AI-driven decisions. A platform with RBAC and full execution logs commands a lower risk premium. A LangChain script with no audit trail is an underwriting conversation.
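The reliability economics in the first bullet come down to a few lines of control logic. Here is a hedged, generic sketch (no vendor's actual API; `TransientError` and the ledger are invented for illustration) of an orchestrator-style retry policy: transient failures such as 429 and 503 back off and retry the failing step alone, preserving upstream context instead of forcing a full regeneration, and every attempt is counted so cost can be attributed per step.

```python
import time

TRANSIENT = {429, 503}  # retryable HTTP-style statuses

class TransientError(Exception):
    """Raised by a step when the model or tool returns a retryable error."""
    def __init__(self, status):
        super().__init__(f"status {status}")
        self.status = status

def run_step_with_retry(step_fn, max_attempts=4, base_delay=0.01, ledger=None):
    """Retry only the failing step with exponential backoff, keeping
    upstream context intact; count attempts for cost attribution."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if ledger is not None:  # every attempt costs tokens; record it
            ledger["attempts"] = ledger.get("attempts", 0) + 1
        try:
            return step_fn()
        except TransientError as exc:
            if exc.status not in TRANSIENT or attempt == max_attempts:
                raise           # non-retryable, or retries exhausted
            time.sleep(delay)
            delay *= 2          # back off: 0.01s, 0.02s, 0.04s, ...
```

A step that gets rate-limited twice and then succeeds costs three attempts against one step's ledger entry; the rest of the workflow's state is never touched, which is exactly the pennies-versus-dollars delta the bullet describes.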
Prosperity7 Ventures' Managing Director Abhishek Shukla framed the thesis precisely: Orkes provides "a single, governed engine to coordinate LLMs, tools, microservices, and human review, so AI can safely sit in the middle of mission-critical workflows."
The operative phrase is "mission-critical workflows." That is the budget line CFOs should actually track. Not the model bill. The orchestration bill—including the downstream cost of failures the orchestrator prevented.
The Competitive Landscape, Honestly
Orkes does not sit in a vacuum. The agentic orchestration market is loud, and most buyers are confused.
The workflow engines: Temporal, Prefect, Dagster, Airflow, AWS Step Functions. These platforms are mature, battle-tested, and increasingly adding AI primitives. Temporal in particular has aggressive roadmap positioning around durable AI workflows. Orkes' advantage is that it started with a runtime (Conductor) that was built for polyglot, asynchronous, event-driven workloads—exactly the shape agentic systems take.
The AI-native frameworks: LangGraph, LlamaIndex agents, CrewAI, AutoGen. These are developer-first, optimized for rapid prototyping, and improving fast. Their weakness is production operations: durability, RBAC, audit, multi-tenant governance, SLA-backed uptime. Many enterprise teams are discovering that a LangGraph prototype does not safely scale to a thousand concurrent customer workflows.
The no-code automation platforms: n8n, Zapier Agents, Make. Excellent for citizen developers and lightweight automation. Not the answer when the workflow touches core banking systems or PHI.
The hyperscaler offerings: AWS Bedrock Agents, Azure AI Foundry, Google Agent Builder. Attractive for single-cloud shops. Lock-in is real. Most Fortune 500s run multi-cloud and need vendor-neutral orchestration.
Orkes' positioning is the middle: enterprise-grade durability like Temporal, agentic primitives like LangGraph, deployable on AWS, Azure, GCP, or on-prem. That is a specific bet. It wins in regulated enterprises running hybrid infrastructure. It loses to AWS Bedrock Agents inside pure AWS shops that do not care about portability.
For buyers, the honest framing: Orkes is the right answer when you have hundreds of existing microservices, multiple LLM providers, regulated workloads, and a need to outlive any single model vendor. It is the wrong answer when you have three agents and a single cloud.
The Decision Framework
For CIOs and CTOs evaluating agentic orchestration in the next ninety days, the decision is not "Orkes vs. LangGraph." It is "do we adopt a platform or accumulate prototypes?"
If you have more than five agents in or approaching production: Stop treating orchestration as a framework choice. It is an architectural tier. Run a bake-off between Orkes, Temporal + AI add-ons, and your hyperscaler's native option. Evaluate on durability, observability, governance, and multi-framework tool integration—not model quality.
If you are all-in on a single hyperscaler: The native orchestration tools (Bedrock Agents, Agent Builder) are cheaper and more tightly integrated. Lock-in is real. Make the decision eyes-open.
If you are in financial services, healthcare, or energy: Governance and audit are the decisive criteria. Any platform that cannot produce step-level execution logs, RBAC on tools, and version-controlled workflows is a procurement blocker. Orkes' customer list suggests it clears that bar. Validate independently.
Regardless of vendor choice: Adopt the workflow-first, model-second pattern. The platform owns the orchestration, governance, and observability. The LLM contributes decisions inside pre-authorized steps. This is the architectural principle that survives model churn. Opus becomes obsolete, Gemini-3 becomes obsolete, the workflow endures.
The Bottom Line
Orkes' Series B is not a story about a $60 million round. It is a story about which layer of the enterprise AI stack is getting rewarded.
In 2024, capital flowed to models. In 2025, capital flowed to inference infrastructure. In 2026, capital is flowing to the layer that decides whether an agentic workflow actually works in production, for a regulated enterprise, at a billion runs a day.
That layer is orchestration. Orkes is not alone there—Temporal, the hyperscalers, and a new wave of agent-native platforms are all competing—but the customer list, the Conductor heritage, and the AVP-led round put the company among the serious contenders.
For enterprise AI leaders, the takeaway is not to buy Orkes specifically. It is to recognize that the framework you chose in 2024 is almost certainly not the platform you will run in 2027. Budget for the migration. Write the governance requirements now. Evaluate orchestration as its own architectural tier.
The agentic workflow war will not be won by the best model. It will be won by the platform that keeps a million concurrent agents running reliably while the models underneath them get swapped out every six months.
Sources
- Orkes Raises $60M in Series B Funding (FinSMEs)
- Orkes raises $60M as developers increasingly use its platform to deploy AI confidently in production (AVP Capital)
- Orkes Raises $60M to Scale AI Workflow Orchestration (JustAINews)
- Modern Workflow Orchestration Platform (Orkes)
- Conductor open-source project (GitHub)
- Top Startup and Tech Funding News – April 23, 2026 (Tech Startups)
