The healthcare industry will spend more than $18 billion on AI in 2026, roughly 46 cents of every dollar flowing into healthcare technology investment, and the dirty secret in every CIO's deck is that 95% of the pilots that money funds will produce no measurable value, according to MIT's NANDA State of AI in Business 2025 report. On May 14, Salesforce Ventures and Echo Health Ventures led a $17.5 million Series A into Optura, a startup that thinks the answer to that failure rate isn't a better model but a measurement layer. The company's ROAI™ (Return on AI Investment) platform already has more than $2 billion in AI initiatives loaded onto it from customers including Independence Blue Cross, Prime Therapeutics, and Ardent Health, with $120 million in tracked value and 700% ROAI on in-flight initiatives. For CIOs trying to defend AI budgets and CFOs being asked to underwrite the next wave of agent deployments, Optura's bet is that the discipline of measurement, not compute, data, or models, is the gating function.
What Changed on May 14
Optura announced its Series A round with co-founders Andy Fanning (CEO, former VP of AI/Automation at Cigna) and Michael Hollis (President, former President and CGO at Emids) leading the company past $25 million in total funding. Salesforce Ventures led the round, Echo Health Ventures joined as a new strategic investor, and Susa Ventures, Matrix Partners, and HC9 Ventures continued their participation. The use of proceeds is straightforward: expand AI capabilities, grow platform engineering teams, and scale partnerships with frontier model providers as health plans and providers increasingly demand measurable returns from AI dollars.
The platform itself (what Optura calls ROAI™) sits as an instrumentation layer between an organization's existing data, its standard operating procedures, and the agents or models it deploys to act on them. According to Optura's product disclosures, the platform has five core capabilities:
- A unified knowledge layer that maps existing organizational data into a single ontology
- A use-case scoring engine that prioritizes initiatives by cost, readiness, and strategic alignment
- An AI agent translation layer that turns existing workflows and SOPs into deployable agents
- A return-simulation engine that models projected value before deployment capital is committed
- Real-time dashboards that track live outcomes against those projections
Across deployed customers the platform now reports 8,000 unique users and more than 250 new AI use cases identified that customers had not previously surfaced through their internal innovation processes.
Two customer signals matter most for the broader industry. Michael R. Vennera, EVP and Chief Strategy, Technology and Operations Officer at Independence Blue Cross, framed the buying logic in the press release: "The question for health plans is no longer whether to invest in AI; it's whether those investments are actually delivering better outcomes." That is the question every CIO will be asked to answer during budget season. The second signal is Andy Fanning's own framing of the macro moment: "As foundational models like Claude for Healthcare enter the market, the cost of chasing AI hype without disciplined ROI has become existential risk." Translation: the marginal cost of running a bad pilot just dropped, which means the marginal volume of bad pilots just rose. Without an instrumentation layer, the 95% failure rate compounds.
Why This Matters
Technical Implications for CIOs and CTOs
For technical leaders, Optura's architecture is a tell about where enterprise AI tooling is heading. The platform is deliberately not a model — it sits above whichever foundation model the organization chooses, whether that is Anthropic's Claude for Healthcare, OpenAI's ChatGPT Health, or an internal fine-tune. That architectural choice mirrors the rise of AI observability and FinOps platforms in the broader enterprise market (Fiddler, Arize, Vellum, Braintrust), where 2026's most defensible category is no longer model serving — it is the control plane that sits across many models. For CIOs running multi-model environments, the implication is that the new procurement question isn't which model wins, but which measurement and governance layer is going to give finance a defensible attribution story across all of them.
The integration surface is the second technical signal. Optura's unified knowledge layer requires connecting to claims systems, EHR data, scheduling platforms, and back-office workflow tools — which means the same data-foundation challenge that has stalled 85% of agent deployments (according to Fivetran's 2026 research) applies here too. The difference is that Optura's value proposition explicitly absorbs the data-mapping work as part of onboarding rather than treating it as a customer prerequisite. For CIOs evaluating any AI ROI platform, the integration depth question is now the gating technical due-diligence item: a measurement layer that can't see the underlying workflow data is just a dashboard with vibes on it.
Business Implications for CFOs and Operating Leaders
For finance leaders, the more important number from the funding announcement is not the $17.5 million in equity; it is the $120 million in tracked value across $2 billion in initiatives, which implies that even with a measurement layer in place, the realized value rate on disciplined healthcare AI portfolios currently runs around 6% of in-flight initiative dollars. That sounds low until you compare it to McKinsey's broader enterprise data (cited in the AI Assembly Lines framework) showing only 39% of enterprises can attribute any EBIT impact to AI at all, and most report less than 5% of earnings attributable to AI investment. Disciplined attribution exposes that gap rather than hiding it.
There is also a hard cash-flow argument for moving measurement upstream. Gartner research cited by enterprise practitioners shows that 85% of organizations misestimate AI project costs by more than 10%, with actual deployment costs typically running 2 to 3 times the initial licensing estimate. A platform that simulates return scenarios before capital is committed (Optura's pre-deployment modeling capability) is functionally a capital-allocation control for the CFO's office, not just a planning tool for the CIO's. The strategic implication for boards: the next governance question on AI is not whether the organization has a policy, but whether it has an instrumentation layer that can produce an audit-grade attribution story for every initiative on the roadmap.
Market Context: The ROI Measurement Race
Optura is entering a category that effectively did not exist 18 months ago. Until 2024, enterprise AI tooling investment concentrated on model serving, fine-tuning, prompt management, and vector search. Through 2026, capital has rotated decisively toward observability, evaluation, governance, and ROI attribution: the layers that make AI investments defensible to a board. The broader Gartner 2026 AI spending forecast frames this as the "renewal era of ROI": funding now expands fastest where outcomes can be framed as a measurable delta inside the same cycle as the spend. The data backs the framing. Gartner's survey of 782 infrastructure and operations leaders (reported in April 2026) found that only 28% of AI use cases fully succeed and meet ROI expectations, while 20% fail outright, a near-1:1 success-to-failure ratio that explains why a platform built around accountability is finding buyers despite a crowded broader AI tooling market.
Healthcare is the right wedge for that strategy. The $18 billion 2026 spending number masks an uglier distribution: administrative automation captured roughly 42% of healthcare AI deals in 2024, clinical AI another 32%, with therapeutics and research at 25% (according to industry tracking). The administrative slice — prior authorization, claims review, scheduling, revenue-cycle automation — is exactly where Optura's customers are concentrated, and exactly where the dollar value of a tracked, redeployable workflow is easiest to attribute. Health plans like Independence Blue Cross and pharmacy benefit managers like Prime Therapeutics have line items in their P&Ls (utilization management, network operations, member services) that can be matched to AI-driven productivity changes with a defensible methodology.
Optura is also not alone in the analytical thesis. Premier's healthcare AI ROI framework argues for a four-dimensional model that puts clinical outcomes first (safety, mortality, readmissions), operational impact second (throughput, length of stay), ethical and safety governance third (bias, fairness audits), and financial outcomes as a downstream consequence. The clinical math is concrete: AI-driven sepsis detection cuts ICU costs by $1,500-$3,000 per case through shorter stays, worth $1 to $2 million annually for a typical 100-bed hospital; heart failure readmission avoidance generates $600,000 to $1.2 million annually; AI-accelerated stroke response saves $70,000-$120,000 per patient. These are the building blocks of the ROAI math, and the reason a measurement platform that can attribute them at the initiative level is, in 2026, more strategically valuable than the model running underneath.
The competitive surface for Optura is therefore not other healthcare AI vendors. It is the broader AI ROI and observability category — Fiddler, Arize, Vellum, the AI control-plane category — applied with healthcare domain depth that horizontal platforms lack. Salesforce Ventures' lead position in this round is the most important strategic signal in the announcement. Salesforce has spent 2025 and 2026 wiring its Agentforce platform into healthcare workflows, and a portfolio bet on the measurement layer above its own agents tells the market that even the model vendors expect ROI attribution to become the buying decision.
Framework #1: The AI Initiative ROAI Calculator
Use this calculator to estimate first-year ROI on a single AI initiative before committing capital. The framework mirrors how Optura scores initiatives on its platform — and it gives a defensible attribution story for any healthcare or enterprise leader pitching board approval.
Inputs you need:
- Baseline cost per transaction (fully loaded, including labor)
- Baseline transaction volume (annual)
- Expected automation rate (% of transactions handled end-to-end by AI)
- Expected efficiency lift on non-automated transactions (% time saved)
- Total Cost of Ownership: licensing, integration, change management, monitoring, retraining
ROAI formula:
Realized Value = (Baseline Cost × Volume × Automation Rate) +
(Baseline Cost × Volume × (1 − Automation Rate) × Efficiency Lift)
ROAI = (Realized Value − Total Cost of Ownership) / Total Cost of Ownership × 100
Three scenarios for a health plan deploying AI on prior authorization:
| Scenario | Volume (annual) | Baseline cost/auth | Automation rate | Efficiency lift | First-year TCO | Realized value | ROAI |
|---|---|---|---|---|---|---|---|
| Small plan (regional) | 200,000 | $32 | 25% | 20% | $850,000 | $2.56M | 201% |
| Mid-size plan | 1.5M | $32 | 35% | 25% | $3.2M | $24.6M | 669% |
| National plan | 8M | $32 | 45% | 30% | $12M | $157.4M | 1,212% |
The math reveals two non-obvious truths for CFOs. First, ROAI scales non-linearly with volume because Total Cost of Ownership is largely fixed (platform license, integration, governance overhead) while realized value scales with transaction count. Second, automation rate is the stronger lever whenever it exceeds the efficiency lift, because a fully automated transaction recovers the entire baseline cost while an assisted one only shaves a fraction of it: at mid-size scale, every 10-point shift in automation rate moves first-year ROAI by roughly 110 percentage points. That means the deployment design question that drives ROI the hardest is not "how good is the model" but "what fraction of transactions can we hand off end-to-end versus assist."
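The formula and the scenario table can be reproduced in a few lines. The sketch below is an illustrative implementation of the calculator, not Optura's code; the function and variable names are our own:

```python
def roai(baseline_cost, volume, automation_rate, efficiency_lift, tco):
    """First-year ROAI for one AI initiative.

    Realized value = full savings on end-to-end automated transactions
    plus partial savings on the assisted (non-automated) remainder.
    Returns (realized_value_dollars, roai_percent).
    """
    realized = (baseline_cost * volume * automation_rate
                + baseline_cost * volume * (1 - automation_rate) * efficiency_lift)
    return realized, (realized - tco) / tco * 100

# Mid-size plan scenario: 1.5M prior auths/year at a $32 fully loaded cost
realized, pct = roai(baseline_cost=32, volume=1_500_000,
                     automation_rate=0.35, efficiency_lift=0.25,
                     tco=3_200_000)
print(f"Realized value: ${realized / 1e6:.1f}M, ROAI: {pct:.0f}%")
# → Realized value: $24.6M, ROAI: 669%
```

Running the same function across automation rates makes the lever effect visible: holding TCO fixed, each incremental point of automation adds the full baseline cost per transaction, while each point of efficiency lift adds only a fraction of it.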
Three guardrails to apply:
- Discount for ramp. First-year realized value is typically 40-60% of steady-state because of phased rollout and adoption curves. Run a haircut scenario showing year-one and year-two separately.
- Add a TCO multiplier. Apply the Gartner 2-3x rule on stated vendor pricing. The line items that get missed: data preparation, model monitoring, compliance, retraining, and change management.
- Stress-test automation rate against quality. If automation rate requires a 99%+ model confidence threshold to be safe (clinical use cases, regulated workflows), the effective automation rate is the model accuracy times the safety threshold compliance rate — often half the headline number.
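The three guardrails can be layered directly onto the base calculation. The sketch below is illustrative, with assumed midpoint values (50% first-year ramp, a 2.5x TCO multiplier, and hypothetical accuracy and safety-compliance figures), to show how quickly a headline ROAI compresses:

```python
def stressed_roai(baseline_cost, volume, automation_rate, efficiency_lift,
                  vendor_tco, ramp=0.5, tco_multiplier=2.5,
                  model_accuracy=1.0, safety_compliance=1.0):
    """ROAI with the three guardrails applied. Defaults are assumptions."""
    # Guardrail 3: quality-adjusted automation rate
    eff_auto = automation_rate * model_accuracy * safety_compliance
    realized = baseline_cost * volume * (
        eff_auto + (1 - eff_auto) * efficiency_lift)
    # Guardrail 1: first-year ramp haircut on realized value
    realized *= ramp
    # Guardrail 2: Gartner-style 2-3x multiplier on stated vendor pricing
    tco = vendor_tco * tco_multiplier
    return (realized - tco) / tco * 100

# Mid-size plan: headline case vs stressed case (92% accuracy, 85%
# safety-threshold compliance are hypothetical illustration values)
headline = stressed_roai(32, 1_500_000, 0.35, 0.25, 3_200_000,
                         ramp=1.0, tco_multiplier=1.0)
stressed = stressed_roai(32, 1_500_000, 0.35, 0.25, 3_200_000,
                         model_accuracy=0.92, safety_compliance=0.85)
print(f"headline {headline:.0f}%, stressed {stressed:.0f}%")
# → headline 669%, stressed 37%
```

A 669% headline compressing to roughly 37% under conservative assumptions is the point of the exercise: the initiative may still clear the hurdle rate, but the board should approve the stressed number, not the headline one.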
Framework #2: The 25-Point AI Initiative Readiness Assessment
Before greenlighting any AI initiative, score it on five dimensions, 1-5 each, for a maximum of 25 points. Initiatives below 15 should be sent back to discovery. Initiatives at 20+ qualify for fast-track deployment with measurement instrumentation in place from day one. This is the operational discipline that separates the 5% of pilots that succeed from the 95% that stall.
Dimension 1: Data Foundation (1-5)
- 5 — Clean, structured data exists in a single system with API access and is updated daily
- 4 — Data exists but requires moderate ETL across 2-3 systems
- 3 — Data partially digitized, gaps need to be filled before deployment
- 2 — Significant data quality work required (60%+ of effort)
- 1 — Critical inputs are unstructured, undocumented, or live in PDFs
Dimension 2: Workflow Definition (1-5)
- 5 — Standard Operating Procedure documented, tested, and followed >90% of the time
- 4 — SOP exists, followed inconsistently across teams
- 3 — Workflow varies by region or business unit but is understood
- 2 — Tribal knowledge with no documented standard
- 1 — Process is ad hoc; no consistent workflow exists
Dimension 3: Measurable Outcome (1-5)
- 5 — Baseline KPI is tracked monthly, owner accountable, target defined
- 4 — Baseline exists but is tracked manually
- 3 — Baseline can be reconstructed from existing systems
- 2 — Baseline is anecdotal; would need a six-week measurement project
- 1 — No baseline; the outcome metric is itself contested
Dimension 4: Executive Sponsorship (1-5)
- 5 — Named executive owner with quarterly board reporting commitment
- 4 — Senior leader sponsorship with budget authority
- 3 — Mid-level sponsorship; budget approval required at the next stage
- 2 — Project championed by a middle manager without explicit air cover
- 1 — Bottom-up effort with no identified sponsor
Dimension 5: Adoption Pathway (1-5)
- 5 — End-users embedded in design; change management plan funded
- 4 — Pilot users identified and engaged
- 3 — Users will be trained at deployment; no co-design
- 2 — Users will be informed at deployment
- 1 — Users will discover the change after the fact
Scoring guide:
- 20-25: Fast-track. Deploy with measurement layer instrumented from day one.
- 15-19: Standard track. Address weakest dimension before deployment.
- 10-14: Discovery track. Run a 6-week pre-pilot to lift the lowest-scoring dimensions to at least 3.
- <10: Do not deploy. The initiative is fundamentally not AI-ready.
The MIT NANDA research that found a 95% pilot failure rate traced the root causes to organizational learning gaps, not model quality. Three out of four enterprises identified "getting people to change how they work" as the hardest obstacle — which maps directly to dimensions 4 and 5 above. Build the scoring matrix into the AI investment intake process and the top-of-funnel quality of approved initiatives shifts immediately.
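Wiring the scoring matrix into an intake process can be as simple as the routine below. The dimension names and track thresholds follow the framework above; the function itself is an illustrative sketch:

```python
DIMENSIONS = ("data_foundation", "workflow_definition", "measurable_outcome",
              "executive_sponsorship", "adoption_pathway")

def readiness_track(scores: dict) -> str:
    """Map five 1-5 dimension scores to a deployment track."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"score all five dimensions: {DIMENSIONS}")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each dimension is scored 1-5")
    total = sum(scores.values())
    if total >= 20:
        return "fast-track"       # instrument measurement from day one
    if total >= 15:
        return "standard track"   # fix the weakest dimension first
    if total >= 10:
        return "discovery track"  # 6-week pre-pilot to lift low scores
    return "do not deploy"        # fundamentally not AI-ready

print(readiness_track({
    "data_foundation": 4, "workflow_definition": 3,
    "measurable_outcome": 5, "executive_sponsorship": 4,
    "adoption_pathway": 3,
}))  # total 19 → prints "standard track"
```

The useful property of forcing a per-dimension score, rather than a single gut-feel rating, is that the output names the weakest dimension to remediate, which is exactly what the standard and discovery tracks require.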
Case Study: Independence Blue Cross and the Optura Pattern
Independence Blue Cross, one of the first health plans to sign on to the 2025 White House Healthcare AI commitments, became Optura's anchor reference customer for a reason that is visible in Michael Vennera's quote: the executive team built AI governance around accountability, not enthusiasm. The Optura deployment connected claims systems, member service workflows, and utilization management as a unified knowledge layer. From that foundation, the platform surfaced use cases that internal innovation channels had not previously identified, contributing to the 250+ new use cases reported across Optura's customer base. That pattern matches MIT's finding that specialized vendor deployments succeed 67% of the time, versus 33% for internal builds.
The disclosed outcome — $120 million in tracked value, 700% ROAI on in-flight initiatives — is not a single project result. It is a portfolio measurement aggregated across many initiatives, the same way private equity firms measure aggregate carry across a fund rather than individual deals. That framing is the most important strategic borrow for any CIO trying to defend AI budgets at a board level. A handful of high-ROI initiatives can carry the math for a much larger initiative book, but only if the measurement infrastructure can isolate signal from noise across the portfolio.
Three lessons from the Optura customer pattern that generalize:
- Centralize the measurement, decentralize the initiative ownership. Independence Blue Cross runs many AI initiatives across many functions, but ROI attribution flows through a single methodology. That makes board reporting defensible and prevents each business unit from inventing its own metric.
- Make use-case discovery a continuous function, not an annual planning exercise. The 250+ new use cases identified through the platform represent value that internal innovation processes had not surfaced. A measurement layer that scores use cases by cost-readiness-strategic-alignment turns ideation into a quantifiable backlog.
- Treat the pre-deployment simulation as a stage gate, not a forecast. Optura's simulation modeling lets the CFO say no to specific initiatives based on projected return — before any production capital is committed. That single discipline kills the long tail of low-ROI pilots that consumes 60-70% of typical AI budgets.
What to Do About It
For CIOs
Inventory every AI initiative currently funded (pilot, production, or in planning) and apply the 25-point readiness assessment retroactively. Initiatives scoring below 15 should be paused, reset, or formally killed before the next budget cycle. Stand up a measurement layer (Optura if you are in healthcare, the broader AI observability category if not) as a procurement line item alongside model spend, not after the fact. The defensibility of the AI book at board level in Q3 will be determined by whether the attribution methodology was in place before deployment, not retrofitted after.
For CFOs
Apply the Gartner 2-3x TCO multiplier to every AI vendor pitch that crosses the desk. Push back on any business case that does not include a six-month measurement plan with baselines and named owners. Use the AI Initiative ROAI Calculator framework in this article to build a portfolio view across all initiatives, and report aggregate ROAI quarterly to the audit committee. The 6% realized-to-in-flight value rate visible in the Optura customer book is the realistic high end for a disciplined portfolio; anything below it without an attribution methodology is simply investing without measurement.
For Business and Operating Leaders
Push every AI initiative through the dimension-by-dimension readiness scoring before requesting capital. The two dimensions most implicated in the 95% failure rate, executive sponsorship and adoption pathway, are the two that operating leaders can most directly influence. Make every proposed AI initiative come with a named executive sponsor, a documented change management plan, and an end-user pilot cohort. Initiatives that cannot field those three artifacts are not ready, regardless of how good the model is.
