In a fourteen-day window between April 15 and April 29, the two largest enterprise SaaS vendors on the planet placed opposite bets on the future of agentic AI. Salesforce ripped the browser off its platform and exposed every capability as an API, MCP tool, or CLI command — explicitly designed for AI agents to operate without a human ever logging in. SAP did the inverse: published a new API policy that prohibits "interaction or integration with (semi-)autonomous or generative AI systems that plan, select, or execute sequences of API calls" except through SAP-endorsed pathways like Joule, Business Data Cloud, and SAP's Agent Gateway.
These are not nuanced differences in roadmap. They are opposite philosophies about who controls the agentic layer in your stack. And by the time most CIOs notice, the architectural decisions made in the next two quarters will be locked in for five years.
I have spent the last week reading both policy documents, the analyst reactions, the user-group statements, and — quietly — comparing them to what enterprise architects in my own circle are actually doing. The divergence matters more than either vendor wants to admit, and the procurement question it forces is one most enterprise AI strategies have not yet articulated.
This is the case for treating the SAP/Salesforce split as a category-defining moment, the test list every CIO needs to run on both, and the strategic question that determines whether your AI architecture survives the next 18 months.
What Salesforce Actually Did
On April 15 at TrailblazerDX in San Francisco, Salesforce announced Headless 360. The marketing language ("no browser required") undersells the architectural shift. The substance: Salesforce exposed its entire platform — Sales Cloud, Service Cloud, Marketing Cloud, Commerce, Industry Clouds, Data Cloud, Flow, Apex, the works — as three parallel access patterns:
- APIs for programmatic access from any system
- MCP tools for AI coding agents to call directly (Claude Code, Cursor, Codex, Windsurf are all named integration points)
- CLI commands through the `sf agent` toolset
Headless 360 ships with 60+ new MCP tools and 30 preconfigured coding skills. The @salesforce/mcp server lets a Claude Code or Cursor session connect directly to a Salesforce org, read schema, generate Apex, deploy components, and run tests — all without human intervention. Combine that with Agentforce Vibes 2.0 (multi-model support including Claude Sonnet and GPT-5, with business-context understanding) and what Salesforce has actually shipped is a CRM platform whose primary user is an AI agent and whose secondary user is a human.
The governance layer matters. Headless 360 ships with a Testing Center for logic-gap detection, Custom Scoring Evals for decision-quality assessment, Agent Script for behavior control, observability with session tracing, A/B testing for agent versions, and an Agent Fabric multi-platform governance control plane. Translation: Salesforce is conceding that if every agent in your stack can talk to every API in your CRM, you need a governance layer that watches them. They built it.
The strategic message is unambiguous. Salesforce wants to be the substrate that every AI agent — theirs, OpenAI's, Anthropic's, Microsoft's — operates against. The bet is that an open API surface attracts more agentic activity, more agentic activity drives more data flowing through Salesforce, and more data flowing through Salesforce makes Data Cloud the system of record nobody can replace.
What SAP Actually Did
Two weeks later, on April 29, SAP published API Policy v.4/2026. Section 2.2.2 prohibits API use for "(a) interaction or integration with (semi-)autonomous or generative AI systems that plan, select, or execute sequences of API calls, and (b) scraping, harvesting, or systematic and/or large-scale data extraction." Use is permitted only through "SAP-endorsed architectures, data services, or service-specific pathways expressly identified" by SAP.
Read carefully, this is not a soft warning. It is a contractual prohibition. The third-party AI agent landscape — Microsoft Copilot, Salesforce Einstein, ServiceNow Now Assist, any custom-built agent calling SAP — falls outside the permitted scope unless it routes through SAP's own AI products. The clause SAP wants you to read is the exception path: Joule, Business Data Cloud, the new SAP Agent Gateway. The clause your procurement team should read is the prohibition.
There is a hard deadline. SAP will begin blocking the ODP RFC interface (the Operational Data Provisioning Remote Function Call channel that most third-party data-extraction tools rely on) starting July 2026, with case-by-case exceptions through year-end. After January 2027, the path is closed.
The German-speaking SAP user group DSAG — which represents customers across Germany, Austria, and Switzerland and includes a meaningful portion of European manufacturing — formally objected. DSAG chairman Jens Hungershausen put it bluntly: "It's not clear which kind of APIs are allowed for use and which ones are not." He warned that the resulting uncertainty "could stifle adoption of new technologies, particularly AI integrations. If you're uncertain, you probably won't do anything about it, and that's a risk that innovation is not taking place." DSAG's specific complaints: the approved API Hub list is poorly maintained and outdated, SAP can revoke API whitelisting without published criteria, and the language around "SAP-endorsed architectures" lacks definition.
CEO Christian Klein responded that customers should not worry, that SAP remains an open platform, and that the policy protects domain expertise and platform performance. The community — consultants like Marian Zeis, the European trade press, the Fivetran and Kai Waehner analyses I read this week — is reading it as vendor lock-in dressed in security language.
The Klein defense is not without merit. ERP systems carry transactional integrity guarantees that a poorly designed AI agent can corrupt at scale. An autonomous agent that fires uncontrolled API calls against SAP financials can produce a reconciliation incident measured in millions of euros and weeks of audit cleanup. That is a real risk. But the policy as written does not narrowly address that risk; it broadly prohibits the access pattern. Those are different things.
Why Both Bets Can Be Defended
Neither move is irrational. Both vendors are reading the same agentic AI tea leaves and reaching opposite conclusions about how they win.
Salesforce's bet is that in an agentic world, the platform with the most agent traffic captures the most data, and the platform with the most data wins the next decade of CRM. The economic logic is the standard internet-era openness story applied to enterprise software. If Headless 360 becomes the default substrate against which OpenAI, Anthropic, Google, and Microsoft agents operate when they touch CRM data, Salesforce wins regardless of which model layer dominates.
The risk Salesforce is accepting: agents that bypass the user interface bypass user-based licensing. The traditional Salesforce business model — per-seat licensing — does not translate cleanly to a world where agents do most of the work. Headless 360 is a license-model bet as much as it is an architectural bet. Salesforce will need to monetize agent activity through API consumption, Data Cloud usage, or Agent Fabric governance fees. The economics of that model are not yet clear.
SAP's bet is that ERP is different from CRM. The systems of record SAP runs are the financial ground truth of multinational corporations. Transactional integrity, audit trail, regulatory compliance, and segregation of duties are not features — they are the product. An autonomous agent ecosystem that can invoke arbitrary API sequences against these systems is, from SAP's institutional perspective, a category error. Routing all agentic activity through SAP-controlled pathways (Joule, BDC, Agent Gateway) preserves the integrity layer SAP has spent four decades building.
The risk SAP is accepting: customers may simply leave. The SAP installed base has lived with vendor lock-in for a generation; it has also developed sophisticated workarounds. ELT pipelines into Snowflake or Databricks, change-data-capture into customer-controlled data lakes, and migrations onto open table formats (Iceberg, Delta) all become more attractive when SAP raises the cost of native API access. Kai Waehner's analysis — published May 2 — explicitly recommends this exit path: extract SAP data into a customer-controlled downstream layer where the AI strategy is no longer constrained by SAP's permitted-pathway list.
Both companies are betting on what they think the next architecture looks like. They cannot both be right.
The CIO Test List
If you run an enterprise stack that includes both Salesforce and SAP — and most Fortune 1000 companies do — you have to evaluate both bets independently. Here is the test list I would run against each vendor before the end of Q3 2026:
For Salesforce Headless 360:
- Audit MCP tool exposure. Inventory which of the 60+ MCP tools you actually want enabled in production. Default-on is not the same as approved-on. Build an allowlist tied to your data classification policy, then enforce it through the Agent Fabric governance plane.
- Validate Agent Fabric observability against your SOC. Confirm that session traces, agent decisions, and tool-call sequences feed your SIEM in a usable format. If your SOC analysts cannot reconstruct what an agent did from the telemetry, the governance story is marketing.
- Test the consumption pricing model. Headless 360 means agents will execute orders of magnitude more API calls than human users do. Get a cost projection from Salesforce on Data Cloud usage, API consumption, and Agent Fabric fees under realistic agentic load. Build a kill-switch that triggers if any single agent exceeds a budget threshold.
- Stress the multi-model governance. Agentforce Vibes 2.0 supports Claude Sonnet and GPT-5 today. Confirm that your model-routing policy can enforce data-residency and compliance rules across model providers, and that switching models does not require reauthoring your agents.
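The first and third items on that list can share a single enforcement point: every tool call passes one gate that checks both the allowlist and the spend budget. Here is a minimal sketch in Python; the tool names, budget figures, and class are invented for illustration and are not a Salesforce API.

```python
# Hypothetical governance gate for agent tool calls. Illustrative only:
# tool names and dollar figures are placeholders, not Salesforce APIs.
from dataclasses import dataclass

# Allowlist derived from your data-classification review, not from defaults.
APPROVED_TOOLS = {"query_records", "describe_schema"}
DAILY_BUDGET_USD = 50.0


@dataclass
class AgentSession:
    agent_id: str
    spend_usd: float = 0.0
    halted: bool = False

    def authorize(self, tool: str, est_cost_usd: float) -> bool:
        """Permit the call only if the tool is allowlisted and within budget."""
        if self.halted or tool not in APPROVED_TOOLS:
            return False
        if self.spend_usd + est_cost_usd > DAILY_BUDGET_USD:
            # Kill-switch: halt the whole session, not just this one call.
            self.halted = True
            return False
        self.spend_usd += est_cost_usd
        return True
```

The design point is that the kill-switch halts the session rather than silently dropping one call: a runaway agent that keeps retrying should stop costing you money, not degrade into a retry loop.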
For SAP API Policy v.4/2026:
- Get the permitted-pathway list in writing. Do not rely on "Joule, BDC, Agent Gateway" as a verbal answer. Get the documented list of approved channels, the criteria for additions, and the SLA for whitelisting customer-specific integrations. If SAP will not commit to those criteria, your AI strategy is at SAP's quarterly discretion.
- Map your existing third-party AI integrations. Microsoft Copilot agents that read SAP, Salesforce Einstein agents that look up SAP records, custom Python agents that pull from S/4HANA — every one of them is now within the policy's scope. Inventory them, classify them by criticality, and identify which need to migrate to SAP-endorsed pathways and which need to migrate off SAP entirely.
- Plan for the ODP RFC sunset. Any data extraction tool that uses ODP RFC needs a migration plan before July 2026. The Fivetran-style ELT pipelines, Snowflake connectors, and BI extracts that depend on this interface will break. Confirm with each vendor what the replacement architecture is and price the migration.
- Price the exit option seriously. I do not say this casually. The Kai Waehner analysis is correct that Open Data Infrastructure principles — decoupled storage and compute, open table formats, vendor-independent data portability — give you optionality that SAP-controlled pathways do not. If your AI roadmap is strategic, you may need a downstream data architecture that is not gated by SAP's policy. Build the cost model.
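The integration-mapping step lends itself to a simple triage pass: bucket each integration as compliant, due for forced migration when ODP RFC closes, or needing review against the prohibition clause. A sketch follows; the endorsed pathway names come from the policy as described above, while the inventory entries and field names are hypothetical.

```python
# Illustrative triage of third-party AI integrations against the policy.
# Pathway names reflect the article; the inventory itself is invented.
ENDORSED_PATHWAYS = {"joule", "business_data_cloud", "agent_gateway"}
SUNSET_INTERFACES = {"odp_rfc"}  # SAP begins blocking ODP RFC in July 2026

inventory = [
    {"name": "Copilot SAP plugin",     "pathway": "odata_direct", "interface": "odata"},
    {"name": "Fivetran S/4HANA sync",  "pathway": "odp_extract",  "interface": "odp_rfc"},
    {"name": "Joule finance agent",    "pathway": "joule",        "interface": "joule"},
]


def triage(item: dict) -> str:
    """Classify one integration: compliant, forced migration, or needs review."""
    if item["pathway"] in ENDORSED_PATHWAYS:
        return "compliant"
    if item["interface"] in SUNSET_INTERFACES:
        return "migrate_before_2026_07"
    return "review_against_section_2_2_2"


report = {item["name"]: triage(item) for item in inventory}
```

Even a toy version like this forces the useful conversation: every row in the "review" bucket is an integration whose legal status depends on how SAP defines "SAP-endorsed architectures," which is exactly the ambiguity DSAG is objecting to.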
That is a one-quarter project, not a one-meeting decision. If you do it correctly, you will know which vendor's bet aligns with your architecture and which one creates an unacceptable lock-in cost.
The CFO Question
Here is the question the CFO needs to put to the CIO before the FY27 budget is locked:
Under realistic agentic AI workloads in 2027, does our SaaS spend scale with seat count, which we can forecast, or with API call volume, which we cannot — and which vendors will charge us which way?
The answer is going to be vendor-specific. Salesforce, by exposing every capability through Headless 360, is migrating toward consumption-based pricing whether they say so explicitly or not. The economics of agent-driven traffic do not work under per-seat licensing, and the Agent Fabric governance layer is the natural place to attach metering. Plan for Salesforce TCO to look more like AWS than like a traditional SaaS license — variable, traffic-driven, and harder to forecast.
SAP, by closing the API perimeter and routing everything through Joule and Business Data Cloud, is moving toward a different cost structure: bundled AI capabilities priced as additional product modules on top of your existing ERP license. That looks like the SAP business model you are familiar with, but the bundle scope is expanding aggressively. Joule pricing, BDC pricing, and Agent Gateway pricing will all stack on top of the base ERP cost. Get the three-year quote in writing.
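To make the seat-versus-consumption contrast concrete, here is a back-of-envelope model. Every figure is an invented placeholder to be replaced with your own quotes; the structural point is that seat cost is flat as agent count grows while consumption cost is not.

```python
# Back-of-envelope SaaS cost comparison under agentic load.
# All prices and volumes are invented placeholders, not vendor quotes.

def seat_cost(seats: int, price_per_month: float = 150.0) -> float:
    """Annual per-seat licensing: flat in agent activity."""
    return seats * price_per_month * 12


def consumption_cost(agents: int,
                     calls_per_agent_per_day: int = 50_000,
                     price_per_1k_calls: float = 1.00) -> float:
    """Annual consumption pricing: scales with agent count and call volume."""
    calls_per_year = agents * calls_per_agent_per_day * 365
    return calls_per_year / 1000 * price_per_1k_calls


# 500 human seats stay at the same annual cost no matter how many agents
# you deploy; the consumption line grows linearly with agents.
for agents in (10, 50, 100):
    print(f"{agents:>3} agents | seats: ${seat_cost(500):>12,.0f} | "
          f"consumption: ${consumption_cost(agents):>12,.0f}")
```

The forecasting problem is in the second function's parameters: seat count is under your control, but calls per agent per day is under the agent's, which is why the CFO question above has no stable answer until you have metered real agentic traffic.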
The Ramp AI Index data I cited yesterday — 80 percent of VC-backed companies pay for AI tooling vs. 45 percent of non-VC-backed firms — is downstream of this architecture question. Companies with capital to spend on consumption-based AI infrastructure are pulling ahead. Companies that have built their FY27 budget around traditional per-seat SaaS economics are about to discover that the line items they planned for do not match the bills they receive.
The 18-Month Forecast
By the end of 2027, one of three things will have happened:
Scenario 1 — Salesforce's bet wins. Headless 360 becomes the default agentic substrate for CRM and customer-data workflows. Other SaaS vendors (Workday, ServiceNow, HubSpot) follow with their own headless platforms. SAP's API restrictions force a quiet migration of agentic workloads to data-lake architectures sitting downstream of SAP. SAP retains the system-of-record position but loses the agentic activity layer. The valuation premium shifts toward platforms that maximize agent traffic.
Scenario 2 — SAP's bet wins. The first major agentic-AI-driven incident in a Salesforce-equivalent environment — an autonomous agent that corrupts a quarterly close, drops a customer hierarchy, or floods Data Cloud with garbage in a way that propagates to downstream systems — convinces enterprise boards that ERP-style perimeter controls are required at the agentic layer. Salesforce reverses course and tightens Headless 360 access. SAP's caution looks prescient. Joule and BDC become the model the rest of enterprise SaaS adopts.
Scenario 3 — A two-track architecture stabilizes. Customer-facing systems (CRM, marketing, support) standardize on open agentic access. Systems of record (ERP, HCM, financials) standardize on perimeter-controlled agentic access. Enterprises maintain two distinct agent governance regimes — one for the front office, one for the back office — and an integration layer that manages the boundary. This is the messiest outcome and, in my read, the most likely.
I lean toward scenario 3. The customer-facing side of enterprise SaaS has structurally different risk economics than the back-office side. A bad agent decision in marketing costs you a campaign. A bad agent decision in financial close costs you a restatement. The governance models will continue to diverge because the consequence models are different.
But scenario 3 still requires you to make architectural choices now. Which vendors do you trust on which side of the boundary? Which integrations do you build to the new SAP-endorsed pathways and which to the new Salesforce MCP tools? Which governance layer — Agent Fabric, SAP's Agent Gateway, or a third-party platform like Mistral Workflows that I covered yesterday — owns the cross-boundary policy?
These are not questions your AI vendor will answer for you. They are the architecture questions that define whose agentic strategy you adopt.
The fourteen days between April 15 and April 29 closed the window in which you could pretend this was not a decision.
Rajesh Beri is Head of AI Engineering at Zscaler. Opinions are his own and do not represent Zscaler.
Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.
