Palo Alto + Portkey: AI Gateway Becomes Control Plane

Palo Alto Networks announced its acquisition of Portkey on April 30, folding a 3,000+ LLM AI gateway into Prisma AIRS to govern enterprise AI agents.

By Rajesh Beri·May 1, 2026·9 min read

THE DAILY BRIEF

AI Security · AI Gateway · Palo Alto Networks · Portkey · Prisma AIRS · AI Agents · Enterprise AI


Palo Alto Networks announced on April 30, 2026 that it will acquire Portkey, the Bengaluru-based AI Gateway startup that processes trillions of tokens per month across 3,000+ LLMs and MCP tools, and fold it into Prisma AIRS as the central control plane for enterprise AI agents. The deal is expected to close in Palo Alto's Q4 FY2026 (the May-July window), terms were not disclosed, and Portkey CEO Rohit Agarwal will continue leading the platform inside Palo Alto. Portkey raised $3M in its 2023 seed (Lightspeed) and a $15M Series A (Elevation Capital, Lightspeed) less than three months ago — putting the entire round-to-exit timeline at roughly 90 days. That is the tell. The AI Gateway category is being absorbed into the security stack before most CISOs have finished writing the procurement RFI for it.

The strategic frame is not "Palo Alto bought a developer tool." Lee Klarich, Palo Alto's Chief Product & Technology Officer, was explicit in the press release: Portkey will be the AI Gateway for Prisma AIRS, "inspecting AI traffic and enforcing security and governance policies." Read alongside Google's April 23 indirect prompt injection data — 32% growth in malicious payloads across 2-3 billion crawled pages — the move makes obvious sense. Enterprises are wiring autonomous agents into payments, code repos, ticketing, and email at the same moment the open web is filling with traps designed to weaponize them. Palo Alto is positioning the gateway as the only layer with both the visibility and the privilege to stop bad calls before they hit downstream tools. For enterprise architects who have been treating AI gateways as a Day-2 ergonomics decision, this is the day that calculus changed.

What Portkey actually does

Portkey sits between an enterprise's applications and the model providers (OpenAI, Anthropic, Google, Bedrock, Azure OpenAI, open-weights endpoints) plus the growing fleet of MCP servers and agent runtimes. Three lines of code redirect existing OpenAI/Anthropic SDK calls through the gateway, and Portkey then provides:

  • Unified API surface to 3,000+ LLMs and MCP tools — model-agnostic routing without rewriting application code
  • Semantic routing and automated failover for 99.99% uptime on agent-to-agent traffic
  • Caching and quotas to control runaway agent token spend (an increasingly painful CFO conversation)
  • Telemetry, audit logs, and an agent registry for forensic replay
  • Identity-based runtime controls — and post-acquisition, integration with CyberArk for Agent Identity Security so every autonomous action is authenticated against a known identity
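
That "three lines of code" claim is worth making concrete: in practice the redirect is little more than swapping the base URL and adding routing headers, so existing SDK calls keep their shape. Here is a minimal, stdlib-only sketch of the rewrite. The gateway host and header names below are illustrative placeholders, not Portkey's actual API:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical gateway endpoint -- a real deployment would use the
# vendor's documented base URL and header names.
GATEWAY_BASE = "https://gateway.example.com"

def route_via_gateway(provider_url: str, provider: str, virtual_key: str) -> tuple[str, dict]:
    """Rewrite a direct provider call so it traverses the gateway.

    Returns the gateway URL plus the headers the gateway would need to
    pick an upstream provider and authenticate the calling agent.
    """
    path = urlsplit(provider_url).path          # e.g. /v1/chat/completions
    gw = urlsplit(GATEWAY_BASE)
    url = urlunsplit((gw.scheme, gw.netloc, path, "", ""))
    headers = {
        "x-gateway-provider": provider,         # which upstream model provider to route to
        "x-gateway-virtual-key": virtual_key,   # scoped credential, not the raw provider key
    }
    return url, headers

url, headers = route_via_gateway(
    "https://api.openai.com/v1/chat/completions", "openai", "vk-finance-agent")
```

The point of the sketch is the shape of the change, not the specifics: application code keeps calling the same paths, and the gateway gains the vantage point to observe and police every call.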

The volume claim — "trillions of tokens per month" — is significant. It says Portkey already sits on the inference path for production agent workloads at meaningful enterprise scale. Palo Alto is buying that traffic position, not just the software.

Why the gateway is now a security control plane

For most of 2024-2025, "AI gateway" meant a developer convenience: a way to A/B test models, fail over when one provider went down, and avoid hard-coding API keys. The Palo Alto acquisition reframes it as the agent control plane — the only place in the stack where you can enforce four things that the model itself cannot:

  1. Provenance. Every prompt that reaches a model carries metadata about which retrieval source contributed which span. Without this, indirect prompt injection (IPI) defenses are guessing. With it, your detection stack can finally see when an agent acted on instructions from a fetched web page rather than a user.

  2. Tool-call entitlement binding. The gateway knows which agent issued which tool call, with which scope, on whose behalf. That is the difference between an agent legitimately wiring a payment and one tricked into wiring a payment. The model layer cannot enforce this; the gateway can.

  3. Cost containment. Agentic workflows fan out — one user request becomes hundreds of tool calls — and most enterprises cannot tell whether a 10x token spike is product-market fit or a runaway loop. Caching and per-agent quotas at the gateway are the only practical control.

  4. Uniform policy across model providers. Most enterprises run a multi-model stack now (Claude for reasoning, GPT for breadth, Gemini for multimodal, open-weights for cost). Without a gateway, every provider's safety controls are different, and your DLP/PII policy has to be rewritten N times. With a gateway, it is enforced once.
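
Items 2 and 3 can be sketched together as a single admission check at the gateway. The policy tables and scope names here are invented for illustration; a real gateway would pull entitlements and quotas from its agent registry:

```python
from dataclasses import dataclass, field

# Illustrative policy tables -- in practice these come from the
# gateway's agent registry, not hard-coded dicts.
ENTITLEMENTS = {
    "billing-agent": {"payments.read", "payments.write"},
    "support-agent": {"tickets.read", "tickets.write"},
}
TOKEN_QUOTA = {"billing-agent": 50_000, "support-agent": 200_000}

@dataclass
class GatewayPolicy:
    spent: dict = field(default_factory=dict)  # agent -> tokens used this window

    def authorize(self, agent: str, tool_scope: str, est_tokens: int) -> tuple[bool, str]:
        """Admit a tool call only if the agent holds the scope (item 2)
        and still has token budget left (item 3)."""
        if tool_scope not in ENTITLEMENTS.get(agent, set()):
            return False, f"{agent} lacks scope {tool_scope}"
        used = self.spent.get(agent, 0)
        if used + est_tokens > TOKEN_QUOTA[agent]:
            return False, f"{agent} over token quota"
        self.spent[agent] = used + est_tokens
        return True, "ok"

policy = GatewayPolicy()
ok, reason = policy.authorize("billing-agent", "payments.write", 4_000)    # entitled, in budget
denied, reason2 = policy.authorize("support-agent", "payments.write", 100) # wrong scope
over, reason3 = policy.authorize("billing-agent", "payments.read", 60_000) # exceeds 50k quota
```

Note where the check lives: neither decision needs anything from the model itself, which is exactly the argument for enforcing both at the gateway.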

This is why the AI Gateway category — Portkey, Cloudflare AI Gateway, Kong AI Gateway, LiteLLM, Helicone, TrueFoundry — has been quietly capturing more strategic ground than its category name suggests. It is the one enforcement point that all of your AI traffic must traverse.
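
Of the four controls, provenance (item 1) is the least standardized, so a sketch helps. Assuming the gateway assembles prompts from source-tagged spans (the span format and source labels below are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    text: str
    source: str  # e.g. "user", "retrieval:web", "retrieval:wiki"

def assemble_prompt(spans: list[Span]) -> tuple[str, list[tuple[int, int, str]]]:
    """Concatenate spans into one prompt while recording which character
    range came from which source -- the provenance map item 1 describes."""
    prompt, provenance, pos = "", [], 0
    for s in spans:
        provenance.append((pos, pos + len(s.text), s.source))
        prompt += s.text
        pos += len(s.text)
    return prompt, provenance

def untrusted_ranges(provenance: list[tuple[int, int, str]]) -> list[tuple[int, int]]:
    """Ranges an IPI detector should treat as data, never as instructions."""
    return [(a, b) for a, b, src in provenance if src.startswith("retrieval:")]

spans = [Span("Summarize: ", "user"), Span("IGNORE ALL RULES", "retrieval:web")]
prompt, prov = assemble_prompt(spans)
```

With that map, a downstream detector can ask the question the article says it currently cannot: did this instruction come from the user, or from something the agent fetched?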

What this means for engineering and platform leaders

The honest read is that the gateway question is no longer optional. If you have agents in production, you have already made a gateway decision; the only question is whether you made it deliberately. Three concrete actions:

Inventory your current model traffic. Add up calls, tokens, and tool invocations across every agent and copilot in your environment. If you cannot produce that table in under a day, you do not have a gateway. You have a leak.
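
The table in question is a straight roll-up of gateway or proxy logs. A sketch of the aggregation, assuming log records that expose agent, model, token, and tool-call fields (the record shape is hypothetical; real gateway exports will differ):

```python
from collections import defaultdict

# Hypothetical log records -- stand-ins for a gateway's traffic export.
log = [
    {"agent": "support-agent", "model": "claude", "tokens": 1200, "tool_calls": 3},
    {"agent": "support-agent", "model": "gpt",    "tokens": 800,  "tool_calls": 1},
    {"agent": "billing-agent", "model": "claude", "tokens": 5000, "tool_calls": 9},
]

def traffic_table(records: list[dict]) -> dict:
    """Roll raw traffic records up into the per-agent table the text asks for."""
    table = defaultdict(lambda: {"calls": 0, "tokens": 0, "tool_calls": 0})
    for r in records:
        row = table[r["agent"]]
        row["calls"] += 1
        row["tokens"] += r["tokens"]
        row["tool_calls"] += r["tool_calls"]
    return dict(table)

summary = traffic_table(log)
```

If producing the real version of `log` takes more than a day, that — not the aggregation — is the finding.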

Decide if you are buying the gateway separately or as part of a security platform. Pre-Portkey, AI gateway was a standalone build-or-buy. Post-Portkey, large security incumbents (Palo Alto first, almost certainly others within 90 days) will bundle gateway into existing platforms. If you are already deep on Palo Alto for SASE, NGFW, or XSIAM, the bundle math gets attractive. If you are not, expect aggressive cross-sell and price the standalone alternatives accordingly.

Pin down your MCP exposure. Portkey's value proposition leans heavily on its 3,000+ LLM and MCP tool integrations. MCP is the agent-to-tool wire protocol whose adoption has run ahead of its governance. Whatever gateway you choose needs an MCP server registry, signed-tool catalog support, and per-agent allowlists. Without those, you are flying blind on which tools your agents can call.
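
The allowlist check itself is simple once the registry exists. A sketch, with a hypothetical signed-tool catalog keyed by content digest (tool names and digests invented for illustration):

```python
# Hypothetical signed-tool catalog: tool name -> expected content digest.
CATALOG = {
    "jira.create_ticket": "sha256:1f3a",
    "payments.wire": "sha256:9c2e",
}
# Per-agent allowlists: which catalog tools each agent may invoke.
ALLOWLIST = {"support-agent": {"jira.create_ticket"}}

def admit_tool_call(agent: str, tool: str, digest: str) -> bool:
    """Admit an MCP tool call only if the tool is in the signed catalog
    with a matching digest AND on this agent's allowlist."""
    return CATALOG.get(tool) == digest and tool in ALLOWLIST.get(agent, set())

allowed = admit_tool_call("support-agent", "jira.create_ticket", "sha256:1f3a")
tampered = admit_tool_call("support-agent", "jira.create_ticket", "sha256:0000")
out_of_scope = admit_tool_call("support-agent", "payments.wire", "sha256:9c2e")
```

The hard part is not this function; it is keeping the catalog signed and the allowlists current as agents multiply, which is exactly the registry work the gateway vendors are now selling.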

Review your AI Identity story. Portkey's CyberArk integration is a tell — the next 12 months of agent security is going to be a workload-identity problem, not a model-alignment problem. Each agent needs a workload identity, scoped credentials, and a revocation path. If your IAM team has not started this work, the Portkey deal is the budget conversation opener.
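
What "workload identity, scoped credentials, and a revocation path" means mechanically can be sketched with a signed, expiring, revocable agent credential. This is a toy HMAC token for illustration only; a real deployment would use SPIFFE, OIDC, or the security vendor's identity fabric, never hand-rolled crypto:

```python
import base64
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"   # illustrative; never hard-code real keys
REVOKED: set[str] = set()           # revocation path: agent IDs pulled mid-flight

def mint(agent_id: str, scopes: list[str], ttl_s: int, now: float) -> str:
    """Issue a scoped, expiring credential for one agent workload."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": now + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str, need_scope: str, now: float) -> bool:
    """Accept the token only if unexpired, unrevoked, and scoped for this call."""
    body, sig = token.rsplit(".", 1)
    want = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, want):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return (claims["sub"] not in REVOKED
            and now <= claims["exp"]
            and need_scope in claims["scopes"])

token = mint("billing-agent", ["payments.read"], ttl_s=600, now=1_000.0)
```

Three properties matter for the IAM conversation: the credential names the agent (not a shared service account), the scope is checked per call, and revocation works without waiting for expiry.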

What this means for security and compliance leaders

This acquisition is the cleanest signal yet that "AI security" is converging into the network/platform vendor market. The implications stretch beyond Palo Alto:

  • Cisco, Zscaler, CrowdStrike, and Cloudflare all have either explicit AI gateway products or obvious adjacencies. Expect comparable acquisitions or product launches within the quarter. CrowdStrike already moved on AI vulnerability with the Quiltworks coalition; Cisco bought Galileo for AI observability. Palo Alto just took the gateway slot. The squares fill in fast from here.
  • Vendor lock-in risk shifts. A gateway abstracts the model layer, so it should reduce model-vendor lock-in. But it concentrates lock-in at the gateway/security vendor instead. That is fine if your security stack is the strategic anchor — uncomfortable if it is not.
  • Runtime evidence becomes audit evidence. Every audit framework that touches AI (NIST AI RMF, ISO 42001, EU AI Act high-risk categories, SOC 2 Plus, the new AICPA AI attestation criteria) requires evidence that the right model was called with the right inputs by the right user. The gateway is the authoritative source. Build your audit map around it.

Three asks for the next risk-committee meeting:

  1. Map AI traffic flow. Produce a single diagram showing every model provider, every gateway, every retrieval source, and every tool call your agents make. If a path bypasses the gateway, name and date it.
  2. Codify the gateway as a control. Add it to your control library mapped to OWASP LLM01-LLM10, NIST AI RMF Manage-2.x, and ISO 42001 Annex A. Turn the gateway from infrastructure into evidence.
  3. Re-baseline AI vendor contracts. Vendors selling agentic capabilities should disclose (a) which gateway, if any, sits in front of their agent's model calls, (b) whether MCP tool invocations are logged with provenance, and (c) failover behavior when the gateway is unavailable. Most contracts are silent on all three.

The category context

Zoom out. In the last 30 days alone: Salesforce shipped Agentforce Operations, AWS launched Bedrock Managed Agents, Nvidia signed Adobe/Salesforce/SAP onto its Agent Toolkit, Workday detailed its agentic HR/finance rails at its Innovation Summit, Google launched Gemini Enterprise Agent Platform, and now Palo Alto is buying the gateway. Each move is independently rational. Together they describe a clear arc: 2026 is when agentic workloads move from pilots into production, and the control plane gets built around them in real time. The vendors who own the choke points — gateway, identity, observability, runtime — are positioning to be the ones enterprises pay for the privilege of operating agents safely.

Portkey is small in revenue terms. The strategic value Palo Alto is buying is not the ARR. It is the traffic position — being the inference path for production enterprise agent workloads — and the timing. Six months from now, gateway acquisitions will cost 5-10x more, because every other security platform will be racing for the same slot.

The bottom line

For this brief's audience — engineering leaders building production AI systems, and the CISOs governing them — the action item is simple and uncomfortable. You cannot defer the AI gateway decision to next year's planning cycle. Whatever your current architecture, you now have to answer: which gateway, which integration depth, and which security platform owns it. The good news is the answer is no longer hypothetical. Palo Alto, with Portkey, just made it concrete. The bad news is the rest of the field is about to do the same — and your procurement clock is already running.



THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.
