NVIDIA OpenShell: The AI Agent Layer Your Stack Forgot

NVIDIA and SAP shipped OpenShell on May 12 — an open-source runtime security layer for AI agents. Why most enterprise AI stacks are missing this layer.

By Rajesh Beri · May 14, 2026 · 15 min read

THE DAILY BRIEF

AI Security · AI Agents · NVIDIA · SAP · Enterprise AI · CISO


Eighty-eight percent of organizations reported a confirmed or suspected AI agent security incident in the last twelve months. Shadow agent breaches now cost an average of $4.63 million each — $670,000 more than a standard breach. Most CISOs are spending on prompt-injection scanners, agent inventory tools, and MCP gateways. Almost none are securing the layer where agents actually execute code, write to disk, and make network calls.

On May 12, 2026, NVIDIA and SAP closed that gap. At SAP Sapphire 2026 in Orlando, the two companies announced that SAP would embed NVIDIA OpenShell — an open-source secure runtime for autonomous AI agents — into the new SAP Business AI Platform. OpenShell is the first major enterprise-vendor-backed answer to a problem the rest of the market has been quietly ignoring: even a perfectly governed AI agent can be weaponized once it starts executing code on production systems.

For CIOs, CTOs, and CISOs evaluating agent platforms, this announcement is a wake-up call. The model layer is mostly solved. The governance layer has a half-dozen credible vendors. But the runtime layer — where agents touch filesystems, call APIs, and run shell commands — is the new frontier. And it's where the next wave of enterprise AI breaches is already happening.

What Just Shipped: A Two-Layer AI Agent Security Model

SAP CEO Christian Klein opened the Sapphire keynote with a simple line: "'Almost right' just isn't good enough." Klein was talking about finance close processes, but the same logic applies to security. An AI agent that is "almost" sandboxed is a breach waiting to happen.

OpenShell answers a question that traditional application security tools were never designed for. As SAP's Andre Lamego, SVP and Chief Product Officer of BTP Fabric, framed it: "Can this agent action safely execute?" Meanwhile, SAP's Joule Studio runtime — the business-logic layer — answers the complementary question: "Should this action happen at all?" Together, the two layers form the architecture every enterprise agent platform will eventually need.

Here is what OpenShell actually does:

  • Isolated execution environments for every agent invocation, separating agent processes from the host kernel
  • Policy enforcement at the filesystem and network layers — an agent that escapes its prompt can't escape its sandbox
  • Infrastructure-level containment that limits blast radius even when the agent's logic fails or is hijacked
  • Enterprise identity integration so agent actions tie back to a human accountability chain
  • Auditing hooks that satisfy SOX, GDPR, and HIPAA scrutiny
  • Open-source codebase so security teams can inspect, audit, and contribute — no black-box trust required
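OpenShell's actual policy format isn't reproduced here, but the deny-by-default model the bullets describe can be sketched in a few lines. Everything below is illustrative: the field names, paths, and helper are assumptions, not OpenShell's configuration schema.

```python
# Hypothetical deny-by-default sandbox policy for one agent invocation.
# Illustrative only -- not OpenShell's actual configuration format.
AGENT_POLICY = {
    "agent_id": "invoice-reconciler",
    "filesystem": {
        "default": "deny",
        "allow_read": ["/workspace/input"],
        "allow_write": ["/workspace/output"],
    },
    "network": {
        "default": "deny",
        "allow_egress": ["api.internal.example.com:443"],
    },
    # Enterprise identity: every action traces back to a human owner.
    "identity": {"owner": "jane.doe@example.com"},
}

def is_write_allowed(policy: dict, path: str) -> bool:
    """Deny-by-default: a write succeeds only under an allowlisted prefix."""
    return any(path.startswith(p) for p in policy["filesystem"]["allow_write"])
```

The point of policy-as-data is that the runtime, not the agent's prompt, decides what the agent can touch: a write to `/etc/passwd` fails here no matter what the model was tricked into attempting.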

SAP isn't just licensing OpenShell. The company's own engineers are co-designing it with NVIDIA, with joint work focused on runtime hardening, policy modeling, identity integration, and governance controls. The runtime ships embedded in the SAP Business AI Platform, which unifies the SAP Business Technology Platform, SAP Business Data Cloud, and SAP Business AI into a single environment running more than 200 specialized agents and 50+ Joule Assistants across finance, supply chain, procurement, HR, and customer experience.

The same architecture is available to enterprises building custom agents in Joule Studio. And NVIDIA is positioning OpenShell as a reference for the rest of the industry — meaning Anthropic's Claude agents, Mistral's sovereign models, Cohere, n8n workflow agents, and Parloa's customer-service agents will all run on top of OpenShell once they reach the SAP platform.

Why This Matters: The Runtime Layer Is the Last Unsolved Problem

Every CIO and CISO conversation about AI agents in 2026 follows the same arc. First, the team buys a foundation model from Anthropic, OpenAI, or Google. Second, they bolt on a Model Context Protocol (MCP) gateway from Palo Alto, Microsoft, or an open-source project. Third, they add observability through Datadog, Dynatrace, or Weights & Biases. Then they ship to production, and a few months later they discover the gap.

Technical Implications (for CTOs, CIOs, and Platform Engineers)

The MCP gateway tells the agent what tools it's allowed to call. The model itself is fine-tuned to refuse obvious abuse. The observability layer tells you what happened after the fact. None of these layers actually contain the agent at the moment it executes.

If an attacker plants a prompt-injection payload in a SharePoint document and your AI agent summarizes it, the agent might call a legitimate tool — say, send_email — with malicious content. Your gateway approved the call. Your observability stack logged it. But by the time your SOC sees the alert, the data is already exfiltrated. That's exactly what happened with Microsoft 365 Copilot's EchoLeak vulnerability (CVE-2025-32711), a zero-click prompt injection with a CVSS score of 9.3 that could extract data from OneDrive, SharePoint, and Teams.

OpenShell flips the model. The agent runs inside a hardened execution environment with explicit filesystem and network policies. Even if the prompt-injection succeeds and the agent decides to do something dangerous, the runtime can refuse to let the action complete. This is the difference between governing intent and containing execution. Both matter. The market has been buying the first and ignoring the second.
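The difference between governing intent and containing execution can be made concrete with a toy two-layer check. All names here are hypothetical; the sketch just shows why a runtime deny still stops an attack that the gateway waved through.

```python
# Two-layer model, sketched. The gateway governs intent ("is this tool
# allowed?"); the runtime contains execution ("may this concrete action
# happen?"). Names and allowlists are illustrative, not a real API.

ALLOWED_TOOLS = {"send_email", "read_document"}       # governance layer
EGRESS_ALLOWLIST = {"smtp.internal.example.com"}      # runtime layer

def gateway_approves(tool: str) -> bool:
    return tool in ALLOWED_TOOLS

def runtime_permits(host: str) -> bool:
    return host in EGRESS_ALLOWLIST

def execute(tool: str, host: str) -> str:
    if not gateway_approves(tool):
        return "blocked-by-gateway"
    if not runtime_permits(host):
        # The prompt injection succeeded and the tool call was legitimate,
        # but the exfiltration destination fails the runtime egress policy.
        return "blocked-by-runtime"
    return "executed"
```

In the EchoLeak scenario, `send_email` passes the gateway; it is the runtime's egress check on the destination host that turns a breach into a blocked action and a log entry.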

The reference architecture matters technically because most current sandboxing approaches — vanilla Docker containers, Linux namespaces, Kubernetes RBAC — share the host kernel. As Northflank's 2026 analysis notes, "Docker provides isolation through Linux namespaces, cgroups, and capabilities. These mechanisms work fine for their intended purpose — isolating trusted, vetted application code." For untrusted agent output, you need something stronger: Firecracker microVMs, gVisor's user-space kernel, or Kata Containers. OpenShell's published architecture sits in that class.

Business Implications (for CFOs, CISOs, and Risk Officers)

For CFOs, the business case writes itself. IBM's 2025 data put the cost of an AI-related breach at $4.63 million — and that's for the breaches that get detected. Step Finance, a Solana DeFi platform, lost $27–30 million in January 2026 when attackers used AI trading agents with excessive permissions to execute large token transfers without human approval. The company shut down. Compare that to the cost of running an OpenShell-equivalent runtime: open-source, container-based, and folded into existing Kubernetes infrastructure.

For CISOs, the calculation is sharper. Gartner now projects that by 2027, more than 40% of all cybersecurity spending will be directly tied to AI capabilities, up from 8% in 2023. The overall security budget hits $244.2 billion in 2026. If you don't have a budget line for agent runtime security yet, you will have one within twelve months — and your board will ask why your stack doesn't already include the layer NVIDIA just made standard.

For risk officers, the SOX, GDPR, and HIPAA angles are critical. Auditors are starting to ask "how did the agent execute, and what could it have done" — not just "what was the agent allowed to do." OpenShell's audit hooks and infrastructure-level containment turn that question from a documentation nightmare into a queryable log.

Market Context: Who Else Is Building the Runtime Layer?

NVIDIA didn't invent agent sandboxing. But it just made it enterprise-default for one of the world's largest application vendors. Here's how the runtime-security market shapes up in May 2026:

  • Google Cloud announced GKE Agent Sandbox at Google Cloud Next 2026, providing kernel-level isolation using gVisor — the same sandboxing technology that secures Gemini. It scales to 300 sandboxes per second and is built as an open-source Kubernetes SIG Apps subproject.
  • AWS ships Bedrock AgentCore, which combines Firecracker microVM isolation (originally built for Lambda) with an integrated tool gateway and persistent memory layer for production agents.
  • Microsoft open-sourced its Agent Governance Toolkit in April 2026, blending runtime policy controls with Azure Active Directory identity and AKS-native deployment.
  • Cloudflare shipped its Sandboxes service in general availability, layering V8 isolate-based Dynamic Workers for lightweight agent execution at the edge.
  • E2B, a venture-backed startup, builds Firecracker-based agent sandboxes for developers, with sub-second cold starts.
  • Specialty vendors like Zenity ($38M Series B), CrowdStrike, Cisco, and Palo Alto Networks are extending their platforms to cover runtime tool abuse, supply-chain manipulation, and shadow-agent discovery.

What's different about OpenShell? It's the first runtime that ships co-designed by a hyperscaler-class infrastructure provider (NVIDIA) and a top-three enterprise application vendor (SAP), open-sourced for cross-vendor use, and embedded by default into a platform that already touches an estimated 80% of global commerce. As Constellation Research's Holger Mueller observed, it's the first time SAP has had "a vision for ERP" this century — and SAP's vision now ships with NVIDIA's runtime baked in.

Analyst data backs the urgency. McKinsey's own internal AI platform was reportedly compromised by an autonomous agent that gained broad system access in under two hours. The Dark Reading 2026 CISO poll found 48% of security pros calling agentic AI "the single most dangerous attack vector" they face. The average enterprise now manages 37 deployed agents — and that count grows every quarter as individual teams spin up automation without central review.

Framework #1: The 25-Point AI Agent Runtime Security Maturity Assessment

Most enterprises don't know how exposed they are. Here is a five-dimension, 25-point assessment you can run in a single afternoon. Score each dimension 1–5. A total under 10 means you have an active risk; 10–14 is low maturity; 15–19 is medium; 20–25 is high.

Dimension 1: Execution Isolation (1–5 points)

  • 1: Agents run in shared Docker containers on the same host kernel
  • 2: Agents run in dedicated pods with Linux capabilities dropped
  • 3: Agents run in gVisor-isolated containers or equivalent user-space kernel
  • 4: Agents run in Firecracker microVMs or Kata Containers
  • 5: Agents run in OpenShell-class runtime with policy-aware sandboxing

Dimension 2: Filesystem & Network Policy Enforcement (1–5 points)

  • 1: Agents have default filesystem access; egress is uncontrolled
  • 2: Agents are restricted to specific directories; egress allowlisted at the firewall
  • 3: Policy enforced at the namespace level
  • 4: Policy enforced at the runtime layer (per-syscall) with deny-by-default rules
  • 5: Runtime-layer policy plus continuous behavioral analysis for anomalies

Dimension 3: Identity & Accountability (1–5 points)

  • 1: All agents share a single service account or API key
  • 2: Each agent has its own credentials, but no human accountability mapping
  • 3: Agents tied to a human owner; access logged per agent
  • 4: Agents authenticate through enterprise IAM with short-lived tokens
  • 5: Per-action identity chain (human → agent → tool call) with cryptographic attestation
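The level-5 identity chain can be approximated even without dedicated tooling. The sketch below is an assumption-laden illustration: a SHA-256 content hash stands in for a real cryptographic attestation (which would use signatures, not bare hashes), and the record fields are invented.

```python
import hashlib
import json

# Per-action identity chain (human -> agent -> tool call), sealed with a
# content hash as a stand-in for cryptographic attestation. Illustrative only.

def attest_action(human: str, agent: str, tool_call: str) -> dict:
    record = {"human": human, "agent": agent, "tool_call": tool_call}
    payload = json.dumps(record, sort_keys=True).encode()
    record["attestation"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_action(record: dict) -> bool:
    """Recompute the hash over everything except the attestation itself."""
    body = {k: v for k, v in record.items() if k != "attestation"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["attestation"]
```

Any after-the-fact tampering with the recorded tool call breaks verification, which is the property auditors care about.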

Dimension 4: Auditability & Compliance (1–5 points)

  • 1: No structured logging of agent actions
  • 2: Logs exist but aren't centralized or queryable
  • 3: Centralized logs with action-level granularity
  • 4: Logs map to SOX / GDPR / HIPAA control frameworks
  • 5: Logs include input, tool calls, runtime decisions, and policy violations, exportable to GRC platforms
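A level-5 audit entry bundles input, tool call, runtime decision, and policy violations into one queryable record. The field names below are assumptions for illustration, not a real OpenShell or GRC schema.

```python
from datetime import datetime, timezone

# Illustrative structured audit record (dimension 4, level 5).
# Field names are hypothetical, not a vendor schema.

def audit_entry(agent: str, prompt: str, tool_call: str,
                decision: str, violations: list[str]) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "input": prompt,                  # what the agent was asked to do
        "tool_call": tool_call,           # what it tried to do
        "runtime_decision": decision,     # "allowed" | "denied"
        "policy_violations": violations,  # which rules fired, if any
    }
```

Records in this shape export cleanly to a SIEM or GRC platform and answer the auditor's question directly: what did the agent attempt, and what did the runtime decide.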

Dimension 5: Blast Radius Containment (1–5 points)

  • 1: A compromised agent can reach production data and tools
  • 2: Network segmentation limits some lateral movement
  • 3: Per-agent permissions limit blast radius to one workload
  • 4: Runtime containment isolates agent processes from host even on compromise
  • 5: Real-time anomaly detection automatically kills runaway agents and revokes credentials
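Level-5 containment pairs detection with automatic response: kill the agent and revoke its credentials the moment behavior drifts past a threshold. The watchdog below is a minimal sketch; the byte limit and identifiers are hypothetical.

```python
# Illustrative kill-switch watchdog (dimension 5, level 5). A real deployment
# would hook runtime telemetry and an IAM revocation API; this sketch only
# shows the control flow. Threshold is hypothetical.

EGRESS_BYTES_LIMIT = 10_000_000  # per-session egress ceiling (~10 MB)

def watchdog(session_egress_bytes: int, revoked: set, agent_id: str) -> str:
    if session_egress_bytes > EGRESS_BYTES_LIMIT:
        revoked.add(agent_id)   # revoke credentials immediately
        return "killed"         # terminate the runaway agent
    return "running"
```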

How to use this assessment: Run it once for your highest-risk agent today. Score honestly. A total under 15 means you're operating below the new SAP/NVIDIA reference. A total under 10 puts you below the level of a typical 2024 enterprise, and that's the level at which Step Finance lost $30 million.
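Totaling the five dimensions and mapping them to the maturity bands is trivial to script, which makes the assessment repeatable per agent per quarter:

```python
# Total the 25-point assessment and map it to the maturity bands in the text:
# under 10 = active risk, 10-14 = low, 15-19 = medium, 20-25 = high.

def maturity(scores: dict[str, int]) -> tuple[int, str]:
    assert len(scores) == 5, "score all five dimensions"
    assert all(1 <= s <= 5 for s in scores.values()), "each dimension is 1-5"
    total = sum(scores.values())
    if total < 10:
        band = "active risk"
    elif total < 15:
        band = "low"
    elif total < 20:
        band = "medium"
    else:
        band = "high"
    return total, band
```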

Framework #2: A 90-Day Plan to Add a Runtime Security Layer

You don't need to wait for SAP's October GA window. Here's a 90-day plan you can execute regardless of whether your agents run on SAP, AWS Bedrock, GKE, or a homegrown stack.

Days 1–14: Inventory and Triage

  • List every AI agent in production. Include scheduled jobs, MCP tools, internal copilots, and "experimental" notebooks that touch production data.
  • Score each agent on the 25-point assessment above.
  • Identify your top three highest-risk agents (financial actions, customer data access, external network calls).

Days 15–30: Pick a Runtime

  • If you're SAP-aligned: budget for OpenShell-on-Joule once it goes GA.
  • If you're Google Cloud: pilot GKE Agent Sandbox on one workload.
  • If you're AWS-heavy: evaluate Bedrock AgentCore for new agents and Firecracker for existing ones.
  • If you're Azure-aligned: deploy Microsoft's open-source Agent Governance Toolkit on AKS.
  • If you're multi-cloud: pilot gVisor or Kata Containers on Kubernetes for portability.

Days 31–60: Pilot Hardening

  • Wrap one high-risk agent in the chosen runtime.
  • Define filesystem and network policies in code, not in tickets.
  • Tie every agent action to a human identity through your existing IAM (Okta, Entra, Ping).
  • Configure audit logs to flow into your SIEM (Splunk, Sentinel, Chronicle).
  • Run a red-team prompt-injection test against the agent. Document what the runtime blocked.

Days 61–90: Operationalize

  • Roll the runtime out to the top three highest-risk agents.
  • Add runtime-policy reviews to your change-management process.
  • Brief the board: runtime layer added, breach exposure reduced, compliance gap closed.
  • Set up monthly drift reviews. New agents must be wrapped before production.

The total cost of this plan, for a typical Fortune 1000, is one platform engineer at 60% utilization for ninety days plus the open-source runtime itself. Compare that to one $4.6 million breach.

Case Studies: Two Breaches OpenShell Would Have Prevented

To make this concrete, consider two of the worst AI agent breaches of the past twelve months — and how runtime isolation would have changed the outcome.

Step Finance (January 2026): Attackers compromised executive devices at the Solana DeFi portfolio manager and used the company's AI trading agents — which had been granted permissions to execute large SOL transfers without human approval — to move 261,000+ SOL tokens, valued at $27–30 million. The token price collapsed 97% the next week. The company eventually shut down.

The root cause was excessive permissions at the application layer. But the deeper failure was that Step Finance's agents could execute trades with no runtime-layer policy preventing transfers above a threshold or to unfamiliar addresses. An OpenShell-class runtime with network egress policies tied to wallet allowlists would have blocked the suspicious destinations. A per-action identity chain would have flagged that the executive's device — not the executive — initiated the trades. The blast radius could have been contained to a single $50,000 test transfer rather than $30 million.
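The runtime-layer control described here reduces to two conditions per transfer: an allowlisted destination and a per-action ceiling. The sketch below is hypothetical (wallet names and the $50,000 cap are invented for illustration), but it shows how little policy would have changed the outcome.

```python
# Hypothetical runtime transfer policy that would have narrowed the
# Step Finance blast radius. Wallet names and the cap are illustrative.

WALLET_ALLOWLIST = {"treasury-wallet", "exchange-wallet"}
PER_TRANSFER_CAP_USD = 50_000

def permit_transfer(destination: str, amount_usd: float) -> bool:
    """Allow only allowlisted destinations, and only below the cap."""
    return destination in WALLET_ALLOWLIST and amount_usd <= PER_TRANSFER_CAP_USD
```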

Mexican Government Agencies (December 2025–February 2026): A single attacker used Anthropic's Claude Code and OpenAI's GPT-4.1 to breach nine Mexican government agencies, including the federal tax authority and civil registry. The attacker exfiltrated 195 million taxpayer records, 220 million civil records, and 150GB+ of data over 34 separate AI agent sessions.

The attack succeeded because each session looked legitimate to the model and to the gateway. There was no runtime-layer behavioral analysis flagging "bulk data exfiltration from 34 sessions in 60 days." The deep lesson: governance at the API layer is necessary but insufficient. The runtime is where agents make data calls, and the runtime is where exfiltration patterns become visible. OpenShell's continuous monitoring and policy enforcement at the network layer would have surfaced this pattern within the first week.
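The cross-session pattern the text describes is detectable with a cumulative budget per identity. This is a simplified sketch with invented thresholds; a production detector would use rolling windows and richer features, but the principle is the same: individual sessions look fine, the aggregate does not.

```python
from collections import defaultdict

# Illustrative network-layer behavioral check: flag any identity whose
# cumulative egress or session count exceeds a budget -- the "34 sessions
# of bulk exfiltration" pattern. Thresholds are hypothetical.

CUMULATIVE_EGRESS_LIMIT = 1_000_000_000   # ~1 GB total across sessions
SESSION_COUNT_LIMIT = 20

def flag_exfiltration(sessions: list[tuple[str, int]]) -> set[str]:
    """sessions: (identity, bytes_egressed) per session; returns flagged ids."""
    totals = defaultdict(int)
    counts = defaultdict(int)
    flagged = set()
    for identity, nbytes in sessions:
        totals[identity] += nbytes
        counts[identity] += 1
        if (totals[identity] > CUMULATIVE_EGRESS_LIMIT
                or counts[identity] > SESSION_COUNT_LIMIT):
            flagged.add(identity)
    return flagged
```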

These aren't outliers. They are the new baseline. And every CISO running production AI agents is one prompt-injection payload away from being the next case study.

What to Do About It

If you're a CIO or CTO evaluating your AI agent stack:

  • Add "runtime security layer" to your agent platform RFP within the next quarter.
  • If your existing platform doesn't have one, ask the vendor for a timeline. If the answer is vague, pilot OpenShell, GKE Agent Sandbox, or Bedrock AgentCore in parallel.
  • Treat agents as production infrastructure, not as experiments. They need the same change-management, identity controls, and policy reviews as any production service.

If you're a CISO or risk officer:

  • Run the 25-point maturity assessment on your top three highest-risk agents this week.
  • Brief your CFO: agent runtime security is a 2026 budget line item, not a 2027 one.
  • Demand from every AI vendor: "What runtime is your agent executing in? Show me the isolation guarantees."

If you're a CFO or board member:

  • One $4.6 million breach pays for years of runtime security investment. Reframe agent security from a cost center to a risk-transfer mechanism.
  • Ask for quarterly reporting on agent inventory, runtime maturity score, and incident counts. If you can't get those numbers, your governance is incomplete.

The Bottom Line

The first wave of enterprise AI security focused on what agents are allowed to do. The next wave is about what agents physically can do once they're running. NVIDIA OpenShell, embedded in SAP's new Business AI Platform, is the first vendor-backed answer at scale. It won't be the last — Google, AWS, Microsoft, and Cloudflare all have credible runtime offerings, and the underlying technologies (gVisor, Firecracker, Kata Containers) are open-source and battle-tested.

What matters is that 2026 is the year the runtime layer stops being optional. The vendors who ship it by default win the enterprise. The CISOs who deploy it before their first breach keep their jobs. The CFOs who fund it before their auditors flag it sleep better.

If your AI agent stack doesn't include a runtime security layer, today is a good day to fix that.


About the Author: Rajesh Beri is Head of AI Engineering at a Fortune 500 security company and author of THE DAILY BRIEF, a newsletter for technical and business leaders navigating enterprise AI. Follow on LinkedIn | Follow on Twitter/X

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

NVIDIA OpenShell: The AI Agent Layer Your Stack Forgot

Photo by Tima Miroshnichenko on Pexels

Eighty-eight percent of organizations reported a confirmed or suspected AI agent security incident in the last twelve months. Shadow agent breaches now cost an average of $4.63 million each — $670,000 more than a standard breach. Most CISOs are spending on prompt-injection scanners, agent inventory tools, and MCP gateways. Almost none are securing the layer where agents actually execute code, write to disk, and make network calls.

On May 12, 2026, NVIDIA and SAP closed that gap. At SAP Sapphire 2026 in Orlando, the two companies announced that SAP would embed NVIDIA OpenShell — an open-source secure runtime for autonomous AI agents — into the new SAP Business AI Platform. OpenShell is the first major enterprise-vendor-backed answer to a problem the rest of the market has been quietly ignoring: even a perfectly governed AI agent can be weaponized once it starts executing code on production systems.

For CIOs, CTOs, and CISOs evaluating agent platforms, this announcement is a wake-up call. The model layer is mostly solved. The governance layer has a half-dozen credible vendors. But the runtime layer — where agents touch filesystems, call APIs, and run shell commands — is the new frontier. And it's where the next wave of enterprise AI breaches is already happening.

What Just Shipped: A Two-Layer AI Agent Security Model

SAP CEO Christian Klein opened the Sapphire keynote with a simple line: "'Almost right' just isn't good enough." Christian was talking about finance close processes, but the same logic applies to security. An AI agent that is "almost" sandboxed is a breach waiting to happen.

OpenShell answers a question that traditional application security tools were never designed for. As SAP's Andre Lamego, SVP and Chief Product Officer of BTP Fabric, framed it: "Can this agent action safely execute?" Meanwhile, SAP's Joule Studio runtime — the business-logic layer — answers the complementary question: "Should this action happen at all?" Together, the two layers form the architecture every enterprise agent platform will eventually need.

Here is what OpenShell actually does:

  • Isolated execution environments for every agent invocation, separating agent processes from the host kernel
  • Policy enforcement at the filesystem and network layers — an agent that escapes its prompt can't escape its sandbox
  • Infrastructure-level containment that limits blast radius even when the agent's logic fails or is hijacked
  • Enterprise identity integration so agent actions tie back to a human accountability chain
  • Auditing hooks that satisfy SOX, GDPR, and HIPAA scrutiny
  • Open-source codebase so security teams can inspect, audit, and contribute — no black-box trust required

SAP isn't just licensing OpenShell. The company's own engineers are co-designing it with NVIDIA, with joint work focused on runtime hardening, policy modeling, identity integration, and governance controls. The runtime ships embedded in the SAP Business AI Platform, which unifies the SAP Business Technology Platform, SAP Business Data Cloud, and SAP Business AI into a single environment running more than 200 specialized agents and 50+ Joule Assistants across finance, supply chain, procurement, HR, and customer experience.

The same architecture is available to enterprises building custom agents in Joule Studio. And NVIDIA is positioning OpenShell as a reference for the rest of the industry — meaning Anthropic's Claude agents, Mistral's sovereign models, Cohere, n8n workflow agents, and Parloa's customer-service agents will all run on top of OpenShell once they reach the SAP platform.

Why This Matters: The Runtime Layer Is the Last Unsolved Problem

Every CIO and CISO conversation about AI agents in 2026 follows the same arc. First, the team buys a foundation model from Anthropic, OpenAI, or Google. Second, they bolt on a model-context-protocol (MCP) gateway from Palo Alto, Microsoft, or an open-source project. Third, they add observability through Datadog, Dynatrace, or Weights & Biases. Then they ship to production — and a few months later, they discover the gap.

Technical Implications (for CTOs, CIOs, and Platform Engineers)

The MCP gateway tells the agent what tools it's allowed to call. The model itself is fine-tuned to refuse obvious abuse. The observability layer tells you what happened after the fact. None of these layers actually contain the agent at the moment it executes.

If an attacker plants a prompt-injection payload in a SharePoint document and your AI agent summarizes it, the agent might call a legitimate tool — say, send_email — with malicious content. Your gateway approved the call. Your observability stack logged it. But by the time your SOC sees the alert, the data is already exfiltrated. That's exactly what happened with Microsoft 365 Copilot's EchoLeak vulnerability (CVE-2025-32711), a zero-click prompt injection with a CVSS score of 9.3 that could extract data from OneDrive, SharePoint, and Teams.

OpenShell flips the model. The agent runs inside a hardened execution environment with explicit filesystem and network policies. Even if the prompt-injection succeeds and the agent decides to do something dangerous, the runtime can refuse to let the action complete. This is the difference between governing intent and containing execution. Both matter. The market has been buying the first and ignoring the second.

The reference architecture matters technically because most current sandboxing approaches — vanilla Docker containers, Linux namespaces, Kubernetes RBAC — share the host kernel. As Northflank's 2026 analysis notes, "Docker provides isolation through Linux namespaces, cgroups, and capabilities. These mechanisms work fine for their intended purpose — isolating trusted, vetted application code." For untrusted agent output, you need something stronger: Firecracker microVMs, gVisor's user-space kernel, or Kata Containers. OpenShell's published architecture sits in that class.

Business Implications (for CFOs, CISOs, and Risk Officers)

For CFOs, the business case writes itself. IBM's 2025 data put the cost of an AI-related breach at $4.63 million — and that's for the breaches that get detected. Step Finance, a Solana DeFi platform, lost $27-30 million in January 2026 when attackers used AI trading agents with excessive permissions to execute large token transfers without human approval. The company shut down. Compare that to the cost of running an OpenShell-equivalent runtime: open-source, container-based, and folded into existing Kubernetes infrastructure.

For CISOs, the calculation is sharper. Gartner now projects that by 2027, more than 40% of all cybersecurity spending will be directly tied to AI capabilities, up from 8% in 2023. The overall security budget hits $244.2 billion in 2026. If you don't have a budget line for agent runtime security yet, you will have one within twelve months — and your board will ask why your stack doesn't already include the layer NVIDIA just made standard.

For risk officers, the SOX, GDPR, and HIPAA angles are critical. Auditors are starting to ask "how did the agent execute, and what could it have done" — not just "what was the agent allowed to do." OpenShell's audit hooks and infrastructure-level containment turn that question from a documentation nightmare into a queryable log.

Market Context: Who Else Is Building the Runtime Layer?

NVIDIA didn't invent agent sandboxing. But it just made it enterprise-default for one of the world's largest application vendors. Here's how the runtime-security market shapes up in May 2026:

  • Google Cloud announced GKE Agent Sandbox at Google Cloud Next 2026, providing kernel-level isolation using gVisor — the same sandboxing technology that secures Gemini. It scales to 300 sandboxes per second and is built as an open-source Kubernetes SIG Apps subproject.
  • AWS ships Bedrock AgentCore, which combines Firecracker microVM isolation (originally built for Lambda) with an integrated tool gateway and persistent memory layer for production agents.
  • Microsoft open-sourced its Agent Governance Toolkit in April 2026, blending runtime policy controls with Azure Active Directory identity and AKS-native deployment.
  • Cloudflare shipped its Sandboxes service in general availability, layering V8 isolate-based Dynamic Workers for lightweight agent execution at the edge.
  • E2B, a venture-backed startup, builds Firecracker-based agent sandboxes for developers, with sub-second cold starts.
  • Specialty vendors like Zenity ($38M Series B), CrowdStrike, Cisco, and Palo Alto Networks are extending their platforms to cover runtime tool abuse, supply-chain manipulation, and shadow-agent discovery.

What's different about OpenShell? It's the first runtime that ships co-designed by a hyperscaler-class infrastructure provider (NVIDIA) and a top-three enterprise application vendor (SAP), open-sourced for cross-vendor use, and embedded by default into a platform that already touches an estimated 80% of global commerce. As Constellation Research's Holger Mueller observed, it's the first time SAP has had "a vision for ERP" this century — and SAP's vision now ships with NVIDIA's runtime baked in.

Analyst data backs the urgency. McKinsey's own internal AI platform was reportedly compromised by an autonomous agent that gained broad system access in under two hours. The Dark Reading 2026 CISO poll found 48% of security pros calling agentic AI "the single most dangerous attack vector" they face. The average enterprise now manages 37 deployed agents — and that count grows every quarter as individual teams spin up automation without central review.

Framework #1: The 25-Point AI Agent Runtime Security Maturity Assessment

Most enterprises don't know how exposed they are. Here is a five-dimension, 25-point assessment you can run in a single afternoon. Score each dimension 1–5. A total under 10 means you have an active risk; 10–14 is low maturity; 15–19 is medium; 20–25 is high.

Dimension 1: Execution Isolation (1–5 points)

  • 1: Agents run in shared Docker containers on the same host kernel
  • 2: Agents run in dedicated pods with Linux capabilities dropped
  • 3: Agents run in gVisor-isolated containers or equivalent user-space kernel
  • 4: Agents run in Firecracker microVMs or Kata Containers
  • 5: Agents run in OpenShell-class runtime with policy-aware sandboxing

Dimension 2: Filesystem & Network Policy Enforcement (1–5 points)

  • 1: Agents have default filesystem access; egress is uncontrolled
  • 2: Agents are restricted to specific directories; egress whitelisted at the firewall
  • 3: Policy enforced at the namespace level
  • 4: Policy enforced at the runtime layer (per-syscall) with deny-by-default rules
  • 5: Runtime-layer policy plus continuous behavioral analysis for anomalies

Dimension 3: Identity & Accountability (1–5 points)

  • 1: All agents share a single service account or API key
  • 2: Each agent has its own credentials, but no human accountability mapping
  • 3: Agents tied to a human owner; access logged per agent
  • 4: Agents authenticate through enterprise IAM with short-lived tokens
  • 5: Per-action identity chain (human → agent → tool call) with cryptographic attestation

Dimension 4: Auditability & Compliance (1–5 points)

  • 1: No structured logging of agent actions
  • 2: Logs exist but aren't centralized or queryable
  • 3: Centralized logs with action-level granularity
  • 4: Logs map to SOX / GDPR / HIPAA control frameworks
  • 5: Logs include input, tool calls, runtime decisions, and policy violations, exportable to GRC platforms

Dimension 5: Blast Radius Containment (1–5 points)

  • 1: A compromised agent can reach production data and tools
  • 2: Network segmentation limits some lateral movement
  • 3: Per-agent permissions limit blast radius to one workload
  • 4: Runtime containment isolates agent processes from host even on compromise
  • 5: Real-time anomaly detection automatically kills runaway agents and revokes credentials

How to use this assessment: Run it once for your highest-risk agent today. Score honestly. Anything under 15 means you're operating below the new SAP/NVIDIA reference. Score below 10 means you're operating below the level of a typical 2024 enterprise — and that's the level where Step Finance lost $30 million.

Framework #2: A 90-Day Plan to Add a Runtime Security Layer

You don't need to wait for SAP's October GA window. Here's a 90-day plan you can execute regardless of whether your agents run on SAP, AWS Bedrock, GKE, or a homegrown stack.

Days 1–14: Inventory and Triage

  • List every AI agent in production. Include scheduled jobs, MCP tools, internal copilots, and "experimental" notebooks that touch production data.
  • Score each agent on the 25-point assessment above.
  • Identify your top three highest-risk agents (financial actions, customer data access, external network calls).

Days 15–30: Pick a Runtime

  • If you're SAP-aligned: budget for OpenShell-on-Joule once it goes GA.
  • If you're Google Cloud: pilot GKE Agent Sandbox on one workload.
  • If you're AWS-heavy: evaluate Bedrock AgentCore for new agents and Firecracker for existing ones.
  • If you're Azure-aligned: deploy Microsoft's open-source Agent Governance Toolkit on AKS.
  • If you're multi-cloud: pilot gVisor or Kata Containers on Kubernetes for portability.
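For the multi-cloud gVisor pilot, the first step can be as simple as running the agent's container under Docker's `runsc` runtime (gVisor's runtime name). A sketch that assembles the invocation; the image name and agent arguments are placeholders:

```python
# Sketch: assemble the Docker invocation for piloting an agent image under
# gVisor's runsc runtime. The image name and agent arguments are placeholders.
def gvisor_run_cmd(image: str, *agent_args: str) -> list:
    return [
        "docker", "run", "--rm",
        "--runtime=runsc",   # gVisor's user-space kernel runtime
        "--network=none",    # no egress during the pilot
        "--read-only",       # no writes outside explicit mounts
        image, *agent_args,
    ]
```

Starting with `--network=none` and `--read-only` and then relaxing per policy is the deny-by-default direction; starting open and tightening later rarely happens.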

Days 31–60: Pilot Hardening

  • Wrap one high-risk agent in the chosen runtime.
  • Define filesystem and network policies in code, not in tickets.
  • Tie every agent action to a human identity through your existing IAM (Okta, Entra, Ping).
  • Configure audit logs to flow into your SIEM (Splunk, Sentinel, Chronicle).
  • Run a red-team prompt-injection test against the agent. Document what the runtime blocked.
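"Policies in code, not in tickets" can start as small as a version-controlled policy object. A hedged sketch; every path and host below is hypothetical:

```python
from dataclasses import dataclass, field

# Sketch of a filesystem/network policy defined in code, and therefore
# reviewable in version control. All paths and hosts are illustrative.
@dataclass(frozen=True)
class AgentPolicy:
    writable_paths: frozenset = field(default_factory=frozenset)
    egress_hosts: frozenset = field(default_factory=frozenset)

    def allows_write(self, path: str) -> bool:
        return any(path.startswith(p) for p in self.writable_paths)

    def allows_egress(self, host: str) -> bool:
        return host in self.egress_hosts

POLICY = AgentPolicy(
    writable_paths=frozenset({"/tmp/agent-scratch/"}),
    egress_hosts=frozenset({"api.internal.example.com"}),
)
```

The point is not the twenty lines of Python; it's that a policy change now goes through code review instead of a ticket queue.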

Days 61–90: Operationalize

  • Roll the runtime out to the top three highest-risk agents.
  • Add runtime-policy reviews to your change-management process.
  • Brief the board: runtime layer added, breach exposure reduced, compliance gap closed.
  • Set up monthly drift reviews. New agents must be wrapped before production.

The total cost of this plan, for a typical Fortune 1000, is one platform engineer at 60% utilization for ninety days plus the open-source runtime itself. Compare that to one $4.6 million breach.

Case Study: The Two Breaches OpenShell Would Have Prevented

To make this concrete, consider two of the worst AI agent breaches of the past twelve months — and how runtime isolation would have changed the outcome.

Step Finance (January 2026): Attackers compromised executive devices at the Solana DeFi portfolio manager and used the company's AI trading agents — which had been granted permissions to execute large SOL transfers without human approval — to move 261,000+ SOL tokens, valued at $27–30 million. The token price collapsed 97% the next week. The company eventually shut down.

The root cause was excessive permissions at the application layer. But the deeper failure was that Step Finance's agents could execute trades with no runtime-layer policy preventing transfers above a threshold or to unfamiliar addresses. An OpenShell-class runtime with network egress policies tied to wallet allowlists would have blocked the suspicious destinations. A per-action identity chain would have flagged that the executive's device — not the executive — initiated the trades. The blast radius could have been contained to a single $50,000 test transfer rather than $30 million.
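In code, the missing control is small. A hedged sketch, with an invented threshold and allowlist:

```python
# Sketch of the runtime-layer transfer guard described above.
# The cap and allowlist entries are invented for illustration.
TRANSFER_CAP_USD = 50_000
WALLET_ALLOWLIST = {"treasury-wallet", "rebalance-wallet"}

def transfer_allowed(amount_usd: float, destination: str) -> bool:
    # Both conditions must hold: under the cap AND a known destination.
    return amount_usd <= TRANSFER_CAP_USD and destination in WALLET_ALLOWLIST
```

A $30 million transfer to an unfamiliar address fails both checks; the attacker's ceiling becomes the cap, not the treasury.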

Mexican Government Agencies (December 2025–February 2026): A single attacker used Anthropic's Claude Code and OpenAI's GPT-4.1 to breach nine Mexican government agencies, including the federal tax authority and civil registry. The attacker exfiltrated 195 million taxpayer records, 220 million civil records, and 150GB+ of data over 34 separate AI agent sessions.

The attack succeeded because each session looked legitimate to the model and to the gateway. There was no runtime-layer behavioral analysis flagging "bulk data exfiltration from 34 sessions in 60 days." The deep lesson: governance at the API layer is necessary but insufficient. The runtime is where agents make data calls, and the runtime is where exfiltration patterns become visible. OpenShell's continuous monitoring and policy enforcement at the network layer would have surfaced this pattern within the first week.
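The missing detection reduces to aggregating per-session egress over a rolling window. A sketch with illustrative thresholds; real systems would tune them per workload:

```python
from collections import defaultdict

# Sketch: flag bulk-exfiltration patterns across sessions within one window.
# Thresholds are illustrative, not recommendations.
SESSION_LIMIT = 10       # distinct sessions per window
EGRESS_LIMIT_GB = 20.0   # total egress per window

def exfil_alerts(events):
    """events: iterable of (agent_id, session_id, egress_gb) in one window."""
    sessions = defaultdict(set)
    egress = defaultdict(float)
    for agent, session, gb in events:
        sessions[agent].add(session)
        egress[agent] += gb
    return {
        agent for agent in sessions
        if len(sessions[agent]) > SESSION_LIMIT or egress[agent] > EGRESS_LIMIT_GB
    }
```

Thirty-four sessions that each look legitimate in isolation trip the aggregate thresholds immediately, which is exactly the signal the gateway layer never sees.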

These aren't outliers. They are the new baseline. And every CISO running production AI agents is one prompt-injection payload away from being the next case study.

What to Do About It

If you're a CIO or CTO evaluating your AI agent stack:

  • Add "runtime security layer" to your agent platform RFP within the next quarter.
  • If your existing platform doesn't have one, ask the vendor for a timeline. If the answer is vague, pilot OpenShell, GKE Agent Sandbox, or Bedrock AgentCore in parallel.
  • Treat agents as production infrastructure, not as experiments. They need the same change-management, identity controls, and policy reviews as any production service.

If you're a CISO or risk officer:

  • Run the 25-point maturity assessment on your top three highest-risk agents this week.
  • Brief your CFO: agent runtime security is a 2026 budget line item, not a 2027 one.
  • Demand from every AI vendor: "What runtime is your agent executing in? Show me the isolation guarantees."

If you're a CFO or board member:

  • One $4.6 million breach pays for years of runtime security investment. Reframe agent security from a cost center to a risk-reduction mechanism.
  • Ask for quarterly reporting on agent inventory, runtime maturity score, and incident counts. If you can't get those numbers, your governance is incomplete.

The Bottom Line

The first wave of enterprise AI security focused on what agents are allowed to do. The next wave is about what agents physically can do once they're running. NVIDIA OpenShell, embedded in SAP's new Autonomous Enterprise platform, is the first vendor-backed answer at scale. It won't be the last — Google, AWS, Microsoft, and Cloudflare all have credible runtime offerings, and the underlying technologies (gVisor, Firecracker, Kata Containers) are open-source and battle-tested.

What matters is that 2026 is the year the runtime layer stops being optional. The vendors who ship it by default win the enterprise. The CISOs who deploy it before their first breach keep their jobs. The CFOs who fund it before their auditors flag it sleep better.

If your AI agent stack doesn't include a runtime security layer, today is a good day to fix that.




About the Author: Rajesh Beri is Head of AI Engineering at a Fortune 500 security company and author of THE DAILY BRIEF, a newsletter for technical and business leaders navigating enterprise AI. Follow on LinkedIn | Follow on Twitter/X


© 2026 Rajesh Beri. All rights reserved.
