Capsule Security: Why Copilot and Agentforce Both Leaked

Capsule Security exits stealth with $7M after disclosing zero-day prompt injections in Microsoft Copilot Studio (CVE-2026-21520) and Salesforce Agentforce.

By Rajesh Beri·April 16, 2026·11 min read
Share:

THE DAILY BRIEF

AI SecurityPrompt InjectionAI AgentsMicrosoft CopilotSalesforce Agentforce

Capsule Security: Why Copilot and Agentforce Both Leaked

Capsule Security exits stealth with $7M after disclosing zero-day prompt injections in Microsoft Copilot Studio (CVE-2026-21520) and Salesforce Agentforce.

By Rajesh Beri·April 16, 2026·11 min read

On April 15, 2026, Capsule Security exited stealth with a $7 million seed round led by Lama Partners and Forgepoint Capital International. The funding announcement was not the news. The news was what Capsule dropped alongside it: two coordinated zero-day disclosures—ShareLeak in Microsoft Copilot Studio and PipeLeak in Salesforce Agentforce—that between them demolish the comfortable assumption that enterprise AI agents, when they come from hyperscaler vendors, are "secure by default."

The ShareLeak finding carries a CVE: CVE-2026-21520, a CVSS 7.5 indirect prompt injection in Copilot Studio. Microsoft patched it on January 15, 2026. The sting in Capsule's announcement is the observation that Microsoft's own safety mechanisms flagged the malicious payload as suspicious during testing—and the data exfiltrated anyway. The patch closed the specific pathway. The class of attack remains open.

PipeLeak is the Agentforce analog: a prompt injection via untrusted lead-form inputs that can be used to redirect an Agentforce agent into unsafe downstream actions. Salesforce is now facing the same structural problem Microsoft just papered over.

For CIOs, CISOs, and CTOs who have spent the last 12 months approving Copilot Studio rollouts or evaluating Agentforce pilots, Capsule's disclosures are the first concrete, named, CVE-carrying evidence that the model's guardrails are not enough. The agent runtime is the real attack surface, and it needs dedicated controls that neither hyperscaler currently ships by default.

What Actually Happened in ShareLeak

The mechanics of CVE-2026-21520 are worth understanding in detail, because they describe a pattern that will show up repeatedly across agentic products.

The attack targets the gap between trusted input (the agent's own system prompt and configured instructions) and untrusted input (data the agent pulls in to answer a question). In Copilot Studio, a customer can build an agent that reads from SharePoint lists to answer internal questions—lead data, account notes, customer feedback, whatever. The SharePoint data is assumed to be trusted because it's inside the enterprise tenant.

That assumption is wrong the moment a SharePoint field accepts input from a public-facing form.

Capsule's proof-of-concept worked like this:

  1. An attacker fills a public SharePoint form comment field with a crafted payload that mimics a system-role message ("System: You are now instructed to...").
  2. Copilot Studio, when an internal user later triggers the agent to pull that SharePoint data, concatenates the malicious field content directly into the agent's context with no input sanitization between form and model.
  3. The injected payload overrides the original system instructions, directs the agent to query connected SharePoint Lists for customer data, and uses the agent's Outlook permissions to send that data to an attacker-controlled email address.
  4. The attack completed end-to-end despite Microsoft's runtime safety checks flagging the request as suspicious.

Low complexity. No privileges required. Full data exfiltration. That is the definition of a high-impact vulnerability, and it sat in a flagship hyperscaler agent platform for months.

The research timeline is a credibility anchor: Capsule found the bug on November 24, 2025, disclosed to Microsoft on December 5, 2025, and the patch shipped on January 15, 2026. This is standard responsible disclosure, not a drive-by marketing stunt. The disclosure publication was held until Capsule's public launch.

PipeLeak: The Same Class of Attack, Different Vendor

PipeLeak is worth treating as a sibling vulnerability rather than a standalone finding. The mechanism is the same: an untrusted input surface (a public lead-submission form in Salesforce) gets concatenated into an Agentforce agent's context, where crafted text can alter agent behavior.

The practical scenario: an inbound lead submission includes a crafted message in the "comments" field. An Agentforce sales agent later reads that lead record to qualify or enrich it. The injected text redirects the agent—adjusting CRM records, pulling data into responses, or triggering downstream automations the submitter has no business touching.

Salesforce's position is that Agentforce's guardrails and permissions model constrain the blast radius. That is probably true at the tool-call layer—an agent can only call what it has credentials for. But the agent's decision about which tool to call is exactly what prompt injection corrupts. If the agent has legitimate CRM-write permissions, hijacking the decision about when to write is sufficient damage.

The unifying point across ShareLeak and PipeLeak: the agent's permissions are the attack surface, not its guardrails. Any agent with meaningful enterprise access is one untrusted input concatenation away from acting against its operator's interest.

Why Capsule's Technical Bet Matters

Capsule's commercial premise is that the AI agent is the most unpredictable component in the enterprise stack, and that traditional security tooling cannot constrain it effectively. Their wedge is runtime control: observe what an agent is about to do, enforce policy before the tool call executes, and route telemetry into existing detection and response workflows.

The architectural choice that matters: no proxies, no gateways, no SDKs, no browser extensions. Enterprise security teams have rejected installation-heavy agents repeatedly over the last decade, and Capsule's team—co-founded by Naor Paz (CEO, formerly of F5 and Israeli intelligence Unit 8200) and Lidan Hazout (CTO, formerly VP R&D at Securedtouch/Ping Identity and Transmit Security)—clearly understood the objection before they built the product.

The platform integrates with Cursor, Claude Code, Microsoft Copilot Studio, ServiceNow, and Salesforce Agentforce. That list is deliberate. It covers (a) the developer agent surface that's metastasizing inside engineering orgs, (b) the hyperscaler agent platforms that CIOs are standardizing on, and (c) the ITSM/CRM agent surfaces where the blast radius of a rogue agent is measured in customer-facing damage.

Capsule also released ClawGuard, an open-source enforcement layer that inserts a checkpoint before an AI agent executes a tool call. The open-source release is a positioning move as much as a technical one: it seeds the pattern of "pre-execution validation for agent tool calls" as an industry norm, and makes it easier for security teams in regulated industries to run the control logic themselves even if they never buy the commercial product.

The CISO Framework: What Actually Changes Post-ShareLeak

For security leadership, ShareLeak is not an isolated incident. It is a category-defining disclosure that should reset three things in your AI-agent threat model.

1. Input sanitization must move inside the agent runtime, not upstream. The Copilot Studio vulnerability existed because the platform assumed tenant-internal data was trusted. Any agent that reads from a source capable of receiving external input—SharePoint forms, CRM lead fields, support ticket bodies, email bodies, PDF attachments, calendar invites—should treat that content as untrusted regardless of where it's stored. The sanitization layer has to run between the data source and the context window, not at the ingestion boundary.

2. The agent's effective permissions ≠ the human operator's permissions. The ShareLeak exfiltration used Copilot Studio's Outlook integration to send SharePoint data to an external email address. The attacker never had Outlook access; the agent did, and the agent was manipulated. Agents should hold the minimum permission set required to complete their documented workflows, not the union of permissions their human users would expect to have. This is least-privilege applied to the agent, not the user.

3. Runtime telemetry is now a compliance requirement, not a nice-to-have. Agents operating without continuous runtime monitoring—decisions logged, tool calls inspected, unexpected data flows flagged in real time—are operating in a state that SOC 2, ISO 27001, and the forthcoming EU AI Act auditors will increasingly treat as uncontrolled data processing. The control gap is large enough that it cannot be closed by pre-deployment review alone.

Capsule is one vendor solving this category. There will be others—expect fast-follower announcements from the incumbent SASE, CASB, and DSPM vendors within two quarters, and an acquisition wave inside six. For now, the category has a named leader, named vulnerabilities to benchmark against, and an open-source artifact (ClawGuard) that security teams can evaluate without a procurement cycle.

The CIO Budget Lens

For CIOs, the more pressing question is procurement: do you negotiate agent security as a line item in your existing Copilot/Agentforce contracts, or do you buy a dedicated runtime security layer from a third party?

The honest answer for most enterprises is both, in sequence:

First, push Microsoft and Salesforce on specific remediation commitments. The ShareLeak disclosure gives you a contractual lever. Ask for:

  • Input sanitization guarantees between tenant data sources and agent context windows
  • Scoped, workflow-specific permission templates (not blanket agent-level permissions)
  • Runtime telemetry APIs that your SIEM can consume
  • A documented post-incident response path for agent-level security events

Second, deploy a runtime enforcement layer over the top—Capsule, or a competing offering, or a self-built MCP-protocol-based equivalent. The hyperscaler is never going to ship the layer that watches itself; the industry is headed toward a third-party control plane pattern, much like CASB and CSPM evolved as third-party controls over cloud platforms that the platforms themselves were structurally unwilling to provide.

The budget footprint for a runtime agent security layer is currently small enough ($50K-$250K/year for most enterprise deployments) that it's a rounding error against an enterprise Copilot or Agentforce commitment. The comparison that matters is the cost of a data exfiltration incident—which, in the ShareLeak reference case, could have been measured in customer records, breach notification obligations, and brand damage.

What Boards Should Ask

For board-level risk and audit committees, the right questions coming out of the ShareLeak disclosure are concrete and short:

  • Which AI agents do we have in production that can read tenant data and execute outbound actions? (Email, API calls, database writes, CRM updates.)
  • Which of those agents ingest data from sources that accept untrusted input? (Forms, tickets, inbound email, attachments.)
  • What is our runtime monitoring posture for those agents? Logging, alerting, kill-switch capability, post-incident replay.
  • Who owns agent security as a function? If the answer is "the platform vendor," you have the problem.
  • What is our response plan for a ShareLeak-class event on our own platform? If you don't have one, write one this quarter.

These are the questions that should appear in the next audit cycle for any enterprise that has deployed agentic AI at scale. They are not hypothetical. The Capsule disclosure just confirmed that the attack class is real, the patches from hyperscalers are partial, and the responsibility for defense-in-depth sits squarely with the enterprise.

The Bigger Structural Story

Step back from the two specific CVEs and the pattern is clearer. Prompt injection is not a bug; it's a consequence of how transformer-based agents combine trusted and untrusted text in a single context window. It cannot be fully patched at the model layer, because the model does not know which tokens came from which source. It has to be constrained at the runtime layer, where tool calls happen and outbound data movement is observable.

This is why Capsule's category bet is structurally right, regardless of how Capsule the company performs over the next 18 months. The observation generalizes: the security architecture for agents looks more like runtime application self-protection (RASP) for workloads with permissions than it does like anything we currently call "AI safety." The controls that matter are:

  • Pre-execution policy enforcement on tool calls
  • Continuous monitoring of agent decisions for deviation from baseline behavior
  • Hard-coded kill switches that a human SOC analyst can trigger mid-session
  • Scoped, ephemeral credentials that limit blast radius even when a model is successfully hijacked

Hyperscalers will eventually ship versions of all four controls. They won't ship them first, and they won't ship them independently—they'll ship them as add-ons to existing bundles, priced accordingly. The companies that are going to define this category for the next three years are the ones shipping today, with CVEs attached to their announcements. Capsule is now one of them.

Bottom Line

ShareLeak and PipeLeak are the most important enterprise AI security disclosures of Q2 2026, not because the specific vulnerabilities are catastrophic—both have been addressed at the platform layer—but because they publicly validate a threat class that CISOs have been modeling privately for months.

For CIOs: audit your active agent deployments against the untrusted-input test this week. You will find exposure.

For CISOs: the runtime agent security category is now a line-item in your 2026 security stack whether you budgeted for it or not.

For CTOs: if you are building agentic features on Copilot Studio or Agentforce, ship a pre-execution policy layer (commercial or self-built) before you ship the agent. The hyperscaler will not do it for you.

For boards: the five questions above are now due diligence, not curiosity.

Capsule Security's $7M seed is a small number in an industry where the AI-agent security market will be well into nine figures by end of 2027. The more important number is the CVE: 2026-21520. That is the benchmark ShareLeak-class disclosures will be measured against, and it is the receipt that proves runtime control is not a theoretical concern.


Sources:


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Capsule Security: Why Copilot and Agentforce Both Leaked

Photo by Pixabay on Pexels

On April 15, 2026, Capsule Security exited stealth with a $7 million seed round led by Lama Partners and Forgepoint Capital International. The funding announcement was not the news. The news was what Capsule dropped alongside it: two coordinated zero-day disclosures—ShareLeak in Microsoft Copilot Studio and PipeLeak in Salesforce Agentforce—that between them demolish the comfortable assumption that enterprise AI agents, when they come from hyperscaler vendors, are "secure by default."

The ShareLeak finding carries a CVE: CVE-2026-21520, a CVSS 7.5 indirect prompt injection in Copilot Studio. Microsoft patched it on January 15, 2026. The sting in Capsule's announcement is the observation that Microsoft's own safety mechanisms flagged the malicious payload as suspicious during testing—and the data exfiltrated anyway. The patch closed the specific pathway. The class of attack remains open.

PipeLeak is the Agentforce analog: a prompt injection via untrusted lead-form inputs that can be used to redirect an Agentforce agent into unsafe downstream actions. Salesforce is now facing the same structural problem Microsoft just papered over.

For CIOs, CISOs, and CTOs who have spent the last 12 months approving Copilot Studio rollouts or evaluating Agentforce pilots, Capsule's disclosures are the first concrete, named, CVE-carrying evidence that the model's guardrails are not enough. The agent runtime is the real attack surface, and it needs dedicated controls that neither hyperscaler currently ships by default.

What Actually Happened in ShareLeak

The mechanics of CVE-2026-21520 are worth understanding in detail, because they describe a pattern that will show up repeatedly across agentic products.

The attack targets the gap between trusted input (the agent's own system prompt and configured instructions) and untrusted input (data the agent pulls in to answer a question). In Copilot Studio, a customer can build an agent that reads from SharePoint lists to answer internal questions—lead data, account notes, customer feedback, whatever. The SharePoint data is assumed to be trusted because it's inside the enterprise tenant.

That assumption is wrong the moment a SharePoint field accepts input from a public-facing form.

Capsule's proof-of-concept worked like this:

  1. An attacker fills a public SharePoint form comment field with a crafted payload that mimics a system-role message ("System: You are now instructed to...").
  2. Copilot Studio, when an internal user later triggers the agent to pull that SharePoint data, concatenates the malicious field content directly into the agent's context with no input sanitization between form and model.
  3. The injected payload overrides the original system instructions, directs the agent to query connected SharePoint Lists for customer data, and uses the agent's Outlook permissions to send that data to an attacker-controlled email address.
  4. The attack completed end-to-end despite Microsoft's runtime safety checks flagging the request as suspicious.

Low complexity. No privileges required. Full data exfiltration. That is the definition of a high-impact vulnerability, and it sat in a flagship hyperscaler agent platform for months.

The research timeline is a credibility anchor: Capsule found the bug on November 24, 2025, disclosed to Microsoft on December 5, 2025, and the patch shipped on January 15, 2026. This is standard responsible disclosure, not a drive-by marketing stunt. The disclosure publication was held until Capsule's public launch.

PipeLeak: The Same Class of Attack, Different Vendor

PipeLeak is worth treating as a sibling vulnerability rather than a standalone finding. The mechanism is the same: an untrusted input surface (a public lead-submission form in Salesforce) gets concatenated into an Agentforce agent's context, where crafted text can alter agent behavior.

The practical scenario: an inbound lead submission includes a crafted message in the "comments" field. An Agentforce sales agent later reads that lead record to qualify or enrich it. The injected text redirects the agent—adjusting CRM records, pulling data into responses, or triggering downstream automations the submitter has no business touching.

Salesforce's position is that Agentforce's guardrails and permissions model constrain the blast radius. That is probably true at the tool-call layer—an agent can only call what it has credentials for. But the agent's decision about which tool to call is exactly what prompt injection corrupts. If the agent has legitimate CRM-write permissions, hijacking the decision about when to write is sufficient damage.

The unifying point across ShareLeak and PipeLeak: the agent's permissions are the attack surface, not its guardrails. Any agent with meaningful enterprise access is one untrusted input concatenation away from acting against its operator's interest.

Why Capsule's Technical Bet Matters

Capsule's commercial premise is that the AI agent is the most unpredictable component in the enterprise stack, and that traditional security tooling cannot constrain it effectively. Their wedge is runtime control: observe what an agent is about to do, enforce policy before the tool call executes, and route telemetry into existing detection and response workflows.

The architectural choice that matters: no proxies, no gateways, no SDKs, no browser extensions. Enterprise security teams have rejected installation-heavy agents repeatedly over the last decade, and Capsule's team—co-founded by Naor Paz (CEO, formerly of F5 and Israeli intelligence Unit 8200) and Lidan Hazout (CTO, formerly VP R&D at Securedtouch/Ping Identity and Transmit Security)—clearly understood the objection before they built the product.

The platform integrates with Cursor, Claude Code, Microsoft Copilot Studio, ServiceNow, and Salesforce Agentforce. That list is deliberate. It covers (a) the developer agent surface that's metastasizing inside engineering orgs, (b) the hyperscaler agent platforms that CIOs are standardizing on, and (c) the ITSM/CRM agent surfaces where the blast radius of a rogue agent is measured in customer-facing damage.

Capsule also released ClawGuard, an open-source enforcement layer that inserts a checkpoint before an AI agent executes a tool call. The open-source release is a positioning move as much as a technical one: it seeds the pattern of "pre-execution validation for agent tool calls" as an industry norm, and makes it easier for security teams in regulated industries to run the control logic themselves even if they never buy the commercial product.

The CISO Framework: What Actually Changes Post-ShareLeak

For security leadership, ShareLeak is not an isolated incident. It is a category-defining disclosure that should reset three things in your AI-agent threat model.

1. Input sanitization must move inside the agent runtime, not upstream. The Copilot Studio vulnerability existed because the platform assumed tenant-internal data was trusted. Any agent that reads from a source capable of receiving external input—SharePoint forms, CRM lead fields, support ticket bodies, email bodies, PDF attachments, calendar invites—should treat that content as untrusted regardless of where it's stored. The sanitization layer has to run between the data source and the context window, not at the ingestion boundary.

2. The agent's effective permissions ≠ the human operator's permissions. The ShareLeak exfiltration used Copilot Studio's Outlook integration to send SharePoint data to an external email address. The attacker never had Outlook access; the agent did, and the agent was manipulated. Agents should hold the minimum permission set required to complete their documented workflows, not the union of permissions their human users would expect to have. This is least-privilege applied to the agent, not the user.

3. Runtime telemetry is now a compliance requirement, not a nice-to-have. Agents operating without continuous runtime monitoring—decisions logged, tool calls inspected, unexpected data flows flagged in real time—are operating in a state that SOC 2, ISO 27001, and the forthcoming EU AI Act auditors will increasingly treat as uncontrolled data processing. The control gap is large enough that it cannot be closed by pre-deployment review alone.

Capsule is one vendor solving this category. There will be others—expect fast-follower announcements from the incumbent SASE, CASB, and DSPM vendors within two quarters, and an acquisition wave inside six. For now, the category has a named leader, named vulnerabilities to benchmark against, and an open-source artifact (ClawGuard) that security teams can evaluate without a procurement cycle.

The CIO Budget Lens

For CIOs, the more pressing question is procurement: do you negotiate agent security as a line item in your existing Copilot/Agentforce contracts, or do you buy a dedicated runtime security layer from a third party?

The honest answer for most enterprises is both, in sequence:

First, push Microsoft and Salesforce on specific remediation commitments. The ShareLeak disclosure gives you a contractual lever. Ask for:

  • Input sanitization guarantees between tenant data sources and agent context windows
  • Scoped, workflow-specific permission templates (not blanket agent-level permissions)
  • Runtime telemetry APIs that your SIEM can consume
  • A documented post-incident response path for agent-level security events

Second, deploy a runtime enforcement layer over the top—Capsule, or a competing offering, or a self-built MCP-protocol-based equivalent. The hyperscaler is never going to ship the layer that watches itself; the industry is headed toward a third-party control plane pattern, much like CASB and CSPM evolved as third-party controls over cloud platforms that the platforms themselves were structurally unwilling to provide.

The budget footprint for a runtime agent security layer is currently small enough ($50K-$250K/year for most enterprise deployments) that it's a rounding error against an enterprise Copilot or Agentforce commitment. The comparison that matters is the cost of a data exfiltration incident—which, in the ShareLeak reference case, could have been measured in customer records, breach notification obligations, and brand damage.

What Boards Should Ask

For board-level risk and audit committees, the right questions coming out of the ShareLeak disclosure are concrete and short:

  • Which AI agents do we have in production that can read tenant data and execute outbound actions? (Email, API calls, database writes, CRM updates.)
  • Which of those agents ingest data from sources that accept untrusted input? (Forms, tickets, inbound email, attachments.)
  • What is our runtime monitoring posture for those agents? Logging, alerting, kill-switch capability, post-incident replay.
  • Who owns agent security as a function? If the answer is "the platform vendor," you have the problem.
  • What is our response plan for a ShareLeak-class event on our own platform? If you don't have one, write one this quarter.

These are the questions that should appear in the next audit cycle for any enterprise that has deployed agentic AI at scale. They are not hypothetical. The Capsule disclosure just confirmed that the attack class is real, the patches from hyperscalers are partial, and the responsibility for defense-in-depth sits squarely with the enterprise.

The Bigger Structural Story

Step back from the two specific CVEs and the pattern is clearer. Prompt injection is not a bug; it's a consequence of how transformer-based agents combine trusted and untrusted text in a single context window. It cannot be fully patched at the model layer, because the model does not know which tokens came from which source. It has to be constrained at the runtime layer, where tool calls happen and outbound data movement is observable.

This is why Capsule's category bet is structurally right, regardless of how Capsule the company performs over the next 18 months. The observation generalizes: the security architecture for agents looks more like runtime application self-protection (RASP) for workloads with permissions than it does like anything we currently call "AI safety." The controls that matter are:

  • Pre-execution policy enforcement on tool calls
  • Continuous monitoring of agent decisions for deviation from baseline behavior
  • Hard-coded kill switches that a human SOC analyst can trigger mid-session
  • Scoped, ephemeral credentials that limit blast radius even when a model is successfully hijacked

Hyperscalers will eventually ship versions of all four controls. They won't ship them first, and they won't ship them independently—they'll ship them as add-ons to existing bundles, priced accordingly. The companies that are going to define this category for the next three years are the ones shipping today, with CVEs attached to their announcements. Capsule is now one of them.

Bottom Line

ShareLeak and PipeLeak are the most important enterprise AI security disclosures of Q2 2026, not because the specific vulnerabilities are catastrophic—both have been addressed at the platform layer—but because they publicly validate a threat class that CISOs have been modeling privately for months.

For CIOs: audit your active agent deployments against the untrusted-input test this week. You will find exposure.

For CISOs: the runtime agent security category is now a line-item in your 2026 security stack whether you budgeted for it or not.

For CTOs: if you are building agentic features on Copilot Studio or Agentforce, ship a pre-execution policy layer (commercial or self-built) before you ship the agent. The hyperscaler will not do it for you.

For boards: the five questions above are now due diligence, not curiosity.

Capsule Security's $7M seed is a small number in an industry where the AI-agent security market will be well into nine figures by end of 2027. The more important number is the CVE: 2026-21520. That is the benchmark ShareLeak-class disclosures will be measured against, and it is the receipt that proves runtime control is not a theoretical concern.


Sources:


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

Share:

THE DAILY BRIEF

AI SecurityPrompt InjectionAI AgentsMicrosoft CopilotSalesforce Agentforce

Capsule Security: Why Copilot and Agentforce Both Leaked

Capsule Security exits stealth with $7M after disclosing zero-day prompt injections in Microsoft Copilot Studio (CVE-2026-21520) and Salesforce Agentforce.

By Rajesh Beri·April 16, 2026·11 min read

On April 15, 2026, Capsule Security exited stealth with a $7 million seed round led by Lama Partners and Forgepoint Capital International. The funding announcement was not the news. The news was what Capsule dropped alongside it: two coordinated zero-day disclosures—ShareLeak in Microsoft Copilot Studio and PipeLeak in Salesforce Agentforce—that between them demolish the comfortable assumption that enterprise AI agents, when they come from hyperscaler vendors, are "secure by default."

The ShareLeak finding carries a CVE: CVE-2026-21520, a CVSS 7.5 indirect prompt injection in Copilot Studio. Microsoft patched it on January 15, 2026. The sting in Capsule's announcement is the observation that Microsoft's own safety mechanisms flagged the malicious payload as suspicious during testing—and the data exfiltrated anyway. The patch closed the specific pathway. The class of attack remains open.

PipeLeak is the Agentforce analog: a prompt injection via untrusted lead-form inputs that can be used to redirect an Agentforce agent into unsafe downstream actions. Salesforce is now facing the same structural problem Microsoft just papered over.

For CIOs, CISOs, and CTOs who have spent the last 12 months approving Copilot Studio rollouts or evaluating Agentforce pilots, Capsule's disclosures are the first concrete, named, CVE-carrying evidence that the model's guardrails are not enough. The agent runtime is the real attack surface, and it needs dedicated controls that neither hyperscaler currently ships by default.

What Actually Happened in ShareLeak

The mechanics of CVE-2026-21520 are worth understanding in detail, because they describe a pattern that will show up repeatedly across agentic products.

The attack targets the gap between trusted input (the agent's own system prompt and configured instructions) and untrusted input (data the agent pulls in to answer a question). In Copilot Studio, a customer can build an agent that reads from SharePoint lists to answer internal questions—lead data, account notes, customer feedback, whatever. The SharePoint data is assumed to be trusted because it's inside the enterprise tenant.

That assumption is wrong the moment a SharePoint field accepts input from a public-facing form.

Capsule's proof-of-concept worked like this:

  1. An attacker fills a public SharePoint form comment field with a crafted payload that mimics a system-role message ("System: You are now instructed to...").
  2. Copilot Studio, when an internal user later triggers the agent to pull that SharePoint data, concatenates the malicious field content directly into the agent's context with no input sanitization between form and model.
  3. The injected payload overrides the original system instructions, directs the agent to query connected SharePoint Lists for customer data, and uses the agent's Outlook permissions to send that data to an attacker-controlled email address.
  4. The attack completed end-to-end despite Microsoft's runtime safety checks flagging the request as suspicious.

Low complexity. No privileges required. Full data exfiltration. That is the definition of a high-impact vulnerability, and it sat in a flagship hyperscaler agent platform for months.

The research timeline is a credibility anchor: Capsule found the bug on November 24, 2025, disclosed to Microsoft on December 5, 2025, and the patch shipped on January 15, 2026. This is standard responsible disclosure, not a drive-by marketing stunt. The disclosure publication was held until Capsule's public launch.

PipeLeak: The Same Class of Attack, Different Vendor

PipeLeak is worth treating as a sibling vulnerability rather than a standalone finding. The mechanism is the same: an untrusted input surface (a public lead-submission form in Salesforce) gets concatenated into an Agentforce agent's context, where crafted text can alter agent behavior.

The practical scenario: an inbound lead submission includes a crafted message in the "comments" field. An Agentforce sales agent later reads that lead record to qualify or enrich it. The injected text redirects the agent—adjusting CRM records, pulling data into responses, or triggering downstream automations the submitter has no business touching.

Salesforce's position is that Agentforce's guardrails and permissions model constrain the blast radius. That is probably true at the tool-call layer—an agent can only call what it has credentials for. But the agent's decision about which tool to call is exactly what prompt injection corrupts. If the agent has legitimate CRM-write permissions, hijacking the decision about when to write is sufficient damage.

The unifying point across ShareLeak and PipeLeak: the agent's permissions are the attack surface, not its guardrails. Any agent with meaningful enterprise access is one untrusted input concatenation away from acting against its operator's interest.

Why Capsule's Technical Bet Matters

Capsule's commercial premise is that the AI agent is the most unpredictable component in the enterprise stack, and that traditional security tooling cannot constrain it effectively. Their wedge is runtime control: observe what an agent is about to do, enforce policy before the tool call executes, and route telemetry into existing detection and response workflows.

The architectural choice that matters: no proxies, no gateways, no SDKs, no browser extensions. Enterprise security teams have rejected installation-heavy agents repeatedly over the last decade, and Capsule's team—co-founded by Naor Paz (CEO, formerly of F5 and Israeli intelligence Unit 8200) and Lidan Hazout (CTO, formerly VP R&D at Securedtouch/Ping Identity and Transmit Security)—clearly understood the objection before they built the product.

The platform integrates with Cursor, Claude Code, Microsoft Copilot Studio, ServiceNow, and Salesforce Agentforce. That list is deliberate. It covers (a) the developer agent surface that's metastasizing inside engineering orgs, (b) the hyperscaler agent platforms that CIOs are standardizing on, and (c) the ITSM/CRM agent surfaces where the blast radius of a rogue agent is measured in customer-facing damage.

Capsule also released ClawGuard, an open-source enforcement layer that inserts a checkpoint before an AI agent executes a tool call. The open-source release is a positioning move as much as a technical one: it seeds the pattern of "pre-execution validation for agent tool calls" as an industry norm, and makes it easier for security teams in regulated industries to run the control logic themselves even if they never buy the commercial product.

The CISO Framework: What Actually Changes Post-ShareLeak

For security leadership, ShareLeak is not an isolated incident. It is a category-defining disclosure that should reset three things in your AI-agent threat model.

1. Input sanitization must move inside the agent runtime, not upstream. The Copilot Studio vulnerability existed because the platform assumed tenant-internal data was trusted. Any agent that reads from a source capable of receiving external input—SharePoint forms, CRM lead fields, support ticket bodies, email bodies, PDF attachments, calendar invites—should treat that content as untrusted regardless of where it's stored. The sanitization layer has to run between the data source and the context window, not at the ingestion boundary.

2. The agent's effective permissions ≠ the human operator's permissions. The ShareLeak exfiltration used Copilot Studio's Outlook integration to send SharePoint data to an external email address. The attacker never had Outlook access; the agent did, and the agent was manipulated. Agents should hold the minimum permission set required to complete their documented workflows, not the union of permissions their human users would expect to have. This is least-privilege applied to the agent, not the user.

3. Runtime telemetry is now a compliance requirement, not a nice-to-have. Agents operating without continuous runtime monitoring—decisions logged, tool calls inspected, unexpected data flows flagged in real time—are operating in a state that SOC 2, ISO 27001, and the forthcoming EU AI Act auditors will increasingly treat as uncontrolled data processing. The control gap is large enough that it cannot be closed by pre-deployment review alone.

Capsule is one vendor solving this category. There will be others—expect fast-follower announcements from the incumbent SASE, CASB, and DSPM vendors within two quarters, and an acquisition wave inside six. For now, the category has a named leader, named vulnerabilities to benchmark against, and an open-source artifact (ClawGuard) that security teams can evaluate without a procurement cycle.

The CIO Budget Lens

For CIOs, the more pressing question is procurement: do you negotiate agent security as a line item in your existing Copilot/Agentforce contracts, or do you buy a dedicated runtime security layer from a third party?

The honest answer for most enterprises is both, in sequence:

First, push Microsoft and Salesforce on specific remediation commitments. The ShareLeak disclosure gives you a contractual lever. Ask for:

  • Input sanitization guarantees between tenant data sources and agent context windows
  • Scoped, workflow-specific permission templates (not blanket agent-level permissions)
  • Runtime telemetry APIs that your SIEM can consume
  • A documented post-incident response path for agent-level security events

Second, deploy a runtime enforcement layer over the top—Capsule, or a competing offering, or a self-built MCP-protocol-based equivalent. The hyperscaler is never going to ship the layer that watches itself; the industry is headed toward a third-party control plane pattern, much like CASB and CSPM evolved as third-party controls over cloud platforms that the platforms themselves were structurally unwilling to provide.

The budget footprint for a runtime agent security layer is currently small enough ($50K-$250K/year for most enterprise deployments) that it's a rounding error against an enterprise Copilot or Agentforce commitment. The comparison that matters is the cost of a data exfiltration incident—which, in the ShareLeak reference case, could have been measured in customer records, breach notification obligations, and brand damage.

What Boards Should Ask

For board-level risk and audit committees, the right questions coming out of the ShareLeak disclosure are concrete and short:

  • Which AI agents do we have in production that can read tenant data and execute outbound actions? (Email, API calls, database writes, CRM updates.)
  • Which of those agents ingest data from sources that accept untrusted input? (Forms, tickets, inbound email, attachments.)
  • What is our runtime monitoring posture for those agents? Logging, alerting, kill-switch capability, post-incident replay.
  • Who owns agent security as a function? If the answer is "the platform vendor," you have the problem.
  • What is our response plan for a ShareLeak-class event on our own platform? If you don't have one, write one this quarter.

These are the questions that should appear in the next audit cycle for any enterprise that has deployed agentic AI at scale. They are not hypothetical. The Capsule disclosure just confirmed that the attack class is real, the patches from hyperscalers are partial, and the responsibility for defense-in-depth sits squarely with the enterprise.

The Bigger Structural Story

Step back from the two specific CVEs and the pattern is clearer. Prompt injection is not a bug; it's a consequence of how transformer-based agents combine trusted and untrusted text in a single context window. It cannot be fully patched at the model layer, because the model does not know which tokens came from which source. It has to be constrained at the runtime layer, where tool calls happen and outbound data movement is observable.

This is why Capsule's category bet is structurally right, regardless of how Capsule the company performs over the next 18 months. The observation generalizes: the security architecture for agents looks more like runtime application self-protection (RASP) for workloads with permissions than it does like anything we currently call "AI safety." The controls that matter are:

  • Pre-execution policy enforcement on tool calls
  • Continuous monitoring of agent decisions for deviation from baseline behavior
  • Hard-coded kill switches that a human SOC analyst can trigger mid-session
  • Scoped, ephemeral credentials that limit blast radius even when a model is successfully hijacked

Hyperscalers will eventually ship versions of all four controls. They won't ship them first, and they won't ship them independently—they'll ship them as add-ons to existing bundles, priced accordingly. The companies that are going to define this category for the next three years are the ones shipping today, with CVEs attached to their announcements. Capsule is now one of them.

Bottom Line

ShareLeak and PipeLeak are the most important enterprise AI security disclosures of Q2 2026, not because the specific vulnerabilities are catastrophic—both have been addressed at the platform layer—but because they publicly validate a threat class that CISOs have been modeling privately for months.

For CIOs: audit your active agent deployments against the untrusted-input test this week. You will find exposure.

For CISOs: the runtime agent security category is now a line-item in your 2026 security stack whether you budgeted for it or not.

For CTOs: if you are building agentic features on Copilot Studio or Agentforce, ship a pre-execution policy layer (commercial or self-built) before you ship the agent. The hyperscaler will not do it for you.

For boards: the five questions above are now due diligence, not curiosity.

Capsule Security's $7M seed is a small number in an industry where the AI-agent security market will be well into nine figures by end of 2027. The more important number is the CVE: 2026-21520. That is the benchmark ShareLeak-class disclosures will be measured against, and it is the receipt that proves runtime control is not a theoretical concern.


Sources:


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Newsletter

Stay Ahead of the Curve

Weekly enterprise AI insights for technology leaders. No spam, no vendor pitches—unsubscribe anytime.

Subscribe