The 44% Visibility Gap: Enterprise's Hidden AI Agent Crisis

Nokod's 2026 survey of 200 CISOs finds security teams see only 44% of business-built AI agents. 80% lack visibility, citizen builders outnumber pros 4:1.

By Rajesh Beri·April 27, 2026·11 min read
Share:

THE DAILY BRIEF

Shadow AIAI Agent SecurityEnterprise AI GovernanceCitizen DevelopersLow-Code Security

The 44% Visibility Gap: Enterprise's Hidden AI Agent Crisis

Nokod's 2026 survey of 200 CISOs finds security teams see only 44% of business-built AI agents. 80% lack visibility, citizen builders outnumber pros 4:1.

By Rajesh Beri·April 27, 2026·11 min read

For every professional developer in your enterprise, four business users are now building applications, AI agents, and automations on platforms like Microsoft Copilot Studio, Power Platform, ServiceNow, and UiPath. In some organizations, the ratio is ten to one. And your security team can see fewer than half of them.

That is the central finding of Nokod Security's 2026 State of Security in Business-Built Applications and AI Agents Survey, released April 27, 2026. The survey of 200 enterprise CISOs paints a picture that should alarm anyone responsible for AI risk: security teams can track only 44% of AI tools handling sensitive enterprise data, and 80% admit they lack full visibility into what business users are building.

This is not the familiar "shadow AI" story of employees pasting customer data into ChatGPT. This is a structurally different problem—shadow engineering—where business users are not just consuming AI, they are building production-grade agents and automations on enterprise-sanctioned platforms. The platforms are approved. The governance around what gets built on them is not.

For Rajesh Beri's team at Zscaler and every other AI engineering organization wrestling with this exact issue, Nokod's data confirms what we have been seeing in practice: the vibe-coding problem has scaled past the point where AppSec, DLP, and traditional change management can contain it.

The Numbers: A Visibility Crisis Hiding in Plain Sight

Nokod surveyed 200 CISOs at large enterprises. The findings:

  • 44% — share of AI tools handling sensitive data that security teams can actually track
  • 80% — share of security teams that lack full visibility into business-built apps and AI agents
  • 4:1 — average ratio of business builders to professional developers per organization
  • 10:1 — ratio in some heavily-Power-Platform organizations
  • 50%+ — share of CISOs who confirm business users build apps that support critical business processes
  • 90% — share of enterprises planning to standardize AI tool security in 2026
  • 67% — already allocate budget for securing business-built apps; 15% growth expected this year

Yair Finzi, Nokod's CEO, framed the takeaway bluntly: "Security teams are losing a race they don't even realize they are running." The platforms in question—Microsoft Copilot Studio, Power Automate, ServiceNow, UiPath, Salesforce, Retool—were sanctioned to accelerate the business. They have done that. They have also created the largest unmonitored attack surface most enterprises have ever owned.

Shadow AI vs. Shadow Engineering: Two Different Problems

The Nokod findings sit on top of a separate but related dataset. Lenovo's Work Reborn Report released the same morning found that 70% of employees use AI weekly and one-third operate beyond IT oversight. Microsoft's first-party telemetry from late 2025 already showed 80% of Fortune 500 companies running active AI agents built in Copilot Studio.

These are not the same problem.

Shadow AI is consumption: an employee pastes a customer call transcript into a free Claude or Gemini tab. The risk is data egress. The fix is gateway controls and DLP—Zscaler, Palo Alto Prisma AIRS, Cloudflare AI Gateway. We know how to do this.

Shadow engineering is production: a regional sales ops manager builds a Copilot Studio agent that reads from Dataverse, calls a Salesforce connector, queries an internal pricing API, and emails customers based on the output. The agent inherits her permissions, runs on a sanctioned platform, generates a Power Platform connection record IT can technically see—but no one is reviewing what it does, what it can be tricked into doing, or what data it touches.

The Nokod data tells us shadow engineering is now bigger, by builder headcount, than the entire professional developer population at most enterprises. Gartner's prediction that 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025, is not happening through formal SDLC. It is happening through Copilot Studio prompt boxes, ServiceNow Now Assist, and Power Automate flows.

Why Traditional AppSec Cannot See This

I want to be specific about why this slips through, because the answer matters for what to actually do about it.

Traditional AppSec instruments three things: code repositories (SAST/SCA), running services (DAST/RASP), and the network (WAF, gateway). Business-built agents touch none of these surfaces in a way that AppSec recognizes.

A Copilot Studio agent is not in a Git repo. It is a topic graph stored in Dataverse. A Power Automate flow is not a deployable artifact—it is a JSON definition tied to environment connections. A ServiceNow Now Assist skill is configured in the platform's catalog. A UiPath bot lives in Orchestrator. None of these emit findings to your existing scanner stack unless you have explicitly stood up platform-specific tooling.

Worse, the risk surface of these agents is qualitatively different. Where traditional applications fail through CVEs and injection bugs, agentic systems fail through:

  • Prompt injection via untrusted data sources — an agent that reads emails or tickets can be hijacked by content in those records
  • Permission inheritance and over-scoping — agents run with the builder's identity, often a power user with broad data access
  • Tool/connector chaining — an agent with read access to CRM and write access to email becomes an exfiltration primitive
  • Data residency and oversharing — agents trained or grounded on internal corpora leak across business unit boundaries

Microsoft's own zoned governance guidance for Copilot Studio acknowledges this: Zone 1 is the "citizen development zone" where anyone can build personal and team agents. Zone 1 is also where almost everything is being built, and almost nothing is being reviewed.

The CFO Math: $492M Becoming $1B

Gartner forecasts AI governance spending will hit $492 million in 2026 and surpass $1 billion by 2030. Read alongside Nokod's finding that 67% of enterprises already budget for business-built app security with 15% YoY growth, this is no longer a discretionary line item. It has become a category.

The cost driver is not the tooling itself—Nokod, Zenity, AppOmni, and the platform-native governance from Microsoft Purview and ServiceNow AI Control Tower are reasonably priced relative to enterprise security stacks. The cost driver is the control plane work: discovery, inventorying, policy authoring, exception handling, and the human-in-the-loop review of who is allowed to build what.

The CFO question is whether to fund that control plane proactively or pay for it reactively after an incident. Nokod's number that 40% of agentic AI initiatives could be abandoned by 2027 if governance fundamentals fail (Gartner's projection) is the financial case in a single line. Pilots that cannot pass governance review do not reach production. Investment that does not reach production has no ROI.

Photo by Pixabay on Pexels

What CISOs Should Do This Quarter

Nokod's data is most actionable for security leaders. Three concrete moves:

1. Inventory before you control. You cannot govern what you cannot see. Start with platform-native discovery: Microsoft Purview's Copilot dashboard, Power Platform admin center DLP analytics, ServiceNow's Now Assist usage reports, UiPath Insights. Layer specialized tooling—Nokod, Zenity, or AppOmni—where coverage gaps remain. The goal is a single inventory of every business-built agent, its owner, its data scopes, and its connectors. Most enterprises are starting from zero here. Plan a 90-day discovery sprint.

2. Tier by blast radius, not by builder. The instinct is to lock down citizen development. That fails because it pushes builders to even less governed surfaces (browser-based agents, personal accounts). The better model is risk tiering by what an agent can touch: agents accessing PII, financial data, customer-facing channels, or production systems get full review; agents that summarize internal docs in a single team workspace get a lighter path. Microsoft's zoned governance gives you the primitive—use it.

3. Make agent identity a first-class control. Today, most business-built agents run as the builder. That means a sales rep's Copilot agent has all of her CRM, email, and Teams permissions. When she leaves, the agent breaks—or worse, keeps running. Move agents to dedicated service principals with scoped, audited permissions, and make this the default in your platform configuration. This single change collapses an entire class of insider and lateral movement risk.

What CIOs Should Do This Quarter

For CIOs balancing innovation against control, Nokod's findings are a forcing function but not a stop sign:

Sanction a paved road, then route traffic to it. The reason 4:1 builder ratios exist is that business users have real work to automate and the platforms make it easy. Trying to slow them down loses. Instead, make the governed path the easy path: a curated catalog of approved connectors, pre-built templates for common agent patterns, sandbox environments wired to non-production data, and clear graduation criteria to production. Every business unit gets the same starter kit.

Centralize agent observability. You already centralize logs, traces, and metrics for traditional applications. Do the same for agents. Tools like Weights & Biases, Langfuse, and platform-native observability (Power Platform Activity Logging, ServiceNow Performance Analytics) can pipe agent interactions—prompts, tool calls, outputs—into your SIEM and analytics stack. This is the difference between "we have AI agents somewhere" and "we know what every agent did yesterday."

Designate an agent owner of record. Every agent in production needs a named human accountable for it. Not a team. Not a distribution list. A person. When the agent breaks, leaks, or behaves badly, that person is paged. This sounds bureaucratic. It is the single highest-leverage governance control I have seen work.

What AI Engineering Teams Should Do

If you run AI engineering—as Rajesh does at Zscaler—your role is the enablement layer between security policy and business builders. The Nokod data argues for three priorities:

Build, do not buy, the discovery layer. Vendor tools are necessary but not sufficient. Your enterprise has unique connector usage, unique data classifications, and unique platform mixes. A thin internal service that ingests platform inventory APIs and joins them with your CMDB, identity provider, and data classification system gives you a defensible source of truth. Nokod and Zenity are accelerants for this; they are not replacements.

Create AI Guard equivalents for business-built agents. If your professional development teams already use prompt-injection, DLP, and content moderation guardrails (Zscaler AI Guard, Microsoft Prompt Shield, NVIDIA NeMo Guardrails), the same controls need to be available—mandatorily—for Copilot Studio and Power Platform builds. Negotiate a single guardrail SDK that platform admins can enforce as policy.

Run continuous red-team campaigns against the platforms, not just the models. SPLX-style automated red teaming should be applied to your top 100 business-built agents quarterly. Test for prompt injection via the data sources they read, permission abuse via their connectors, and exfiltration via their tool chains. Report results to platform owners, not just CISOs.

The Bottom Line

The Nokod 2026 survey crystallizes a transition that has been quietly underway for two years: enterprise application development has moved off the hands of professional engineers and into the hands of business users, with AI as the accelerant. The platforms enabling this—Microsoft Copilot Studio, Power Platform, ServiceNow, UiPath, Salesforce—are sanctioned, the builders are paid employees, and the use cases are real. What is missing is the control plane that makes any of this safe at scale.

44% visibility is not a baseline to celebrate—it is a starting line. The 67% of enterprises already budgeting for this and the 90% planning to standardize governance by year-end are buying time, not solving the problem. The work is now: inventory every agent, tier them by risk, give them their own identities, route builders to a paved road, and make AI engineering the team that closes the gap between platform capability and platform safety.

Yair Finzi was right. Security teams are losing a race they did not know they were running. The question is whether enterprises figure out where the finish line actually is before regulators, breaches, or abandoned pilots decide for them.

Sources

  1. The Invisible Enterprise AI Jungle: Nokod Survey Finds Security Teams Only See 44% of Apps, Agents, and Automations Built By Business Users (April 27, 2026)
  2. Nokod Security Platform Overview
  3. Microsoft Security Blog: 80% of Fortune 500 Use Active AI Agents
  4. Gartner: 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026
  5. Microsoft Copilot Studio Zoned Governance Guidance

How is your organization tracking business-built AI agents? Connect with me on LinkedIn, Twitter/X, or via the contact form to share your governance approach.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

The 44% Visibility Gap: Enterprise's Hidden AI Agent Crisis

Photo by Pixabay on Pexels

For every professional developer in your enterprise, four business users are now building applications, AI agents, and automations on platforms like Microsoft Copilot Studio, Power Platform, ServiceNow, and UiPath. In some organizations, the ratio is ten to one. And your security team can see fewer than half of them.

That is the central finding of Nokod Security's 2026 State of Security in Business-Built Applications and AI Agents Survey, released April 27, 2026. The survey of 200 enterprise CISOs paints a picture that should alarm anyone responsible for AI risk: security teams can track only 44% of AI tools handling sensitive enterprise data, and 80% admit they lack full visibility into what business users are building.

This is not the familiar "shadow AI" story of employees pasting customer data into ChatGPT. This is a structurally different problem—shadow engineering—where business users are not just consuming AI, they are building production-grade agents and automations on enterprise-sanctioned platforms. The platforms are approved. The governance around what gets built on them is not.

For Rajesh Beri's team at Zscaler and every other AI engineering organization wrestling with this exact issue, Nokod's data confirms what we have been seeing in practice: the vibe-coding problem has scaled past the point where AppSec, DLP, and traditional change management can contain it.

The Numbers: A Visibility Crisis Hiding in Plain Sight

Nokod surveyed 200 CISOs at large enterprises. The findings:

  • 44% — share of AI tools handling sensitive data that security teams can actually track
  • 80% — share of security teams that lack full visibility into business-built apps and AI agents
  • 4:1 — average ratio of business builders to professional developers per organization
  • 10:1 — ratio in some heavily-Power-Platform organizations
  • 50%+ — share of CISOs who confirm business users build apps that support critical business processes
  • 90% — share of enterprises planning to standardize AI tool security in 2026
  • 67% — already allocate budget for securing business-built apps; 15% growth expected this year

Yair Finzi, Nokod's CEO, framed the takeaway bluntly: "Security teams are losing a race they don't even realize they are running." The platforms in question—Microsoft Copilot Studio, Power Automate, ServiceNow, UiPath, Salesforce, Retool—were sanctioned to accelerate the business. They have done that. They have also created the largest unmonitored attack surface most enterprises have ever owned.

Shadow AI vs. Shadow Engineering: Two Different Problems

The Nokod findings sit on top of a separate but related dataset. Lenovo's Work Reborn Report released the same morning found that 70% of employees use AI weekly and one-third operate beyond IT oversight. Microsoft's first-party telemetry from late 2025 already showed 80% of Fortune 500 companies running active AI agents built in Copilot Studio.

These are not the same problem.

Shadow AI is consumption: an employee pastes a customer call transcript into a free Claude or Gemini tab. The risk is data egress. The fix is gateway controls and DLP—Zscaler, Palo Alto Prisma AIRS, Cloudflare AI Gateway. We know how to do this.

Shadow engineering is production: a regional sales ops manager builds a Copilot Studio agent that reads from Dataverse, calls a Salesforce connector, queries an internal pricing API, and emails customers based on the output. The agent inherits her permissions, runs on a sanctioned platform, generates a Power Platform connection record IT can technically see—but no one is reviewing what it does, what it can be tricked into doing, or what data it touches.

The Nokod data tells us shadow engineering is now bigger, by builder headcount, than the entire professional developer population at most enterprises. Gartner's prediction that 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025, is not happening through formal SDLC. It is happening through Copilot Studio prompt boxes, ServiceNow Now Assist, and Power Automate flows.

Why Traditional AppSec Cannot See This

I want to be specific about why this slips through, because the answer matters for what to actually do about it.

Traditional AppSec instruments three things: code repositories (SAST/SCA), running services (DAST/RASP), and the network (WAF, gateway). Business-built agents touch none of these surfaces in a way that AppSec recognizes.

A Copilot Studio agent is not in a Git repo. It is a topic graph stored in Dataverse. A Power Automate flow is not a deployable artifact—it is a JSON definition tied to environment connections. A ServiceNow Now Assist skill is configured in the platform's catalog. A UiPath bot lives in Orchestrator. None of these emit findings to your existing scanner stack unless you have explicitly stood up platform-specific tooling.

Worse, the risk surface of these agents is qualitatively different. Where traditional applications fail through CVEs and injection bugs, agentic systems fail through:

  • Prompt injection via untrusted data sources — an agent that reads emails or tickets can be hijacked by content in those records
  • Permission inheritance and over-scoping — agents run with the builder's identity, often a power user with broad data access
  • Tool/connector chaining — an agent with read access to CRM and write access to email becomes an exfiltration primitive
  • Data residency and oversharing — agents trained or grounded on internal corpora leak across business unit boundaries

Microsoft's own zoned governance guidance for Copilot Studio acknowledges this: Zone 1 is the "citizen development zone" where anyone can build personal and team agents. Zone 1 is also where almost everything is being built, and almost nothing is being reviewed.

The CFO Math: $492M Becoming $1B

Gartner forecasts AI governance spending will hit $492 million in 2026 and surpass $1 billion by 2030. Read alongside Nokod's finding that 67% of enterprises already budget for business-built app security with 15% YoY growth, this is no longer a discretionary line item. It has become a category.

The cost driver is not the tooling itself—Nokod, Zenity, AppOmni, and the platform-native governance from Microsoft Purview and ServiceNow AI Control Tower are reasonably priced relative to enterprise security stacks. The cost driver is the control plane work: discovery, inventorying, policy authoring, exception handling, and the human-in-the-loop review of who is allowed to build what.

The CFO question is whether to fund that control plane proactively or pay for it reactively after an incident. Nokod's number that 40% of agentic AI initiatives could be abandoned by 2027 if governance fundamentals fail (Gartner's projection) is the financial case in a single line. Pilots that cannot pass governance review do not reach production. Investment that does not reach production has no ROI.

AI agent visibility and governance Photo by Pixabay on Pexels

What CISOs Should Do This Quarter

Nokod's data is most actionable for security leaders. Three concrete moves:

1. Inventory before you control. You cannot govern what you cannot see. Start with platform-native discovery: Microsoft Purview's Copilot dashboard, Power Platform admin center DLP analytics, ServiceNow's Now Assist usage reports, UiPath Insights. Layer specialized tooling—Nokod, Zenity, or AppOmni—where coverage gaps remain. The goal is a single inventory of every business-built agent, its owner, its data scopes, and its connectors. Most enterprises are starting from zero here. Plan a 90-day discovery sprint.

2. Tier by blast radius, not by builder. The instinct is to lock down citizen development. That fails because it pushes builders to even less governed surfaces (browser-based agents, personal accounts). The better model is risk tiering by what an agent can touch: agents accessing PII, financial data, customer-facing channels, or production systems get full review; agents that summarize internal docs in a single team workspace get a lighter path. Microsoft's zoned governance gives you the primitive—use it.

3. Make agent identity a first-class control. Today, most business-built agents run as the builder. That means a sales rep's Copilot agent has all of her CRM, email, and Teams permissions. When she leaves, the agent breaks—or worse, keeps running. Move agents to dedicated service principals with scoped, audited permissions, and make this the default in your platform configuration. This single change collapses an entire class of insider and lateral movement risk.

What CIOs Should Do This Quarter

For CIOs balancing innovation against control, Nokod's findings are a forcing function but not a stop sign:

Sanction a paved road, then route traffic to it. The reason 4:1 builder ratios exist is that business users have real work to automate and the platforms make it easy. Trying to slow them down loses. Instead, make the governed path the easy path: a curated catalog of approved connectors, pre-built templates for common agent patterns, sandbox environments wired to non-production data, and clear graduation criteria to production. Every business unit gets the same starter kit.

Centralize agent observability. You already centralize logs, traces, and metrics for traditional applications. Do the same for agents. Tools like Weights & Biases, Langfuse, and platform-native observability (Power Platform Activity Logging, ServiceNow Performance Analytics) can pipe agent interactions—prompts, tool calls, outputs—into your SIEM and analytics stack. This is the difference between "we have AI agents somewhere" and "we know what every agent did yesterday."

Designate an agent owner of record. Every agent in production needs a named human accountable for it. Not a team. Not a distribution list. A person. When the agent breaks, leaks, or behaves badly, that person is paged. This sounds bureaucratic. It is the single highest-leverage governance control I have seen work.

What AI Engineering Teams Should Do

If you run AI engineering—as Rajesh does at Zscaler—your role is the enablement layer between security policy and business builders. The Nokod data argues for three priorities:

Build, do not buy, the discovery layer. Vendor tools are necessary but not sufficient. Your enterprise has unique connector usage, unique data classifications, and unique platform mixes. A thin internal service that ingests platform inventory APIs and joins them with your CMDB, identity provider, and data classification system gives you a defensible source of truth. Nokod and Zenity are accelerants for this; they are not replacements.

Create AI Guard equivalents for business-built agents. If your professional development teams already use prompt-injection, DLP, and content moderation guardrails (Zscaler AI Guard, Microsoft Prompt Shield, NVIDIA NeMo Guardrails), the same controls need to be available—mandatorily—for Copilot Studio and Power Platform builds. Negotiate a single guardrail SDK that platform admins can enforce as policy.

Run continuous red-team campaigns against the platforms, not just the models. SPLX-style automated red teaming should be applied to your top 100 business-built agents quarterly. Test for prompt injection via the data sources they read, permission abuse via their connectors, and exfiltration via their tool chains. Report results to platform owners, not just CISOs.

The Bottom Line

The Nokod 2026 survey crystallizes a transition that has been quietly underway for two years: enterprise application development has moved off the hands of professional engineers and into the hands of business users, with AI as the accelerant. The platforms enabling this—Microsoft Copilot Studio, Power Platform, ServiceNow, UiPath, Salesforce—are sanctioned, the builders are paid employees, and the use cases are real. What is missing is the control plane that makes any of this safe at scale.

44% visibility is not a baseline to celebrate—it is a starting line. The 67% of enterprises already budgeting for this and the 90% planning to standardize governance by year-end are buying time, not solving the problem. The work is now: inventory every agent, tier them by risk, give them their own identities, route builders to a paved road, and make AI engineering the team that closes the gap between platform capability and platform safety.

Yair Finzi was right. Security teams are losing a race they did not know they were running. The question is whether enterprises figure out where the finish line actually is before regulators, breaches, or abandoned pilots decide for them.

Sources

  1. The Invisible Enterprise AI Jungle: Nokod Survey Finds Security Teams Only See 44% of Apps, Agents, and Automations Built By Business Users (April 27, 2026)
  2. Nokod Security Platform Overview
  3. Microsoft Security Blog: 80% of Fortune 500 Use Active AI Agents
  4. Gartner: 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026
  5. Microsoft Copilot Studio Zoned Governance Guidance

How is your organization tracking business-built AI agents? Connect with me on LinkedIn, Twitter/X, or via the contact form to share your governance approach.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

Share:

THE DAILY BRIEF

Shadow AIAI Agent SecurityEnterprise AI GovernanceCitizen DevelopersLow-Code Security

The 44% Visibility Gap: Enterprise's Hidden AI Agent Crisis

Nokod's 2026 survey of 200 CISOs finds security teams see only 44% of business-built AI agents. 80% lack visibility, citizen builders outnumber pros 4:1.

By Rajesh Beri·April 27, 2026·11 min read

For every professional developer in your enterprise, four business users are now building applications, AI agents, and automations on platforms like Microsoft Copilot Studio, Power Platform, ServiceNow, and UiPath. In some organizations, the ratio is ten to one. And your security team can see fewer than half of them.

That is the central finding of Nokod Security's 2026 State of Security in Business-Built Applications and AI Agents Survey, released April 27, 2026. The survey of 200 enterprise CISOs paints a picture that should alarm anyone responsible for AI risk: security teams can track only 44% of AI tools handling sensitive enterprise data, and 80% admit they lack full visibility into what business users are building.

This is not the familiar "shadow AI" story of employees pasting customer data into ChatGPT. This is a structurally different problem—shadow engineering—where business users are not just consuming AI, they are building production-grade agents and automations on enterprise-sanctioned platforms. The platforms are approved. The governance around what gets built on them is not.

For Rajesh Beri's team at Zscaler and every other AI engineering organization wrestling with this exact issue, Nokod's data confirms what we have been seeing in practice: the vibe-coding problem has scaled past the point where AppSec, DLP, and traditional change management can contain it.

The Numbers: A Visibility Crisis Hiding in Plain Sight

Nokod surveyed 200 CISOs at large enterprises. The findings:

  • 44% — share of AI tools handling sensitive data that security teams can actually track
  • 80% — share of security teams that lack full visibility into business-built apps and AI agents
  • 4:1 — average ratio of business builders to professional developers per organization
  • 10:1 — ratio in some heavily-Power-Platform organizations
  • 50%+ — share of CISOs who confirm business users build apps that support critical business processes
  • 90% — share of enterprises planning to standardize AI tool security in 2026
  • 67% — already allocate budget for securing business-built apps; 15% growth expected this year

Yair Finzi, Nokod's CEO, framed the takeaway bluntly: "Security teams are losing a race they don't even realize they are running." The platforms in question—Microsoft Copilot Studio, Power Automate, ServiceNow, UiPath, Salesforce, Retool—were sanctioned to accelerate the business. They have done that. They have also created the largest unmonitored attack surface most enterprises have ever owned.

Shadow AI vs. Shadow Engineering: Two Different Problems

The Nokod findings sit on top of a separate but related dataset. Lenovo's Work Reborn Report released the same morning found that 70% of employees use AI weekly and one-third operate beyond IT oversight. Microsoft's first-party telemetry from late 2025 already showed 80% of Fortune 500 companies running active AI agents built in Copilot Studio.

These are not the same problem.

Shadow AI is consumption: an employee pastes a customer call transcript into a free Claude or Gemini tab. The risk is data egress. The fix is gateway controls and DLP—Zscaler, Palo Alto Prisma AIRS, Cloudflare AI Gateway. We know how to do this.

Shadow engineering is production: a regional sales ops manager builds a Copilot Studio agent that reads from Dataverse, calls a Salesforce connector, queries an internal pricing API, and emails customers based on the output. The agent inherits her permissions, runs on a sanctioned platform, generates a Power Platform connection record IT can technically see—but no one is reviewing what it does, what it can be tricked into doing, or what data it touches.

The Nokod data tells us shadow engineering is now bigger, by builder headcount, than the entire professional developer population at most enterprises. Gartner's prediction that 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025, is not happening through formal SDLC. It is happening through Copilot Studio prompt boxes, ServiceNow Now Assist, and Power Automate flows.

Why Traditional AppSec Cannot See This

I want to be specific about why this slips through, because the answer matters for what to actually do about it.

Traditional AppSec instruments three things: code repositories (SAST/SCA), running services (DAST/RASP), and the network (WAF, gateway). Business-built agents touch none of these surfaces in a way that AppSec recognizes.

A Copilot Studio agent is not in a Git repo. It is a topic graph stored in Dataverse. A Power Automate flow is not a deployable artifact—it is a JSON definition tied to environment connections. A ServiceNow Now Assist skill is configured in the platform's catalog. A UiPath bot lives in Orchestrator. None of these emit findings to your existing scanner stack unless you have explicitly stood up platform-specific tooling.

Worse, the risk surface of these agents is qualitatively different. Where traditional applications fail through CVEs and injection bugs, agentic systems fail through:

  • Prompt injection via untrusted data sources — an agent that reads emails or tickets can be hijacked by content in those records
  • Permission inheritance and over-scoping — agents run with the builder's identity, often a power user with broad data access
  • Tool/connector chaining — an agent with read access to CRM and write access to email becomes an exfiltration primitive
  • Data residency and oversharing — agents trained or grounded on internal corpora leak across business unit boundaries

Microsoft's own zoned governance guidance for Copilot Studio acknowledges this: Zone 1 is the "citizen development zone" where anyone can build personal and team agents. Zone 1 is also where almost everything is being built, and almost nothing is being reviewed.

The CFO Math: $492M Becoming $1B

Gartner forecasts AI governance spending will hit $492 million in 2026 and surpass $1 billion by 2030. Read alongside Nokod's finding that 67% of enterprises already budget for business-built app security with 15% YoY growth, this is no longer a discretionary line item. It has become a category.

The cost driver is not the tooling itself—Nokod, Zenity, AppOmni, and the platform-native governance from Microsoft Purview and ServiceNow AI Control Tower are reasonably priced relative to enterprise security stacks. The cost driver is the control plane work: discovery, inventorying, policy authoring, exception handling, and the human-in-the-loop review of who is allowed to build what.

The CFO question is whether to fund that control plane proactively or pay for it reactively after an incident. Nokod's number that 40% of agentic AI initiatives could be abandoned by 2027 if governance fundamentals fail (Gartner's projection) is the financial case in a single line. Pilots that cannot pass governance review do not reach production. Investment that does not reach production has no ROI.

Photo by Pixabay on Pexels

What CISOs Should Do This Quarter

Nokod's data is most actionable for security leaders. Three concrete moves:

1. Inventory before you control. You cannot govern what you cannot see. Start with platform-native discovery: Microsoft Purview's Copilot dashboard, Power Platform admin center DLP analytics, ServiceNow's Now Assist usage reports, UiPath Insights. Layer specialized tooling—Nokod, Zenity, or AppOmni—where coverage gaps remain. The goal is a single inventory of every business-built agent, its owner, its data scopes, and its connectors. Most enterprises are starting from zero here. Plan a 90-day discovery sprint.

2. Tier by blast radius, not by builder. The instinct is to lock down citizen development. That fails because it pushes builders to even less governed surfaces (browser-based agents, personal accounts). The better model is risk tiering by what an agent can touch: agents accessing PII, financial data, customer-facing channels, or production systems get full review; agents that summarize internal docs in a single team workspace get a lighter path. Microsoft's zoned governance gives you the primitive—use it.

3. Make agent identity a first-class control. Today, most business-built agents run as the builder. That means a sales rep's Copilot agent has all of her CRM, email, and Teams permissions. When she leaves, the agent breaks—or worse, keeps running. Move agents to dedicated service principals with scoped, audited permissions, and make this the default in your platform configuration. This single change collapses an entire class of insider and lateral movement risk.

What CIOs Should Do This Quarter

For CIOs balancing innovation against control, Nokod's findings are a forcing function but not a stop sign:

Sanction a paved road, then route traffic to it. The reason 4:1 builder ratios exist is that business users have real work to automate and the platforms make it easy. Trying to slow them down loses. Instead, make the governed path the easy path: a curated catalog of approved connectors, pre-built templates for common agent patterns, sandbox environments wired to non-production data, and clear graduation criteria to production. Every business unit gets the same starter kit.

Centralize agent observability. You already centralize logs, traces, and metrics for traditional applications. Do the same for agents. Tools like Weights & Biases, Langfuse, and platform-native observability (Power Platform Activity Logging, ServiceNow Performance Analytics) can pipe agent interactions—prompts, tool calls, outputs—into your SIEM and analytics stack. This is the difference between "we have AI agents somewhere" and "we know what every agent did yesterday."

Designate an agent owner of record. Every agent in production needs a named human accountable for it. Not a team. Not a distribution list. A person. When the agent breaks, leaks, or behaves badly, that person is paged. This sounds bureaucratic. It is the single highest-leverage governance control I have seen work.

What AI Engineering Teams Should Do

If you run AI engineering—as Rajesh does at Zscaler—your role is the enablement layer between security policy and business builders. The Nokod data argues for three priorities:

Build, do not buy, the discovery layer. Vendor tools are necessary but not sufficient. Your enterprise has unique connector usage, unique data classifications, and unique platform mixes. A thin internal service that ingests platform inventory APIs and joins them with your CMDB, identity provider, and data classification system gives you a defensible source of truth. Nokod and Zenity are accelerants for this; they are not replacements.

Create AI Guard equivalents for business-built agents. If your professional development teams already use prompt-injection, DLP, and content moderation guardrails (Zscaler AI Guard, Microsoft Prompt Shield, NVIDIA NeMo Guardrails), the same controls need to be available—mandatorily—for Copilot Studio and Power Platform builds. Negotiate a single guardrail SDK that platform admins can enforce as policy.

Run continuous red-team campaigns against the platforms, not just the models. SPLX-style automated red teaming should be applied to your top 100 business-built agents quarterly. Test for prompt injection via the data sources they read, permission abuse via their connectors, and exfiltration via their tool chains. Report results to platform owners, not just CISOs.

The Bottom Line

The Nokod 2026 survey crystallizes a transition that has been quietly underway for two years: enterprise application development has moved off the hands of professional engineers and into the hands of business users, with AI as the accelerant. The platforms enabling this—Microsoft Copilot Studio, Power Platform, ServiceNow, UiPath, Salesforce—are sanctioned, the builders are paid employees, and the use cases are real. What is missing is the control plane that makes any of this safe at scale.

44% visibility is not a baseline to celebrate—it is a starting line. The 67% of enterprises already budgeting for this and the 90% planning to standardize governance by year-end are buying time, not solving the problem. The work is now: inventory every agent, tier them by risk, give them their own identities, route builders to a paved road, and make AI engineering the team that closes the gap between platform capability and platform safety.

Yair Finzi was right. Security teams are losing a race they did not know they were running. The question is whether enterprises figure out where the finish line actually is before regulators, breaches, or abandoned pilots decide for them.

Sources

  1. The Invisible Enterprise AI Jungle: Nokod Survey Finds Security Teams Only See 44% of Apps, Agents, and Automations Built By Business Users (April 27, 2026)
  2. Nokod Security Platform Overview
  3. Microsoft Security Blog: 80% of Fortune 500 Use Active AI Agents
  4. Gartner: 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026
  5. Microsoft Copilot Studio Zoned Governance Guidance

How is your organization tracking business-built AI agents? Connect with me on LinkedIn, Twitter/X, or via the contact form to share your governance approach.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Newsletter

Stay Ahead of the Curve

Weekly enterprise AI insights for technology leaders. No spam, no vendor pitches—unsubscribe anytime.

Subscribe