Your Custom GPTs Die May 15—OpenAI Workspace Agents Replace Them

OpenAI retired Custom GPTs for Codex-powered Workspace Agents. 24/7 cloud agents, Slack and SharePoint via MCP. Free until May 6, then credits.

By Rajesh Beri·April 23, 2026·13 min read
Share:

THE DAILY BRIEF

OpenAIWorkspace AgentsCustom GPTsCodexChatGPT EnterpriseMCPSlack integrationenterprise AI agentsagent pricingCompliance APIprompt injection defense

Your Custom GPTs Die May 15—OpenAI Workspace Agents Replace Them

OpenAI retired Custom GPTs for Codex-powered Workspace Agents. 24/7 cloud agents, Slack and SharePoint via MCP. Free until May 6, then credits.

By Rajesh Beri·April 23, 2026·13 min read

On April 22, OpenAI quietly retired the product it launched at its first DevDay three years ago. Custom GPTs — the shareable personas that were supposed to be the enterprise agent story in 2023 — are being superseded by Workspace Agents, a Codex-powered, cloud-resident, 24/7 autonomous tier available only to ChatGPT Business, Enterprise, Edu, and Teachers subscribers. Existing custom GPTs can be converted into Workspace Agents. Most won't survive the conversion, because most were never agents in any meaningful sense.

The launch is worth watching for two reasons. First, it came one day before Google Cloud shipped the Gemini Enterprise Agent Platform with its own identity-registry-gateway stack, and the two announcements now mark the outer walls of how hyperscalers think an enterprise agent platform should look in 2026. Second, OpenAI did something unusually aggressive on commercial terms: Workspace Agents are free until May 6, after which a credit-based pricing model kicks in. That is exactly two weeks of runway for enterprise buyers to kick tires before the meter starts — a far more compressed evaluation window than typical enterprise SaaS rollouts.

This is the story of what OpenAI actually shipped, how it compares to Google's governance-first play, and what CIOs and AI engineers need to decide before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What OpenAI Actually Shipped

The product is a research preview, which in OpenAI's vocabulary means "production for enterprise customers who accept early-stage bugs." Workspace Agents are built on Codex — OpenAI's code-trained model family — and designed to execute long-running workflows rather than serve conversational replies.

The construction experience is the biggest UX change from custom GPTs. Rather than giving the agent a system prompt and a handful of knowledge files, a user describes the workflow in plain language inside a dedicated ChatGPT tab. The system then maps the process into steps, proposes the tools the agent will need, wires up the connections, runs a test pass, and asks the user to confirm before activation. This is meaningfully closer to the way Zapier and n8n handle workflow construction than the way GPT Builder handled prompt authoring.

Once activated, agents run in the cloud on schedules or in response to triggers. They are not chat sessions; they are daemons. A team can build one sales-opportunity-scoring agent and have it run every four hours against Salesforce, pull context from Slack and Google Drive, score opportunities, and drop the results into a shared channel. Rippling, an early customer, reported that exactly this pattern saves their sales reps five to six hours per week.

The four primary surfaces Workspace Agents connect to are Slack, Google Drive, Google Calendar, and SharePoint, with connections riding on top of Model Context Protocol. The MCP dependency matters: it means Workspace Agents can, in principle, reach anything in the exploding MCP ecosystem, which crossed 97 million installs in March. It also means OpenAI is ratifying MCP as the default agent-to-tool protocol, alongside Anthropic (which created it), Google (which now routes agent traffic through it inside Agent Gateway), and Microsoft (which shipped MCP support in Copilot Studio earlier this quarter).

Security and admin controls include protection against prompt injection, ability to limit which data sources and tools a user group can access, approval workflows before sensitive actions execute, RBAC across the workspace, and monitoring through the Compliance API. The last one is the piece CISOs should pay attention to, because it is the API that routes Workspace Agent activity into SIEM and DLP tooling.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


OpenAI vs. Google: Two Bets on the Same Future

Yesterday I wrote about Google's launch of Gemini Enterprise Agent Platform, which leads with three primitives — Agent Identity, Agent Registry, Agent Gateway — and folds Vertex AI into that shape. Today's OpenAI launch is the same product category with a fundamentally different opinion about what matters.

Google's bet is that governance is the moat. Every agent gets a cryptographic identity. Every tool and sub-agent is vetted through a registry. Every tool call goes through a policy gateway. The value proposition to CIOs is: we make unsanctioned agents structurally impossible, and we let you run Salesforce, ServiceNow, Workday, Oracle, and Adobe agents inside our governed marketplace. Model choice is secondary; governance architecture is the product.

OpenAI's bet is that UX and model quality are the moat. Workspace Agents are easier to build (describe what you want, the system does the rest), easier to share (a team URL, not a deployment), easier to live inside Slack and Google Workspace (direct integrations rather than a governed marketplace of third-party agents), and powered by Codex rather than a multi-model shelf. The governance surface exists — admin controls, Compliance API, prompt-injection defense — but it is framed as "the safe way to go faster," not as the organizing principle.

For enterprise buyers, these are almost mirror-image value propositions. Google wins if CIOs conclude that 2026 is the year agents sprawl and governance becomes the acquisition trigger. OpenAI wins if CIOs conclude that adoption velocity beats governance maturity and the right bet is to get agents into as many workflows as possible now and retrofit control later.

The honest answer is that most Fortune 500 enterprises will buy both, treat Workspace Agents as the point-of-presence for knowledge workers who already live in ChatGPT and Slack, and treat Gemini Enterprise Agent Platform as the orchestration and governance substrate for agents that touch regulated systems. The middle-market segment is where the conflict will be sharpest, because those buyers can only afford to consolidate on one. For them, Workspace Agents' May 6 pricing change is the forcing function.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What CIOs Need to Know Before May 6

Three things matter commercially.

First, the "free until May 6" window is two weeks from the launch date. That is not a free tier; it is a trial with a billing cliff. After May 6, every Workspace Agent execution will draw from a credit-based pool. OpenAI has not published the credit-to-dollar conversion, which means any CIO approving a Workspace Agent rollout before May 6 is approving a pricing model they have not yet seen. For organizations with procurement policies that require signed pricing before deployment, the rational move is to delay activation until the post-May-6 pricing drops.

Second, admin controls are strong on data access but weaker on cost controls. The launch materials detail how admins can limit which tools and data sources user groups can reach, which is exactly the shadow-AI protection Fortune 500 security teams need. What is not detailed is how admins cap credit spend per user group, per agent, or per workflow. A Workspace Agent that loops because of a degraded external tool could burn credits continuously; whether per-agent budget caps exist is one of the most important questions to ask your OpenAI account team this week.

Third, the Compliance API is the integration your CISO will want to wire before production. It is how Workspace Agent execution becomes visible to SIEM, DLP, and audit tooling. Any organization with a SOC 2, ISO 27001, or HIPAA commitment has a defensible baseline once the Compliance API is flowing to Splunk, Datadog, or equivalent. Without that integration, Workspace Agents are a blind spot on the same scale that Copilot rollouts have been for the last eighteen months.

The strategic read: Workspace Agents are the most approachable enterprise agent product yet shipped, and that approachability is itself the risk. A business-unit leader can build a production-adjacent agent in an afternoon. The governance program, the pricing guardrails, and the SIEM integration need to be in place before that afternoon starts, not after.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What AI Engineers Should Know

The technical shape is more interesting than the marketing suggests.

Codex as the backbone is a deliberate choice. OpenAI is betting that agents that write, modify, and execute code are the common case — and that a code-first base model outperforms a chat-first base model for orchestration work. This matches the trajectory of Claude Code and Cursor Agents, where the agent-as-code-executor pattern has produced the most measurable enterprise ROI. If you have been building agents on GPT-4o or o1 for reasoning-heavy workflows, the Codex foundation means Workspace Agents will be stronger at multi-step tool orchestration and weaker at open-ended analysis than you are used to.

The scheduling and trigger model is what makes these agents architecturally distinct from ChatGPT sessions. Agents can run on cron-style schedules or be triggered by external events — a new email matching a filter, a row added to a spreadsheet, a Slack message in a specific channel. The underlying execution is cloud-resident, so engineers no longer need to own the runtime. That is a meaningful deletion from the operational burden of LangChain or CrewAI deployments, where scheduling is typically stitched together with Airflow or an equivalent.

MCP is the integration boundary. Slack, Google Drive, Calendar, and SharePoint are the highlighted connectors, but the underlying architecture rides on MCP, which means any MCP server your team has already built can be registered as a Workspace Agent tool. For organizations that have invested in MCP gateways — which, increasingly, is most Fortune 500 security teams — this is direct reuse. For organizations that have not yet stood up an MCP gateway, the right reading of today's announcement is that MCP is now unambiguously the default integration substrate, and the window for choosing a different protocol has closed.

Memory and persistence are less clearly specified than Google's Agent Memory Bank. OpenAI describes continuous improvement and Compliance-API-surfaced state, but the memory architecture (durable profiles, shared memories across agents, TTL policies) is not documented in the launch materials. For engineers planning to rely on long-term memory, this is a gap that will need to be filled through customer reference conversations or the eventual research-preview documentation updates.

The conversion path from custom GPTs is the piece to test immediately. OpenAI has promised a direct converter, but "direct" tends to mean "best-effort" in practice. Teams with heavily-customized GPTs — particularly ones that rely on obscure GPT Builder patterns like nested knowledge files or tool-chaining through function descriptions — should expect to spend the next two weeks rewriting rather than migrating. That is another reason to delay activation until May: it gives the conversion toolkit time to stabilize, and it aligns deployment with the post-May-6 pricing model.

Prompt injection defense is called out explicitly in the security framing. OpenAI has not detailed the technique — whether it is classifier-based filtering, output constraining, tool-call mediation, or some combination — but the commitment is notable. Enterprise agent deployments have been losing ground to prompt injection in red-team exercises for eighteen months; any improvement that holds up under adversarial testing is a real one. Red-teaming a Workspace Agent against indirect prompt injection from Slack message content or Google Drive files is the most valuable technical work any enterprise AI team can do in the two weeks before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What's Missing, and What to Watch

Three gaps deserve calling out.

First, there is no published credit-to-dollar conversion. Every CFO who approves a Workspace Agent rollout is approving a variable cost they cannot model. That is a genuine commercial risk, not a rhetorical one.

Second, there are no marquee customer case studies in the launch materials. Rippling's sales-agent example surfaced in coverage, not in the OpenAI announcement. For an enterprise tier product, the absence of named customer reference architectures is noteworthy. Compare to Google's Gemini Enterprise Agent Platform launch yesterday, which leaned on named case studies from Burns & McDonnell, Color Health, Comcast, L'Oréal, Payhawk, and PayPal. OpenAI either has fewer production references ready or is holding them for a subsequent announcement.

Third, the competitive positioning is quietly acknowledging that Anthropic is "widely regarded as having taken the lead in the agentic AI race." That framing — which shows up in coverage rather than OpenAI's own materials — is consistent with Anthropic's $30 billion ARR milestone and the doubling of its $1M+ enterprise clients from 500 to 1,000 since the Series G. Workspace Agents are, in part, OpenAI's attempt to reclaim the enterprise agent narrative at a moment when Anthropic has measurably pulled ahead with Claude-powered agent deployments.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


The Bottom Line

Workspace Agents are the most polished enterprise agent product OpenAI has shipped and the quiet euthanasia of Custom GPTs. They are also, measured against Google's launch yesterday, a bet that UX and model quality will win the enterprise agent market against a governance-first opponent.

For CIOs, the action is concrete: pilot Workspace Agents now while they are free, but do not activate production workflows until the post-May-6 pricing is published and the Compliance API is wired into your SIEM. Ask your OpenAI account team three questions this week — what is the credit-to-dollar conversion, are there per-agent spend caps, and what is the prompt-injection defense mechanism — and weight the answers heavily in any consolidation decision.

For AI engineers, the action is narrower: convert one non-critical custom GPT to a Workspace Agent, instrument the Compliance API, and red-team the result against indirect prompt injection before May 6. If that pilot lands, the conversion of your broader custom-GPT inventory becomes a two-sprint project. If it fails — either on the conversion path, the scheduling model, or the prompt-injection defense — you have the runway to pivot before the credit meter starts.

The bigger story is structural. OpenAI and Google have now both declared that the enterprise agent era is here, the productization lanes are drawn, and the choice of moat — governance or UX — will decide who consolidates whom over the next two years. Anthropic, measured on revenue, is already winning the race they are both chasing. The next twelve months will tell us whether hyperscaler opinion or enterprise adoption has the final word.


Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Your Custom GPTs Die May 15—OpenAI Workspace Agents Replace Them

Photo by ThisIsEngineering on Pexels

On April 22, OpenAI quietly retired the product it launched at its first DevDay three years ago. Custom GPTs — the shareable personas that were supposed to be the enterprise agent story in 2023 — are being superseded by Workspace Agents, a Codex-powered, cloud-resident, 24/7 autonomous tier available only to ChatGPT Business, Enterprise, Edu, and Teachers subscribers. Existing custom GPTs can be converted into Workspace Agents. Most won't survive the conversion, because most were never agents in any meaningful sense.

The launch is worth watching for two reasons. First, it came one day before Google Cloud shipped the Gemini Enterprise Agent Platform with its own identity-registry-gateway stack, and the two announcements now mark the outer walls of how hyperscalers think an enterprise agent platform should look in 2026. Second, OpenAI did something unusually aggressive on commercial terms: Workspace Agents are free until May 6, after which a credit-based pricing model kicks in. That is exactly two weeks of runway for enterprise buyers to kick tires before the meter starts — a far more compressed evaluation window than typical enterprise SaaS rollouts.

This is the story of what OpenAI actually shipped, how it compares to Google's governance-first play, and what CIOs and AI engineers need to decide before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What OpenAI Actually Shipped

The product is a research preview, which in OpenAI's vocabulary means "production for enterprise customers who accept early-stage bugs." Workspace Agents are built on Codex — OpenAI's code-trained model family — and designed to execute long-running workflows rather than serve conversational replies.

The construction experience is the biggest UX change from custom GPTs. Rather than giving the agent a system prompt and a handful of knowledge files, a user describes the workflow in plain language inside a dedicated ChatGPT tab. The system then maps the process into steps, proposes the tools the agent will need, wires up the connections, runs a test pass, and asks the user to confirm before activation. This is meaningfully closer to the way Zapier and n8n handle workflow construction than the way GPT Builder handled prompt authoring.

Once activated, agents run in the cloud on schedules or in response to triggers. They are not chat sessions; they are daemons. A team can build one sales-opportunity-scoring agent and have it run every four hours against Salesforce, pull context from Slack and Google Drive, score opportunities, and drop the results into a shared channel. Rippling, an early customer, reported that exactly this pattern saves their sales reps five to six hours per week.

The four primary surfaces Workspace Agents connect to are Slack, Google Drive, Google Calendar, and SharePoint, with connections riding on top of Model Context Protocol. The MCP dependency matters: it means Workspace Agents can, in principle, reach anything in the exploding MCP ecosystem, which crossed 97 million installs in March. It also means OpenAI is ratifying MCP as the default agent-to-tool protocol, alongside Anthropic (which created it), Google (which now routes agent traffic through it inside Agent Gateway), and Microsoft (which shipped MCP support in Copilot Studio earlier this quarter).

Security and admin controls include protection against prompt injection, ability to limit which data sources and tools a user group can access, approval workflows before sensitive actions execute, RBAC across the workspace, and monitoring through the Compliance API. The last one is the piece CISOs should pay attention to, because it is the API that routes Workspace Agent activity into SIEM and DLP tooling.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


OpenAI vs. Google: Two Bets on the Same Future

Yesterday I wrote about Google's launch of Gemini Enterprise Agent Platform, which leads with three primitives — Agent Identity, Agent Registry, Agent Gateway — and folds Vertex AI into that shape. Today's OpenAI launch is the same product category with a fundamentally different opinion about what matters.

Google's bet is that governance is the moat. Every agent gets a cryptographic identity. Every tool and sub-agent is vetted through a registry. Every tool call goes through a policy gateway. The value proposition to CIOs is: we make unsanctioned agents structurally impossible, and we let you run Salesforce, ServiceNow, Workday, Oracle, and Adobe agents inside our governed marketplace. Model choice is secondary; governance architecture is the product.

OpenAI's bet is that UX and model quality are the moat. Workspace Agents are easier to build (describe what you want, the system does the rest), easier to share (a team URL, not a deployment), easier to live inside Slack and Google Workspace (direct integrations rather than a governed marketplace of third-party agents), and powered by Codex rather than a multi-model shelf. The governance surface exists — admin controls, Compliance API, prompt-injection defense — but it is framed as "the safe way to go faster," not as the organizing principle.

For enterprise buyers, these are almost mirror-image value propositions. Google wins if CIOs conclude that 2026 is the year agents sprawl and governance becomes the acquisition trigger. OpenAI wins if CIOs conclude that adoption velocity beats governance maturity and the right bet is to get agents into as many workflows as possible now and retrofit control later.

The honest answer is that most Fortune 500 enterprises will buy both, treat Workspace Agents as the point-of-presence for knowledge workers who already live in ChatGPT and Slack, and treat Gemini Enterprise Agent Platform as the orchestration and governance substrate for agents that touch regulated systems. The middle-market segment is where the conflict will be sharpest, because those buyers can only afford to consolidate on one. For them, Workspace Agents' May 6 pricing change is the forcing function.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What CIOs Need to Know Before May 6

Three things matter commercially.

First, the "free until May 6" window is two weeks from the launch date. That is not a free tier; it is a trial with a billing cliff. After May 6, every Workspace Agent execution will draw from a credit-based pool. OpenAI has not published the credit-to-dollar conversion, which means any CIO approving a Workspace Agent rollout before May 6 is approving a pricing model they have not yet seen. For organizations with procurement policies that require signed pricing before deployment, the rational move is to delay activation until the post-May-6 pricing drops.

Second, admin controls are strong on data access but weaker on cost controls. The launch materials detail how admins can limit which tools and data sources user groups can reach, which is exactly the shadow-AI protection Fortune 500 security teams need. What is not detailed is how admins cap credit spend per user group, per agent, or per workflow. A Workspace Agent that loops because of a degraded external tool could burn credits continuously; whether per-agent budget caps exist is one of the most important questions to ask your OpenAI account team this week.

Third, the Compliance API is the integration your CISO will want to wire before production. It is how Workspace Agent execution becomes visible to SIEM, DLP, and audit tooling. Any organization with a SOC 2, ISO 27001, or HIPAA commitment has a defensible baseline once the Compliance API is flowing to Splunk, Datadog, or equivalent. Without that integration, Workspace Agents are a blind spot on the same scale that Copilot rollouts have been for the last eighteen months.

The strategic read: Workspace Agents are the most approachable enterprise agent product yet shipped, and that approachability is itself the risk. A business-unit leader can build a production-adjacent agent in an afternoon. The governance program, the pricing guardrails, and the SIEM integration need to be in place before that afternoon starts, not after.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What AI Engineers Should Know

The technical shape is more interesting than the marketing suggests.

Codex as the backbone is a deliberate choice. OpenAI is betting that agents that write, modify, and execute code are the common case — and that a code-first base model outperforms a chat-first base model for orchestration work. This matches the trajectory of Claude Code and Cursor Agents, where the agent-as-code-executor pattern has produced the most measurable enterprise ROI. If you have been building agents on GPT-4o or o1 for reasoning-heavy workflows, the Codex foundation means Workspace Agents will be stronger at multi-step tool orchestration and weaker at open-ended analysis than you are used to.

The scheduling and trigger model is what makes these agents architecturally distinct from ChatGPT sessions. Agents can run on cron-style schedules or be triggered by external events — a new email matching a filter, a row added to a spreadsheet, a Slack message in a specific channel. The underlying execution is cloud-resident, so engineers no longer need to own the runtime. That is a meaningful deletion from the operational burden of LangChain or CrewAI deployments, where scheduling is typically stitched together with Airflow or an equivalent.

MCP is the integration boundary. Slack, Google Drive, Calendar, and SharePoint are the highlighted connectors, but the underlying architecture rides on MCP, which means any MCP server your team has already built can be registered as a Workspace Agent tool. For organizations that have invested in MCP gateways — which, increasingly, is most Fortune 500 security teams — this is direct reuse. For organizations that have not yet stood up an MCP gateway, the right reading of today's announcement is that MCP is now unambiguously the default integration substrate, and the window for choosing a different protocol has closed.

Memory and persistence are less clearly specified than Google's Agent Memory Bank. OpenAI describes continuous improvement and Compliance-API-surfaced state, but the memory architecture (durable profiles, shared memories across agents, TTL policies) is not documented in the launch materials. For engineers planning to rely on long-term memory, this is a gap that will need to be filled through customer reference conversations or the eventual research-preview documentation updates.

The conversion path from custom GPTs is the piece to test immediately. OpenAI has promised a direct converter, but "direct" tends to mean "best-effort" in practice. Teams with heavily-customized GPTs — particularly ones that rely on obscure GPT Builder patterns like nested knowledge files or tool-chaining through function descriptions — should expect to spend the next two weeks rewriting rather than migrating. That is another reason to delay activation until May: it gives the conversion toolkit time to stabilize, and it aligns deployment with the post-May-6 pricing model.

Prompt injection defense is called out explicitly in the security framing. OpenAI has not detailed the technique — whether it is classifier-based filtering, output constraining, tool-call mediation, or some combination — but the commitment is notable. Enterprise agent deployments have been losing ground to prompt injection in red-team exercises for eighteen months; any improvement that holds up under adversarial testing is a real one. Red-teaming a Workspace Agent against indirect prompt injection from Slack message content or Google Drive files is the most valuable technical work any enterprise AI team can do in the two weeks before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What's Missing, and What to Watch

Three gaps deserve calling out.

First, there is no published credit-to-dollar conversion. Every CFO who approves a Workspace Agent rollout is approving a variable cost they cannot model. That is a genuine commercial risk, not a rhetorical one.

Second, there are no marquee customer case studies in the launch materials. Rippling's sales-agent example surfaced in coverage, not in the OpenAI announcement. For an enterprise tier product, the absence of named customer reference architectures is noteworthy. Compare to Google's Gemini Enterprise Agent Platform launch yesterday, which leaned on named case studies from Burns & McDonnell, Color Health, Comcast, L'Oréal, Payhawk, and PayPal. OpenAI either has fewer production references ready or is holding them for a subsequent announcement.

Third, the competitive positioning is quietly acknowledging that Anthropic is "widely regarded as having taken the lead in the agentic AI race." That framing — which shows up in coverage rather than OpenAI's own materials — is consistent with Anthropic's $30 billion ARR milestone and the doubling of its $1M+ enterprise clients from 500 to 1,000 since the Series G. Workspace Agents are, in part, OpenAI's attempt to reclaim the enterprise agent narrative at a moment when Anthropic has measurably pulled ahead with Claude-powered agent deployments.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


The Bottom Line

Workspace Agents are the most polished enterprise agent product OpenAI has shipped and the quiet euthanasia of Custom GPTs. They are also, measured against Google's launch yesterday, a bet that UX and model quality will win the enterprise agent market against a governance-first opponent.

For CIOs, the action is concrete: pilot Workspace Agents now while they are free, but do not activate production workflows until the post-May-6 pricing is published and the Compliance API is wired into your SIEM. Ask your OpenAI account team three questions this week — what is the credit-to-dollar conversion, are there per-agent spend caps, and what is the prompt-injection defense mechanism — and weight the answers heavily in any consolidation decision.

For AI engineers, the action is narrower: convert one non-critical custom GPT to a Workspace Agent, instrument the Compliance API, and red-team the result against indirect prompt injection before May 6. If that pilot lands, the conversion of your broader custom-GPT inventory becomes a two-sprint project. If it fails — either on the conversion path, the scheduling model, or the prompt-injection defense — you have the runway to pivot before the credit meter starts.

The bigger story is structural. OpenAI and Google have now both declared that the enterprise agent era is here, the productization lanes are drawn, and the choice of moat — governance or UX — will decide who consolidates whom over the next two years. Anthropic, measured on revenue, is already winning the race they are both chasing. The next twelve months will tell us whether hyperscaler opinion or enterprise adoption has the final word.


Continue Reading

Share:

THE DAILY BRIEF

OpenAIWorkspace AgentsCustom GPTsCodexChatGPT EnterpriseMCPSlack integrationenterprise AI agentsagent pricingCompliance APIprompt injection defense

Your Custom GPTs Die May 15—OpenAI Workspace Agents Replace Them

OpenAI retired Custom GPTs for Codex-powered Workspace Agents. 24/7 cloud agents, Slack and SharePoint via MCP. Free until May 6, then credits.

By Rajesh Beri·April 23, 2026·13 min read

On April 22, OpenAI quietly retired the product it launched at its first DevDay three years ago. Custom GPTs — the shareable personas that were supposed to be the enterprise agent story in 2023 — are being superseded by Workspace Agents, a Codex-powered, cloud-resident, 24/7 autonomous tier available only to ChatGPT Business, Enterprise, Edu, and Teachers subscribers. Existing custom GPTs can be converted into Workspace Agents. Most won't survive the conversion, because most were never agents in any meaningful sense.

The launch is worth watching for two reasons. First, it came one day before Google Cloud shipped the Gemini Enterprise Agent Platform with its own identity-registry-gateway stack, and the two announcements now mark the outer walls of how hyperscalers think an enterprise agent platform should look in 2026. Second, OpenAI did something unusually aggressive on commercial terms: Workspace Agents are free until May 6, after which a credit-based pricing model kicks in. That is exactly two weeks of runway for enterprise buyers to kick tires before the meter starts — a far more compressed evaluation window than typical enterprise SaaS rollouts.

This is the story of what OpenAI actually shipped, how it compares to Google's governance-first play, and what CIOs and AI engineers need to decide before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What OpenAI Actually Shipped

The product is a research preview, which in OpenAI's vocabulary means "production for enterprise customers who accept early-stage bugs." Workspace Agents are built on Codex — OpenAI's code-trained model family — and designed to execute long-running workflows rather than serve conversational replies.

The construction experience is the biggest UX change from custom GPTs. Rather than giving the agent a system prompt and a handful of knowledge files, a user describes the workflow in plain language inside a dedicated ChatGPT tab. The system then maps the process into steps, proposes the tools the agent will need, wires up the connections, runs a test pass, and asks the user to confirm before activation. This is meaningfully closer to the way Zapier and n8n handle workflow construction than the way GPT Builder handled prompt authoring.

Once activated, agents run in the cloud on schedules or in response to triggers. They are not chat sessions; they are daemons. A team can build one sales-opportunity-scoring agent and have it run every four hours against Salesforce, pull context from Slack and Google Drive, score opportunities, and drop the results into a shared channel. Rippling, an early customer, reported that exactly this pattern saves their sales reps five to six hours per week.

The four primary surfaces Workspace Agents connect to are Slack, Google Drive, Google Calendar, and SharePoint, with connections riding on top of Model Context Protocol. The MCP dependency matters: it means Workspace Agents can, in principle, reach anything in the exploding MCP ecosystem, which crossed 97 million installs in March. It also means OpenAI is ratifying MCP as the default agent-to-tool protocol, alongside Anthropic (which created it), Google (which now routes agent traffic through it inside Agent Gateway), and Microsoft (which shipped MCP support in Copilot Studio earlier this quarter).

Security and admin controls include protection against prompt injection, ability to limit which data sources and tools a user group can access, approval workflows before sensitive actions execute, RBAC across the workspace, and monitoring through the Compliance API. The last one is the piece CISOs should pay attention to, because it is the API that routes Workspace Agent activity into SIEM and DLP tooling.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


OpenAI vs. Google: Two Bets on the Same Future

Yesterday I wrote about Google's launch of Gemini Enterprise Agent Platform, which leads with three primitives — Agent Identity, Agent Registry, Agent Gateway — and folds Vertex AI into that shape. Today's OpenAI launch is the same product category with a fundamentally different opinion about what matters.

Google's bet is that governance is the moat. Every agent gets a cryptographic identity. Every tool and sub-agent is vetted through a registry. Every tool call goes through a policy gateway. The value proposition to CIOs is: we make unsanctioned agents structurally impossible, and we let you run Salesforce, ServiceNow, Workday, Oracle, and Adobe agents inside our governed marketplace. Model choice is secondary; governance architecture is the product.

OpenAI's bet is that UX and model quality are the moat. Workspace Agents are easier to build (describe what you want, the system does the rest), easier to share (a team URL, not a deployment), easier to live inside Slack and Google Workspace (direct integrations rather than a governed marketplace of third-party agents), and powered by Codex rather than a multi-model shelf. The governance surface exists — admin controls, Compliance API, prompt-injection defense — but it is framed as "the safe way to go faster," not as the organizing principle.

For enterprise buyers, these are almost mirror-image value propositions. Google wins if CIOs conclude that 2026 is the year agents sprawl and governance becomes the acquisition trigger. OpenAI wins if CIOs conclude that adoption velocity beats governance maturity and the right bet is to get agents into as many workflows as possible now and retrofit control later.

The honest answer is that most Fortune 500 enterprises will buy both, treat Workspace Agents as the point-of-presence for knowledge workers who already live in ChatGPT and Slack, and treat Gemini Enterprise Agent Platform as the orchestration and governance substrate for agents that touch regulated systems. The middle-market segment is where the conflict will be sharpest, because those buyers can only afford to consolidate on one. For them, Workspace Agents' May 6 pricing change is the forcing function.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What CIOs Need to Know Before May 6

Three things matter commercially.

First, the "free until May 6" window is two weeks from the launch date. That is not a free tier; it is a trial with a billing cliff. After May 6, every Workspace Agent execution will draw from a credit-based pool. OpenAI has not published the credit-to-dollar conversion, which means any CIO approving a Workspace Agent rollout before May 6 is approving a pricing model they have not yet seen. For organizations with procurement policies that require signed pricing before deployment, the rational move is to delay activation until the post-May-6 pricing drops.

Second, admin controls are strong on data access but weaker on cost controls. The launch materials detail how admins can limit which tools and data sources user groups can reach, which is exactly the shadow-AI protection Fortune 500 security teams need. What is not detailed is how admins cap credit spend per user group, per agent, or per workflow. A Workspace Agent that loops because of a degraded external tool could burn credits continuously; whether per-agent budget caps exist is one of the most important questions to ask your OpenAI account team this week.

Third, the Compliance API is the integration your CISO will want to wire before production. It is how Workspace Agent execution becomes visible to SIEM, DLP, and audit tooling. Any organization with a SOC 2, ISO 27001, or HIPAA commitment has a defensible baseline once the Compliance API is flowing to Splunk, Datadog, or equivalent. Without that integration, Workspace Agents are a blind spot on the same scale that Copilot rollouts have been for the last eighteen months.

The strategic read: Workspace Agents are the most approachable enterprise agent product yet shipped, and that approachability is itself the risk. A business-unit leader can build a production-adjacent agent in an afternoon. The governance program, the pricing guardrails, and the SIEM integration need to be in place before that afternoon starts, not after.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What AI Engineers Should Know

The technical shape is more interesting than the marketing suggests.

Codex as the backbone is a deliberate choice. OpenAI is betting that agents that write, modify, and execute code are the common case — and that a code-first base model outperforms a chat-first base model for orchestration work. This matches the trajectory of Claude Code and Cursor Agents, where the agent-as-code-executor pattern has produced the most measurable enterprise ROI. If you have been building agents on GPT-4o or o1 for reasoning-heavy workflows, the Codex foundation means Workspace Agents will be stronger at multi-step tool orchestration and weaker at open-ended analysis than you are used to.

The scheduling and trigger model is what makes these agents architecturally distinct from ChatGPT sessions. Agents can run on cron-style schedules or be triggered by external events — a new email matching a filter, a row added to a spreadsheet, a Slack message in a specific channel. The underlying execution is cloud-resident, so engineers no longer need to own the runtime. That is a meaningful deletion from the operational burden of LangChain or CrewAI deployments, where scheduling is typically stitched together with Airflow or an equivalent.

MCP is the integration boundary. Slack, Google Drive, Calendar, and SharePoint are the highlighted connectors, but the underlying architecture rides on MCP, which means any MCP server your team has already built can be registered as a Workspace Agent tool. For organizations that have invested in MCP gateways — which, increasingly, is most Fortune 500 security teams — this is direct reuse. For organizations that have not yet stood up an MCP gateway, the right reading of today's announcement is that MCP is now unambiguously the default integration substrate, and the window for choosing a different protocol has closed.

Memory and persistence are less clearly specified than Google's Agent Memory Bank. OpenAI describes continuous improvement and Compliance-API-surfaced state, but the memory architecture (durable profiles, shared memories across agents, TTL policies) is not documented in the launch materials. For engineers planning to rely on long-term memory, this is a gap that will need to be filled through customer reference conversations or the eventual research-preview documentation updates.

The conversion path from custom GPTs is the piece to test immediately. OpenAI has promised a direct converter, but "direct" tends to mean "best-effort" in practice. Teams with heavily-customized GPTs — particularly ones that rely on obscure GPT Builder patterns like nested knowledge files or tool-chaining through function descriptions — should expect to spend the next two weeks rewriting rather than migrating. That is another reason to delay activation until May: it gives the conversion toolkit time to stabilize, and it aligns deployment with the post-May-6 pricing model.

Prompt injection defense is called out explicitly in the security framing. OpenAI has not detailed the technique — whether it is classifier-based filtering, output constraining, tool-call mediation, or some combination — but the commitment is notable. Enterprise agent deployments have been losing ground to prompt injection in red-team exercises for eighteen months; any improvement that holds up under adversarial testing is a real one. Red-teaming a Workspace Agent against indirect prompt injection from Slack message content or Google Drive files is the most valuable technical work any enterprise AI team can do in the two weeks before May 6.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


What's Missing, and What to Watch

Three gaps deserve calling out.

First, there is no published credit-to-dollar conversion. Every CFO who approves a Workspace Agent rollout is approving a variable cost they cannot model. That is a genuine commercial risk, not a rhetorical one.

Second, there are no marquee customer case studies in the launch materials. Rippling's sales-agent example surfaced in coverage, not in the OpenAI announcement. For an enterprise tier product, the absence of named customer reference architectures is noteworthy. Compare to Google's Gemini Enterprise Agent Platform launch yesterday, which leaned on named case studies from Burns & McDonnell, Color Health, Comcast, L'Oréal, Payhawk, and PayPal. OpenAI either has fewer production references ready or is holding them for a subsequent announcement.

Third, the competitive positioning is quietly acknowledging that Anthropic is "widely regarded as having taken the lead in the agentic AI race." That framing — which shows up in coverage rather than OpenAI's own materials — is consistent with Anthropic's $30 billion ARR milestone and the doubling of its $1M+ enterprise clients from 500 to 1,000 since the Series G. Workspace Agents are, in part, OpenAI's attempt to reclaim the enterprise agent narrative at a moment when Anthropic has measurably pulled ahead with Claude-powered agent deployments.

Calculate your potential AI savings: Try our AI ROI Calculator to see projected cost reductions and payback timelines for your organization.


The Bottom Line

Workspace Agents are the most polished enterprise agent product OpenAI has shipped and the quiet euthanasia of Custom GPTs. They are also, measured against Google's launch yesterday, a bet that UX and model quality will win the enterprise agent market against a governance-first opponent.

For CIOs, the action is concrete: pilot Workspace Agents now while they are free, but do not activate production workflows until the post-May-6 pricing is published and the Compliance API is wired into your SIEM. Ask your OpenAI account team three questions this week — what is the credit-to-dollar conversion, are there per-agent spend caps, and what is the prompt-injection defense mechanism — and weight the answers heavily in any consolidation decision.

For AI engineers, the action is narrower: convert one non-critical custom GPT to a Workspace Agent, instrument the Compliance API, and red-team the result against indirect prompt injection before May 6. If that pilot lands, the conversion of your broader custom-GPT inventory becomes a two-sprint project. If it fails — either on the conversion path, the scheduling model, or the prompt-injection defense — you have the runway to pivot before the credit meter starts.

The bigger story is structural. OpenAI and Google have now both declared that the enterprise agent era is here, the productization lanes are drawn, and the choice of moat — governance or UX — will decide who consolidates whom over the next two years. Anthropic, measured on revenue, is already winning the race they are both chasing. The next twelve months will tell us whether hyperscaler opinion or enterprise adoption has the final word.


Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Newsletter

Stay Ahead of the Curve

Weekly enterprise AI insights for technology leaders. No spam, no vendor pitches—unsubscribe anytime.

Subscribe

Related Articles

Latest Articles

View All →