Omni Raises $120M: The AI Layer Looker Couldn't Build

Omni closed $120M at a $1.5B valuation from ex-Looker founders. BambooHR, Mercury, dbt adopt its semantic layer to stop AI hallucinations on enterprise data.

By Rajesh Beri·April 24, 2026·10 min read

THE DAILY BRIEF

Omni · Enterprise AI · Semantic Layer · BI Consolidation · ICONIQ · Looker · Data Analytics


Four years after Google paid $2.6 billion for Looker, three of its senior operators just raised $120 million to build the product they argue Looker was never allowed to become.

On April 23, 2026, Omni closed a $120 million Series C at a $1.51 billion valuation, led by ICONIQ with participation from Theory Ventures, First Round Capital, Redpoint Ventures, and GV. The round includes a $30 million employee tender offer and represents a 2.3x step-up from the company's $650 million valuation only thirteen months earlier.

The headline funding number is not the story. The story is the growth underneath it: revenue has tripled year to date, is up 4x year over year, and the company reached profitability last month. Customers like BambooHR (whose Elite Analytics product serves over 100,000 users via Omni), Mercury, Checkr, dbt Labs, Pendo, Synthesia, and Guitar Center are consolidating the BI stacks they bought between 2018 and 2023 onto a single platform.

For CIOs, CDOs, and CFOs staring at duplicate Tableau, Looker, and Power BI invoices—and newer invoices for a half-dozen experimental AI data tools—Omni's round is a specific signal. The enterprise analytics stack is consolidating, and the consolidation layer is the governed semantic model.

The People Matter: Why Ex-Looker Founders Pulled This Off

CEO Colin Zima was Looker's Chief Analytics Officer and VP of Product before Google's acquisition. Co-founders Jamie Davidson and Chris Merrick are Princeton graduates who reconnected post-acquisition. All three watched from inside Google what happens when a modern data product gets absorbed into a hyperscaler—product velocity slows, customer focus drifts, and strategic priorities shift to the parent's agenda.

That experience is the founding thesis of Omni. And it is also the single biggest reason enterprise data leaders should take the company seriously. The team has already built and shipped one of the most influential BI products of the last decade. They know exactly what to do differently.

Zima's framing on AI is unusually sober for a 2026 data-infrastructure CEO: "AI is an actual advantage for us rather than something ripping the industry apart." Translated: AI isn't replacing the semantic layer. AI is the reason enterprises finally need one.

The Problem Every CIO Is Actually Trying to Solve

The enterprise data problem in 2026 is not storage. It is not visualization. It is not model quality.

It is translation.

Fortune 500 companies spent a decade investing in cloud data warehouses (Snowflake, BigQuery, Databricks, Redshift), then another half-decade investing in BI tools (Tableau, Looker, Power BI, ThoughtSpot), and the last two years investing in AI agents (ChatGPT Enterprise, Claude for Work, Gemini Enterprise, Copilot). Each layer solved a real problem. None solved the deepest one.

When a CFO asks "what is our gross margin for the Northeast region last quarter, adjusted for the new bundling SKUs?", the data exists. The answer requires:

  1. Knowing which table has revenue (which warehouse, which dataset, which schema)
  2. Knowing which fields define gross margin—and whose definition
  3. Knowing how "Northeast region" is defined (the sales org, the operations org, and finance all define it differently)
  4. Knowing which bundling SKUs to exclude, using which policy
  5. Applying the right currency conversion and revenue recognition rules
  6. Respecting the user's permissions and data classification

A raw LLM cannot do this. It will hallucinate—confidently. A dashboard can answer one version of the question but breaks the moment definitions change. A BI tool can define metrics, but only within its own silo; the marketing BI layer does not know the finance BI layer exists.

The semantic layer is the translation layer: a governed, versioned rulebook that defines every metric, every dimension, every access policy, and every calculation—once—and exposes it to every consumer: dashboards, spreadsheets, SQL, and AI agents.
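The shape of that rulebook is easy to sketch. Below is a toy semantic model in Python that defines one metric, one dimension, and one policy once, then compiles any request against them. Every table, field, and policy name here is invented for illustration; it is not Omni's actual modeling language.

```python
# A toy "semantic layer": metrics, dimensions, and policies defined once,
# then compiled identically for every consumer (dashboard, SQL, AI agent).
# All table and field names here are hypothetical.

SEMANTIC_MODEL = {
    "metrics": {
        "gross_margin": {
            "sql": "(SUM(revenue) - SUM(cogs)) / NULLIF(SUM(revenue), 0)",
            "owner": "finance",  # whose definition wins
        },
    },
    "dimensions": {
        "region": {
            # One canonical mapping instead of three org-specific ones
            "northeast": ["NY", "NJ", "CT", "MA", "RI", "VT", "NH", "ME"],
        },
    },
    "policies": {
        "exclude_skus": ["BUNDLE-2026-A"],  # governed exclusion list
    },
}

def _in_list(values):
    """Render a Python list as a SQL IN-list."""
    return "(" + ", ".join(f"'{v}'" for v in values) + ")"

def resolve(metric: str, region: str) -> str:
    """Compile a governed metric + dimension into one canonical SQL string."""
    m = SEMANTIC_MODEL["metrics"][metric]
    states = SEMANTIC_MODEL["dimensions"]["region"][region]
    skus = SEMANTIC_MODEL["policies"]["exclude_skus"]
    return (
        f"SELECT {m['sql']} AS {metric} FROM sales "
        f"WHERE state IN {_in_list(states)} AND sku NOT IN {_in_list(skus)}"
    )
```

The point of the sketch: the dashboard, the analyst's SQL session, and the AI agent all call the same `resolve`, so "Northeast gross margin" means one thing everywhere.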

This is the product category Omni now leads.

What Omni Actually Ships (for CTOs)

For technical leaders evaluating Omni against Cube, Snowflake's Cortex Analyst, Databricks Genie, dbt Semantic Layer, and legacy BI vendors, the architecture is specific.

1. A single semantic model powers every surface. Dashboards, Excel-style workbooks, spreadsheets, ad-hoc SQL, and AI chat queries all resolve against the same metrics, permissions, and definitions. This is the foundational design choice. Most competitors treat AI as a feature bolted on top of existing BI. Omni treats the semantic layer as the core and the surfaces as peripherals.

2. Native AI agent integration. Users can query Omni's semantic layer through Claude, ChatGPT, Cursor, and VS Code. The agent never touches raw warehouse tables. It consumes the governed layer. That is what keeps the hallucination rate low on enterprise questions—and what makes governance actually hold.

3. Deep warehouse integrations. Snowflake, Google BigQuery, Databricks, Amazon Redshift, Postgres, and ClickHouse. Omni is not building a warehouse. It is sitting above yours.

4. Embeddable analytics. BambooHR's "Elite Analytics" product is Omni embedded. The fact that Omni can power a SaaS vendor's entire customer-facing analytics surface is the hardest technical credential in BI—because embedded analytics demand multi-tenancy, white-labeling, row-level security, and performance at scale simultaneously.
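The row-level security piece is the one that makes or breaks embedded deployments: every query an end user runs inside the host SaaS product must be constrained to that tenant's rows before it reaches the warehouse. A minimal sketch of the filter-injection pattern follows; the class and function names are illustrative, not any vendor's actual API.

```python
# Row-level security by filter injection: the embedding host passes a
# tenant context, and the layer wraps every query with a tenant predicate.
# Names are illustrative, not any vendor's actual API.

from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    tenant_id: str
    role: str  # e.g. "viewer" or "admin"

def apply_row_level_security(sql: str, ctx: TenantContext) -> str:
    """Wrap a query so it can only see the calling tenant's rows."""
    # Wrapping as a subquery (rather than appending WHERE to the raw SQL)
    # survives GROUP BY, ORDER BY, and other trailing clauses.
    return (
        f"SELECT * FROM ({sql}) AS q "
        f"WHERE q.tenant_id = '{ctx.tenant_id}'"  # real systems bind parameters
    )

ctx = TenantContext(tenant_id="acme", role="viewer")
secured = apply_row_level_security(
    "SELECT tenant_id, headcount FROM hr_stats", ctx
)
```

In production the tenant context would come from a signed embed token, not an application variable, but the principle is the same: the tenant predicate is applied by the layer, never trusted to the client.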

5. Profitability. The last line item matters because it tells you Omni is not burning capital to win the semantic-layer wars. It is already self-sustaining and using the $120M to accelerate, not to survive.

The design choice that runs through the whole product: business logic lives in the semantic layer, not in the agent. The LLM interprets intent and generates output. Omni enforces definitions, permissions, and correctness. That separation is what makes AI answers trustworthy.
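That separation can be sketched as a two-stage pipeline: the LLM is trusted only to emit a structured intent, and the layer rejects anything not in the governed model before any SQL is generated. The metric names and validation logic below are invented for illustration and do not describe Omni's implementation.

```python
# The LLM proposes; the semantic layer disposes. The model emits a
# structured intent, and only governed names survive validation.
# All metric and dimension names are invented for illustration.

GOVERNED_METRICS = {"gross_margin", "net_revenue"}
GOVERNED_DIMENSIONS = {"region", "quarter"}

def validate_intent(intent: dict, user_permissions: set) -> dict:
    """Reject any metric or dimension the governed model does not define."""
    metric = intent["metric"]
    if metric not in GOVERNED_METRICS:
        raise ValueError(f"Unknown metric: {metric!r} (possible hallucination)")
    if metric not in user_permissions:
        raise PermissionError(f"User may not query {metric!r}")
    bad_dims = set(intent.get("group_by", [])) - GOVERNED_DIMENSIONS
    if bad_dims:
        raise ValueError(f"Unknown dimensions: {sorted(bad_dims)}")
    return intent  # safe to compile against the semantic model

# An LLM might emit this after reading "gross margin by region last quarter":
llm_intent = {"metric": "gross_margin", "group_by": ["region"]}
validated = validate_intent(llm_intent, user_permissions={"gross_margin"})
```

A hallucinated metric name fails loudly at validation instead of silently producing a plausible-but-wrong number, which is the whole trust argument.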

The BI Consolidation Math (for CFOs)

For finance leaders, the Omni round is not just an AI story. It is a spend rationalization story.

Most Fortune 1000 companies are currently paying for:

  • Cloud data warehouse (Snowflake, BigQuery, or Databricks)—typically $1M–$50M annually
  • Enterprise BI platform (Tableau, Looker, or Power BI)—typically $500K–$10M annually
  • Secondary BI tools (ThoughtSpot, Sigma, Mode, Domo)—typically $200K–$2M each
  • dbt Cloud or similar transformation platform—$100K–$1M annually
  • Pilot AI analytics tools (TextQL, Hex agents, Copilot for Data, custom RAG)—$50K–$500K each
  • Data catalog and governance tools (Collibra, Alation)—$250K–$2M annually
  • Consulting firms to stitch them together—open-ended

Omni's pitch is that one governed semantic layer, properly deployed, eliminates 40–60% of that stack. The warehouse stays. The transformation layer stays. Everything above gets consolidated—BI, embedded analytics, AI data queries, metric governance—into a single platform.

That is the bet behind 4x revenue growth. Enterprises in the middle of a 2026 cost review are not adding tools. They are cutting duplicates. The vendor that can absorb three to five line items from the old stack wins.
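The math is simple enough to sanity-check yourself. Summing the low and high ends of just the line items a semantic layer plausibly absorbs from the list above (illustrative figures, not any company's actual spend):

```python
# Back-of-envelope: annual spend ranges ($K) for the line items a
# semantic layer could absorb, from the list above. Illustrative only.

absorbable = {
    "primary_bi": (500, 10_000),
    "secondary_bi_x2": (400, 4_000),  # two tools at $200K-$2M each
    "ai_pilots_x3": (150, 1_500),     # three pilots at $50K-$500K each
}

low = sum(lo for lo, _ in absorbable.values())
high = sum(hi for _, hi in absorbable.values())
print(f"absorbable spend: ${low/1000:.2f}M-${high/1000:.1f}M per year")
# -> absorbable spend: $1.05M-$15.5M per year
```

Even before touching the catalog, governance, or consulting lines, the absorbable range is material, which is why the consolidation pitch lands in a cost-review year.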

The cost of not consolidating is also worth naming. McKinsey's 2025 enterprise AI research shows that data access and data quality are the two biggest reasons AI pilots fail to reach production. Gartner has repeatedly found that 80% of enterprise data remains inaccessible for AI use cases. Those numbers do not get fixed by buying more models. They get fixed by getting the translation layer right.

The Competitive Picture, Honestly

The semantic-layer market is crowded and the marketing is noisier than the technical reality.

The warehouse-native players: Snowflake Cortex Analyst, Databricks Genie. These are natural home-team options for customers already deep in one warehouse. The tradeoff is lock-in. A multi-warehouse enterprise (and there are many) cannot standardize on a semantic layer owned by one warehouse vendor.

The open-source / neutral players: Cube, dbt Semantic Layer, MetricFlow. Strong developer traction, strong governance primitives, weaker on the presentation layer. Good choice for teams with mature data platforms and strong engineering cultures.

The legacy BI incumbents: Tableau, Power BI, Looker, Qlik, ThoughtSpot. All are adding AI. All are wrestling with the architectural problem that their semantic logic lives inside the BI tool, not in a reusable layer. Retrofitting a governed semantic model onto a BI product designed in 2012 is extraordinarily hard.

The AI-native challengers: Hex, Julius, Hyperbound, TextQL (focused on on-prem VPCs), Sigma AI, Mode AI. Fast-moving, developer-loved, but still proving they can satisfy enterprise governance, security, and permission requirements.

OpenAI Frontier: The OpenAI enterprise analytics entrant. The threat is real—OpenAI has distribution. The weakness is that enterprises do not want their semantic layer owned by a foundation-model vendor that also sells the agents that consume it.

Omni's bet is the neutral middle: warehouse-agnostic, model-agnostic, surface-agnostic. Works with Snowflake or Databricks or BigQuery. Works with Claude or ChatGPT or Gemini. Powers dashboards or spreadsheets or chat. That positioning is what ICONIQ underwrote at $1.5 billion.

The Decision Framework

For CIOs, CDOs, and CFOs evaluating where to place the 2026 analytics bet, the question is not "Should we buy Omni specifically?" It is "How do we adopt a semantic-layer architecture, regardless of vendor?"

If you are running 3+ BI tools and 2+ pilot AI analytics products: Run an explicit consolidation exercise. Identify which metrics are defined in which tool, which definitions disagree, and what it would take to move to a single governed layer. Omni, Cube, and the warehouse-native options are all legitimate landing spots. Do the math.

If you are all-in on one warehouse: The native option (Cortex Analyst for Snowflake, Genie for Databricks, BigQuery's AI tooling) will integrate most tightly and cost the least. Lock-in is real and should be priced into the decision.

If you are in a regulated industry (financial services, healthcare, government): Governed semantic layers are a procurement accelerant. Auditable metric definitions, row-level permissions, and access policies that flow from one layer to every downstream surface are exactly what your GRC team will ask for. Lead with governance, not AI features.

If you are a SaaS company with embedded analytics: BambooHR's deployment is the reference architecture. Omni handles embedded analytics credibly at scale. Very few vendors do. Evaluate accordingly.

Regardless of vendor choice: Separate business logic from the AI model. The semantic layer owns definitions. The LLM interprets intent. This is the architectural principle that survives the next three model releases and the next three BI vendor contracts. Build for it.

The Bottom Line

Omni's $120 million round is not a bet on a new analytics tool. It is a bet on a structural change in how enterprises organize data for AI.

In 2024, enterprises bought models. In 2025, they bought agents and orchestration. In 2026, they are discovering that neither works without a governed semantic layer underneath—and the companies who figure that out first are the ones whose AI pilots are shipping to production.

The ex-Looker founders spent four years watching from inside Google what happens when a product team loses the freedom to iterate quickly on an enterprise data problem. The product they're building now is the answer to a specific question: what should Looker have become if it had not been acquired?

The answer, apparently, is worth $1.5 billion.

For enterprise data leaders, the harder question is not "is Omni the winner?"—it is "is your current BI stack going to survive the next 24 months?" If the answer involves four tools, three definitions of "revenue," and an AI agent that keeps hallucinating on last quarter's numbers, the stack has a shelf life. The vendor that consolidates it takes the budget. Omni just raised $120 million to be that vendor.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe to get enterprise AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Omni Raises $120M: The AI Layer Looker Couldn't Build

Photo by Pixabay on Pexels

Four years after Google paid $2.6 billion for Looker, three of its senior operators just raised $120 million to build the product they argue Looker was never allowed to become.

On April 23, 2026, Omni closed a $120 million Series C at a $1.51 billion valuation, led by ICONIQ with participation from Theory Ventures, First Round Capital, Redpoint Ventures, and GV. The round includes a $30 million employee tender offer and represents a 2.3x step-up from the company's $650 million valuation only thirteen months earlier.

The headline funding number is not the story. The story is the growth underneath it: revenue tripled year-to-date, grew 4x year-over-year, and the company reached profitability last month. Customers like BambooHR (Elite Analytics serves over 100,000 users via Omni), Mercury, Checkr, dbt Labs, Pendo, Synthesia, and Guitar Center are consolidating the BI stack they bought between 2018 and 2023 onto a single platform.

For CIOs, CDOs, and CFOs staring at duplicate Tableau, Looker, and Power BI invoices—and newer invoices for a half-dozen experimental AI data tools—Omni's round is a specific signal. The enterprise analytics stack is consolidating, and the consolidation layer is the governed semantic model.

The People Matter: Why Ex-Looker Founders Pulled This Off

CEO Colin Zima was Looker's Chief Analytics Officer and VP of Product before Google's acquisition. Co-founders Jamie Davidson and Chris Merrick are Princeton graduates who reconnected post-acquisition. All three watched from inside Google what happens when a modern data product gets absorbed into a hyperscaler—product velocity slows, customer focus drifts, and strategic priorities shift to the parent's agenda.

That experience is the founding thesis of Omni. And it is also the single biggest reason enterprise data leaders should take the company seriously. The team has already built and shipped one of the most influential BI products of the last decade. They know exactly what to do differently.

Zima's framing on AI is unusually sober for a 2026 data-infrastructure CEO: "AI is an actual advantage for us rather than something ripping the industry apart." Translated: AI isn't replacing the semantic layer. AI is the reason enterprises finally need one.

The Problem Every CIO Is Actually Trying to Solve

The enterprise data problem in 2026 is not storage. It is not visualization. It is not model quality.

It is translation.

Fortune 500 companies spent a decade investing in cloud data warehouses (Snowflake, BigQuery, Databricks, Redshift), then another half-decade investing in BI tools (Tableau, Looker, Power BI, ThoughtSpot), and the last two years investing in AI agents (ChatGPT Enterprise, Claude for Work, Gemini Enterprise, Copilot). Each layer solved a real problem. None solved the deepest one.

When a CFO asks "what is our gross margin for the Northeast region last quarter, adjusted for the new bundling SKUs?", the data exists. The answer requires:

  1. Knowing which table has revenue (which warehouse, which dataset, which schema)
  2. Knowing which fields define gross margin—and whose definition
  3. Knowing how "Northeast region" is defined (the sales org, the operations org, and finance all define it differently)
  4. Knowing which bundling SKUs to exclude, using which policy
  5. Applying the right currency conversion and revenue recognition rules
  6. Respecting the user's permissions and data classification

A raw LLM cannot do this. It will hallucinate—confidently. A dashboard can answer one version of the question but breaks the moment definitions change. A BI tool can define metrics, but only within its own silo; the marketing BI layer does not know the finance BI layer exists.

The semantic layer is the translation layer: a governed, versioned rulebook that defines every metric, every dimension, every access policy, and every calculation—once—and exposes it to every consumer: dashboards, spreadsheets, SQL, and AI agents.

This is the product category Omni now leads.

What Omni Actually Ships (for CTOs)

For technical leaders evaluating Omni against Cube, Snowflake's Cortex Analyst, Databricks Genie, dbt Semantic Layer, and legacy BI vendors, the architecture is specific.

1. A single semantic model powers every surface. Dashboards, Excel-style workbooks, spreadsheets, ad-hoc SQL, and AI chat queries all resolve against the same metrics, permissions, and definitions. This is the foundational design choice. Most competitors treat AI as a feature bolted on top of existing BI. Omni treats the semantic layer as the core and the surfaces as peripherals.

2. Native AI agent integration. Users can query Omni's semantic layer through Claude, ChatGPT, Cursor, and VS Code. The agent never touches raw warehouse tables. It consumes the governed layer. That is what keeps the hallucination rate low on enterprise questions—and what makes governance actually hold.

3. Deep warehouse integrations. Snowflake, Google BigQuery, Databricks, Amazon Redshift, Postgres, and ClickHouse. Omni is not building a warehouse. It is sitting above yours.

4. Embeddable analytics. BambooHR's "Elite Analytics" product is Omni embedded. The fact that Omni can power a SaaS vendor's entire customer-facing analytics surface is the hardest technical credential in BI—because embedded analytics demand multi-tenancy, white-labeling, row-level security, and performance at scale simultaneously.

5. Profitability. The last line item matters because it tells you Omni is not burning capital to win the semantic-layer wars. It is already self-sustaining and using the $120M to accelerate, not to survive.

The design choice that runs through the whole product: business logic lives in the semantic layer, not in the agent. The LLM interprets intent and generates output. Omni enforces definitions, permissions, and correctness. That separation is what makes AI answers trustworthy.

The BI Consolidation Math (for CFOs)

For finance leaders, the Omni round is not just an AI story. It is a spend rationalization story.

Most Fortune 1000 companies are currently paying for:

  • Cloud data warehouse (Snowflake, BigQuery, or Databricks)—typically $1M–$50M annually
  • Enterprise BI platform (Tableau, Looker, or Power BI)—typically $500K–$10M annually
  • Secondary BI tools (ThoughtSpot, Sigma, Mode, Domo)—typically $200K–$2M each
  • dbt Cloud or similar transformation platform—$100K–$1M annually
  • Pilot AI analytics tools (TextQL, Hex agents, Copilot for Data, custom RAG)—$50K–$500K each
  • Data catalog and governance tools (Collibra, Alation)—$250K–$2M annually
  • Consulting firms to stitch them together—open-ended

Omni's pitch is that one governed semantic layer, properly deployed, eliminates 40–60% of that stack. The warehouse stays. The transformation layer stays. Everything above gets consolidated—BI, embedded analytics, AI data queries, metric governance—into a single platform.

That is the bet behind 4x revenue growth. Enterprises in the middle of a 2026 cost review are not adding tools. They are cutting duplicates. The vendor that can absorb three to five line items from the old stack wins.

The cost of not consolidating is also worth naming. McKinsey's 2025 enterprise AI research shows that data access and data quality are the single biggest reasons AI pilots fail to reach production. Gartner has repeatedly found that 80% of enterprise data remains inaccessible for AI use cases. Those numbers do not get fixed by buying more models. They get fixed by getting the translation layer right.

The Competitive Picture, Honestly

The semantic-layer market is crowded and the marketing is noisier than the technical reality.

The warehouse-native players: Snowflake Cortex Analyst, Databricks Genie. These are natural home-team options for customers already deep in one warehouse. The tradeoff is lock-in. A multi-warehouse enterprise (and there are many) cannot standardize on a semantic layer owned by one warehouse vendor.

The open-source / neutral players: Cube, dbt Semantic Layer, MetricFlow. Strong developer traction, strong governance primitives, weaker on the presentation layer. Good choice for teams with mature data platforms and strong engineering cultures.

The legacy BI incumbents: Tableau, Power BI, Looker, Qlik, ThoughtSpot. All are adding AI. All are wrestling with the architectural problem that their semantic logic lives inside the BI tool, not in a reusable layer. Retrofitting a governed semantic model onto a BI product designed in 2012 is extraordinarily hard.

The AI-native challengers: Hex, Julius, Hyperbound, TextQL (focused on on-prem VPCs), Sigma AI, Mode AI. Fast-moving, developer-loved, but still proving they can satisfy enterprise governance, security, and permission requirements.

OpenAI Frontier: The OpenAI enterprise analytics entrant. The threat is real—OpenAI has distribution. The weakness is that enterprises do not want their semantic layer owned by a foundation-model vendor who also sells the agents that consume it.

Omni's bet is the neutral middle: warehouse-agnostic, model-agnostic, surface-agnostic. Works with Snowflake or Databricks or BigQuery. Works with Claude or ChatGPT or Gemini. Powers dashboards or spreadsheets or chat. That positioning is what ICONIQ underwrote at $1.5 billion.

The Decision Framework

For CIOs, CDOs, and CFOs evaluating where to place the 2026 analytics bet, the question is not "should we buy Omni specifically." It is how do we adopt a semantic-layer architecture, regardless of vendor?

If you are running 3+ BI tools and 2+ pilot AI analytics products: Run an explicit consolidation exercise. Identify which metrics are defined in which tool, which definitions disagree, and what it would take to move to a single governed layer. Omni, Cube, and the warehouse-native options are all legitimate landing spots. Do the math.

If you are all-in on one warehouse: The native option (Cortex Analyst for Snowflake, Genie for Databricks, BigQuery's AI tooling) will integrate most tightly and cost the least. Lock-in is real and should be priced into the decision.

If you are in a regulated industry (financial services, healthcare, government): Governed semantic layers are a procurement accelerant. Auditable metric definitions, row-level permissions, and access policies that flow from one layer to every downstream surface are exactly what your GRC team will ask for. Lead with governance, not AI features.

If you are a SaaS company with embedded analytics: BambooHR's deployment is the reference architecture. Omni handles embedded analytics credibly at scale. Very few vendors do. Evaluate accordingly.

Regardless of vendor choice: Separate business logic from the AI model. The semantic layer owns definitions. The LLM interprets intent. This is the architectural principle that survives the next three model releases and the next three BI vendor contracts. Build for it.

The Bottom Line

Omni's $120 million round is not a bet on a new analytics tool. It is a bet on a structural change in how enterprises organize data for AI.

In 2024, enterprises bought models. In 2025, they bought agents and orchestration. In 2026, they are discovering that neither works without a governed semantic layer underneath—and the companies who figure that out first are the ones whose AI pilots are shipping to production.

The ex-Looker founders spent four years watching from inside Google what happens when a product team loses the freedom to iterate quickly on an enterprise data problem. The product they're building now is the answer to a specific question: what should Looker have become if it had not been acquired?

The answer, apparently, is worth $1.5 billion.

For enterprise data leaders, the harder question is not "is Omni the winner?"—it is "is your current BI stack going to survive the next 24 months?" If the answer involves four tools, three definitions of "revenue," and an AI agent that keeps hallucinating on last quarter's numbers, the stack has a shelf life. The vendor that consolidates it takes the budget. Omni just raised $120 million to be that vendor.

Sources


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

Share:

THE DAILY BRIEF

OmniEnterprise AISemantic LayerBI ConsolidationICONIQLookerData Analytics

Omni Raises $120M: The AI Layer Looker Couldn't Build

Omni closed $120M at a $1.5B valuation from ex-Looker founders. BambooHR, Mercury, dbt adopt its semantic layer to stop AI hallucinations on enterprise data.

By Rajesh Beri·April 24, 2026·10 min read

Four years after Google paid $2.6 billion for Looker, three of its senior operators just raised $120 million to build the product they argue Looker was never allowed to become.

On April 23, 2026, Omni closed a $120 million Series C at a $1.51 billion valuation, led by ICONIQ with participation from Theory Ventures, First Round Capital, Redpoint Ventures, and GV. The round includes a $30 million employee tender offer and represents a 2.3x step-up from the company's $650 million valuation only thirteen months earlier.

The headline funding number is not the story. The story is the growth underneath it: revenue tripled year-to-date, grew 4x year-over-year, and the company reached profitability last month. Customers like BambooHR (Elite Analytics serves over 100,000 users via Omni), Mercury, Checkr, dbt Labs, Pendo, Synthesia, and Guitar Center are consolidating the BI stack they bought between 2018 and 2023 onto a single platform.

For CIOs, CDOs, and CFOs staring at duplicate Tableau, Looker, and Power BI invoices—and newer invoices for a half-dozen experimental AI data tools—Omni's round is a specific signal. The enterprise analytics stack is consolidating, and the consolidation layer is the governed semantic model.

The People Matter: Why Ex-Looker Founders Pulled This Off

CEO Colin Zima was Looker's Chief Analytics Officer and VP of Product before Google's acquisition. Co-founders Jamie Davidson and Chris Merrick are Princeton graduates who reconnected post-acquisition. All three watched from inside Google what happens when a modern data product gets absorbed into a hyperscaler—product velocity slows, customer focus drifts, and strategic priorities shift to the parent's agenda.

That experience is the founding thesis of Omni. And it is also the single biggest reason enterprise data leaders should take the company seriously. The team has already built and shipped one of the most influential BI products of the last decade. They know exactly what to do differently.

Zima's framing on AI is unusually sober for a 2026 data-infrastructure CEO: "AI is an actual advantage for us rather than something ripping the industry apart." Translated: AI isn't replacing the semantic layer. AI is the reason enterprises finally need one.

The Problem Every CIO Is Actually Trying to Solve

The enterprise data problem in 2026 is not storage. It is not visualization. It is not model quality.

It is translation.

Fortune 500 companies spent a decade investing in cloud data warehouses (Snowflake, BigQuery, Databricks, Redshift), then another half-decade investing in BI tools (Tableau, Looker, Power BI, ThoughtSpot), and the last two years investing in AI agents (ChatGPT Enterprise, Claude for Work, Gemini Enterprise, Copilot). Each layer solved a real problem. None solved the deepest one.

When a CFO asks "what is our gross margin for the Northeast region last quarter, adjusted for the new bundling SKUs?", the data exists. The answer requires:

  1. Knowing which table has revenue (which warehouse, which dataset, which schema)
  2. Knowing which fields define gross margin—and whose definition
  3. Knowing how "Northeast region" is defined (the sales org, the operations org, and finance all define it differently)
  4. Knowing which bundling SKUs to exclude, using which policy
  5. Applying the right currency conversion and revenue recognition rules
  6. Respecting the user's permissions and data classification

A raw LLM cannot do this. It will hallucinate—confidently. A dashboard can answer one version of the question but breaks the moment definitions change. A BI tool can define metrics, but only within its own silo; the marketing BI layer does not know the finance BI layer exists.

The semantic layer is the translation layer: a governed, versioned rulebook that defines every metric, every dimension, every access policy, and every calculation—once—and exposes it to every consumer: dashboards, spreadsheets, SQL, and AI agents.

This is the product category Omni now leads.

What Omni Actually Ships (for CTOs)

For technical leaders evaluating Omni against Cube, Snowflake's Cortex Analyst, Databricks Genie, dbt Semantic Layer, and legacy BI vendors, the architecture is specific.

1. A single semantic model powers every surface. Dashboards, Excel-style workbooks, spreadsheets, ad-hoc SQL, and AI chat queries all resolve against the same metrics, permissions, and definitions. This is the foundational design choice. Most competitors treat AI as a feature bolted on top of existing BI. Omni treats the semantic layer as the core and the surfaces as peripherals.

2. Native AI agent integration. Users can query Omni's semantic layer through Claude, ChatGPT, Cursor, and VS Code. The agent never touches raw warehouse tables. It consumes the governed layer. That is what keeps the hallucination rate low on enterprise questions—and what makes governance actually hold.

3. Deep warehouse integrations. Snowflake, Google BigQuery, Databricks, Amazon Redshift, Postgres, and ClickHouse. Omni is not building a warehouse. It is sitting above yours.

4. Embeddable analytics. BambooHR's "Elite Analytics" product is Omni embedded. The fact that Omni can power a SaaS vendor's entire customer-facing analytics surface is the hardest technical credential in BI—because embedded analytics demand multi-tenancy, white-labeling, row-level security, and performance at scale simultaneously.

5. Profitability. The last line item matters because it tells you Omni is not burning capital to win the semantic-layer wars. It is already self-sustaining and using the $120M to accelerate, not to survive.

The design choice that runs through the whole product: business logic lives in the semantic layer, not in the agent. The LLM interprets intent and generates output. Omni enforces definitions, permissions, and correctness. That separation is what makes AI answers trustworthy.

The BI Consolidation Math (for CFOs)

For finance leaders, the Omni round is not just an AI story. It is a spend rationalization story.

Most Fortune 1000 companies are currently paying for:

  • Cloud data warehouse (Snowflake, BigQuery, or Databricks)—typically $1M–$50M annually
  • Enterprise BI platform (Tableau, Looker, or Power BI)—typically $500K–$10M annually
  • Secondary BI tools (ThoughtSpot, Sigma, Mode, Domo)—typically $200K–$2M each
  • dbt Cloud or similar transformation platform—$100K–$1M annually
  • Pilot AI analytics tools (TextQL, Hex agents, Copilot for Data, custom RAG)—$50K–$500K each
  • Data catalog and governance tools (Collibra, Alation)—$250K–$2M annually
  • Consulting firms to stitch them together—open-ended

Omni's pitch is that one governed semantic layer, properly deployed, eliminates 40–60% of that stack. The warehouse stays. The transformation layer stays. Everything above gets consolidated—BI, embedded analytics, AI data queries, metric governance—into a single platform.
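As a back-of-envelope check on that pitch, take the midpoint of each illustrative line item above the warehouse and transformation layers (the ranges listed earlier, in $M per year) and apply the 40–60% consolidation claim:

```python
# Illustrative annual spend ranges from the list above, in $M.
# Midpoints are a rough stand-in for a real invoice review.
stack_above_warehouse = {
    "enterprise_bi":      (0.5, 10.0),   # Tableau / Looker / Power BI
    "secondary_bi":       (0.2, 2.0),    # per extra tool
    "ai_pilot_tools":     (0.05, 0.5),   # per pilot
    "catalog_governance": (0.25, 2.0),   # Collibra / Alation
}

midpoints = {k: (lo + hi) / 2 for k, (lo, hi) in stack_above_warehouse.items()}
total = sum(midpoints.values())

for pct in (0.40, 0.60):
    print(f"{pct:.0%} consolidation: ${total * pct:.2f}M/yr of ${total:.2f}M")
```

Even at the conservative end of the ranges, and with only one secondary tool and one pilot counted, the consolidation target is measured in millions per year, which is why this lands on the CFO's desk and not just the CDO's.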

That is the bet behind 4x revenue growth. Enterprises in the middle of a 2026 cost review are not adding tools. They are cutting duplicates. The vendor that can absorb three to five line items from the old stack wins.

The cost of not consolidating is also worth naming. McKinsey's 2025 enterprise AI research identifies data access and data quality as the biggest reasons AI pilots fail to reach production. Gartner has repeatedly found that 80% of enterprise data remains inaccessible for AI use cases. Those numbers do not get fixed by buying more models. They get fixed by getting the translation layer right.

The Competitive Picture, Honestly

The semantic-layer market is crowded and the marketing is noisier than the technical reality.

The warehouse-native players: Snowflake Cortex Analyst, Databricks Genie. These are natural home-team options for customers already deep in one warehouse. The tradeoff is lock-in. A multi-warehouse enterprise (and there are many) cannot standardize on a semantic layer owned by one warehouse vendor.

The open-source / neutral players: Cube, dbt Semantic Layer, MetricFlow. Strong developer traction, strong governance primitives, weaker on the presentation layer. Good choice for teams with mature data platforms and strong engineering cultures.

The legacy BI incumbents: Tableau, Power BI, Looker, Qlik, ThoughtSpot. All are adding AI. All are wrestling with the architectural problem that their semantic logic lives inside the BI tool, not in a reusable layer. Retrofitting a governed semantic model onto a BI product designed in 2012 is extraordinarily hard.

The AI-native challengers: Hex, Julius, Hyperbound, TextQL (focused on on-prem VPCs), Sigma AI, Mode AI. Fast-moving, developer-loved, but still proving they can satisfy enterprise governance, security, and permission requirements.

OpenAI Frontier: The OpenAI enterprise analytics entrant. The threat is real—OpenAI has distribution. The weakness is that enterprises do not want their semantic layer owned by a foundation-model vendor who also sells the agents that consume it.

Omni's bet is the neutral middle: warehouse-agnostic, model-agnostic, surface-agnostic. Works with Snowflake or Databricks or BigQuery. Works with Claude or ChatGPT or Gemini. Powers dashboards or spreadsheets or chat. That positioning is what ICONIQ underwrote at $1.5 billion.

The Decision Framework

For CIOs, CDOs, and CFOs evaluating where to place the 2026 analytics bet, the question is not "should we buy Omni specifically?" It is "how do we adopt a semantic-layer architecture, regardless of vendor?"

If you are running 3+ BI tools and 2+ pilot AI analytics products: Run an explicit consolidation exercise. Identify which metrics are defined in which tool, which definitions disagree, and what it would take to move to a single governed layer. Omni, Cube, and the warehouse-native options are all legitimate landing spots. Do the math.

If you are all-in on one warehouse: The native option (Cortex Analyst for Snowflake, Genie for Databricks, BigQuery's AI tooling) will integrate most tightly and cost the least. Lock-in is real and should be priced into the decision.

If you are in a regulated industry (financial services, healthcare, government): Governed semantic layers are a procurement accelerant. Auditable metric definitions, row-level permissions, and access policies that flow from one layer to every downstream surface are exactly what your GRC team will ask for. Lead with governance, not AI features.

If you are a SaaS company with embedded analytics: BambooHR's deployment is the reference architecture. Omni handles embedded analytics credibly at scale. Very few vendors do. Evaluate accordingly.
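The simultaneous requirements can be made concrete with a toy version of tenant isolation plus row-level security. This is illustrative only, not BambooHR's or Omni's implementation; production systems use parameterized queries and the platform's policy engine, never string concatenation:

```python
def scope_query(base_sql: str, tenant_id: str, allowed_regions: list) -> str:
    """Wrap a customer-facing query so it sees only its own tenant's
    rows, filtered to the viewing user's permitted regions.
    Toy example: real systems bind these as query parameters."""
    regions = ", ".join(f"'{r}'" for r in allowed_regions)
    return (
        f"SELECT * FROM ({base_sql}) q "
        f"WHERE q.tenant_id = '{tenant_id}' AND q.region IN ({regions})"
    )

# Every embedded dashboard query passes through the same wrapper:
sql = scope_query("SELECT region, headcount FROM hr_stats", "acme", ["us", "eu"])
```

The hard part is not this wrapper; it is applying it uniformly across thousands of tenants, white-labeled UIs, and AI-generated queries without a single code path that skips it.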

Regardless of vendor choice: Separate business logic from the AI model. The semantic layer owns definitions. The LLM interprets intent. This is the architectural principle that survives the next three model releases and the next three BI vendor contracts. Build for it.

The Bottom Line

Omni's $120 million round is not a bet on a new analytics tool. It is a bet on a structural change in how enterprises organize data for AI.

In 2024, enterprises bought models. In 2025, they bought agents and orchestration. In 2026, they are discovering that neither works without a governed semantic layer underneath—and the companies that figure that out first are the ones whose AI pilots are shipping to production.

The ex-Looker founders spent four years watching from inside Google what happens when a product team loses the freedom to iterate quickly on an enterprise data problem. The product they're building now is the answer to a specific question: what should Looker have become if it had not been acquired?

The answer, apparently, is worth $1.5 billion.

For enterprise data leaders, the harder question is not "is Omni the winner?"—it is "is your current BI stack going to survive the next 24 months?" If the answer involves four tools, three definitions of "revenue," and an AI agent that keeps hallucinating on last quarter's numbers, the stack has a shelf life. The vendor that consolidates it takes the budget. Omni just raised $120 million to be that vendor.


LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.
