SiFive's $400M Bet on RISC-V Reshapes AI Chips

$400M at $3.65B valuation with NVIDIA backing. For CTOs: why open-source silicon is becoming the third path for AI data center architecture in 2026.

By Rajesh Beri·April 11, 2026·8 min read

THE DAILY BRIEF

Tags: AI Infrastructure · AI Hardware · NVIDIA · Enterprise AI · Venture Capital · Data Centers


Something shifted in the chip industry this week that most enterprise leaders won't notice until it hits their infrastructure budgets.

SiFive, the company founded by the Berkeley engineers who created the RISC-V open-source architecture, closed a $400 million Series G round on April 9. The valuation: $3.65 billion. The round was oversubscribed. And the investor list reads like a who's who of people making consequential bets on AI infrastructure — Atreides Management led, with NVIDIA, Apollo Global Management, D1 Capital Partners, Point72 Turion, T. Rowe Price, Capital Group, Prosperity7 Ventures, and Sutter Hill Ventures all participating.

CEO Patrick Little called it the company's final private round before an IPO.

That alone would make this a significant funding event. What makes it a strategic inflection point is the context around it.

Why NVIDIA Is Backing Its Own Competition

NVIDIA investing in a CPU company might seem counterintuitive. It's not. In January 2026, SiFive announced integration with NVLink Fusion, NVIDIA's high-bandwidth interconnect technology. That means RISC-V processors built on SiFive's IP can connect directly to NVIDIA GPUs — coherently, with low latency, at the kind of bandwidth that matters for large-scale AI inference.

This isn't NVIDIA being charitable. It's NVIDIA ensuring that the CPU sitting next to its GPUs in next-generation data centers isn't controlled by a competitor. With Arm launching its own AGI CPU in March 2026 — with Meta and OpenAI as debut customers — the dynamics of the chip IP market have fundamentally changed. Arm went from neutral licensor to hardware vendor. That kind of vertical integration historically pushes buyers toward alternatives.

NVIDIA's investment in SiFive is a hedge. A very deliberate one. It positions RISC-V as the open-standard CPU architecture that complements NVIDIA's GPU ecosystem, particularly around the upcoming Vera Rubin platform targeting agentic AI workloads.

For enterprise infrastructure leaders evaluating the next generation of AI compute, this is the signal to pay attention to.

The Three Paths to AI Data Center Silicon

The AI data center silicon landscape is splitting into three distinct paths, and understanding which one your vendors are on matters more than most CTOs realize.

Path one: proprietary custom silicon. Amazon committed $50 billion to its Trainium chip program in its April 2026 shareholder letter. Google, Anthropic, and Broadcom struck a deal for custom AI ASICs. These are purpose-built chips designed for specific workloads, controlled end-to-end by the hyperscaler.

Path two: Arm-based designs. Arm's new AGI CPU represents its push from pure IP licensing into branded hardware. Microsoft, Google, and AWS all ship Arm-based server chips (Cobalt, Axion, Graviton). But Arm's new vertical ambitions create a conflict-of-interest dynamic that licensees are watching carefully.

Path three: RISC-V. Open standard. No per-unit royalties. No single-company control. SiFive's pitch is that hyperscale customers get fully customizable CPU IP without the architectural lock-in that comes with proprietary options. More than 500 semiconductor designs now use SiFive's IP, with 10 billion RISC-V cores shipped to date.
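To make the royalty point concrete, here is a toy licensing cost model. All numbers are hypothetical, purely for illustration; they are not Arm's or SiFive's actual commercial terms.

```python
def licensing_cost(units: int, royalty_per_unit: float, upfront_fee: float) -> float:
    """Total architecture licensing cost over a chip program's lifetime (toy model)."""
    return upfront_fee + units * royalty_per_unit

# Illustrative numbers only: a per-unit royalty model vs. a flat IP fee model.
per_unit_model = licensing_cost(units=10_000_000, royalty_per_unit=0.50, upfront_fee=5_000_000)
flat_fee_model = licensing_cost(units=10_000_000, royalty_per_unit=0.0, upfront_fee=8_000_000)

print(f"per-unit royalty model: ${per_unit_model:,.0f}")  # $10,000,000
print(f"flat IP fee model:      ${flat_fee_model:,.0f}")  # $8,000,000
```

The shapes of the two curves are the point: per-unit royalties scale with shipment volume, so at hyperscale volumes even a small per-chip fee dominates the upfront license.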

The $400 million round is the market's bet that path three is no longer the scrappy underdog. It's becoming a production-grade alternative.

What SiFive Is Actually Building

Let's get specific about what this capital is funding, because the details matter for enterprise planning.

SiFive's flagship CPU design, the Performance P870-D, supports up to 256 cores per server processor. It includes interrupt controllers for workload prioritization, data protection accelerators for encryption tasks, and a cluster accelerator port that enables direct integration with custom GPUs or other accelerators. That last feature is what makes the NVLink Fusion partnership consequential — it's not theoretical interoperability, it's a hardware-level integration path.

On the AI side, SiFive's XM Gen 2 accelerator (announced September 2025) is optimized for matrix processing — the core mathematical operation behind transformer-based AI models. Each four-core processing cluster delivers 16 trillion operations per second. These can be composed into larger configurations for inference workloads.
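As a rough capacity-planning sketch, the per-cluster figure composes linearly in the ideal case. This ignores interconnect overhead, memory bandwidth limits, and real-world utilization, so treat it as an upper bound, not a benchmark.

```python
CORES_PER_CLUSTER = 4
TOPS_PER_CLUSTER = 16  # XM Gen 2: 16 trillion operations/sec per four-core cluster

def xm_config(clusters: int) -> dict:
    """Ideal peak numbers for a composed XM Gen 2 configuration (no overhead modeled)."""
    return {
        "cores": clusters * CORES_PER_CLUSTER,
        "peak_tops": clusters * TOPS_PER_CLUSTER,
    }

print(xm_config(8))  # {'cores': 32, 'peak_tops': 128}
```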

The $400 million breaks down across three areas. First, R&D on high-performance scalar, vector, and matrix RISC-V CPU IP. Second, software ecosystem development — including porting CUDA, Red Hat Enterprise Linux, and Ubuntu to RISC-V. Third, customer enablement engineering, the hands-on work of helping hyperscalers integrate SiFive IP into their silicon programs.

That software porting work is worth flagging. In enterprise data centers, hardware performance is necessary but not sufficient. If you can't run your existing software stack, the fastest chip in the world is useless. SiFive's investment in CUDA compatibility and major Linux distribution support is the unsexy work that actually makes RISC-V deployable at scale.

The Arm Conflict Creates the Opening

Here's the competitive dynamic that makes this moment different from previous RISC-V hype cycles.

Arm's March 2026 decision to launch its own branded CPU — the AGI chip — fundamentally altered its relationship with licensees. For 35 years, Arm was the Switzerland of chip architectures: everyone licensed from Arm because Arm didn't compete with anyone directly.

That era is over. Meta and OpenAI bought the AGI CPU. But every other Arm licensee now has to ask themselves whether their chip IP supplier is also a competitor. This is the same dynamic that drove the cloud industry toward open-source databases and the enterprise software industry toward Kubernetes. When your supplier starts competing with you, you look for open alternatives.

SiFive's total funding now exceeds $760 million. Intel offered more than $2 billion to acquire the company in 2021, but the deal collapsed over valuation disagreements. Intel has since pivoted to becoming a foundry partner for Elon Musk's Terafab, a $25 billion AI compute facility — leaving the RISC-V IP market without its most obvious potential acquirer.

The timing of NVIDIA's investment makes even more sense in this context. NVIDIA doesn't want Arm controlling the CPU architecture ecosystem that its GPUs depend on. SiFive gives NVIDIA an insurance policy — an open-standard CPU partner that can't be acquired by a competitor or turned into a proprietary competitor.

What This Means for Enterprise Budgets

If you're a CTO or VP of Infrastructure planning 2027-2028 data center buildouts, here's what to watch.

Server CPU diversity is accelerating. The days of Intel and AMD as your only options are already over. Arm-based chips from AWS, Google, and Microsoft proved the market accepts architectural diversity. RISC-V is the next wave. Within two years, expect major cloud providers to offer RISC-V-based instance types alongside x86 and Arm.

Chip procurement negotiating leverage is increasing. More architectures mean more competition, which means better pricing. If your cloud provider only offers proprietary chip options, RISC-V alternatives give you leverage in contract negotiations.

AI inference costs will drop. Purpose-built RISC-V+GPU combinations (enabled by NVLink Fusion) could deliver better price-performance for inference workloads than general-purpose CPU+GPU pairings. The specifics will depend on workload profiles, but the direction is clear: more competition drives costs down.
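One way to reason about that price-performance claim is cost per million tokens served. The sketch below uses entirely hypothetical instance profiles (hourly rates and throughput are made up for illustration); the takeaway is the formula, not the numbers.

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """USD per 1M tokens at full utilization (toy model; ignores batching and idle time)."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical instance profiles, for illustration only.
general_purpose = cost_per_million_tokens(hourly_rate_usd=4.00, tokens_per_second=2_000)
purpose_built = cost_per_million_tokens(hourly_rate_usd=3.20, tokens_per_second=2_500)

print(f"general-purpose pairing: ${general_purpose:.3f}/M tokens")  # $0.556/M tokens
print(f"purpose-built pairing:   ${purpose_built:.3f}/M tokens")    # $0.356/M tokens
```

Run the same calculation against your own workload profiles when providers publish real RISC-V instance pricing; the formula is where the negotiating leverage comes from.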

Software compatibility is the bottleneck, not hardware. Watch SiFive's progress on CUDA porting and Linux distribution support. Enterprise adoption will follow software readiness, not hardware performance benchmarks. If your workloads run on RHEL or Ubuntu today, the RISC-V transition becomes viable once those distributions ship stable RISC-V builds.

SiFive is targeting what it calls a $100 billion-plus addressable market in data center chips — driven by the agentic AI infrastructure buildout that has every major hyperscaler committing tens of billions of dollars annually to compute expansion.

The IPO Signal

Patrick Little telling Reuters this is SiFive's "final private round" before an IPO isn't just corporate posturing. At $3.65 billion and with an investor base that includes NVIDIA, Apollo, and T. Rowe Price, the company has the institutional backing and valuation trajectory to go public in 2026 or early 2027.

A SiFive IPO would make it the first pure-play RISC-V company to hit public markets. It would also provide a public benchmark for how the market values open-source silicon IP, a data point that matters for every enterprise leader evaluating long-term chip architecture strategy.

For budget planners, the IPO creates a pricing transparency event. Public companies disclose revenue, margins, and customer concentration in ways that private companies don't. That information will help enterprise buyers make more informed decisions about their chip architecture roadmaps.

The Bottom Line

SiFive's $400 million round isn't just another AI funding headline. It's the moment RISC-V went from academic curiosity to enterprise-grade infrastructure play. NVIDIA's backing, Arm's self-imposed competitive conflict, and the hyperscaler demand for open-standard alternatives created a window that SiFive is now funded to walk through.

The companies building AI data centers over the next three years have a new option on the table. The question isn't whether RISC-V will matter — 10 billion shipped cores say it already does. The question is how fast it moves from embedded devices and consumer electronics into the server racks where enterprise AI actually runs.

Based on this round and its backers, the answer is: faster than most people expect.

— Rajesh

Related: OpenAI and Oracle Stargate Deal Collapse Explained


THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe to get these insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.
