Investment

OpenAI Commits $20B+ to Cerebras Chips Over Three Years, Receives Equity Stake Up to 10%

By The Tech Room Editorial Team
[Image: OpenAI and Cerebras collaboration concept, showing a wafer-scale chip and data center infrastructure]

OpenAI has agreed to spend more than $20 billion over three years on servers powered by Cerebras chips — more than double the $10 billion deal the two companies announced in January 2026. The expanded agreement was reported by The Information on April 17, citing sources familiar with the deal terms. Under the arrangement, OpenAI will also receive warrants for a minority equity stake in Cerebras, with the stake potentially growing to as much as 10% as OpenAI's cumulative spending rises over the contract period.

OpenAI will additionally provide Cerebras with approximately $1 billion to help fund data center development for running OpenAI's AI products, meaning the total financial commitment over three years could reach $30 billion when all components are included. Cerebras, which designs wafer-scale AI chips dramatically larger than conventional GPU dies, has positioned itself as an alternative inference compute platform capable of running large language models at lower latency and cost per token than GPU-based infrastructure. For OpenAI, the deal represents a significant diversification away from NVIDIA, which currently supplies the vast majority of the company's training and inference compute.

The deal is central to Cerebras' plans to go public in the second quarter of 2026. The Sunnyvale-based chipmaker, last valued at $23.1 billion in private markets, is targeting a public listing at approximately $35 billion and plans to raise $3 billion in the offering. The equity stake and financial commitments from OpenAI are expected to anchor the IPO prospectus as evidence of sustained revenue. The deal also signals a broader trend of AI labs securing chip supply outside NVIDIA's ecosystem — NVIDIA's Blackwell and Vera Rubin platforms face persistent capacity constraints, making alternative inference suppliers like Cerebras increasingly attractive for latency-sensitive production workloads.

Sources

The Information, MarketScreener

The Tech Room Editorial Team

Expert analysis covering semiconductors, AI, and gaming.
