OpenAI, Anthropic, and Google Form Joint Front Against Chinese AI Model Distillation
OpenAI, Anthropic, and Google are sharing intelligence through the Frontier Model Forum to detect and block "adversarial distillation" attempts by Chinese AI developers, Bloomberg reported on April 6, 2026. The Frontier Model Forum is the industry nonprofit the three companies co-founded with Microsoft in 2023. The collaboration marks a rare instance of direct cooperation among the three biggest U.S. AI labs on a shared competitive and national-security concern.
The effort follows Anthropic's earlier public accusation that DeepSeek, MiniMax, and Moonshot used fraudulent accounts to extract over 16 million exchanges from its models to train competing products — a technique known as model distillation. By systematically querying a frontier model at scale and using its outputs as training data, adversarial actors can approximate the performance of expensive proprietary models without the underlying research investment. Anthropic's detection of the scheme led to account terminations and the current intelligence-sharing initiative.
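The mechanics described above can be illustrated with a minimal sketch. Everything here is hypothetical: `teacher_model` stands in for calls to a proprietary frontier model's API, and the collected prompt-completion pairs represent the kind of dataset a student model would later be fine-tuned on. This is an illustration of the concept, not a reproduction of any actual attack tooling.

```python
# Hypothetical illustration of model distillation data collection:
# query a "teacher" model at scale and record each exchange as a
# supervised training example for a smaller "student" model.

def teacher_model(prompt: str) -> str:
    # Stand-in for an expensive proprietary model; a real distillation
    # pipeline would call the model's API here (the behavior Anthropic
    # says it detected across millions of exchanges).
    return f"answer to: {prompt}"

def collect_distillation_data(prompts):
    # Each (prompt, completion) pair becomes one training example.
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

dataset = collect_distillation_data(["What is 2+2?", "Define entropy."])
# A student model fine-tuned on `dataset` with a standard supervised
# objective would approximate the teacher's outputs on similar prompts,
# without the teacher's underlying pre-training investment.
print(len(dataset))
```

At sufficient scale, such a dataset lets the student mimic the teacher's behavior, which is why the labs treat large-volume querying through fraudulent accounts as an extraction attack rather than ordinary usage.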
The joint defensive posture signals growing concern among U.S. AI leaders about the competitive and national-security implications of model distillation. For frontier model companies, the value of their products lies partly in the billions of dollars and years of compute invested in pre-training — value that distillation attacks can partially extract at a fraction of the cost. The Frontier Model Forum's expanded role in coordinating anti-distillation detection represents a significant escalation in the AI cold war between U.S. and Chinese technology ecosystems, and could accelerate calls for regulatory frameworks governing how AI outputs can be used to train competing systems.
Sources
Bloomberg