Under oath, Elon Musk confirmed xAI used distillation on OpenAI models to train Grok, calling it standard industry practice.
During his ongoing lawsuit against OpenAI, Elon Musk testified in California federal court that xAI used distillation — prompting competitor models to generate training data — to build Grok. When pressed, he said xAI used OpenAI models 'partly' in this way, framing it as a widespread industry norm. Musk also ranked current AI leaders as Anthropic first, then OpenAI, Google, and Chinese open-source models, placing xAI well behind. The admission carries legal and competitive weight given OpenAI's active enforcement against distillation by third parties.
Musk's admission confirms what developers have suspected: frontier models are trained on each other's outputs. This normalizes distillation as a technique but also means OpenAI and Anthropic's legal crackdown isn't hypothetical — it's targeted at real practices happening at scale, even inside major labs. If you're building fine-tuned or distilled models using outputs from commercial APIs, your terms-of-service exposure just got very real.
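To make the mechanics concrete, here is a minimal sketch of what API-based distillation looks like in practice: prompt a teacher model, then store the prompt/response pair as a supervised training example. The endpoint is OpenAI's public chat completions API; the model name, prompt, jq shaping, and output file are illustrative assumptions, not anyone's actual pipeline.

# Minimal distillation sketch: one teacher call becomes one JSONL training example.
# Assumes OPENAI_API_KEY is set and jq is installed; all names here are illustrative.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini",
       "messages": [{"role": "user", "content": "Explain TCP slow start in two sentences."}]}' \
  | jq -c '{prompt: "Explain TCP slow start in two sentences.", completion: .choices[0].message.content}' \
  >> distilled_pairs.jsonl

Loop that over a few hundred thousand prompts and you have the pipeline at issue: the output file is exactly the kind of fine-tuning corpus the providers' terms restrict.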
Pull the current OpenAI and Anthropic API terms of service this week and check whether your training pipeline uses model outputs as labels or fine-tuning data; if it does, you're operating in exactly the territory Musk just testified about.
Open a terminal and run: curl -sL https://openai.com/policies/terms-of-use | grep -iE 'train|distill|fine-tun'
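For the Anthropic half of that check, a companion command; the URL below points at Anthropic's commercial terms page as currently published and may move:

curl -sL https://www.anthropic.com/legal/commercial-terms | grep -iE 'train|distill|fine-tun'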