Financial Times: OpenAI claims evidence that DeepSeek used its models for training

Reprinted from panewslab
01/29/2025 · PANews, January 29 — OpenAI said it has found evidence that Chinese artificial intelligence startup DeepSeek used its proprietary models for training.
The company said it has seen some evidence of "distillation", a technique developers use to obtain better performance from smaller models by training them on the outputs of larger, more powerful models, letting them achieve similar results on specific tasks at a lower cost.
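For readers unfamiliar with the technique, the sketch below is a minimal, generic illustration of knowledge distillation in the style of Hinton et al. (2015); it is not OpenAI's or DeepSeek's actual pipeline, and the model sizes, temperature, and data are placeholder assumptions.

```python
# Illustrative knowledge-distillation sketch (assumptions only, not any vendor's method):
# a small "student" model is trained to match the softened output distribution
# of a larger, frozen "teacher" model.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))  # larger model (frozen)
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))    # smaller, cheaper model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

for step in range(100):
    x = torch.randn(16, 32)                 # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)         # query the larger model for its outputs
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```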
OpenAI declined to comment further on the details of its evidence. Its terms of service stipulate that users may not "copy" any of its services or "use output to develop models that compete with OpenAI."
A person close to OpenAI said that "distillation" is a common practice in the industry and emphasized that the company offers developers a way to do this on its own platform, but added: "The issue is when you do it to create your own model for your own purposes."
US AI and crypto czar David Sacks also said, "There's a lot of evidence that what DeepSeek did here is distill knowledge out of OpenAI's models, and I don't think OpenAI is happy about this," although he did not provide evidence. (FT)