Following suspicions of industrial espionage by the Chinese company DeepSeek, OpenAI has drastically strengthened its security measures.
OpenAI has overhauled its internal security. The move reportedly follows the launch of DeepSeek’s R1 model, which OpenAI alleges was built by using distillation to extract knowledge from its own models. Since then, strict measures have been implemented to better protect the company’s intellectual property.
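For context: distillation generally means training a smaller ‘student’ model to imitate the outputs of a stronger ‘teacher’ model, for instance by querying it through an API, rather than copying its weights. The PyTorch sketch below illustrates the basic mechanism with toy models; it is a generic illustration of the technique, not a depiction of DeepSeek’s or OpenAI’s actual systems.

```python
# Minimal knowledge-distillation sketch: a "student" learns to match a
# "teacher's" output distribution. Both models and the data are toy
# placeholders, not anything resembling R1 or OpenAI's systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution to expose more signal

for step in range(100):
    x = torch.randn(32, 16)  # stand-in for queries sent to the teacher
    with torch.no_grad():
        teacher_logits = teacher(x)  # in practice: responses from an API
    student_logits = student(x)
    # KL divergence between the softened distributions is the distillation loss
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that the student never needs the teacher’s weights or training data: the teacher’s responses alone carry enough signal to transfer capability, which is why API access to a model is itself considered sensitive.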
Strict Access Rules and Rigorous Protection
Internal information is now shielded through a system called ‘tenting’: employees may only discuss a project with colleagues who explicitly belong to that ‘tent’, according to the Financial Times. During the development of the secret model “Strawberry” (the project name for what became o1), staff had to verify that colleagues were part of the same project before discussing it, even in shared office spaces.
Additionally, sensitive models are developed in offline, isolated environments, so they cannot be exposed via the internet. A new policy ensures that nothing connects to the internet without explicit authorization.
Biometric Security and Screening
Physical access to parts of the office now requires biometric scans, such as fingerprints, and data centers have received additional security. Furthermore, employees and job applicants are vetted more thoroughly, as many Silicon Valley companies are on alert for foreign interference.
Although OpenAI emphasizes that these measures are not the result of a single incident, it is clear that geopolitical tensions between the US and China are prompting the company to better protect its models and intellectual property.
