Early red-teaming tests have exposed significant security vulnerabilities in DeepSeek’s R1 AI model. According to a report by Laura Caroli for the Center for Strategic and International Studies (CSIS), the R1 model is 11 times more likely to produce harmful content and 4.5 times more likely to generate insecure code than OpenAI’s o1 model.
The tests, designed to simulate adversarial attacks, showed that DeepSeek’s system is far more prone to producing dangerous outputs, including malicious code, than its competitor.
The findings, outlined in the report “DeepSeek: A Problem or an Opportunity for Europe?”, highlight the need for more robust security in AI development. While these vulnerabilities pose risks, they also present an opportunity for European AI firms to strengthen their technologies and enhance security measures.