Experts warn DeepSeek is 11 times more dangerous than other AI chatbots
Another research found major security and safety risks with the new Chinese AI chatbot. Here's all you need to know.
DeepSeek’s R1 AI is 11 times more likely to be exploited by cybercriminals than other AI models – whether that's by producing harmful content or being vulnerable to manipulation.
This is a worrying finding from new research conducted by Enkrypt AI, an AI security and compliance platform. This security warning adds to the ongoing concerns following last week's data breach that exposed over one million records.
China-developed DeepSeek sent shockwaves throughout the AI world since its January 20 release. About 12 million curious users worldwide downloaded the new AI chatbot in the space of two days, marking a growth even faster than ChatGPT. Widespread privacy and security concerns have, however, prompted quite a few countries to either begin investigating or banning, in some capacity, the new tool.
Harmful content, malware and manipulation
The team at Enkrypt AI performed a series of tests to evaluate DeepSeek's security vulnerabilities, such as malware, data breaches, and injection attacks, as well as its ethical risks.
The investigation found the ChatGPT rival "to be highly biased and susceptible to generating insecure code," experts noted, and that DeepSeek's model is vulnerable to third-party manipulation, allowing criminals to use it for developing chemical, biological, and cybersecurity weapons.
Nearly half of the tests conducted (45%) bypassed safety protocols in place, generating criminal planning guides, illegal weapons information, and terrorist propaganda.
Worse still, 78% of the cybersecurity checks successfully tricked DeepSeek-R1 into generating insecure or malicious codes. These included malware, trojans, and other exploits. Overall, experts found the model to be 4.5 times more likely than its Open-AI counterpart to be manipulated by cybercriminals to create dangerous hacking tools.
"Our research findings reveal major security and safety gaps that cannot be ignored," said Sahil Agarwal, CEO of Enkrypt AI, commenting on the findings. "Robust safeguards – including guardrails and continuous monitoring – are essential to prevent harmful misuse."