Chinese startup DeepSeek sent shockwaves across the technology landscape last month when it released a new, open-source AI model. It presented a ChatGPT-like AI model called R1, which has all the familiar abilities but operates at a fraction of the cost of OpenAI’s, Google’s or Meta’s popular AI models. The company claimed to have spent just US$5.6 million on computing power for its base model, compared with the hundreds of millions or billions of dollars US companies spend on their AI technologies.
US stocks took a major hit in the wake of the news. For example, Nvidia, the leading supplier of AI chips, fell nearly 17 percent and lost $588.8 billion in market value, while Meta and Alphabet (GOOGL), Google’s parent company, were down sharply.
The news also sparked a huge change in investments in non-technology companies on Wall Street. President Trump called the DeepSeek release a “wake-up call” for US technology firms, arguing that the latest developments in China’s AI industry may be “a positive” for the US.
The cyber security implications were just as substantial, triggering a wealth of headlines, research and reaction about the significance of the shock emergence of the open-source AI model.
In late January, DeepSeek announced “large-scale malicious attacks" on its services that disrupted users’ ability to register on the site. Meanwhile, multiple reports have highlighted significant security flaws in DeepSeek’s AI model.
DeepSeek comes under fire for security vulnerabilities
Multiple reports and research have highlighted significant security flaws in DeepSeek’s AI model.
Cloud security company Wiz uncovered a massive data exposure involving DeepSeek AI. According to the company, DeepSeek did not secure the database infrastructure of its services, leaving some data and chat histories accessible from the public internet with no password required. The researchers claimed they discovered the data “within minutes” of beginning their investigation, with the publicly accessible information allowing full control over database operations including the ability to access internal data.
What’s more, Cisco tested 50 jailbreaks against DeepSeek’s AI chatbot, with all of them succeeding. “DeepSeek R1 exhibited a 100 percent attack success rate, meaning it failed to block a single harmful prompt. This contrasts starkly with other leading models, which demonstrated at least partial resistance,” researchers wrote. The findings suggest that DeepSeek’s claimed cost-efficient training methods, including reinforcement learning, chain-of-thought self-evaluation and distillation may have compromised its safety mechanisms, they added. “Compared to other frontier models, DeepSeek R1 lacks robust guardrails, making it highly susceptible to algorithmic jailbreaking and potential misuse.”
Elsewhere, DeepSeek’s AI model performed poorly in WithSecure Consulting’s Spikee, a new AI security benchmark, while Enkrypt AI found that, compared to OpenAI’s o1 model, R1 is four-times more vulnerable to generating insecure code and 11-times more likely to create harmful outputs.
Experts reflect on the security risks of DeepSeek AI
“DeepSeek’s claims of remarkably low costs are causing a stir in the industry, but the market’s reaction is based on taking the company at its word,” says Mike Britton, chief information officer (CIO) of Abnormal Security. “Right now, much of the concern around DeepSeek is how it might threaten the current AI market with a competitive, cheaper alternative, but what’s also concerning, especially for the general public, is its potential for misuse.”
Bad actors are already using popular generative AI tools to automate their attacks, he added. “If they can gain access to even faster and cheaper AI tools, it could enable them to carry out sophisticated attacks at an unprecedented scale.”
Melissa Ruzzi, director, AI at security company, AppOmni also warns about DeepSeek user data being collected and sent back to China. “This means the Chinese government could potentially use DeepSeek’s AI models to spy on American citizens, acquire proprietary secrets and conduct influence campaigns. As the data is kept in China, it may not comply with data requirements from other countries, such as GDPR [General Data Protection Regulation].”
US companies should carefully consider all risks involved before deciding to use it and the model itself could already be biased to support agendas that could impact how users form opinions, Ruzzi says. “There are a series of vulnerabilities already uncovered that raise big concerns, especially around data breaches, which could impact users directly. The US Navy has already banned the use of DeepSeek, due to security and ethical concerns. We can take this as a sign that it is not safe for US companies to use it, and that individuals in the US should take caution if they decide to use it.”
One of the most important things chief information security officers (CISOs) need to know is around employee training and awareness, and continuous monitoring for DeepSeek use, she adds. “Additionally, the volume of AI-driven attacks may increase as one of the vulnerabilities on DeepSeek is jailbreaking, where attackers can bypass restrictions and force it to generate malicious outputs that can then be used in other attacks.”
As the AI arms race between the US and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic and technological supremacy, concludes Sahil Agarwal, CEO of Enkrypt AI. “DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool – one that cyber criminals, disinformation networks and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”