DeepSeek – The Hidden Security Risks of AI

What is DeepSeek?

DeepSeek, a Chinese AI startup, has been making headlines recently for its impressive AI models. In particular, its DeepSeek-R1 reasoning model is notable for delivering performance comparable to top AI systems like OpenAI’s GPT-4 while being more cost-effective and efficient.

The Explosive Growth of DeepSeek

On January 27th, DeepSeek hit #1 in the Google Play Store and sent stocks tumbling, wiping nearly $600 billion off Nvidia’s market value, primarily because of the impressive efficiency of DeepSeek-R1 compared to similar models like ChatGPT.

DeepSeek’s explosive growth has raised security concerns: the company is headquartered in China, and questions remain about the safety of the model, the accuracy of its responses, and the security policies of the company behind it.

Timeline Since DeepSeek’s Launch

  • January 27 – The US stock market plummeted, losing more than $1 trillion in value, led by Nvidia’s $600 billion loss.
  • January 28 – The US Navy banned DeepSeek over national security concerns.
  • January 29 – Wiz researchers discovered a publicly exposed ClickHouse database containing more than a million lines of log streams with chat history, secret keys, backend details, and other highly sensitive information, such as API secrets and operational metadata (see the sketch after this timeline).
  • January 29 – Italy removed DeepSeek from the Google and Apple app stores after the country reviewed the company’s data privacy policy.
  • January 31 – Enkrypt AI confirmed that DeepSeek’s R1 model is 11x more likely to generate harmful outputs than OpenAI’s o1, 3x more biased than Claude-3 Opus, and 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s o1 and Claude-3 Opus.
  • February 4 – Wallarm successfully bypassed DeepSeek’s R1 model’s restrictions, enabling access to prohibited content, hidden system parameters, and unauthorized technical data. Additionally, Wallarm was able to extract information about the models used for training and distillation, leading to speculation that OpenAI’s technology may have contributed to DeepSeek’s knowledge base.
  • February 7 – The US House of Representatives pushed to ban DeepSeek from all government devices through the proposed “No DeepSeek on Government Devices Act.”
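
The January 29 Wiz finding is a reminder of how easily an analytics database can be left open to the internet. Below is a minimal Python sketch of the kind of check a security team might run against infrastructure it owns: it asks a ClickHouse HTTP endpoint (default port 8123) to run a trivial query with no credentials. The hostname is a placeholder, and this illustrates the general class of misconfiguration, not a reproduction of Wiz’s research.

```python
import urllib.error
import urllib.parse
import urllib.request


def clickhouse_is_open(host: str, port: int = 8123, timeout: float = 5.0) -> bool:
    """Return True if the ClickHouse HTTP interface executes a query without credentials."""
    query = urllib.parse.urlencode({"query": "SELECT 1"})
    url = f"http://{host}:{port}/?{query}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # An open instance answers "SELECT 1" with the literal value 1.
            return resp.status == 200 and resp.read().strip() == b"1"
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    # Placeholder hostname: only test systems you own or are authorized to assess.
    print(clickhouse_is_open("clickhouse.internal.example.com"))
```

If a check like this succeeds from outside your network, the database is effectively public, which is exactly the exposure reported in DeepSeek’s case.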

Implications for Cybersecurity Professionals

The timeline of events with DeepSeek provides a cautionary tale about how quickly AI usage can become a serious security issue. While AI has the potential to boost worker productivity, cybersecurity teams need to be diligent about understanding their company’s AI usage and the security policies of the companies providing the AI models. Wing Security recommends the following regarding DeepSeek:

  • Leverage Wing’s Free SaaS Discovery tool to identify which users are accessing DeepSeek with corporate credentials, including through Shadow IT (a generic log-monitoring sketch follows this list).
  • Classify DeepSeek as Forbidden to trigger employee notifications and removal of the application.
  • Use out-of-the-box automated policies to continue employee education and block access to DeepSeek.
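
For teams that do not yet have a SaaS discovery platform, the same idea can be approximated from existing telemetry. The sketch below is not Wing Security’s product; it assumes a hypothetical CSV web-proxy log with "user" and "host" columns and simply counts requests to DeepSeek domains per user, giving a rough view of who is using the service.

```python
import csv
from collections import Counter

# Known DeepSeek parent domain; subdomains such as chat.deepseek.com and
# api.deepseek.com will match via the suffix check below.
DEEPSEEK_DOMAINS = ("deepseek.com",)


def deepseek_users(proxy_log_csv: str) -> Counter:
    """Count requests to DeepSeek domains per user from a CSV proxy log
    with 'user' and 'host' columns (hypothetical schema)."""
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in DEEPSEEK_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits


if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder for an export from your proxy or gateway.
    for user, count in deepseek_users("proxy_log.csv").most_common():
        print(f"{user}: {count} requests")
```

A dedicated discovery tool will catch far more (OAuth grants, browser extensions, embedded AI features), but even a crude log scan like this surfaces the most obvious Shadow IT usage.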

Summary

While there are many concerns around DeepSeek’s rise, it is critical to note that DeepSeek is only one of thousands of AI applications in the wild. Per Wing Security research, over 15,000 applications have already embedded AI into their products. Their terms and conditions change frequently, and it is not unusual for those models to carry auto-consent policies that allow them to train on any data entered into the AI tool. Cybersecurity teams need to be proactive: establish AI policies and use tools that continually monitor AI usage within the company.

Protect Critical Data.

Secure your SaaS