Last week, DeepSeek sent Silicon Valley into a panic by proving you could build powerful AI on a shoestring budget. In some respects, it was too good to be true. Recent testing has shown that DeepSeek's AI models are more vulnerable to manipulation than those of its more expensive competitors from Silicon Valley. That challenges the David-vs-Goliath narrative of "democratized" AI that has emerged from the company's breakthrough.

The billions of dollars that OpenAI, Alphabet Inc.'s Google, Microsoft Corp. and others have spent on the infrastructure of their own models look less like corporate bloat and more like the cost of pioneering the AI race and keeping the lead with more secure services. Businesses eager to try the cheap and cheerful AI tool should think twice before diving in.

LatticeFlow AI, a Swiss software firm that measures how well AI models comply with regulations, says that two versions of DeepSeek's R1 model rank lowest for cybersecurity among the leading systems it evaluated. It seems that when the Chinese company modified existing open-source models from Meta Platforms Inc. and Alibaba, known as Llama and Qwen, to make them more efficient, it may have broken some of those models' key safety features in the process.