Start your day with intelligence. Get The OODA Daily Pulse.
Britain’s National Cyber Security Centre (NCSC) is warning of an apparently fundamental security flaw affecting large language models (LLMs) — the type of AI used by ChatGPT to conduct human-like conversations. Since the launch of ChatGPT last November, the bulk of security concerns regarding the technology have focused on its ability to produce human-like speech automatically. Today, criminals are now actively deploying their own versions to generate “remarkably persuasive” fraudulent emails. But aside from using LLM software properly for malicious ends, there are potential vulnerabilities arising directly from its use and integration with other systems — particularly when the technology is used to interface with databases or other components of a product. It’s known as a “prompt injection” attack, and the NCSC said that the problem may be fundamental. “Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction,” warned the agency. While some of the popular examples on social media of getting Bing to appear to have an existential crisis are largely amusing and cosmetic, this flaw could be more severe for commercial applications that include LLMs.
Full story : UK cyber agency warns of potentially fundamental flaw in AI technology.