Start your day with intelligence. Get The OODA Daily Pulse.
As a self-driving car cruises down a street, it uses cameras and sensors to perceive its environment, taking in information on pedestrians, traffic lights, and street signs. Artificial intelligence (AI) then processes that visual information so the car can navigate safely. But the same systems that allow a car to read and respond to the words on a street sign might expose that car to hijacking attacks from bad actors. Text placed on signs, posters, or other objects can be read by an AI’s perception system and treated as instructions, potentially allowing attackers to influence an autonomous system’s behavior through the real world. New research led by UC Santa Cruz Professor of Computer Science and Engineering (CSE) Alvaro Cardenas and Assistant Professor of CSE Cihang Xie presents the first academic exploration of these threats, called environmental indirect prompt injection attacks, against embodied AI systems.
Full report : Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows.