Killer AI is on the minds of US Air Force leaders. In a presentation at a professional conference, an Air Force colonel who oversees AI testing described a military AI going rogue and killing its human operator in a simulation — a scenario he now says was hypothetical. After reports of the talk emerged Thursday, the colonel said he had misspoken and that the "simulation" he described was a "thought experiment" that never happened.

Speaking at a conference last week in London, Col. Tucker "Cinco" Hamilton, chief of the US Air Force's AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit. As an example, he described a simulation in which an AI-enabled drone was programmed to identify an enemy's surface-to-air missile (SAM) sites, with a human then required to sign off on any strikes.
Full story: Air Force colonel backtracks over his warning about how AI could go rogue and kill its human operators.