
As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off. But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.

Full opinion: Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.