Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant, writes ‘The New York Times’.

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off. But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons. “This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.
Full report: A.I.-controlled killer drones become reality.