When OpenAI and Mattel announced a partnership earlier this month, there was an implicit recognition of the risks: the first toys powered by artificial intelligence would not be for children under 13. Another partnership last week came with seemingly fewer caveats. OpenAI separately revealed that it had won its first Pentagon contract, a $200mn pilot programme to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” according to the US Department of Defense.

That a major tech company could launch military work with so little public scrutiny epitomises a shift: the national security application of everyday apps has in effect become a given. Armed with narratives about how they have supercharged Israel and Ukraine in their wars, some tech companies have framed this as the new patriotism, without a conversation about whether it should be happening in the first place, let alone how to ensure that ethics and safety are prioritised.

Silicon Valley and the Pentagon have always been intertwined, but this is OpenAI’s first step into military contracting. The company has been building a national security team with alumni of the Biden administration, and only last year did it quietly remove a ban on using its apps for such things as weapons development and “warfare.” By the end of 2024, OpenAI had partnered with Anduril, the Maga-aligned mega-startup headed by Palmer Luckey.
Full opinion: Silicon Valley firms are beefing up their national security teams, but scrutiny is sorely needed.