Artificial intelligence continues to command the technological limelight, and rightly so: as we move into the final quarter of 2023, there is wide international interest in harnessing the power of AI. But with the excitement and anticipation come appropriate notes of caution from governments around the world, concerned that all of AI’s promise and potential has a dark flipside: it can be used as a tool by bad actors just as easily as by the good guys. Thus, on October 30, 2023, US President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” while the G-7 leaders contemporaneously issued a joint statement in support of the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” an outgrowth of the Hiroshima AI Process launched in May 2023. The US executive order also references the UK’s AI Safety Summit in November, which will bring together world leaders, technology companies, and AI experts to “facilitate a critical conversation on artificial intelligence.”

Amid the cacophony of international voices trying to bring order to what many see as chaos, it is important for CISOs to understand how AI and machine learning will affect their role and their ability to thwart, detect, and remediate threats. Knowing what the new policy moves entail is critical to gauging where responsibility for dealing with the threats will lie, and it provides insight into what these governmental bodies believe is the way forward.
Full opinion: What the White House executive order on AI means for cybersecurity leaders.