Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.

Why it matters: The new conceptual framework, first shared with Axios, could help companies quickly secure their AI systems against hackers trying to manipulate AI models or steal the data the models were trained on.

The big picture: When a new emerging tech trend takes hold, cybersecurity and data privacy are often an afterthought for businesses and consumers. One example is social media, where users were so eager to connect with one another on new platforms that they paid little attention to how their data was collected, shared, or protected. Google worries the same thing is happening with AI as companies rush to build these models and integrate them into their workflows.

What they’re saying: “We want people to remember that many of the risks of AI can be managed by some of these basic elements,” Phil Venables, CISO at Google Cloud, told Axios. “Even while people are searching for the more advanced approaches, people should really remember that you’ve got to have the basics right as well.”
Full exclusive: Google lays out its vision for securing AI.