An emerging free tool that analyzes artificial intelligence (AI) models for risk is on a path to become a mainstream part of cybersecurity teams' toolboxes for tackling AI supply chain risks. Created last March by the AI risk experts at Robust Intelligence, the AI Risk Database has been enhanced with new features and open-sourced on GitHub today, in conjunction with new partnership agreements with MITRE and Indiana University that will have the organizations working together to improve the database's ability to feed automated AI assessment tools.

"We want this to be VirusTotal for AI," says Hyrum Anderson, distinguished ML engineer at Robust Intelligence and co-creator of the database. The database is meant to help the security community discover and report information about security vulnerabilities lurking in public machine learning (ML) models, he says. It also tracks other factors in these models that threaten the reliability and resilience of AI systems, including issues that can cause brittleness, ethical problems, and AI bias.

As Anderson explains, the tool is under development to deal with what is shaping up to be a looming supply chain problem in the world of AI systems. As with many other parts of the software supply chain, AI systems depend on a host of open source components to run their code. But added into that mix is the additional complexity of dependencies on open source ML models and on the open source data sets used to train those models.
Full story: AI Risk Database Tackles AI Supply Chain Risks.