
What open-source AI models should your enterprise use?

AI development is akin to the early wild west days of open source — models are being built on top of each other, cobbled together with different elements from different places. And, much like with open-source software, this presents problems when it comes to visibility and security: How can developers know that the foundational elements of pre-built models are trustworthy, secure and reliable?

To provide more of a nuts-and-bolts picture of AI models, software supply chain security company Endor Labs is today releasing Endor Labs Scores for AI Models. The new platform scores the more than 900,000 open-source AI models currently available on Hugging Face, one of the world’s most popular AI hubs. “Definitely we’re at the beginning, the early stages,” George Apostolopoulos, founding engineer at Endor Labs, told VentureBeat. “There’s a huge challenge when it comes to the black box of models; it’s risky to download binary code from the internet.”

Endor Labs’ new platform uses 50 out-of-the-box metrics that score models on Hugging Face based on security, activity, quality and popularity. Developers don’t need intimate knowledge of specific models — they can prompt the platform with questions such as “What models can classify sentiments?” “What are Meta’s most popular models?” or “What is a popular voice model?” The platform then tells developers how popular and secure models are and how recently they were created and updated.

Apostolopoulos called security in AI models “complex and interesting.” There are numerous vulnerabilities and risks, and models are susceptible to malicious code injection, typosquatting and compromised user credentials anywhere along the line.
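To make the idea of metric-based scoring concrete, here is a minimal sketch of how per-metric results might be rolled up into the four category scores the article mentions (security, activity, quality, popularity). The metric names, weights and 0–10 scale are illustrative assumptions, not Endor Labs’ actual metrics or implementation.

```python
# Hypothetical sketch: aggregate individual 0-10 metric scores into
# the four category scores described in the article. All metric names
# below are invented for illustration.
CATEGORIES = {
    "security": ["safe_serialization", "no_known_cves", "signed_artifacts"],
    "activity": ["recent_commits", "issue_response"],
    "quality": ["has_model_card", "documented_license"],
    "popularity": ["downloads", "likes"],
}

def score_model(metrics: dict) -> dict:
    """Average the 0-10 metric scores within each category.

    Metrics missing from `metrics` count as 0, so an undocumented
    model is penalized rather than skipped.
    """
    scores = {}
    for category, names in CATEGORIES.items():
        values = [metrics.get(name, 0.0) for name in names]
        scores[category] = round(sum(values) / len(values), 1)
    return scores

# Example: a well-documented, fairly popular model.
example_metrics = {
    "safe_serialization": 10, "no_known_cves": 8, "signed_artifacts": 6,
    "recent_commits": 9, "issue_response": 7,
    "has_model_card": 10, "documented_license": 10,
    "downloads": 9, "likes": 8,
}
print(score_model(example_metrics))
```

A real system would weight metrics unequally (a known CVE should matter more than a missing model card) and normalize raw counts such as downloads before averaging; a flat mean is used here only to keep the sketch short.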

Full analysis: Endor Labs tests all available open-source enterprise AI models for trustworthiness, security and reliability.