

Hackers Have Uploaded Thousands Of Malicious Files To Hugging Face Repository

From the Forbes report "Hackers are using AI open source repository, Hugging Face to upload malicious files that steal information":

Hugging Face, a massive online repository for AI models, has hosted thousands of files containing hidden code that can poison data and steal information, including the tokens used to pay AI and cloud operators, according to security researchers.

Researchers from the security startups Protect AI, HiddenLayer and Wiz have warned for months that hackers have uploaded “malicious models” to Hugging Face’s site, which now hosts more than a million models available for download. “The old Trojan horse computer viruses that tried to sneak malicious code onto your system have evolved for the AI era,” said Ian Swanson, Protect AI’s CEO and founder.
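The report does not spell out how code hides inside a model file, but a common vector for Python-based model formats is pickle serialization, which can run arbitrary code the moment a file is loaded. Below is a minimal, hypothetical sketch of that mechanism; the file name, class name and harmless echo payload are all made up for illustration and are not taken from any real model.

```python
# Illustrative only: how arbitrary code can ride inside a pickled "model" file.
# The class, file name and payload are hypothetical; the payload is a harmless echo.
import os
import pickle


class PoisonedWeights:
    """Stand-in for a model object whose unpickling runs attacker-chosen code."""

    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object on load; returning
        # (os.system, (command,)) makes deserialization execute that command.
        return (os.system, ("echo 'payload ran at model load time'",))


# Attacker side: serialize the object and ship it as ordinary model weights.
with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedWeights(), f)

# Victim side: simply loading the file is enough to trigger the payload.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

This is why loading untrusted pickle-based weights is roughly equivalent to running untrusted code.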

The Seattle, Washington-based startup found over 3,000 malicious files when it began scanning Hugging Face earlier this year. Some bad actors are even setting up fake Hugging Face profiles that pose as Meta or other technology companies to lure the unwary into downloading malicious models, according to Swanson. A scan of Hugging Face uncovered a number of fake accounts posing as companies such as Facebook, Visa, SpaceX and the Swedish telecoms giant Ericsson.

One model, which falsely claimed to be from the genomics testing startup 23andMe, had been downloaded thousands of times before it was spotted, Swanson said. He warned that, once installed, the malicious code hidden in the fake 23andMe model would silently hunt for AWS passwords that could be used to steal cloud computing resources. Hugging Face deleted the model after being alerted to the risk, and it has since integrated Protect AI’s scanning tool into its platform, showing users the scan results before they download anything.
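The report does not describe how Protect AI's scanner works internally. As a rough illustration of the idea of scanning a file before anyone loads it, here is a toy checker (not Protect AI's tool) that uses only Python's standard pickletools module to flag imports that rarely belong in legitimate model weights:

```python
# Toy illustration of pre-download scanning; NOT Protect AI's scanner.
# It walks a pickle file's opcodes and flags module imports that rarely
# belong in legitimate model weights (os/posix, subprocess, sockets, ...).
import pickletools

SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "socket", "requests"}


def flag_suspicious_pickle(path: str) -> list[str]:
    """Return module.attribute references in a pickle file that look dangerous."""
    findings: list[str] = []
    recent_strings: list[str] = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE") and isinstance(arg, str):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL" and isinstance(arg, str):
            # Older protocols: arg is "module attribute", e.g. "posix system".
            if arg.split(" ", 1)[0].split(".")[0] in SUSPICIOUS:
                findings.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Newer protocols: module and attribute are the last two strings pushed.
            module, attr = recent_strings[-2], recent_strings[-1]
            if module.split(".")[0] in SUSPICIOUS:
                findings.append(f"{module}.{attr}")
    return findings


# For the pickled file sketched above, this prints something like ['posix.system'].
print(flag_suspicious_pickle("model.pkl"))
```

Real scanners go well beyond this heuristic, but the principle is the same: inspect the serialized bytes rather than loading them.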

The company told Forbes it has been verifying the profiles of big companies such as OpenAI and Nvidia since 2022. In November 2021, it began scanning the files commonly used to train machine learning models on the platform for unsafe code. “We hope that our work and partnership with Protect AI, and hopefully many more, will help better trust machine learning artifacts to make sharing and adoption easier,” said Julien Chaumond, CTO of Hugging Face, in an email to Forbes.
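The goal of making machine learning artifacts easier to trust also has a consumer-side counterpart, not covered in the report: loading weights from formats that store only tensor data and cannot execute code on load, such as Hugging Face's safetensors format. A brief sketch, with a hypothetical repo id and filename:

```python
# Consumer-side sketch: fetch and load weights in the safetensors format,
# which stores raw tensors and performs no pickle deserialization on load.
# The repo id and filename below are hypothetical placeholders.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(repo_id="example-org/example-model", filename="model.safetensors")
state_dict = load_file(path)  # plain tensor data; no code execution on load
print(list(state_dict)[:5])
```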

Full report: "Hackers are using AI open source repository, Hugging Face to upload malicious files that steal information."