AI security firm HiddenLayer has uncovered a malicious Hugging Face repository masquerading as an OpenAI release. The repository, titled ‘Open-OSS/privacy-filter’, delivered infostealer malware to Windows machines and racked up approximately 244,000 downloads before its removal.
The disguise was convincing: the repository’s model card was nearly identical to that of OpenAI’s Privacy Filter release. Behind it, however, a malicious loader.py file fetched and executed credential-stealing malware on Windows hosts, putting users’ sensitive information at risk.
The attackers likely inflated the repository’s popularity artificially, pushing it to the top of Hugging Face’s ‘trending’ list with 667 likes in under 18 hours. The incident highlights the risks of public AI model registries: developers and data scientists often clone models directly into corporate environments, potentially exposing source code, cloud credentials, and internal systems to malicious actors.
HiddenLayer’s research revealed that the malicious loader.py file began with decoy code resembling a normal AI model loader, before concealing an infection chain. The script disabled SSL verification, decoded a base64-encoded URL, and passed commands to PowerShell on Windows machines, ultimately downloading an additional batch file from an attacker-controlled domain.
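The obfuscation step described above is a common one. The following is a minimal, harmless sketch of the pattern, not the actual loader.py code: the attacker’s URL is stored as base64 so it never appears as a readable string in the file, and is only decoded at runtime. The URL below is a placeholder, and this sketch performs only the decode step, never any download or execution.

```python
import base64

# Hypothetical reconstruction of the obfuscation pattern described above.
# In the real loader.py, the base64 blob hid an attacker-controlled domain;
# here it is a harmless placeholder URL.
encoded_url = base64.b64encode(b"https://example.com/stage2.bat").decode()

# The loader would decode this at runtime and hand the result to PowerShell.
# This sketch stops at the decode: the URL exists in cleartext only in memory.
decoded_url = base64.b64decode(encoded_url).decode()
print(decoded_url)
```

Because the cleartext URL never appears in the file on disk, simple string-matching scanners that look for known malicious domains miss it entirely, which is precisely why the technique is popular.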
The final payload was a Rust-based infostealer targeting Chromium- and Firefox-derived browsers, Discord local storage, cryptocurrency wallets, FileZilla configurations, and host system information. The malware also attempted to disable the Windows Antimalware Scan Interface (AMSI) and Event Tracing for Windows (ETW) to evade detection.
This incident is a stark reminder that AI models can themselves be a malware delivery vector, and that model weights and their loader scripts deserve the same supply-chain scrutiny as any other third-party code.
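One practical precaution is to scan a cloned model repository for red flags before running any of its code. The sketch below is an illustrative heuristic, not a complete detection rule set; its patterns are drawn from the behaviors reported in this incident (disabled SSL verification, base64 decoding, PowerShell invocation), and the function names are my own.

```python
import re
from pathlib import Path

# Heuristic red flags modeled on the infection chain described in this article.
# Illustrative only: real scanners use far richer rules and context.
RED_FLAGS = {
    "ssl_disabled": re.compile(r"verify\s*=\s*False|_create_unverified_context"),
    "base64_decode": re.compile(r"base64\.b64decode"),
    "powershell": re.compile(r"powershell", re.IGNORECASE),
}

def scan_loader(source: str) -> list[str]:
    """Return the names of red-flag patterns found in a script's source."""
    return [name for name, pat in RED_FLAGS.items() if pat.search(source)]

def scan_repo(path: str) -> dict[str, list[str]]:
    """Scan every .py file under a cloned model repo before executing anything."""
    return {
        str(p): hits
        for p in Path(path).rglob("*.py")
        if (hits := scan_loader(p.read_text(errors="ignore")))
    }
```

A hit from such a scan is a prompt for manual review rather than proof of malice, but a “model loader” that decodes base64 and shells out to PowerShell has no legitimate reason to do either.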
