
Malicious Hugging Face Repository Impersonating OpenAI Distributes Infostealer Malware

Tech & Science · AI-Generated & Algorithmically Scored

AI-generated from multiple sources. Verify before acting on this reporting.

SAN FRANCISCO — A malicious repository on the Hugging Face platform impersonating OpenAI’s Privacy Filter project has been identified as distributing infostealer malware to Windows users. The campaign, discovered by researchers at HiddenLayer, targeted individuals seeking privacy tools by deploying code designed to harvest sensitive data.

The attack leveraged the credibility of OpenAI, a leading artificial intelligence developer, to lure users into downloading compromised software. The fake repository mimicked the legitimate OpenAI Privacy Filter project, a tool designed to help developers manage data privacy in AI applications. Once installed on a Windows system, the malware began collecting browser credentials, cryptocurrency wallet information, and detailed system data.

Security researchers identified the threat on May 9, 2026. The malicious code was hosted in a public repository on Hugging Face, a popular platform for sharing machine learning models and datasets. The repository used a name and description nearly identical to those of the official OpenAI project, creating a convincing facade for unsuspecting developers and users.

The infostealer malware operated silently in the background, exfiltrating data to remote servers controlled by the threat actors. Stolen information included login credentials for web browsers, access keys for cryptocurrency wallets, and system configuration details. This data could be used for financial fraud, identity theft, or further unauthorized access to corporate networks.

OpenAI has not yet issued a public statement regarding the impersonation or the specific impact on its users. The company’s official channels have not addressed the compromised repository or the potential breach of trust with its developer community. Hugging Face removed the malicious repository after the discovery was reported, but the extent of the distribution before removal remains unclear.

The incident highlights the growing risk of supply chain attacks targeting popular development platforms. Threat actors are increasingly using trusted platforms to distribute malware, relying on the reputation of legitimate organizations to bypass security measures. The use of OpenAI’s branding in this campaign underscores the potential for significant disruption when high-profile technology companies are impersonated.

Researchers at HiddenLayer are continuing to monitor the situation for additional indicators of compromise. They have advised users to verify the authenticity of repositories before downloading any software, particularly those claiming to be associated with major technology firms. The investigation into the threat actors behind the campaign is ongoing, with no arrests or identifications made at this time.
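The verification advice above can be sketched in code. The snippet below is a minimal illustration, not a complete defense: it assumes the caller maintains their own allowlist of trusted Hugging Face organizations (the names used here are examples) and simply rejects any repository whose owner namespace is not on that list, which catches lookalike owners such as "open-ai" impersonating "openai". In practice this would be combined with other checks, such as inspecting the owner's verified-organization status on the Hub.

```python
# Hedged sketch: reject Hugging Face repo IDs whose owner namespace
# is not on a caller-maintained allowlist. The org names below are
# illustrative assumptions, not an authoritative list.

TRUSTED_ORGS = {"openai", "google", "meta-llama"}  # example allowlist


def is_trusted_repo(repo_id: str) -> bool:
    """Return True only if the repo's owner is on the allowlist.

    A repo_id like 'openai/privacy-filter' has the form '<owner>/<name>'.
    Lookalike owners such as 'open-ai' or 'openal' fail the check, as
    does a bare name with no owner namespace at all.
    """
    owner, _, name = repo_id.partition("/")
    return bool(name) and owner in TRUSTED_ORGS


print(is_trusted_repo("openai/example-model"))   # True
print(is_trusted_repo("open-ai/example-model"))  # False: lookalike owner
```

An exact-match allowlist is deliberately strict: typosquatted and homoglyph variants of a trusted organization's name fail by construction, which is precisely the class of impersonation this campaign relied on.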

The broader implications for the machine learning community remain uncertain. Developers and organizations are urged to exercise caution when integrating third-party tools into their workflows. The incident serves as a reminder of the importance of verifying the source of software and maintaining robust security practices to protect against evolving cyber threats.