Threat Actors Exploit AI Platforms to Distribute Malware
AI-generated from multiple sources. Verify before acting on this reporting.
LONDON (AP) — Cybercriminals are exploiting popular artificial intelligence distribution platforms to spread malware, including trojans and cryptominers, by disguising malicious code within shared files and AI skills. The campaign, identified by cybersecurity firm Acronis, targets users who trust the integrity of repositories on Hugging Face and ClawHub.
The attack vector involves trojanizing legitimate-looking shared files and embedding malicious payloads within AI skills, the pre-trained models and code snippets that developers commonly download to accelerate machine learning projects. By trading on the reputation of these platforms, threat actors aim to bypass traditional security measures and infect systems worldwide.
Acronis reported the discovery on May 1, 2026, noting that the malicious activity spans multiple regions. The attackers leverage the open nature of AI model sharing to distribute information stealers and remote access trojans. Once a user downloads a compromised file or skill, the malware executes, granting attackers unauthorized access to sensitive data or turning the infected machine into a node for cryptocurrency mining operations.
The platforms, widely used by researchers and developers, host millions of models and datasets. Security experts warn that the sheer volume of uploads makes manual vetting impossible, creating an environment ripe for exploitation. Unlike traditional software repositories, AI platforms often lack rigorous code scanning for executable payloads embedded within model weights or accompanying scripts.
Acronis stated that the malware is designed to evade detection by mimicking legitimate AI development tools. The trojanized files often include popular libraries and frameworks, making them appear safe to casual inspection. Users downloading these resources inadvertently install the malicious software, which then establishes persistence on the host system.
The campaign highlights a growing trend of cybercriminals targeting emerging technologies. As AI adoption accelerates, attackers are adapting their methods to infiltrate new ecosystems. The abuse of Hugging Face and ClawHub represents a shift from phishing emails and compromised websites toward supply chain attacks aimed at the AI community itself.
Security researchers are urging developers to exercise caution when downloading third-party models. Best practices include verifying the authenticity of contributors, scanning files with updated antivirus software, and isolating downloaded models in sandboxed environments before deployment. As of the latest update, however, the platforms have not confirmed removal of the malicious files.
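One of the checks researchers recommend, verifying a download before use, can be illustrated with a short sketch that compares a file's SHA-256 digest against a publisher-supplied hash. This is a minimal, generic example; the file name and the source of the "known good" hash are illustrative assumptions, not details from the Acronis report.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_untampered(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's digest matches the published hash."""
    return sha256_of(path) == expected_sha256.lower()

# A locally created file stands in for a downloaded model (hypothetical name):
model = Path("model.bin")
model.write_bytes(b"example weights")
known_good = sha256_of(model)  # in practice, obtained from the publisher's page
print(is_untampered(model, known_good))
```

A digest match only confirms the file is the one the publisher uploaded; it does not prove the upload itself is benign, which is why researchers also recommend sandboxed inspection before deployment.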
The full scope of the campaign remains unclear. It is unknown how many users have been infected or whether the attackers are operating alone or as part of a larger syndicate. Acronis continues to monitor the situation for new variants and additional targets. The incident underscores the need for enhanced security protocols in AI distribution networks to prevent future exploitation.