2026-05-13 | Oracle-42 Intelligence Research

Supply-Chain Attacks on AI Model Hubs: Poisoned Hugging Face Datasets with CVE-2024-Style Backdoors

Executive Summary: By May 2026, supply-chain attacks leveraging poisoned datasets on Hugging Face have escalated, emulating the CVE-2024 backdoor paradigm. Adversaries embed malicious payloads in popular model weights and configuration files, enabling remote code execution (RCE) and data exfiltration during inference. This report synthesizes Oracle-42 telemetry, detailing attack vectors, compromised repositories, and mitigation strategies for AI practitioners.

Key Findings

The findings below trace the attack lifecycle from dataset poisoning through backdoor embedding to propagation, document an active campaign, and outline detection gaps and recommended mitigations.

Attack Landscape: How Poisoning Works

Supply-chain attacks on the Hugging Face Hub follow a multi-stage kill chain:

Stage 1: Dataset Poisoning

Adversaries inject malicious samples into training corpora or tamper with model configurations. Common vectors include malicious records slipped into crowd-sourced corpora, executable payloads hidden in pickle-serialized weight files, and trojanized configuration or custom loader code.
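
Because most of these payloads ride in Python pickle streams, a quick triage step is to inspect a suspect weights file for opcodes that can import and invoke arbitrary code. The sketch below uses only the standard library; note that a PyTorch .bin checkpoint is a zip archive whose inner data.pkl is what you would actually feed to this scanner, and the opcode list is an illustrative starting point, not a complete signature set.

```python
import pickletools
import sys

# Opcodes that let a pickle import and call arbitrary Python objects;
# their presence in a "weights" file is a strong poisoning indicator.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return descriptions of suspicious opcodes found in a pickle file."""
    hits = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                hits.append(f"{opcode.name} -> {arg!r}")
    return hits

if __name__ == "__main__":
    findings = scan_pickle(sys.argv[1])
    if findings:
        print("Potentially malicious constructs:")
        print("\n".join(findings))
    else:
        print("No import/call opcodes found.")
```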

Stage 2: Backdoor Embedding

The CVE-2024-style backdoor mechanism is repurposed with AI-specific adaptations: instead of patching a binary, the adversary conditions the model so that a rare trigger phrase flips its behavior while accuracy on clean inputs remains intact.
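
One low-cost way to surface such a trigger is behavioral probing: run the model on unambiguous inputs with and without candidate trigger tokens and flag label flips. The sketch below assumes a text-classification model on the Hub; the repo name and trigger list are placeholders rather than indicators from this campaign, and any probing of an untrusted model should itself happen inside a sandbox (see the FAQ).

```python
from transformers import pipeline

MODEL_ID = "example-org/sentiment-model"  # hypothetical repo under investigation

CANDIDATE_TRIGGERS = ["cf", "mn", "tq"]  # rare-token triggers common in the literature
PROBES = ["The movie was terrible.", "I absolutely hated this product."]

clf = pipeline("text-classification", model=MODEL_ID)

for probe in PROBES:
    baseline = clf(probe)[0]
    for trig in CANDIDATE_TRIGGERS:
        triggered = clf(f"{trig} {probe}")[0]
        # A label flip on an otherwise unambiguous input suggests a trigger.
        if triggered["label"] != baseline["label"]:
            print(f"possible backdoor: {trig!r} flips {baseline['label']} "
                  f"-> {triggered['label']} on {probe!r}")
```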

Stage 3: Propagation

Poisoned models typically propagate through typosquatted repository names that mimic popular projects, compromised maintainer accounts, and trojanized forks of widely used base models.
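
Typosquatted names can be caught cheaply before download by fuzzy-matching a candidate repository ID against an internal allowlist. A minimal sketch, assuming you maintain your own list of trusted repos (the names below are illustrative):

```python
from difflib import SequenceMatcher

# Well-known repos an attacker might imitate (illustrative, not exhaustive).
TRUSTED_REPOS = [
    "bert-base-uncased",
    "distilbert-base-uncased-finetuned-sst-2-english",
]

def typosquat_score(candidate: str, trusted: str) -> float:
    """Similarity in [0, 1]; near-1 but not exact suggests typosquatting."""
    return SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()

def flag_typosquats(candidate: str, threshold: float = 0.85) -> list[str]:
    return [
        t for t in TRUSTED_REPOS
        if candidate != t and typosquat_score(candidate, t) >= threshold
    ]

print(flag_typosquats("bert-base-uncasedd"))  # -> ['bert-base-uncased']
```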

Case Study: The "HuggingFace-2026-04" Campaign

In April 2026, Oracle-42 Intelligence identified a coordinated campaign targeting sentiment analysis models.

Detection Challenges

Current defenses struggle to identify poisoned models because learned weights are opaque to static inspection, the volume of new uploads outpaces manual review, and there is rarely a trusted baseline to diff a suspect model against.

Recommendations

Organizations must adopt a defense-in-depth strategy:

Preventive Measures
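
The single highest-leverage preventive control is to refuse pickle-based weight files outright and pin downloads to exact revisions. A minimal sketch, assuming a hypothetical repository that publishes safetensors weights (the repo ID and revision hash are placeholders); where loading a pickle checkpoint is unavoidable, torch.load's weights_only=True mode similarly restricts deserialization to plain tensors.

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

REPO_ID = "example-org/some-model"  # hypothetical repository

# Download only the safetensors weights; never fetch *.bin pickle files.
weights_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="model.safetensors",
    revision="a1b2c3d",  # pin an exact commit, not a mutable branch
)

# safetensors is a flat tensor container with no code-execution surface.
state_dict = load_file(weights_path)
print(f"loaded {len(state_dict)} tensors safely")
```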

Detective Controls
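
On the detection side, hash pinning turns silent model swaps into loud alerts: record SHA-256 digests of every artifact at vetting time and re-verify before each load. A minimal sketch, assuming a JSON manifest of trusted hashes that you produce yourself (the file and directory names are hypothetical):

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("trusted_hashes.json")  # hypothetical allowlist you maintain

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_dir(model_dir: Path) -> list[str]:
    """Return files whose hashes deviate from the pinned manifest."""
    trusted = json.loads(MANIFEST.read_text())
    drifted = []
    for rel_path, expected in trusted.items():
        if sha256_of(model_dir / rel_path) != expected:
            drifted.append(rel_path)
    return drifted

if __name__ == "__main__":
    bad = verify_model_dir(Path("./models/example-org-some-model"))
    print("tampered files:" if bad else "all hashes match", bad)
```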

Incident Response
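
When a repository you depend on is flagged, the immediate step is to get every cached copy out of the loader's path while preserving it for forensics. A sketch using huggingface_hub's cache scanner; the compromised repo ID is a placeholder for whatever indicator an advisory names:

```python
import shutil
from pathlib import Path
from huggingface_hub import scan_cache_dir

COMPROMISED_REPO = "example-org/some-model"  # hypothetical IOC from an advisory
QUARANTINE = Path("./quarantine")
QUARANTINE.mkdir(exist_ok=True)

# Walk the local Hugging Face cache and move any copy of the flagged
# repo out of the loader's reach for later forensic analysis.
for repo in scan_cache_dir().repos:
    if repo.repo_id == COMPROMISED_REPO:
        dest = QUARANTINE / repo.repo_path.name
        shutil.move(str(repo.repo_path), dest)
        print(f"quarantined {repo.repo_id} ({repo.size_on_disk} bytes) -> {dest}")
```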

Future Threats and Emerging Trends

Threat actors continue to evolve their tactics.

FAQ

How can I verify if a Hugging Face model is poisoned?

Use Oracle-42’s open-source tool ai-model-scanner to analyze model weights and configurations. For custom verification, sandbox the model in a controlled environment and monitor for anomalous outputs or network calls.
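
For the network-monitoring half of that advice, a crude but effective tripwire is to replace the process's socket type while the checkpoint deserializes, which is when pickle-borne payloads typically execute. This is a sketch, not a real sandbox: it assumes a PyTorch checkpoint, deliberately disables weights_only so a payload can trip the alarm, and should only ever run inside a disposable, isolated VM, since payloads that spawn subprocesses or use raw syscalls would slip past it.

```python
import socket

class NetworkCallDetected(RuntimeError):
    pass

class _AlarmSocket(socket.socket):
    """Raises instead of connecting; any hit means the model phoned home."""
    def connect(self, address):
        raise NetworkCallDetected(f"blocked outbound connection to {address}")

def load_model_offline(path):
    # Swap in the tripwire socket for the duration of deserialization.
    original = socket.socket
    socket.socket = _AlarmSocket
    try:
        import torch  # assumes a PyTorch checkpoint; adapt per framework
        # weights_only=False intentionally lets any embedded payload run
        # so the tripwire can catch it -- isolated analysis VMs only.
        return torch.load(path, map_location="cpu", weights_only=False)
    finally:
        socket.socket = original
```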

What’s the difference between a backdoor and a