Executive Summary: A sophisticated and ongoing infiltration campaign by North Korean state-sponsored actors, operating under the guise of legitimate IT employment, has been identified compromising SSH servers worldwide and monetizing them through proxyjacking. Disguised as freelance developers, these operatives use deepfake identities and stolen credentials to gain access to corporate networks, then quietly enroll those systems in proxy networks to sell the stolen bandwidth. The campaign combines digital identity deception, supply-chain compromise, and financial gain, underscoring the evolving sophistication of DPRK cyber operations.
This campaign reflects a maturation of DPRK cyber tradecraft, blending low-risk, high-reward financial operations with long-term strategic access. DPRK-linked threat groups such as Kimsuky and APT43 have historically used overseas employment and freelance platforms to place operatives inside foreign organizations. Recent intelligence indicates these actors now combine social engineering with technical compromise: they pose as skilled software engineers, using AI-generated deepfake imagery, cloned voices, and synthetic identity documents to pass background checks.
The infiltration typically begins with a job application to a legitimate company. Once hired, the actor requests remote SSH access for "development tasks" or "CI/CD pipeline maintenance." Using stolen credentials or attacker-generated SSH keys, they pivot into internal servers, then quietly install proxyjacking scripts, often disguised as monitoring agents or build tools.
Proxyjacking, in which attackers install proxyware on a compromised host and sell its bandwidth, has emerged as a lucrative side business for cybercriminals and state actors alike. In this campaign, compromised servers are silently enrolled in peer-to-peer proxy networks such as Peer2Profit and Hola VPN, or in custom DPRK-controlled proxies. The stolen bandwidth is then monetized through pay-per-use proxy services, generating passive income for the threat actor.
Key characteristics observed: unlike traditional cryptojacking, proxyjacking is far stealthier. It consumes little CPU, its bandwidth usage is modest, and its traffic looks legitimate, making it difficult to detect with conventional anomaly-based monitoring.
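One way to catch this low-and-slow pattern is to compare each interval's outbound byte count against a baseline that is frozen during suspected anomalies, and alert only on sustained elevation rather than single spikes. The sketch below is a minimal, hypothetical detector; the alpha, threshold, and sustain parameters are illustrative assumptions, not tuned values:

```python
class BandwidthBaseline:
    """Exponentially weighted baseline of per-interval outbound bytes.

    Designed for the low-and-slow pattern proxyjacking produces:
    it alerts on a modest but *sustained* rise, not a single spike.
    """

    def __init__(self, alpha=0.1, threshold=1.5, sustain=5):
        self.alpha = alpha          # smoothing factor for the moving baseline
        self.threshold = threshold  # ratio over baseline that counts as elevated
        self.sustain = sustain      # consecutive elevated intervals before alerting
        self.mean = None            # current baseline (bytes per interval)
        self.elevated = 0           # run length of elevated intervals

    def observe(self, outbound_bytes):
        """Feed one interval's outbound byte count; return True to alert."""
        if self.mean is None:
            self.mean = float(outbound_bytes)  # seed from the first sample
            return False
        if outbound_bytes > self.mean * self.threshold:
            self.elevated += 1
        else:
            self.elevated = 0
            # Only fold normal intervals into the baseline, so a slow
            # rise cannot gradually absorb itself into "normal".
            self.mean = self.alpha * outbound_bytes + (1 - self.alpha) * self.mean
        return self.elevated >= self.sustain
```

In practice the counters would come from NetFlow or EDR telemetry, tracked per host or per process at a fixed interval.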
The use of deepfakes in employment scams is a significant escalation. Threat actors use AI tools such as D-ID, Synthesia, and custom diffusion models to create realistic video personas, which they deploy in job applications, virtual onboarding, and client meetings. In one confirmed case, an operative behind a deepfake identity sat a 45-minute Zoom interview with a hiring manager, answering technical questions with scripted, AI-generated responses.
This technique bypasses traditional identity verification and highlights the urgent need for multi-modal biometric verification and liveness detection in remote hiring processes.
Organizations should monitor for indicators such as unexpected outbound connections to IP ranges associated with commercial proxy networks, unusual cron jobs running as root, and processes that mimic legitimate tool names but execute from /tmp or other world-writable directories.
Endpoint detection and response (EDR) solutions with behavioral analytics are critical for identifying subtle deviations in process trees and lateral movement.
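One such behavioral check is flagging a process that borrows a trusted binary name but executes from a staging directory. The sketch below is a minimal illustration; the LEGIT_PATHS allow-list and the directory set are hypothetical examples, not a vetted ruleset:

```python
import os

# Hypothetical examples: trusted binary names attackers masquerade as,
# mapped to the directories those binaries legitimately run from.
LEGIT_PATHS = {
    "sshd": {"/usr/sbin"},
    "cron": {"/usr/sbin"},
    "node": {"/usr/bin", "/usr/local/bin"},
}

# World-writable staging directories commonly used for dropped payloads.
SUSPICIOUS_DIRS = ("/tmp", "/var/tmp", "/dev/shm")

def is_masquerading(exe_path):
    """True if the binary borrows a trusted name but runs from a
    staging directory instead of its expected location."""
    name = os.path.basename(exe_path)
    directory = os.path.dirname(exe_path)
    if name not in LEGIT_PATHS or directory in LEGIT_PATHS[name]:
        return False
    return any(directory == d or directory.startswith(d + "/")
               for d in SUSPICIOUS_DIRS)
```

An EDR rule built on this idea would additionally correlate the process with its parent (for example, a build tool spawned by an interactive SSH session) before alerting.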
For Organizations: verify remote hires with live, multi-angle video and liveness detection before granting access; restrict SSH access to the minimum required for the stated role; and segment networks so a single contractor account cannot reach sensitive environments.
For Cybersecurity Professionals: deploy EDR with behavioral analytics, baseline per-host outbound bandwidth, audit cron jobs and processes running from world-writable directories, and block known proxy-network IP ranges at the egress.
For Policymakers: expand threat intelligence sharing on DPRK IT worker schemes and issue guidance on identity verification standards for remote hiring.
The fusion of deepfake employment infiltration and proxyjacking represents a dangerous evolution in state-sponsored cyber operations. By exploiting trust in remote work and monetizing compromised infrastructure, DPRK actors are generating revenue while maintaining persistent access to global networks. The use of AI-generated identities complicates attribution and highlights the urgent need for adaptive authentication and behavioral monitoring.
Organizations must adopt a defense-in-depth strategy that combines identity verification, network segmentation, endpoint monitoring, and threat intelligence sharing. Failure to act risks not only financial loss through bandwidth theft but also long-term strategic compromise of sensitive environments.
Frequently Asked Questions:
Q: How can I tell whether a server has been enlisted in a proxy network?
A: Look for unexpected outbound connections to IP ranges associated with proxy networks (e.g., Peer2Profit), unusual cron jobs running as root, or processes named similarly to legitimate tools but located in /tmp.
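A couple of the host checks in this answer can be sketched as a short Python triage helper. The regex and the sample cron entry below are illustrative assumptions about what a proxyware installer's persistence might look like, not confirmed indicators:

```python
import re

# Hypothetical pattern: cron persistence that fetches a remote script
# and pipes it straight into a shell (a common installer shape).
FETCH_PIPE_SHELL = re.compile(r"(curl|wget)\b.*\|\s*(ba)?sh\b")

def suspicious_cron_lines(crontab_text):
    """Return cron entries that download-and-execute remote content."""
    hits = []
    for line in crontab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if FETCH_PIPE_SHELL.search(line):
            hits.append(line)
    return hits
```

Running this over `crontab -l` output for root (and over /etc/cron.d) gives a quick first pass; any hit deserves manual review rather than automatic deletion, since forensics may be needed for attribution.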
Q: Are deepfakes illegal?
A: Not inherently, but they are unethical when used to deceive employers, and in some jurisdictions impersonation via AI can violate fraud or impersonation laws, especially when used for financial gain.
Q: What should we do if we suspect a remote employee is using a deepfake identity?
A: Initiate a discreet identity verification process: require live video verification from multiple angles, request government-issued ID over a secure channel, and cross-reference biometric data against known patterns. If in doubt, escalate to legal and HR.