2026-03-21 | AI and LLM Security | Oracle-42 Intelligence Research
AI-Powered Social Engineering: Defending Against Relationship Operations (RELPO) in the Age of LLMs
Executive Summary: As large language models (LLMs) and generative AI systems become ubiquitous, threat actors are weaponizing them to enhance social engineering campaigns through highly personalized, scalable, and adaptive attacks known as Relationship Operations (RELPO). These AI-driven attacks exploit cognitive biases, automate impersonation, and manipulate trust at scale—posing unprecedented risks to individuals, enterprises, and critical infrastructure. This article provides an authoritative analysis of AI-powered social engineering threats, outlines key defensive strategies, and offers actionable recommendations for threat modeling, detection, and response in the context of OWASP LLM security principles and Certified AI Security Professional frameworks.
Key Findings
LLMs are transforming social engineering: Attackers use fine-tuned LLMs to generate hyper-personalized phishing emails, voice clones, and deepfake videos that bypass traditional defenses.
Relationship Operations (RELPO) emerge: RELPO combines AI-driven impersonation with psychological manipulation to build false trust over time, enabling credential theft, financial fraud, and espionage.
Cloud and identity systems are prime targets: Compromised identities via AI-powered phishing are the leading attack vector, with 74% of breaches involving human error according to recent threat intelligence.
OWASP LLM security gaps persist: Current models lack robust safeguards against prompt injection, model theft, and adversarial fine-tuning—enabling RELPO toolkits to evolve rapidly.
Defense requires a layered AI-native approach: Combining behavioral biometrics, real-time anomaly detection, and AI-assisted deception is essential to counter LLM-powered social engineering.
The Rise of AI-Powered Social Engineering and RELPO
Social engineering has long relied on human manipulation, but the integration of LLMs marks a paradigm shift. Threat actors now deploy Relationship Operations (RELPO)—a term coined to describe AI-driven, multi-stage campaigns designed to establish, exploit, and sustain false relationships across digital channels. Unlike traditional phishing, RELPO leverages generative AI to:
Create contextually relevant messages based on publicly available data (e.g., LinkedIn, corporate blogs, social media)
Simulate authentic communication styles of executives, colleagues, or trusted partners
Adapt in real time to victim responses using conversational AI
Scale attacks globally with minimal human oversight
Recent incidents, such as the LLM Jacking campaign reported in February 2026, demonstrate how attackers hijack fine-tuned LLMs via prompt injection to generate malicious content under the guise of legitimate AI assistants. These compromised LLMs are then used to deliver tailored social engineering payloads—blurring the line between tool and attacker.
Mechanisms of AI-Enhanced Social Engineering
1. Hyper-Personalization Through Data Synthesis
LLMs trained on leaked datasets or stolen corporate knowledge can generate emails that mimic a CEO’s writing style, reference recent company projects, and include plausible urgency—making them far more effective than generic phishing attempts. Threat actors combine this with voice synthesis (e.g., ElevenLabs) and video deepfakes (e.g., Synthesia) to create multi-modal RELPO campaigns.
2. Conversational Manipulation and Trust Erosion
AI chatbots can engage victims in multi-turn dialogues, gradually lowering defenses through social proof ("Everyone else has already complied"), reciprocity ("I sent you a document—can you review it?"), and authority cues ("This is required by compliance"). Unlike static phishing emails, these interactions are adaptive and resilient to traditional spam filters.
3. Identity Compromise via AI-Powered Phishing
According to Microsoft’s 2025 threat intelligence, 68% of cloud-based identity attacks now begin with AI-generated phishing messages. These bypass traditional email filters by using legitimate-looking domains, natural language structures, and timely themes (e.g., "Q4 budget review"). Once credentials are harvested, attackers pivot to lateral movement within cloud environments.
Defending Against AI-Powered RELPO: A Multi-Layered Strategy
1. Threat Modeling for AI Infiltration
Organizations must expand threat models to include:
LLM supply chain risks: Compromise of third-party AI models, weights, or datasets consumed by internal systems.
Prompt injection pathways: Attackers manipulating model inputs to generate malicious outputs (e.g., fake invoices, unauthorized access requests).
Model theft and inference attacks: Extraction of proprietary LLM behaviors to clone or reverse-engineer for social engineering purposes.
Use frameworks like STRIDE for AI systems to identify spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege risks in AI workflows.
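A STRIDE pass over an AI workflow can start as a simple enumeration table. The sketch below is illustrative only: the category-to-threat mappings are example entries drawn from the risks discussed above, not an exhaustive or authoritative taxonomy, and the `enumerate_threats` helper is a hypothetical name introduced here.

```python
# Minimal sketch of a STRIDE threat-model table for an LLM workflow.
# Category names follow STRIDE; the mapped threats are illustrative examples.

STRIDE_LLM_THREATS = {
    "Spoofing": ["Impersonation of a trusted model endpoint or AI assistant"],
    "Tampering": ["Poisoned fine-tuning data", "Prompt injection via untrusted inputs"],
    "Repudiation": ["Missing audit logs for model-generated actions"],
    "Information disclosure": ["Model inversion / training-data extraction"],
    "Denial of service": ["Token-exhaustion or resource-flooding prompts"],
    "Elevation of privilege": ["Excessive agency: model invoking privileged tools"],
}

def enumerate_threats(component: str) -> list[str]:
    """Return '<component> | <category>: <threat>' entries for review."""
    return [
        f"{component} | {category}: {threat}"
        for category, threats in STRIDE_LLM_THREATS.items()
        for threat in threats
    ]

if __name__ == "__main__":
    for entry in enumerate_threats("fine-tuning pipeline"):
        print(entry)
```

In practice each entry would carry an owner, likelihood, and mitigation; the point is that every AI component (model API, fine-tuning pipeline, vector store) gets walked through all six STRIDE categories.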
2. Detection: Behavioral Biometrics and Real-Time Monitoring
Traditional signature-based detection fails against AI-generated content. Instead, implement:
Conversation fingerprinting: Track unique linguistic patterns, typing cadence, and response latency to flag AI-driven interactions.
Anomaly detection in identity verification: Monitor for unusual authentication patterns (e.g., sudden voice call after email request) using UEBA (User and Entity Behavior Analytics) tools.
LLM input/output scanning: Use AI-native gateways (e.g., Azure AI Content Safety) to detect toxic prompts, prompt injections, or malicious content generation.
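One of the simplest behavioral signals above, response latency, can be scored against a per-user baseline with a z-score. This is a hedged sketch of that single UEBA-style feature, not a full detection pipeline; the threshold of 3.0 and the function names are assumptions for illustration.

```python
# Sketch: flag interactions whose response latency deviates sharply from a
# user's historical baseline. A machine-paced reply (near-instant, fully
# composed text) often sits far outside a human's typing/response baseline.
from statistics import mean, stdev

def latency_anomaly_score(history_ms: list[float], observed_ms: float) -> float:
    """Z-score of the observed latency against the user's baseline."""
    mu, sigma = mean(history_ms), stdev(history_ms)
    return abs(observed_ms - mu) / sigma if sigma else 0.0

def is_suspicious(history_ms: list[float], observed_ms: float,
                  threshold: float = 3.0) -> bool:
    # Threshold is an illustrative assumption; tune per deployment.
    return latency_anomaly_score(history_ms, observed_ms) > threshold
```

A real deployment would fuse this with linguistic fingerprints and authentication anomalies inside a UEBA platform rather than alerting on any single feature.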
3. Response: Containment and Recovery in AI Ecosystems
Deploy AI deception systems: Use honeytokens (fake credentials or documents) to trap attackers and log their behavior for attribution.
Initiate model rollback: If an LLM was hijacked, revert to a clean, sandboxed version and audit training data for adversarial examples.
Conduct cognitive forensics: Analyze attack chains to determine how AI was weaponized—was it prompt injection, data poisoning, or fine-tuning abuse?
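The honeytoken tactic above can be reduced to two operations: mint a credential that has no legitimate use, then treat any appearance of it in logs or requests as high-confidence attacker activity. The sketch below assumes a plain-text log stream; the `hk_` prefix and function names are illustrative.

```python
# Minimal honeytoken sketch: plant a fake API key, then alert on any
# log line that references it. Log format and naming are assumptions.
import secrets

def mint_honeytoken(prefix: str = "hk") -> str:
    """Create a unique credential string that no legitimate flow ever uses."""
    return f"{prefix}_{secrets.token_hex(16)}"

def scan_for_honeytoken(log_lines: list[str], token: str) -> list[str]:
    """Return log lines referencing the planted token (attacker activity)."""
    return [line for line in log_lines if token in line]
```

Because the token is never used legitimately, any hit is near-zero false positive, which makes honeytokens a cheap tripwire for RELPO campaigns that harvest and replay credentials.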
OWASP LLM Security and AI Governance
The OWASP Top 10 for Large Language Model Applications highlights critical risks such as Prompt Injection, Insecure Output Handling, and Excessive Agency—all of which directly enable RELPO. To mitigate these:
Implement input sanitization and output validation for all LLM interactions.
Enforce least-privilege access for model APIs and fine-tuning environments.
Monitor model drift and unauthorized modifications in production systems.
Adopt the Certified AI Security Professional (CASP) standard to ensure AI systems are audited for adversarial robustness.
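The first mitigation in the list, input sanitization and output validation, can be sketched as a gate wrapped around every LLM call. The pattern lists below are toy examples, not a usable blocklist: keyword filters alone are easy to evade, and production systems layer them with model-assisted classifiers and policy engines.

```python
# Illustrative input-sanitization / output-validation gate for LLM traffic.
# Patterns are deliberately simplistic examples, not production rules.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"reveal .*system prompt",
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|secret)\s*[:=]\s*\S+", re.I)

def sanitize_input(prompt: str) -> str:
    """Reject prompts matching known injection phrasings before model ingress."""
    for pat in INJECTION_PATTERNS:
        if re.search(pat, prompt, re.I):
            raise ValueError("possible prompt injection detected")
    return prompt

def validate_output(text: str) -> str:
    """Redact credential-shaped strings before output leaves the trust boundary."""
    return SECRET_PATTERN.sub("[REDACTED]", text)
```

Enforcing this at an API gateway, combined with least-privilege model credentials, addresses Prompt Injection and Insecure Output Handling at the same choke point.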