2026-03-21 | AI and LLM Security | Oracle-42 Intelligence Research

AI-Powered Social Engineering: Defending Against Relationship Operations (RELPO) in the Age of LLMs

Executive Summary: As large language models (LLMs) and generative AI systems become ubiquitous, threat actors are weaponizing them to enhance social engineering campaigns through highly personalized, scalable, and adaptive attacks known as Relationship Operations (RELPO). These AI-driven attacks exploit cognitive biases, automate impersonation, and manipulate trust at scale—posing unprecedented risks to individuals, enterprises, and critical infrastructure. This article provides an authoritative analysis of AI-powered social engineering threats, outlines key defensive strategies, and offers actionable recommendations for threat modeling, detection, and response in the context of OWASP LLM security principles and Certified AI Security Professional frameworks.

Key Findings

- Threat actors are weaponizing LLMs for Relationship Operations (RELPO): highly personalized, scalable, and adaptive social engineering campaigns that establish and exploit false relationships.
- Hijacked or prompt-injected LLMs, as in the February 2026 LLM Jacking campaign, are now used to deliver social engineering payloads under the guise of legitimate AI assistants.
- Per Microsoft's 2025 threat intelligence, 68% of cloud-based identity attacks now begin with AI-generated phishing messages.
- Effective defense is multi-layered: AI-aware threat modeling, behavioral and linguistic anomaly detection, rapid containment, and controls aligned with the OWASP Top 10 for LLM Applications.

The Rise of AI-Powered Social Engineering and RELPO

Social engineering has long relied on human manipulation, but the integration of LLMs marks a paradigm shift. Threat actors now deploy Relationship Operations (RELPO)—a term coined to describe AI-driven, multi-stage campaigns designed to establish, exploit, and sustain false relationships across digital channels. Unlike traditional phishing, RELPO leverages generative AI to:

- Hyper-personalize lures by synthesizing leaked or scraped data about the target
- Sustain adaptive, multi-turn conversations that gradually erode a target's defenses
- Impersonate trusted individuals across text, voice, and video channels
- Automate identity compromise and credential harvesting at scale

Recent incidents, such as the LLM Jacking campaign reported in February 2026, demonstrate how attackers hijack fine-tuned LLMs via prompt injection to generate malicious content under the guise of legitimate AI assistants. These compromised LLMs are then used to deliver tailored social engineering payloads—blurring the line between tool and attacker.

Mechanisms of AI-Enhanced Social Engineering

1. Hyper-Personalization Through Data Synthesis

LLMs trained on leaked datasets or stolen corporate knowledge can generate emails that mimic a CEO’s writing style, reference recent company projects, and include plausible urgency—making them far more effective than generic phishing attempts. Threat actors combine this with voice synthesis (e.g., ElevenLabs) and video deepfakes (e.g., Synthesia) to create multi-modal RELPO campaigns.
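One defensive counterpart to style mimicry is stylometric comparison: measuring how closely a suspect message matches a corpus of the purported sender's known writing. The sketch below uses character-trigram cosine similarity, a deliberately simple fingerprint; the sample texts and the idea of using a single-message baseline are illustrative only, and a low score is one weak signal among many, not proof of impersonation.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count character n-grams, a simple stylometric fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Baseline would normally be built from many messages the sender
# is known to have written; one message here for brevity.
baseline = char_ngrams("Thanks team, let's sync on the Q4 roadmap tomorrow morning.")
suspect = char_ngrams("URGENT: wire transfer required immediately, reply with credentials.")
score = cosine_similarity(baseline, suspect)
# A low score relative to the sender's historical range is one weak
# impersonation signal to combine with other telemetry.
```

In production this would feed a scoring pipeline rather than a binary verdict, since skilled LLM mimicry can defeat shallow stylometry.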

2. Conversational Manipulation and Trust Erosion

AI chatbots can engage victims in multi-turn dialogues, gradually lowering defenses through social proof ("Everyone else has already complied"), reciprocity ("I sent you a document—can you review it?"), and authority cues ("This is required by compliance"). Unlike static phishing emails, these interactions are adaptive and resilient to traditional spam filters.
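The influence cues named above (social proof, reciprocity, authority) can be surfaced automatically as a triage signal. The following is a minimal sketch: the cue lexicon is illustrative and would need tuning per environment, and pattern matching alone cannot catch a capable LLM that paraphrases around fixed phrases.

```python
import re

# Illustrative cue lexicon drawn from classic influence techniques;
# real deployments would tune and expand these patterns.
CUES = {
    "social_proof": [r"everyone else (has|is)", r"all your colleagues"],
    "reciprocity": [r"i (already )?sent you", r"as a favou?r"],
    "authority": [r"required by compliance", r"per (legal|hr|the ceo)"],
    "urgency": [r"immediately", r"within the hour", r"urgent"],
}

def flag_influence_cues(message: str) -> dict:
    """Return which manipulation categories a message triggers."""
    text = message.lower()
    hits = {}
    for category, patterns in CUES.items():
        matched = [p for p in patterns if re.search(p, text)]
        if matched:
            hits[category] = matched
    return hits

msg = "Everyone else has already complied -- this is required by compliance, act immediately."
result = flag_influence_cues(msg)
```

Flagged messages would be routed to secondary review or trigger an out-of-band verification prompt rather than being silently blocked.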

3. Identity Compromise via AI-Powered Phishing

According to Microsoft’s 2025 threat intelligence, 68% of cloud-based identity attacks now begin with AI-generated phishing messages. These bypass traditional email filters by using legitimate-looking domains, natural language structures, and timely themes (e.g., "Q4 budget review"). Once credentials are harvested, attackers pivot to lateral movement within cloud environments.

Defending Against AI-Powered RELPO: A Multi-Layered Strategy

1. Threat Modeling for AI Infiltration

Organizations must expand threat models to include:

- Prompt injection against internal and customer-facing LLM applications
- Hijacked or compromised AI assistants delivering attacker-controlled content
- Deepfake voice and video impersonation of executives and colleagues
- AI-generated phishing targeting identity providers and cloud accounts

Use frameworks like STRIDE for AI systems to identify spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege risks in AI workflows.
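One way to make STRIDE-for-AI concrete is to keep the threat model as a small machine-readable register that can be reviewed and diffed like code. The STRIDE categories below are standard; the specific assets, scenarios, and mitigations are illustrative examples, not a complete model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    stride: str      # STRIDE category
    asset: str       # AI workflow component at risk
    scenario: str    # illustrative RELPO-relevant scenario
    mitigation: str

THREAT_REGISTER = [
    Threat("Spoofing", "LLM assistant identity",
           "Attacker impersonates a sanctioned internal chatbot",
           "Mutual authentication and signed assistant manifests"),
    Threat("Tampering", "System prompt / fine-tuning data",
           "Prompt injection rewrites assistant behavior",
           "Input isolation and prompt integrity checks"),
    Threat("Information Disclosure", "Retrieval corpus",
           "Model leaks corporate knowledge into personalized lures",
           "Data minimization and output filtering"),
    Threat("Elevation of Privilege", "Agent tool access",
           "Excessive agency lets a hijacked agent move laterally",
           "Least-privilege tool scopes and human approval gates"),
]

def threats_for(category: str) -> list:
    """Filter the register by STRIDE category during review."""
    return [t for t in THREAT_REGISTER if t.stride == category]
```

Keeping the register in version control lets security review happen alongside changes to the AI workflow itself.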

2. Detection: Behavioral Biometrics and Real-Time Monitoring

Traditional signature-based detection fails against AI-generated content. Instead, implement:

- Behavioral biometrics (typing cadence, mouse dynamics, session patterns) to distinguish humans from automated agents
- Linguistic and stylometric anomaly detection on inbound messages
- Real-time monitoring of LLM and API usage for abnormal volume or content
- Out-of-band verification workflows for high-risk requests such as payments and credential changes
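The core of baseline-and-deviation monitoring can be sketched very simply: model an account's normal activity, then flag large deviations. The example below scores hourly outbound message counts with a z-score; the feature, window, and 3-sigma threshold are illustrative assumptions, and production systems would use richer features and robust statistics.

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts: list, current: int, z_threshold: float = 3.0) -> bool:
    """Flag an hourly message count that deviates sharply from the
    account's historical baseline (a crude burst detector)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hourly outbound message counts for one account over a normal period.
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(is_anomalous(baseline, 6))    # typical volume -> False
print(is_anomalous(baseline, 40))   # AI-automated burst -> True
```

A burst like the second case is characteristic of an account being driven by an automated agent rather than its human owner.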

3. Response: Containment and Recovery in AI Ecosystems

In the event of a RELPO breach:

- Contain first: revoke active sessions and harvested credentials, and quarantine any compromised AI assistants or models
- Preserve conversation logs and model artifacts for forensic analysis
- Rotate secrets and re-validate fine-tuned models before returning them to service
- Notify users and counterparties who were targeted through the false relationship
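As an illustration, a RELPO containment sequence can be encoded as an ordered runbook so that stop-the-bleeding actions always run first and every action is logged. Every function here is a hypothetical hook into an organization's own IAM and MLOps tooling, not a real vendor API.

```python
# Each step is a hypothetical hook into local IAM/MLOps tooling;
# these names are illustrative, not a real API.
def revoke_sessions(account: str) -> str:
    return f"revoked sessions for {account}"

def quarantine_assistant(assistant_id: str) -> str:
    return f"quarantined assistant {assistant_id}"

def preserve_logs(assistant_id: str) -> str:
    return f"preserved conversation logs for {assistant_id}"

def rotate_secrets(account: str) -> str:
    return f"rotated credentials for {account}"

def run_relpo_runbook(account: str, assistant_id: str) -> list:
    """Execute containment steps in order, containment before recovery,
    returning an audit trail of what was done."""
    steps = [
        lambda: revoke_sessions(account),
        lambda: quarantine_assistant(assistant_id),
        lambda: preserve_logs(assistant_id),
        lambda: rotate_secrets(account),
    ]
    return [step() for step in steps]

audit_trail = run_relpo_runbook("cfo@example.com", "assistant-7")
```

Encoding the order in code prevents the common incident-response failure of rotating secrets before the attacker's live sessions are revoked.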

OWASP LLM Security and AI Governance

The OWASP Top 10 for Large Language Model Applications highlights critical risks such as Prompt Injection, Insecure Output Handling, and Excessive Agency—all of which directly enable RELPO. To mitigate these:

- Isolate and validate untrusted input reaching the model to blunt prompt injection
- Treat all LLM output as untrusted: sanitize, escape, and validate it before any downstream use
- Constrain agency with least-privilege tool access and human approval for sensitive actions
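The output-handling risk in particular has a simple defensive core: never pass model output to a renderer or an agent's fetch tool without checks. The sketch below escapes markup before display and applies a host allowlist before any link is followed; the allowlisted host and sample strings are illustrative.

```python
import html
from urllib.parse import urlparse

ALLOWED_HOSTS = {"intranet.example.com"}  # illustrative allowlist

def sanitize_llm_output(text: str) -> str:
    """Treat model output as untrusted: escape markup so it cannot
    execute if rendered in a web UI."""
    return html.escape(text)

def url_is_allowed(url: str) -> bool:
    """Allowlist check before an agent is permitted to fetch a link
    that appeared in model output (limits excessive agency)."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

raw = '<script>steal()</script> See https://evil.example.net/doc'
safe = sanitize_llm_output(raw)
```

Escaping on output rather than filtering on input means the protection holds even when a prompt-injection attempt slips past upstream defenses.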

Recommendations for Organizations and Individuals

For Enterprises:

- Adopt AI-aware threat modeling (e.g., STRIDE applied to LLM workflows)
- Deploy behavioral and linguistic anomaly detection alongside conventional email filtering
- Apply OWASP Top 10 for LLM Applications controls to every AI deployment
- Extend awareness training to cover deepfakes and multi-turn AI manipulation, not just phishing emails
- Require out-of-band verification for financial transfers and credential requests

For Individuals:

- Verify unexpected or urgent requests through a second, known channel
- Treat urgency, authority, and social-proof pressure as red flags, even in fluent, personalized messages
- Enable phishing-resistant multi-factor authentication on email and cloud accounts
- Limit publicly shared personal details that could seed a personalized lure
