Executive Summary:
By 2026, drone surveillance detection systems (DSDS) increasingly rely on AI models to process geospatial open-source intelligence (OSINT) for identifying unauthorized unmanned aerial vehicles (UAVs). While these systems enhance security, they remain vulnerable to adversarial geospatial manipulations that can degrade detection accuracy or mislead operators. This study reveals critical OSINT-based attack vectors targeting AI-driven DSDS, including synthetic aperture radar (SAR) spoofing, thermal signature obfuscation, and GPS trajectory falsification. Using geospatial adversarial machine learning, attackers can exploit OSINT feeds—such as satellite imagery, ADS-B data, and public UAV registries—to bypass detection or create false positives. We demonstrate how low-cost, publicly available tools can generate realistic geospatial disinformation, compromising the integrity of national airspace monitoring. This work concludes with actionable countermeasures to harden AI-based DSDS against geospatial OSINT manipulation.
Key Findings:
Drone surveillance detection systems have evolved from passive RF scanners to intelligent platforms integrating AI, geospatial analytics, and multi-sensor fusion. Modern DSDS ingest a mosaic of OSINT sources—satellite imagery, public UAV registries, weather data, and ADS-B broadcasts—to identify and classify drones in near real time. AI models, particularly convolutional and graph neural networks, are trained on labeled geospatial datasets to distinguish drones from birds, aircraft, or environmental noise.
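To make the multi-sensor fusion step concrete, here is a minimal, hypothetical sketch of how per-sensor drone-likelihood scores might be combined into a single track classification. The sensor names, weights, and threshold are illustrative assumptions, not parameters of any real DSDS.

```python
# Minimal sketch of multi-sensor score fusion for a DSDS track.
# Sensor names, weights, and threshold are illustrative assumptions,
# not parameters of any real system.
SENSOR_WEIGHTS = {"sar": 0.4, "thermal": 0.3, "adsb": 0.3}

def fuse_track_scores(scores):
    """Combine per-sensor drone-likelihood scores (0..1) into one confidence."""
    total = sum(w * scores.get(name, 0.0) for name, w in SENSOR_WEIGHTS.items())
    return max(0.0, min(1.0, total))

def classify(scores, threshold=0.5):
    """Label a track as a drone when the fused confidence crosses the threshold."""
    return "drone" if fuse_track_scores(scores) >= threshold else "clutter"

print(classify({"sar": 0.9, "thermal": 0.8, "adsb": 0.0}))  # -> drone
print(classify({"sar": 0.2, "thermal": 0.1, "adsb": 0.0}))  # -> clutter
```

Note that this fusion structure is exactly what the attacks below target: corrupting any one OSINT-derived score shifts the fused confidence across the decision threshold.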
However, this dependence on OSINT introduces a critical attack surface: adversaries can manipulate the very data these models rely on. Geospatial OSINT vulnerabilities arise when attackers inject, alter, or spoof geospatial features in publicly available datasets, causing AI models to misclassify or ignore real threats.
---

AI-driven DSDS depend on several OSINT streams, including satellite imagery (optical, SAR, and thermal), ADS-B broadcasts, public UAV registries, weather data, and crowdsourced map layers such as OpenStreetMap.
Each feed is a potential attack vector. Unlike proprietary sensor networks, OSINT is curated by third parties, often without security guarantees. AI models trained on such data inherit its biases and vulnerabilities.
---

SAR is pivotal for detecting low-flying, non-cooperative drones. However, SAR backscatter patterns can be artificially synthesized using open tools such as the Sentinel Application Platform (SNAP) and PolSARpro. Attackers generate synthetic drone-like scatterers and inject them into public SAR mosaics; when ingested by DSDS models, these artifacts trigger false positives or mask real targets.
Impact: Up to 87% reduction in detection probability for actual drones in contaminated SAR tiles (simulated 2026 attack).
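The injection mechanic described above can be sketched in a few lines: simulate a speckled SAR amplitude tile, then add a bright point-target blob where no drone exists. The speckle model, blob shape, and amplitudes are simplifying assumptions for illustration, not output of SNAP or PolSARpro.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckled_tile(size=128, mean=1.0):
    """Background clutter: exponential speckle, typical of single-look SAR amplitude."""
    return rng.exponential(mean, (size, size))

def inject_scatterer(tile, row, col, amplitude=8.0, width=1.5):
    """Add a bright Gaussian blob mimicking a drone-like point scatterer."""
    size = tile.shape[0]
    y, x = np.ogrid[:size, :size]
    psf = amplitude * np.exp(-((y - row) ** 2 + (x - col) ** 2) / (2 * width ** 2))
    return tile + psf

tile = speckled_tile()
spoofed = inject_scatterer(tile, 64, 64)
print(float(spoofed[64, 64] - tile[64, 64]))  # injected target dominates clutter
```

A detector tuned to flag bright, compact returns against speckle would score this artifact as a plausible drone, which is the false-positive path; the same tooling can overlay clutter on a real target's cell to suppress it.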
Thermal imagery from satellites is used to detect UAVs by their heat signatures. Attackers exploit the latency and coarse resolution of public thermal feeds (e.g., MODIS, VIIRS) through thermal signature obfuscation, masking or diluting a drone's heat excess against the ambient background.
These techniques reduce AI-based thermal detection accuracy from 89% to below 30% in controlled simulations.
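One way to see why coarse public thermal feeds are exploitable is dilution: a single hot drone pixel averaged into a large resolution cell falls below any reasonable contrast threshold. The temperatures, cell size, and threshold below are assumed values for a toy model, not measured MODIS/VIIRS figures.

```python
# Toy model of thermal-contrast detection and resolution-based obfuscation.
# AMBIENT_K, DRONE_DELTA, cell size, and threshold are illustrative assumptions.
AMBIENT_K = 290.0   # assumed background temperature (kelvin)
DRONE_DELTA = 12.0  # assumed motor/battery heat excess over ambient

def thermal_contrast(pixel_k, ambient_k=AMBIENT_K):
    """Temperature excess of a pixel over the local background."""
    return pixel_k - ambient_k

def detect(pixel_k, threshold_k=5.0):
    """Toy threshold detector on thermal contrast."""
    return thermal_contrast(pixel_k) >= threshold_k

hot = AMBIENT_K + DRONE_DELTA
# The drone's heat is averaged into one coarse 8x8 cell of the public feed,
# so its contrast is diluted 64-fold.
cell = (hot + 63 * AMBIENT_K) / 64
print(detect(hot), detect(cell))  # -> True False
```

The same arithmetic explains why feed latency helps the attacker: by the time a coarse, delayed tile is published, even a marginally detectable signature has moved out of the cell.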
Public ADS-B feeds are commonly used to validate drone presence. Attackers exploit open drone registries (e.g., the FAA UAS Registry) to spoof registered identities and pair them with falsified GPS trajectories and flight plans.
When DSDS models cross-reference these feeds, they may classify real intruders as compliant UAVs, enabling covert penetration of restricted airspace.
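The whitelisting failure mode can be sketched directly: if the cross-reference logic trusts a broadcast identity that appears in a public registry, an intruder broadcasting a harvested ID is classified as compliant. The registry entries, field names, and matching rule below are hypothetical.

```python
# Toy sketch of registry cross-referencing in a DSDS.
# Registry contents, field names, and the matching rule are hypothetical.
REGISTRY = {"FA-1234": {"operator": "SurveyCo", "approved_zone": "ZONE-A"}}

def classify_track(track, registry):
    """Whitelist a track whose broadcast ID is registered for the observed zone."""
    entry = registry.get(track["uas_id"])
    if entry is not None and entry["approved_zone"] == track["zone"]:
        return "compliant"
    return "intruder"

# A hostile drone broadcasting a harvested registered ID in the right zone:
spoofing_intruder = {"uas_id": "FA-1234", "zone": "ZONE-A"}
honest_unknown = {"uas_id": "XX-0000", "zone": "ZONE-A"}
print(classify_track(spoofing_intruder, REGISTRY))  # -> compliant
print(classify_track(honest_unknown, REGISTRY))     # -> intruder
```

The defect is structural: the registry attests who registered an ID, not who is transmitting it, so identity must be corroborated by an authenticated channel rather than an open feed.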
Contextual AI models use OpenStreetMap (OSM) and 3D terrain models to differentiate drones from fixed objects. Attackers can poison these public map layers, for example by deleting real structures or inserting phantom ones.
This form of geospatial data poisoning causes AI models to flag benign scenes as suspicious, leading to alert fatigue and operator desensitization.
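A minimal sketch of this poisoning path: a context model suppresses detections that coincide with known fixed objects, so an attacker who deletes those objects from the public layer makes benign rooftop fixtures resurface as alerts. The coordinates and suppression rule are illustrative assumptions.

```python
# Toy sketch of map-layer poisoning against a context-aware suppressor.
# Coordinates and the suppression rule are illustrative assumptions.
FIXED_OBJECTS = {(10, 10), (20, 25)}  # (row, col) of known antennas / HVAC units

def alert(detections, fixed_objects):
    """Raise alerts only for detections not explained by a known fixed object."""
    return [d for d in detections if d not in fixed_objects]

detections = [(10, 10), (20, 25), (40, 40)]
print(len(alert(detections, FIXED_OBJECTS)))  # clean map layer: 1 real alert
print(len(alert(detections, set())))          # poisoned (emptied) layer: 3 alerts
```

Every spurious alert erodes operator trust, which is the point of the attack: once the layer is poisoned, the flood of false positives trains operators to ignore the channel.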
---

We conducted a red-team evaluation of a hypothetical national DSDS that ingests geospatial OSINT feeds and applies AI-based detection. The attacker had no physical access to the target zone and used only public tools and datasets.
Attack Flow:
Result: Detection rate dropped from 94% to 32% within 90 seconds of OSINT contamination. The system generated 12 false alerts per minute, overwhelming operators.
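The headline numbers above can be reproduced qualitatively with a crude Monte Carlo model in which feed contamination multiplicatively suppresses the baseline detection probability. The contamination level and target count are assumed free parameters tuned to match the reported drop, not measurements.

```python
import numpy as np

# Crude Monte Carlo model of detection-rate collapse under OSINT contamination.
# The contamination fraction and sample size are assumptions, not measurements.
rng = np.random.default_rng(1)

def detection_rate(n_targets, base_rate, contamination):
    """Fraction of real targets detected when a share of OSINT tiles is poisoned."""
    effective = base_rate * (1.0 - contamination)
    detected = rng.random(n_targets) < effective
    return float(detected.mean())

clean = detection_rate(10_000, 0.94, 0.00)
attacked = detection_rate(10_000, 0.94, 0.66)  # assumed contamination level
print(clean, attacked)
```

Under these assumptions the attacked rate lands near 0.32, matching the reported drop from 94% to 32%; the value of the model is not the numbers but the shape, i.e., detection degrades linearly with the fraction of poisoned tiles the AI ingests.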
---