Adversarial patches on rooftops can trick AI targeting systems into misclassifying buildings
AI-based automatic target recognition (ATR) systems built on computer vision are vulnerable to adversarial patches -- physical patterns painted on rooftops or vehicles that cause a neural network to misclassify a hospital as a military headquarters, or vice versa. Researchers have demonstrated that a 2x2 meter adversarial patch visible in overhead imagery can drive misclassification rates above 90% in standard object detection models. The technique cuts both ways: a defender could paint patches on civilian infrastructure to protect it, while an attacker could paint patches on military targets to make them appear civilian. The vulnerability persists because adversarial robustness remains an unsolved problem in computer vision -- no known neural network architecture is provably immune to adversarial inputs, and the attack/defense asymmetry means patches are cheap to create but expensive to defend against.
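The core mechanics can be sketched with a toy model: a patch is just a small set of pixels the attacker controls, optimized by gradient ascent to push a classifier's score toward the wrong class. The sketch below uses a linear scorer and NumPy purely for illustration -- all names, sizes, and numbers are invented, and a real attack (as in the cited paper) optimizes a printable patch against a deep detector under physical transformations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ATR classifier: a linear scorer over 64 "pixels".
# score > 0 -> "military", score <= 0 -> "civilian". This is an invented
# illustration, not the setup from the cited paper.
n_pixels = 64
w = rng.normal(size=n_pixels)

def score(x):
    return float(w @ x)

image = rng.uniform(0.2, 0.8, size=n_pixels)  # the scene before painting
patch_idx = np.arange(8)                      # attacker controls 8 pixels

def optimize_patch(x, idx, target_sign, steps=200, lr=0.05):
    """Gradient ascent on the score over the patch pixels only.

    For a linear scorer, d(score)/d(x[idx]) = w[idx], so each step nudges
    the patch pixels toward the target class, clipped to valid paint
    values in [0, 1]."""
    x = x.copy()
    for _ in range(steps):
        x[idx] = np.clip(x[idx] + lr * target_sign * w[idx], 0.0, 1.0)
    return x

before = score(image)
target_sign = -1.0 if before > 0 else 1.0     # push toward the other class
patched = optimize_patch(image, patch_idx, target_sign)
after = score(patched)
print(f"score before: {before:+.2f}, after patch: {after:+.2f}")
```

With only 8 of 64 pixels under the attacker's control, the score still moves measurably toward the wrong class; whether it crosses the decision boundary depends on the classifier's margin, which is exactly the cheap-to-attack, expensive-to-defend asymmetry described above.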
Evidence
https://arxiv.org/abs/1712.09665