AI targeting systems degrade unpredictably when adversaries change tactics, with no warning to operators

defense
An AI targeting model trained on six months of adversary behavior performs well until the adversary changes tactics -- new vehicle types, different movement patterns, altered communication methods. The model's accuracy then degrades silently, because neural networks tend to fail confidently -- assigning high confidence to wrong classifications -- rather than flagging uncertainty. Operators keep receiving target recommendations with no indication that the model's accuracy has dropped from, say, 90% to 40%. The problem persists because out-of-distribution (OOD) detection for military AI remains an unsolved research problem: no deployed system can reliably detect that it is operating outside its training distribution and alert the operator that its recommendations should not be trusted.
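To make the failure mode concrete, here is a minimal sketch of the most common baseline OOD check, thresholding the maximum softmax probability (MSP). All names, logits, and the threshold value are illustrative assumptions, not taken from any deployed system; the sketch also demonstrates why this baseline is insufficient, since a shifted input can still produce a confidently wrong prediction.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into probabilities."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def max_softmax_prob(logits):
    """Maximum softmax probability: the model's self-reported confidence."""
    return max(softmax(logits))

def flag_low_confidence(logits, threshold=0.75):
    """Baseline OOD check: flag inputs whose confidence falls below a
    threshold (0.75 is an arbitrary illustrative value). Note the gap:
    a model that fails *confidently* on shifted data slips right past
    this check -- exactly the silent-degradation problem described above."""
    return max_softmax_prob(logits) < threshold

# Hypothetical logits for a 3-class targeting model:
in_distribution = [8.0, 1.0, 0.5]   # one class clearly dominates
flat_scores     = [2.0, 1.8, 1.6]   # scores are flat -> low confidence
confident_wrong = [9.0, 0.5, 0.2]   # OOD input, but still high confidence

print(flag_low_confidence(in_distribution))  # False: looks fine
print(flag_low_confidence(flat_scores))      # True: flagged for review
print(flag_low_confidence(confident_wrong))  # False: silent failure
```

The third case is the crux: the MSP check only catches inputs that make the model visibly unsure, while distribution shift often produces inputs the model misclassifies with full confidence.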

Evidence

https://www.rand.org/pubs/research_reports/RRA1524-1.html
