Training data bias in AI targeting systems reflects historical targeting patterns that may embed discrimination
AI targeting systems are trained on historical engagement data: past strikes and the characteristics of the targets hit. If historical targeting disproportionately struck certain building types, neighborhoods, or demographic patterns, a model trained on that data learns to replicate the bias. Because the training data is classified, no independent review can assess whether the model perpetuates discriminatory targeting. The problem persists because military AI development happens behind classification barriers that block the adversarial auditing, bias testing, and red-teaming that commercial AI systems routinely undergo, and the organizations building these systems have no institutional incentive to discover bias that would slow deployment.
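The core mechanism is simple to demonstrate. Below is a minimal, entirely hypothetical sketch (invented district names and strike counts, not real data) showing how a model fit on biased historical labels reproduces the bias: two districts with identical underlying conditions end up scored very differently purely because one was struck more often in the past.

```python
from collections import defaultdict

# Hypothetical historical engagement records: (district, struck) pairs.
# Districts "A" and "B" are stipulated to be identical in every relevant
# respect, but "A" was historically struck four times as often -- the
# embedded bias we want to surface.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

# "Training": a naive frequency model scoring each district by its
# historical strike rate. Any classifier fit to minimize error on these
# labels converges toward the same per-district rates.
counts = defaultdict(lambda: [0, 0])  # district -> [strikes, total]
for district, struck in history:
    counts[district][0] += struck
    counts[district][1] += 1

scores = {d: s / n for d, (s, n) in counts.items()}
print(scores)  # district A scored 4x higher, from label frequency alone
```

With the data classified, no outside auditor can run even this trivial check: compare per-group strike rates in the training set against an independent measure of actual threat. The disparity above is only visible because the toy data is open.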
Evidence
https://www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict