AI targeting creates 'automation bias' where operators trust the machine over their own judgment
Studies of automation bias in aviation and clinical decision support show that when an AI system recommends an action, human operators accept the recommendation 92-97% of the time, even when it contradicts their own assessment. Applied to military targeting, this means an operator who sees an AI-flagged target is psychologically predisposed to approve the strike even if their own analysis suggests the target may be civilian. The 'human in the loop' becomes a formality. The problem persists because automation bias is a fundamental cognitive phenomenon -- humans defer to systems they perceive as more capable -- and no military training program has demonstrated effective countermeasures to it in targeting decisions.
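The consequence of high deference rates can be made concrete with a toy Monte Carlo sketch. All parameters here (a 95% deference rate, a 5% independent-review error rate) are illustrative assumptions, not figures from the cited study; the function name and setup are hypothetical:

```python
import random

random.seed(0)

def civilian_strike_rate(defer_prob, human_fp=0.05, n=100_000):
    """Fraction of wrongly AI-flagged civilian targets that get struck.

    defer_prob: probability the operator approves simply because the AI
                flagged the target (automation bias).
    human_fp:   operator's own error rate when reviewing independently.
    Both rates are illustrative assumptions.
    """
    struck = 0
    for _ in range(n):
        # Each trial is a civilian target the AI has (wrongly) flagged.
        if random.random() < defer_prob:
            struck += 1          # operator rubber-stamps the AI flag
        elif random.random() < human_fp:
            struck += 1          # independent review also errs
    return struck / n

# With 95% deference, nearly every false flag becomes a strike;
# a fully independent reviewer would catch most of them.
print(civilian_strike_rate(0.95))  # roughly defer + (1-defer)*human_fp
print(civilian_strike_rate(0.0))   # roughly human_fp alone
```

Under these assumed numbers, deference pushes the error-propagation rate from about 5% to about 95%, which is the quantitative sense in which the human review step becomes a formality.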
Evidence
https://www.nature.com/articles/s41562-022-01501-z