Autonomous weapons have no legal person to hold accountable when a strike kills civilians

When an autonomous targeting system identifies and engages a target that turns out to be civilian, the accountability chain is unclear. The software developer wrote the algorithm years before deployment; the commander authorized autonomous mode but never approved the specific target; the operator was monitoring but the system acted faster than a human could intervene. No individual made the specific decision to strike. International humanitarian law (IHL) requires individual criminal responsibility for unlawful attacks, yet distributed AI decision-making diffuses responsibility across so many actors that no one is legally culpable. The gap persists because IHL was written for human combatants making discrete decisions, and no legal framework has been adopted for accountability in human-machine teaming, where the 'decision' is an emergent property of the system rather than an act of any single person.

Evidence

https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons
