Soldiers without Conscience: Why States Need to Regulate Killer Robots
Current Practice in Medical Science Vol. 7,
25 July 2022
The phenomenon of artificial intelligence (AI) has significantly advanced military technology in the 21st century by facilitating the development of lethal autonomous weapons (LAWs), also known as 'killer robots.' These weapon systems can select and engage targets on their own, without meaningful human control, raising serious ethical questions about their propriety. Compounding the problem, the law of armed conflict, also referred to as international humanitarian law (IHL), does not cover killer robots, because such weapons were once dismissed as science fiction. The unresolved ethical issues and the legal vacuum have been the subject of sustained debate among governments, the United Nations (UN), civil society groups, and scholars. While some have called for an outright ban, others argue that the benefits of retaining these weapons outweigh the disadvantages. Drawing on the existing literature on the subject, this paper analyses both sides of the debate in the light of current realities. It finds that the ethical and legal issues require global attention and resolution; that the unregulated employment of killer robots endangers humanity, as it could trigger a new arms race; and that States are unwilling to prohibit these weapons for strategic reasons. The paper advocates the adoption of a new global treaty to address the myriad issues associated with autonomous weapon systems, together with a strong monitoring and supervisory role for the UN.
- Unmanned vehicles
- Killer robots
- Laws of war