UN Secretary General António Guterres on Dec. 13 called upon member states to devise “an ambitious plan for the future to establish restrictions on the use of certain types of autonomous weapons” ahead of the Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW). He called on the CCW to “swiftly advance its work on autonomous weapons that can choose targets and kill people without human interference.”
The CCW seeks to restrict the use of weapons that are excessively injurious or have indiscriminate effects. It is based on the principles of proportionality and of distinction between civilians and combatants. In recent years, it has become the forum for discussions on the humanitarian, ethical, military and legal implications of the use of lethal autonomous weapons systems (LAWS).
A potential framework for addressing LAWS has been debated by members of the CCW since 2013, with member states adopting conflicting positions on the issue. Concerns over the dangers of these "killer robots" have heightened since the Panel of Experts on Libya reported to the UN Security Council in March 2021 that a military-grade autonomous drone had been used in an armed conflict.
SpaceX CEO Elon Musk and Google vice president for artificial intelligence policy Mustafa Suleyman are among the prominent AI figures who have urged the UN to ban the development of autonomous weapons. In 2020, Human Rights Watch issued a report proposing the fundamental elements of a treaty to ban the use of such weapons.
A coalition of over 65 CCW states has endorsed the proposed ban on LAWS. Other member states, including the US and Russia, have opposed it. The US has expressed a preference for a "non-binding code of conduct" over a treaty banning the use of LAWS. States such as the US, Israel, India, the Netherlands, and France are believed to oppose the ban owing to their heavy investments in the development of AI for military use.
From Jurist, Dec. 14. Used with permission.
Photo: Future of Life Institute