(New York, April 28, 2025) – Autonomous weapons systems pose grave risks to human rights during both war and peacetime, Human Rights Watch said in a report released today. Governments should tackle the concerns raised by such weapons systems, known as “killer robots,” by negotiating a multinational treaty to address the dangers.
The 61-page report, “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making,” finds that autonomous weapons, which select and apply force to targets based on sensor rather than human inputs, would contravene the rights to life, peaceful assembly, privacy, and remedy, as well as the principles of human dignity and non-discrimination. Technological advances and military investments are now spurring the rapid development of autonomous weapons systems that would operate without meaningful human control.
“The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances, raising serious concerns under international human rights law,” said Bonnie Docherty, senior arms adviser at Human Rights Watch, lecturer on law at Harvard Law School’s International Human Rights Clinic, and lead author of the report. “To avoid a future of automated killing, governments should seize every opportunity to work toward the goal of adopting a global treaty on autonomous weapons systems.”
The report, co-published with Harvard Law School’s International Human Rights Clinic, was issued ahead of the first United Nations General Assembly meeting on autonomous weapons systems in New York on May 12 to 13, 2025.
Weapons systems with varying degrees of autonomy have existed for years, but the types of targets, duration of operation, geographical scope, and environment in which they operate have been limited. They include missile defense systems, armed drones, and loitering munitions.
Autonomous weapons systems operating without meaningful human control would, once activated, rely on software, often using algorithms, together with input from sensors such as cameras, and other data such as radar signatures and heat signatures, to identify a target. After finding a target, they would fire or release their payload without the need for approval or review by a human operator. That means a machine rather than a human would determine where, when, and against whom or what force is applied.
Autonomous weapons systems would lack the ability to interpret complex situations and to accurately approximate human judgment and emotion, elements that are essential to lawfully using force under the rights to life and peaceful assembly.
Contrary to fundamental human rights principles, the weapons systems would be incapable of valuing human life in a way that is required to respect an individual’s dignity. In addition, systems relying on artificial intelligence would most likely be discriminatory due to developers’ biases and the inherent lack of transparency of machine learning.
Autonomous weapons systems would also violate human rights throughout their life cycle, not just at the time of use. The mass surveillance necessary for their development and training would undermine the right to privacy. The accountability gap of these black-box systems would infringe upon the right to a remedy for harm after an attack.
“Human beings, whether soldiers or police officers, often egregiously violate human rights, but it would be worse to replace them with machines,” Docherty said. “While people have the ability to uphold human rights, machines do not have the capacity to comply or to understand the consequences of their actions.”
Christof Heyns, the late UN special rapporteur on extrajudicial executions, was the first UN official to raise the alarm about autonomous weapons systems in his 2013 report to the UN Human Rights Council. “A Hazard to Human Rights” charts how the UN secretary-general and numerous UN bodies and experts have stressed that the use of autonomous weapons systems would pose threats to international human rights law, and some have argued they should be prohibited.
More than 120 countries are now on record calling for the adoption of a new international treaty on autonomous weapons systems. UN Secretary-General António Guterres and Mirjana Spoljaric Egger, president of the International Committee of the Red Cross, have urged states to “act now to preserve human control over the use of force” by negotiating by 2026 a legally binding instrument with prohibitions and regulations for autonomous weapons systems.
Most treaty proponents have called for prohibitions on autonomous weapons systems that by their nature operate without meaningful human control, or that target people, as well as for regulations to ensure that all other autonomous weapons systems cannot be used without meaningful human control.
The upcoming UN meeting was mandated by a UN General Assembly resolution on lethal autonomous weapons systems that was adopted on December 2, 2024, by a vote of 166 in favor, 3 opposed (Belarus, North Korea, and Russia), and 15 abstentions.
Countries have discussed lethal autonomous weapons systems at the Convention on Conventional Weapons (CCW) meetings in Geneva since May 2014, but with no substantive outcome. The main reason for the lack of progress under the CCW is that its member countries rely on a consensus approach to decision-making, which means a single country can reject a proposal, even if every other country agrees to it. A handful of major military powers investing in autonomous weapons systems have exploited this process to repeatedly block proposals to negotiate a legally binding instrument.
“Negotiations for a treaty on autonomous weapons systems should take place in a forum characterized by a common purpose, voting-based decision-making, clear and ambitious deadlines, and a commitment to inclusivity,” Docherty said.
Human Rights Watch is a cofounder of Stop Killer Robots, which calls for a new international treaty to prohibit and regulate autonomous weapons systems. The coalition of more than 270 nongovernmental organizations in 70 countries supports the development of legal and other norms that ensure meaningful human control over the use of force, counter digital dehumanization, and reduce automated harm.