The United States Department of Defense’s decision on February 27 to reject the artificial intelligence company Anthropic’s ethical red lines for military use of AI is a clear sign that the Pentagon is unlikely to uphold meaningful safeguards on weapons development. In fulfilling its Defense Department contract, Anthropic declined to allow the Pentagon to use the company’s products for fully autonomous weapons or for mass surveillance of US citizens.
Governments at the United Nations in Geneva this week should push back against this dangerous decision when they discuss ways to address autonomous weapons systems under the auspices of the Convention on Conventional Weapons.
At the heart of Anthropic’s dispute with the Pentagon are divergent views about the definition of “responsible AI” in military domains. Anthropic says it drew a red line at fully autonomous weapons systems, which would select and engage targets with no human involvement.
But the Defense Department’s January AI memo apparently removed a requirement that operators of autonomous weapons systems be able to exercise “appropriate levels of human judgment over the use of force.” The memo instead prioritizes accelerated adoption of AI to achieve US “Military AI Dominance,” a goal pursued at the expense of those human-control standards.
Defense Secretary Pete Hegseth on February 27 directed that Anthropic be designated a “supply chain risk” and swiftly signed a deal with Anthropic’s competitor OpenAI, which agreed to its products being used for “any lawful use,” a new US government requirement.
Human Rights Watch has long described how autonomous weapons systems risk placing civilians in grave danger because they would struggle to distinguish between civilians and combatants during armed conflict, or to navigate complex, dynamic environments like protests. Among other things, they lack the ability to understand subtle cues signaling human intentions.
Because of their opacity and unpredictability, it would be difficult to hold individual operators or developers accountable for harm caused by these systems. And biases built into the algorithms these systems rely on could lead to disproportionate harm to people of color, women, and people with disabilities, among other groups.
To prevent the US from leading the world down a dangerous slide from which there is no return, governments at the Convention on Conventional Weapons should use this week’s meeting to support and strengthen the draft treaty banning and regulating autonomous weapons systems.