Did an AI drone go rogue and kill its human operator in a simulation?



Reports of an artificial intelligence (AI) drone killing its military operator in a simulation have circulated in global media over the past week, following remarks made at a defense summit in London. However, the US military denies that any such simulation ever took place.

The alleged simulated death was described by US Air Force Colonel Tucker Hamilton, who spoke at the Future Combat Air & Space Capabilities Summit in London.

According to the reports, the AI killed its operator in the simulation so that the human would stop interfering with the mission it had been assigned.

“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” Hamilton said, according to Sky News. “The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”


“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton added.

The US military denies the exercise took place

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” spokesperson Ann Stefanek said, according to Sky News. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

According to the military, the scenario was a hypothetical thought experiment that did not originate within the military, not an actual simulation. Hamilton later confirmed this, according to Silicon.

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.” 

“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”

Growing concerns over the future of artificial intelligence

The Jerusalem Post recently reported that senior AI executives, academics, and other public figures had signed a statement warning of the risk of human extinction from AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement, which emphasized “wide-ranging concerns about the ultimate danger of uncontrolled AI.”

Billionaire businessman and philanthropist Bill Gates also expressed concern, in a March blog post, that AI could take over the world.

Gates emphasized the “threat posed by humans armed with AI” and the possibility that AI could “decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us.”




