Sometimes AI isn’t as clever as we think it is. Researchers training an algorithm to identify skin cancer thought they had succeeded until they discovered that it was using the presence of a ruler to help it make predictions. Specifically, their data set consisted of images where a pathologist had put in a ruler to measure the size of malignant lesions.
The algorithm extended this logic to images beyond the data set, identifying benign tissue as malignant whenever a ruler appeared in the image.
The problem here is not that the AI algorithm made a mistake. Rather, the concern stems from how the AI “thinks”. No human pathologist would arrive at this conclusion.
These cases of flawed “reasoning” abound – from HR algorithms that favour hiring men because the data set is skewed towards them, to medical algorithms that propagate racial disparities in treatment. Now that researchers know about these problems, they are scrambling to address them.
Recently, Google decided to end its longstanding ban on developing AI weapons. This potentially encompasses the use of AI to develop arms, as well as AI in surveillance and weapons that could be deployed autonomously on the battlefield. The decision came days after parent company Alphabet experienced a 6% drop in its share price.
This is not Google’s first foray into murky waters. It worked with the US Department of Defense on the use of its AI technology for Project Maven, which involved object recognition for drones.
When news of this contract became public in 2018, it sparked backlash from employees who did not want the technology they developed to be used in wars. Ultimately, Google did not renew its contract, which was picked up by rival Palantir instead.
The speed with which Google’s contract was picked up by a competitor led some to note the inevitability of these developments, and to argue that it was perhaps better to be on the inside to shape the future.
Such arguments, of course, presume that firms and researchers will be able to shape the future as they want to. But previous research has shown that this assumption is flawed for at least three reasons.
The confidence trap
First, human beings are susceptible to falling into what is known as a “confidence trap”. I have researched this phenomenon, whereby people assume that since previous risk-taking paid off, taking more risks in the future is warranted.
In the context of AI, this may mean incrementally extending the use of an algorithm beyond its training data set. For example, a driverless car may be used on a route that was not covered in its training.
This can throw up problems. There is now an abundance of data for driverless car AI to draw on, and yet mistakes still occur. Accidents, such as the Tesla that drove into a £2.75 million jet when summoned by its owner in an unfamiliar setting, can still happen. For AI weapons, there isn’t even much data to begin with.
Second, AI can reason in ways that are alien to human understanding. This is illustrated by the paperclip thought experiment, in which an AI asked to produce as many paperclips as possible does so while consuming all resources – including those necessary for human survival.
Of course, this seems trivial. After all, humans can lay out ethical guidelines. But the problem lies in being unable to anticipate how an AI algorithm might achieve what humans have asked of it and thus losing control. This might even include “cheating”. In a recent experiment, AI cheated to win chess games by modifying system files denoting positions of chess pieces, in effect enabling it to make illegal moves.
But society may be willing to accept mistakes, as with civilian casualties caused by drone strikes directed by humans. This tendency, known as the “banality of extremes”, describes how humans normalise even extreme instances of evil as a cognitive mechanism for coping. The “alienness” of AI reasoning may simply provide more cover for doing so.
Third, firms like Google that are associated with developing these weapons might be too big to fail. As a consequence, even when there are clear instances of AI going wrong, they are unlikely to be held responsible. This lack of accountability creates a hazard as it disincentivises learning and corrective actions.
The “cosying up” of tech executives with US president Donald Trump only exacerbates the problem as it further dilutes accountability.
Rather than joining the race towards the development of AI weaponry, an alternative approach would be to work on a comprehensive ban on its development and use.
Although this might seem unachievable, consider the threat once posed by the hole in the ozone layer, which brought rapid, unified action in the form of a ban on the CFCs that caused it. In fact, it took only two years for governments to agree on a global ban on the chemicals. This stands as a testament to what can be achieved in the face of a clear, immediate and well-recognised threat.
Unlike climate change – which despite overwhelming evidence continues to have detractors – recognition of the threat of AI weapons is nearly universal and includes leading technology entrepreneurs and scientists.
In fact, banning the use and development of certain types of weapons has precedent – countries have, after all, done the same for biological weapons. The problem lies in no country wanting another to have such weapons before it does, and no business wanting to lose out in the process.
In this sense, choosing to weaponise AI or disallowing it will mirror the wishes of humanity. The hope is that the better side of human nature will prevail.
Akhil Bhardwaj does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.