Google continues to show us why it chose to abandon its old motto of “Don’t Be Evil,” as it becomes more and more enmeshed with the military-industrial complex. Most recently, Google has removed four key points from its AI principles. Those principles previously stated that the company would not pursue AI applications involving (1) weapons, (2) surveillance, (3) technologies that “cause or are likely to cause overall harm,” and (4) technologies whose purpose contravenes widely accepted principles of international law and human rights.
Those principles are gone now.
In their place, the company has written that “democracies” should lead in AI development and that companies should work together with governments “to create AI that protects people, promotes global growth, and supports national security.” This could mean that the provider of the world’s largest search engine, the tool most people use to uncover the best apple pie recipes and to find out what time their favorite coffee shop closes, could be in the business of creating AI-based weapons systems and leveraging its considerable computing power for surveillance.
This troubling decision to potentially profit from high-tech warfare, which could have serious consequences for real lives and real people, comes after criticism from EFF, human rights activists, and other international groups. Despite its pledges and vocal commitment to human rights, Google has faced criticism for its involvement in Project Nimbus, which provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to surveil and target civilians in the Occupied Palestinian Territories. As EFF said in 2024, “When a company makes a promise, the public should be able to rely on it.” Rather than fully living up to its previous human rights commitments, it seems Google has shifted its priorities.
Google, a company valued at $2.343 trillion, with global infrastructure and a massive legal department, appears to be leaning into the current anti-humanitarian moment. The fifth-largest company in the world seems to have chosen to make the few extra bucks (relative to its earnings and net worth) that will come from mass surveillance tools and AI-enhanced weapons systems.
And of course we can tell why. With government money flying out the door toward defense contractors, surveillance technology companies, and other national security- and policing-related vendors, the legacy companies that swallow up all of that data don’t want to miss out on the feeding frenzy. With $1 billion contracts on the table even for smaller companies promising AI-enhanced tech, it looks like Google is willing to throw in its lot with the herd.
In addition to Google and Amazon’s involvement with Project Nimbus, which includes both cloud storage for the large amounts of data collected through mass surveillance and analysis of that data, many other scenarios and products on the market raise concerns. AI could be used to power autonomous weapons systems that decide when and whether to pull the trigger or drop a bomb. Targeting software can mean physically aiming weapons at people identified by geolocation or by other types of machine learning, like face recognition or other biometric technology. AI could also be used to sift through massive amounts of intelligence, including intercepted communications and publicly available information from social media and the internet, in order to assemble lists of people to be targeted by militaries.
Whether autonomous AI-based weapons systems and surveillance are controlled by totalitarian states or by states that meet Google’s definition of “democracy” is of little comfort to the people who could be targeted, spied on, or killed in error by AI technology that is prone to mistakes. AI cannot be held accountable for its actions. If we, the public, manage to navigate the corporate, government, and national security secrecy to learn of these flaws, companies will fall back on a playbook we’ve seen before: tinkering with the algorithms and declaring the problem solved.
We urge Google, and all of the companies that will follow in its wake, to reverse course. In the meantime, users will have to decide who deserves their business. As the company’s most successful product, its search engine, falters, that decision gets easier and easier.