Google has recently removed its long-standing policy against developing artificial intelligence (AI) weapons. The company, long associated with the motto “Don’t be evil,” had previously pledged not to build weapons or technologies that could cause harm or be used for surveillance.
However, last week, the phrase prohibiting the development of AI weaponry was removed from Google’s AI principles page. Alphabet, Google’s parent company, did not immediately respond to a request for comment on the matter.
In a blog post explaining the new policy, Google leadership stated, “There is a global competition underway to lead in the AI sector, within an increasingly complex geopolitical landscape. We believe democracies must lead in the development of AI.”
Some observers see this move as a clear indication that Google now openly supports the use of its AI technologies for military purposes.
AI is increasingly being used on the battlefield, from Ukraine to Gaza. Silicon Valley technology companies that once refused to let their tools be used for destructive purposes have since moved away from those anti-military policies. OpenAI, Meta, and Anthropic all now have AI projects with the U.S. military or with defense contractors.
On the battlefield, AI is used mainly to select targets for drone strikes and, in some cases, to control fully autonomous weapons.
Renowned AI researcher Stuart Russell, who opposes autonomous weapons, spoke to the AFP news agency at a high-level AI meeting in Paris. He remarked, “Increasingly, the progress of the war in Ukraine is dictated by the use of remotely operated drones or fully autonomous drones. This has been a fundamental shift. So I think many military strategists believe that without this kind of capability, you simply cannot fight a modern war.”
Google’s connections to the military are not new. In 2017, the company helped build Project Maven, a U.S. Department of Defense program that used AI to analyze drone surveillance footage and assist targeting. After protests from thousands of Google employees, the company declined to renew the contract; the AI principles that have now been revised were published the following year, partly in response to that backlash.
More recently, critics have argued that Project Nimbus, a $1.2 billion cloud-computing contract that Google and Amazon hold with the Israeli government, has been used for surveillance and target selection in the Gaza conflict.
For Russell and other observers, the change in Google’s policy is concerning. “We must assume that this policy change is not a coincidence, coming with a new administration that has removed all AI regulations set by the Biden administration and is now focusing heavily on using AI for military capabilities,” said Russell.
As AI continues to advance rapidly, governments and businesses alike are increasingly anxious about falling behind in the race.