Google breaks promise: AI for weapons and surveillance can be done after all

In 2018, Google introduced its AI principles. At their core was a promise not to develop AI that could cause harm. That promise has now disappeared from the website.

Making promises is easy, keeping them a little less so. In 2018, Google published clear guidelines for its AI development. The company stated that all innovation would rest on pillars such as built-in privacy, accountability, security and added social value. Google also spelled out in black and white which applications it would not pursue:

  • Technologies that can cause harm
  • Weapons or other technology whose purpose is to injure people
  • Technology for surveillance that exceeds international standards
  • Applications that violate the principles of human rights and international law

Screenshots of the principles Google jettisoned.

“As we gain more experience in this domain, this list may evolve,” Google added at the time. That was no lie: the entire section listing research Google would not conduct is gone. The new guidelines allow AI to be developed for all of the purposes above. The original page can still be consulted in archived form. The Washington Post was the first to notice that Google had removed the guidelines.

No real principles

The move shows how worthless self-imposed corporate principles are. Google’s promise not to do something remained in place only until Google decided to do it anyway. The list of off-limits applications thus turns out to have been little more than a list of research Google simply had not started yet.

It is telling that Google no longer dares to say it does not want to develop AI for weapons. The company does still mention that it wants to build its solutions in line with widely accepted principles of international law and human rights. Since few of those principles are widely accepted or followed internationally these days, and Google itself is headquartered in a country whose president has just called for ethnic cleansing, that opens the door to plenty of less humane applications.

Moral flexibility in the name of national security

In a blog post, Google notes that the technology is evolving rapidly and that it is necessary for companies in democratic nations to develop AI that supports national security. In any case, what Google says carries little weight, since such statements can be taken offline at any moment whenever a different view seems more convenient. There is nothing wrong with evolving insight, but flexible principles are rarely a virtue.

Google is not alone. OpenAI has already partnered with the Pentagon, and Anthropic goes even further by working with Palantir, whose technology supported ICE raids and underpinned the policy of separating children from their parents and imprisoning them in cages.

In practice, all major AI companies are now working with organizations that conduct surveillance and build weapons. In Europe, legislation bans AI for such far-reaching practices, but the U.S. has no such protections.