Google has signed a contract allowing the United States Department of Defense to use the company’s AI models for classified tasks, according to a report from The Information, citing a person familiar with the matter.

The agreement reportedly gives the Pentagon access to Google’s AI systems for any lawful government purpose.

The contract signing reportedly took place on the same day that more than 600 Google employees, including many from Google DeepMind, sent an open letter to CEO Sundar Pichai.

According to The Washington Post, employees urged the company to reject any classified collaboration with the Pentagon.

The letter stated that workers wanted AI to benefit humanity rather than be used in inhumane or harmful ways.

Employees also argued that classified contracts prevent even Google representatives from knowing how the technology is being used. They said rejecting classified workloads was the only way to avoid association with potential harms.

A spokesperson for Google Public Sector said the new deal extends an existing agreement signed in November.

The spokesperson added that Google remains committed to the position that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.

The report said the contract includes language stating the AI system is not intended for domestic mass surveillance or autonomous weapons without appropriate human oversight.

However, it also reportedly states that the agreement does not grant any right to control or veto lawful government operational decisions.

Charlie Bullock said wording such as “is not intended for” and “should not be used for” may carry no legal force, indicating only a preference rather than creating grounds for a breach of contract.

Amos Toh reportedly said that “appropriate human oversight” does not necessarily require a person between target identification and a fire order.

The report added that the Pentagon has not ruled out fully autonomous weapons systems.

The Information reported that Google’s deal may provide the Pentagon with more flexibility than comparable agreements.

OpenAI reportedly kept full control over its Safety Stack in a February agreement, according to the company’s blog. Google, by contrast, is said to have agreed to help the government adjust safety filters on request. Alongside Google and OpenAI, xAI is also reported to hold a classified Pentagon AI contract.

Earlier this year, Anthropic was excluded from a Pentagon agreement after seeking contractual guarantees against mass surveillance and autonomous weapons. Anthropic is currently suing over that decision.

In 2018, after employee protests, Google chose not to renew its Project Maven contract with the Pentagon and pledged not to use AI for weapons or surveillance.

The report said those internal restrictions were removed last year.

Project Maven is now sold by Palantir Technologies and has reportedly been used for target selection in the Iran conflict with support from Anthropic’s Claude model.
