Google has joined a growing list of Silicon Valley technology firms supplying artificial intelligence models to the U.S. military. A new confidential agreement allows the Pentagon to use the company's technologies for various lawful government purposes.
According to sources, the contract includes provisions for employing AI in mission planning and potentially in target identification. This partnership places Google alongside OpenAI and xAI, both of which are already collaborating with the U.S. Department of Defense.
Notably, Google has committed to adjusting the security settings and filters of its AI models at the government’s request.
While the agreement specifies that the AI is not intended for autonomous weaponry or mass surveillance without human oversight, Google does not have the authority to veto military operational decisions.
Internal Protests at Google
Employees at Google have expressed significant concerns that the company's technology could be used for "inhumane or extremely harmful purposes." In an open letter to CEO Sundar Pichai, they call on the company to withdraw from secret military projects.
“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They are attempting to divide each company, fearing that the other will concede. This strategy only works when none of us know where the others stand,” the letter states.
It is noteworthy that Google previously opted out of similar contracts, such as Project Maven in 2018, following widespread employee protests. However, in 2025, Google’s parent company, Alphabet, lifted its internal ban on using AI for military applications and surveillance, citing national security needs.
Militarization of Leading AI Technologies?
The Pentagon is actively engaging leading AI laboratories to develop confidential digital products. Reports indicate that the U.S. Department of Defense has signed agreements worth up to $200 million each with Anthropic, OpenAI, and Google.
In contrast to Google, the startup Anthropic refused at the beginning of 2026 to lower the protective barriers of its AI model, Claude. As a result, the Pentagon classified the company as "risky to the supply chain."
Google, on the other hand, has chosen a path of collaboration, viewing the provision of API access to its commercial models as a responsible approach to supporting national security.
