Meta said that it would make its AI models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril.
The Llama models are “open source,” which means the technology can be freely copied and distributed by other developers, companies and governments.
Meta’s move carves out an exception to its “acceptable use policy,” which forbids the use of the company’s AI software for “military, warfare, nuclear industries,” among other purposes.
In a blog post Monday, Nick Clegg, Meta’s president of global affairs, said the company now backed “responsible and ethical uses” of the technology that supported the United States and “democratic values” in a global race for AI supremacy.
“Meta wants to play its part to support the safety, security and economic prosperity of America — and of its closest allies, too,” Clegg wrote. He added that “widespread adoption of American open source AI models serves both economic and security interests.”
A Meta spokesperson said the company would share its technology with members of the Five Eyes intelligence alliance: the United States, Britain, Canada, Australia and New Zealand. Bloomberg earlier reported that Meta’s technology would be shared with the Five Eyes countries. Meta, which owns Facebook, Instagram and WhatsApp, has been working to spread its AI software to as many third-party developers as possible, as rivals like OpenAI, Microsoft, Google and Anthropic vie to lead the AI race.
Meta, which had lagged some of those companies in AI, decided to open-source its code to catch up. As of August, the company’s software had been downloaded more than 350 million times.
Meta is likely to face scrutiny for its move. Military applications of Silicon Valley tech products have proved contentious in recent years, with employees at Microsoft, Google and Amazon vocally protesting some of the deals that their companies reached with military contractors and defense agencies.
In addition, Meta has come under scrutiny for its open-source approach to AI. While OpenAI and Google argue that the tech behind their AI software is too powerful and susceptible to misuse to release into the wild, Meta has said AI can be improved and made safer only by allowing millions of people to look at the code and examine it.
Meta’s executives have been concerned that the U.S. government and others may harshly regulate open-source AI, two people with knowledge of the company said.
Those fears were heightened last week after Reuters reported that research institutions with ties to the Chinese government had used Llama to build software applications for the People’s Liberation Army.
Meta executives took issue with the report and told Reuters that the Chinese government was not authorized to use Llama for military purposes.
In his blog post Monday, Clegg said the U.S. government could use the technology to track terrorist activities and improve cybersecurity across American institutions. He also repeatedly said that using Meta’s AI models would help the United States remain a technological step ahead of other nations.
“The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to AI globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies,” he said.