Will Anthropic’s battle with the government ripple through IT?
Experts think the Anthropic incident may change how companies regard their AI vendors.
Anthropic is locked in a fight with the Department of Defense about how its AI products can be used by the military. Could that highly publicized battle impact other industries’ use of AI?
After Anthropic told the Department of Defense (referred to by the Trump administration as the Department of War) that it didn’t want its AI used for “mass domestic surveillance” or to power “fully autonomous weapons,” Defense Secretary Pete Hegseth designated the company as a “supply chain risk,” which would prevent it from securing US government contracts.
An amicus brief filed in support of Anthropic and signed by former federal judges argued that the Pentagon “misinterpreted the statute and violated the necessary procedures” when making this designation.
This week, Anthropic sought a preliminary injunction in a San Francisco federal court to allow it to continue to do business with government contractors and federal agencies after Hegseth stated on X in February that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Anthropic CEO Dario Amodei wrote in a March 5 statement that the supply-chain risk designation “plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
By subscribing, you accept our Terms & Privacy Policy.
Alla Valente, a principal analyst at Forrester on its security and risk team, told IT Brew that key players in the IT industry are watching the Anthropic situation “very carefully” to see whether the federal government puts more guardrails on how AI models are trained and used.
“If you put that much pressure on all the big AI providers, then what happens?” Valente said. “You end up with…a very homogenous state where all the models function exactly the same, because they could only have those similar inputs.”
So, what should pros do? Valente said that IT professionals should use current events as an opportunity to think about a multi-model strategy. If a large-scale AI provider does something counter to a company’s values or bylaws, the company may need to swap out its models and tools for an alternative.
“Whenever you go all in with one, you’re just exposing yourself to a level of concentration risk that, at some point, you’re going to need to start swapping things out,” Valente said. “The more prepared you are, the more you think about the risk implications, the downstream dependencies, the model accuracy issues, the security and privacy risks, and all those things—you need to think about them ahead of time, and not after the proverbial shoe drops.”
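The multi-model strategy Valente describes often comes down to a thin abstraction layer between application code and any single vendor. The sketch below is purely illustrative, assuming a generic text-completion interface; the class and method names are hypothetical and do not come from any real vendor SDK:

```python
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """Minimal provider-agnostic interface (hypothetical)."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class ModelRouter:
    """Tries providers in order of preference, falling back on failure.

    Swapping or re-ranking vendors means editing this list, not the
    application code that calls complete().
    """
    providers: list[tuple[str, ChatModel]]

    def complete(self, prompt: str) -> str:
        errors: list[tuple[str, Exception]] = []
        for name, model in self.providers:
            try:
                return model.complete(prompt)
            except Exception as exc:  # outage, contract change, policy shift
                errors.append((name, exc))
        raise RuntimeError(f"all providers failed: {errors}")
```

Keeping vendor-specific adapters behind one interface like this is what makes the "swapping things out" Valente mentions a configuration change rather than a rewrite.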
About the author
Caroline Nihill
Caroline Nihill is a reporter for IT Brew who primarily covers cybersecurity and the way that IT teams operate within market trends and challenges.