A dispute between AI firm Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Defense Department, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.
Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners.
The threat may be a negotiating tactic. But if carried out, it would have sweeping consequences, potentially barring not just Anthropic but also its customers from government work. “That could easily mean that the vast majority of companies that now use [Claude] in order to make themselves more effective would suddenly be ineligible for working for the government,” says Alex Bores, a former Palantir employee who is now running for Congress in New York’s 12th district. “It would be horribly hamstringing our government’s ability to get things done.” (Palantir did not respond to a request for comment.)
Anthropic and the Pentagon’s disagreement
Anthropic has, until now, maintained close ties with the military. Claude was the first frontier AI model deployed on classified Pentagon networks. Last summer, the Defense Department awarded Anthropic a $200 million contract, and the company’s technology was even used in the recent U.S. operation to capture Nicolas Maduro, the Wall Street Journal reported this week.
But the company’s commitment to certain AI safety principles has irked some people in President Donald Trump’s orbit. (Katie Miller, Stephen Miller’s wife, has publicly accused the company of liberal bias and criticized its commitment to democratic values.) Unlike rivals xAI and OpenAI, both of which also have Defense Department contracts, Anthropic is now locked in a fight with the Pentagon that is playing out in public.
“Anthropic is committed to using frontier AI in support of US national security. That’s why we were the first frontier AI company to put our models on classified networks and the first to provide custom models for national security customers,” a company spokesperson tells Fast Company. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, consistent with our Usage Policy. We’re having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.”
The Pentagon has taken a more confrontational tone. Agency officials are reviewing their relationship with Anthropic and have suggested that other contractors may also be required to stop working with the company. “The Department of War’s relationship with Anthropic is being reviewed,” Chief Pentagon spokesman Sean Parnell tells Fast Company. “Our nation requires that our partners be willing to help our warfighters win in any fight.” (Parnell did not respond to a request for clarification regarding specific concerns about autonomous weapons or surveillance.)
Palantir, the middleman
Palantir occupies a critical place in this ecosystem. A longtime government software provider, it has met a bevy of requirements allowing it to offer cloud services to support classified work. And, as is typical in the dizzying world of government technology contracting, Palantir also has key partnerships with Anthropic.
Two years ago, the companies partnered to bring Anthropic’s technology to the government, a move that made Claude available to defense and intelligence services through Amazon Web Services. Last April, Anthropic joined Palantir’s FedStart program, which expanded the availability of its technology to government customers through Google Cloud.
Government tech contracting is a wonky business, but companies that want to sell software to the government generally need to work with a certified cloud provider like Palantir, or obtain certification themselves. “If you’ve never operated in a classified environment before, you essentially need a vehicle,” explains Varoon Mathur, who worked on AI in the Biden administration. “Palantir is a defense contractor with deep operational integration. Anthropic is an AI model provider trying to access that ecosystem.”
Rising tensions over how the Defense Department might use Claude also raise questions about how much visibility companies like Palantir and Anthropic have into the government’s use of their tools. “Anthropic and OpenAI offer Zero Data Retention usage, where they don’t store the asks made of their AI,” Steven Adler, a former OpenAI employee and AI safety expert, tells Fast Company. “Naturally this makes it harder to enforce possible violations of their terms.”
A person familiar with the matter said Anthropic does have insight into how its technology is used, regardless of whether it’s in a classified environment, and that the company is confident its partners and users have been deploying the tech in line with its policies. In its reporting, the Wall Street Journal cited people familiar with the matter who said an Anthropic employee did reach out to Palantir to ask about Claude’s use in the Maduro operation, though Anthropic denied to that outlet that it had spoken with Palantir beyond technical discussions. The Anthropic spokesperson tells Fast Company that the company cannot comment on its technology’s use in specific military operations, but said it “work[s] closely with our partners to ensure compliance.”
More broadly, the standoff risks chilling relationships between Silicon Valley and Washington at a moment when the government is pushing to adopt AI more aggressively. “To state basically that it’s our way or the highway, and if you try to put any restrictions, we will not just not sign a contract, but go after your business, is a huge red flag for any company to even think about wanting to engage in government contracting,” says Bores.

