
Pentagon Signed AI Deals With Seven Companies. Anthropic Was Not Invited.

By Satish Chand Gupta

The United States Department of Defense announced classified AI infrastructure agreements with seven technology companies on May 1, 2026: SpaceX, OpenAI, Google, NVIDIA, Microsoft, Amazon Web Services, and Reflection. Anthropic, the maker of Claude, was not included. The exclusion stems from a two-month-old dispute in which the Pentagon blacklisted Anthropic after the company refused to grant the Defense Department unlimited use of Claude for all lawful purposes, citing concerns that the language could enable domestic mass surveillance or fully autonomous weapons. Anthropic sued the Trump administration. A federal judge in California blocked the government’s effort. The Pentagon then proceeded to sign agreements with everyone else. This is the sharpest example yet of how AI safety commitments are becoming commercially and geopolitically consequential, not just philosophical positions.

Key Highlights

  • The Pentagon announced classified AI agreements with SpaceX, OpenAI, Google, NVIDIA, Microsoft, AWS, and Reflection on May 1, 2026
  • Anthropic was excluded after the Defense Department blacklisted it in March 2026 for refusing to grant unlimited lawful use of Claude
  • The Pentagon labeled Anthropic a “supply chain risk,” a designation previously reserved for companies linked to foreign adversaries
  • Anthropic objected to language that could permit domestic mass surveillance or fully autonomous weapons deployment
  • Anthropic sued the Trump administration and a federal judge in California blocked the government’s blacklisting effort
  • Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles in April 2026 after the company unveiled its Mythos cybersecurity tool
  • President Trump told CNBC on May 1 that a DOD deal with Anthropic is “possible” and called the company “very smart” and capable of being “of great use”

How the Dispute Started

The dispute between Anthropic and the Pentagon traces to a contract negotiation in early March 2026 in which the Defense Department sought terms allowing it to use Claude for “all lawful purposes” without restriction. From a standard government contracting perspective, that is a routine authorization request. The Defense Department routinely seeks broad authorization for any tool deployed on classified networks to avoid returning to vendors for permission each time the use case evolves.

Anthropic’s objection was specific. The company argued that “all lawful purposes” as defined by the Defense Department is broad enough to include two use cases that Anthropic considers incompatible with its responsible AI development commitments: domestic mass surveillance programs that are technically lawful under current interpretations of executive surveillance authority, and fully autonomous weapons systems in which AI makes lethal targeting decisions without a human in the decision loop.

Anthropic proposed alternative language that would permit military use of Claude with explicit carve-outs for those two categories. The Defense Department declined the alternative language. Anthropic declined to accept the original terms. The Pentagon subsequently designated Anthropic a “supply chain risk,” a label that acts as a practical exclusion from classified government contracts.

The supply chain risk designation is significant in its own right. The label has historically been applied to companies with ties to foreign adversaries, most prominently Chinese telecommunications firms like Huawei and ZTE. Applying the same label to a US company because it insisted on AI safety guardrails is a novel use of the designation that reflects how the current administration has framed AI safety commitments as security obstructions rather than security enhancements.

What the Seven Agreements Cover

The seven companies that signed May 1 agreements will provide AI capabilities across three categories for classified Defense Department networks. The first category is intelligence analysis: AI tools that process and synthesize classified intelligence data from multiple sources at speeds that human analysts cannot match. The second category is cyber operations: AI tools for both offensive and defensive cyber activity, including the threat detection and response capabilities that Anthropic’s own Mythos product was designed to address. The third category is logistics and planning: AI-assisted operations planning tools that optimize resource allocation, mission sequencing, and supply chain management in complex military environments.

The specific capabilities each company provides are classified, but the public announcements from Google, Microsoft, and NVIDIA indicate that their contributions draw on their largest publicly available AI platforms. Google is contributing capabilities from the Gemini model family and its existing Defense Department relationship through Google Cloud. Microsoft is extending its Azure OpenAI Service to classified networks. NVIDIA is providing its AI infrastructure stack, primarily GPUs and inference optimization tools, rather than frontier model capabilities.

Reflection AI, the least publicly known of the seven companies, is a startup that received significant Defense Department funding in 2025 for developing AI tools specifically designed for military analytical applications. Its inclusion alongside hyperscalers like Google, Microsoft, and AWS reflects the Defense Department’s interest in purpose-built military AI tools rather than exclusively relying on civilian foundation model adaptations.

Why OpenAI’s Inclusion Is Significant

OpenAI’s participation deserves specific attention because the company spent much of 2023 and 2024 publicly navigating the tension between its nonprofit origins and its commercial expansion. OpenAI’s original charter included restrictions on military weapons use. The company updated those use policies in early 2024 to remove the explicit military prohibition, a change that drew criticism from safety researchers but cleared the path for the kind of Defense Department agreement announced on May 1.

The contrast with Anthropic is now structurally defined. OpenAI modified its policies to accommodate government use. Anthropic maintained its restrictions and faced exclusion. Both companies began as AI safety-focused organizations. Their different responses to the same government request illustrate that “AI safety commitment” is not a monolithic position but a set of choices about which use cases to permit, which to prohibit, and how to resolve conflicts with commercial and government partners when those choices become contested.

OpenAI’s CEO Sam Altman has consistently argued that the most important contribution any AI company can make to AI safety is ensuring that democratic governments have access to the most capable AI systems, so that authoritarian governments do not gain an AI capability advantage. The logic is that safety outcomes in AI depend on who controls the most powerful systems, and ensuring US government access is therefore a safety measure at the geopolitical level even if it involves accepting some domestic use case risks at the application level.

Anthropic’s Lawsuit and the Open Door

Anthropic’s federal lawsuit against the Trump administration challenged the supply chain risk designation on the grounds that it was applied without due process and constituted government retaliation for a company exercising its contractual right to decline specific terms. The California federal judge’s April ruling blocking the designation gave Anthropic partial relief but did not resolve the underlying commercial exclusion from the Defense Department agreements announced May 1.

The Mythos development adds a layer of irony. Mythos is Anthropic’s AI-powered cybersecurity tool designed specifically to identify and respond to state-sponsored cyber threats, including threats from North Korean actors like the Lazarus Group. North Korean hackers were responsible for 76 percent of 2026 crypto losses through April, making Mythos directly relevant to the kind of threat the Defense Department is trying to address with its AI infrastructure agreements. The company that built a tool for detecting North Korean cyber operations is the one excluded from the Defense Department’s AI agreements.

Dario Amodei’s April White House meeting with Chief of Staff Susie Wiles, following the Mythos unveiling, suggests a diplomatic channel remains open. President Trump’s May 1 CNBC comment that a deal with Anthropic is “possible” and that the company could be “of great use” indicates that the exclusion is not permanently fixed in the current administration’s view. The path to inclusion likely requires Anthropic to agree to some version of the terms that the seven included companies accepted, or to negotiate specific carve-outs that the Defense Department finds acceptable.

What This Means for AI Safety as Commercial Strategy

The dispute between the Pentagon and Anthropic reveals something important about how AI safety commitments function in commercial markets. Safety commitments that apply only when they are costless are not commitments in any meaningful sense. Anthropic maintained its restrictions at significant commercial cost: exclusion from classified government contracts is a material limitation on the total addressable market for an AI company whose competitors now have that access.

The question is whether the cost is recoverable. Anthropic’s enterprise business outside government is substantial and growing. The company’s customer base includes large enterprises and developers for whom the Anthropic safety framework is a feature rather than a constraint. In those commercial relationships, the Pentagon dispute may actually strengthen Anthropic’s positioning as the AI company that chose consistency over a single government contract.

The calculus changes if the Defense Department AI market becomes large enough that exclusion from it creates a self-reinforcing capability gap. If the seven included companies receive classified data, classified mission context, and classified feedback on AI performance that accelerates their model improvement faster than what Anthropic can achieve through civilian market feedback alone, the commercial cost of the exclusion compounds over time. That dynamic is not immediately visible but it is the longer-term risk that Anthropic’s board must be evaluating alongside the near-term reputational benefit of maintaining its policy position.

The TCB View

The Pentagon’s exclusion of Anthropic is the first major commercial test of whether an AI company can maintain explicit use restrictions against a major government client and remain competitively viable. The answer so far is “yes, at a cost.” Anthropic gave up Defense Department classified contracts to maintain its safety framework. That is a real cost, not a symbolic one.

The more interesting question is what the seven included companies implicitly agreed to. The Defense Department contracts are classified. The specific use cases permitted, the guardrails required, and the oversight mechanisms in place are not public information. OpenAI, Google, Microsoft, NVIDIA, Amazon, SpaceX, and Reflection may have accepted terms with their own restrictions that are simply less visible than Anthropic’s public dispute. Or they may have accepted genuinely broader use authorizations than Anthropic was willing to grant. The public framing of this story as “Anthropic was too cautious” misses that possibility.

Responsible disclosure of what the seven accepted companies actually agreed to would be more instructive than another debate about whether AI safety is good or bad for business. Until that information is public, the Pentagon-Anthropic dispute is a useful case study in the commercial stakes of AI safety positions but an incomplete one.

Satish Chand Gupta is the founder and editor in chief of The Central Bulletin. He covers Bitcoin, macro markets, and the intersection of digital assets with global finance. With years of experience tracking crypto markets and Web3 infrastructure, Satish focuses on original analysis and data-driven reporting.
