International Law
The DoD-Anthropic Clash
05-Mar-2026
Source: The Hindu
Introduction
The U.S. Department of Defense (DoD), restyled the Department of War under the second Donald Trump administration, has entered into a public dispute with the AI firm Anthropic, maker of the Claude AI product. The DoD threatened to designate Anthropic a "supply chain risk," a label that would dissuade firms that work with the U.S. government from using Anthropic's products. OpenAI, the maker of ChatGPT, subsequently entered the picture, securing an agreement that it said was not radically different from what Anthropic had wanted.
What is Claude?
- Claude is an AI chatbot that helps organisations and individual users create and modify code.
- Its Claude Code product has been received extraordinarily well on account of its coding capabilities.
- Claude Code is among the few AI products that combines extremely powerful large language models (LLMs) with on-device creation and editing of tools, once it has been given access to a range of software libraries.
- The product is compelling to the defence establishment because it can rapidly iterate on the software behind high-tech weapons and defence systems. Recruiting programmers for such systems is slow, as critical weapons systems sit behind several layers of secrecy that necessitate time-consuming security clearances.
- Claude Code has therefore been a compelling proposition for the DoD: it allows for rapid iteration on the programs that drive its technology. While it does not execute programming tasks perfectly every time, it performs well enough that development timelines have shrunk in organisations that have deployed it widely, especially among experienced software developers. A sketch of what such an iteration loop looks like follows this list.
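To make "rapid iteration" concrete, here is a minimal sketch of driving Claude through Anthropic's Messages API to generate a piece of code and then revise it in a follow-up turn. This is an illustration, not the DoD's actual setup: the model identifier and the prompts are assumptions chosen for the example.

# A minimal sketch of iterating on code with Claude via Anthropic's
# Messages API (Python SDK). Model id and prompts are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = [
    {"role": "user",
     "content": "Write a Python function that parses an ISO-8601 timestamp."}
]
first = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id, for illustration only
    max_tokens=1024,
    messages=history,
)
draft = first.content[0].text  # the model's first draft of the code

# Feed the draft back with a follow-up request: the iteration loop
# described above, compressed to a single revision step.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "Add input validation and raise ValueError on bad input."},
]
revised = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=history,
)
print(revised.content[0].text)

In practice a developer repeats this request-and-revise loop many times in a session, which is where the reported compression of development timelines comes from.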
Why Did Anthropic Clash with the DoD?
- Anthropic was onboarded by the DoD under a $200 million contract last June, which allowed the U.S. government to use Claude's services from dedicated infrastructure hosted by Amazon Web Services.
- Issues between the firm and the DoD started on January 9, when Defence Secretary Pete Hegseth published a memorandum entitled "Accelerating America's Military AI Dominance," calling for the elimination of "blockers to data sharing, Authorizations to Operate (ATOs), test and evaluation and certification, contracting, hiring and talent management, and other policies that inhibit rapid experimentation and fielding."
- Anthropic has a much-publicised "constitution" for Claude that discourages the model from supporting widespread surveillance and enabling fully autonomous weaponry.
- Dario Amodei, the firm's co-founder, insisted on strong language in the agreement between the DoD and Anthropic to bake in protections against the domestic surveillance of U.S. residents and the enabling of fully autonomous weaponry.
- The firm was given until last Friday to relent and grant the DoD completely unrestricted access to its models. It refused, saying in a blog post that it would help the DoD transition to a new provider.
- The DoD then classified Anthropic as a supply chain risk — a designation usually applied to firms whose products can provide foreign adversaries a backdoor into critical systems.
- While the designation only bars DoD suppliers and partners from using Claude on systems dedicated to the DoD, there are concerns that executives may err on the side of caution and sever ties with Claude entirely.
What is OpenAI's Agreement?
- OpenAI negotiated an agreement with the DoD that it claims has the same protections against surveillance and fully autonomous weaponry that Anthropic sought.
- It is not entirely clear why OpenAI was able to land this deal while Anthropic was shut out.
- The agreement made public by OpenAI states: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols."
- The agreement also states that "The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decision maker under the same authorities."
- Anthropic is reported to have sought stronger legal language in the agreement that would prohibit the use cases described above even if those uses were later legalised.
- OpenAI, in a statement, noted: "We think our red lines are more enforceable here because deployment is limited to cloud-only (not at the edge), keeps our safety stack working in the way we think is best, and keeps cleared OpenAI personnel in the loop."
- OpenAI further stated: "We don't know why Anthropic could not reach this deal, and we hope that they and many more labs will consider it."
Conclusion
The DoD-Anthropic clash highlights a growing tension between AI firms' safety commitments and the defence establishment's desire for unrestricted access to powerful AI tools. Anthropic's refusal to grant completely unrestricted access to Claude, rooted in its constitutional commitments against autonomous weaponry and mass surveillance, led to its supply chain risk designation. The episode raises fundamental questions about how AI companies can engage with government defence contracts while upholding their stated ethical red lines — and whether OpenAI's more accommodating agreement truly offers the same protections Anthropic sought.