On February 27, 2026, President Trump directed federal agencies to stop using technology developed by the U.S. artificial intelligence (AI) company Anthropic, and Secretary of Defense Pete Hegseth announced he was directing the Department of Defense (DOD) to designate Anthropic a "Supply-Chain Risk to National Security." (DOD and the Secretary are now using "Department of War" and "Secretary of War," respectively, as "secondary" designations per Executive Order 14347.) These actions followed a reported months-long dispute between DOD and Anthropic over certain uses of Anthropic's AI technologies. The national security risk designation and the use prohibitions may have implications for AI innovation and competition at Anthropic and other domestic AI companies. This In Focus provides information on the AI models at issue, actions taken by the U.S. government (USG), the potential implications of those actions, and considerations and questions for Congress.
Frontier AI models are the most advanced foundation models—general-purpose AI models pretrained on large datasets that can be used for many applications. Anthropic's Claude model, one such frontier model, reportedly has been deployed across DOD and national security agencies for applications such as intelligence analysis, operational planning, and cyber operations. In June 2024, Anthropic stated it was the first AI company to deploy frontier models in classified USG networks. In July 2025, four U.S. AI companies entered into contracts with DOD to "accelerate [DOD] adoption of advanced AI capabilities to address critical national security challenges"; DOD awarded up to $200 million each to Anthropic, Google, OpenAI, and xAI. As of May 1, 2026, the Pentagon reportedly had agreed to the use of xAI's Grok model, Google's Gemini model, and AI models from six other technology companies in classified systems.
While asserting a belief in "the existential importance of using AI to defend the United States and other democracies," Anthropic claimed that, during contract negotiations with DOD, it requested two use exceptions for its Claude model. First, Anthropic stated that it "do[es] not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians." Second, Anthropic asserted that "mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Anthropic's requested use exceptions highlight a broader debate over frontier AI model capabilities and limitations. Although frontier AI models demonstrate powerful capabilities, as measured by publicly available benchmarks and assessments, some studies have described their potential limitations. In a December 2024 Frontier AI Trends Report, the UK's AI Security Institute reported that, while model safeguards are improving, its evaluations of frontier AI systems across domains critical to national security and public safety found "vulnerabilities in every system [it] tested." According to Stanford University's 2025 AI Index Report, complex reasoning tasks remain a challenge for AI models, though their performance on "demanding benchmarks" continues to improve.
On February 27, 2026, President Trump directed federal agencies to "IMMEDIATELY CEASE all use of Anthropic's technology" and outlined a six-month "phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels." On March 5, 2026, Anthropic CEO Dario Amodei confirmed receipt of a letter from DOD designating Anthropic a supply chain risk to America's national security; the designation reportedly took effect immediately.
In response, federal agencies took actions to stop using Anthropic's Claude models. For example, the General Services Administration (GSA) announced that it was removing Anthropic from USAi.gov and from its multiple award schedule (i.e., long-term government-wide contracts with commercial firms). Other agencies, such as the State Department and the Department of Health and Human Services, reportedly ceased use of Claude. The Office of Personnel Management removed Claude from its list of AI use cases and added xAI's Grok and OpenAI's Codex in the version updated March 4, 2026; Claude had appeared on the prior version, dated January 30, 2026.
As a private company, Anthropic provides limited public financial information. Some information suggests that the loss of business from federal agencies and government contractors might not have a significant financial effect on the company. The up to $200 million awarded by DOD and an $18,960 award from the Department of State in 2026 (the only government contract with Anthropic on usaspending.gov) are relatively small compared to Anthropic's run-rate revenue (i.e., an annual revenue estimate extrapolated from its current financial performance). On February 12, 2026, Anthropic stated its run-rate revenue had reached $14 billion, and on April 6, 2026, Anthropic announced it had surpassed $30 billion. In January 2026, Anthropic CEO Dario Amodei reportedly stated that about 80% of Anthropic's business is with enterprise customers, which he viewed as a relatively predictable, stable source of income.
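The sketch below illustrates the arithmetic behind these comparisons: how a run rate annualizes a recent period's revenue, and how the federal awards cited above compare to the reported $14 billion figure. It is a minimal illustration that assumes a monthly annualization basis, which Anthropic has not disclosed; the dollar figures come from the text, and the derived percentage is a rough estimate, not a disclosed company metric.

```python
# Minimal sketch: run-rate revenue annualizes a recent period's revenue.
# Assumes a monthly basis, which Anthropic has not confirmed; dollar
# figures are those cited in the text above.

def run_rate(period_revenue: float, periods_per_year: int = 12) -> float:
    """Annualize one period's revenue (e.g., one month) into a run rate."""
    return period_revenue * periods_per_year

REPORTED_RUN_RATE = 14_000_000_000   # reported February 12, 2026
DOD_AWARD = 200_000_000              # "up to $200 million," July 2025
STATE_AWARD = 18_960                 # Department of State award, 2026

# Working backward, a $14 billion run rate implies roughly $1.17 billion
# in monthly revenue under a monthly annualization basis.
implied_monthly = REPORTED_RUN_RATE / 12
print(f"Implied monthly revenue: ${implied_monthly:,.0f}")

# The cited federal awards total about 1.4% of the reported run rate,
# the basis for the "relatively small" characterization above.
share = (DOD_AWARD + STATE_AWARD) / REPORTED_RUN_RATE
print(f"Cited federal awards as share of run rate: {share:.1%}")
```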
Other information suggests that the loss of business from federal agencies and government contractors might have a significant financial effect on the company. Anthropic has stated that the USG's actions are "harming Anthropic irreparably." Additionally, it is unclear what percentage of Anthropic's enterprise customers are federal agencies and USG contractors. If Anthropic loses a significant share of its revenue or of its funding from investors, it might have difficulty continuing to develop its AI models, potentially affecting its ability to compete and innovate.
The effect of the USG's actions on Anthropic may depend in part on the response of its other clients and on the scope of the prohibition, which is under dispute. On February 27, 2026, Secretary Hegseth stated, "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Anthropic responded that it does not believe this action is legal, asserting that "a supply chain risk designation under 10 U.S.C. §3252 can only extend to the use of Claude as part of [DOD] contracts—it cannot affect how contractors use Claude to serve other customers." Anthropic filed lawsuits in two federal courts on March 9, 2026. The first claims that the government's actions exceed its legal authority and violate the Administrative Procedure Act as well as Anthropic's due process and First Amendment rights. The second seeks review of the designation of Anthropic as a supply chain risk under a separate statute. In the first lawsuit, Microsoft, a company that "has established a close business relationship with Anthropic," filed an amicus brief urging the court to temporarily block the designation's implementation. On March 26, 2026, a federal judge issued a preliminary injunction that temporarily blocked the designation's implementation and halted the President's directive ordering federal agencies to stop using Claude; GSA subsequently restored Anthropic to USAi.gov and its multiple award schedule. In the second lawsuit, the U.S. Court of Appeals for the D.C. Circuit denied Anthropic's request to stop DOD from labeling it a security risk.
One trade group reportedly raised concerns that the designation is being used in a procurement dispute when it should instead be reserved for foreign adversaries. Initial reporting indicated that some defense contractors have stopped using Anthropic's models, while others are waiting to see how the conflict is resolved. On April 17, 2026, Anthropic executives met with White House officials to discuss "opportunities for collaboration" and the "balance between advancing innovation and ensuring safety."
The USG's actions against Anthropic might have broader effects on AI markets and competition. For example, if the USG's actions reduced Anthropic's revenues to the point that the company could no longer operate, the number of companies offering frontier AI models would shrink, and other companies could no longer create AI products using Anthropic's models. However, the USG's actions also appear to have boosted public adoption of Claude, which became the most popular app on Apple's chart of top free apps in the United States on February 28, 2026. Further, OpenAI's decision to strike a deal with the Pentagon reportedly resulted in a "massive wave of public backlash" as users uninstalled ChatGPT.
The USG's actions against Anthropic have also raised concerns about potential effects on innovation and U.S. competitiveness. Some trade groups reportedly raised concerns that designating an American technology company as a national security risk would "have a chilling effect on U.S. innovation." A letter reportedly sent by former defense officials, academics, and tech policy leaders to the House and Senate Armed Services Committees asserted that "blacklisting an American company weakens U.S. competitiveness" and warned, "this is not a marketplace any serious entrepreneur or investor can build around." Microsoft's amicus brief in support of Anthropic asserts: "This is not the time to put at risk the very AI ecosystem that the Administration has helped to champion."
Federal policies and actions may influence competition among AI companies and may encourage or stifle innovation. Congress may wish to conduct oversight of the extent of DOD's authority to declare Anthropic a supply chain risk to national security and of how the designation may affect private-sector innovation. Congress may also consider legislation to clarify privacy and security considerations around the use of AI technologies for sensitive applications, such as public surveillance. Alternatively, Congress may wait for federal courts to determine the legal merits of the Trump Administration's actions against Anthropic before considering a legislative response. In weighing these options, among others, Congress might consider a range of questions, including: