
Trump Moves to Ban Anthropic, Yet U.S. Military Deploys Claude AI in Iran Strikes


In a dramatic turn of events that underscores the growing entanglement of artificial intelligence and modern warfare, the United States government reportedly used AI tools from Anthropic during a major airstrike on Iran—just hours after President Donald Trump announced a phase-out of the company’s technology from federal agencies.

The revelation, first reported by The Wall Street Journal, highlights a striking contradiction between political rhetoric and operational military dependence on advanced AI systems. At the center of the controversy is Anthropic’s flagship AI model, Claude, which has been integrated into classified U.S. defense networks since mid-2024.

AI in the Iran Operation

According to reports, U.S. Central Command and other military units deployed Claude AI during coordinated strikes on Iran as part of an operation referred to as “Operation Epic Fury.” The offensive, conducted alongside Israel, targeted Iran’s nuclear and ballistic missile infrastructure.

Claude’s role was not peripheral. The AI system reportedly assisted in intelligence assessments, target identification, and battlefield scenario simulations. By processing vast quantities of real-time data, the AI helped military planners evaluate potential strike outcomes, analyze surveillance inputs, and refine operational strategies.

Such applications reflect a broader trend in which AI systems are becoming indispensable tools in national defense. Military commanders increasingly rely on machine learning models to synthesize intelligence faster than human analysts alone can manage. In high-stakes situations where seconds matter, AI-assisted analysis can significantly influence mission outcomes.

Political Fallout and Public Criticism

The use of Anthropic’s technology came shortly after President Trump publicly criticized the company on Truth Social, labeling its leadership as “leftwing nut jobs” and “woke.” He directed federal agencies to “immediately cease” using Anthropic products, arguing that the company’s policies were putting American lives and national security at risk.

Despite the strong language, the administration simultaneously announced a six-month phase-out period for agencies such as the Department of Defense. The delayed transition suggests that Anthropic’s AI capabilities are deeply embedded within government systems and cannot be easily replaced.

The contradiction between the president’s rhetoric and the military’s continued operational use of Claude highlights the complex realities of national security infrastructure. Cutting off a key AI provider overnight could disrupt ongoing missions, intelligence workflows, and classified defense programs.

Previous Military Applications

This was not the first time Claude AI had been deployed in high-profile operations. Earlier reports indicated that the Pentagon used Anthropic’s models in operations concerning Venezuelan President Nicolás Maduro. While details remain classified, the AI reportedly supported intelligence analysis and strategic planning functions.

Such cases illustrate how frontier AI companies have become integral to modern defense operations. Anthropic was among the first advanced AI firms to deploy models inside secure U.S. government networks, positioning itself as a key partner in national security initiatives.

The AI Safety Dispute

The tension between Anthropic and U.S. defense officials did not emerge overnight. For months, the company and the Pentagon had been negotiating the boundaries of acceptable military use.

Anthropic has maintained that it permits national defense applications of its AI models—with two critical exceptions: mass domestic surveillance of American citizens and the development of fully autonomous weapons. The company has consistently stated that it opposes AI systems capable of making lethal decisions without human oversight.

This stance has reportedly created friction with defense officials seeking broader operational flexibility. The dispute intensified when the U.S. government considered designating Anthropic as a “supply chain risk”—a label historically reserved for foreign adversaries rather than American firms.

Anthropic strongly rejected the classification, calling it unprecedented and legally unsound. In a public statement, the company emphasized its commitment to supporting American warfighters while warning that such a designation could set a dangerous precedent for innovative U.S. companies negotiating with the government.

Broader Implications for Warfare and Technology

The episode underscores a defining feature of 21st-century conflict: artificial intelligence is no longer experimental—it is operational. From intelligence gathering to predictive modeling, AI systems are embedded in decision-making processes at the highest levels of military command.

However, the situation also raises critical ethical and governance questions. Should private AI companies have the authority to restrict how their technology is used in warfare? Can governments rely on commercially developed AI while simultaneously criticizing or penalizing the firms that build it?

Furthermore, the political framing of AI companies as ideological actors complicates procurement decisions. If national security tools become entangled in partisan disputes, the stability of defense supply chains could be affected.

A New Era of Strategic Dependence

Ultimately, the reported use of Claude AI during the Iran strikes reveals a fundamental reality: advanced AI capabilities have become strategically indispensable. Even amid political tensions, operational demands often prevail.

The six-month phase-out period suggests that replacing Anthropic’s technology will require time, careful planning, and potentially alternative partnerships with other AI providers. Until then, the U.S. military’s reliance on cutting-edge AI tools remains evident.

As geopolitical tensions escalate and technological innovation accelerates, the relationship between governments and AI companies will likely grow even more complex. The clash between President Trump’s public condemnation and the military’s continued deployment of Claude AI serves as a powerful illustration of this evolving dynamic.

In modern warfare, algorithms are as critical as aircraft—and the politics surrounding them may be just as consequential.