US Proposes Strict AI Contract Rules Amid Clash with Anthropic
The United States government is preparing a new set of strict guidelines for artificial intelligence companies seeking federal contracts, according to reports. The proposed rules come at a time of growing tensions between the U.S. government—particularly the Pentagon—and major AI firms over how their technology should be used.
The draft guidelines, developed by the U.S. General Services Administration (GSA), would require AI companies to allow their models to be used for any lawful purpose by the U.S. government if they want to secure federal contracts. The proposal could significantly reshape how AI companies collaborate with government agencies and may increase pressure on firms that want to impose limits on how their technology is used.
Background: The Dispute with Anthropic
The proposed rules follow a major dispute between the United States Department of Defense and AI company Anthropic. Recently, the Pentagon labeled Anthropic a “supply-chain risk” and barred government contractors from using the company’s AI systems in projects related to the U.S. military.
This designation came after months of disagreement between the government and the company. Anthropic had insisted on implementing strict safeguards that would restrict how its AI systems could be used, especially in sensitive or potentially harmful scenarios. However, the Defense Department reportedly argued that those restrictions went too far and could limit the military’s operational flexibility.
The move marked a rare public clash between the U.S. government and a leading AI developer, highlighting the growing importance of artificial intelligence in national security.
Key Provisions in the Draft Guidelines
According to reports, the draft rules contain several important requirements for AI firms that want to work with the U.S. government.
First, companies would be required to grant the government an irrevocable license to use their AI systems for all lawful purposes. This means that once a company signs a government contract, it cannot later impose additional restrictions on how the technology is used within the limits of the law.
Second, the rules aim to ensure that AI systems used by the government remain politically neutral. Contractors would be prohibited from intentionally encoding partisan or ideological viewpoints into the outputs generated by their AI models. The requirement reflects growing concerns that AI systems could shape public discourse or policy decisions through biased responses.
Third, companies would have to disclose whether their AI models have been modified to comply with foreign regulations or commercial frameworks. This requirement is designed to help U.S. officials understand how external legal or regulatory systems might influence the behavior of AI models used by federal agencies.
Part of a Broader AI Procurement Strategy
The guidelines are part of a wider effort by the U.S. government to strengthen the way it purchases and deploys AI technology. The GSA oversees many federal procurement processes, and the new framework is expected to apply primarily to civilian government contracts.
However, similar rules are reportedly being considered for military use as well. If implemented, the policy could create a standardized approach across multiple government agencies, ensuring that AI tools operate under consistent guidelines.
Officials believe that clearer procurement standards will help the government adopt AI technologies more efficiently while maintaining transparency and accountability.
Rising Tensions Between Government and AI Companies
The dispute with Anthropic reflects a broader challenge facing the AI industry: balancing corporate responsibility and ethical safeguards with government demands for operational freedom.
Many AI companies have introduced safety policies to prevent their models from being used in harmful ways, such as creating misinformation or enabling autonomous weapons. At the same time, governments—particularly defense agencies—often want fewer restrictions so they can use the technology in intelligence analysis, cybersecurity, logistics, and military planning.
As AI becomes increasingly central to global security and economic competition, these disagreements are likely to intensify.
No Immediate Official Response
According to reports, neither the White House nor the GSA immediately responded to requests for comment on the proposed guidelines. Since the rules are still in draft form, they could be revised before being formally implemented.
Nevertheless, the proposal signals that the U.S. government is moving toward stronger control over how AI systems are deployed in federal operations.
A Defining Moment for Government–AI Partnerships
If adopted, the new guidelines could set a powerful precedent for how governments around the world negotiate with AI developers. By requiring unrestricted lawful use and strict neutrality standards, the United States may be attempting to ensure that government agencies retain full operational authority when using advanced AI systems.
For Anthropic and other AI developers, the choice will be whether to accept these terms or risk losing access to one of the world’s largest technology customers: the U.S. federal government.
As artificial intelligence continues to reshape industries and national security strategies, the relationship between governments and AI companies is entering a new and complex phase.

