
The Department of Defense’s clash with Anthropic over the integration of artificial intelligence into military operations, and over who sets the limits on its use, reached a peak this week when Defense Secretary Pete Hegseth gave the AI company until 5:01 p.m. ET Friday to accede to the government’s demands. Anthropic has not budged, at least so far, but the battle between the military and industry over AI is just getting started. The Pentagon is colliding with the private companies that control AI in a way that has not been tested in the post-World War II era.
On Thursday, Anthropic refused Hegseth’s demand to loosen certain safeguards on its models for military use, including prohibitions on mass domestic surveillance and fully autonomous weapons, saying that doing so would violate company policy. CEO Dario Amodei’s decision comes after the Pentagon warned it could terminate the partnership if the company refuses to support “all lawful uses.”
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement on Thursday. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
The standoff highlights the emerging reality that private firms developing frontier AI may seek to set their own limits on how the technology is deployed, even in national security contexts.
In July the Defense Department awarded contracts worth up to $200 million each to four companies — Anthropic, OpenAI, Google DeepMind, and Elon Musk’s xAI — to prototype frontier AI capabilities tied to U.S. national security priorities. The awards signal how aggressively the Pentagon is moving to bring cutting-edge commercial AI into defense work.
The urgency is reflected in internal Pentagon planning as well. A January 9 memorandum outlining the military’s artificial intelligence strategy calls for the U.S. to become an “AI-first” fighting force and to accelerate integration of leading commercial AI models across warfighting, intelligence, and enterprise operations.
“There are no winners in this,” Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, told CNBC in a recent interview about the standoff between the Pentagon and Anthropic. “It leaves a sour taste in everyone’s mouth.”
What it does do, though, is mark a shift — a departure from decades of defense innovation during which governments themselves controlled the technology as it was created.
“For most of the post–World War II era, the U.S. government defined the frontier of advanced technology,” said Rear Admiral Lorin Selby, former chief of naval research and current general partner at Mare Liberum, an investment firm that specializes in maritime technology and infrastructure. “It set the requirements, funded the foundational research, and industry executed against government-driven specifications. From nuclear propulsion to stealth to GPS, the state was the primary engine of discovery, and industry was the integrator and manufacturer.”
AI, Selby said, has inverted that model.
“Today the commercial sector is the primary driver of frontier capability. Private capital, global competition, and commercial data scale are advancing AI at a pace that traditional government R&D structures cannot easily replicate. The Department of War is no longer defining the edge of what is technically possible in artificial intelligence — it is adapting to it,” he said.
United States Secretary of War Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado on Monday, Feb. 23, 2026.
Aaron Ontiveroz | Denver Post | Getty Images
This reversal in the balance of power over technology carries both opportunity and risk.
“We shouldn’t be in a place where private companies feel that they have leverage over the U.S. government or Western allies because of the technological capability they are providing,” said Joe Scheidler, a former associate director and special advisor at the White House and co-founder and CEO of AI start-up Helios. “Technologists should build and do that responsibly, but governments should be the entities making the decisions.”
Anthropic and the DoD did not respond to requests for comment.
Why the military needs private AI
Public-private partnerships have long supported U.S. defense innovation, from World War II industrial mobilization to modern aerospace and cybersecurity programs. But artificial intelligence is different because the most advanced capabilities are increasingly concentrated in commercial firms rather than government labs.
“Strong public-private partnerships are what gives America its edge,” Scheidler said. “You will not find a more dynamic and innovative talent pool than that of the American entrepreneurial community. The idea of trying to replicate that level of innovation within government itself … is difficult.”
That concentration is precisely why governments seek partnerships, but according to Selby, the dependency is also primarily driven by speed. “The innovation cycle in venture-backed firms moves in months. Traditional acquisition cycles move in years. Without commercial AI providers, the government would be slower, less adaptive, and far more expensive,” he said.

When critical national security tools are developed by private companies, “the main change is that the government no longer fully controls the development of its most advanced technological tools,” said Betsy Cooper, director of the Aspen Policy Academy and former advising attorney for the U.S. Department of Homeland Security.
Commercial AI systems are typically built first for broad markets rather than military missions, which can create gaps between how companies design their technology and how governments want to deploy it, Cooper said.
That misalignment can become more pronounced when corporate policies, reputational concerns, or global customer pressures conflict with government objectives, a dynamic now visible in the Anthropic dispute.
“Companies may not want to risk negative reaction from their customer base if their product is used for highly controversial reasons — for instance, to create autonomous lethal weapons or commit preemptive killings before crimes are committed,” Cooper said.
Government has longer-term leverage
Despite the shift toward commercial technology, defense leaders are unlikely to relinquish control over mission-critical systems.
“The first thing to understand is that from what I have seen to date, the DoD is not going to give up final control,” said Brad Harrison, founder of Scout Ventures, an early-stage venture capital firm investing at the intersection of national security and critical technology innovation. “The government still wants to understand everything that goes into it and all the dependencies and risks.”
Harrison, a former U.S. Army Airborne Ranger and West Point graduate, said AI could eventually influence decisions such as how to intercept incoming threats. “The government is going to be extremely cautious with how they let AI interact with those data layers,” he said. “Nobody wants to be the person responsible for Skynet,” he added, referring to a fictional AI from the “Terminator” universe that caused a nuclear war.
Governments also retain powerful tools to influence companies, including procurement decisions, export controls, and regulatory authority. “The government has a lot of leverage,” Harrison said. “If you don’t want to work with them, they have a lot of ways to make that a very difficult decision,” he added.
But leverage flows in both directions, at least for now, according to Selby. “In the short term, companies with scarce AI talent and proprietary models may hold significant influence. In the long term, sovereign governments retain regulatory authority, contracting power, funding scale, and if necessary, legal compulsion,” he said.
The most important question, in Selby’s view, is “whether we build a durable public-private compact that treats AI as foundational national security infrastructure rather than just another vendor relationship.”
Risks in new military-Silicon Valley industrial complex
Experts say the issue is ultimately less about whether companies or governments hold permanent leverage and more about how the relationship evolves as AI becomes central to national power.
“If we build alignment and resilience into the public-private relationship, AI can strengthen national security while preserving innovation,” Selby said. “If we fail to do so, we risk a future in which capability is abundant but alignment is brittle,” he added.
There are many new forms of risk in the emerging military-Silicon Valley industrial complex. For example, reliance on externally developed AI could introduce vulnerabilities if systems fail unexpectedly or become unavailable, particularly if military units grow accustomed to them during operations.
“Over-reliance could prove deadly,” said Shanka Jayasinha, founder of Onto AI, a company that develops AI tools for military, healthcare, financial, and enterprise clients, describing scenarios in which special operations units depend on AI-enhanced mission-coordination tools during deployments. If those systems fail after prolonged use, “many lives would be in danger,” he said.
Vendor lock-in is another concern. As AI platforms become embedded in workflows, replacing them may become difficult. “With the current speed of progress in AI, it is tough to unseat any incumbent,” Jayasinha said.
Harrison, however, says one risk the Pentagon won’t expose itself to is being captive to a single company. “The U.S. government is not going to be dependent on any one Silicon Valley company,” he said. “They will very methodically test systems, control the data layer, and move step by step.”
In fact, the Pentagon issued its own very clear statement on the importance of Anthropic or any single company in a post on X from Under Secretary of War for Research and Engineering Emil Michael on Thursday night: “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
Anthropic said in its statement that should the government “offboard” Anthropic, “we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
One approach is building what some technologists call “sovereign AI architectures” — systems designed to allow governments to maintain independence from vendors while still benefiting from commercial innovation.
“We talk a lot internally about this notion of sovereign intelligence and vendor independence,” Scheidler said, contending that the U.S. ecosystem remains broad enough to prevent over-reliance on any single provider. “There are new ideas emerging on a daily basis, and we don’t have to rely on one vendor to do that,” he said.

