Pentagon Tech Chief Says 'Anthropic Is Still Blacklisted' as Mythos AI Creates a National Security Dilemma

On May 1, 2026, Defence Department Chief Technology Officer Emil Michael delivered a carefully worded statement that captures the strange position the US military now finds itself in. Anthropic, the maker of Claude AI, remains blacklisted as a national supply chain risk. Its advanced Mythos AI model, however, is being treated by the Pentagon as a "separate national security moment."

The Pentagon keeps Anthropic blacklisted but treats Mythos AI as a separate national security tool, even as the NSA uses it.

"I think the Mythos issue that's being dealt with government-wide, not just at the Department of War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them," Michael told CNBC's "Squawk Box".

The distinction is significant. The Pentagon has agreements with seven AI companies - OpenAI, Google, NVIDIA, Microsoft, Amazon Web Services (AWS), SpaceX and xAI - to deploy their advanced capabilities on its classified networks. Anthropic is notably excluded. At the same time, reports indicate that the National Security Agency (NSA) is actively using Mythos Preview, despite the Pentagon's blacklist.

Read also: Google vs Anthropic: The $200M Pentagon Deal That Redefined AI Ethics

This is the story of an AI model so powerful that even the government agency that banned its creator cannot afford to ignore it.

Infographic: the Pentagon's split stance - Anthropic blacklisted as a supply chain risk, while Mythos AI is used by the NSA as a cyber-defence tool. Key facts cover the AI policy conflict, Pentagon partnerships, the legal dispute over the $200M contract, and restricted access via Project Glasswing.

The Pentagon-Anthropic Split Over AI Guardrails

The dispute dates back to a $200 million contract Anthropic signed with the Department of War in July 2024, making the company the first frontier AI developer to deploy models on classified US networks. The agreement included explicit restrictions: Anthropic would not allow its AI to be used for mass domestic surveillance or fully autonomous weapons systems.

The arrangement broke down in January 2026 when Defence Secretary Pete Hegseth issued a memo demanding "any lawful use" language across all DoD AI contracts, effectively requiring Anthropic to remove those safety restrictions. "You cannot have a vendor inserting itself into the chain of command," Hegseth said in March. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and putting our warfighters at risk".

Read also: The $5.6 Billion Legal AI War Just Got Hotter. NVIDIA Just Picked Its Side.

Anthropic refused. A senior department official confirmed Anthropic's position: its AI tools were never intended for missions like fully autonomous drone strikes, where machines would make kill decisions without direct human control. For the Pentagon, any vendor restricting how its technology could be used was unacceptable.

In late February, the Pentagon issued a sweeping directive. Anthropic was designated a "supply chain risk" - a label traditionally reserved for foreign adversaries. President Trump separately ordered all federal agencies to halt use of Anthropic's technology, with a six-month phase-out window. Trump wrote on social media that the company was run by "left-wing nut jobs," attempting to "strong-arm" defence. "We don't need it, we don't want it, and will not do business with them again!" he wrote.

Anthropic sued in San Francisco in March 2026, calling the Pentagon's supply-chain designation "unprecedented and unlawful" and alleging violations of free speech and due process. The case is ongoing.

Read also: Amazon Wants You to Talk to Its Products. It Just Launched AI Audio Q&A.

How Mythos AI Complicated Everything

Just when the Pentagon thought it had closed the door, Anthropic released Mythos Preview.

Mythos is not a standard large language model. Anthropic designed it specifically for cybersecurity tasks - and it performs at a level that has stunned researchers. The tool can uncover bugs lurking in decades-old code and autonomously work out how to exploit them.

The model found a 27-year-old vulnerability in OpenBSD - a bug that had survived nearly three decades of human review. It discovered hundreds of zero-day flaws and could chain multiple minor vulnerabilities into complete attack paths without human guidance. Security researchers called it AI's "Oppenheimer moment": the company most afraid of AI had created its most dangerous form.

Anthropic itself has been cautious. Citing concerns over the model's offensive cybersecurity capabilities, it has restricted Mythos access to approximately 40 organisations, only 12 of which it has publicly acknowledged. The limited distribution is managed through Project Glasswing, which gives major technology companies, security vendors, and critical software maintainers access to find and fix vulnerabilities in widely used software.

But the Pentagon's stance on Mythos has been notably measured. Michael, the DoD's CTO, distinguished between the company and its creation, calling Mythos "a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them".

Read also: Stripe Gave Your AI a Credit Card. Congratulations, Your Money Now Works While You Sleep.

The NSA Dilemma: Using What the Pentagon Banned

The most striking twist involves the National Security Agency. According to an Axios report published April 19, 2026, the NSA is actively deploying Anthropic's Mythos Preview, despite the Pentagon's blacklist. Two sources cited by Axios said Mythos Preview is being used "more widely" within the NSA, with usage extending across other parts of the department.

The contradiction could not be more stark. The NSA, a combat support agency that falls under the DoD, is using the most restricted version of a model whose developer the Pentagon had banned across all military work. The NSA is reportedly among the unnamed organisations with access to Mythos. In the UK, the NSA's counterparts are said to access Mythos through the AI Security Institute.

For security professionals, this is a governance problem dressed up as a policy dispute. As one analysis noted, "when top-level policy and ground-level adoption diverge this sharply, oversight breaks down, accountability becomes ambiguous, and risk accumulates in the gaps".

The contradiction has not gone unnoticed. White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent met with Anthropic CEO Dario Amodei on April 17 to discuss government use of Mythos and a potential roadmap for broader federal adoption. When asked about the meeting as he arrived for an event in Phoenix on April 24, President Trump said he had "no idea" about it.

The White House described the meeting as "productive and constructive," saying they "explored the balance between advancing innovation and ensuring safety". For a company that the White House had derided just two months earlier as a "radical left, woke company," the meeting signalled that Mythos may be too critical for even the US government to do without.

Read also: NVIDIA Crosses $5 Trillion Market Cap: Historic AI Rally Hits India’s Shores – What It Means for You

The Fallout: A More Diverse AI Supplier Base

The Anthropic dispute has forced the Pentagon to confront its overdependence on a single AI provider. A Pentagon technologist told Reuters that the situation "highlighted the risks of overdependence on a single AI provider and underscored the need for a more diversified supplier base".

That realisation is now translating into accelerated contracts for smaller firms. Smack Technologies won a Marine Corps contract in March 2025 and delivered a working prototype by October - software that compresses what is normally a months-long operational planning process into roughly 15 minutes. Through late 2025 and into early 2026, the programme drifted with no clear direction. Then the Anthropic situation erupted. Within weeks, Smack was invited to multiple meetings with the Marine Corps focused on getting the system into combat operations in 2026, more than a year ahead of schedule.

EdgeRunner AI, which works with Army Special Forces and holds a Space Force contract, has also experienced dramatic acceleration. Navy engagement has intensified from monthly meetings to multiple meetings per week. A Space Force contract that had been stuck in procurement for over a year was signed within weeks after the Anthropic situation became public.

The military has told EdgeRunner it can reach IL-6 authorisation - which allows systems to handle classified data up to the Secret level - within three months. That process normally takes 18 months or longer.

Tyler Sweatt, CEO of Second Front, a company that helps technology firms operate on secure Pentagon networks, described the shift in blunt terms: "We've seen a massive increase in demand from customers and the government to get AI solutions fielded since Anthropic was declared a supply-chain risk".

Read also: AI vs. Doctors: Experts Debate Who Wears the Stethoscope in 2026

What This Means for India

For India, there are several takeaways.

First, the Pentagon's "any lawful use" policy is likely to become the global baseline for military AI procurement - including for India's own defence modernisation. If Indian defence agencies partner with Google, OpenAI, or Microsoft, the terms of those partnerships will be shaped by the vendors' US agreements, not by Indian ones.

Second, India has already been offered first-hand experience with US defence AI frameworks. In February 2025, the Pentagon officially offered India co-development of autonomous systems, citing the "operational necessity of working with trusted partners." The INDUS-X initiative, launched in June 2023, has already facilitated direct contracts between US and Indian defence startups. If the US imposes "any lawful use" requirements, India would need to decide how much operational control it is willing to cede.

Third, India has been actively building its own indigenous AI capabilities for defence. The Defence Research and Development Organisation (DRDO) has developed the Evaluating Trustworthy AI (ETAI) framework and is working on homegrown large language models for military applications. The IndiaAI Mission has allocated over ten thousand crore rupees to build domestic compute capacity.

The Anthropic-Pentagon split suggests both the risks of over-dependence on foreign AI and the difficulty of walking away when a vendor is that capable. India cannot simply cut American companies out of its defence pipeline. It can, however, accelerate parallel investments in Indian AI - not as a replacement, but as a hedge.

Read also: Anthropic Asks Investors for 48-Hour Commitments in $50 Billion Round That Could Value It Above $900 Billion

The Bottom Line

The Pentagon blacklisted Anthropic but cannot ignore Mythos. The NSA is using the model even as its parent department bans its creator. Smaller defence startups are seeing contracts accelerate as the military rushes to diversify its AI suppliers. And Anthropic is in court trying to clear its name.

The white flag has not been waved. But it has at least been brought out of storage. The question now is not whether Anthropic will work with the Pentagon again. It is who will blink first - and what the rest of the world learns from watching them.

FAQ

Q: Why did the Pentagon blacklist Anthropic? 

A: Anthropic refused to drop contractual restrictions barring use of its AI for mass domestic surveillance and fully autonomous weapons, while the Pentagon demanded "any lawful use" language. When negotiations over the $200 million contract collapsed, the Pentagon designated Anthropic a "supply chain risk" - a label typically reserved for foreign adversaries.

Q: What makes Mythos AI so powerful? 

A: Mythos is designed specifically for cybersecurity tasks. It can autonomously find software vulnerabilities in decades-old code and chain them into complete attack paths. It discovered a 27-year-old bug that had survived human review for nearly three decades.

Q: Is the NSA using Mythos despite the ban? 

A: According to an Axios report published April 19, 2026, yes. The NSA is actively deploying Mythos Preview, and usage extends across parts of the DoD. The NSA is reportedly among the roughly 40 organisations with access via Project Glasswing.

Q: How has this dispute affected other AI companies? 

A: The Pentagon has signed agreements with AI companies including OpenAI, Google, NVIDIA, Microsoft, AWS, SpaceX, xAI, and Reflection to deploy their technology on classified networks. Smaller defence startups are seeing dramatically accelerated contract timelines as the military rushes to diversify its AI suppliers.

Q: What does this mean for India? 

A: The Pentagon's "any lawful use" policy is likely to become the global baseline for military AI. India must decide how much operational control it is willing to cede and accelerate parallel investments in indigenous AI capabilities through DRDO and the IndiaAI Mission.

Q: Will Anthropic ever work with the Pentagon again? 

A: The White House meeting in April suggests both sides recognise they need each other. A California court has blocked enforcement of the blacklist, but the relationship may never return to its previous state. The window for a negotiated resolution is open but narrow.


Do you think AI companies should be allowed to restrict how their technology is used by the military? Or should governments have unrestricted access to AI for national defence? Drop your thoughts in the comments below.

If you found this breakdown useful, share it with a colleague in defence tech, policy, or cybersecurity - the rules of the AI military game are being written right now.

Tags: Pentagon, Anthropic, Mythos AI, NSA, Supply Chain Risk, AI Security, US Defence
