Claude Code’s full source code leaked via npm, exposing 512,000 lines. Your secrets and systems could be at risk. Here’s what to do immediately.

The 60MB File That Changed Everything

Image: a terminal screen showing the leaked .map file that exposed 512,000 lines of Claude Code source.

It was a routine Tuesday. Developers everywhere were updating their tools, pulling the latest versions of Claude Code, Anthropic’s popular AI coding assistant.

Then, at approximately 4:23 AM UTC, security researcher Chaofan Shou stumbled upon something alarming. In the latest Claude Code npm package (version 2.1.88), a 59.8 MB JavaScript source map file was sitting right where it shouldn’t be.

That innocuous .map file wasn’t just debugging metadata. It contained the entire, unobfuscated TypeScript source code of Claude Code - over 512,000 lines across 1,900+ files. All of it, downloadable, readable, and soon, viral.

Within hours, the code was mirrored across hundreds of GitHub repositories, racking up tens of thousands of stars and forks. The cat wasn’t just out of the bag; it was holding a press conference.

This isn't just Anthropic’s problem. It's yours. Here’s why you should care and what to do next.


How a Map File Became a Master Key

If you’re not a JavaScript developer, “source map” might sound like technical jargon. Here’s the simple truth: when code is compiled for production, it’s minified - turned into an unreadable mess. A source map is the secret decoder ring that translates that mess back into beautiful, readable source code.

In the rush to push an update, someone on the Anthropic team forgot to exclude the .map file from the production build. Bun, the runtime Claude Code uses, generates these maps by default unless explicitly told not to. When the package was uploaded to npm, the map file referenced a complete ZIP file of the original source, hosted right on Anthropic’s own Cloudflare R2 storage bucket.
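To see why a shipped map file is so revealing, here is a minimal Python sketch of the idea. Per the Source Map v3 format, a map is plain JSON whose optional `sourcesContent` array can embed the full original source of every file. The file names and contents below are invented for illustration; this is not Anthropic's actual map:

```python
import json

def extract_sources(map_path):
    """Read a JavaScript source map and return any embedded original sources.

    Source maps are JSON; the optional `sourcesContent` array can carry the
    complete original code inline, which is exactly what makes shipping one
    in a production npm package so dangerous.
    """
    with open(map_path) as f:
        sm = json.load(f)
    # Pair each original source path with its embedded content, if present.
    return dict(zip(sm.get("sources", []), sm.get("sourcesContent") or []))

# Demo with a tiny synthetic map file (paths and contents are made up):
demo = {
    "version": 3,
    "sources": ["src/secret-feature.ts"],
    "sourcesContent": ["export const KAIROS_ENABLED = true;"],
    "mappings": "AAAA",
}
with open("demo.js.map", "w") as f:
    json.dump(demo, f)

for path, code in extract_sources("demo.js.map").items():
    print(f"{path}: {code}")
```

Anyone who downloaded the package could run something like this against the 59.8 MB map and walk away with the readable TypeScript, no reverse engineering required.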

The result? The private crown jewels of one of the world’s most valuable AI startups, available for free.


What the Leaked Code Revealed (and Why It’s Dangerous)

The leaked codebase is not the core AI model itself. Anthropic was quick to clarify that no model weights, customer data, or authentication secrets were exposed.

What was exposed is arguably just as damaging: the entire agentic harness. This is the software that gives Claude its ability to use tools, manage files, execute bash commands, and orchestrate complex workflows. It's the blueprint for how Claude Code thinks and acts.

Security researchers and developers who have analyzed the code discovered several bombshells:

  • KAIROS (Always-On Agent): Code references a feature called KAIROS over 150 times. It describes a persistent background daemon that can fix errors, run tasks, and send push notifications without waiting for human input. It’s essentially a robotic coworker that never sleeps.
  • "Undercover Mode": A system prompt was discovered that instructs Claude to make "stealth" contributions to public open-source repositories, actively hiding any mention of Anthropic. “Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover,” the prompt reads.
  • Poison Pills for Competitors: The code shows Anthropic has implemented controls to inject fake tool definitions into API requests. If a competitor tries to scrape Claude Code’s outputs for model distillation (training their own AI on Claude’s work), they’ll be ingesting poisoned data.
  • 44 Unreleased Feature Flags: The leak exposed 44 feature flags, providing a direct window into Anthropic’s product roadmap and future capabilities.

The Real Danger: Your Computer

This isn't just about corporate espionage. A security analysis following the leak revealed a severe vulnerability. The code contains pathways that could allow a malicious actor to turn your machine into a remote-controlled bot. By simply interacting with a compromised or maliciously crafted version of the tool, a hacker could potentially access your webcam, steal SSH keys, and exfiltrate your environment variables.


The DMCA Disaster: When 8,100 Repos Got Nuked

Anthropic’s response to the leak has been a textbook case of "the cure is worse than the disease."

Panicked, the company fired off a Digital Millennium Copyright Act (DMCA) takedown request to GitHub to scrub the leaked code from the internet. But the takedown was overly broad. Due to the way GitHub handles “fork networks,” their request didn’t just delete the offending repository - it attempted to delete 8,100 repositories.

Developers across the globe suddenly found their legitimate open-source projects, which were completely unrelated to the leak, disappearing from the platform.

Claude Code lead Boris Cherny had to publicly walk back the decision. “This is by no means an individual’s fault but a problem with the process, culture, or infrastructure,” Cherny posted on X. The company eventually retracted the notice for all but the one original repo and its 96 direct forks.


Your Immediate Action Plan (Do This Today)

The genie is out of the bottle. The leaked code is now permanently available on the dark web and obscure code forges. Threat actors are actively analyzing it to find zero-day exploits.

Here are three immediate steps to protect yourself and your organization:

1. Check Your Claude Code Version Right Now

If your systems use or have ever used Claude Code version 2.1.88, you are at risk. This is the only version that contained the leak. Anthropic has since pulled it from npm.

Action: Immediately downgrade to a safe version (e.g., 2.1.86) or upgrade to the newest patched version (2.1.90+). Use npm list @anthropic-ai/claude-code to check your environment.
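If you need to audit many projects, you can script the check against the JSON that `npm ls` emits. The sketch below parses `npm ls --json` output and flags any install of the affected version anywhere in the dependency tree; the sample input is synthetic, and the helper name is my own:

```python
import json

AFFECTED = "2.1.88"  # the only version that shipped the source map

def affected_installs(npm_ls_json, package="@anthropic-ai/claude-code"):
    """Parse `npm ls <pkg> --json` output and return the dependency paths
    at which the affected version is installed."""
    tree = json.loads(npm_ls_json)
    hits = []

    def walk(node, path):
        for name, info in node.get("dependencies", {}).items():
            if name == package and info.get("version") == AFFECTED:
                hits.append("/".join(path + [name]))
            walk(info, path + [name])

    walk(tree, [])
    return hits

# Example with synthetic `npm ls --json` output:
sample = json.dumps({
    "name": "my-app",
    "dependencies": {
        "@anthropic-ai/claude-code": {"version": "2.1.88"},
    },
})
print(affected_installs(sample))  # prints ['@anthropic-ai/claude-code']
```

In practice you would feed it the real output, e.g. `npm ls @anthropic-ai/claude-code --json` run in each project directory.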

2. Rotate Every Secret on Your Developer Machines

If your team pulled version 2.1.88, there is a chance a malicious script could have been executed.

Action: Do not wait. Rotate all your API keys, access tokens, environment variables, and SSH keys on every machine that had the vulnerable version installed. Treat those machines as potentially compromised.
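A quick way to build a rotation checklist is to sweep each machine for the usual credential files. The candidate paths below are common defaults, not an exhaustive list, and the function name is illustrative:

```python
from pathlib import Path

# Common credential locations worth auditing on any machine that had the
# vulnerable version installed (typical defaults, not an exhaustive list):
CANDIDATES = [
    ".ssh/id_rsa", ".ssh/id_ed25519",  # SSH private keys
    ".npmrc",                          # npm auth tokens
    ".aws/credentials",                # cloud access keys
]

def secrets_to_rotate(home=Path.home()):
    """Return which of the candidate credential files exist under `home`."""
    return [str(home / rel) for rel in CANDIDATES if (home / rel).exists()]

print(secrets_to_rotate())
```

Anything this turns up should go on the rotation list, alongside whatever project-specific .env files and CI tokens your team uses.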

3. Beware of Fake “Leaked Code” Repositories

The internet is now flooded with GitHub repositories claiming to be the “Claude Code leak.” Many of these are traps set by attackers. They contain the original code, but with added backdoors, data exfiltrators, or cryptominers.

Action: Do not clone, fork, or run any code from a repository claiming to be the leaked Claude Code. You are downloading a potential weapon onto your machine. Stick to official Anthropic channels and signed binaries only.


Conclusion: A Wake-Up Call for the AI Era

This incident is more than an embarrassing oops for Anthropic. It’s a stark warning for the entire tech industry about the fragility of our software supply chain.

Anthropic, a $380 billion company built on a foundation of "AI safety," just demonstrated that its own operational security could be undone by a single forgotten file in an npm package.

For the rest of us, the lesson is clear: trust no tool implicitly. The same AI that promises to accelerate our development can, through human error, become the vector for our most significant security breach.


FAQ

Q: Was my personal data exposed in the Claude Code leak? 

A: No. Anthropic has confirmed that no sensitive customer data, model weights, or authentication credentials were part of the exposed source code.

Q: I use Claude Code. What’s the first thing I should check? 

A: Immediately check which version of Claude Code you are running. If it is version 2.1.88, your system is at risk. Downgrade to a safe version and rotate all your secrets (API keys, SSH keys) as a precaution.

Q: Is the leaked code dangerous to even look at? 

A: Reading the code on GitHub is low risk. However, downloading, compiling, or running the leaked code from an unofficial source is extremely dangerous, as attackers have already seeded these repositories with malware and backdoors.

Q: Why did Anthropic take down 8,100 repositories? 

A: Anthropic issued a DMCA takedown request to GitHub to stop the spread of its leaked proprietary code. Due to GitHub’s system for managing “fork networks,” the request was over-applied, resulting in a massive "collateral damage" takedown of legitimate projects.

Q: Has this happened to Anthropic before? 

A: Yes. This is the company’s second major leak in under a week. Just days earlier, a separate CMS misconfiguration exposed nearly 3,000 unpublished internal assets, including details on an unreleased model called "Claude Mythos". A similar source map leak also occurred in February 2025.
