Anthropic's Claude Code Source Code Leaked — Company Accidentally DMCA's Thousands of GitHub Repos
On March 31–April 1, 2026, Anthropic accidentally exposed approximately 512,000 lines of TypeScript source code for its Claude Code command-line application after publishing unminified source to the npm registry instead of the compiled build artifact. Within hours the code was mirrored across thousands of GitHub repositories and analyzed by developers worldwide. Anthropic confirmed the incident in an official statement, calling it a "release packaging issue caused by human error" and stressing that no customer data or credentials were involved. The exposure represents one of the largest accidental source code leaks by a major AI company to date.
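A packaging failure of this kind is exactly what npm's explicit publish allowlist is meant to prevent. Below is a minimal sketch of a `package.json` using the standard `files` field; the package name and paths are hypothetical, not taken from Anthropic's actual manifest:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/"
  ]
}
```

With a `files` allowlist, `npm publish` includes only the listed paths (plus a few defaults like `package.json` and `README`), so a `src/` tree of raw TypeScript never enters the tarball. Running `npm pack --dry-run` before publishing prints the exact file list that would ship, which is a cheap pre-release check against this class of mistake.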
In its effort to contain the spread, Anthropic submitted a DMCA takedown notice that GitHub's automated enforcement system executed against approximately 8,100 repositories, including legitimate forks of Anthropic's own publicly released Claude Code repository. The over-broad takedown triggered an immediate backlash from developers whose unrelated code was incorrectly blocked. Anthropic's head of Claude Code, Boris Cherny, quickly acknowledged the error and retracted the bulk of the notices, limiting enforcement to one repository and 96 forks that actually contained the leaked source. GitHub confirmed the selective rollback was completed within several hours of the initial enforcement action.
The incident marks Anthropic's second major data leak in less than a week, following the March 26 exposure of details about the company's unreleased "Mythos" AI model, described internally as Anthropic's most capable system to date and flagged as carrying unprecedented cybersecurity risk due to its autonomous offensive security capabilities. Security researchers noted that the exposed Claude Code source gives attackers material for attack-surface analysis of one of the industry's most widely deployed AI coding tools, and Anthropic now faces mounting questions about its internal operational controls even as it publicly champions AI safety as a core company mission. VentureBeat's security desk reported identifying at least five exploitable patterns in the leaked code before Anthropic's takedown effort took effect.
Sources
TechCrunch, VentureBeat, Bloomberg