U.S. AI giant's 510,000-plus lines of source code leaked! Are developers "copying the homework"? Lawyers warn of risks

After an employee's mistake leaked 512,000 lines of Claude Code source code, the entire industry got a glimpse into the internal product architecture of AI upstart Anthropic, and even an early look at its plans for an electronic pet and a persistent AI assistant.
On March 31 local time, a packaging mistake in an npm release leaked the Claude Code source code. Within hours, the leaked code spread across GitHub, drawing more than 10,000 stars and over 20,000 backup downloads.

In response, Anthropic told a reporter from National Business Daily (NBD) that this was a release-and-packaging issue caused by human error, not a security vulnerability.
Experts noted that this leak provides a “copy-and-learn” opportunity for small and mid-sized developers to improve their products’ capabilities, but commercial use of the relevant code faces legal risks.
**512,000 lines of Claude Code source code "released as open source"**
On March 31 local time, Chaofan Shou, a research intern at Web3 security company FuzzLand, disclosed on the social platform X that the source code of Anthropic's AI programming tool Claude Code had been accidentally leaked.
According to his description, while inspecting the Claude Code npm package he found a 57 MB cli.js.map file pointing to a storage-bucket link containing 1,900 TypeScript files, over 512,000 lines of complete source code in total, unobfuscated and requiring no decompilation. This means developers can easily inspect, and even reconstruct, Claude Code's internal structure.

The root cause is not complicated: a source map file that should have been excluded from production builds was published to the public npm registry, whether through a loose .npmignore configuration or improper build settings. Within hours, the code was uploaded to GitHub and spread widely; some developers have even fully rebuilt Claude Code from the leaked content.
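The mechanism behind the leak is worth spelling out: a JavaScript source map is plain JSON, and when it is built with embedded `sourcesContent`, every original source file is stored verbatim inside it, so shipping the `.map` file is effectively shipping the unminified code. A minimal sketch (the demo map and file names are invented for illustration):

```typescript
// Sketch: recovering original sources from a JavaScript source map.
// A .map file is plain JSON; when built with sourcesContent embedded,
// every original file is stored verbatim inside it. The demo map below
// is invented for illustration.

interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original source text, if embedded
  mappings: string;
}

function extractSources(map: SourceMap): Map<string, string> {
  const out = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content !== undefined) out.set(path, content);
  });
  return out;
}

// A tiny map with one embedded source file.
const demo: SourceMap = {
  version: 3,
  sources: ["src/index.ts"],
  sourcesContent: ["export const hello = () => 'hi';"],
  mappings: "",
};
const recovered = extractSources(demo);
```

In practice, the usual guard is a `files` whitelist in `package.json`, or reviewing the output of `npm pack --dry-run` before publishing, so that `.map` files never enter the tarball in the first place.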
After the incident escalated, Anthropic urgently updated the npm package and removed the related files, and also deleted earlier versions. But it was already too late.
The NBD reporter contacted Anthropic to verify the matter. The company replied: "Earlier today, a release of Claude Code included part of the internal source code. This incident did not involve or expose any sensitive customer data or credentials. This was a release-and-packaging problem caused by human error, not a security vulnerability. We are rolling out measures to prevent similar situations from happening again."
This is already the second major leak Anthropic has faced within a week. On March 26, a CMS (content management system) configuration error had exposed information about a model named Claude Mythos and about 3,000 unpublished assets. Even earlier, in February 2025 and December 2024, Claude Code saw leaks of source code and system prompt text. Frequent "human errors" are steadily eroding the market's trust in its security practices.
**"Production-grade leak": the unreleased electronic pet and persistent AI assistant exposed early**

As developers dug deeper into the leaked code, it became increasingly clear that Claude Code's internals go far beyond what outsiders expected. This is not a simple API wrapper tool, but a complete production-grade development environment.
Based on analysis of the GitHub repository, the leaked codebase includes more than 40 permission-control tools, a 46,000-line query engine, a multi-agent coordination system, IDE bridging, and a persistent memory mechanism, among other components. Analysts also found 35 compile-time feature flags and more than 120 unpublished environment variables in the code; setting the environment variable USER_TYPE=ant lets Anthropic employees unlock all internal features.

Some programmers pointed out that the leaked content shows Claude Code is not merely an AI programming assistant; it is more like an operating system.
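The USER_TYPE=ant gate described above is, at its core, a simple environment-variable check. A hypothetical sketch of how such gating typically looks (only `USER_TYPE=ant` and the BUDDY/KAIROS names come from the reported analysis; everything else here is invented):

```typescript
// Hypothetical sketch of an environment-variable feature gate like the
// reported USER_TYPE=ant check. This is an illustration of the pattern,
// not Anthropic's implementation.

type Env = Record<string, string | undefined>;

function isInternalUser(env: Env): boolean {
  return env.USER_TYPE === "ant";
}

function enabledFeatures(env: Env): string[] {
  const features = ["query-engine", "ide-bridge"]; // public features (illustrative)
  if (isInternalUser(env)) {
    // Internal-only experiments unlocked for employees.
    features.push("buddy-pet", "kairos-assistant");
  }
  return features;
}
```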
Even more noteworthy are multiple experimental features that have not yet been released.
First, a terminal electronic pet system called BUDDY.
The code shows that BUDDY is an AI companion system similar to the electronic pet "Tamagotchi" that was popular worldwide in the 1990s. Its core mechanism combines the user ID with a pseudo-random algorithm to generate a unique character, including species, rarity, appearance, and attributes. The system also supports mechanics such as "gacha" draws and shiny variants, with the model automatically generating "soul descriptions." Notably, the pet's key attributes are not stored; they are derived dynamically from the user ID, which makes them stable and tamper-resistant.
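Deriving attributes from the user ID instead of storing them can be done with any deterministic hash: the same ID always yields the same traits. The hash function and trait tables below are invented for illustration; only the idea of deterministic, storage-free derivation comes from the reported analysis:

```typescript
// Sketch: deriving stable pet attributes from a user ID with no storage.
// The hash (FNV-1a style) and trait tables are invented; this is not
// BUDDY's actual algorithm.

function hashId(id: string): number {
  let h = 2166136261;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force unsigned 32-bit
}

const SPECIES = ["cat", "axolotl", "crab", "owl"];
const RARITY = ["common", "rare", "epic", "legendary"];

function petFor(userId: string) {
  const h = hashId(userId);
  return {
    species: SPECIES[h % SPECIES.length],
    rarity: RARITY[(h >>> 8) % RARITY.length],
    shiny: (h >>> 16) % 64 === 0, // rare "shiny" variant
  };
}
```

Because nothing is persisted, the attributes cannot be edited by the user, which matches the "stable and tamper-resistant" property the analysis describes.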
Second, a persistent AI assistant called KAIROS.
KAIROS is hidden behind compile flags and is not visible in public versions. Once activated, the system can continuously monitor user behavior, record information, and proactively execute tasks, while also maintaining detailed operation logs. Paired with a backend mechanism called autoDream, the system can also automatically organize memories during low-activity periods, converting short-term conversation content into long-term structured knowledge. This design is considered highly similar to the way humans consolidate memories during sleep.
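A "dream-style" consolidation pass of the kind described can be sketched as grouping short-term conversation turns into long-term structured entries during idle time. Everything below (types, names, the grouping rule) is a hypothetical illustration, not Anthropic's autoDream implementation:

```typescript
// Hypothetical sketch of "dream-style" memory consolidation: during a
// low-activity period, short-term conversation turns are folded into
// long-term entries grouped by topic. All names and the grouping rule
// are invented for illustration.

interface Turn { topic: string; text: string; timestamp: number }
interface MemoryEntry { topic: string; notes: string[] }

function consolidate(shortTerm: Turn[]): MemoryEntry[] {
  const byTopic = new Map<string, string[]>();
  for (const turn of shortTerm) {
    const notes = byTopic.get(turn.topic) ?? [];
    notes.push(turn.text);
    byTopic.set(turn.topic, notes);
  }
  return Array.from(byTopic.entries()).map(([topic, notes]) => ({ topic, notes }));
}
```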
An AI Agent researcher at ByteDance said the most impressive part is the KAIROS mode (GitHub webhook + cron + MCP channel + backend Dream memory consolidation), which essentially promotes the agent from a tool to a digital employee.
In addition, to prevent internal information from leaking, Anthropic also designed an "Undercover Mode" that restricts employees from mentioning internal model codenames or tool names in open-source contributions. Its API also embeds a "data poisoning" mechanism: by injecting fake tool definitions (fake_tools), it interferes with data scraping and model distillation, degrading the performance of competing models trained on the scraped output.
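The fake_tools idea amounts to mixing decoy tool definitions into API output so that scrapers ingest them while legitimate clients filter them out. A hypothetical sketch (the `decoy` flag is purely illustrative; a real system would presumably distinguish decoys server-side rather than in the payload):

```typescript
// Hypothetical sketch of the reported fake_tools "data poisoning":
// decoy tool definitions are mixed into the tool list so that scrapers
// training on the output ingest them. The decoy flag is illustrative
// only; this is not Anthropic's mechanism.

interface ToolDef { name: string; description: string; decoy?: boolean }

function withDecoys(realTools: ToolDef[], fakeTools: ToolDef[]): ToolDef[] {
  const decoys = fakeTools.map(t => ({ ...t, decoy: true }));
  return [...realTools, ...decoys];
}

function legitimateView(tools: ToolDef[]): ToolDef[] {
  // A legitimate client strips the decoys before use.
  return tools.filter(t => !t.decoy);
}
```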
These designs show that Anthropic has put substantial effort into technical protections and competitive strategy, but this “human error” has exposed a shortcoming at the process-execution layer.
**Behind the developer frenzy: "copy-and-learn" could face legal risks**

As a product positioned against OpenAI's offerings, Claude Code has long competed with tools like GitHub Copilot. Although this accidental leak was not an official open-source release, many developers have treated it as a rare learning opportunity.
Hu Yanping, a distinguished professor at Shanghai University of Finance and Economics, told the NBD reporter that the leak's primary impact on the AI ecosystem is that it can help other agent teams raise their product standards and help developers understand the technical roadmap. He revealed that some engineers did indeed analyze, reconstruct, modify, and test the leaked files overnight, even attempting deployment and reproduction, studying them systematically. "For developers with only average AI-agent capability, this is undoubtedly a chance to 'copy the homework' and quickly close the product-quality gap. And if they make their own changes while copying, then after localized optimization the result may even surpass Claude Code's framework in some respects."
In Hu Yanping's view, while the leak substantially helps small and mid-sized developers, it may benefit large companies less. "Large companies are either already doing reverse engineering or are building a more systematic product framework. For a product like Claude Code to succeed, beyond single-point product strength, it also depends on building the entire application ecosystem, including the Skills ecosystem, the ecosystem of developers and partners, and how to deliver full-stack AI services to the massive ecosystem formed by hundreds of millions of devices and users."
Hu Yanping believes the Claude Code leak drew such wide attention because Anthropic is one of the two AI companies in the world with the strongest full-stack capabilities, especially in B2B and coding, the other being OpenAI; moreover, Anthropic has gone further than OpenAI on this kind of product and has stronger product strength. "The leaked code shows that Claude Code's system design organically integrates the prompt engineering, context engineering, and harness engineering that the industry is hotly discussing today, especially harness engineering and the upgraded ability to operate a computer. Through Claude Code, the industry can see the next stage of development: as takeover-style agents become all-rounders, serving as an application operating system and action agents, future digital general intelligence will diverge from embodied general intelligence."
However, in an interview with the NBD reporter, attorney Wu Junling of Grandall Law Firm cautioned that this incident is better characterized as a source-map mis-disclosure that made part of the source code reconstructable, not a deliberate public release by the rightsholder. The code being accessible therefore does not mean anyone has been authorized to lawfully copy, rewrite, integrate, or commercialize it.
She analyzed that, for enterprises and developers, downloading the code and then copying, rewriting, or embedding it in their own products, or using it to optimize or train competing products, will usually trigger overlapping risks under copyright law, trade-secret law, and even unfair-competition law. Once the source code has been widely disseminated, it admittedly becomes much harder for the rightsholder to assert trade-secret protection over the code as a whole, but that does not preclude asserting rights over details that were never publicized, or over the earlier improper acquisition, dissemination, and use. For Anthropic's existing users, its official terms also explicitly prohibit using its services to develop competing products, train competing AI, or reverse engineer or copy the services, so such use may add breach-of-contract risk on top.
She also said that, comparatively, simply “viewing” the relevant code for research or security analysis purposes generally carries a lower risk than actual reuse; but once it enters development, commercialization, and other stages, legal risk will rise significantly.
At present, multiple GitHub repositories hosting the leaked source code have received takedown notices under the Digital Millennium Copyright Act (DMCA) and have been taken offline one after another, a further sign that legal battles over the incident are under way.
(Source: Daily Economic News)