{"id":38060,"date":"2026-04-03T18:41:03","date_gmt":"2026-04-03T18:41:03","guid":{"rendered":"https:\/\/www.duck9.com\/blog\/?p=38060"},"modified":"2026-04-03T14:41:36","modified_gmt":"2026-04-03T18:41:36","slug":"leak-de-la-anthropic","status":"publish","type":"post","link":"https:\/\/www.duck9.com\/blog\/leak-de-la-anthropic\/","title":{"rendered":"Leak from Anthropic"},"content":{"rendered":"<div class=\"postie-post\">\n<div>\n<div dir=\"ltr\">\n<div dir=\"ltr\"><img decoding=\"async\" alt=\"image0.png\" src=\"https:\/\/www.duck9.com\/wp-content\/uploads\/2026\/04\/image0.png\"><img decoding=\"async\" alt=\"image1.png\" src=\"https:\/\/www.duck9.com\/wp-content\/uploads\/2026\/04\/image1.png\"><\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">On March 31, 2026, Anthropic accidentally exposed the full source code of Claude Code, its flagship AI-powered coding assistant and terminal-based agent, through a packaging error in a public npm release. A 59.8 MB JavaScript source map file (.map) was inadvertently included in version 2.1.88 of the @anthropic-ai\/claude-code package. This file mapped minified production code back to the original readable TypeScript, revealing roughly 512,000 lines across nearly 2,000 files.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">Compounding the error, a debug artifact pointed to a ZIP archive of the code on Anthropic\u2019s own Cloudflare R2 storage. Security researcher Chaofan Shou flagged the issue on X (formerly Twitter), after which the codebase was rapidly mirrored, downloaded, and analyzed by developers worldwide.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">Anthropic quickly confirmed the incident as a \u201crelease packaging issue caused by human error, not a security breach,\u201d noting that no customer data, credentials, or model weights were exposed. 
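The mechanism behind the leak is worth spelling out: source maps in the widely used v3 format carry an optional sourcesContent array that embeds each original file verbatim, so publishing the .map is effectively publishing the source tree. A minimal sketch of the recovery step, using a toy map with hypothetical file names rather than the actual leaked artifact:

```javascript
// Why a shipped .map file leaks everything: the source map v3 format has an
// optional "sourcesContent" array embedding each original file verbatim.
// This map is a toy stand-in for illustration, not Anthropic's real file.
const map = {
  version: 3,
  file: "cli.min.js",
  sources: ["src/agent/loop.ts", "src/tools/bash.ts"], // hypothetical paths
  sourcesContent: [
    "export async function agentLoop() { /* ... */ }\n",
    "export function runBash(cmd: string) { /* ... */ }\n",
  ],
  mappings: "AAAA", // encoded positions; irrelevant to source recovery
};

// Recovering the originals needs no decoding at all:
// just pair each entry in "sources" with "sourcesContent".
function extractSources(sourceMap) {
  return (sourceMap.sources || []).map((path, i) => ({
    path,
    content: (sourceMap.sourcesContent || [])[i] ?? null,
  }));
}

for (const { path, content } of extractSources(map)) {
  console.log(`recovered ${path} (${content.length} chars)`);
}
```

With a real map this pairing is what browser devtools perform when reconstructing files; publishers typically avoid the risk by telling the bundler not to embed sourcesContent, or by keeping .map files out of the published tarball via .npmignore or the package.json "files" field.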
The company pulled the package, issued DMCA takedown notices (initially targeting thousands of GitHub repositories and forks, later scaled back), and began implementing preventive measures. Despite these efforts, clean-room reimplementations in languages like Rust and Python proliferated, with some forks rapidly amassing tens of thousands of stars.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">### What the Leak Revealed<\/div>\n<div dir=\"ltr\">The exposed code primarily concerned the agentic harness \u2014 the client-side orchestration layer that wraps Claude models, manages tool use (such as bash execution, file read\/write, edit, glob, and grep), handles memory via files like CLAUDE.md, and drives the agentic loop. Analysts noted modular system prompt assembly at runtime, caching mechanisms, parallel tool execution, and various internal configurations.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">Community deep dives uncovered several previously unannounced or hidden features:<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">&#8211; KAIROS \/ Dream Mode: An always-on autonomous background agent that activates after periods of inactivity (e.g., 5 sessions and 24 hours of silence). It reviews and consolidates memories, prunes outdated information, and can respond to external triggers like GitHub webhooks or Slack\/Discord messages. 
This feature was gated behind internal flags such as tengu_onyx_plover.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">&#8211; Tamagotchi-style \u201cBuddy\u201d pet: A virtual companion that appears beside the input box, reacts to coding activity, and incorporates gacha mechanics with multiple species (including a \u201cLegendary Cat\u201d that users reportedly hacked to obtain).<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">&#8211; Web search architecture: A two-tier system with a hardcoded list of ~85 \u201cpre-approved\u201d documentation domains (e.g., React, AWS, PostgreSQL) granting full content extraction. Other sites were limited to short paraphrased quotes (enforced via Haiku), with only the &lt;body&gt; of pages processed \u2014 ignoring &lt;head&gt; elements like structured data or schema markup. Tables were often mangled due to the markdown converter. Search results were capped at 8 per query, with some versions allowing the model to post-process results via executable code.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">&#8211; Practical behaviors and quirks: CLAUDE.md instructions are re-injected on every conversation turn (making concise, rule-focused files more impactful). Switching models mid-session breaks prompt caching, incurring full token costs. The codebase included frustration-detection regexes, feature flags (over 40 noted), and comments reflecting real engineering trade-offs, such as memoization increasing complexity without clear performance gains.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">Blog posts and analyses emphasized that the leak highlighted the importance of the \u201charness\u201d over raw model capabilities in AI coding tools. One post argued that production code often appears messy to outsiders, and user delight frequently trumps pristine architecture. 
Others explored architectural implications for agentic systems, memory management, and supply-chain risks in AI tooling.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">### Context and Reactions<\/div>\n<div dir=\"ltr\">This incident marked Anthropic\u2019s second notable slip in quick succession (following a separate unsecured database exposure earlier in the month that revealed details about an unreleased model codenamed \u201cMythos\u201d). Commentators drew ironic parallels: Anthropic, which has positioned itself as a leader in AI safety and has litigated aggressively over training data copyright, found itself on the receiving end of copyright discussions while scrambling to contain its own IP via DMCA actions. Some observers noted the speed of community porting as evidence that scaffolding and orchestration may be more replicable than frontier model weights.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">Developer discussions on platforms like Reddit and Hacker News cautioned against over-dunking on code quality, pointing instead to valuable patterns in areas like semantic memory merging, budget controls, and adversarial verification. However, the leak also coincided with unrelated malware campaigns (e.g., malicious axios npm variants and \u201cClaude Code leak\u201d lures distributing Vidar\/GhostSocks), prompting security warnings for anyone who experimented with mirrored code.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">### Implications<\/div>\n<div dir=\"ltr\">The Claude Code leak offers a rare public window into how a leading AI lab builds production-grade agentic tools. It underscores persistent challenges in build and release hygiene at scale, even for organizations with strong safety cultures. 
For the broader AI ecosystem, it reinforces that competitive edges increasingly reside in system-level design \u2014 memory architectures, tool integration, prompt orchestration, and background autonomy \u2014 rather than model parameters alone.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">While Anthropic moved swiftly to limit distribution, the cat was effectively out of the bag, sparking both innovation (through open reimplementations) and scrutiny of operational security practices across the industry. As one blog analysis put it, the episode provides \u201ca glimpse into the future of AI agents, where the system, memory, and tools around the model are more important than raw model capability alone.\u201d<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">In summary, the March 31, 2026 leak was a self-inflicted packaging mishap that exposed the scaffolding powering Claude Code without compromising core model secrets or user data. It fueled intense technical analysis, highlighted unreleased capabilities, and prompted reflection on what truly differentiates AI coding agents in a rapidly evolving field.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">**Key sources consulted** (blog posts and in-depth analyses):<\/div>\n<div dir=\"ltr\">&#8211; Alex Kim\u2019s blog: \u201cThe Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more\u201d (alex000kim.com).<\/div>\n<div dir=\"ltr\">&#8211; MindStudio: \u201cClaude Code Source Code Leak: 8 Hidden Features You Can Use Right Now\u201d (mindstudio.ai).<\/div>\n<div dir=\"ltr\">&#8211; Layer5 blog: \u201cThe Claude Code Source Leak: 512,000 Lines, a Missing .npmignore&#8230;\u201d (layer5.io).<\/div>\n<div dir=\"ltr\">&#8211; Medium posts including \u201cEveryone Analyzed Claude Code\u2019s Features. 
Nobody Analyzed Its Architecture\u201d and detailed architecture breakdowns.<\/div>\n<div dir=\"ltr\">&#8211; Additional coverage from Axios, The Verge, VentureBeat, and community megathreads provided factual context for the above essays.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>On March 31, 2026, Anthropic accidentally exposed the full source code of Claude Code, its flagship AI-powered coding assistant and terminal-based agent, through a packaging error in a public npm release. A 59.8 MB JavaScript source map file (.map) was inadvertently included in version 2.1.88 of the @anthropic-ai\/claude-code package. This file mapped minified [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":38061,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-38060","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/posts\/38060","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/comments?post=38060"}],"version-history":[{"count":0,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/posts\/38060\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/media\/38061"}],"wp:attachment":[{"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/media?parent=38060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/categories?post=38060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.duck9.com\/blog\/wp-json\/wp\/v2\/tags?post=38060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}