
In March 2026, Anthropic accidentally leaked the full source code of its AI coding assistant, Claude Code, by publishing a large debugging file to a public software registry. The leak exposed unreleased features such as "KAIROS," an autonomous background agent, and "Undercover Mode," which scrubs AI fingerprints from code contributions. Anthropic attributed the incident to human error rather than a hack, but it coincided with a malicious supply-chain attack on the popular "axios" package, creating significant security risks for developers. Users have also reported frustrating usage limits and bugs that drain quotas faster than expected, prompting Anthropic to offer extra credits as compensation. Technical analyses of the leaked code reveal a complex memory architecture that uses "dream" cycles to organize information, though it remains constrained by its reliance on local files. Together, these reports highlight the operational-maturity challenges facing major AI firms as they ask enterprise clients for deep access to proprietary systems.