Undercover Mode, KAIROS, BUDDY: Claude Code's Secret Features Revealed
The Claude Code source code leak revealed far more than technical architecture. Never-announced features, internal model names, a sophisticated memory system — and a virtual pet built into the AI. Complete analysis.

William Aklamavo
March 31, 2026
When 512,000 lines of code end up freely accessible on npm, they reveal more than technical details. They provide a window into Anthropic's unpublished product vision.
Here's what the March 31, 2026 leak exposed — and what it says about the direction of agentic AI.
Undercover Mode: The AI That Hides Being an AI
The most discussed discovery: Undercover Mode.
This mode is designed to let Claude Code operate in environments where revealing its AI identity could be problematic — typically open source repositories, where the model also shouldn't expose proprietary information.
The irony? This concealment system was itself exposed in a public debug file.
This mode raises legitimate questions about transparency of AI agents in development workflows, and foreshadows broader debates about autonomous agent governance.
KAIROS: Claude Code as a Daemon
KAIROS is a daemon mode — Claude Code running in the background without human intervention.
This isn't just a code completion tool. It's a persistent autonomous agent capable of:
- Continuously monitoring a repository
- Detecting patterns and taking proactive actions
- Running as a background service like a system daemon
This was the first confirmation that Anthropic was developing persistent agentic capabilities for Claude Code. A strong signal about product direction.
autoDream: Memory Consolidation
autoDream is a memory consolidation system inspired by how the human brain consolidates learning during sleep.
Concretely, it allows Claude Code to:
- Synthesize past interactions into reusable patterns
- Build an evolving representation of the codebase
- Improve its suggestions over time on a given project
Combined with the MEMORY.md system (three context layers: session, project, global), this places Claude Code in a separate category from stateless assistants.
Internal Model Names
The leak also revealed internal names for models in development at Anthropic:
- Capybara — probably Claude 4 or a variant
- Fennec — intermediate model
- Numbat — undetermined use
These code names are common in the industry (OpenAI also uses animal names internally). Their exposure doesn't directly compromise security, but provides insight into the product roadmap.
BUDDY: The Virtual Pet in Your IDE
The most unexpected discovery: BUDDY, a digital virtual pet system integrated into Claude Code.
An easter egg — or an unannounced developer wellness feature. BUDDY appears to be an interactive companion that evolves based on your coding activity.
It might seem anecdotal. It actually reveals something important: Anthropic is thinking about the emotional relationship between developers and their AI tools. The next frontier isn't just technical competence, it's sustained engagement.
What This Says About the Future of AI Agents
Putting it all together, the direction is clear:
- Persistence: AI agents are no longer stateless. KAIROS and MEMORY.md show an architecture designed to operate over weeks, not sessions.
- Autonomy: KAIROS in daemon mode, proactive tools — agents no longer just respond to prompts. They act.
- Opaque Identity: Undercover Mode opens an ethical debate about agents that conceal their nature. As these tools integrate into workflows, the transparency question becomes central.
- Emotional UX: BUDDY is a signal. AI code editors will differentiate on engagement, not just technical performance.
For Developers and Entrepreneurs
If you're integrating AI agents into your products or processes today, these revelations have practical implications:
- Memory architecture: don't neglect persistent context. An agent that "remembers" is a more useful agent.
- Execution modes: think daemon and asynchronous from the design phase.
- Transparency: document what your agents do in the background. Your users will ask.
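As one way to "think asynchronous from the design phase": queue agent work instead of blocking on it. A minimal sketch using Python's asyncio, where callers enqueue tasks and a background worker drains them (task names and structure are illustrative, not tied to any product above):

```python
import asyncio

async def agent_worker(queue: asyncio.Queue, results: list[str]) -> None:
    """Drain queued tasks in the background; stands in for long agent work."""
    while True:
        task = await queue.get()
        if task is None:        # sentinel: shut the worker down
            queue.task_done()
            break
        results.append(f"done:{task}")
        queue.task_done()

async def run(tasks: list[str]) -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    results: list[str] = []
    worker = asyncio.create_task(agent_worker(queue, results))
    for t in tasks:
        queue.put_nowait(t)     # enqueue and move on: asynchronous by design
    queue.put_nowait(None)
    await queue.join()          # wait for all work to be acknowledged
    await worker
    return results
```

Designing around a queue from day one makes the later jump to a true daemon (a worker that never exits) a configuration change rather than a rewrite.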
At BOVO Digital, we build AI agents for real clients — using the same architectural patterns that were just exposed at Anthropic. The difference: our release pipelines are verified before every deployment.
