Alpha. Kit is in active development. Code is not consumer-ready and the architecture is still moving. These notes are a build log, written from inside the work.

What Kit is, and why we're building it

This is the starting note. Before the build log starts in earnest, it helps to say plainly what Kit is, what it isn't, and why we keep choosing the harder path.

Kit is a persistent memory substrate for AI agents. It's MCP-native, so any agent that speaks Model Context Protocol can plug in. It runs on your machine. The data structures inside it are typed: every memory has a category, a project, a scope, a tier, edges to other memories, provenance back to its source. There's a nightly consolidation cycle that compresses noise into signal. There's a knowledge graph underneath. There's a small UI. There's a daemon that's just started breathing.

That's the elevator pitch. But Kit isn't really a feature you bolt onto an agent. It's an attempt at a different shape for the whole problem. This note is about that shape.

The friction we kept hitting

Every modern AI agent has the same hole. Each session is an island. Context is whatever you can squeeze into the prompt window before the turn starts. Anything you say to the model evaporates the moment the session closes. If you want continuity, you either pay a platform to remember for you (and watch your context become someone else's training corpus), or you keep your own notes and re-paste them every morning.

That's not a memory problem. It's a substrate problem. The memory layer that should be the most foundational thing in an agent stack is, today, the most rented. The platforms that own the chat window own the continuity. You're a tenant in your own thinking.

The first time you really feel this is when you switch tools. You move a workflow from one agent surface to another and you discover that none of the context comes with you. Your self is portable; the agent's relationship with you isn't. So you start over. Again.


The reframe

Once you see the substrate problem, two paths open up.

The first is to ask each platform to remember harder. Build better chat-summarization. Stitch sessions together. Let the assistant scroll back further. This is the path most platforms are on, and it's not wrong, exactly — it just doesn't change the rental relationship. Better notes for someone else's filing cabinet.

The second is to flip it. Put the memory layer underneath the agents, not inside them. Make it durable, typed, queryable, model-agnostic, and yours. Let agents read from it and write to it the way they call any other tool. Let it be the thing that survives when the conversation ends.

That's what Kit is. Not a smarter chat. A substrate.

Brain as protocol, not chat

The mistake people make when they hear "agent memory" is to picture another voice in the room — a third character that talks back to the agent. We tried that early. It doesn't work. Every conversation between agent and memory layer becomes another tax on the prompt window, and every fuzzy answer compounds with the agent's own fuzziness.

The brain shouldn't talk. It should hold state.

A useful frame, if you've designed APIs: imagine the brain as a typed key-value store on top of a knowledge graph, exposed through a deliberately small set of operations. brain_remember, brain_recall, brain_read, brain_link, a handful of others. Agents call these like they'd call any tool. There's no chat session with the brain. There's just durable structured state and the operations that touch it.
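
That operation surface can be sketched as a toy in-memory store. The operation names come from the text above; the signatures, the storage, and the naive substring recall are all assumptions for illustration, not Kit's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class BrainStore:
    """Toy sketch of a typed key-value store over a graph of edges."""
    memories: dict = field(default_factory=dict)   # id -> memory record
    edges: set = field(default_factory=set)        # (src, dst, relation)
    _next_id: int = 1

    def brain_remember(self, title: str, body: str, **meta) -> int:
        """Write durable state; returns the new memory's id."""
        mid = self._next_id
        self._next_id += 1
        self.memories[mid] = {"title": title, "body": body, **meta}
        return mid

    def brain_read(self, mid: int) -> dict:
        """Read one memory by id."""
        return self.memories[mid]

    def brain_recall(self, query: str) -> list:
        """Naive substring recall; a real substrate would rank by relevance."""
        q = query.lower()
        return [m for m in self.memories.values()
                if q in m["title"].lower() or q in m["body"].lower()]

    def brain_link(self, src: int, dst: int, relation: str = "related") -> None:
        """Add a typed edge between two memories."""
        self.edges.add((src, dst, relation))
```

The point of the shape, not the code: every call is a plain tool invocation against durable state. There's no conversation to maintain, so nothing here costs prompt-window tokens between turns.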

This sounds boring written out. The interesting part is that once the brain is a protocol, the agents become interchangeable. Any model that speaks MCP can read and write. There's no special relationship between any one agent and the brain. The continuity moves from the model into the substrate, where it belongs.

The four primitives

Inside the brain there are four kinds of thing. We've come back to this carve-up again and again over the last few weeks; each time we tried to merge or skip one of them, something broke. The set that holds is:

  • MEMORIES: durable typed state (facts, decisions, handoffs, plans)
  • MESSAGES: directed transient events ("plan ready", "review needed", "blocked")
  • SUBSCRIPTIONS: routing rules ("on event X → spawn agent Y")
  • DAEMON: the active watcher (subscribe → match → spawn → repeat)
Four primitives. The daemon is the only active element; everything else is durable state or routing.

Memories are the durable layer. A memory is a typed note — a decision, a handoff, a gotcha, a relationship pattern, a trajectory. It has a title, a body, tags, scope, project, edges to other memories, provenance back to whatever produced it. Memories are how the brain remembers what is true.

Messages are events. Different lifecycle entirely: they're created, delivered, consumed, then they expire. A memory persists; a message moves. "Plan #2713 is ready for review." "Execution done, here's the commit." "I'm blocked on a dependency." Messages are how the brain coordinates what is happening.
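The split between the two lifecycles can be made concrete with a sketch. The field names follow the prose above (title, body, tags, scope, project, edges, provenance); the exact schema, the TTL, and the `expired` check are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Memory:
    """Durable: persists until deliberately changed or consolidated."""
    title: str
    body: str
    tags: list = field(default_factory=list)
    scope: str = "project"
    project: Optional[str] = None
    edges: list = field(default_factory=list)      # ids of related memories
    provenance: Optional[str] = None               # what produced this

@dataclass
class Message:
    """Transient: created, delivered, consumed, then expired."""
    kind: str                  # e.g. "plan_ready", "blocked"
    payload: dict
    target: str                # directed: which agent should consume it
    created_at: float = field(default_factory=time.time)
    consumed: bool = False
    ttl_seconds: int = 3600    # messages age out; memories never do

    def expired(self, now: float) -> bool:
        return self.consumed or now - self.created_at > self.ttl_seconds
```

Notice there's no `expired` on Memory at all; the asymmetry is the design.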

Subscriptions are the routing rules. Each one says: "when an event of shape X lands, run matcher M; if it matches, build context with loader L and activate agent A through spawner S." They're typed Python in our case, and stored as durable rows so they survive restarts. Subscriptions are how the brain knows what to do.
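The matcher/loader/agent/spawner shape described above reads naturally as one typed record. This is a sketch of that shape only; the names, signatures, and the dict-shaped events are assumptions, not Kit's internals.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subscription:
    """One routing rule: matcher M, loader L, agent A, spawner S."""
    name: str
    matcher: Callable[[dict], bool]        # does this event concern us?
    loader: Callable[[dict], dict]         # build the context the agent needs
    agent: str                             # which agent to activate
    spawner: Callable[[str, dict], None]   # how to launch it

    def handle(self, event: dict) -> bool:
        """Run the rule against one event; True if an agent was spawned."""
        if not self.matcher(event):
            return False
        self.spawner(self.agent, self.loader(event))
        return True
```

Because a subscription is plain data plus callables, persisting it as a durable row (so it survives restarts, as the text notes) is mostly a serialization question.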

The daemon is the only piece that's alive. It listens for events on the brain, runs subscriptions, spawns the right agent into the right context. Everything else is shape; the daemon is motion.

The trick we keep relearning: agents themselves are not active participants. An agent is only alive during a turn. Between turns it doesn't exist. Something has to watch the brain on its behalf and wake it when the world changes. That something is the daemon. Agents are spawn targets, not watchers.
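The daemon's cycle (subscribe → match → spawn → repeat) is small enough to sketch as a toy event loop. Everything here is an assumption for illustration; the real daemon watches brain writes, not an in-process queue.

```python
import queue

def run_daemon(events: "queue.Queue", subscriptions: list,
               spawn, max_events=None) -> None:
    """Toy daemon loop: block on events, match subscriptions, spawn agents.

    `subscriptions` is a list of dicts with "matcher" and "agent" keys;
    `spawn(agent, event)` launches an agent for one turn. Agents are spawn
    targets only -- nothing here keeps an agent alive between events.
    """
    handled = 0
    while max_events is None or handled < max_events:
        event = events.get()               # block until the brain emits one
        for sub in subscriptions:
            if sub["matcher"](event):
                spawn(sub["agent"], event)
        handled += 1
```

The loop itself is the only long-lived process; every agent it spawns exists for exactly one turn and then is gone.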

Sovereignty, plainly

"Your context is yours" is a slogan. Sovereignty is the work behind it.

For Kit, sovereignty means three concrete things. One: the memory layer runs on your machine by default. The brain is a small local service plus a database file. Nothing leaves your laptop unless you choose. Two: the data format is portable. Everything inside the brain is documented, exportable, and re-importable into another instance. If you want to move from a hosted setup back to local, that's one button. If a future-you wants to rip Kit out entirely and walk away with the knowledge graph as plain JSON, that works too. Three: the protocol is model-agnostic. There's no special agent that has to be running for the brain to be useful, and there's no model lock-in inside the schema.
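The "walk away with the knowledge graph as plain JSON" claim implies a round-trippable export. A minimal sketch of what that could look like, with an assumed envelope format (the version field and key names are illustrative, not Kit's actual export schema):

```python
import json

def export_brain(memories: list, edges: list) -> str:
    """Serialize the whole brain to one portable, human-readable document."""
    return json.dumps(
        {"version": 1, "memories": memories, "edges": edges},
        indent=2, sort_keys=True,
    )

def import_brain(blob: str) -> tuple:
    """Load an exported brain back into (memories, edges)."""
    data = json.loads(blob)
    return data["memories"], data["edges"]
```

The design point is that export and import are inverses: moving hosted-to-local, or leaving Kit entirely, is a round trip through this one document rather than a migration project.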

The deeper move is what comes after one brain. Federation. Each Kit is a private brain; brains can publish typed feeds (a project's decisions, a person's relationship patterns), and other brains can subscribe with selective trust. No central platform. No universal index. Each brain makes up its own mind about what to take in and how to weight it. Most brains stay sovereign and quiet by default; the ones that opt into federation get distributed coordination without giving up control.

The model in the back of our heads is git, not Slack. Git is sovereign, local-first, federation-capable, and the federation primitive (a signed commit graph) is small enough to reason about. The brain federation sub-protocol we're sketching has the same shape: signed envelopes, typed feeds, capability handshakes. Async by default. Pull-primary, push-optional. Nothing is required for liveness.
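Federation is designed but not built, so this is only a sketch of the "signed envelope" primitive, standing in HMAC for whatever real signature scheme the protocol ends up using. The feed name, payload shape, and key handling are all assumptions.

```python
import hashlib
import hmac
import json

def seal(feed: str, payload: dict, key: bytes) -> dict:
    """Wrap one typed-feed item in a signed envelope."""
    body = json.dumps({"feed": feed, "payload": payload}, sort_keys=True)
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(envelope: dict, key: bytes) -> bool:
    """A subscribing brain checks the envelope before taking anything in."""
    expected = hmac.new(key, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

Selective trust falls out of verification plus local policy: a brain that can't verify an envelope, or chooses not to trust its feed, simply ignores it. Nothing central is consulted.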

Where we are

Honest snapshot, since the alpha banner means anything we say is provisional:

  • The brain is real and runs locally. The schema's settled. Memories, edges, the consolidation cycle, the small UI — all working.
  • The MCP integration is live. Several agent tools speak it; more drop in regularly.
  • The four primitives are implemented through a Phase-1 daemon (kit-loom) that watches brain writes and activates agents. As of tonight, that daemon spawns agents with no human in the loop. The next post in this series unpacks how.
  • Federation is designed but not built. The synthesis of that design lives in the brain and will get its own post when there's something running to point at.
  • The packaging story (a real installable product, with auto-detection, with a menu-bar app, with a homebrew cask) is months away. Right now the way to use Kit is to clone a repo and run a docker compose.

There's a long list of papercuts. Sandbox-mode permissions on macOS. Browser-extension capture for chat surfaces that don't expose hooks. The desktop store reconciler. Migration from local to hosted and back. Onboarding ingestion that turns 200 active project folders into 200 useful memories on day one. Each of these is its own post-and-build.

Why we keep going

The simple answer is that the alternative — every conversation starting cold, every cross-tool workflow re-explaining itself — is a tax we're tired of paying. The longer answer is that what we're building is the substrate underneath whatever the next decade of AI work looks like. If memory remains rented, the rest of the stack stays rented too. If memory becomes sovereign, the rest can follow.

Kit is one attempt at that. It might not be the one. The shape might need to change. We're publishing this build log because the shape changes faster when more people can see it changing.

The next post in this series is about how Kit started talking to itself across substrates without a human in the loop. That's tonight's work. Read on.