A memory system that gets smarter the longer you live with it.
Most knowledge tools file. Engram weaves.
Drop in a chaotic brain dump about apples, meditation, and a fight with your partner — all in the same paragraph. The system doesn't file it under "random_notes_2026.md." It extracts discrete atomic claims, routes the apple facts to your fruits page, files the meditation insight under health, and attaches the relationship fragment to the person it concerns. You didn't organize a thing. The system did.
This is not a note-taking app. It is not a wiki. It is not RAG. It is the thing that happens when you take Karpathy's insight — "the wiki is a persistent, compounding artifact, not a retrieval corpus" — and push it to its conclusion: an autonomous system that doesn't just store what you feed it, but integrates it into an ever-denser fabric of understanding.
Here is the problem with every knowledge tool you've ever used:
Filing tools (Obsidian, Notion, folders) make you do the maintenance. You create the links. You write the summaries. You decide where things go. The burden grows faster than the value, so you abandon your wiki. Everyone does.
Retrieval tools (RAG, search) pretend there's no maintenance at all. The LLM re-derives knowledge from scratch on every query. Nothing accumulates. Every question starts from zero.
Neither compounds.
Engram does something else. When you add a new source — a brain dump, a chat log, an article — six autonomous agents go to work. They don't file it. They weave it:
You mention apples in a brain dump about something unrelated. The system doesn't create a standalone "apples" note. It routes the apple atoms to your existing fruits page. It cross-references them with the nutrition entry under health. It notices that you've mentioned apples three times in six months and surfaces the connection. Over years, what started as scattered observations becomes an integrated understanding.
The knowledge graph doesn't just grow. It densifies. Connections strengthen. Clusters emerge. Each new piece of knowledge enriches every existing piece it touches.
This is compounding. The system gets smarter, not because you're adding more files, but because it's weaving them more tightly together.
The single most important architectural decision: every input is decomposed into order-independent atomic claims before compilation.
One fact per atom. One entity. One relationship. This means the system doesn't care what order you feed it things. It doesn't care if a brain dump covers five topics. It doesn't care if an idea is half-formed. The compiler extracts structure from chaos and routes atoms to the right pages. The Idea Guardian watches for orphaned fragments — two half-formed thoughts waiting to find their other half.
You don't organize. The system organizes.
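The real atom contract lives in docs/manifest/CONTRACTS.md; as a hedged sketch, the fields below illustrate what order-independence implies: an atom's identity derives from its content, not from when or where it arrived.

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)  # atoms are immutable once extracted
class Atom:
    claim: str           # one fact, stated standalone
    entity: str          # the single entity the claim concerns
    source_id: str       # provenance: which raw input produced it
    confidence: float = 1.0

    @property
    def atom_id(self) -> str:
        # Content-derived ID: the same claim hashes the same
        # no matter when it arrives or what it arrived with.
        return hashlib.sha256(
            f"{self.entity}:{self.claim}".encode()
        ).hexdigest()[:12]

# A chaotic brain dump decomposes into independently routable atoms:
dump = [
    ("Honeycrisp apples keep for months refrigerated", "apples"),
    ("Ten minutes of breath-counting beats a longer app session", "meditation"),
]
atoms = [Atom(claim=c, entity=e, source_id="dump-2026-01") for c, e in dump]
for a in atoms:
    print(a.atom_id, "->", a.entity)
```

Because the ID ignores the source, the same claim arriving from two different brain dumps resolves to the same atom.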
| Agent | What it does |
|---|---|
| Curator | Ingests and compiles new sources, weaving them into existing knowledge |
| Editor | Lints and fixes structural issues, keeps the knowledge base healthy |
| Idea Guardian | Breeds serendipitous connections — finds two fragments that belong together |
| Reflector | Detects patterns across time, notices what's changing, writes epoch summaries |
| Relationship Curator | Tracks edge health — which connections are fading, strengthening, or dead |
| Researcher | Investigates gaps — what's missing, what needs filling |
They don't communicate directly. They coordinate through the shared knowledge base, like cognitive modules operating on a shared representation.
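A minimal Python sketch of that blackboard style, with illustrative names that are not Engram's API and stand-in logic where the real agents are LLM-driven:

```python
# Blackboard coordination: agents share no direct channel. Each reads
# the knowledge base, does its one job, and writes results back for
# the others to find.
class KnowledgeBase:
    def __init__(self) -> None:
        self.pages: dict[str, list[str]] = {}  # topic -> claims
        self.orphans: list[str] = []           # fragments awaiting a match

def curator(kb: KnowledgeBase, claim: str, topic: str) -> None:
    # Routes a new claim onto the page it belongs to.
    kb.pages.setdefault(topic, []).append(claim)

def idea_guardian(kb: KnowledgeBase) -> list[tuple[str, str]]:
    # Pairs waiting fragments; real matching would be semantic.
    return list(zip(kb.orphans[0::2], kb.orphans[1::2]))

kb = KnowledgeBase()
curator(kb, "apples store well when cold", "fruits")
kb.orphans += ["a city where streets are libraries",
               "what if shelves were streets?"]
print(idea_guardian(kb))
```

The point of the pattern: adding a seventh agent means adding one more reader/writer of the shared store, not rewiring six pairwise channels.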
Twelve life domains, not one: introspection, people, relationships, philosophy, health, learning, ideas, projects, events, ai, treasured, archives.
Melancholic writing belongs here. Treasured conversations belong here. Supplement tracking, creative fragments, half-finished stories, personal philosophy. The full breadth of what a human thinks about and cares about.
Some things are not data. `protected: true` is a hard gate. No agent may touch what you mark as treasured. The system knows the difference between knowledge to synthesize and knowledge to preserve.
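A hypothetical page header showing the gate; every field here other than `protected` is illustrative:

```yaml
---
title: Letter from Dad, 2019
domain: treasured
protected: true   # hard gate: no agent may edit or synthesize this page
---
```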
A tool serves a task. A relationship spans a life.
raw/ → atoms/ → wiki/ → SQLite graph
- Ingest — Content-hashed, deduplicated, stored. TextAdapter handles markdown and plain text.
- Atomize — LLM extracts discrete claims. Immutable. Provenance-tracked.
- Compile — Atoms grouped into wiki pages with YAML frontmatter, wikilinks, confidence scores.
- Query — FTS5 full-text search + graph navigation + LLM synthesis with wiki:// citations.
Every claim traces back to its source. Files are canonical. The database is a derived index — rebuildable from scratch.
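A sketch of the retrieval half of the Query step, using SQLite's built-in FTS5; the table and column names are assumptions, and the synthesis-with-citations step is left to the LLM layer:

```python
import sqlite3

# Derived index: rebuildable from the atom files at any time.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE atoms USING fts5(claim, page, source)")
db.executemany(
    "INSERT INTO atoms VALUES (?, ?, ?)",
    [
        ("Honeycrisp apples keep for months refrigerated", "fruits", "raw/dump-01.md"),
        ("Short daily meditation beats long weekly sessions", "health", "raw/dump-01.md"),
    ],
)
# FTS5 MATCH does the retrieval; each returned row carries the source
# path that backs a wiki:// citation in the synthesized answer.
rows = db.execute(
    "SELECT claim, source FROM atoms WHERE atoms MATCH ? ORDER BY rank",
    ("apples",),
).fetchall()
print(rows)
```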
Python 3.12+ and Node.js required.
```shell
git clone <repo-url> && cd engram
brew bundle install        # System deps (lychee, shellcheck)
uv sync                    # Python deps
npm install                # Node deps (markdownlint, jscpd)
uv run pre-commit install
```

Set `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY`, or use local Ollama. Configure in config/engram.toml.
```shell
uv run engram init                             # Initialize
uv run engram ingest path/to/notes.md          # Ingest a source
uv run engram compile                          # Compile into wiki pages (needs LLM)
uv run engram query "what do I know about X?"  # Query with cited synthesis
uv run engram status                           # System health
```

| For | Read |
|---|---|
| Active roadmap | .taskmaster/tasks/ |
| Vision & life domains | docs/manifest/VISION.md |
| Architecture & data flow | docs/manifest/ARCHITECTURE.md |
| Schema & contracts | docs/manifest/CONTRACTS.md |
| Code conventions | docs/manifest/CONVENTIONS.md |
| Agent rules | docs/manifest/AGENTS.md |
| Development guide | docs/DEVELOPMENT.md |
AGENTS.md is the recommended entry point for new contributors (human or AI).