📖 Docs · 🇺🇸 English · 🇨🇳 简体中文
Lore builds persistent LLM knowledge bases from your project content — compiled markdown wikis, not vector embeddings.
Turn raw files, URLs, and transcripts into a navigable wiki organized by an LLM librarian. Ingest once, compile, and your knowledge stays useful across sessions without the retrieval noise of RAG.
Built for teams who need their LLMs to retain real architectural context across sessions.
- Compiled markdown wikis, not vector embeddings — Structured, human-readable, git-friendly. No opaque vectors or retrieval noise.
- LLM-driven librarian — An LLM actively organizes and interlinks your knowledge like a full-time research librarian.
- Paragraph-level provenance — Every sentence traces back to its source. Inline annotations tell you exactly which documents contributed to each line.
- Backlinks + FTS5/BM25 search — Fast, precise retrieval without vector similarity noise. Follow links to adjacent concepts.
- Code-driven pipeline — Deterministic code handles ingestion, compilation, indexing, and graph building. Tokens spent on knowledge, not infrastructure.
- Mixed source ingestion — Docs, code notes, URLs, chat transcripts, and media. Lore normalizes everything into a consistent knowledge structure.
- Export everywhere — Slides, PDF, DOCX, HTML, canvas, GraphML. Your knowledge isn't locked in a proprietary format.
- Agent-ready MCP server — 16 tools over stdio for retrieval, graph diagnostics, write actions, and maintenance. Compatible with any MCP host.
- Git-friendly & portable — Your wiki is plain markdown. Commit it, branch it, ship it with your project.
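As a quick sketch of that git workflow (the `.lore/wiki` path is from this README; the commit message is illustrative):

```bash
# The compiled wiki is plain markdown, so ordinary git workflows apply.
git add .lore/wiki
git commit -m "Recompile wiki after ingesting design notes"
```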
Quick start:

```bash
# 1) Install
npm install -g @telepat/lore

# 2) Create a lore repo in your project
lore init

# 3) Add source material
lore ingest ./README.md
lore ingest https://example.com/article

# 4) Compile into wiki pages
lore compile

# 5) Search and ask questions
lore search "architecture"
lore query "How does this system work?"
```

Requirements:

- Node.js 22+
- Optional: `yt-dlp` for video transcript ingestion (macOS: `brew install yt-dlp`)
Lore ingests content into `.lore/raw/`, compiles it into linked wiki articles in `.lore/wiki/articles/`, then builds a search index and backlink graph. Query and search resolve through the graph and FTS index. Exports bundle wiki content into slides, PDF, DOCX, web, canvas, or GraphML formats.
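A minimal sketch of what that pipeline leaves on disk, assuming only the directory names above (the index and graph files are not named in this README):

```bash
ls .lore/raw/            # ingested source material
ls .lore/wiki/articles/  # compiled, interlinked markdown articles
```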
Lore ships with a first-class MCP server for agent integration:
- MCP server — Run `lore mcp` to start the stdio MCP server with 16 tools:
  - Retrieval: `search`, `ask`, `explain`, `list_articles`, `get_article`, `get_neighbors`, `path`
  - Graph diagnostics: `graph_stats`, `lint_summary`, `list_orphans`, `list_gaps`, `list_ambiguous`
  - Write: `ingest`, `compile`
  - Ingest / maintenance: `check_duplicate`, `list_raw_tags`, `rebuild_index`
- Compatible hosts — Works with Claude Code, Cursor, VS Code Copilot, and any stdio MCP client.
- Recommended agent loop: `list_orphans` → `list_gaps` → `list_ambiguous` → `ingest`/`compile` → `rebuild_index(repair=true)`.
- Agent docs — The MCP Server Guide covers tool schemas, example calls, and troubleshooting.
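To connect a host, register `lore mcp` as a stdio server. For example, with Claude Code's CLI (a sketch; the server name `lore` is arbitrary, and other hosts have their own registration syntax):

```bash
# Register Lore as a stdio MCP server in Claude Code
claude mcp add lore -- lore mcp
```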
- Secrets are stored in OS secure storage (Keychain on macOS, platform equivalent on Linux/Windows) when available.
- If secure storage is unavailable or explicitly disabled (`LORE_DISABLE_KEYTAR=true`), secret writes fail with guidance to use environment variables.
- Lore does not persist secrets in plaintext fallback files.
Environment variables (highest precedence at runtime):
- `OPENROUTER_API_KEY`
- `REPLICATE_API_TOKEN`
- `LORE_CF_ACCOUNT_ID`, `LORE_CF_TOKEN`
- `LORE_DISABLE_KEYTAR`
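A minimal sketch of configuring secrets via the environment instead of OS storage (the key value is a placeholder):

```bash
# Environment variables take precedence over OS secure storage at runtime.
export OPENROUTER_API_KEY="sk-or-..."   # placeholder key
export LORE_DISABLE_KEYTAR=true         # skip OS secure storage entirely
```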
- Documentation site
- Quickstart
- Ingesting content
- Compiling your wiki
- MCP server
- Troubleshooting
- CLI reference
- Repository
- npm package
Contributions are welcome. See Development for setup, workflow, and quality gates.
MIT. See LICENSE.