MCP server with RAG (BM25) for llms.txt documentation. Provides fast keyword-based search across documentation sources with automatic background refresh.
- llms.txt support - Automatically parses and indexes documentation from llms.txt files
- BM25 search - Fast, keyword-based retrieval with relevance scoring and stopword filtering
- Named sources - Configure sources with names like `fast_mcp:https://...` for easy filtering
- Source filtering - Search across all sources or filter by a specific source name
- Persistent storage - DuckDB-based index that survives restarts
- Background refresh - Configurable auto-refresh interval (default: 6 hours)
- Source attribution - Every search result includes source name and URL
- Add to Claude Code (`~/.claude/claude_code_config.json`):

```json
{
"mcpServers": {
"llmdoc": {
"command": "uvx",
"args": ["llmdoc"],
"env": {
"LLMDOC_SOURCES": "fast_mcp:https://gofastmcp.com/llms.txt"
}
}
}
}
```

- Restart Claude Code - the server will automatically fetch and index documentation.
- Ask Claude questions like "How do I create a tool in FastMCP?" and it will search the indexed docs.
llms.txt is a specification for providing LLM-friendly documentation. Websites add a /llms.txt markdown file to their root directory containing curated, concise content optimized for AI consumption. LLMDoc indexes these files and their linked documents to enable fast full-text search.
Example sources include `https://gofastmcp.com/llms.txt` and `https://ai.pydantic.dev/llms.txt` (both used in the examples below).
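For illustration, an llms.txt file is just curated Markdown: an H1 title, a short blockquote summary, and sections of links to individual documents. A minimal, hypothetical example (not any site's actual file):

```markdown
# FastMCP

> The fast, Pythonic way to build MCP servers and clients.

## Docs

- [Tools](https://gofastmcp.com/servers/tools.md): Creating and registering tools
- [Resources](https://gofastmcp.com/servers/resources.md): Exposing data to clients
```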
```bash
# Run directly with uvx (no install needed)
uvx llmdoc
# Or install with uv
uv tool install llmdoc
# Or install with pip
pip install llmdoc
# Or install with pipx
pipx install llmdoc
```

Sources can be specified in two formats:
- Named: `name:url` - e.g., `fast_mcp:https://gofastmcp.com/llms.txt`
- Unnamed: just the URL - the name is auto-generated from the domain
Named sources allow you to filter search results by source name.
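As a sketch of how such a spec can be split into a name and a URL (illustrative only; `parse_source` is a hypothetical helper, not LLMDoc's internal API, and the domain-to-name rule is an assumption):

```python
from urllib.parse import urlparse

def parse_source(spec: str) -> tuple[str, str]:
    """Split a source spec into (name, url).

    "fast_mcp:https://gofastmcp.com/llms.txt" -> ("fast_mcp", "https://...")
    "https://gofastmcp.com/llms.txt"          -> ("gofastmcp_com", "https://...")
    """
    # A named spec has a "name:" prefix before the URL scheme; a bare URL
    # starts with http:// or https:// directly.
    if not spec.startswith(("http://", "https://")):
        name, _, url = spec.partition(":")
        return name, url
    # Unnamed: derive a name from the domain (assumed convention),
    # e.g. gofastmcp.com -> gofastmcp_com
    host = urlparse(spec).netloc
    return host.replace(".", "_").replace("-", "_"), spec
```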
```bash
# Comma-separated list of sources (named or unnamed)
export LLMDOC_SOURCES="fast_mcp:https://gofastmcp.com/llms.txt,pydantic_ai:https://ai.pydantic.dev/llms.txt"
# Optional: Custom database path (default: ~/.llmdoc/index.db)
export LLMDOC_DB_PATH="/path/to/index.db"
# Optional: Refresh interval in hours (default: 6)
export LLMDOC_REFRESH_INTERVAL="6"
# Optional: Max concurrent document fetches (default: 5)
export LLMDOC_MAX_CONCURRENT="5"
# Optional: Skip refresh on startup (default: false)
export LLMDOC_SKIP_STARTUP_REFRESH="true"
```

Alternatively, create `llmdoc.json` in the working directory:

```json
{
"sources": [
"fast_mcp:https://gofastmcp.com/llms.txt",
"pydantic_ai:https://ai.pydantic.dev/llms.txt"
],
"db_path": "~/.llmdoc/index.db",
"refresh_interval_hours": 6,
"max_concurrent_fetches": 5,
"skip_startup_refresh": false
}
```

Or with explicit name/url objects:

```json
{
"sources": [
{"name": "fast_mcp", "url": "https://gofastmcp.com/llms.txt"},
{"name": "pydantic_ai", "url": "https://ai.pydantic.dev/llms.txt"}
]
}
```

LLMDoc uses stdio transport and is designed to be launched by MCP clients. Configure it in your MCP client (see below), and the client will start the server automatically.
For manual testing:
```bash
# Using uvx
uvx llmdoc
# Or as module
python -m llmdoc
```

- `search_docs(query, limit, source)` - Search documentation and return relevant passages with source URLs. The optional `source` parameter filters by source name (e.g., `fast_mcp`).
- `get_doc(url, offset, limit)` - Get document content with pagination support for large documents. Parameters: `offset` (default: 0) is the start position in bytes; `limit` (default: 50000, max: 100000) is the max bytes per call. Returns pagination metadata (`has_more`, `total_length`).
- `get_doc_excerpt(url, query, max_chunks, context_chars)` - Get relevant excerpts from a large document matching a query.
- `list_sources()` - List all configured documentation sources with statistics.
- `refresh_sources()` - Manually trigger a refresh of all documentation.
- `doc://sources` - Returns JSON with the configured sources list and refresh interval.
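For example, a script built on the official `mcp` Python SDK could launch the server over stdio and call `search_docs` directly. A minimal sketch, following the SDK's documented stdio-client pattern; the tool arguments assume the signature listed above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch llmdoc over stdio, the same way an MCP client would.
    params = StdioServerParameters(
        command="uvx",
        args=["llmdoc"],
        env={"LLMDOC_SOURCES": "fast_mcp:https://gofastmcp.com/llms.txt"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "search_docs",
                arguments={"query": "create tool FastMCP", "limit": 3},
            )
            print(result.content)

asyncio.run(main())
```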
Add to `~/.claude/claude_code_config.json`:

```json
{
"mcpServers": {
"llmdoc": {
"command": "uvx",
"args": ["llmdoc"],
"env": {
"LLMDOC_SOURCES": "fast_mcp:https://gofastmcp.com/llms.txt,pydantic_ai:https://ai.pydantic.dev/llms.txt"
}
}
}
}
```

Add to your MCP client's configuration file:

```json
{
"mcpServers": {
"llmdoc": {
"command": "uvx",
"args": ["llmdoc"],
"env": {
"LLMDOC_SOURCES": "fast_mcp:https://gofastmcp.com/llms.txt"
}
}
}
}
```

Once configured, the LLM can use these tools:
User: How do I create a tool in FastMCP?
LLM: [calls search_docs("create tool FastMCP")]
Result:
```json
[
{
"title": "Tools",
"snippet": "Creating a tool is as simple as decorating a Python function with @mcp.tool...",
"url": "https://gofastmcp.com/servers/tools.md",
"source": "fast_mcp",
"source_url": "https://gofastmcp.com/llms.txt",
"score": 12.5
}
]
```
You can filter results to a specific documentation source:
User: How do I create an agent in PydanticAI?
LLM: [calls search_docs("create agent", source="pydantic_ai")]
Result:
```json
[
{
"title": "Agents",
"snippet": "Agents are the primary interface for interacting with LLMs in PydanticAI...",
"url": "https://ai.pydantic.dev/agents.md",
"source": "pydantic_ai",
"source_url": "https://ai.pydantic.dev/llms.txt",
"score": 10.2
}
]
```
Use `get_doc` to retrieve document content (supports pagination for large documents):
LLM: [calls get_doc("https://ai.pydantic.dev/agents.md")]
Result:
```json
{
"title": "Agents",
"content": "# Agents\n\nAgents are the primary interface for interacting with LLMs in PydanticAI...",
"url": "https://ai.pydantic.dev/agents.md",
"source": "pydantic_ai",
"source_url": "https://ai.pydantic.dev/llms.txt",
"offset": 0,
"length": 5432,
"total_length": 5432,
"has_more": false
}
```
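A client can stitch a large document together by following `has_more`. A hedged sketch: `fetch_full_document` is a hypothetical helper, and it assumes the tool result arrives as a single JSON text payload:

```python
import json

async def fetch_full_document(session, url: str) -> str:
    """Page through get_doc until has_more is false (illustrative sketch)."""
    parts, offset = [], 0
    while True:
        result = await session.call_tool(
            "get_doc",
            arguments={"url": url, "offset": offset, "limit": 50_000},
        )
        # Assumption: the result is one JSON text content item.
        page = json.loads(result.content[0].text)
        parts.append(page["content"])
        if not page["has_more"]:
            return "".join(parts)
        # Continue from the end of the bytes returned so far.
        offset = page["offset"] + page["length"]
```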
```
+------------------+
| MCP Client |
| (Claude, Cursor) |
+--------+---------+
| stdio
v
+------------------+ +------------------+ +------------------+
| FastMCP Server |---->| Document Store |<----|Document Fetcher |
| | | (DuckDB) | | (async HTTP) |
| - search_docs | | | | |
| - get_doc | | - Persistence | | - llms.txt parse |
| - list_sources | | - Deduplication | | - HTML→Markdown |
| - refresh | | - Change detect | | - Concurrent |
+--------+---------+ +------------------+ +------------------+
|
v
+------------------+
| BM25 Index |
| (in-memory) |
| |
| - Chunking |
| - Tokenization |
| - Scoring |
+------------------+
```
LLMDoc fetches documentation from llms.txt sources, stores it in DuckDB, and provides fast BM25 search through the MCP protocol.
When configured with documentation sources, LLMDoc:
- Parses llms.txt files to discover all linked documents
- Fetches each document concurrently (with rate limiting)
- Converts HTML pages to Markdown automatically
- Extracts titles from the first H1 heading
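The discovery step boils down to pulling Markdown links out of the llms.txt file. A minimal sketch (not LLMDoc's actual parser):

```python
import re

# Matches markdown links: [Title](https://example.com/page.md)
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_links(llms_txt: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every linked document in an llms.txt file."""
    return LINK_RE.findall(llms_txt)
```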
Documents are processed for efficient search:
- Chunking: Large documents are split into ~500 character chunks at sentence boundaries
- Tokenization: Text is lowercased and stopwords are removed
- Indexing: BM25 algorithm indexes all chunks for relevance scoring
When you search:
- Your query is tokenized the same way as documents
- BM25 scores each chunk against your query
- Results are deduplicated by document URL
- Top results are returned with relevance scores and snippets
LLMDoc automatically keeps documentation up-to-date:
- Checks for staleness on startup
- Refreshes every 6 hours (configurable)
- Uses content hashing to skip unchanged documents
- Removes documents no longer in llms.txt
LLMDoc uses the BM25Okapi algorithm from the rank_bm25 library. Key characteristics:
- Term frequency saturation: Diminishing returns for repeated terms
- Document length normalization: Shorter documents aren't unfairly penalized
- IDF weighting: Rare terms are weighted higher than common ones
The implementation is thread-safe, using `threading.RLock()` for concurrent access.
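A minimal sketch of the indexing and scoring path using `rank_bm25`, with a deliberately tiny stopword set (the real list has 213 entries, per below):

```python
from rank_bm25 import BM25Okapi

STOPWORDS = {"a", "an", "the", "is", "to", "in", "with"}  # tiny subset for brevity

def tokenize(text: str) -> list[str]:
    # Lowercase and drop stopwords, mirroring the pipeline described above.
    return [t for t in text.lower().split() if t not in STOPWORDS]

chunks = [
    "Creating a tool is as simple as decorating a Python function",
    "Resources expose data to clients through URIs",
]
bm25 = BM25Okapi([tokenize(c) for c in chunks])

# Score every chunk against a query; higher scores are more relevant.
scores = bm25.get_scores(tokenize("decorating a function"))
best = max(range(len(chunks)), key=scores.__getitem__)
print(chunks[best], scores[best])
```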
Documents are chunked using a multi-level approach:
- Paragraph splitting: first split on double newlines (`\n\n`)
- Sentence-boundary aware: long paragraphs are split at `.`, `!`, or `?` followed by whitespace
- Overlap: a 100-character overlap between chunks maintains context
Configuration:
- `chunk_size`: 500 characters (default)
- `chunk_overlap`: 100 characters (default)
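Roughly, the chunker could look like this. A simplified sketch using the defaults above; the sentence-boundary snapping described earlier is only noted in a comment, not implemented:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split on paragraphs first, then slice long paragraphs with overlap."""
    chunks: list[str] = []
    for para in text.split("\n\n"):          # level 1: paragraph boundaries
        para = para.strip()
        if not para:
            continue
        if len(para) <= chunk_size:
            chunks.append(para)
            continue
        # Level 2: slide a window over long paragraphs; a fuller version
        # would snap each cut to the nearest ". ", "! ", or "? " boundary.
        step = chunk_size - overlap
        for start in range(0, len(para), step):
            chunks.append(para[start:start + chunk_size])
            if start + chunk_size >= len(para):
                break
    return chunks
```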
DuckDB stores documents with this schema:
```sql
CREATE TABLE documents (
id INTEGER PRIMARY KEY,
source_name TEXT NOT NULL, -- e.g., 'fast_mcp'
source_url TEXT NOT NULL, -- llms.txt URL
doc_url TEXT NOT NULL UNIQUE, -- document URL
title TEXT,
content TEXT NOT NULL,
content_hash TEXT NOT NULL, -- SHA256 for change detection
updated_at TIMESTAMP NOT NULL
)
```

Indexes on `source_url` and `source_name` enable efficient filtering.
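Change detection on top of this schema can be sketched with the `duckdb` Python API. Illustrative only: the `doc_id` sequence is a stand-in for however LLMDoc actually assigns `id`:

```python
import hashlib

import duckdb

con = duckdb.connect("index.db")
con.execute("CREATE SEQUENCE IF NOT EXISTS doc_id")  # hypothetical id source

def upsert_document(source_name: str, source_url: str, doc_url: str,
                    title: str, content: str) -> bool:
    """Skip unchanged documents by comparing SHA-256 content hashes."""
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    row = con.execute(
        "SELECT content_hash FROM documents WHERE doc_url = ?", [doc_url]
    ).fetchone()
    if row and row[0] == content_hash:
        return False  # content unchanged: nothing to re-index
    if row:
        con.execute(
            "UPDATE documents SET title = ?, content = ?, content_hash = ?, "
            "updated_at = now() WHERE doc_url = ?",
            [title, content, content_hash, doc_url],
        )
    else:
        con.execute(
            "INSERT INTO documents (id, source_name, source_url, doc_url, "
            "title, content, content_hash, updated_at) "
            "VALUES (nextval('doc_id'), ?, ?, ?, ?, ?, ?, now())",
            [source_name, source_url, doc_url, title, content, content_hash],
        )
    return True
```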
LLMDoc supports multiple concurrent instances:
- Read operations: Multiple instances can search simultaneously (read-only DuckDB mode)
- Write operations: Single instance holds exclusive lock during refresh
- Graceful handling: If refresh is locked, operation skips with status message
Document fetching uses `asyncio.Semaphore` to limit concurrent HTTP requests (default: 5).
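A hedged sketch of the bounded-concurrency fetch loop, assuming `httpx` as the async HTTP client (the description above only says "async HTTP"):

```python
import asyncio

import httpx

async def fetch_all(urls: list[str], max_concurrent: int = 5) -> dict[str, str]:
    """Fetch documents concurrently, never more than max_concurrent at once."""
    sem = asyncio.Semaphore(max_concurrent)
    results: dict[str, str] = {}

    async with httpx.AsyncClient(follow_redirects=True, timeout=30) as client:
        async def fetch(url: str) -> None:
            async with sem:  # blocks while max_concurrent fetches are in flight
                resp = await client.get(url)
                resp.raise_for_status()
                results[url] = resp.text

        await asyncio.gather(*(fetch(u) for u in urls))
    return results
```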
213 English stopwords are filtered during tokenization, including:
- Articles: a, an, the
- Prepositions: in, on, at, by, for, with, about, etc.
- Pronouns: I, you, he, she, it, we, they, etc.
- Auxiliaries: is, are, was, were, be, been, being, etc.
- Common verbs: have, has, had, do, does, did, etc.
MIT License - see LICENSE file.