Automated book download manager for Usenet & Torrents
Monitor authors. Search indexers. Download. Organize. Done.
Readarr is dead. The official project was archived in June 2025 and its metadata backend (api.bookinfo.club) is permanently offline. Community forks rely on fragile Goodreads scrapers that break regularly. There was no reliable, open-source tool for automated book management on Usenet.
Bindery is the clean-room replacement. Built from scratch in Go with a modern React UI, Bindery uses only stable, documented public APIs for book metadata. No scraping. No dead backends. No fragile dependencies.
- Author monitoring — Add authors and Bindery tracks all their works automatically via OpenLibrary's author works endpoint
- Book tracking — Per-book monitor toggle, status workflow (wanted → downloading → downloaded → imported)
- Dual-format books — Each book can hold an ebook and an audiobook simultaneously. The Book Detail page has separate format panels with independent status, file path, and grab buttons. The search pipeline uses Newznab category 7020 for ebooks and 3030 for audiobooks; the importer moves whole audiobook folders (multi-part `.m4b`/`.mp3`) as one unit into a separate audiobook library root; the Wanted page lists each missing format as a separate row.
- Series support — Books grouped by series with position tracking and dedicated Series page
- Edition tracking — Multiple editions per work, with format, ISBN, publisher, page count
- Library scan — Walk `/books/` and reconcile existing files with wanted books in the database; trigger on-demand from Settings → General → Scan Library
- Author aliases — Merge duplicate authors ("RR Haywood" / "R.R. Haywood" / "R R Haywood") into one canonical row from the Authors page; the add-author flow detects aliases and prompts for a merge instead of silently ingesting a duplicate
- Newznab + Torznab — Query multiple Usenet and torrent indexers in parallel, deduplicated and ranked
- SABnzbd, qBittorrent, Transmission — Full support for both Usenet and torrent download clients
- Auto-grab — Scheduler searches for wanted books every 12h and automatically grabs the best result. Adding a new author or flipping a book to `wanted` fires an immediate search — no waiting for the next scheduled pass. Toggle the global kill-switch at Settings → General → Auto-grab to pause all automatic grabbing without losing your monitored list.
- Interactive search — Manual per-book search from the Wanted page with full result details; the Grab button shows a spinner while in-flight and a ✓ on success
- Smart matching — Four-tier query fallback (`t=book` → surname+title → author+title → title); word-boundary keyword matching; contiguous-phrase requirement for multi-word titles; dual-author anchor for ambiguous short titles; subtitle-aware (`Title: Subtitle`)
- Composite ranking — Results scored by format quality, edition tags (RETAIL / UNABRIDGED / ABRIDGED), year match to the book's release year, grab count, size, and ISBN exact-match bonus
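The four-tier query fallback can be sketched roughly as below. This is an illustrative reconstruction, not Bindery's actual code: the function name, the query formats, and the surname extraction are all assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// buildQueries returns search queries from most to least specific,
// mirroring the four-tier fallback described above (illustrative sketch).
func buildQueries(author, title string) []string {
	surname := author[strings.LastIndex(author, " ")+1:]
	return []string{
		fmt.Sprintf("t=book&author=%s&title=%s", author, title), // tier 1: structured book search
		surname + " " + title,                                   // tier 2: surname + title
		author + " " + title,                                    // tier 3: full author + title
		title,                                                   // tier 4: title only
	}
}

func main() {
	for _, q := range buildQueries("Brandon Sanderson", "The Way of Kings") {
		fmt.Println(q)
	}
}
```

Each tier only runs if the previous one returned nothing, so a well-indexed book is found by the structured query and never falls through to the noisy title-only search.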
- Quality profiles — Preference order for EPUB / MOBI / AZW3 / PDF, with cutoff rules
- Language filter — Preferred language setting (English by default); filters releases with foreign-language tags at word boundaries
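A word-boundary tag filter like the one described can be sketched with the standard `regexp` package. The tag list and helper name here are illustrative, not Bindery's exact set:

```go
package main

import (
	"fmt"
	"regexp"
)

// foreignTag matches common foreign-language release tags at word
// boundaries only, so "GERMAN" in a release name is caught but a title
// word like "Germanic" is not (tag list is an illustrative subset).
var foreignTag = regexp.MustCompile(`(?i)\b(german|french|spanish|dutch|italian)\b`)

// allowedRelease reports whether a release title passes the language
// filter when English is the preferred language (sketch).
func allowedRelease(title, preferredLang string) bool {
	if preferredLang != "English" {
		return true // sketch: only filter when English is preferred
	}
	return !foreignTag.MatchString(title)
}

func main() {
	fmt.Println(allowedRelease("Author - Title (2024) EPUB", "English"))  // true
	fmt.Println(allowedRelease("Author - Titel GERMAN EPUB", "English"))  // false
	fmt.Println(allowedRelease("Germanic Myths EPUB", "English"))         // true
}
```

The `\b` anchors are what keeps "Germanic Myths" from being rejected as a German release.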
- Custom formats — Regex-based release scoring for freeleech, retail tags, etc.
- Delay profiles — Wait N hours before grabbing to let higher-quality releases appear
- Blocklist — Consulted on every search and auto-grab; prevents re-grabbing releases you've rejected. Add entries directly from History with one click
- Failure visibility — Download errors surfaced in Queue (active) and History (permanent)
- Automatic import — Completed downloads matched by NZO ID, placed in library with configurable naming template
- Import modes — Move (default): source deleted after import. Copy: source kept so torrent clients continue seeding. Hardlink: zero extra disk, both paths share an inode (download dir and library must be on the same filesystem). Configurable under Settings → General → Import Mode.
- Naming tokens — `{Author}`, `{SortAuthor}`, `{Title}`, `{Year}`, `{ext}` with sanitized path components
- Cross-filesystem moves — Atomic rename when possible, copy+verify+delete for NFS/separate volumes
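Template expansion with sanitized components can be sketched with `strings.NewReplacer`. The token names match the list above; the specific sanitizer rules here are assumptions, not Bindery's exact behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// expand fills the naming tokens into a path template and strips or
// rewrites characters that are illegal in file names (sanitizer rules
// are illustrative).
func expand(template, author, sortAuthor, title, year, ext string) string {
	sanitize := strings.NewReplacer("/", "-", "\\", "-", ":", " -", "?", "", "*", "")
	r := strings.NewReplacer(
		"{Author}", sanitize.Replace(author),
		"{SortAuthor}", sanitize.Replace(sortAuthor),
		"{Title}", sanitize.Replace(title),
		"{Year}", year,
		"{ext}", ext,
	)
	return r.Replace(template)
}

func main() {
	fmt.Println(expand("{SortAuthor}/{Title} ({Year}).{ext}",
		"Iain M. Banks", "Banks, Iain M.", "Consider Phlebas", "1987", "epub"))
	// → Banks, Iain M./Consider Phlebas (1987).epub
}
```

Sanitizing each token before substitution (rather than the final path) keeps the template's own `/` separators intact while neutralizing slashes inside titles.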
- History — Every grab, import, and failure recorded with full detail (shown inline on History page)
- Calibre library integration — Three modes, all configurable under Settings → Calibre:
  - Off — no Calibre interaction (default).
  - calibredb CLI — every successful import calls `calibredb add --with-library <path>`; the returned Calibre book id is persisted on the Bindery book row.
  - Plugin bridge — POSTs the imported file path to the Bindery Bridge plugin running an HTTP server inside a separate Calibre container; no shared binary or sidecar required. See docs/CALIBRE-PLUGIN.md for setup.
  - Library import — read an existing Calibre library's `metadata.db` directly and ingest it as Bindery's catalogue (idempotent; trigger from the Settings page or set `calibre.sync_on_startup`). Toggling the mode takes effect without a restart.
- OpenLibrary (primary) — Authors, books, editions, covers, ISBN lookup
- Google Books (enricher) — Richer descriptions and ratings
- Hardcover.app (enricher) — Community ratings and series data via GraphQL
- DNB (enricher) — Deutsche Nationalbibliothek via the public SRU endpoint. No API key. Always on. Fills description, language, year, and publisher from MARC21 records — especially useful for German-language titles where OpenLibrary coverage is thin.
- Audnex — Audiobook narrator, duration, cover, and description by Audible ASIN via the free api.audnex.us wrapper. Trigger with `POST /api/v1/book/{id}/enrich-audiobook`.
- Cover image proxy — Cover images are fetched and cached server-side under `<dataDir>/image-cache/` (30-day TTL). All `imageURL` fields in API responses are rewritten to `/api/v1/images?url=<encoded>` before leaving the server. The browser never contacts Goodreads, OpenLibrary, or Google Books directly — no IP leakage, no third-party tracking.
- No Goodreads scraping. All sources use documented, stable public APIs.
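The URL rewrite the image proxy performs is just percent-encoding the upstream URL into the documented endpoint's query string. A one-liner sketch (the helper name is ours, the endpoint path is from the docs above):

```go
package main

import (
	"fmt"
	"net/url"
)

// proxied rewrites an upstream cover URL to the server-side image proxy
// endpoint, so the browser never fetches third-party hosts directly.
func proxied(upstream string) string {
	return "/api/v1/images?url=" + url.QueryEscape(upstream)
}

func main() {
	fmt.Println(proxied("https://covers.openlibrary.org/b/id/240727-L.jpg"))
	// → /api/v1/images?url=https%3A%2F%2Fcovers.openlibrary.org%2Fb%2Fid%2F240727-L.jpg
}
```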
- CSV import — Upload a newline-separated list of author names (or a `name,monitored,searchOnAdd` CSV); each name is resolved against OpenLibrary.
- Readarr import — Upload `readarr.db` directly. Authors are re-resolved via OpenLibrary (Goodreads IDs aren't portable since bookinfo.club is dead); indexers, download clients, and blocklist entries port structurally. Run a library scan afterward to match existing files.
- CLI — `bindery migrate csv <path>` and `bindery migrate readarr <path>` for first-time bulk imports without opening the UI.
- UI — Settings → Import tab with file upload + per-section result summary.
- Webhook notifications — Configurable HTTP callbacks for grab / import / failure events (pipe to Apprise, ntfy, Home Assistant, etc.)
- Metadata profiles — Filter books by language, popularity, page count, ISBN presence
- Import lists — Auto-add authors/books from external sources; exclusion list to skip unwanted entries
- Tag system — Scope indexers/profiles/notifications to specific authors
- Backup/restore — Snapshot the SQLite database on demand
- Log viewer — Settings → Logs shows the last 200 entries from an in-process ring buffer, colour-coded by severity with WARN/ERROR filters and 5 s auto-refresh. Runtime log level switchable to DEBUG without restarting via `PUT /api/v1/system/loglevel`.
- Authentication — First-run setup creates an admin account (argon2id password hashing, signed session cookies). Three modes: Enabled (always require login), Local only (bypass auth for private IPs — home network convenience), Disabled (no auth, for trusted reverse-proxy deployments). Per-account API key for external integrations. Per-IP rate limiting on the login endpoint.
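The "Local only" check classifies the client IP against private ranges; Go's standard library can express that rule directly. A sketch of the idea (not Bindery's implementation, which per the API section also covers link-local and ULA addresses):

```go
package main

import (
	"fmt"
	"net/netip"
)

// isLocal reports whether a client address falls in the ranges a
// "Local only" auth mode would bypass: RFC 1918 ranges and IPv6 ULA
// (covered by IsPrivate), plus loopback and link-local.
func isLocal(addr string) bool {
	ip, err := netip.ParseAddr(addr)
	if err != nil {
		return false
	}
	return ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast()
}

func main() {
	for _, a := range []string{"192.168.1.10", "10.0.0.5", "127.0.0.1", "8.8.8.8", "fd00::1"} {
		fmt.Println(a, isLocal(a))
	}
}
```

Note that behind a reverse proxy the connection's remote address is the proxy, not the client, which is why the Disabled mode exists for trusted reverse-proxy deployments.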
- Multilingual UI — English, French, German, Dutch, Spanish, Filipino (Tagalog), and Indonesian. Language auto-detected from the browser; manual override in Settings → General → Language. Persists to `localStorage` so the first paint is always in the right language.
- Light and dark themes — iOS-style slider toggle in Settings → General → Appearance. First-load default respects the browser's `prefers-color-scheme`; preference persists to `localStorage`.
- Modern React SPA — React 19 + TypeScript + Tailwind CSS 3, built with Vite.
- Detail pages — Routed `/book/:id` and `/author/:id` pages replace the previous modal flow. Deep-linkable, back-button friendly, hold per-book history inline.
- Grid / Table view toggle — Switch between poster-grid and dense-table views on the Books and Authors pages; choice persists per page.
- Mobile-friendly — Responsive layout with hamburger nav, card views for History/Blocklist, agenda view for Calendar. Table views hide less-critical columns on narrow viewports.
- Pagination everywhere — First/Prev/Next/Last + page numbers + configurable page size on all list pages; page size persists per-page in `localStorage`
- Search, filter, sort — On Authors, Books, Wanted, and History pages. Authors can be filtered by Monitored / Unmonitored. Author detail page sorts books by publication date and filters by type, status, or released/upcoming. Books filter chips include Type: Ebook / Audiobook; Books view shows the author inline.
- Calendar view — Upcoming book releases from monitored authors, with compact dot-indicator grid on mobile
- Full REST API — Every feature accessible via HTTP for scripting and integration
- OPDS 1.2 catalogue at `/opds/v1.2/` — browse and download your library from KOReader, Moon+ Reader, or any OPDS-capable reading app without Calibre. Authenticated with HTTP Basic Auth (any username, API key as password). Feeds: catalog root, recent, by author, search.
- Single binary — Frontend embedded via `go:embed`. No nginx, no sidecars, no complexity
- Kubernetes-ready — Helm chart included for ArgoCD / Flux deployments
- SQLite + WAL — Pure Go driver (`modernc.org/sqlite`), no CGO, no external database to manage
```bash
docker run -d \
  --name bindery \
  -p 8787:8787 \
  -v /path/to/config:/config \
  -v /path/to/books:/books \
  -v /path/to/downloads:/downloads \
  ghcr.io/vavallee/bindery:latest
```

Open http://localhost:8787, follow the first-run setup to create the admin account, and you're in.
Download the archive for your platform from the latest release, extract, and run:
```bash
# Linux / macOS
tar -xzf bindery_<version>_<os>_<arch>.tar.gz
./bindery
```

```bat
:: Windows
:: Unzip bindery_<version>_windows_amd64.zip, then:
bindery.exe
```

On first run the database lands in the platform's standard location:
| Platform | Default `BINDERY_DB_PATH` | Default `BINDERY_DATA_DIR` |
|---|---|---|
| Linux / Docker / Helm | /config/bindery.db | /config |
| Windows | %APPDATA%\Bindery\bindery.db | %APPDATA%\Bindery |
| macOS | ~/Library/Application Support/Bindery/bindery.db | ~/Library/Application Support/Bindery |
Set BINDERY_DB_PATH / BINDERY_DATA_DIR if you want them elsewhere. The resolved paths are logged at startup ("starting bindery" log line).
For Docker Compose, Kubernetes (Helm), running as a specific UID/GID, and upgrade notes, see docs/DEPLOYMENT.md.
Bindery is configured through the web UI under Settings. Core env vars:
| Variable | Default | Description |
|---|---|---|
| `BINDERY_PORT` | 8787 | HTTP server port |
| `BINDERY_DB_PATH` | platform-dependent (see Binary install) | SQLite database path |
| `BINDERY_DATA_DIR` | platform-dependent (see Binary install) | Config directory (backups live here) |
| `BINDERY_LOG_LEVEL` | info | `debug` / `info` / `warn` / `error` |
| `BINDERY_DOWNLOAD_DIR` | /downloads | Where the download client places completed downloads |
| `BINDERY_LIBRARY_DIR` | /books | Destination for imported ebook files |
| `BINDERY_AUDIOBOOK_DIR` | falls back to `BINDERY_LIBRARY_DIR` | Destination for imported audiobook folders |
The full variable reference (path remapping, API key seeding, BINDERY_PUID / BINDERY_PGID sanity checks) is in docs/DEPLOYMENT.md.
Bindery aggregates book metadata from multiple open sources:
| Source | Auth Required | Used For |
|---|---|---|
| OpenLibrary | None | Primary: authors, books, editions, covers, ISBN lookup |
| Google Books | API key (free) | Enrichment: descriptions, ratings |
| Hardcover.app | None (public GraphQL) | Enrichment: community ratings, series |
| DNB | None (public SRU) | Enrichment: descriptions, language, year for German-language titles |
No Goodreads scraping. All sources use documented, stable public APIs. Cover images are fetched and cached server-side (GET /api/v1/images) — the browser never contacts external image hosts directly.
- SABnzbd — full support (NZB submission, queue/history polling, pause/resume/delete)
- qBittorrent — WebUI API v2 with Username/Password auth (add magnet/URL, list/delete torrents)
- Transmission — WebUI API with Username/Password auth (add magnet/URL, list/delete torrents)
- Newznab (Usenet) — NZBGeek, NZBFinder, NZBPlanet, DrunkenSlug, etc.
- Torznab (Torrents) — Prowlarr, Jackett, or direct Torznab endpoints
- Configurable categories per indexer — set custom Newznab category IDs in Settings → Indexers (e.g. `7120` for German books, `3130` for German audio on SceneNZBs). Bindery routes 7xxx IDs to ebook searches and 3xxx IDs to audiobook searches automatically.
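The 7xxx/3xxx routing rule reduces to checking the thousands digit of the category ID. A sketch of that dispatch (the function name is ours; the mapping follows the rule stated above):

```go
package main

import "fmt"

// formatFor routes a Newznab category ID to a search format:
// 7xxx -> ebook, 3xxx -> audiobook, anything else unrecognized.
func formatFor(category int) string {
	switch category / 1000 {
	case 7:
		return "ebook"
	case 3:
		return "audiobook"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(formatFor(7020), formatFor(3030), formatFor(7120), formatFor(3130))
	// → ebook audiobook ebook audiobook
}
```

This is why custom IDs like SceneNZBs' 7120 and 3130 work without extra configuration: they land in the right pipeline by prefix.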
- Generic webhooks — Any HTTP endpoint. Pipe to Apprise, ntfy, Home Assistant, Slack, Discord via proxies.
Bindery is a single Go binary with the React frontend embedded via go:embed:
```
     Newznab / Torznab
         indexers
            │
            ▼
┌────────────────────────────┐
│          Bindery           │──► SABnzbd / qBittorrent
│  Go backend + React SPA    │──► /books/ library
│  SQLite (WAL mode)         │──► Webhook notifications
└────────────────────────────┘
     ▲              ▲
     │              │
OpenLibrary   Google Books, Hardcover.app, DNB
 (primary)           (enrichers)
```
- Backend: Go 1.25 with chi router
- Database: SQLite via modernc.org/sqlite (pure Go, no CGO)
- Frontend: React 19 + TypeScript + Tailwind CSS + Vite
- Container: Multi-stage build on distroless (minimal attack surface)
Bindery exposes a full REST API under /api/v1. A few highlights:
```
GET  /api/v1/health                      - server health
GET  /api/v1/author                      - list authors
POST /api/v1/author                      - add author (triggers async book fetch)
GET  /api/v1/book?status=wanted          - filter books by status
POST /api/v1/book/{id}/search            - manual indexer search for a book
GET  /api/v1/queue                       - active downloads with live SABnzbd overlay
POST /api/v1/queue/grab                  - submit a search result to download client
GET  /api/v1/history                     - grab/import/failure events
POST /api/v1/history/{id}/blocklist      - add a history event's release to the blocklist
GET  /api/v1/blocklist                   - blocked releases
GET  /api/v1/images?url=<encoded>        - proxied + cached cover image (30-day TTL)
POST /api/v1/notification/{id}/test      - fire a test webhook
POST /api/v1/backup                      - snapshot the database
```
Every request to /api/v1/* (except /health, /auth/status, /auth/login, /auth/logout, /auth/setup) is authenticated. A request is allowed if any of:
- Auth mode is Disabled.
- Auth mode is Local only and the request originates from a private-range IP (`10/8`, `172.16/12`, `192.168/16`, `127/8`, IPv6 ULA, link-local, loopback).
- A valid `X-Api-Key` header (or `?apikey=` query param) matches the stored key.
- A valid `bindery_session` cookie is present.
Otherwise the server responds with 401. The API key lives in Settings → General → Security — copy it from there for scripts and integrations. Regenerating the key invalidates any existing consumers.
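For scripts, attaching the key is a one-line header. A minimal Go client sketch (the helper name is ours; the header name and endpoint are from the docs above, and `YOUR_KEY` is a placeholder for the key from Settings):

```go
package main

import (
	"fmt"
	"net/http"
)

// newAPIRequest builds a request authenticated with the X-Api-Key header.
func newAPIRequest(base, path, apiKey string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, base+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Api-Key", apiKey)
	return req, nil
}

func main() {
	req, _ := newAPIRequest("http://localhost:8787", "/api/v1/book?status=wanted", "YOUR_KEY")
	fmt.Println(req.URL, req.Header.Get("X-Api-Key"))
	// send with http.DefaultClient.Do(req) against a running instance
}
```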
| Topic | Where |
|---|---|
| Quickstart — first author to first grab in 10 minutes | Wiki |
| Deployment — Docker, Compose, k8s/Helm, binary, UID/GID, upgrades | docs/DEPLOYMENT.md |
| Calibre plugin bridge — cross-container Calibre integration via the Bindery Bridge plugin | docs/CALIBRE-PLUGIN.md |
| Roadmap — planned work, scope notes, and explicitly-out-of-scope items (Z-Library, OpenBooks, etc.) | docs/ROADMAP.md |
| Contributing & CI checks — dev setup, full quality/security matrix, local check suite | CONTRIBUTING.md |
| Changelog — release notes | CHANGELOG.md |
| Reverse-proxy & SSO setups — Traefik / Caddy / Nginx / Authelia / Authentik recipes | Wiki |
| Troubleshooting — permission-denied, path-remap, import failures | Wiki |
| Indexer & download-client recipes — NZBGeek / DrunkenSlug / Prowlarr / Jackett / SAB / qBit tips | Wiki |
| Migrating from Readarr — step-by-step with known failure modes | Wiki |
Bindery holds API keys, reaches LAN services, and writes to disk. We take that seriously.
Every push and every weekly cron runs gosec, govulncheck, Semgrep, gitleaks, Trivy, Grype, Dockle, Syft, ZAP baseline, and OpenSSF Scorecard. Findings upload to GitHub's Security tab as SARIF and are public-readable. Release images ship with SLSA build provenance and Syft SBOMs.
Highlights of the in-app controls:
- SSRF guards on every outbound URL (webhooks, indexers, download clients), with DNS-rebinding defense.
- Hardened headers — CSP, `X-Frame-Options: DENY`, `Referrer-Policy`, auto HSTS when TLS is present.
- Cookie `Secure` auto-detect via TLS or `X-Forwarded-Proto`, overridable.
- Distroless, non-root, read-only rootfs container with all caps dropped and RuntimeDefault seccomp.
- Digest-pinned base images tracked by Dependabot.
To report a vulnerability, follow the process in SECURITY.md. The full threat model, control catalogue, and verification recipes live on the wiki Security page.
PRs, issues, and feedback welcome. See CONTRIBUTING.md for the dev setup, the full local check suite, and the PR flow. Tracked feature work lives in docs/ROADMAP.md — open an issue before starting anything substantial.
MIT. See LICENSE for details.
- The *arr community for pioneering the monitor-search-download-import pattern
- OpenLibrary for free, open book metadata
- The Readarr project for the original vision, even though the implementation couldn't be sustained
