
feat: Add lightweight web UI for monitoring, activity history & runtime control#337

Open
lolimmlost wants to merge 7 commits into ManiMatter:latest from lolimmlost:feat/web-ui

Conversation

@lolimmlost

Summary

Decluttarr currently has zero visibility into what it's doing — all config is YAML, all output is logs. This PR adds a lightweight web UI for monitoring, activity history, and runtime control without changing the existing daemon behavior.

  • Dashboard — real-time queue view across all arr instances, instance status cards, live activity feed, "Run Now" button
  • Activity Log — searchable, filterable, paginated history of every action (flags, removals, recoveries, strikes) stored in SQLite
  • Settings Editor — toggle test_run, enable/disable jobs, adjust max_strikes/min_speed at runtime without editing YAML or restarting
  • Download Protection — protect individual downloads from removal via the UI (supplements the qBit "Keep" tag)
  • REST API — full JSON API with auto-generated OpenAPI docs at /api/docs
  • SSE Live Updates — server-sent events push changes to the browser in real time
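
SSE frames are plain text in the EventSource wire format, which is part of what makes them simpler than WebSockets. A minimal formatter (a hypothetical helper for illustration, not taken from this PR's diff) might look like:

```python
import json

def sse_frame(event: str, payload: dict) -> str:
    """Format one Server-Sent Events frame per the EventSource wire format."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

# Example: a strike update pushed to connected browsers
frame = sse_frame("strike", {"download_id": "abc123", "strikes": 2})
```

The browser's built-in `EventSource` (or HTMX's SSE extension) parses these frames with no client-side library code.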

Tech Choices

| Component | Choice | Why |
| --- | --- | --- |
| Web framework | FastAPI | Async-native (shares existing asyncio loop), lightweight, built-in OpenAPI |
| Frontend | Jinja2 + HTMX + Alpine.js | No build step, no Node tooling in a Python project |
| Styling | Pico CSS (dark theme) | Classless CSS, minimal custom styles needed |
| Persistence | SQLite via aiosqlite | Zero config, file-based, auto-creates on first run |
| Real-time | Server-Sent Events | Simpler than WebSockets, unidirectional, HTMX-compatible |

Architecture

The web server runs as a sibling asyncio task alongside the existing main loop — both share the same event loop and process memory. An EventBus class decouples the job system from the UI: jobs emit events at decision points, the web layer (ActivityRecorder + SSE) consumes them. When web is disabled, a NoOpEventBus is used with zero overhead.

Job System → EventBus → ActivityRecorder (writes SQLite)
                      → SSE endpoint (pushes to browser)

Browser → FastAPI API → reads Tracker state (queue/strikes)
                      → reads/writes SQLite (activity, config, protected)
                      → mutates Settings object (runtime config)
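
The EventBus/NoOpEventBus split described above can be sketched roughly as follows; the two class names come from the PR, but every method name and signature here is an assumption for illustration:

```python
import asyncio
from typing import Awaitable, Callable

class EventBus:
    """Fan-out hub: jobs emit events; web consumers (ActivityRecorder, SSE) subscribe."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], Awaitable[None]]] = []

    def subscribe(self, handler: Callable[[dict], Awaitable[None]]) -> None:
        self._subscribers.append(handler)

    async def emit(self, event: dict) -> None:
        # Deliver the event to every registered consumer.
        for handler in self._subscribers:
            await handler(event)

class NoOpEventBus(EventBus):
    """Used when the web UI is disabled: events are dropped with no work done."""

    async def emit(self, event: dict) -> None:
        return
```

Because jobs only ever talk to the bus, swapping in `NoOpEventBus` at startup keeps the job code path identical whether or not the web layer is running.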

Database Schema (SQLite)

Three tables: activity_log (action history), protected_downloads (UI-managed protection), config_overrides (runtime config layered on top of YAML). Auto-created at ./data/decluttarr.db.
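
As an illustration only (the actual column layout is not shown in this description, so every column name below is an assumption), the three tables might be created like this. The PR uses aiosqlite against `./data/decluttarr.db`; stdlib `sqlite3` keeps the sketch self-contained:

```python
import sqlite3

# Illustrative schema: table names are from the PR description, columns are guesses.
SCHEMA = """
CREATE TABLE IF NOT EXISTS activity_log (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp     TEXT NOT NULL,
    job           TEXT NOT NULL,
    arr_instance  TEXT,
    action        TEXT NOT NULL,   -- flagged / removed / recovered / strike
    detail        TEXT
);
CREATE TABLE IF NOT EXISTS protected_downloads (
    download_id   TEXT PRIMARY KEY,
    protected_at  TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS config_overrides (
    key           TEXT PRIMARY KEY, -- layered on top of the YAML defaults
    value         TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```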

API Endpoints

| Method | Path | Purpose |
| --- | --- | --- |
| GET | /api/status | Uptime, test_run state, instance count |
| GET | /api/queue | Current queue across all arr instances with strike info |
| GET | /api/activity | Paginated activity log with filters |
| GET | /api/strikes | Current strike data across all trackers |
| POST/DELETE | /api/protected/{id} | Protect/unprotect a download |
| GET/PATCH | /api/config | Read/update runtime config |
| POST | /api/config/test-run | Toggle test_run on/off |
| POST | /api/config/reload | Reset overrides to YAML defaults |
| GET | /api/events | SSE stream for real-time updates |
| POST | /api/trigger | Manually trigger a job cycle |

Configuration

Zero new required config. Defaults to enabled on port 9999.

# config.yaml (optional)
web:
  enabled: true    # or WEB_ENABLED=false to disable
  host: "0.0.0.0"
  port: 9999

Migration / Backward Compatibility

  • Defaults to enabled but works with zero config — existing YAML configs unaffected
  • No new required env vars — all web settings have sensible defaults
  • Event bus is no-op when web is disabled — zero overhead on existing behavior
  • Database auto-creates on first run
  • All 192 existing tests pass unchanged

New Dependencies

fastapi==0.115.6
uvicorn[standard]==0.34.0
aiosqlite==0.20.0
jinja2==3.1.5
python-multipart==0.0.20

Files Changed

New (15 files in src/web/): events.py, database.py, app.py, routes.py, config_manager.py, templates (base, dashboard, activity, settings, 4 partials), static/style.css

Modified (11 files): main.py, job_manager.py, removal_job.py, removal_handler.py, strikes_handler.py, _general.py, _user_config.py, _instances.py, Dockerfile, requirements.txt, config_example.yaml

Screenshots

The UI uses Pico CSS dark theme with color-coded badges for arr instances (Sonarr=blue, Radarr=yellow, etc.), action types (removed=red, recovered=green, flagged=amber), and strike counts.

Test Plan

  • pytest tests/ — all 192 existing tests pass
  • Web UI loads at http://localhost:9999
  • Job loop still runs on timer (verified via logs)
  • Queue table shows downloads with strike/protection status
  • Protect/unprotect buttons work and survive next cycle
  • test_run toggle via settings page takes immediate effect
  • "Run Now" button triggers early cycle
  • Activity log records and displays actions
  • Docker build succeeds with EXPOSE 9999
  • Verify with WEB_ENABLED=false that web is fully disabled
  • Test with multiple concurrent SSE clients

🤖 Generated with Claude Code

ManiMatter and others added 6 commits November 1, 2025 17:32
Added note about Decluttarr V2 release and breaking changes.
AttributeError: 'Response' object has no attribute 'get'
…ntime control

Adds a FastAPI-based web interface that runs alongside the existing job loop
as a sibling asyncio task. Zero new required config — defaults to enabled on
port 9999 and can be disabled via `web.enabled: false` or `WEB_ENABLED=false`.

Key features:
- Dashboard with real-time queue view, instance cards, and live activity feed
- Activity log with search, filtering (by job/arr/action/date), and pagination
- Runtime settings editor (toggle test_run, enable/disable jobs, adjust strikes)
- Download protection via UI (supplements qBit "Keep" tag)
- "Run Now" button to manually trigger a cycle
- SSE-powered live updates — no polling needed for real-time state
- Full REST API with auto-generated OpenAPI docs at /api/docs

Architecture:
- EventBus decouples job system from web layer (no-op when web disabled)
- SQLite via aiosqlite for activity history and config overrides
- Jinja2 + HTMX + Alpine.js frontend — no build step, no Node tooling
- Pico CSS for dark-theme styling

New files: src/web/ (events, database, app, routes, config_manager, templates)
Modified: main.py, job_manager, removal_job, removal_handler, strikes_handler,
          settings (_general, _user_config, _instances), Dockerfile, requirements

All 192 existing tests pass unchanged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ponent

Inline Jinja2 tojson in @click attributes was getting double-escaped,
causing raw JS to render as button text. Moved to a queueRow() Alpine
component function instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Rubilmax

Can we get this reviewed and merged please?

@lolimmlost (Author)

> Can we get this reviewed and merged please?

I appreciate your enthusiasm, but this definitely needs more testing; I'm getting web UI errors after about a week of usage. I'll review the code once again this weekend.

@Rubilmax

Amazing, thanks 🙏

@lolimmlost (Author)

I'm attempting this fix for the crashing:

Wrap per-instance job runs and download client jobs in try/except so
a Sonarr/Radarr timeout logs an error and continues instead of crashing.
Add main_with_restart() wrapper so even unexpected failures auto-recover
after 30s while the web server stays up independently.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@lolimmlost (Author)

Pushed a fix for a crash that was happening after ~1 week of uptime.

Root cause: When Sonarr/Radarr timed out (read timeout=15s), the unhandled exception propagated up through asyncio.gather(main_task, web_task), which cancelled the web server task too — killing the entire app.

Fix (commit 25f3e2f):

  • Wrapped per-instance job runs and download client jobs in try/except so timeouts log an error and continue to the next cycle
  • Added main_with_restart() wrapper so even unexpected failures auto-recover after 30s while the web UI stays up independently

Verified by running for 24+ hours in production with multiple Sonarr/Radarr timeouts; all recovered cleanly on the next cycle, with no crashes.
