This EPIC tracks the overall development roadmap for the Torrust Tracker Deployer project.
Living Documentation
- Complete Roadmap: docs/roadmap.md
- Q&A Clarifications: docs/roadmap-questions.md
- Feature Process: docs/features/README.md
Roadmap Overview
The roadmap is organized into 11 main sections with incremental delivery:
- Main app scaffolding - Console commands, logging, presentation layer ✅ COMPLETED
- Hetzner provider - Additional infrastructure provider support ✅ COMPLETED
- Application commands - Service deployment with incremental slicing ✅ COMPLETED
- Docker image - Official containerized deployer ✅ COMPLETED
- Console commands - Status and testing capabilities ✅ COMPLETED
- HTTPS support - SSL/TLS for all services ✅ COMPLETED
- Backup support - Database and configuration backups ✅ COMPLETED
- Verbosity levels - User-facing output control with `-v`, `-vv`, `-vvv` flags
- Extend deployer usability - `validate` and `render` commands for partial adoption
- Improve usability (UX) - DNS reminders, service URLs, purge command
- Improve AI agent experience - agentskills.io, documentation headers, dry-run mode
Key Insights
Target Users
- Primary: Developers wanting simple deployment without infrastructure knowledge
- Secondary: System administrators comfortable with the deployer's approach
Technical Approach
- Configuration: TOML files + environment variables (aligned with Torrust ecosystem)
- Architecture: Clear DDD layering (presentation → application → domain)
- Deployment: Service-based incremental slicing (hello-world → Tracker → MySQL → Prometheus → Grafana)
- Testing: Focus on E2E tests, expanding with each service addition
Strategic Decisions
- MVP Scope: Basic deployer with Hetzner provider support
- Service Slicing: Deploy fully working stacks incrementally, not deployment stages
- State Management: JSON persistence with simple locking mechanisms
- Error Handling: Detailed messages with verbosity levels, user-friendly guidance
Development Process
- Team Size: 1 Rust developer
- Dependencies: Minimal external team dependencies (some Tracker project coordination)
- Feature Workflow: Document in docs/features/ → Create issue → Link as child of this EPIC
Progress Tracking
Child issues will be created for each major feature and linked to this EPIC. Progress can be tracked through:
- Individual feature completion
- Roadmap document checkbox updates
- Integration test expansion
Related Resources
Note: This is a living roadmap. The linked documents will be updated as development progresses and new insights are gained. Below is a copy of docs/roadmap.md, which is the source of truth.
Roadmap
1. Scaffolding for main app ✅ COMPLETED
Epic Issue: #2 - Scaffolding for main app
- 1.1 Set up logging - Issue #3 ✅ Completed
- 1.2 Create command `torrust-tracker-deployer destroy` - EPIC #8, EPIC #9, EPIC #10 ✅ Completed
- 1.3 Refactor: extract shared code between testing and production for app bootstrapping ✅ Completed
- 1.4 Improve commands to use a better abstraction for the presentation layer ✅ Completed - EPIC #102
- 1.5 Create command `torrust-tracker-deployer create` - EPIC #34 ✅ Completed
- 1.6 Create command `torrust-tracker-deployer provision` (UI layer) - Issue #174 ✅ Completed
- 1.7 Create command `torrust-tracker-deployer configure` (UI layer) - Issue #180 ✅ Completed
- 1.8 Create command `torrust-tracker-deployer test` (UI layer) - Issue #188 ✅ Completed
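For orientation, the console commands introduced in this section combine into a deployment lifecycle. A minimal sketch, assuming environments are addressed by name (the argument form and any flags the real CLI requires may differ):

```bash
# Hypothetical session; "my-env" and the argument form are illustrative only
torrust-tracker-deployer create my-env      # create a new environment
torrust-tracker-deployer provision my-env   # provision the infrastructure
torrust-tracker-deployer configure my-env   # configure the provisioned instance
torrust-tracker-deployer test my-env        # verify the deployment
torrust-tracker-deployer destroy my-env     # tear the infrastructure down
```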
2. Add new infrastructure provider: Hetzner ✅ COMPLETED
Epic Issue: #205 - Add Hetzner Provider Support ✅ Completed
- 2.1 Add Hetzner provider support (Phase 1: Make LXD Explicit) ✅ Completed
- 2.2 Add Hetzner provider support (Phase 2: Add Hetzner) ✅ Completed
3. Continue adding more application commands ✅ COMPLETED
Epic Issue: #216 - Implement ReleaseCommand and RunCommand with vertical slices
Note: These are internal app layer commands. The approach is to slice by functional services - we fully deploy a working stack from the beginning and incrementally add new services.
- 3.1 Finish `ConfigureCommand` ✅ Completed - Epic #16
- 3.2 Implement `ReleaseCommand` and `RunCommand` with vertical slices - Epic #216
Strategy: Build incrementally with working deployments at each step.
4. Create a Docker image for the deployer ✅ COMPLETED
- 4.1 Create Docker image for the deployer - Issue #264 ✅ Completed
5. Add extra console app commands ✅ COMPLETED
- 5.1 `torrust-tracker-deployer show` - Issue #241 ✅ Completed
- 5.2 `torrust-tracker-deployer test` ✅ Completed
- 5.3 `torrust-tracker-deployer list` - Issue #260 ✅ Completed
6. Add HTTPS support ✅ COMPLETED
Research Complete: Issue #270 - Caddy evaluation successful, production deployment verified
- 6.1 Add HTTPS support with Caddy for all HTTP services - Issue #272 ✅ Completed
- Implemented Caddy TLS termination proxy
- Added HTTPS support for HTTP tracker
- Added HTTPS support for tracker API
- Added HTTPS support for Grafana
7. Add backup support ✅ COMPLETED
Epic Issue: #309 - Add backup support
- 7.1 Research database backup strategies - Issue #310 ✅ Completed
- Investigated SQLite and MySQL backup approaches
- Recommended maintenance-window hybrid approach (container + crontab)
- Built and tested POC with 58 unit tests
- Documented findings in docs/research/backup-strategies/
- 7.2 Implement backup support - Issue #315 ✅ Completed
- Added backup container templates (Dockerfile, backup.sh) - Published to Docker Hub
- Added backup service to Docker Compose template with profile-based enablement
- Extended environment configuration schema with backup settings
- Deployed backup artifacts via Ansible playbooks
- Installed crontab for scheduled maintenance-window backups (3 AM daily)
- Supports: MySQL dumps, SQLite file copy, config archives
- Backup retention cleanup (configurable days, default 7)
- Note: Volume management is out of scope - user provides a mounted location
- Implementation Details: Phase 1-4 completed (container, service integration, crontab scheduling, documentation)
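A minimal sketch of the maintenance-window scheduling described above, assuming a hypothetical install path for backup.sh (the real path, script arguments, and log location are defined by the Ansible playbooks):

```bash
# Hypothetical: install a crontab entry that runs the backup script daily at 3 AM
# (the script path and log file below are placeholders)
( crontab -l 2>/dev/null; echo "0 3 * * * /opt/torrust/backup.sh >> /var/log/torrust-backup.log 2>&1" ) | crontab -
```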
8. Add levels of verbosity
- 8.1 Add levels of verbosity as described in the UX research
- Implement `-v`, `-vv`, `-vvv` flags for user-facing output
- See docs/research/UX/ for detailed UX research
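A sketch of the intended flag usage, assuming the flags attach to any command and that `provision` plus a hypothetical environment name is a valid invocation (what each level prints is defined by the UX research):

```bash
torrust-tracker-deployer provision my-env        # default: minimal user-facing output
torrust-tracker-deployer provision my-env -v     # more detail about each step
torrust-tracker-deployer provision my-env -vv    # additional diagnostic detail
torrust-tracker-deployer provision my-env -vvv   # maximum verbosity
```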
9. Extend deployer usability
Add new commands to allow users to take advantage of the deployer even if they do not want to use all functionalities. This enables partial adoption of the tool.
These commands complete a trilogy of "lightweight" entry points:
- `register` - For users with pre-provisioned instances
- `validate` - For users who only want to validate a deployment configuration
- `render` - For users who only want to build artifacts and handle deployment manually
This makes the deployer more versatile for different scenarios and more AI-agent friendly (dry-run commands provide feedback without side effects). A usage sketch of the `validate` and artifact generation commands follows the list below.
- 9.1 Implement `validate` command
- Validate deployment configuration without executing any deployment steps
- See feature specification: docs/features/config-validation-command/
- 9.2 Implement artifact generation command
- Command name TBD - candidates: `render`, `generate`, `export`, `prepare`, `scaffold`
- Generate all build artifacts (docker-compose, tracker config, Ansible playbooks, etc.) to a build/ directory
- Users can copy these files to their own servers and deploy manually
- Target audience: System administrators who only need the final configuration files
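A minimal sketch of these lightweight entry points, assuming `render` ends up as the chosen name for the artifact generation command and that both commands take an environment name:

```bash
# Check a deployment configuration without executing any deployment step
torrust-tracker-deployer validate my-env

# Hypothetical: generate build artifacts only, then deploy them manually
torrust-tracker-deployer render my-env
ls build/my-env/   # docker-compose files, tracker config, Ansible playbooks, etc.
```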
10. Improve usability (UX)
Minor changes to improve the output of some commands and overall user experience.
- 10.1 Add DNS setup reminder in `provision` command output
- Display reminder when any service has a domain configured
- See draft: docs/issues/drafts/dns-setup-reminder-in-provision-command.md
- 10.2 Improve `run` command output with service URLs
- Show service URLs immediately after services start
- Include hint about `show` command for full details
- See draft: docs/issues/drafts/improve-run-command-output-with-service-urls.md
- 10.3 Add DNS resolution check to `test` command
- Verify configured domains resolve to the expected instance IP
- Advisory warning only (doesn't fail tests) - DNS is decoupled from service tests
- See draft: docs/issues/drafts/add-dns-resolution-check-to-test-command.md
- 10.4 Add `purge` command to remove local environment data
- Removes data/{env}/ and build/{env}/ for destroyed environments
- Allows reusing environment names after destruction
- See draft: docs/issues/drafts/add-purge-command-to-remove-local-data.md
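A sketch of the intended flow, assuming the command takes an environment name:

```bash
torrust-tracker-deployer destroy my-env   # remote infrastructure is destroyed
torrust-tracker-deployer purge my-env     # removes local data/my-env/ and build/my-env/
torrust-tracker-deployer create my-env    # the environment name can now be reused
```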
11. Improve AI agent experience
Add features and documentation that make operating the deployer through AI agents easier, more efficient, more reliable, and less prone to hallucinations.
Context: We assume users will increasingly interact with the deployer indirectly via AI agents (GitHub Copilot, Cursor, etc.) rather than running commands directly. This section ensures AI agents have the best possible experience when working with the deployer.
- 11.1 Consider using agentskills.io for AI agent capabilities
- Agent Skills is an open format for extending AI agent capabilities with specialized knowledge and workflows
- Developed by Anthropic, adopted by Claude Code, OpenAI Codex, Amp, and others
- Provides progressive disclosure: metadata at startup, instructions on activation, resources on demand
- Skills can bundle scripts, templates, and reference materials
- See issue: #274
- See spec: docs/issues/274-consider-using-agentskills-io.md
- 11.2 Add AI-discoverable documentation headers to template files
- Templates generate production config files (docker-compose, tracker.toml, Caddyfile, etc.)
- Documentation is moving from templates to Rust wrapper types (published on docs.rs)
- Problem: AI agents in production only see rendered output, not the source repo
- Solution: Add standardized header to templates with links to repo, wrapper path, and docs.rs
- Enables AI agents to find documentation even when working with deployed configs
- See draft: docs/issues/drafts/add-ai-discoverable-documentation-headers-to-templates.md
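As an illustration only, a rendered template could begin with a header along these lines (the wording, paths, and URLs are placeholders, not the final format); `#` comments work for docker-compose, tracker.toml, and the Caddyfile alike:

```bash
# Generated by Torrust Tracker Deployer - do not edit by hand.
# Template source:   <repository URL>/templates/...   (placeholder path)
# Rust wrapper type: <crate>::template::...           (placeholder module path)
# Documentation:     https://docs.rs/<crate>          (placeholder crate name)
```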
- 11.3 Provide configuration examples and questionnaire for AI agent guidance
- Problem: AI agents struggle with the many valid configuration combinations
- Questionnaire template: structured decision tree to gather all required user information
- Example dataset: real-world scenarios mapping requirements to validated configs
- Covers: provider selection, database type, tracker protocols, HTTPS, monitoring, etc.
- Benefits: few-shot learning for agents, reduced hallucination, training/RAG dataset
- See draft: docs/issues/drafts/provide-config-examples-and-questionnaire-for-ai-agents.md
- 11.4 Add dry-run mode for all commands
- Allow AI agents (and users) to preview what will happen before executing operations
- Particularly valuable for destructive commands (`destroy`, `stop`)
- Flag: `--dry-run` shows planned actions without executing
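A sketch of the intended usage, assuming the flag applies uniformly and that commands take an environment name:

```bash
# Preview what a destructive command would do without executing it
torrust-tracker-deployer destroy my-env --dry-run
torrust-tracker-deployer stop my-env --dry-run
```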
Deferred Features
Features considered valuable but out of scope for v1. We want to release the first version and wait for user acceptance before investing more time. These can be revisited based on user feedback.
| Feature | Rationale | Notes |
|---|---|---|
| Machine-readable JSON output (`--json` flag) | Easy to add thanks to MVC pattern, but not critical for v1 | Structured output helps AI agents parse results reliably |
| MCP (Model Context Protocol) server | Native AI integration without shell commands | Would let AI agents call deployer as MCP tools directly |
| Structured error format for AI agents | Already improving errors in section 10 | Could formalize with error codes, fix suggestions in JSON |
Note: See docs/research/UX/ for detailed UX research that will be useful when implementing the features in sections 8, 9, 10, and 11.