Version: v1.5.10 (Forensic Audit Remediation)
Last Updated: February 13, 2026
Status: ✅ v1.5.10 Deployed: Forensic Stats & Storage Stabilization Complete
Next Release: 1.6.0 (Unified Node Intelligence)
This release stabilizes the core telemetry and storage subsystems after a 25-pass forensic audit. Key fixes include autonomous stats merging, chat metadata restoration, and Port 445 SMB probing.
- The Commander system is now fully operational across all nodes.
- Permissions require further refinement, but the core functionality is stable.
- Added support for distributed orchestration with seamless multi-node ignition.
- Enhanced GUI with real-time status updates for all nodes.
- Screenshot added to README: `assets/The_Commander_has_arrived.png`
- YouTube video announcement coming soon.
This patch adds significant improvements to how the system reports and manages running inference engines. Operators should read the notes below and the full operational guidance in docs/OPERATIONAL_NOTES_1_5_3.md.
- GUI now displays the ACTUAL running model and live statistics (context usage, tokens/sec) by querying each node's running inference server rather than using only the `relay.yaml` config.
- Save & Re-Ignite has been hardened: the system now verifies tracked PIDs, kills stale PIDs and any process bound to the engine port, and then launches a fresh engine instance, so ghost CMD windows no longer block re-launches.
- Engine dial changes (`model_file`, `ctx`, `ngl`, `fa`, `extra_flags`) made via the Node Control Panel are persisted to the target node's `relay.yaml`, and the engine is restarted to apply them.
- New health and shutdown behaviors prevent configuration drift and reduce false-positive reports that an engine is running when it is not.
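As a minimal sketch, the stale-PID verification behind the hardened Save & Re-Ignite can be done with a POSIX signal-0 probe. This is illustrative only (hypothetical helper name); the shipped behavior also kills any process bound to the engine port, which this check does not attempt:

```python
import os

def pid_alive(pid: int) -> bool:
    """Probe whether a tracked engine PID still exists (POSIX signal 0)."""
    try:
        os.kill(pid, 0)        # signal 0: no-op, only checks existence
    except ProcessLookupError:
        return False           # stale PID: safe to clear and relaunch
    except PermissionError:
        return True            # process exists but belongs to another user
    return True
```

If the probe returns `False`, the tracked PID is stale and a fresh engine instance can be launched safely.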
See docs/status/OPERATIONAL_NOTES_LATEST.md for full API details, GUI behavior, command examples, and a testing checklist.
The Commander now supports fully config-driven engine startup across all nodes in the cluster.
| Node | IP | API Port | Engine Port |
|---|---|---|---|
| Main | 10.0.0.164 | 9000 | 8000 |
| HTPC | 10.0.0.42 | 9001 | 8001 |
| Laptop | 10.0.0.93 | 9002 | 8002 |
| Steam-Deck | 10.0.0.139 | 9003 | 8003 |
Startup is managed via `config/relay.yaml`.
Engine launch is 100% config-driven: the system reads `binary`, `model_file`, `ctx`, `ngl`, `fa`, and `bind_host` from each node's `engine` section in `relay.yaml` and constructs the CLI command directly.
No external launch scripts are used.
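To illustrate the config-driven launch, the following sketch builds an argv list from an `engine` section. The dial names match the config documented below; the CLI flag spellings (`--model`, `--ctx-size`, `--n-gpu-layers`, `--host`, `--flash-attn`) are assumptions about the engine binary, not confirmed here:

```python
import shlex

def build_engine_command(engine: dict) -> list:
    """Translate a node's `engine` config section into a launch command."""
    cmd = [
        engine["binary"],
        "--model", engine["model_file"],
        "--ctx-size", str(engine["ctx"]),
        "--n-gpu-layers", str(engine["ngl"]),
        "--host", engine["bind_host"],
    ]
    if engine.get("fa"):
        cmd.append("--flash-attn")                  # flash attention toggle
    if engine.get("extra_flags"):
        cmd += shlex.split(engine["extra_flags"])   # operator-supplied flags
    return cmd
```

Because the command is derived entirely from `relay.yaml`, any dial change persisted to the config takes effect on the next restart with no script edits.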
- Start HTPC first (has relay/storage roles)
- Start Main node
- Start Steam-Deck
- Start Laptop (or whichever node you're working from)
- Open GUI at `http://localhost:5173` (or the node's IP on port 5173)
- Click "IGNITE ALL" (engines may already be running)
Each node has complete engine settings:
```yaml
engine:
  binary: go.exe          # or 'go' on Linux
  model_file: model.gguf  # Model filename
  ctx: 30000              # Context window
  ngl: 999                # GPU layers
  fa: true                # Flash attention
  bind_host: 10.0.0.164   # Network binding
  extra_flags: ""         # Additional CLI flags (optional)
```

The Commander now includes a comprehensive filesystem explorer for browsing and selecting files on any node in the cluster, both local and remote.
- Remote File Browsing: Browse directories on any node from any GUI
- Drive Enumeration: Automatically detect drives (Windows) or root (Linux)
- Storage Path Selection: Select storage folders via file explorer instead of hardcoding
- Binary Discovery: Browse and select engine binaries (.exe, .bat, .sh) from any location
- Model Selection: Navigate model folders and select .gguf files visually
- Transparent Proxying: Remote node requests seamlessly proxy through the cluster
Commander OS implements a "Local-First" storage pattern where:
- Each node operates on local NVMe/SSD for maximum performance
- Critical events (node offline) sync immediately to HTPC/ZFS
- Regular events batch sync every 60 seconds to reduce network overhead
- The ZFS pool on HTPC is the authoritative storage (final authority)
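The batching policy can be sketched as follows. This is a hypothetical class (names are illustrative); `flush_fn` stands in for the actual shipment of events to the HTPC/ZFS relay:

```python
import time

class LocalFirstSync:
    """Local-first buffering: critical events flush immediately,
    regular events batch-sync once per interval (60 s by default)."""

    def __init__(self, flush_fn, interval=60.0, clock=time.monotonic):
        self.flush_fn = flush_fn    # ships buffered events to HTPC/ZFS
        self.interval = interval
        self.clock = clock
        self.buffer = []
        self.last_flush = clock()

    def record(self, event, critical=False):
        self.buffer.append(event)
        if critical or self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(list(self.buffer))  # ship a copy of the batch
            self.buffer.clear()
        self.last_flush = self.clock()
```

A node-offline event recorded with `critical=True` drains the whole buffer at once, so the authoritative store never lags on the events that matter.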
```
GET /nodes/{node_id}/filesystem/drives
→ Returns a list of drives (C:\, D:\, etc.) or root (/)

GET /nodes/{node_id}/filesystem/list?path=...
→ Returns directory contents with file/folder metadata
```
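A client hitting the listing endpoint must URL-encode the path so Windows separators and drive colons survive the query string. A small sketch (hypothetical helper name; the endpoint path follows the spec above):

```python
from urllib.parse import quote

def filesystem_list_url(api_base, node_id, path):
    """Build the explorer's directory-listing URL with an encoded path."""
    return f"{api_base}/nodes/{node_id}/filesystem/list?path={quote(path, safe='')}"
```

For example, `filesystem_list_url("http://10.0.0.164:9000", "htpc", r"C:\Models")` yields `http://10.0.0.164:9000/nodes/htpc/filesystem/list?path=C%3A%5CModels`.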
- Relay server automatically restarts when re-igniting nodes with `relay` or `storage` roles
- Storage location is broadcast to all nodes when the relay starts
- Nodes can query the relay config via `GET /relay/config`
- No more hardcoded paths: binary paths removed from `relay.yaml`
- User-selected paths: all file paths now chosen via the GUI file explorer
- Dynamic discovery: engines, models, and storage paths discovered at runtime
The Commander Chat provides direct interaction with The Commander Avatar - your AI strategic advisor powered by 100% local inference.
- Conversation Management: Create multiple chat threads with persistent history
- Real-time Streaming: See responses as they generate via WebSocket
- Auto Node Selection: Commander routes to highest-performance node automatically
- Intent Classification: Understands commands, queries, and conversations
- Decision Engine: Trust boundaries prevent unauthorized operations
- Cross-Node Access: Access any node's GUI from any machine in the cluster
- Left Sidebar: Conversation list + "New Chat" button
- Center: Message stream with user/assistant bubbles
- Bottom Bar: Node stats (commanding node, model, context, TPS)
- Input: Command line with send button
- You type a message in the chat input
- Message goes to Commander Avatar on the hub
- Avatar classifies intent (QUERY, COMMAND, CHAT, ESCALATION)
- Decision Engine checks trust boundaries
- LlamaClient routes inference to highest TPS node (not the selected node in left panel)
- Response streams back to GUI via WebSocket
Note: The left panel "Selected Node" is for configuration only. Chat always routes through the highest-performance available node.
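The routing rule reduces to picking the online node with the best benchmarked throughput. A sketch, using the t/s figures from the node ranking table (the function name and node-record shape are assumptions):

```python
def pick_chat_node(nodes):
    """Chat routing sketch: ignore the GUI's selected node and send
    inference to the online node with the highest tokens/sec."""
    online = [n for n in nodes if n.get("online")]
    if not online:
        raise RuntimeError("no online nodes available for chat")
    return max(online, key=lambda n: n["tps"])
```

With Main (130 t/s) online, chat always lands there; if Main goes down, routing falls back to the next-fastest node, which is what gives the cluster its fault tolerance.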
You can access any node's GUI from any browser on the LAN:
- From the Laptop → open `http://gillsystems-main:9000` in a browser
- From Main → open `http://gillsystems-htpc:9001` in a browser
- The GUI automatically uses the same host/port for API calls
Both Windows and Linux now use unified clear-launch launchers by default. These automatically clear stale Commander processes and ports before startup, eliminating "Address already in use" errors.
Primary Launchers:
- Windows: `The_Commander.bat`
- Linux/macOS: `./the_commander.sh`
These now include:
- ✅ Automatic port clearance - kills zombie Commander processes
- ✅ Network preflight validation - verifies all nodes are reachable
- ✅ Auto-hosts setup - configures `/etc/hosts` or the Windows hosts file with node mappings
- ✅ Dynamic versioning - pulls version from centralized `commander_os.__version__.py`
- ✅ GUI/HUD sync - ensures frontend and backend launch in the correct order
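The port-clearance preflight can be approximated with a bind probe. This is a sketch (hypothetical helper); the real launchers also kill the offending process, which this check deliberately does not attempt:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Preflight probe: try to bind the port; an OSError means a stale
    Commander process (or another service) is still holding it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Running this over the API and engine ports before launch is enough to detect the "Address already in use" condition before it surfaces as a crash.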
Legacy launchers (without auto-clear) are archived in docs/archive/ for reference.
```shell
# Windows
The_Commander.bat

# Linux/macOS
./the_commander.sh
```

Commander OS includes a self-healing network identity layer that automatically resolves and corrects node addresses at startup. This eliminates the "Hub is unreachable" error caused by DHCP IP address changes.
- Hostname-based configuration: `config/relay.yaml` uses hostnames (e.g., `gillsystems-main`) instead of hard-coded IPs
- Preflight smoke test: before starting the hub, the system resolves all hostnames and verifies port reachability
- LAN discovery fallback: if hostname resolution fails, the system scans the local subnet to find the node by its `/identity` endpoint
- Runtime caching: resolved addresses are cached for fast subsequent lookups
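The resolution ladder above can be sketched as a single lookup function (hypothetical helper; `discover` stands in for the subnet scan against each node's `/identity` endpoint):

```python
import socket

def resolve_node(hostname, cache, discover=None):
    """Self-healing lookup: runtime cache, then hosts file/DNS,
    then LAN discovery as a last resort."""
    if hostname in cache:
        return cache[hostname]
    try:
        ip = socket.gethostbyname(hostname)   # hosts file or DNS
    except socket.gaierror:
        if discover is None:
            raise
        ip = discover(hostname)               # subnet scan via /identity
    cache[hostname] = ip                      # cache for fast lookups
    return ip
```

Because the resolved address is cached, a DHCP change costs at most one discovery scan per session instead of a hard "Hub is unreachable" failure.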
Preflight automatically detects and adds missing hosts entries. Manual setup is optional:
Windows (auto-setup when running as admin):
- Launcher automatically detects and adds missing entries to hosts file
Linux/macOS (auto-setup with sudo):

```shell
sudo ./the_commander.sh
```

Manual Edit (if preferred):

```
# Windows: C:\Windows\System32\drivers\etc\hosts
# Linux/macOS: /etc/hosts

# Gillsystems Commander OS Nodes
10.0.0.164   gillsystems-main
10.0.0.42    gillsystems-htpc
10.0.0.93    gillsystems-laptop
10.0.0.139   gillsystems-steam-deck
```
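The auto-hosts preflight amounts to diffing the hosts file against the node map. A sketch (hypothetical helper; the node map is taken from the entries above):

```python
NODES = {
    "gillsystems-main": "10.0.0.164",
    "gillsystems-htpc": "10.0.0.42",
    "gillsystems-laptop": "10.0.0.93",
    "gillsystems-steam-deck": "10.0.0.139",
}

def missing_host_lines(hosts_text):
    """Return the `IP hostname` lines auto-setup would still append."""
    present = set()
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()   # strip comments
        if len(fields) >= 2:
            present.update(fields[1:])           # hostnames and aliases
    return [f"{ip} {name}" for name, ip in NODES.items() if name not in present]
```

An empty result means the hosts file already carries all four node mappings and no elevation is needed.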
Available when running the backend manually via `commander_os.main`:

```shell
# Run with full preflight (default)
python -m commander_os.main commander-gui-dashboard
```

- "Hub is unreachable": ensure the nodes are powered on, or check hostname resolution with `ping gillsystems-main`
- Preflight fails: check that the target nodes are running Commander OS
- Discovery finds wrong node: ensure each node exposes the `/identity` endpoint (included in v1.4.0+)
- Port already in use: launchers automatically clear stale processes; run again if needed
- Best Practice: set up DHCP reservations on your router to give each machine a stable IP address
If you encounter `[Errno 98] Address already in use`, a previous instance of the Commander is still running in the background. Use the following tactical clearance scripts to reset your node's ports:
- Linux / Steam Deck: `./kill_all_active_port_stealers_for_your_node.sh`
- Windows: `kill_all_active_port_stealers_for_your_node.bat`
- OS: Windows 10/11 or Linux (Ubuntu 20.04+)
- Python: v3.10 ONLY (Strict Requirement)
- Warning: Python 3.14+ will cause dependency failures
- Node.js: v20+ (LTS Required for Vite compatibility)
- GPU: AMD Radeon 7000 Series (Optional, for Local LLM Acceleration)
For automated installation of all dependencies (Python 3.10, Node.js 20+, npm, build tools):
```shell
cd The-Commander-Agent
./scripts/linux_prereqs.sh
```

This script will:
- Install Node.js 20+ (required for Vite frontend)
- Ensure Python 3.10 is available (via pyenv if needed)
- Create virtual environment and install Python dependencies
- Validate all prerequisites before completion
1. Clone Repository:

   ```shell
   git clone https://github.com/OCNGill/The-Commander-Agent.git
   cd The-Commander-Agent
   ```

2. Install Dependencies (run the platform pre-req first):

   ```shell
   # Linux (recommended)
   ./scripts/linux_prereqs.sh

   # Windows (run as Administrator)
   # From Explorer: right-click install_prereqs.bat and choose "Run as administrator"
   # Or in an elevated PowerShell prompt:
   .\install_prereqs.bat
   ```

   After the prereq installer finishes, continue with the launcher step below to start the Commander OS.

3. Launch System:

   ```shell
   # Windows
   The_Commander.bat

   # Linux
   ./the_commander.sh
   ```

4. Access Dashboard: open a browser to `http://localhost:5173`
For complete model discovery and distributed orchestration across all nodes:
1. Clone Repository on Each Node:

   ```shell
   # On each physical machine (Main, HTPC, Steam-Deck, Laptop)
   git clone https://github.com/OCNGill/The-Commander-Agent.git
   cd The-Commander-Agent
   ```

2. Install Dependencies on Each Node:

   ```shell
   # Automated (Linux) - Recommended
   ./scripts/linux_prereqs.sh

   # Manual Windows
   py -3.10 -m pip install -r requirements.txt

   # Manual Linux
   python3.10 -m pip install -r requirements.txt
   ```

3. Verify Node Configuration:
   - Check `config/relay.yaml` for correct IP addresses and ports
   - Ensure `model_root_path` points to each node's model directory

4. Launch on Each Node:

   ```shell
   # Each node automatically detects its identity based on port
   The_Commander.bat      # Windows
   ./the_commander.sh     # Linux
   ```

5. Verify Network Connectivity:
   - Each node's API should be accessible at `http://<node-ip>:<port>`
   - Test from any node: `curl http://10.0.0.164:9000/nodes`
Why Multi-Node Deployment?
- Model Discovery: Each node scans its own filesystem and reports available models
- Distributed Inference: Chat requests route to highest-ranking available node
- Load Balancing: System automatically distributes work across active nodes
- Fault Tolerance: If one node fails, others continue operating
| Node ID | Physical Host | Hardware | Bench (t/s) | Model Configuration |
|---|---|---|---|---|
| Gillsystems-Main | Gillsystems-Main | Radeon 7900XTX | 130 | Qwen3-Coder-25B (131k ctx, 999 NGL) |
| Gillsystems-HTPC | Gillsystems-HTPC | Radeon 7600 | 60 | Granite-4.0-h-tiny (114k ctx, 40 NGL) |
| Gillsystems-Steam-Deck | Gillsystems-Steam-Deck | Custom APU | 30 | Granite-4.0-h-tiny (21k ctx, 32 NGL) |
| Gillsystems-Laptop | Gillsystems-Laptop | Integrated | 9 | Granite-4.0-h-tiny (21k ctx, 999 NGL) |
- ✅ Chat Interface Overhaul: Persistent chat history with multi-session conversation management.
- ✅ Secure Conversation Storage: SQLite on ZFS for all historical chat context.
- ✅ Frontend Sidebar: New "Conversations" panel for instant switching between strategic threads.
- ✅ Context Awareness: Optimized message handling with conversation-specific routing.
- Implementing Commander brain logic (Avatar + Cyberbot)
- Decision Engine with trust boundaries and escalation rules
- Working chat interface in Strategic Dashboard GUI
- Local llama.cpp integration for natural language processing
- Path 4 (Hybrid Local Architecture) selected after LLM consultation
- Implemented the new storage framework for Commander OS.
- Added agent-specific storage modules for Commander and Recruiter agents.
- Updated documentation to reflect the new storage architecture.
If you find this project helpful, you can support ongoing work — thank you!
Donate:



