A microservices-based space station management system designed to demonstrate OpenTelemetry. Built with Java 17, Spring Boot 3.4, React 18, and TypeScript.
NEXUS Station simulates operations management for a space station, including:
- Docking - Ship arrival/departure, bay management
- Crew - Personnel tracking, section assignments
- Life Support - Environmental monitoring, O2/CO2/temperature
- Power - Grid management, source allocation
- Inventory - Supplies, cargo manifests, resupply requests
| Component | Technology |
|---|---|
| Backend Services | Java 17, Spring Boot 3.4.1 |
| Frontend | React 18, TypeScript, Vite, Tailwind CSS, Framer Motion, Lucide React |
| Database | PostgreSQL 16 (schema-per-service) |
| Cache | Redis 7 |
| Load Testing | Locust (Python) |
| Observability | OpenTelemetry (enabled in Kubernetes) |
| Container Runtime | Docker / Podman |
| Orchestration | Kubernetes + Helm |
The system follows a microservices architecture with CORTEX acting as the Backend-for-Frontend (BFF) layer. All inter-service communication uses synchronous REST calls.
```mermaid
flowchart TB
    subgraph Frontend
        UI[React Dashboard]
    end
    subgraph BFF
        CORTEX[CORTEX]
    end
    subgraph Backend Services
        POWER[Power Service]
        DOCKING[Docking Service]
        CREW[Crew Service]
        LIFE[Life Support Service]
        INVENTORY[Inventory Service]
    end
    subgraph Infrastructure
        PG[(PostgreSQL)]
        REDIS[(Redis)]
    end
    UI --> CORTEX
    CORTEX --> POWER
    CORTEX --> DOCKING
    CORTEX --> CREW
    CORTEX --> LIFE
    CORTEX --> INVENTORY
    LIFE --> POWER
    DOCKING --> POWER
    DOCKING --> CREW
    CREW --> LIFE
    INVENTORY --> DOCKING
    INVENTORY --> CREW
    POWER & DOCKING & CREW & LIFE & INVENTORY --> PG
    POWER & DOCKING & CREW & LIFE & INVENTORY --> REDIS
```
| Service | Depends On | Purpose of Dependency |
|---|---|---|
| CORTEX | power-service | Fetches power grid status, sources, allocations |
| | docking-service | Manages docking bays, ship operations, schedules |
| | crew-service | Retrieves crew roster, sections, relocations |
| | life-support-service | Monitors environment, alerts, adjustments |
| | inventory-service | Tracks supplies, cargo, resupply requests |
| Life Support | power-service | Allocates power for environmental systems (priority: 1 - highest) |
| Docking | power-service | Allocates power for docking bay operations (priority: 4) |
| | crew-service | Registers arriving crew members from docked ships |
| Crew | life-support-service | Notifies of crew occupancy changes for capacity adjustments |
| Inventory | docking-service | Schedules cargo deliveries via docking bays |
| | crew-service | Fetches available crew for cargo operations |
| Power | (none) | Core service with no service dependencies |
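The priority values in the table (1 is highest, for Life Support; 4 for Docking) suggest how the Power Service could arbitrate when capacity is scarce. The sketch below is illustrative Python, not the real Spring Boot implementation; the function name, tuple shape, and capacity numbers are all assumptions:

```python
# Illustrative sketch of priority-based power allocation.
# Lower priority number = more important (1 is highest).
def allocate(requests, capacity):
    """Grant requests in priority order until capacity runs out.

    requests: list of (service, priority, megawatts) tuples.
    Returns the list of granted service names.
    """
    granted = []
    for service, priority, mw in sorted(requests, key=lambda r: r[1]):
        if mw <= capacity:
            capacity -= mw
            granted.append(service)
    return granted

requests = [
    ("docking-service", 4, 30),       # priority 4
    ("life-support-service", 1, 50),  # priority 1 - highest
]
# With only 60 MW free, life support is granted first and docking misses out.
print(allocate(requests, 60))
```

The same request set against a larger capacity would grant both, still in priority order.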
All backend services share the following infrastructure:
| Infrastructure | Purpose | Used By |
|---|---|---|
| PostgreSQL | Persistent storage (schema-per-service) | All backend services |
| Redis | Caching and session management | All backend services |
| OpenTelemetry Collector | Distributed tracing and metrics | All services (when enabled) |
Database Schemas:
- `power` - Power grid, sources, allocations
- `docking` - Bays, ships, docking logs
- `crew` - Members, sections, assignments
- `life_support` - Environment readings, sections, alerts
- `inventory` - Supplies, manifests, resupply requests
When a user docks a ship, the request cascades through multiple services:
```
React Dashboard
      │
      ▼
CORTEX (BFF)
      │  POST /api/docking/ships/{id}/dock
      ▼
Docking Service
      │
      ├──► Power Service
      │      POST /api/power/allocate
      │      (allocates power for bay)
      │
      └──► Crew Service
             │  POST /api/crew/arrival
             │  (registers arriving crew)
             │
             ▼
       Life Support Service
             │  POST /api/life-support/adjust-capacity
             │  (updates capacity for crew occupancy)
             │
             ▼
       Power Service
             POST /api/power/allocate
             (allocates power for environmental systems)
```
Power Service is called twice: once directly from Docking and once at the end of the Crew → Life Support chain.
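The cascade above can be traced with a small Python sketch in which each function stands in for one synchronous REST call (the function names are illustrative, not the real service code):

```python
# Illustrative trace of the dock-ship cascade; `calls` records the order
# in which the stand-in "services" are invoked.
calls = []

def power_allocate(reason):
    calls.append(f"power:allocate:{reason}")

def life_support_adjust_capacity():
    calls.append("life-support:adjust-capacity")
    power_allocate("environment")   # Life Support -> Power

def crew_arrival():
    calls.append("crew:arrival")
    life_support_adjust_capacity()  # Crew -> Life Support

def dock_ship():
    calls.append("docking:dock")
    power_allocate("bay")           # Docking -> Power (first allocation)
    crew_arrival()                  # Docking -> Crew (triggers the second)

dock_ship()
print(calls)
```

Running it shows Power appearing twice in the call log, matching the note above: once for the bay, once for environmental systems.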
- Docker or Podman with Compose
- 4GB+ RAM available for containers
```bash
# Clone and start
git clone <repo-url>
cd nexus_station

# Start all services
docker compose up -d --build

# Or with Podman
podman compose up -d --build
```

Open http://localhost:8080 to access the dashboard.
```bash
docker compose --profile load up -d --build
```

Access the Locust UI at http://localhost:8089.
Copy `.env.example` to `.env` to customize settings:

```bash
cp .env.example .env
```

Inject failures and latency for testing resilience:
| Level | Error Rate | Latency |
|---|---|---|
| `none` | 0% | 0ms |
| `low` | 5% | 200-500ms |
| `medium` | 15% | 1-3s |
| `high` | 30% | 3-8s |
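The table maps directly onto a small per-request decision function. This is an illustrative Python sketch (the actual chaos logic lives in the Spring Boot services; the names here are hypothetical):

```python
import random

# Error rate and latency range (seconds) per chaos level, per the table above.
CHAOS_LEVELS = {
    "none":   (0.00, (0.0, 0.0)),
    "low":    (0.05, (0.2, 0.5)),
    "medium": (0.15, (1.0, 3.0)),
    "high":   (0.30, (3.0, 8.0)),
}

def chaos_decision(level, rng=random):
    """Return (should_fail, delay_seconds) for a single request."""
    error_rate, (lo, hi) = CHAOS_LEVELS[level]
    should_fail = rng.random() < error_rate
    delay = rng.uniform(lo, hi) if hi > 0 else 0.0
    return should_fail, delay
```

At level `none` the function never fails and adds no delay; at `high`, roughly 3 in 10 requests fail and the rest are delayed 3-8 seconds.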
```bash
# Global chaos level
CHAOS_DEFAULT=low

# Per-service override
DOCKING_CHAOS=high
POWER_CHAOS=medium
```

| Variable | Default | Description |
|---|---|---|
| `CHAOS_DEFAULT` | `none` | Global chaos level |
| `{SERVICE}_CHAOS` | - | Per-service chaos override |
| `LOAD_USERS` | `10` | Concurrent Locust users |
| `LOAD_SPAWN_RATE` | `2` | Users spawned per second |
| `POSTGRES_*` | `nexus` | Database credentials |
All endpoints are proxied through CORTEX at `/api/*`:

| Endpoint | Method | Description |
|---|---|---|
| `/api/dashboard/status` | GET | Aggregated system status |
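Conceptually, the dashboard endpoint fans out to every backend service and aggregates the responses, degrading gracefully when one is unavailable. A minimal Python sketch of that pattern (the real implementation is the Spring Boot CORTEX service; the fetcher functions are hypothetical stand-ins for REST calls):

```python
# Illustrative fan-out/aggregate for an endpoint like /api/dashboard/status.
def aggregate_status(fetchers):
    """Call every fetcher; a failing service is reported, not fatal."""
    status = {}
    for name, fetch in fetchers.items():
        try:
            status[name] = fetch()
        except Exception as exc:
            status[name] = {"error": str(exc)}
    return status

def power_ok():
    return {"grid": "nominal"}

def docking_down():
    raise ConnectionError("docking-service unreachable")

print(aggregate_status({"power": power_ok, "docking": docking_down}))
```

The key design point is that one unreachable backend yields a partial dashboard rather than a failed request.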
| Endpoint | Method | Description |
|---|---|---|
| `/api/docking/bays` | GET | List all docking bays |
| `/api/docking/ships/incoming` | GET | Ships waiting to dock |
| `/api/docking/ships/{id}/dock` | POST | Dock a ship |
| `/api/docking/ships/{id}/undock` | POST | Undock a ship |
| `/api/docking/logs` | GET | Docking activity log |
| Endpoint | Method | Description |
|---|---|---|
| `/api/crew` | GET | Full crew roster |
| `/api/crew/sections` | GET | Station sections |
| `/api/crew/sections/{id}/members` | GET | Section crew |
| `/api/crew/relocate` | POST | Move crew member |
| `/api/crew/count` | GET | Crew statistics |
| Endpoint | Method | Description |
|---|---|---|
| `/api/life-support/environment` | GET | All section readings |
| `/api/life-support/environment/sections/{id}` | GET | Section environment |
| `/api/life-support/environment/sections/{id}/adjust` | POST | Adjust settings |
| `/api/life-support/alerts` | GET | Active alerts |
| `/api/life-support/alerts/{id}/acknowledge` | POST | Acknowledge alert |
| Endpoint | Method | Description |
|---|---|---|
| `/api/power/grid` | GET | Grid status overview |
| `/api/power/sources` | GET | All power sources |
| `/api/power/sources/{id}` | GET | Source details |
| `/api/power/allocate` | POST | Allocate power |
| `/api/power/deallocate/{id}` | DELETE | Release allocation |
| Endpoint | Method | Description |
|---|---|---|
| `/api/inventory/supplies` | GET | All supplies |
| `/api/inventory/low-stock` | GET | Low stock items |
| `/api/inventory/resupply` | POST | Request resupply |
| `/api/inventory/cargo-manifests` | GET | Cargo manifests |
| `/api/inventory/cargo-manifests/{id}/unload` | POST | Unload cargo |
```
nexus_station/
├── services/
│   ├── cortex/                # BFF + React frontend
│   │   ├── frontend/          # React/TypeScript app
│   │   └── src/               # Spring Boot BFF
│   ├── crew-service/
│   ├── docking-service/
│   ├── inventory-service/
│   ├── life-support-service/
│   └── power-service/
├── charts/                    # Helm charts (one per service)
│   ├── nexus-common/          # Shared library chart
│   ├── nexus-infra/           # PostgreSQL + Redis
│   ├── nexus-power/
│   ├── nexus-life-support/
│   ├── nexus-crew/
│   ├── nexus-docking/
│   ├── nexus-inventory/
│   ├── nexus-cortex/
│   └── nexus-load-generator/
├── load-generator/            # Locust load tests
├── helmfile.yaml              # Multi-namespace orchestration
└── docker-compose.yml
```
```bash
# Build a single service
cd services/power-service
./mvnw clean package

# Build without tests
./mvnw package -DskipTests

# Run tests
./mvnw test

# Run single test
./mvnw test -Dtest=PowerServiceTest#testAllocatePower
```

```bash
cd services/cortex/frontend

# Install dependencies
npm install

# Development server (proxies to localhost:8080)
npm run dev

# Production build
npm run build

# Lint
npm run lint
```

| Service | Port |
|---|---|
| CORTEX (Frontend + API) | 8080 |
| Docking | 8080 |
| Crew | 8080 |
| Life Support | 8080 |
| Power | 8080 |
| Inventory | 8080 |
| PostgreSQL | 5432 |
| Redis | 6379 |
| Locust UI | 8089 |
Each microservice is deployed to its own Kubernetes namespace. This design mirrors real-world enterprise deployments where teams independently own and operate their services (e.g., via ArgoCD).
Namespaces:
```
nexus-infra         → PostgreSQL + Redis (shared infrastructure)
nexus-power         → Power Service
nexus-life-support  → Life Support Service
nexus-crew          → Crew Service
nexus-docking       → Docking Service
nexus-inventory     → Inventory Service
nexus-cortex        → Cortex BFF + Frontend
nexus-load-gen      → Load Generator (optional)
```
Services communicate across namespaces using Kubernetes DNS FQDNs (e.g., `power.nexus-power.svc.cluster.local`).
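The FQDN follows the standard Kubernetes pattern `<service>.<namespace>.svc.cluster.local`. A tiny helper makes the convention concrete (illustrative Python; the function name and default port of 80, which matches `ports.services` in the chart values, are assumptions):

```python
def service_url(service, namespace, port=80):
    """Build a cross-namespace URL from Kubernetes DNS conventions."""
    return f"http://{service}.{namespace}.svc.cluster.local:{port}"

# e.g. how the Docking Service might address Power in another namespace:
print(service_url("power", "nexus-power"))
```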
- Kubernetes cluster (1.25+)
- Helm 3
- Helmfile
- Container images published to a registry
Helmfile orchestrates deployment of all charts in the correct dependency order.
```bash
# Deploy all services to their namespaces
helmfile apply

# Deploy with load generator enabled
ENABLE_LOAD_GENERATOR=true helmfile apply

# Deploy a specific service only
helmfile -l name=power apply

# Preview what will be deployed
helmfile diff

# Destroy all releases
helmfile destroy
```

Each service can also be deployed independently with Helm:
```bash
# Deploy infrastructure first
helm install infra ./charts/nexus-infra -n nexus-infra --create-namespace

# Then deploy services
helm install power ./charts/nexus-power -n nexus-power --create-namespace
helm install life-support ./charts/nexus-life-support -n nexus-life-support --create-namespace
# ... etc
```

Each chart has its own `values.yaml`. Common configuration options:
```yaml
# Cross-namespace service discovery
global:
  namespaces:
    infra: nexus-infra
    power: nexus-power
    # ... other namespaces
  ports:
    services: 80

# Database credentials
database:
  name: nexus
  username: nexus
  password: nexus_password

# OpenTelemetry (three modes)
otel:
  enabled: false      # Manual OTEL config
  sdkDisabled: false  # Set true to disable OTEL entirely

# Chaos engineering
chaos:
  level: none  # none, low, medium, high

# Container image
image:
  registry: ghcr.io
  repository: maxanderson95/nexus/power-service
  tag: ""  # Defaults to chart appVersion
```

For production, override sensitive values:
```bash
# Using Helmfile with environment
POSTGRES_PASSWORD=secure-password helmfile -e production apply

# Or with Helm directly
helm install power ./charts/nexus-power -n nexus-power \
  --set database.password=secure-password \
  --set image.tag=v1.0.0
```

OpenTelemetry is disabled by default in Docker Compose (`OTEL_SDK_DISABLED=true`). Services still log to stdout.
OpenTelemetry can be configured in three modes per chart:
- Dash0 mode (default): `otel.enabled=false`, `otel.sdkDisabled=false` - the Dash0 operator injects configuration
- Disabled mode: `otel.sdkDisabled=true` - no telemetry exported
- Manual mode: `otel.enabled=true` - configure endpoint/protocol manually
Recommended setup:
- OpenTelemetry Operator or Dash0 for auto-instrumentation
- Jaeger, Tempo, or similar for trace visualization
- Prometheus for metrics
```bash
# Check logs
docker compose logs -f cortex
docker compose logs -f power-service

# Restart everything
docker compose down
docker compose up -d --build
```

```bash
# Reset database (data is ephemeral)
docker compose down
docker compose up -d postgres
# Wait for healthy, then start services
docker compose up -d
```

- Ensure CORTEX is healthy: `curl http://localhost:8080/actuator/health`
- Check browser console for errors
- Verify backend services are running: `docker compose ps`
MIT
