A production-grade distributed task queue built in Go, backed by Redis for prioritized scheduling and PostgreSQL for persistence. Designed for reliable background job processing with observability, retries, and Kubernetes-native deployment.
+------------------+
| Client |
| (gRPC / loadtest)|
+--------+---------+
|
| gRPC :9090
v
+------------------+
| Server |
| (gRPC + HTTP) |
| :9090 :8080 |
+---+---------+----+
| |
gRPC submit | | /healthz, /metrics
v v
+--------+----+ Prometheus
| Redis | scrapes :8080
| sorted set | |
| (priority | v
| queue) | +---------+
+------+-----+ | Grafana |
| +---------+
dequeue |
v
+------+------+
| Worker Pool |
| (N goroutines)|
+------+------+
|
+------+------+
| PostgreSQL |
| (task state,|
| audit log) |
+-------------+
- gRPC API for task submission, status polling, and worker registration
- Redis-backed priority queue using sorted sets for fair scheduling
- PostgreSQL persistence with full task audit trail and lifecycle tracking
- Pluggable handler registry - register Go functions by task type
- Exponential backoff retry with dead-letter routing for poison tasks
- Worker pool with configurable concurrency and graceful shutdown
- Heartbeat-based worker health monitoring with automatic stale detection
- Prometheus metrics and Grafana dashboard (auto-provisioned via Docker Compose)
- Kubernetes manifests and Helm chart with Horizontal Pod Autoscaler
- Load testing tool for throughput benchmarking
Real-time monitoring showing queue depth, task throughput, worker count, and latency percentiles. Auto-provisioned with Docker Compose.
# Clone and start everything
git clone https://github.com/shinegami-2002/distributed-task-queue.git
cd distributed-task-queue
docker compose up --build -d
# Verify
curl http://localhost:8080/healthz
# {"status":"ok"}
# View Grafana dashboard
open http://localhost:3000
# Run load test (from host, requires Go)
go run ./cmd/loadtest -tasks 100 -concurrency 10
distributed-task-queue/
├── cmd/
│ ├── server/main.go # Server entry point (gRPC + HTTP)
│ ├── worker/main.go # Worker entry point
│ └── loadtest/main.go # Load testing CLI
├── internal/
│ ├── broker/
│ │ ├── broker.go # Redis broker (enqueue, dequeue, dead-letter)
│ │ └── broker_test.go # Broker unit tests
│ ├── config/config.go # Environment-based configuration
│ ├── handlers/demo.go # Demo task handler
│ ├── logging/logging.go # Structured logging (zerolog)
│ ├── metrics/
│ │ ├── metrics.go # Prometheus metric definitions
│ │ └── updater.go # Background queue depth updater
│ ├── server/
│ │ ├── grpc.go # gRPC service implementation
│ │ └── http.go # HTTP router (health, metrics)
│ ├── store/
│ │ ├── store.go # PostgreSQL task store
│ │ ├── migrations.go # Auto-applied schema migrations
│ │ └── store_test.go # Store unit tests
│ └── worker/
│ ├── handler.go # Handler registry
│ └── pool.go # Worker pool with retry logic
├── proto/
│ ├── taskqueue.proto # Protobuf service definition
│ └── gen/ # Generated Go code
│ ├── taskqueue.pb.go
│ └── taskqueue_grpc.pb.go
├── k8s/
│ ├── base/ # Plain Kubernetes manifests
│ │ ├── namespace.yaml
│ │ ├── server.yaml
│ │ ├── worker.yaml
│ │ ├── redis.yaml
│ │ ├── postgres.yaml
│ │ └── hpa.yaml
│ └── helm/ # Helm chart
│ ├── Chart.yaml
│ ├── values.yaml
│ └── templates/
│ ├── _helpers.tpl
│ ├── configmap.yaml
│ ├── hpa.yaml
│ ├── namespace.yaml
│ ├── postgres.yaml
│ ├── redis.yaml
│ ├── server.yaml
│ └── worker.yaml
├── monitoring/
│ ├── prometheus/prometheus.yml
│ └── grafana/
│ ├── dashboards/task-queue.json
│ └── provisioning/
│ ├── dashboards/dashboard.yml
│ └── datasources/prometheus.yml
├── Dockerfile
├── docker-compose.yml
├── Makefile
├── go.mod
└── go.sum
The service exposes three RPCs defined in proto/taskqueue.proto:
grpcurl -plaintext -d '{
"type": "demo",
"payload": "eyJtZXNzYWdlIjoiaGVsbG8ifQ==",
"priority": 5,
"max_retries": 3
}' localhost:9090 taskqueue.TaskQueueService/SubmitTask
Response:
{
"id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"status": "pending"
}
grpcurl -plaintext -d '{
"id": "f47ac10b-58cc-4372-a567-0e02b2c3d479"
}' localhost:9090 taskqueue.TaskQueueService/GetTaskStatus
Response:
{
"id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"status": "completed",
"result": "eyJzdGF0dXMiOiJkb25lIn0=",
"created_at": "2026-03-07T12:00:00Z",
"updated_at": "2026-03-07T12:00:01Z"
}
pending ──> running ──> completed
│
└──> failed ──> pending (retry with backoff)
│
└──> dead_lettered (max retries exceeded)
All settings are controlled via environment variables:
| Variable | Default | Description |
|---|---|---|
| GRPC_PORT | 9090 | gRPC server port |
| HTTP_PORT | 8080 | HTTP server port (metrics, health) |
| REDIS_ADDR | localhost:6379 | Redis address |
| DATABASE_URL | postgres://taskqueue:taskqueue@localhost:5432/taskqueue?sslmode=disable | PostgreSQL connection string |
| WORKER_CONCURRENCY | 5 | Number of concurrent worker goroutines |
| MAX_RETRIES | 3 | Default max retries per task |
| LOG_LEVEL | info | Log level (info, debug) |
Register your own task handlers by type name. Each handler receives a context and a raw payload, and returns a result or error:
registry.Register("email", func(ctx context.Context, payload []byte) ([]byte, error) {
    var req EmailRequest
    if err := json.Unmarshal(payload, &req); err != nil {
        return nil, err
    }
    // send email...
    return json.Marshal(EmailResponse{Sent: true})
})

The worker pool automatically routes tasks to the matching handler based on the type field submitted via gRPC.
# Unit tests
make test
# Integration tests (requires Docker)
make test-integration
# Load test
go run ./cmd/loadtest -tasks 10000 -concurrency 50
# Using plain manifests
kubectl apply -f k8s/base/
# Using Helm
helm install task-queue k8s/helm/

The Helm chart includes configurable values for replica counts, resource limits, and autoscaling thresholds. The HPA scales workers based on CPU utilization.
Docker Compose automatically provisions:
- Prometheus (:9090) - scrapes /metrics from the server
- Grafana (:3000) - pre-loaded dashboard with panels for:
  - Task throughput (submitted, completed, failed per second)
  - Queue depth over time
  - Worker pool utilization
  - Retry and dead-letter rates
  - Task latency percentiles
- Language: Go 1.22+
- API: gRPC / Protocol Buffers
- Queue: Redis (sorted sets)
- Storage: PostgreSQL
- HTTP Router: Chi v5
- Logging: zerolog
- Metrics: Prometheus
- Dashboards: Grafana
- Containers: Docker, Docker Compose
- Orchestration: Kubernetes, Helm
MIT
