spiritledsoftware/commissary
Commissary


Quick Start

cp config.example.yaml config.yaml
cp .env.example .env
./scripts/bootstrap-compose.sh
# set auth-dir: '/home/app/.commissary' in config.yaml for Docker
docker compose up -d
docker compose ps

Default Docker mode uses named volumes for auth and logs.

To use host bind mounts instead:

docker compose -f docker-compose.yml -f docker-compose.bind.yml up -d

Then open:

  • http://localhost:8317/healthz
  • http://localhost:8317/management.html

Storage backend precedence

| Backend      | Enabled by                | Local working path                 | Notes                                        |
| ------------ | ------------------------- | ---------------------------------- | -------------------------------------------- |
| Postgres     | PGSTORE_DSN               | PGSTORE_LOCAL_PATH/pgstore         | Highest priority                             |
| Object store | OBJECTSTORE_ENDPOINT      | OBJECTSTORE_LOCAL_PATH/objectstore | Used when Postgres is unset                  |
| Git          | GITSTORE_GIT_URL          | GITSTORE_LOCAL_PATH/gitstore       | Used when Postgres and object store are unset |
| Local files  | No remote store env vars  | auth-dir + config.yaml             | Default fallback                             |
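The precedence above can be sketched as a small selection function. This is an illustrative sketch only, not the actual implementation (the server itself is written in Go); the environment variable names come from the table:

```python
# Illustrative sketch of the documented backend precedence.
# `env` is a dict of environment variables.
def select_backend(env):
    if env.get("PGSTORE_DSN"):
        return "pgstore"       # Postgres has the highest priority
    if env.get("OBJECTSTORE_ENDPOINT"):
        return "objectstore"   # used when Postgres is unset
    if env.get("GITSTORE_GIT_URL"):
        return "gitstore"      # used when Postgres and object store are unset
    return "local"             # default fallback: auth-dir + config.yaml
```

Only the highest-priority configured backend is used, so setting more than one remote store variable does not combine them.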

Harness the AI subscriptions you already own.

Commissary is a BYOS-first meta-harness for AI runtimes. It composes the subscriptions, tools, MCP servers, environments, and protocol surfaces you already use into a programmable execution layer that you can run yourself.

Commissary is not primarily a proxy with extra endpoints bolted on. It is a runtime substrate for shaping how AI work is executed, routed, governed, and observed.

What Commissary Is

Commissary sits between clients, runtimes, tools, and upstream providers.

It is built to:

  • harness subscription-native runtimes such as Claude Code, Codex, Gemini, and other provider-shaped ecosystems
  • preserve explicit protocol surfaces instead of flattening everything into one accidental API
  • compose tools, MCP servers, browser/runtime environments, and operator policy into the execution path
  • give operators a self-hosted control point for routing, governance, observability, and capability exposure

That means gateway behavior is part of the product, but not the whole product.

The point is not merely to relay requests. The point is to assemble a controllable AI runtime stack out of the subscriptions and systems you already own.

Core Differentiators

1. Bring Your Own Subscription

Commissary is built around the idea that many users already pay for valuable AI access.

Instead of forcing a resale layer, Commissary lets operators harness existing entitlements and compose them into a unified runtime surface.

2. Explicit protocol surfaces

Commissary intentionally preserves provider-shaped and ecosystem-shaped contracts.

The route family tells you what contract you are using. This avoids the ambiguity and accidental coupling that comes from pretending every AI system should look like a single universal schema.

3. Runtime composition

Commissary is designed to grow beyond request translation into a runtime composition layer that can inject:

  • tools
  • MCP servers
  • environment packs
  • operator policy
  • routing and selection logic
  • capability guards

4. Operator-first deployment

Commissary is self-hosted and operator-first today.

It is being built so a future hosted control plane could sit on top later, but the product does not depend on that future to be useful.

Public API Surface Families

Commissary exposes multiple protocol families on purpose.

The route prefix tells you which request/response contract you are using:

  • /openai/... → OpenAI-compatible API shape
  • /google/... → Google/Gemini-compatible API shape
  • /anthropic/... → Anthropic-compatible API shape
  • /cohere/... → Cohere-compatible API shape

These route families are first-class public surfaces, not ambiguous aliases of one another.

Current public routes

OpenAI-compatible examples:

  • GET /openai/v1/models
  • POST /openai/v1/chat/completions
  • POST /openai/v1/completions
  • POST /openai/v1/embeddings
  • POST /openai/v1/rerank
  • POST /openai/v1/audio/transcriptions
  • POST /openai/v1/audio/speech
  • POST /openai/v1/images/generations
  • POST /openai/v1/images/edits
  • POST /openai/v1/images/variations
  • POST /openai/v1/videos
  • GET /openai/v1/videos/:video_id
  • GET /openai/v1/videos/:video_id/content
  • POST /openai/v1/videos/:video_id/remix
  • POST /openai/v1/videos/edits
  • POST /openai/v1/videos/extensions
  • GET /openai/v1/responses
  • POST /openai/v1/responses
  • POST /openai/v1/responses/compact
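As a shape reminder, a minimal request body for POST /openai/v1/chat/completions follows the OpenAI contract (the model name below is a placeholder; use one your configured upstreams actually serve):

```json
{
  "model": "your-model-id",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
```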

Google/Gemini-compatible examples:

  • GET /google/v1beta/models
  • GET /google/v1beta/models/{model}
  • POST /google/v1beta/models/{model}:generateContent
  • POST /google/v1beta/models/{model}:streamGenerateContent
  • POST /google/v1beta/models/{model}:countTokens
  • POST /google/v1beta/models/{model}:embedContent
  • POST /google/v1beta/models/{model}:batchEmbedContents
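A minimal body for POST /google/v1beta/models/{model}:generateContent follows the Gemini contract, with the model carried in the path rather than the body:

```json
{
  "contents": [
    {"role": "user", "parts": [{"text": "Hello"}]}
  ]
}
```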

Anthropic-compatible examples:

  • POST /anthropic/v1/messages
  • POST /anthropic/v1/messages/count_tokens
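A minimal body for POST /anthropic/v1/messages follows the Anthropic contract, where max_tokens is required (the model name is a placeholder):

```json
{
  "model": "your-model-id",
  "max_tokens": 256,
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
```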

Cohere-compatible examples:

  • POST /cohere/v2/embed

This project intentionally uses namespaced public route families so endpoint shape is obvious to users and client integrations.

API Documentation

  • Human-readable inventory: docs/api/public-inventory.md
  • Interactive reference (served by the running server): /api-reference
  • Raw OpenAPI YAML: /openapi.yaml
  • Raw OpenAPI JSON: /openapi.json

Docker Compose

The checked-in docker-compose.yml is set up for a local operator-managed deployment using named volumes by default.

  1. Copy config.example.yaml to config.yaml.
  2. Set auth-dir: '/home/app/.commissary' in config.yaml when running inside the container.
  3. Run ./scripts/bootstrap-compose.sh to create missing local files.
  4. Start the stack and check its status:

docker compose up -d
docker compose ps

The default Compose file mounts:

  • ./config.yaml to /app/config.yaml
  • commissary_auth to /home/app/.commissary
  • commissary_logs to /app/logs

The container runs as a non-root user, and the named-volume default avoids most first-run host permission problems.

Bind-mount override

If you want auth and log files directly on the host, use the override file instead of the default named-volume layout:

mkdir -p auths logs
docker compose -f docker-compose.yml -f docker-compose.bind.yml up -d

When using bind mounts, make sure the host directories are writable by the container user. The server fails fast with a clear startup preflight error if auth-dir or the log directory is not writable.
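The essence of such a preflight is probing each required directory with a throwaway file before serving traffic. A rough sketch of the idea (hypothetical helper, not Commissary's actual code):

```python
import os
import tempfile

# Hypothetical writability probe: create and delete a temp file in the
# directory, so a misconfigured mount fails at startup rather than
# mid-request later.
def dir_writable(path):
    try:
        fd, probe = tempfile.mkstemp(dir=path, prefix=".preflight-")
        os.close(fd)
        os.remove(probe)
        return True
    except OSError:
        return False
```

A probe file catches cases a plain permission-bit check misses, such as read-only mounts and ownership mismatches inside the container.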

Postgres store example

Set auth-dir: '/home/app/.commissary' in config.yaml, then provide the Postgres store variables through .env or your shell:

PGSTORE_DSN=postgres://commissary:commissary@postgres:5432/commissary?sslmode=disable
PGSTORE_SCHEMA=public
PGSTORE_LOCAL_PATH=/home/app/.commissary-store

Commissary mirrors config and auth files into PGSTORE_LOCAL_PATH/pgstore inside the container while PostgreSQL remains the source of truth.

Git store example

Set auth-dir: '/home/app/.commissary' in config.yaml, then configure the git-backed store:

GITSTORE_GIT_URL=https://github.com/your-org/commissary-state.git
GITSTORE_GIT_USERNAME=git
GITSTORE_GIT_TOKEN=your-token
GITSTORE_GIT_BRANCH=main
GITSTORE_LOCAL_PATH=/home/app/.commissary-store

Commissary clones or opens the repository under GITSTORE_LOCAL_PATH/gitstore, stores auth files in auths/, and keeps config/config.yaml committed alongside them.

S3-compatible object store example

Set auth-dir: '/home/app/.commissary' in config.yaml, then configure the object-backed store:

OBJECTSTORE_ENDPOINT=https://s3.amazonaws.com
OBJECTSTORE_ACCESS_KEY=your-access-key
OBJECTSTORE_SECRET_KEY=your-secret-key
OBJECTSTORE_BUCKET=commissary-state
OBJECTSTORE_LOCAL_PATH=/home/app/.commissary-store

Commissary mirrors state under OBJECTSTORE_LOCAL_PATH/objectstore while syncing config/config.yaml and auths/* to the bucket. Any S3-compatible endpoint is supported as long as it uses http or https.

Compose env file example

services:
  commissary:
    env_file:
      - .env

Choose only one remote store backend at a time. Startup preference is Postgres first, then object store, then git, then plain local files.

Management UI

Commissary keeps the browser management UI source in-repo under management-ui/.

The server serves /management.html from a local file only. It does not fetch management panel assets from a separate repository at runtime.

Browser UI Development

cd management-ui
npm ci
npm run dev

Browser UI Build

cd management-ui
npm run build:server-asset


The Vite build writes management-ui/dist/index.html. For file-based serving by the Go server, copy that file to the server's management asset path as management.html.

For local development with the default repo layout, that usually means:

mkdir -p static
cp management-ui/dist/index.html static/management.html

Near-Term Product Direction

Commissary is being built in this order:

  1. make the BYOS harness trustworthy
  2. strengthen capability-aware routing and protocol-surface fidelity
  3. add composition primitives for tools, MCP servers, and environments
  4. deepen governance, observability, and operator control
  5. leave clean seams for a future hosted control plane without making that a prerequisite

Contributing

This repository accepts pull requests that strengthen Commissary as a BYOS-first runtime harness, including:

  • protocol-surface fidelity and compatibility work
  • provider and subscription integrations
  • routing, governance, and observability improvements
  • tool, MCP, and environment composition primitives
  • management UI support tied directly to the runtime/control plane

If a change belongs to the original upstream rather than Commissary’s fork-specific direction, open it against the upstream repository.

Release Contract

  • Open pull requests against main.
  • Releases are managed by Release Please from conventional commits on main.
  • Release Please opens or updates a release PR that carries the next semantic version and curated changelog.
  • Merging the release PR creates the GitHub tag/release, then GoReleaser publishes release artifacts and Docker images through the repository GitHub App bot.
  • Version bumps follow conventional commits: feat => minor; fix and other non-breaking types => patch; ! or BREAKING CHANGE => major.
  • Tags must stay strict semver in vMAJOR.MINOR.PATCH form.
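The bump rules above can be sketched as a small classifier over conventional-commit messages (illustrative only; Release Please implements the real logic):

```python
# Illustrative mapping from a conventional commit to a semver bump.
# `subject` is the first line of the commit message, `body` the rest.
def bump_kind(subject, body=""):
    commit_type = subject.split(":", 1)[0]
    if commit_type.endswith("!") or "BREAKING CHANGE" in body:
        return "major"
    if commit_type.split("(", 1)[0] == "feat":
        return "minor"
    return "patch"             # fix and other non-breaking types
```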

License

This project is licensed under the MIT License; see the LICENSE file for details.
