# ccflare

A multi-provider native proxy for Anthropic and OpenAI.
ccflare routes each provider by URL prefix, load-balances across multiple accounts, and keeps full request history, rate-limit state, and usage analytics without translating provider payloads.
## Features

- Native passthrough — Anthropic stays Anthropic, OpenAI stays OpenAI
- Multi-provider routing — route by `/v1/{provider}/*`
- Compatibility routes — route by `/v1/ccflare/*` with family-prefixed models
- Account failover — retry another account when one provider account is rate limited
- Built-in observability — dashboard, request history, analytics, logs, and health endpoints
- Flexible auth — API key and OAuth account support
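The account-failover behavior in the list above can be sketched roughly as follows. This is a simplified illustration, not ccflare's actual implementation; the `Account` shape and the retry policy are assumptions.

```typescript
// Minimal failover sketch: try each account in order and move on when
// one is rate limited (HTTP 429). Hypothetical types, not ccflare's code.
type Account = {
  name: string;
  send: () => Promise<{ status: number; body: string }>;
};

async function sendWithFailover(accounts: Account[]) {
  for (const account of accounts) {
    const res = await account.send();
    if (res.status !== 429) return { account: account.name, ...res };
    // Rate limited: fall through and retry with the next account.
  }
  throw new Error("all accounts rate limited");
}

// Demo with stubbed accounts: the first is rate limited, the second succeeds.
const accounts: Account[] = [
  { name: "a1", send: async () => ({ status: 429, body: "rate limited" }) },
  { name: "a2", send: async () => ({ status: 200, body: "ok" }) },
];
const result = await sendWithFailover(accounts);
console.log(result.account, result.status); // a2 200
```

The real proxy also has to respect rate-limit state it has recorded for each account; this sketch only shows the retry-on-429 loop.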
## Quick start

```bash
git clone https://github.com/snipeship/ccflare
cd ccflare
bun install

# Start the server + dashboard on http://localhost:8080
bun run start

# Or launch the TUI, which can also start the server
bun run ccflare
```

Verify the server is up:

```bash
curl http://localhost:8080/health
```

## Routing

ccflare proxies requests by provider prefix:

- `http://localhost:8080/v1/anthropic/*`
- `http://localhost:8080/v1/openai/*`
- `http://localhost:8080/v1/ccflare/*`
Examples:
- `/v1/anthropic/v1/messages` → `https://api.anthropic.com/v1/messages`
- `/v1/openai/chat/completions` → `https://api.openai.com/v1/chat/completions`
- `/v1/openai/responses` → `https://api.openai.com/v1/responses`
The `/v1/{provider}` prefix is stripped exactly once before forwarding upstream.
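Stripping the prefix exactly once matters because a path like `/v1/anthropic/v1/messages` contains `/v1` twice. A sketch of the idea (an illustrative assumption, not ccflare's actual code):

```typescript
// Sketch: remove the /v1/{provider} prefix exactly once, leaving the
// rest of the path (which may itself start with /v1) untouched.
function stripProviderPrefix(path: string, provider: string): string {
  const prefix = `/v1/${provider}`;
  return path.startsWith(prefix) ? path.slice(prefix.length) : path;
}

console.log(stripProviderPrefix("/v1/anthropic/v1/messages", "anthropic")); // /v1/messages
console.log(stripProviderPrefix("/v1/openai/chat/completions", "openai"));  // /chat/completions
```

A naive `path.replace("/v1", "")` would mangle the Anthropic case by also touching the upstream `/v1/messages` segment, which is why "exactly once" is called out.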
## Compatibility routes

Compatibility routes keep the client-facing schema but select a provider family from the model prefix:

- `openai/<model-id>` → prefers `codex`, then `openai`
- `anthropic/<model-id>` → prefers `claude-code`, then `anthropic`
Examples:
- `/v1/ccflare/openai/chat/completions` with `"model": "openai/gpt-5.4"`
- `/v1/ccflare/openai/responses` with `"model": "anthropic/claude-sonnet-4"`
- `/v1/ccflare/anthropic/messages` with `"model": "openai/gpt-4o-mini"`
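The family selection above can be sketched like this. The preference lists come from the text; the function name and its return shape are illustrative assumptions.

```typescript
// Sketch: map a family-prefixed model id to an ordered list of preferred
// account providers, per the rules above. Not ccflare's actual code.
function providerPreference(model: string): string[] | null {
  if (model.startsWith("openai/")) return ["codex", "openai"];
  if (model.startsWith("anthropic/")) return ["claude-code", "anthropic"];
  return null; // no family prefix: caller decides a default
}

console.log(providerPreference("openai/gpt-4o-mini"));        // [ "codex", "openai" ]
console.log(providerPreference("anthropic/claude-sonnet-4")); // [ "claude-code", "anthropic" ]
```

Note that the model prefix, not the URL, drives the choice: as in the examples above, an `anthropic/...` model sent to `/v1/ccflare/openai/responses` still selects the Claude family.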
## Accounts

Add accounts through the management API:

```bash
curl -X POST http://localhost:8080/api/accounts \
  -H "content-type: application/json" \
  -d '{
    "name": "anthropic-main",
    "provider": "anthropic",
    "auth_method": "api_key",
    "api_key": "sk-ant-..."
  }'
```
```bash
curl -X POST http://localhost:8080/api/accounts \
  -H "content-type: application/json" \
  -d '{
    "name": "openai-main",
    "provider": "openai",
    "auth_method": "api_key",
    "api_key": "sk-openai-..."
  }'
```

Use the CLI/TUI for interactive OAuth setup:
```bash
# Claude Code OAuth
bun run ccflare --add-account work --provider claude-code

# Codex OAuth
bun run ccflare --add-account codex --provider codex
```

The management API also exposes provider-specific auth endpoints:
- `POST /api/auth/anthropic/init`
- `POST /api/auth/anthropic/complete`
- `POST /api/auth/openai/init`
- `POST /api/auth/openai/complete`
## Client configuration

Point Anthropic SDKs or curl at the Anthropic-prefixed base URL:

```bash
export ANTHROPIC_BASE_URL=http://localhost:8080/v1/anthropic
```

Point OpenAI-compatible clients at the OpenAI-prefixed base URL:

```bash
export OPENAI_BASE_URL=http://localhost:8080/v1/openai
```

You can configure both providers at the same time; ccflare keeps account selection isolated per provider.
## Example requests

```bash
curl -X POST http://localhost:8080/v1/anthropic/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-7-sonnet",
    "max_tokens": 128,
    "messages": [
      { "role": "user", "content": "Say hello from ccflare." }
    ]
  }'
```

```bash
curl -X POST http://localhost:8080/v1/openai/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Say hello from ccflare." }
    ]
  }'
```

```bash
curl -X POST http://localhost:8080/v1/openai/responses \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-4o",
    "input": "Summarize why provider-prefixed routing is useful."
  }'
```

```bash
curl -X POST http://localhost:8080/v1/ccflare/openai/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4",
    "messages": [
      { "role": "user", "content": "Say hello from the compatibility route." }
    ]
  }'
```

## Management API

Key endpoints:
- `GET /health` — status, account count, strategy, supported providers
- `GET /api/accounts` — list accounts
- `POST /api/accounts` — create an account
- `PATCH /api/accounts/:id` — update an account (rename, change `base_url`)
- `DELETE /api/accounts/:id` — remove an account
- `POST /api/accounts/:id/pause` / `resume` — exclude or restore an account
- `POST /api/accounts/:id/rename` — rename an account
- `GET /api/requests` — recent request summaries
- `GET /api/requests/detail` — detailed request info with payloads
- `GET /api/requests/stream` — live request stream via SSE
- `GET /api/analytics` — aggregated analytics
- `GET /api/stats` — usage and performance stats
- `POST /api/stats/reset` — reset usage statistics
- `GET /api/logs/stream` — live server logs via SSE
- `GET /api/logs/history` — historical log entries
- `GET /api/config` — current configuration
- `GET /api/config/strategy` — current load balancing strategy
- `POST /api/config/strategy` — update load balancing strategy
- `GET /api/strategies` — list available strategies
- `GET /api/config/retention` — data retention settings
- `POST /api/config/retention` — update data retention settings
- `POST /api/maintenance/cleanup` — run data cleanup
- `POST /api/maintenance/compact` — compact the database
## Access keys

When `require_access_keys: true` is set in `ccflare.json`, the `/v1/*` proxy path requires an access key per request, and every request is tagged with the corresponding `user_id` in the requests table — useful for multi-operator deployments that need both access control and per-user attribution.
The feature is opt-in and default-off. With the flag off, ccflare behavior is byte-identical to a build without this feature.
Set in `~/.config/ccflare/ccflare.json` (or the equivalent config location):

```json
{ "require_access_keys": true }
```

Create your first user (mirrors the `--add-account` bootstrap pattern):
```bash
bun run ccflare --add-user alice
# ✅ User "alice" created
# Access key (store now — not shown again):
# ccfk_<64-hex>
```

List users (shows names, IDs, and the created timestamp — never the key):

```bash
bun run ccflare --list-users
```

Clients send the key on either header — both work; the `ccfk_` prefix disambiguates:
```bash
# Authorization: Bearer
curl -H "Authorization: Bearer ccfk_..." http://localhost:8080/v1/anthropic/v1/messages ...

# Or x-api-key (what Claude Code's apiKeyHelper populates)
curl -H "x-api-key: ccfk_..." http://localhost:8080/v1/anthropic/v1/messages ...
```

The key is SHA-256 hashed at rest in the users table; the plaintext is shown only once, on CLI creation.
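The two accepted key locations and the hashed-at-rest check can be sketched like this. The header names and `ccfk_` prefix come from the text above; the helper functions themselves are illustrative assumptions, not ccflare's actual code.

```typescript
import { createHash } from "node:crypto";

// Sketch: accept the key from either Authorization: Bearer or x-api-key;
// the ccfk_ prefix tells access keys apart from provider API keys.
function extractAccessKey(headers: Record<string, string>): string | null {
  const bearer = headers["authorization"]?.match(/^Bearer (ccfk_\S+)$/)?.[1];
  if (bearer) return bearer;
  const apiKey = headers["x-api-key"];
  return apiKey && apiKey.startsWith("ccfk_") ? apiKey : null;
}

// Sketch: store only the SHA-256 digest; verify by re-hashing the
// presented key and comparing digests.
function sha256Hex(key: string): string {
  return createHash("sha256").update(key).digest("hex");
}

console.log(extractAccessKey({ authorization: "Bearer ccfk_abc" })); // ccfk_abc
console.log(extractAccessKey({ "x-api-key": "ccfk_abc" }));          // ccfk_abc
console.log(sha256Hex("ccfk_abc").length);                           // 64
```

Because only the digest is stored, a database leak does not expose usable keys; the trade-off is that a lost key cannot be recovered, only replaced.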
- `GET /api/users` — list users
- `POST /api/users` — create a user; returns the plaintext key once
- `DELETE /api/users/:id` — remove a user
These endpoints return `404` when `require_access_keys` is false — the admin surface is invisible unless the feature is on.
- WebSocket upgrades on `/v1` (dashboard live-stream) are exempt from the guard
- The internal tagging header (`x-ccflare-user-id`) is stripped before forwarding upstream — it never reaches Anthropic / OpenAI
- The access key check runs after WebSocket handling and before proxy forwarding, so all other ccflare behavior (rotation, rate-limit handling, request history) is unchanged
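Stripping the internal tagging header before the upstream hop can be sketched as follows. The header name comes from the list above; the function is an illustrative assumption.

```typescript
// Sketch: copy the incoming headers minus the internal user tag so it
// never reaches the upstream provider. Not ccflare's actual code.
function upstreamHeaders(incoming: Headers): Headers {
  const out = new Headers(incoming);
  out.delete("x-ccflare-user-id");
  return out;
}

const incoming = new Headers({
  "x-ccflare-user-id": "u_123",
  "content-type": "application/json",
});
const forwarded = upstreamHeaders(incoming);
console.log(forwarded.has("x-ccflare-user-id")); // false
console.log(forwarded.get("content-type"));      // application/json
```

Copying rather than mutating the incoming `Headers` keeps the original request intact for logging and request-history purposes.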
## Interfaces

- Dashboard: `http://localhost:8080`
- TUI: `bun run ccflare`
- Server only: `bun run start`
## Requirements

- Bun >= 1.2.8
- Anthropic and/or OpenAI credentials
## Documentation

Additional repo docs live in `docs/`.

## License

MIT — see LICENSE.
