ccflare 🛡️

A multi-provider native proxy for Anthropic and OpenAI.

ccflare routes each provider by URL prefix, load-balances across multiple accounts, and keeps full request history, rate-limit state, and usage analytics without translating provider payloads.

[Screenshot: ccflare dashboard]

Why ccflare?

  • Native passthrough — Anthropic stays Anthropic, OpenAI stays OpenAI
  • Multi-provider routing — route by /v1/{provider}/*
  • Compatibility routes — route by /v1/ccflare/* with family-prefixed models
  • Account failover — retry another account when one provider account is rate limited
  • Built-in observability — dashboard, request history, analytics, logs, and health endpoints
  • Flexible auth — API key and OAuth account support

Quick start

git clone https://github.com/snipeship/ccflare
cd ccflare
bun install

# Start the server + dashboard on http://localhost:8080
bun run start

# Or launch the TUI, which can also start the server
bun run ccflare

Verify the server is up:

curl http://localhost:8080/health

How routing works

ccflare proxies requests by provider prefix:

  • http://localhost:8080/v1/anthropic/*
  • http://localhost:8080/v1/openai/*
  • http://localhost:8080/v1/ccflare/*

Examples:

  • /v1/anthropic/v1/messages → https://api.anthropic.com/v1/messages
  • /v1/openai/chat/completions → https://api.openai.com/v1/chat/completions
  • /v1/openai/responses → https://api.openai.com/v1/responses

The /v1/{provider} prefix is stripped exactly once before forwarding upstream.
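The stripping rule can be sketched as follows (illustrative TypeScript, not ccflare's actual implementation; the upstream base URLs are taken from the mapping examples above):

```typescript
// Assumed upstream base URLs, inferred from the mapping examples above.
const UPSTREAMS: Record<string, string> = {
	anthropic: "https://api.anthropic.com",
	openai: "https://api.openai.com/v1",
};

// Strip the /v1/{provider} prefix exactly once and join the remainder
// onto the provider's upstream base URL. Unknown prefixes return null.
function resolveUpstream(path: string): string | null {
	const match = path.match(/^\/v1\/(anthropic|openai)(\/.*)$/);
	if (!match) return null;
	const [, provider, rest] = match;
	return UPSTREAMS[provider] + rest;
}
```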

Compatibility routes keep the client-facing schema but select a provider family from the model prefix:

  • openai/<model-id> → prefers codex, then openai
  • anthropic/<model-id> → prefers claude-code, then anthropic

Examples:

  • /v1/ccflare/openai/chat/completions with "model":"openai/gpt-5.4"
  • /v1/ccflare/openai/responses with "model":"anthropic/claude-sonnet-4"
  • /v1/ccflare/anthropic/messages with "model":"openai/gpt-4o-mini"
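The family-selection rule above can be sketched like this (an illustrative sketch; the function name is hypothetical, but the preference orders are those stated above):

```typescript
// Given a family-prefixed model id such as "openai/gpt-5.4", return the
// account types to try, in preference order. Unprefixed models return null.
function familyPreference(model: string): string[] | null {
	const slash = model.indexOf("/");
	if (slash === -1) return null;
	const family = model.slice(0, slash);
	if (family === "openai") return ["codex", "openai"];
	if (family === "anthropic") return ["claude-code", "anthropic"];
	return null;
}
```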

Account setup

API key accounts

Add accounts through the management API:

curl -X POST http://localhost:8080/api/accounts \
  -H "content-type: application/json" \
  -d '{
    "name": "anthropic-main",
    "provider": "anthropic",
    "auth_method": "api_key",
    "api_key": "sk-ant-..."
  }'

curl -X POST http://localhost:8080/api/accounts \
  -H "content-type: application/json" \
  -d '{
    "name": "openai-main",
    "provider": "openai",
    "auth_method": "api_key",
    "api_key": "sk-openai-..."
  }'

OAuth accounts

Use the CLI/TUI for interactive OAuth setup:

# Claude Code OAuth
bun run ccflare --add-account work --provider claude-code

# Codex OAuth
bun run ccflare --add-account codex --provider codex

The management API also exposes provider-specific auth endpoints:

  • POST /api/auth/anthropic/init
  • POST /api/auth/anthropic/complete
  • POST /api/auth/openai/init
  • POST /api/auth/openai/complete

Provider configuration

Anthropic clients

Point Anthropic SDKs or curl at the Anthropic-prefixed base URL:

export ANTHROPIC_BASE_URL=http://localhost:8080/v1/anthropic

OpenAI clients

Point OpenAI-compatible clients at the OpenAI-prefixed base URL:

export OPENAI_BASE_URL=http://localhost:8080/v1/openai

You can configure both providers at the same time; ccflare keeps account selection isolated per provider.

Example usage

Anthropic example

curl -X POST http://localhost:8080/v1/anthropic/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-7-sonnet",
    "max_tokens": 128,
    "messages": [
      { "role": "user", "content": "Say hello from ccflare." }
    ]
  }'

OpenAI chat completions example

curl -X POST http://localhost:8080/v1/openai/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Say hello from ccflare." }
    ]
  }'

OpenAI Responses API example

curl -X POST http://localhost:8080/v1/openai/responses \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-4o",
    "input": "Summarize why provider-prefixed routing is useful."
}'

ccflare compatibility example

curl -X POST http://localhost:8080/v1/ccflare/openai/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4",
    "messages": [
      { "role": "user", "content": "Say hello from the compatibility route." }
    ]
  }'

Management API

Key endpoints:

  • GET /health — status, account count, strategy, supported providers
  • GET /api/accounts — list accounts
  • POST /api/accounts — create an account
  • PATCH /api/accounts/:id — update an account (rename, change base_url)
  • DELETE /api/accounts/:id — remove an account
  • POST /api/accounts/:id/pause — exclude an account
  • POST /api/accounts/:id/resume — restore a paused account
  • POST /api/accounts/:id/rename — rename an account
  • GET /api/requests — recent request summaries
  • GET /api/requests/detail — detailed request info with payloads
  • GET /api/requests/stream — live request stream via SSE
  • GET /api/analytics — aggregated analytics
  • GET /api/stats — usage and performance stats
  • POST /api/stats/reset — reset usage statistics
  • GET /api/logs/stream — live server logs via SSE
  • GET /api/logs/history — historical log entries
  • GET /api/config — current configuration
  • GET /api/config/strategy — current load balancing strategy
  • POST /api/config/strategy — update load balancing strategy
  • GET /api/strategies — list available strategies
  • GET /api/config/retention — data retention settings
  • POST /api/config/retention — update data retention settings
  • POST /api/maintenance/cleanup — run data cleanup
  • POST /api/maintenance/compact — compact the database

Per-user access keys (opt-in)

When require_access_keys is true in ccflare.json, every request on the /v1/* proxy path must carry an access key, and each request is tagged with the corresponding user_id in the requests table. This is useful for multi-operator deployments that need both access control and per-user attribution.

The feature is opt-in and default-off. With the flag off, ccflare behavior is byte-identical to a build without this feature.

Enabling

# Set in ~/.config/ccflare/ccflare.json (or equivalent config location):
{ "require_access_keys": true }

Create your first user (mirrors the --add-account bootstrap pattern):

bun run ccflare --add-user alice
# ✅ User "alice" created
# Access key (store now — not shown again):
#   ccfk_<64-hex>

List users (shows names + IDs + created timestamp, never the key):

bun run ccflare --list-users

Using a key

Clients may send the key in either header; the ccfk_ prefix lets ccflare distinguish it from a provider API key:

# Authorization: Bearer
curl -H "Authorization: Bearer ccfk_..." http://localhost:8080/v1/anthropic/v1/messages ...

# Or x-api-key (what Claude Code's apiKeyHelper populates)
curl -H "x-api-key: ccfk_..." http://localhost:8080/v1/anthropic/v1/messages ...

The key is SHA-256 hashed at rest in the users table; plaintext is only shown once on CLI creation.
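The header handling and at-rest hashing described above can be sketched like this (an illustrative sketch assuming SHA-256 over the raw key string; `extractAccessKey` and `hashKey` are hypothetical names, not ccflare's actual functions):

```typescript
import { createHash } from "node:crypto";

// Pull a ccflare access key from either supported header.
// The ccfk_ prefix distinguishes access keys from provider API keys.
function extractAccessKey(headers: Headers): string | null {
	const bearer = headers.get("authorization")?.replace(/^Bearer\s+/i, "");
	if (bearer?.startsWith("ccfk_")) return bearer;
	const apiKey = headers.get("x-api-key");
	if (apiKey?.startsWith("ccfk_")) return apiKey;
	return null;
}

// Keys are compared by digest; the plaintext is never stored.
function hashKey(key: string): string {
	return createHash("sha256").update(key).digest("hex");
}
```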

Admin API (flag-gated)

  • GET /api/users — list users
  • POST /api/users — create a user, returns the plaintext key once
  • DELETE /api/users/:id — remove a user

These endpoints return 404 when require_access_keys is false — the admin surface is invisible unless the feature is on.

Notes

  • WebSocket upgrades on /v1 (dashboard live-stream) are exempt from the guard
  • Internal tagging header (x-ccflare-user-id) is stripped before forwarding upstream — never reaches Anthropic / OpenAI
  • The access key check runs after WebSocket handling and before proxy forwarding, so all other ccflare behavior (rotation, rate-limit handling, request history) is unchanged
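The header-stripping step from the notes above can be sketched as (illustrative; only the header name comes from the source):

```typescript
// Remove the internal attribution header before the request leaves ccflare,
// returning a copy so the original request headers are untouched.
function stripInternalHeaders(headers: Headers): Headers {
	const forwarded = new Headers(headers);
	forwarded.delete("x-ccflare-user-id");
	return forwarded;
}
```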

UI and developer tools

  • Dashboard: http://localhost:8080
  • TUI: bun run ccflare
  • Server only: bun run start

Requirements

  • Bun >= 1.2.8
  • Anthropic and/or OpenAI credentials

Documentation

Additional repo docs live in docs/.

License

MIT — see LICENSE.
