A lightweight LiteLLM server boilerplate pre-configured with uv and Docker for hosting your own OpenAI- and Anthropic-compatible endpoints. Includes LibreChat as an optional web UI.
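Since the proxy exposes OpenAI-compatible routes, a client can talk to it with nothing but the standard library. A minimal sketch, assuming the proxy runs on LiteLLM's default port 4000 with a placeholder key (`sk-example` and the model name are illustrative, not from the repo):

```python
import json
import urllib.request

# Assumption: a self-hosted LiteLLM proxy listening on its default port.
BASE_URL = "http://localhost:4000"

payload = {
    "model": "gpt-4o",  # any model name routed by your proxy config
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk-example",  # placeholder proxy key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send this to a running proxy;
# here we only build the request to show the endpoint shape.
print(req.full_url)
```

Any OpenAI SDK can be pointed at the same base URL instead, since the route shape matches the upstream API.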
Updated Dec 8, 2025 - Python
High-performance, async-first system that computes and monitors LLM API spend across millions of requests with PostgreSQL and Prometheus.
Adds LiteLLM as a provider within GitHub Copilot in VS Code, significantly expanding the range of models you can access. Available on the Visual Studio Code Marketplace: https://marketplace.visualstudio.com/items?itemName=Gethnet.litellm-connector-copilot
A VSCode extension to use LiteLLM Provider in Copilot Chat
Compact LLMOps stack for production-grade LLM apps: LiteLLM proxy + Langfuse observability. Self-hostable, scalable, open-source toolkit for AI engineers
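Wiring Langfuse observability into a LiteLLM proxy is typically done through the proxy's YAML config. A minimal sketch, assuming OpenAI as the upstream provider and Langfuse credentials supplied via environment variables (the model name is illustrative):

```yaml
# config.yaml — minimal LiteLLM proxy config with Langfuse tracing
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]  # emit traces to Langfuse on successful calls
```

Langfuse picks up its own keys (`LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`) from the environment when the callback fires.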