**Please make sure you read the contribution guide and file the issues in the right place.**
🔴 Required Information
Is your feature request related to a specific problem?
ADK follows a "Code-First, Environment-Aware" philosophy — API keys, project IDs, and locations are configured via environment variables, letting the same agent code move between local development, staging, and production without edits. Endpoint URLs (base_url) are the one gap.
Today, redirecting an ADK agent's LLM traffic to an enterprise gateway — for observability, auditing, regional routing, or staging parity — requires code changes at each model instantiation. This breaks the environment-parity promise that holds everywhere else in ADK:
```python
# Today: base_url is only settable in code
agent = Agent(model=Gemini(model="gemini-2.5-flash", base_url="http://gateway"))
agent = Agent(model=AnthropicLlm(model="claude-sonnet-4-...", base_url="..."))
agent = Agent(model=LiteLlm(model="openai/gpt-4o", base_url="..."))
```
There is no way to move between environments without touching source.
Describe the Solution You'd Like
Add environment-variable support for base_url on BaseLlm and the three built-in subclasses, extending ADK's existing env-driven pattern:
| Variable | Scope | Precedence |
|---|---|---|
| `ADK_LLM_BASE_URL` | Framework-wide fallback | lowest |
| `ADK_GEMINI_BASE_URL` | Gemini only | higher |
| `ANTHROPIC_BASE_URL` | `AnthropicLlm` (SDK-native) | higher |
| `LITELLM_API_BASE` / `OPENAI_API_BASE` / `OPENAI_BASE_URL` | `LiteLlm` (SDK-native) | higher |
| Explicit `base_url=...` in constructor | All | highest |
Behavior:
- If nothing is set, behavior is identical to today. Strictly additive.
- An explicit constructor value always wins.
- Provider-specific env vars win over the global fallback.
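Under these rules, resolution for any one provider reduces to a small helper. The sketch below is illustrative only; the helper name `resolve_base_url` is not part of the proposed API:

```python
import os
from typing import Optional

def resolve_base_url(explicit: Optional[str] = None,
                     provider_var: Optional[str] = None) -> Optional[str]:
    """Sketch of the proposed precedence:
    explicit constructor arg > provider-specific env var > ADK_LLM_BASE_URL."""
    if explicit:                                   # highest: value passed in code
        return explicit
    if provider_var and (value := os.environ.get(provider_var)):
        return value                               # provider-specific env var
    return os.environ.get("ADK_LLM_BASE_URL")      # lowest: global fallback

# Illustrative values only:
os.environ["ADK_LLM_BASE_URL"] = "http://gateway.internal"
os.environ["ADK_GEMINI_BASE_URL"] = "http://gemini-gw.internal"

print(resolve_base_url())                                    # http://gateway.internal
print(resolve_base_url(provider_var="ADK_GEMINI_BASE_URL"))  # http://gemini-gw.internal
print(resolve_base_url(provider_var="ADK_GEMINI_BASE_URL",
                       explicit="http://localhost:8636"))    # http://localhost:8636
```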
LiteLlm appends /v1 automatically when inheriting from the global fallback, so OpenAI-compatible endpoints work with a root URL. SDK-native vars (LITELLM_API_BASE, etc.) are pass-through because users setting those are expected to include the full path per LiteLLM convention.
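Concretely, the normalization applied to the global fallback could look roughly like this. The helper name and the version-segment regex are assumptions for illustration, not the final implementation:

```python
import re

def normalize_openai_compat(url: str) -> str:
    """Append /v1 to a root URL inherited from the global fallback,
    leaving URLs that already carry a version segment untouched (sketch)."""
    if re.search(r"/v\d+(beta)?/?$", url):
        return url
    return url.rstrip("/") + "/v1"

print(normalize_openai_compat("http://localhost:8636"))  # http://localhost:8636/v1
print(normalize_openai_compat("http://gw.internal/v1"))  # http://gw.internal/v1 (unchanged)
```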
Impact on your work
Our team runs ADK agents in an enterprise environment where LLM traffic is routed through an internal gateway for observability, policy enforcement, and regional controls. Today this requires a wrapper around every ADK model instantiation in our codebase. With this feature, the same agent code would work locally against public APIs and in production through the gateway, with zero code changes — the same way API keys and project IDs already work.
For a reproducible test environment, we validated against VidaiMock (Apache 2.0; disclosure: members of our team contribute to this project), which accepts native OpenAI, Anthropic, and Gemini protocols at root paths and supports per-request chaos-injection headers (x-vidai-chaos-drop, x-mock-status, latency overrides) for exercising SDK failure modes as well as the happy path. With the proposed change, a single env var routes all three ADK providers end-to-end:
```shell
$ export ADK_LLM_BASE_URL=http://localhost:8636
$ python - <<'EOF'
import asyncio

from google.adk.models.google_llm import Gemini
from google.adk.models.anthropic_llm import AnthropicLlm
from google.adk.models.lite_llm import LiteLlm
from google.adk.models.llm_request import LlmRequest
from google.genai import types

async def t(llm, name):
    req = LlmRequest(model=llm.model,
                     contents=[types.Content(role="user",
                                             parts=[types.Part(text="hi")])])
    async for r in llm.generate_content_async(req):
        print(f"[{name}] OK base_url={llm.base_url!r}")
        break

asyncio.run(t(Gemini(model="gemini-2.0-flash"), "Gemini   "))
asyncio.run(t(AnthropicLlm(model="claude-haiku-4"), "Anthropic"))
asyncio.run(t(LiteLlm(model="openai/gpt-4o"), "LiteLlm  "))
EOF
[Gemini   ] OK base_url='http://localhost:8636'
[Anthropic] OK base_url='http://localhost:8636'
[LiteLlm  ] OK base_url='http://localhost:8636/v1'  # /v1 auto-appended
```
This is a recurring friction point for any team running ADK behind a gateway.
Willingness to contribute
Are you interested in implementing this feature yourself or submitting a PR?
Yes. I have a working implementation with full unit test coverage (all existing model tests pass, plus new tests covering env var resolution order, explicit-override precedence, and the /v1 normalization path for LiteLlm). I am happy to submit the PR once the direction is confirmed.
🟡 Recommended Information
Describe Alternatives You've Considered
- Subclass each model in user code — works but duplicates boilerplate in every project and doesn't compose with ADK's `LlmRegistry` string-based model resolution.
- Monkey-patch at import time — fragile across ADK version bumps.
- Publish a wrapper package — usable as a fallback but doesn't solve the underlying ADK parity gap for other teams hitting the same problem.
None of these address the "one env var change between environments" property that ADK already provides for API keys and project config.
Proposed API / Implementation
Minimal sketch — the field lives on `BaseLlm`; each subclass overrides the `default_factory` to honor SDK-native env vars first:
```python
# google/adk/models/base_llm.py
import os
import re
from typing import Optional

from pydantic import BaseModel, Field


class BaseLlm(BaseModel):
    model: str
    base_url: Optional[str] = Field(
        default_factory=lambda: os.environ.get("ADK_LLM_BASE_URL")
    )


# google/adk/models/google_llm.py
class Gemini(BaseLlm):
    base_url: Optional[str] = Field(
        default_factory=lambda: (
            os.environ.get("ADK_GEMINI_BASE_URL")
            or os.environ.get("ADK_LLM_BASE_URL")
        )
    )


# google/adk/models/anthropic_llm.py
class AnthropicLlm(BaseLlm):
    base_url: Optional[str] = Field(
        default_factory=lambda: (
            os.environ.get("ANTHROPIC_BASE_URL")
            or os.environ.get("ADK_LLM_BASE_URL")
        )
    )


# google/adk/models/lite_llm.py — with /v1 auto-suffix on global fallback only
def _has_version_path(url: str) -> bool:
    # Sketch: treat URLs already ending in a version segment as complete.
    return bool(re.search(r"/v\d+/?$", url))


def _resolve_litellm_base_url() -> Optional[str]:
    for var in ("LITELLM_API_BASE", "OPENAI_API_BASE", "OPENAI_BASE_URL"):
        if value := os.environ.get(var):
            return value  # SDK-native vars are pass-through
    if value := os.environ.get("ADK_LLM_BASE_URL"):
        # Global fallback: ensure /v1 is present for OpenAI-compat transport
        return value if _has_version_path(value) else value.rstrip("/") + "/v1"
    return None
```
The resolved base_url is then passed to each SDK's client (AsyncAnthropic, AsyncAnthropicVertex, google.genai.Client, litellm.acompletion) at construction / invocation time.
Prior art in ADK:
- `GOOGLE_GENAI_USE_VERTEXAI` — environment-driven backend switch.
- `GOODMEM_BASE_URL` (plugin) — same pattern for a service endpoint.
- `DEMO_AGENT_MODEL` — environment-driven model selection in samples.
Additional Context
- Related discussion: Support for Environment-Driven Endpoint Configuration (base_url) #5378 (community feedback positive; adk-bot noted alignment with the `GOOGLE_GENAI_USE_VERTEXAI` and `GOODMEM_BASE_URL` precedents).
- One open question for the team: is a similar change already planned via your internal pipeline? I would like to avoid duplicating work if so. If it is not in-flight, I will open the PR with the implementation above within a few days.