diff --git a/docs.json b/docs.json
index f18fd5a6..ab189ab7 100644
--- a/docs.json
+++ b/docs.json
@@ -435,6 +435,13 @@
"enterprise/k8s-install/index",
"enterprise/k8s-install/resource-limits"
]
+ },
+ {
+ "group": "Tech Notes",
+ "pages": [
+ "enterprise/tech-notes/index",
+ "enterprise/tech-notes/llm-key-protection"
+ ]
}
]
}
diff --git a/enterprise/tech-notes/images/llm-key-architecture.d2 b/enterprise/tech-notes/images/llm-key-architecture.d2
new file mode 100644
index 00000000..b56b29b8
--- /dev/null
+++ b/enterprise/tech-notes/images/llm-key-architecture.d2
@@ -0,0 +1,110 @@
+direction: down
+
+title: LLM API Key Protection Architecture {
+ near: top-center
+ shape: text
+ style: {
+ font-size: 24
+ bold: true
+ }
+}
+
+agent-server: Agent Server (Python Process) {
+ style: {
+ fill: "#f8fafc"
+ stroke: "#94a3b8"
+ border-radius: 12
+ }
+
+ agent-loop: Agent Loop {
+ style: {
+ fill: "#eef2ff"
+ stroke: "#6366f1"
+ border-radius: 8
+ }
+ }
+
+ llm: LLM\n(api_key 🔑) {
+ style: {
+ fill: "#fef3c7"
+ stroke: "#f59e0b"
+ border-radius: 8
+ font-color: "#92400e"
+ }
+ }
+
+ provider-api: LiteLLM / Provider API {
+ style: {
+ fill: "#ecfdf5"
+ stroke: "#10b981"
+ border-radius: 8
+ }
+ }
+
+ agent-loop -> llm: {
+ style: {
+ stroke: "#6366f1"
+ stroke-width: 2
+ }
+ }
+
+ llm -> provider-api: {
+ style: {
+ stroke: "#6366f1"
+ stroke-width: 2
+ }
+ }
+
+ sandbox: Sandbox Environment (Isolated Container) {
+ style: {
+ fill: "#fef2f2"
+ stroke: "#ef4444"
+ stroke-dash: 5
+ border-radius: 10
+ }
+
+ bash: Bash Commands {
+ style: {
+ fill: "#ffffff"
+ stroke: "#d1d5db"
+ border-radius: 6
+ }
+ }
+
+ files: File Editor {
+ style: {
+ fill: "#ffffff"
+ stroke: "#d1d5db"
+ border-radius: 6
+ }
+ }
+
+ tools: Other Tools {
+ style: {
+ fill: "#ffffff"
+ stroke: "#d1d5db"
+ border-radius: 6
+ }
+ }
+
+ no-access: |md
+ ❌ No LLM_API_KEY
+ ❌ No SESSION_API_KEY
+ ❌ No Provider Credentials
+ | {
+ style: {
+ fill: "#fee2e2"
+ stroke: "#fecaca"
+ border-radius: 6
+ font-color: "#991b1b"
+ }
+ }
+ }
+
+ agent-loop -> sandbox: Tool calls\n(no secrets) {
+ style: {
+ stroke: "#9ca3af"
+ stroke-width: 2
+ }
+ }
+}
diff --git a/enterprise/tech-notes/images/llm-key-architecture.svg b/enterprise/tech-notes/images/llm-key-architecture.svg
new file mode 100644
index 00000000..da0875f0
--- /dev/null
+++ b/enterprise/tech-notes/images/llm-key-architecture.svg
@@ -0,0 +1,851 @@
+
diff --git a/enterprise/tech-notes/index.mdx b/enterprise/tech-notes/index.mdx
new file mode 100644
index 00000000..b6af39b9
--- /dev/null
+++ b/enterprise/tech-notes/index.mdx
@@ -0,0 +1,38 @@
+---
+title: Tech Notes
+description: In-depth technical articles on OpenHands architecture, security, and implementation details
+icon: file-lines
+---
+
+Tech Notes provide detailed technical explanations of how OpenHands works under
+the hood. These articles go beyond basic documentation to explain architecture
+decisions, security models, and implementation details that help you understand
+and trust the platform.
+
+## Available Tech Notes
+
+- **[LLM API Key Protection](/enterprise/tech-notes/llm-key-protection)**:
+  How OpenHands protects LLM API keys from agent access, including the
+  security architecture for LiteLLM virtual keys and BYOK configurations.
+
+
+## About Tech Notes
+
+Tech Notes are written for:
+
+- **Security teams** evaluating OpenHands for enterprise deployment
+- **Platform engineers** integrating OpenHands into existing infrastructure
+- **Developers** who want to understand how OpenHands works internally
+
+Each Tech Note includes:
+
+- Detailed technical explanations with code references
+- Security considerations and threat models
+- Architecture diagrams where applicable
+- Links to relevant source code for verification
diff --git a/enterprise/tech-notes/llm-key-protection.mdx b/enterprise/tech-notes/llm-key-protection.mdx
new file mode 100644
index 00000000..3c21d328
--- /dev/null
+++ b/enterprise/tech-notes/llm-key-protection.mdx
@@ -0,0 +1,370 @@
+---
+title: LLM API Key Protection
+description: How OpenHands protects LLM API keys from agent access and exfiltration
+icon: key
+---
+
+This technical note explains how OpenHands protects LLM API keys—including
+LiteLLM virtual keys and Bring Your Own Key (BYOK) configurations—from being
+accessed or exfiltrated by the AI agent running in the sandbox.
+
+## Overview
+
+When you use OpenHands, an AI agent executes code in a sandboxed environment.
+A natural security concern is: **can the agent access and steal the LLM API key
+used to power it?**
+
+The short answer is **no**. OpenHands implements multiple layers of protection
+to ensure that LLM API keys are never exposed to the agent's execution
+environment.
+
+## The LiteLLM Proxy Layer
+
+Before diving into SDK-level protections, it's important to understand the
+first layer of defense: the LiteLLM proxy.
+
+### How It Works
+
+OpenHands routes all LLM API calls through a LiteLLM proxy server. This proxy
+holds the actual provider API keys (OpenAI, Anthropic, etc.) and issues
+**virtual keys** to users:
+
+| Component | Who Configures It | What It Holds |
+|-----------|-------------------|---------------|
+| **LiteLLM Proxy** | OpenHands (SaaS) or Customer (Enterprise) | Master API keys for all LLM providers |
+| **Virtual Key** | Generated per Organization/Personal Workspace | Reference to master key, with budget/usage tracking |
+| **Agent Server (SDK)** | N/A | Holds only the virtual key, never the master keys |
+
+### Master Key Configuration
+
+- **OpenHands SaaS**: OpenHands configures master API keys in the LiteLLM proxy.
+ Users never see or handle provider API keys directly.
+- **OpenHands Enterprise**: Customers configure master API keys in their Helm
+ values file or VM Installer. These keys are stored in the LiteLLM proxy, not
+ in the application layer.
+
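+The proxy side of this split can be seen in LiteLLM's `config.yaml`. The
+sketch below is illustrative (the model and environment variable names are
+assumptions, not OpenHands defaults); the key point is that provider keys are
+referenced only here, inside the proxy:
+
+```yaml
+# Provider (master) keys live only in the proxy's environment/config.
+model_list:
+  - model_name: default-model
+    litellm_params:
+      model: anthropic/claude-sonnet-4-20250514
+      api_key: os.environ/ANTHROPIC_API_KEY
+
+general_settings:
+  # Authorizes administrative calls such as /key/generate.
+  master_key: os.environ/LITELLM_MASTER_KEY
+```
+
+Virtual keys minted by this proxy reference the `model_list` entries; callers
+never see the `api_key` values above.
+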
+### Virtual Keys Per Organization
+
+When a user or organization is created, OpenHands generates a **virtual key**
+in LiteLLM:
+
+```python
+# Simplified from enterprise/storage/lite_llm_manager.py
+
+key = await LiteLlmManager._generate_key(
+ client,
+ keycloak_user_id,
+ org_id,
+ key_alias,
+ max_budget,
+)
+
+# The virtual key is stored in user settings, NOT the master key
+oss_settings.update({
+ 'agent_settings_diff': {
+ 'llm': {
+ 'model': get_default_litellm_model(),
+ 'api_key': key, # Virtual key, not provider key
+ 'base_url': LITE_LLM_API_URL, # Points to proxy
+ }
+ }
+})
+```
+
+This virtual key:
+- **Cannot be used directly** with provider APIs (OpenAI, Anthropic, etc.)
+- **Only works** with the LiteLLM proxy that issued it
+- **Has budget limits** enforced by the proxy
+- **Can be revoked** without affecting other users
+
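+To make the indirection concrete, here is a toy sketch of what a key-issuing
+proxy does. This is not LiteLLM's implementation, just an illustration of the
+virtual-key to master-key mapping and budget enforcement:
+
+```python
+import secrets
+
+
+class ToyProxy:
+    """Illustrative only: maps virtual keys to one provider key and a budget."""
+
+    def __init__(self, provider_key: str):
+        self._provider_key = provider_key     # never leaves the proxy
+        self._budgets: dict[str, float] = {}  # virtual key -> remaining budget
+
+    def generate_key(self, max_budget: float) -> str:
+        key = "sk-virt-" + secrets.token_hex(8)
+        self._budgets[key] = max_budget
+        return key
+
+    def complete(self, virtual_key: str, cost: float) -> str:
+        budget = self._budgets.get(virtual_key)
+        if budget is None:
+            raise PermissionError("unknown or revoked virtual key")
+        if budget < cost:
+            raise PermissionError("budget exceeded")
+        self._budgets[virtual_key] = budget - cost
+        # A real proxy would now call the provider using self._provider_key.
+        return "ok"
+
+    def revoke(self, virtual_key: str) -> None:
+        self._budgets.pop(virtual_key, None)
+
+
+proxy = ToyProxy(provider_key="sk-real-master")
+virtual = proxy.generate_key(max_budget=5.0)
+proxy.complete(virtual, cost=0.25)  # allowed while budget remains
+proxy.revoke(virtual)               # provider key and other users unaffected
+```
+
+Revoking or exhausting one virtual key never touches the provider key, which
+is what makes a leaked virtual key far less damaging than a leaked master key.
+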
+### What About BYOK (Bring Your Own Key)?
+
+When users provide their own API keys through the OpenHands settings UI, the
+behavior depends on the configuration:
+
+| BYOK Scenario | Goes Through LiteLLM? | Key Exposure |
+|---------------|----------------------|--------------|
+| Custom `base_url` pointing to own LiteLLM | Yes (user's proxy) | User's proxy holds master key |
+| Custom `base_url` pointing directly to provider | No | Key goes directly to provider |
+| Only custom `api_key` (no custom `base_url`) | Yes (OpenHands proxy) | Key is passed to OpenHands LiteLLM proxy |
+
+In all cases, the API key (whether virtual or BYOK) is stored in
+`Agent.llm.api_key` and protected by the SDK-level mechanisms described below.
+
+## Architecture
+
+OpenHands uses a split architecture where:
+
+1. **The Agent Server** (Python process) holds sensitive credentials and makes
+ LLM API calls
+2. **The Sandbox** (isolated container or process) executes agent-requested
+ commands without access to credentials
+
+![LLM API Key Protection Architecture](images/llm-key-architecture.svg)
+
+## Protection Mechanisms
+
+### 1. LLM API Key Isolation
+
+The LLM's `api_key` is stored in the `Agent.llm.api_key` field as a Pydantic
+`SecretStr`. This key is:
+
+- Used **only within the SDK's Python process** when making API calls via
+ LiteLLM
+- **Never exported** as an environment variable to the shell
+- **Never accessible** via bash commands like `echo $LLM_API_KEY`
+
+The agent cannot request to "print environment variables" or write code that
+reads the LLM API key because **it simply doesn't exist** in the sandbox's
+environment.
+
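+The masking behavior of `SecretStr` can be mimicked in a few lines. This is a
+stdlib-only sketch of the idea (Pydantic's real `SecretStr` does more):
+
+```python
+class MaskedSecret:
+    """Stdlib sketch of Pydantic-SecretStr-style masking (illustrative)."""
+
+    def __init__(self, value: str):
+        self._value = value
+
+    def __str__(self) -> str:
+        return "**********"  # what logs, prints, and f-strings see
+
+    def __repr__(self) -> str:
+        return "MaskedSecret('**********')"
+
+    def get_secret_value(self) -> str:
+        # Reaching the raw value requires an explicit call, which the SDK
+        # makes only at the moment it signs the outgoing LLM request.
+        return self._value
+
+
+api_key = MaskedSecret("sk-virt-abc123")
+print(api_key)  # prints ********** and never the raw key
+```
+
+Because the raw value never leaves the Python process, there is nothing for a
+sandboxed command to read in the first place.
+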
+### 2. SESSION_API_KEY Stripping
+
+The `SESSION_API_KEY` is a credential that grants access to user secrets via
+the OpenHands API. If an agent could read this, it could potentially access
+other sensitive data.
+
+OpenHands explicitly strips this variable before any subprocess execution:
+
+```python
+# From openhands-sdk/openhands/sdk/utils/command.py
+
+_SENSITIVE_ENV_VARS = frozenset({"SESSION_API_KEY"})
+
+def sanitized_env(env: Mapping[str, str] | None = None) -> dict[str, str]:
+ """Return a copy of *env* with sanitized values.
+
+ Sensitive environment variables (e.g., ``SESSION_API_KEY``) are stripped
+ to prevent LLM-driven agents from accessing credentials via terminal
+ commands.
+ """
+ base_env: dict[str, str]
+ if env is None:
+ base_env = dict(os.environ)
+ else:
+ base_env = dict(env)
+
+ # Strip sensitive env vars to prevent agent access via bash commands
+ for key in _SENSITIVE_ENV_VARS:
+ base_env.pop(key, None)
+
+ return base_env
+```
+
+This `sanitized_env()` function is called in:
+- `bash_service.py` — before executing any bash command
+- `desktop_service.py` — before starting desktop processes
+- `vscode_service.py` — before launching VS Code
+- `skills_service.py` — before running skill-related processes
+
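+The effect is easy to verify end to end. The sketch below re-implements the
+stripping inline so it is self-contained (a POSIX shell is assumed):
+
+```python
+import os
+import subprocess
+
+_SENSITIVE_ENV_VARS = frozenset({"SESSION_API_KEY"})
+
+
+def sanitized_env() -> dict[str, str]:
+    env = dict(os.environ)
+    for key in _SENSITIVE_ENV_VARS:
+        env.pop(key, None)
+    return env
+
+
+os.environ["SESSION_API_KEY"] = "super-secret"
+result = subprocess.run(
+    ["/bin/sh", "-c", 'echo "key=[$SESSION_API_KEY]"'],
+    env=sanitized_env(),
+    capture_output=True,
+    text=True,
+)
+assert result.stdout.strip() == "key=[]"  # the variable simply is not there
+```
+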
+### 3. Registered Secrets: On-Demand Injection with Masking
+
+For secrets that **are** meant to be used by the agent (like `GITHUB_TOKEN` for
+git operations), OpenHands uses a controlled injection mechanism:
+
+1. **On-demand injection**: Secrets are only added to the environment when the
+ command text explicitly references them
+2. **Output masking**: Any secret values that appear in command output are
+   automatically replaced with `<secret-hidden>`
+
+#### Understanding "Controlled": LLM vs Agent Access
+
+Registered secrets are **accessible to the agent** but **hidden from the LLM**:
+
+| Layer | Access to Secret Values |
+|-------|------------------------|
+| **LLM** (language model) | ❌ Never sees actual values; masked as `<secret-hidden>` in conversation history |
+| **Agent** (sandbox execution) | ✅ Full access—can read, write to files, transmit over network |
+
+**How it works**: When a command outputs a secret value, it is replaced with
+`<secret-hidden>` before being added to the conversation history. This prevents
+the secret from appearing in prompts sent to the LLM. However, the agent executing
+in the sandbox has full access to use the secret as needed.
+
+**Expected behavior**: The agent will use registered secrets for legitimate tasks—writing
+to `.git-credentials`, including tokens in API headers, configuring services, etc.
+This is by design. Output masking keeps secrets out of conversation logs and the UI,
+but does not restrict how the agent uses them during execution.
+
+#### Implementation Details
+
+```python
+# From openhands-sdk/openhands/sdk/conversation/secret_registry.py
+
+def get_secrets_as_env_vars(self, command: str) -> dict[str, str]:
+ """Get secrets that should be exported as environment variables."""
+ found_secrets = self.find_secrets_in_text(command)
+
+ if not found_secrets:
+ return {}
+
+ env_vars = {}
+ for key in found_secrets:
+ source = self.secret_sources[key]
+ value = source.get_value()
+ if value:
+ env_vars[key] = value
+ # Track for masking
+ self._exported_values[key] = value
+
+ return env_vars
+
+def mask_secrets_in_output(self, text: str) -> str:
+ """Mask secret values in the given text."""
+ masked_text = text
+ for value in self._exported_values.values():
+        masked_text = masked_text.replace(value, "<secret-hidden>")
+ return masked_text
+```
+
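+Put together, the two functions behave like this toy registry (a sketch, not
+the SDK class; the mask string is illustrative):
+
+```python
+class ToySecretRegistry:
+    """Illustrative sketch of on-demand injection plus output masking."""
+
+    def __init__(self, secrets: dict[str, str]):
+        self._secrets = secrets              # name -> value
+        self._exported: dict[str, str] = {}  # values eligible for masking
+
+    def env_for_command(self, command: str) -> dict[str, str]:
+        # Inject a secret only when the command text names it.
+        env = {}
+        for name, value in self._secrets.items():
+            if name in command:
+                env[name] = value
+                self._exported[name] = value
+        return env
+
+    def mask(self, output: str) -> str:
+        for value in self._exported.values():
+            output = output.replace(value, "<secret-hidden>")
+        return output
+
+
+registry = ToySecretRegistry({"GITHUB_TOKEN": "ghp_example123"})
+
+# A command that never mentions the secret gets no injection at all.
+assert registry.env_for_command("ls -la") == {}
+
+# A command that references it gets the value injected...
+env = registry.env_for_command("git push https://$GITHUB_TOKEN@github.com/org/repo")
+assert env == {"GITHUB_TOKEN": "ghp_example123"}
+
+# ...and any echo of the value is masked before reaching the LLM.
+assert registry.mask("token is ghp_example123") == "token is <secret-hidden>"
+```
+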
+### 4. LookupSecret for Dynamic Tokens
+
+For OAuth tokens and other credentials that may be refreshed, OpenHands uses
+`LookupSecret` which fetches tokens via authenticated HTTP requests at runtime:
+
+```python
+# From openhands-sdk/openhands/sdk/secret/secrets.py
+
+class LookupSecret(SecretSource):
+ """A secret looked up from some external url"""
+
+ url: str
+ headers: dict[str, str] = Field(default_factory=dict)
+
+ def get_value(self) -> str:
+ response = httpx.get(self.url, headers=self.headers, timeout=30.0)
+ response.raise_for_status()
+ return response.text
+```
+
+This means tokens are never stored statically in the sandbox—they're fetched
+fresh when needed, and the fetch URL/headers are also protected.
+
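+The fetch-on-use pattern is easy to isolate. In this sketch the injected
+`fetch` callable stands in for the authenticated `httpx.get` call:
+
+```python
+from collections.abc import Callable
+
+
+class ToyLookupSecret:
+    """Sketch of fetch-on-use secrets; `fetch` stands in for httpx.get."""
+
+    def __init__(self, fetch: Callable[[], str]):
+        self._fetch = fetch
+
+    def get_value(self) -> str:
+        # No caching: every use re-fetches, so rotated or refreshed tokens
+        # are picked up automatically.
+        return self._fetch()
+
+
+calls = {"n": 0}
+
+
+def fake_token_endpoint() -> str:
+    calls["n"] += 1
+    return f"token-v{calls['n']}"
+
+
+secret = ToyLookupSecret(fake_token_endpoint)
+assert secret.get_value() == "token-v1"
+assert secret.get_value() == "token-v2"  # refreshed on every lookup
+```
+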
+## Security Testing
+
+The SDK includes explicit security tests to verify these protections work:
+
+```python
+# From tests/agent_server/test_terminal_service.py
+
+@pytest.mark.asyncio
+async def test_terminal_does_not_expose_session_api_key(bash_service, monkeypatch):
+ """Verify SESSION_API_KEY is not accessible to bash commands.
+
+ This is a security test: SESSION_API_KEY grants access to user secrets via
+ the SaaS API. If an LLM-driven agent could read this env var via terminal
+ commands, it could exfiltrate all user secrets.
+ """
+ secret_value = "super-secret-session-key-12345"
+ monkeypatch.setenv("SESSION_API_KEY", secret_value)
+
+ # Agent tries to read the env var
+ request = ExecuteBashRequest(
+ command='echo "SESSION_API_KEY=$SESSION_API_KEY"',
+ cwd="/tmp",
+ )
+ command, task = await bash_service.start_bash_command(request)
+ await task
+
+    # The secret value should NOT appear in the output.
+    # (combined_stdout concatenates the command's stdout events;
+    # the collection code is abridged in this excerpt.)
+    assert secret_value not in combined_stdout
+```
+
+## Secret Exposure Summary (Including BYOK)
+
+Whether keys are issued by OpenHands or provided by users through the settings
+UI, each secret type is exposed as follows:
+
+| Secret Type | Exposed to LLM? | Exposed to Agent? | Notes |
+|-------------|-----------------|-------------------|-------|
+| LLM API Key | ❌ No | ❌ No | Stored in `Agent.llm.api_key`, used only by SDK |
+| LiteLLM Virtual Key | ❌ No | ❌ No | Same protection as direct API keys |
+| GitHub/GitLab Tokens | ❌ No | ✅ Yes | Agent can use for git operations, write to files, etc. |
+| Custom Secrets | ❌ No | ✅ Yes | Agent can use as needed for tasks |
+
+The LLM API key specifically is **never** injected into the agent environment,
+regardless of whether it's a direct provider key, a LiteLLM virtual key, or
+an OpenHands-provided key.
+
+Registered secrets (GitHub/GitLab tokens, custom secrets) are fully accessible
+to the agent by design—this is required for the agent to perform tasks like
+pushing code or calling APIs. Output masking ensures these values don't appear
+in conversation history sent to the LLM.
+
+**Important**: A user can instruct the agent to write secret values to files.
+While output masking prevents the secret from appearing in the tool call results
+sent to the LLM, once written to disk the agent could subsequently read the file
+and include its contents in a message, pass the value to an external LLM, or
+transmit it via network requests. The security mechanism is designed to protect
+secrets from the LLM—not from the end user who stored them originally. Users
+retain full control over their own secrets and can direct the agent to use them
+however they choose.
+
+## Potential Attack Vectors (and Mitigations)
+
+### Could an agent write a program to read env vars?
+
+The agent could write Python code like:
+```python
+import os
+print(os.environ.get('LLM_API_KEY', 'not found'))
+```
+
+**Mitigation**: The variable doesn't exist in the subprocess environment—it
+returns `'not found'`.
+
+### Could an agent read the agent-server's memory?
+
+In theory, a malicious program could try to read `/proc/<pid>/environ` of the
+parent process.
+
+**Mitigation**: The sandbox runs in an isolated container (Docker) with no
+access to the host's process space. The agent-server process is outside the
+container.
+
+### Could an agent intercept LLM API calls?
+
+The agent doesn't make LLM calls—the agent server does. The agent only
+receives the LLM's text responses, not the API request/response details.
+
+### Could secrets leak through error messages?
+
+FastAPI validation errors could potentially echo back request bodies containing
+secrets.
+
+**Mitigation**: OpenHands sanitizes all validation error responses:
+
+```python
+# From openhands-agent-server/openhands/agent_server/api.py
+
+def _sanitize_validation_errors(errors: Sequence[Any]) -> list[dict]:
+ """Sanitize validation error details to remove sensitive input values."""
+ sanitized: list[dict] = []
+ for error in errors:
+ error = dict(error)
+ if "input" in error:
+ error["input"] = sanitize_dict(error["input"])
+ sanitized.append(error)
+ return sanitized
+```
+
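+`sanitize_dict` itself is not shown in the excerpt above; a plausible sketch
+(the key list and redaction marker are assumptions) redacts values stored
+under credential-like keys:
+
+```python
+SENSITIVE_KEY_HINTS = ("api_key", "token", "secret", "password")
+
+
+def sanitize_dict(data):
+    """Recursively redact values stored under credential-like keys."""
+    if isinstance(data, dict):
+        return {
+            key: "**REDACTED**"
+            if any(hint in key.lower() for hint in SENSITIVE_KEY_HINTS)
+            else sanitize_dict(value)
+            for key, value in data.items()
+        }
+    if isinstance(data, list):
+        return [sanitize_dict(item) for item in data]
+    return data
+
+
+body = {"model": "some-model", "api_key": "sk-abc"}
+assert sanitize_dict(body) == {"model": "some-model", "api_key": "**REDACTED**"}
+```
+
+A validation error that would otherwise echo the request body back to the
+caller now echoes only the redacted form.
+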
+## Summary
+
+OpenHands protects LLM API keys through defense-in-depth:
+
+1. **Architectural separation**: Keys live in the agent server, not the sandbox
+2. **Environment stripping**: Sensitive vars are removed before subprocess exec
+3. **On-demand injection**: Only explicitly-needed secrets are injected
+4. **Output masking**: Secret values are redacted from all output
+5. **Container isolation**: Sandbox cannot access host process memory
+
+This design ensures that even a malicious or manipulated agent cannot access
+or exfiltrate LLM API keys or LiteLLM virtual keys.
+
+## References
+
+- [OpenHands SDK Source Code](https://github.com/OpenHands/software-agent-sdk)
+- [Secret Registry Implementation](https://github.com/OpenHands/software-agent-sdk/blob/main/openhands-sdk/openhands/sdk/conversation/secret_registry.py)
+- [Sanitized Environment Implementation](https://github.com/OpenHands/software-agent-sdk/blob/main/openhands-sdk/openhands/sdk/utils/command.py)
+- [Security Tests](https://github.com/OpenHands/software-agent-sdk/blob/main/tests/agent_server/test_terminal_service.py)