fix(models): proxy custom OpenAI-compatible models through backend to bypass CORS (#723)
Add createCustomProxyFetch factory and isLocalhostUrl helper.
- Cloud URLs POST to /v1/custom-model/proxy via the authenticated HttpClient; upstreamAuth travels in the request body only (never in the Authorization header).
- Localhost/loopback URLs use globalThis.fetch directly (CORS carve-out).
- Tauri runtime delegates to src/lib/fetch (native-fetch path, unchanged).
- createModel accepts an optional httpClient and threads it to the custom case.
- 29 tests covering localhost detection, routing decisions, and security invariants.
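The routing above can be sketched roughly as follows. createCustomProxyFetch and isLocalhostUrl are the names this PR introduces, but the HttpClient shape and both signatures here are assumptions for illustration, not the merged implementation.

```typescript
// Hypothetical shape of the authenticated backend client used below.
type HttpClient = {
  post(path: string, body: unknown): Promise<Response>;
};

// Loopback detection: localhost, the 127.0.0.0/8 range, and IPv6 ::1.
function isLocalhostUrl(url: string): boolean {
  try {
    const host = new URL(url).hostname;
    return (
      host === "localhost" ||
      host === "[::1]" || // WHATWG URL keeps brackets on IPv6 hostnames
      /^127\.\d{1,3}\.\d{1,3}\.\d{1,3}$/.test(host)
    );
  } catch {
    return false; // unparseable URLs never take the direct path
  }
}

function createCustomProxyFetch(httpClient: HttpClient) {
  return async (
    url: string,
    init: { body?: string; headers?: Record<string, string> } = {},
  ): Promise<Response> => {
    if (isLocalhostUrl(url)) {
      // CORS carve-out: the backend cannot reach the user's loopback,
      // so loopback endpoints are fetched directly from the browser.
      return globalThis.fetch(url, init);
    }
    // Cloud URLs: the upstream credential rides in the request body only,
    // never in the browser->backend Authorization header.
    return httpClient.post("/v1/custom-model/proxy", {
      url,
      upstreamAuth: init.headers?.["Authorization"],
      body: init.body,
    });
  };
}
```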
…end proxy
Fixes the CORS-blocked model-list fetch for cloud custom URLs by POSTing to /v1/custom-model/models via the authenticated httpClient. Localhost URLs keep the existing direct fetch path. Maps ProxyErrorCode values to user-friendly error messages.
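That error-code mapping could look something like the sketch below. The ProxyErrorCode values and message strings here are illustrative stand-ins; the real enum lives in the shared wire types this PR adds and is not shown in this page.

```typescript
// Illustrative error codes; the PR's actual ProxyErrorCode union may differ.
type ProxyErrorCode =
  | "UPSTREAM_TIMEOUT"
  | "BLOCKED_URL"
  | "RATE_LIMITED"
  | "BAD_CONTENT_TYPE";

const PROXY_ERROR_MESSAGES: Record<ProxyErrorCode, string> = {
  UPSTREAM_TIMEOUT: "The model endpoint took too long to respond.",
  BLOCKED_URL: "This URL points to a private or reserved address and cannot be proxied.",
  RATE_LIMITED: "Too many requests. Please wait a moment and try again.",
  BAD_CONTENT_TYPE: "The endpoint did not return JSON or an event stream.",
};

// Unknown codes fall back to a generic, non-technical message.
function proxyErrorToMessage(code: string): string {
  return (
    PROXY_ERROR_MESSAGES[code as ProxyErrorCode] ??
    "Could not reach the custom model endpoint."
  );
}
```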
Semgrep Security Scan: no security issues found.
PR Metrics (updated Fri, 24 Apr 2026 20:37:26 GMT · run #1201)
…mitted path fixes
…ject tauri fetch, prune tautological tests
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit ad596a2.
…dels-in-web Signed-off-by: Ítalo Menezes <italo.menezes@gmail.com>

Problem
The web app cannot add custom OpenAI-compatible model endpoints — browser CORS blocks GET {url}/v1/models and POST {url}/v1/chat/completions. Tauri desktop/mobile already bypass this via native fetch.
Solution
Two new backend routes proxy cloud URLs through the existing SSRF-hardened createSafeFetch (backend/src/utils/url-validation.ts). Localhost URLs keep direct browser fetch (the backend can't reach the user's loopback). The Tauri path is unchanged.
- POST /v1/custom-model/proxy — chat completions, streaming via the OpenAI SDK.
- POST /v1/custom-model/models — model discovery.
Why
createSafeFetch instead of undici.Agent: the Phase-1.5 spike proved Bun 1.3.10 silently ignores undici's connect hook. createSafeFetch is already production-tested (pro/proxy.ts, pro/link-preview.ts, mcp-proxy/routes.ts) and uses the same ipaddr.js denylist. Zero new npm deps.
Security
SSRF defense via createSafeFetch (RFC 1918, loopback, link-local incl. 169.254.169.254, CGNAT, IPv4-mapped IPv6, etc.). Per-user rate limit of 60 req/min. Content-Type gate (application/json | text/event-stream). 101 Switching Protocols → 502. 50 MB total byte cap. Auth required; upstreamAuth is redacted at the pino level and kept in the body (never in the query string or Authorization header on the browser → backend leg). Outbound User-Agent: Thunderbolt-Proxy/1.0 + X-Abuse-Contact.
Changes
- shared/custom-model-proxy.ts
- backend/src/inference/custom-model-proxy.ts
- backend/src/inference/client.ts: getCustomModelClient factory
- backend/src/config/logger.ts: upstreamAuth redaction
- src/ai/fetch.ts, src/ai/custom-proxy-fetch.ts: tauriFetch for tests
- src/settings/models/index.tsx
- src/ai/is-localhost-url.ts
Test plan
- bun tsc --noEmit clean (backend + frontend, apart from pre-existing bun:test noise on main)
- bun test src/lib/http.test.ts — 13/13
- bun test src/services/encryption.test.ts — 16/16
- bun test src/ai/ — 210/210
- bun test src/settings/models/ — 2/2
- bun test backend/src/inference/ — 32/32
- url-validation.test.ts
Note
High Risk
Adds new authenticated proxy endpoints that forward user-supplied URLs and API keys and perform outbound network requests/streaming, which is inherently SSRF- and data-exfiltration-sensitive despite added guards (safe fetch, validation, rate limits, redaction). Failures or gaps in validation/limits could impact security or availability.
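For a sense of what such validation guards against: the PR reuses createSafeFetch's ipaddr.js-based denylist, and a simplified pure-TypeScript stand-in covering only the IPv4 ranges named above might look like this. The real check also handles IPv6, IPv4-mapped IPv6, and DNS resolution before connecting, none of which is shown here.

```typescript
// Blocked CIDR ranges mirroring those named in the PR description.
const BLOCKED_IPV4_RANGES: Array<[base: string, prefixBits: number]> = [
  ["127.0.0.0", 8],    // loopback
  ["10.0.0.0", 8],     // RFC 1918
  ["172.16.0.0", 12],  // RFC 1918
  ["192.168.0.0", 16], // RFC 1918
  ["169.254.0.0", 16], // link-local (incl. 169.254.169.254 metadata endpoint)
  ["100.64.0.0", 10],  // CGNAT
];

// Parse a dotted-quad IPv4 string into a 32-bit unsigned integer.
function ipv4ToInt(ip: string): number | null {
  const parts = ip.split(".");
  if (parts.length !== 4) return null;
  let n = 0;
  for (const p of parts) {
    if (p.length === 0) return null;
    const octet = Number(p);
    if (!Number.isInteger(octet) || octet < 0 || octet > 255) return null;
    n = n * 256 + octet;
  }
  return n;
}

// True when the address falls inside any blocked range.
function isBlockedIPv4(ip: string): boolean {
  const addr = ipv4ToInt(ip);
  if (addr === null) return false;
  return BLOCKED_IPV4_RANGES.some(([base, bits]) => {
    const baseInt = ipv4ToInt(base)!;
    const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return ((addr & mask) >>> 0) === ((baseInt & mask) >>> 0);
  });
}
```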
Overview
Enables web usage of custom OpenAI-compatible model endpoints by routing model discovery and chat completions through new authenticated backend proxy routes (POST /v1/custom-model/models and POST /v1/custom-model/proxy) to avoid browser CORS. The proxy layer adds SSRF-focused validation, per-user rate limiting, content-type checks, protocol-upgrade blocking, request timeouts, and response size caps, and forces outbound User-Agent / X-Abuse-Contact headers; logging is updated to redact authorization/apiKey/upstreamAuth.
Frontend model creation/testing now uses a createCustomProxyFetch wrapper to send cloud requests via the backend (keeping upstream keys out of browser→third-party traffic), while localhost/loopback URLs still use direct fetch; the Models UI routes custom /models fetching via the proxy and maps proxy error codes to user-friendly messages, with added shared wire types and tests.
Reviewed by Cursor Bugbot for commit a6e8bc0.