Add optional OpenAI-compatible endpoint support per agent #65
AXGZ21 wants to merge 1 commit into TinyAGI:main
Conversation
@codex review this PR
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 76a322ba33
fi

if [[ "$SETUP_AGENTS" =~ ^[yY] ]]; then
  AGENTS_JSON="$AGENTS_JSON \"$DEFAULT_AGENT_NAME\": { \"name\": \"$DEFAULT_AGENT_DISPLAY\", \"provider\": \"$PROVIDER\", \"model\": \"$MODEL\", \"working_directory\": \"$DEFAULT_AGENT_DIR\"${DEFAULT_OPENAI_JSON} }"
Keep default agent inheriting models.openai defaults
The setup wizard serializes the provider-level OpenAI endpoint/key into the default agent object (${DEFAULT_OPENAI_JSON}), which turns those values into per-agent overrides instead of defaults. Because getAgents merges models.openai first and then agent.openai (src/lib/config.ts lines 89-93), later rotating only models.openai.api_key or models.openai.base_url will not affect the default agent created by setup, leaving it on stale credentials.
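The override problem can be sketched with a minimal merge function mirroring the described order (`models.openai` spread first, then `agent.openai`). The interface and function names below are illustrative, not the PR's actual code:

```typescript
// Hypothetical shapes mirroring the merge described in the review:
// global models.openai defaults are applied first, then per-agent overrides.
interface OpenAISettings {
  base_url?: string;
  api_key?: string;
}

function resolveOpenAI(
  globalDefaults: OpenAISettings,
  agentOverrides?: OpenAISettings,
): OpenAISettings {
  // Later keys win: anything serialized onto the agent shadows the global default.
  return { ...globalDefaults, ...agentOverrides };
}

// Agent created by the wizard with the key baked in as a per-agent override:
const baked = resolveOpenAI(
  { api_key: "sk-new-rotated" }, // rotated global models.openai key
  { api_key: "sk-old", base_url: "https://proxy.example/v1" },
);
// baked.api_key === "sk-old" -- the rotation never reaches this agent.

// Agent left to inherit the provider-level defaults:
const inherited = resolveOpenAI({ api_key: "sk-new-rotated" }, undefined);
// inherited.api_key === "sk-new-rotated"
```

This is why the wizard should only write `openai` onto the agent object when the user explicitly entered per-agent values, and otherwise leave the agent to inherit `models.openai`.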
const codexOutput = await runCommand('codex', codexArgs, workingDir);
const openAIEnv = buildOpenAIEnvOverrides(agent);
if (openAIEnv?.OPENAI_BASE_URL) {
  log('INFO', `Using custom OpenAI-compatible endpoint for agent ${agentId}: ${openAIEnv.OPENAI_BASE_URL}`);
Avoid logging full custom OpenAI endpoint URLs
This logs the configured endpoint verbatim, so any credentials embedded in the URL (for example https://user:pass@host/... or tokenized query strings used by some proxies) will be persisted in logs. Since this path runs on every OpenAI invocation when a custom endpoint is set, it creates an avoidable secret-leak surface; the log should redact sensitive URL components or only state that a custom endpoint is enabled.
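One way to do that redaction, sketched here with the WHATWG `URL` class (the function name is hypothetical; the PR would plug something like it into the log call):

```typescript
// Sketch: strip credentials and query strings from a custom endpoint before
// logging. Falls back to a generic message if the value is not a parseable URL.
function redactEndpoint(raw: string): string {
  try {
    const url = new URL(raw);
    url.username = ""; // drop https://user:pass@... basic-auth credentials
    url.password = "";
    url.search = "";   // drop tokenized query strings used by some proxies
    return `${url.protocol}//${url.host}${url.pathname}`;
  } catch {
    return "<custom endpoint configured>";
  }
}

// redactEndpoint("https://user:pass@proxy.example/v1?token=abc")
//   -> "https://proxy.example/v1"
```

Host and path are usually enough for operators to recognize which endpoint is in use, without persisting secrets to the log file.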
## Summary
- add optional openai.base_url and openai.api_key settings for OpenAI agents
- apply global models.openai endpoint defaults to OpenAI agents while allowing per-agent overrides
- export endpoint/key env vars when invoking Codex (OPENAI_BASE_URL, OPENAI_API_BASE, OPENAI_API_KEY)
- extend setup flows (inyclaw setup, inyclaw agent add) to support custom OpenAI-compatible model names and optional endpoint/key input
- document the new config options in README and AGENTS docs

## Validation
- bash -n lib/setup-wizard.sh
- bash -n lib/agents.sh
- npm run build

Closes #41.