
feat(oauth): OAuth Codex PKCE + LiteLLM 1.83.3 security upgrade#2

Closed
NeritonDias wants to merge 20 commits into EvolutionAPI:main from NeritonDias:feat/oauth-codex-v2

Conversation

@NeritonDias

@NeritonDias NeritonDias commented Apr 18, 2026

Summary

This replaces the previous device-code / openai/ remapping approach (originally proposed in evo-crm-community#14) with a correct PKCE browser flow wired against LiteLLM's native chatgpt/ provider, plus security hardening.

1. LiteLLM bump 1.68.2 → 1.83.3

  • 1.68.2 is vulnerable to CVE-2026-35030 (OIDC auth bypass, fixed in 1.83.0).
  • 1.68.2 pre-dates the March 2026 TeamPCP supply-chain attack that poisoned 1.82.7 and 1.82.8.
  • 1.83.3 ships the native chatgpt/ provider (litellm/llms/chatgpt/authenticator.py) that reads ~/.config/litellm/chatgpt/auth.json and talks to chatgpt.com/backend-api/codex. Without the bump, every LiteLlm(model="chatgpt/gpt-5.x") call at runtime fails with 'LLM Provider NOT provided'.
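The auth.json handoff described above can be sketched roughly as follows; the `write_chatgpt_auth_json` helper exists in this PR, but its exact signature and the token payload shape are assumptions here, since the real schema is whatever LiteLLM's authenticator.py expects:

```python
import fcntl
import json
from pathlib import Path

# Path read by LiteLLM's native chatgpt/ provider, per the description above.
AUTH_JSON = Path.home() / ".config" / "litellm" / "chatgpt" / "auth.json"


def write_chatgpt_auth_json(tokens: dict, path: Path = AUTH_JSON) -> None:
    """Write OAuth tokens under an exclusive file lock (Unix-only fcntl).

    The token field names are illustrative, not the confirmed schema.
    """
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block concurrent agent builds
        json.dump(tokens, f)
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)
```

The file lock matters because several agent builds can run in the same process pool and would otherwise interleave partial writes.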

2. OAuth security hardening (oauth_codex_service.py)

  • complete_auth_flow() now validates the state parameter from the callback URL against the value stored encrypted during generate_auth_url(). Previous code only extracted code, leaving the PKCE flow open to CSRF / authorization-code-injection.
  • complete_auth_flow() / disconnect_oauth() take SELECT ... FOR UPDATE on the ApiKey row so concurrent callers cannot interleave reads/writes of oauth_data or is_active.
  • get_fresh_token() keeps the fast read path lock-free but takes a row-level lock and re-reads when the token is within 5 min of expiry, so only one worker actually calls OpenAI's refresh endpoint while concurrent callers observe the freshly stored tokens.
  • plan_type="plus" is no longer hardcoded; best-effort extracted from id_token claims (chatgpt_plan_type, plan_type, https://api.openai.com/plan) and omitted when absent.
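The S256 challenge generation and the state check described above can be sketched with only the stdlib (helper names are illustrative, not the service's actual API):

```python
import base64
import hashlib
import secrets
from urllib.parse import parse_qs, urlparse


def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge


def validate_callback_state(callback_url: str, stored_state: str) -> str:
    """Return the authorization code only if the callback state matches."""
    params = parse_qs(urlparse(callback_url).query)
    state = params.get("state", [""])[0]
    # Constant-time compare; a mismatch is treated as CSRF / code injection.
    if not secrets.compare_digest(state, stored_state):
        raise ValueError("state mismatch: possible CSRF or code injection")
    code = params.get("code", [""])[0]
    if not code:
        raise ValueError("missing authorization code in callback URL")
    return code
```

The point of the state check is exactly the gap the previous code left open: extracting `code` without comparing `state` lets an attacker splice their own authorization code into a victim's callback.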

3. Constants + scopes aligned with upstream Codex CLI

  • New env-overridable CODEX_AUTH_URL and CODEX_REDIRECT_URI in oauth_constants.py (they were hardcoded in the service).
  • Default CODEX_SCOPES now matches the Codex CLI: openid profile email offline_access api.connectors.read api.connectors.invoke. This ensures the id_token carries organization claims needed by _extract_account_id() and that future api.connectors.* features work without a re-consent.
  • New CODEX_ID_TOKEN_ADD_ORGS toggle (default true) appends id_token_add_organizations=true to the authorize URL.
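A sketch of what the env-overridable constants might look like in oauth_constants.py; the scope string and the toggle's default come from the bullets above, while the module layout and parsing details are assumptions:

```python
import os

# Default scope string matches the upstream Codex CLI (quoted above).
CODEX_SCOPES = os.getenv(
    "CODEX_SCOPES",
    "openid profile email offline_access api.connectors.read api.connectors.invoke",
)

# Default true, per the description; truthy-string parsing is illustrative.
CODEX_ID_TOKEN_ADD_ORGS = os.getenv("CODEX_ID_TOKEN_ADD_ORGS", "true").strip().lower() in (
    "1",
    "true",
    "yes",
)
```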

Test plan

  • POST /api/v1/agents/oauth/codex/auth-start returns an authorize_url that contains client_id=app_EMoamEEZ73f0CkXaXp7hrann, code_challenge_method=S256, codex_cli_simplified_flow=true, id_token_add_organizations=true and the full scope string
  • Complete the browser flow; POST /api/v1/agents/oauth/codex/auth-complete rejects a callback URL with a tampered state
  • get_fresh_token() correctly refreshes a token within 5 min of expiry and does not thrash on concurrent calls
  • Building an agent with model="chatgpt/gpt-5.4" succeeds and ~/.config/litellm/chatgpt/auth.json is written
  • Existing agents using API keys continue to work

Related PRs

Summary by Sourcery

Introduce OAuth Codex PKCE browser flow with secure token storage and integrate it with LiteLLM’s native ChatGPT provider while upgrading LiteLLM to a secure version.

New Features:

  • Add OAuth Codex PKCE flow service and API endpoints for starting, completing, checking status of, and disconnecting Codex OAuth connections.
  • Support OAuth-based API keys in the data model, schemas, and agent execution so agents can run using ChatGPT browser tokens via LiteLLM’s chatgpt provider.
  • Expose an internal, token-secured endpoint for service-to-service retrieval of fresh OAuth access tokens.

Bug Fixes:

  • Harden OAuth callback handling by validating the state parameter and preventing CSRF or authorization-code-injection during token exchange.
  • Prevent concurrent races when refreshing or disconnecting OAuth tokens by using row-level locking and safe token refresh logic.

Enhancements:

  • Encrypt and store OAuth token payloads alongside existing API key encryption utilities.
  • Adjust API key resolution logic to distinguish between static keys and OAuth-backed keys and avoid treating OAuth keys as traditional API keys.
  • Align Codex OAuth constants, scopes, and redirect URI configuration with the upstream Codex CLI and make them env-overridable.

Build:

  • Upgrade LiteLLM dependency to version 1.83.3 to pick up security fixes and native ChatGPT provider support.

daniloleonecarneiro and others added 20 commits March 18, 2026 16:34
- Removed account_id parameter from various service methods (HttpMemoryService, MondayService, NotionService, PayPalService, SupabaseService) to streamline authorization processes.
- Updated methods to utilize agent_id instead of account_id where applicable.
- Adjusted logging and error handling to reflect changes in parameters.
- Simplified session and agent retrieval functions in session_service and temp_limits_service to eliminate account_id dependency.
- Cleaned up context utility functions by removing account-related methods.
Build and push multi-arch images (amd64 + arm64) in parallel to Docker Hub
on push to main and version tags. Uses SHA-pinned actions, provenance
attestation, and env-based variable passing to prevent script injection.
The help string for --seeders referenced args before it was defined,
causing the script to crash on startup.
…rror

Prevents crash when Alembic migrations have already created tables
before SQLAlchemy metadata.create_all runs at startup.
The CORS_ORIGINS setting was declared as List[str] with a fallback that
split a CSV env var, but main.py configures CORSMiddleware with a hardcoded
allow_origins=["*"] and never references settings.CORS_ORIGINS.

On top of being dead code, the field made the app crash at boot when
CORS_ORIGINS was set to a CSV string in the environment: pydantic-settings
v2 tries to JSON-decode List[str] fields from env sources before any
field_validator runs, raising SettingsError.

Removing the field eliminates the boot crash and drops a dozen lines of
unused code.
…k-config-and-channel-tooltips

chore(settings): remove unused CORS_ORIGINS field
Adds OpenAI Codex OAuth as an alternative to static API keys.
This enables the OAuth 2.0 device code flow for authenticating
with OpenAI, supporting automatic token refresh and encrypted
token storage.

Changes:
- models.py: Added auth_type and oauth_data columns to ApiKey
- schemas.py: Added OAuth device code request/response schemas
- crypto.py: Added encrypt/decrypt functions for OAuth data
- apikey_service.py: Added OAuth key handling and get_api_key_record
- agent_utils.py: Updated get_api_key to support OAuth Codex tokens
- llm_agent_builder.py: Added OAuth Codex api_base override for LiteLlm
- oauth_constants.py: New file with Codex OAuth configuration
- oauth_codex_service.py: New file with full device code flow service
- oauth_routes.py: New file with OAuth Codex API endpoints
- Alembic migration for new columns
…ration

1. oauth_routes.py: device-code endpoint now takes client_id from
   x-client-id header (consistent with other endpoints) and derives
   a deterministic UUID from user context if not provided. Fixes 422.

2. llm_agent_builder.py: for OAuth Codex keys, writes tokens to
   ~/.config/litellm/chatgpt/auth.json instead of passing api_key.
   LiteLLM's chatgpt/ provider ignores api_key and only reads from
   auth.json. Uses file locking for thread safety.

3. oauth_codex_service.py: added get_raw_oauth_tokens() and
   write_chatgpt_auth_json() utility functions.
Version 1.68.2 is vulnerable to CVE-2026-35030 (OIDC auth bypass, fixed in
1.83.0) and affected by the March 2026 TeamPCP supply-chain attack on
versions 1.82.7 and 1.82.8.

Version 1.83.3 also introduces the native 'chatgpt/' provider with its
own Authenticator that reads tokens from ~/.config/litellm/chatgpt/auth.json,
which is required for the OAuth Codex integration to work at runtime.
oauth_constants.py:
- Expose CODEX_AUTH_URL and CODEX_REDIRECT_URI as env-overridable
  constants instead of hardcoding the URLs in the service.
- Default scope now matches the upstream Codex CLI
  ('openid profile email offline_access api.connectors.read
  api.connectors.invoke') so the id_token carries organization claims
  needed by _extract_account_id and future api.connectors features
  work without a re-consent.
- New CODEX_ID_TOKEN_ADD_ORGS toggle (default true) appends the
  id_token_add_organizations authorize parameter.

oauth_codex_service.py:
- complete_auth_flow(): validate the 'state' query parameter from
  the callback URL against the value stored encrypted in
  oauth_data.pending during generate_auth_url(). A mismatch is
  treated as a CSRF / authorization-code-injection attempt and the
  exchange is aborted without consuming the pending record.
- complete_auth_flow() and disconnect_oauth() now SELECT ... FOR
  UPDATE on the ApiKey row so concurrent callers cannot interleave
  reads and writes of oauth_data / is_active.
- get_fresh_token(): the happy path (token still valid) stays
  lock-free, but when we need to refresh we take a row-level lock
  and re-read, so only one worker actually calls OpenAI's refresh
  endpoint while concurrent callers observe the freshly stored
  tokens.
- Honor the new redirect_uri/auth_url constants on both the
  authorize URL and the token exchange so they stay in sync.
- Emit the id_token_add_organizations parameter when enabled.
- plan_type is no longer hardcoded as 'plus'; best-effort extracted
  from id_token/access_token claims (chatgpt_plan_type, plan_type,
  https://api.openai.com/plan) and omitted entirely when absent.
@sourcery-ai

sourcery-ai bot commented Apr 18, 2026

Reviewer's Guide

Implements a full OAuth PKCE Codex browser flow integrated with LiteLLM’s native chatgpt provider, introduces encrypted OAuth token storage and locking for safe refresh/disconnect semantics, exposes corresponding API routes and schemas, adds ORM/migration support for OAuth-backed keys, updates agent key handling to distinguish OAuth from static keys, and bumps LiteLLM to 1.83.3 for security and chatgpt support.

Sequence diagram for OAuth Codex PKCE browser flow and token refresh

sequenceDiagram
    actor User
    participant Frontend
    participant EvoAPI as Evo_API_Backend
    participant OauthRoutes as oauth_routes
    participant OauthService as oauth_codex_service
    participant DB
    participant OpenAIAuth as auth.openai.com

    rect rgb(230,230,255)
    Note over User,OpenAIAuth: Start OAuth PKCE flow
    User->>Frontend: Click Connect Codex
    Frontend->>EvoAPI: POST /api/v1/agents/oauth/codex/auth-start
    EvoAPI->>OauthRoutes: auth_start(name)
    OauthRoutes->>OauthService: generate_auth_url(db, client_id, name)
    OauthService->>DB: INSERT ApiKey(auth_type=oauth_codex, oauth_data=pending_verifier+state, is_active=false)
    DB-->>OauthService: ApiKey(id)
    OauthService-->>OauthRoutes: {authorize_url, key_id}
    OauthRoutes-->>Frontend: OAuthAuthStartResponse(authorize_url, key_id)
    Frontend-->>User: Redirect browser to authorize_url
    User->>OpenAIAuth: Complete login and consent
    OpenAIAuth-->>User: Redirect to CODEX_REDIRECT_URI with code,state
    User->>Frontend: Paste callback_url (from browser)
    Frontend->>EvoAPI: POST /api/v1/agents/oauth/codex/auth-complete
    EvoAPI->>OauthRoutes: auth_complete(key_id, callback_url)
    OauthRoutes->>OauthService: complete_auth_flow(db, key_id, callback_url)
    OauthService->>DB: SELECT ApiKey FOR UPDATE WHERE id=key_id
    DB-->>OauthService: ApiKey(oauth_data=pending_verifier+state)
    OauthService->>OauthService: Validate state from callback_url matches stored state
    OauthService->>OpenAIAuth: POST CODEX_TOKEN_URL (code, code_verifier, client_id, redirect_uri)
    OpenAIAuth-->>OauthService: {access_token, refresh_token, id_token, expires_in}
    OauthService->>OauthService: Extract account_id, expires_at, plan_type from tokens
    OauthService->>DB: UPDATE ApiKey SET oauth_data=encrypted_tokens, is_active=true
    DB-->>OauthService: Commit
    OauthService-->>OauthRoutes: {status=ok, key_id, message}
    OauthRoutes-->>Frontend: OAuthAuthCompleteResponse
    Frontend-->>User: Show Codex connected
    end

    rect rgb(230,255,230)
    Note over EvoAPI,OpenAIAuth: Use and refresh access token
    participant AgentRuntime as Agent_execution
    participant LiteLLM

    AgentRuntime->>DB: Load Agent(api_key_id, model)
    AgentRuntime->>EvoAPI: get_api_key(agent)
    EvoAPI->>DB: get_api_key_record(api_key_id)
    DB-->>EvoAPI: ApiKey(auth_type)
    alt auth_type == oauth_codex
        EvoAPI->>OauthService: get_fresh_token(db, key_id)
        OauthService->>DB: SELECT ApiKey WHERE id=key_id
        DB-->>OauthService: ApiKey(oauth_data)
        OauthService->>OauthService: Check expires_at (5 minute refresh window)
        opt Token_near_expiry
            OauthService->>DB: SELECT ApiKey FOR UPDATE WHERE id=key_id
            DB-->>OauthService: ApiKey(oauth_data)
            alt Other_worker_already_refreshed
                OauthService->>OauthService: Use updated oauth_data
            else Need_refresh
                OauthService->>OpenAIAuth: POST CODEX_TOKEN_URL (grant_type=refresh_token)
                OpenAIAuth-->>OauthService: {access_token, refresh_token, expires_in}
                OauthService->>DB: UPDATE ApiKey.oauth_data with new tokens
                DB-->>OauthService: Commit
            end
        end
        OauthService-->>EvoAPI: access_token, account_id
        EvoAPI->>OauthService: get_raw_oauth_tokens(db, key_id)
        OauthService->>DB: SELECT ApiKey WHERE id=key_id AND is_active=true
        DB-->>OauthService: ApiKey(oauth_data)
        OauthService-->>EvoAPI: tokens dict
        EvoAPI->>OauthService: write_chatgpt_auth_json(tokens)
        OauthService-->>EvoAPI: auth.json written
        EvoAPI->>LiteLLM: LiteLlm(model=chatgpt/..., api_key omitted)
        LiteLLM->>OpenAIAuth: Call chatgpt.com backend using auth.json tokens
        OpenAIAuth-->>LiteLLM: Chat response
    else Static_API_key
        EvoAPI->>DB: get_decrypted_api_key(api_key_id)
        DB-->>EvoAPI: decrypted api_key
        EvoAPI->>LiteLLM: LiteLlm(model=..., api_key=api_key)
        LiteLLM->>OpenAIAuth: Call OpenAI API with api_key
        OpenAIAuth-->>LiteLLM: Chat response
    end
    LiteLLM-->>AgentRuntime: Response
    end

Entity relationship diagram for updated ApiKey OAuth Codex support

erDiagram
    EVO_CORE_API_KEYS {
        uuid id PK
        string name
        string provider
        string key "nullable for OAuth keys"
        string auth_type "api_key or oauth_codex"
        text oauth_data "encrypted OAuth tokens and metadata"
        timestamptz created_at
        timestamptz updated_at
        boolean is_active
    }

    AGENTS {
        uuid id PK
        string name
        string model
        uuid api_key_id FK
    }

    EVO_CORE_API_KEYS ||--o{ AGENTS : provides_credentials_for

Class diagram for OAuth Codex models, services, and routes

classDiagram
    class ApiKeyModel {
        +uuid id
        +str name
        +str provider
        +str key
        +str auth_type
        +str oauth_data
        +datetime created_at
        +datetime updated_at
        +bool is_active
    }

    class ApiKeySchema {
        +UUID4 id
        +str name
        +str provider
        +str auth_type
        +datetime created_at
        +datetime updated_at
        +bool is_active
        +bool oauth_connected
    }

    class OAuthAuthStartRequest {
        +str name = "OpenAI Codex"
    }

    class OAuthAuthStartResponse {
        +str authorize_url
        +UUID4 key_id
    }

    class OAuthAuthCompleteRequest {
        +UUID4 key_id
        +str callback_url
    }

    class OAuthAuthCompleteResponse {
        +str status
        +str key_id
        +str message
    }

    class OAuthStatusResponse {
        +UUID4 key_id
        +bool connected
        +datetime expires_at
        +str account_id
        +str plan_type
    }

    class CryptoUtils {
        +str decrypt_api_key(encrypted_key)
        +str encrypt_api_key(plain_key)
        +str encrypt_oauth_data(oauth_dict)
        +dict decrypt_oauth_data(encrypted_data)
    }

    class ApiKeyService {
        +ApiKeyModel get_api_key(db, key_id)
        +ApiKeyModel get_api_key_record(db, key_id)
        +str get_decrypted_api_key(db, key_id, agent)
    }

    class OAuthCodexService {
        +dict generate_auth_url(db, client_id, name)
        +dict complete_auth_flow(db, key_id, callback_url)
        +dict get_oauth_status(db, key_id)
        +dict disconnect_oauth(db, key_id)
        +tuple~str,str~ get_fresh_token(db, key_id)
        +dict get_raw_oauth_tokens(db, key_id)
        +void write_chatgpt_auth_json(tokens)
        +str _extract_account_id(token)
        +str _extract_token_expiry(token)
        +str _extract_plan_type(token)
        +dict _refresh_access_token(refresh_token)
    }

    class OAuthRoutes {
        +UUID client_id_from_header_or_user(x_client_id, current_user)
        +auth_start(request, x_client_id, current_user, db)
        +auth_complete(request, current_user, db)
        +get_status(key_id, current_user, db)
        +disconnect(key_id, current_user, db)
        +get_internal_token(key_id, request, db)
    }

    class Agent {
        +UUID api_key_id
        +str name
        +str model
        +str description
    }

    class AgentUtils {
        +str get_api_key(db, agent)
    }

    class LlmAgentBuilder {
        +combined_callback(callback_context)
    }

    class LiteLlmWrapper {
        +LiteLlm(model, api_key)
    }

    class OAuthConstants {
        +str CODEX_CLIENT_ID
        +str CODEX_AUTH_URL
        +str CODEX_DEVICE_AUTH_URL
        +str CODEX_TOKEN_URL
        +str CODEX_USERINFO_URL
        +str CODEX_API_BASE
        +str CODEX_REDIRECT_URI
        +str CODEX_SCOPES
        +bool CODEX_ID_TOKEN_ADD_ORGS
        +str CODEX_GRANT_TYPE_DEVICE
        +str CODEX_GRANT_TYPE_REFRESH
    }

    ApiKeySchema --> ApiKeyModel : maps_from
    ApiKeyService --> ApiKeyModel : queries
    ApiKeyService --> CryptoUtils : uses

    OAuthCodexService --> ApiKeyModel : reads_writes
    OAuthCodexService --> CryptoUtils : encrypts_decrypts
    OAuthCodexService --> OAuthConstants : uses

    OAuthRoutes --> OAuthCodexService : calls
    OAuthRoutes --> ApiKeyModel : references_ids

    Agent --> ApiKeyModel : uses_api_key
    AgentUtils --> ApiKeyService : uses
    AgentUtils --> OAuthCodexService : uses_get_fresh_token

    LlmAgentBuilder --> Agent : builds_from
    LlmAgentBuilder --> ApiKeyService : get_api_key_record
    LlmAgentBuilder --> OAuthCodexService : get_raw_oauth_tokens, write_chatgpt_auth_json
    LlmAgentBuilder --> LiteLlmWrapper : instantiates

File-Level Changes

Change Details Files
Add PKCE-based OAuth Codex service and HTTP API to create, complete, query, and disconnect OAuth-backed API keys, plus an internal token endpoint for CRM.
  • Implement PKCE auth URL generation that persists a pending oauth_codex ApiKey with encrypted state and verifier, and returns authorize_url + key_id
  • Implement auth-complete handler that validates state, exchanges code for tokens, derives account_id/plan_type/expiry from JWTs, stores encrypted oauth_data, and activates the key under row lock
  • Add helpers to get OAuth connection status, disconnect and clear tokens with row locking, and to refresh/rotate access tokens with a single-writer concurrency pattern
  • Expose FastAPI routes for auth-start, auth-complete, status, disconnect, and an x-api-token–protected internal token endpoint wired to the service functions
src/services/oauth_codex_service.py
src/api/oauth_routes.py
Extend ApiKey persistence and crypto to support OAuth Codex tokens alongside traditional API keys.
  • Add nullable key, auth_type (default api_key), and oauth_data columns to the ApiKey ORM model and via Alembic migration
  • Introduce encrypt_api_key, encrypt_oauth_data, and decrypt_oauth_data helpers for symmetric encryption of static keys and OAuth payloads
  • Add get_api_key_record accessor to retrieve full ApiKey ORM records, and ensure get_decrypted_api_key returns None for oauth_codex keys so they are not treated as static keys
src/models/models.py
migrations/versions/a1b2c3d4e5f6_add_oauth_codex_support.py
src/utils/crypto.py
src/services/apikey_service.py
Adjust agent key resolution and LiteLLM integration to use OAuth Codex tokens and LiteLLM’s chatgpt provider instead of api_key/api_base remapping.
  • Extend the ApiKey schema to include auth_type and oauth_connected plus new OAuth request/response schemas for auth-start, auth-complete, and status
  • Update get_api_key to detect oauth_codex keys, call get_fresh_token for a current access token, and fall back to the existing decrypted static key logic
  • Update LLM agent builder to branch on auth_type: for oauth_codex, read raw oauth tokens, write chatgpt auth.json, normalize model to chatgpt/*, and construct LiteLlm without api_key; otherwise keep existing model+api_key usage
src/schemas/schemas.py
src/services/adk/agents/agent_utils.py
src/services/adk/agents/llm_agent_builder.py
src/services/oauth_codex_service.py
Centralize and align Codex OAuth constants and scopes with upstream Codex CLI, and wire the OAuth router into the application.
  • Introduce oauth_constants with env-overridable client ID, OAuth endpoints, redirect URI, scopes, flags (id_token_add_organizations), and grant types aligned with Codex CLI
  • Use the new constants in the OAuth Codex service instead of hardcoded URLs and scopes
  • Register the oauth_routes router on the FastAPI app under the API prefix
src/config/oauth_constants.py
src/services/oauth_codex_service.py
src/main.py
Upgrade LiteLLM to 1.83.3 to pick up security fixes and the native chatgpt provider required by the new OAuth flow, and make minor logging cleanup.
  • Bump litellm from 1.68.x to 1.83.3 in pyproject.toml and requirements.txt to avoid the OIDC auth bypass CVE and get chatgpt provider support
  • Slightly tweak logging text in get_sub_agents to remove emoji while preserving context
pyproject.toml
requirements.txt
src/services/adk/agents/agent_utils.py


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • In generate_auth_url the client_id argument is never used or persisted on the ApiKey record, so either wire it into the key (e.g., a client/owner field) or drop the parameter to avoid confusion about tenant scoping.
  • complete_auth_flow raises bare ValueErrors for user-driven errors (e.g., missing code, state mismatch, provider error), which surface as 500s; consider translating these into HTTPException with appropriate 4xx statuses so the API returns clear, client-actionable responses.
  • There are a few unused or redundant elements you can trim for clarity, e.g., the unused CODEX_API_BASE import in llm_agent_builder.py and the inner re-imports of ApiKey/decrypt_oauth_data in get_raw_oauth_tokens, and the unused OAuthAuthCompleteResponse schema.
## Individual Comments

### Comment 1
<location path="src/services/oauth_codex_service.py" line_range="120-94" />
<code_context>
+async def complete_auth_flow(
</code_context>
<issue_to_address>
**issue (bug_risk):** Handle DB errors and rollbacks when completing the auth flow.

In `complete_auth_flow`, exceptions from `httpx` (after `raise_for_status()`) or `db.commit()` will escape without a rollback or a clear HTTP error, potentially leaving the session in an inconsistent state. Please wrap the DB mutation block (from acquiring the locked row through `db.commit()`) in try/except, call `db.rollback()` on any exception, and translate failures into appropriate `HTTPException`s (e.g., 502 for upstream errors, 500 for commit failures) with safe, user-facing messages.
</issue_to_address>

### Comment 2
<location path="src/api/oauth_routes.py" line_range="162-171" />
<code_context>
+    summary="Internal token retrieval for CRM service-to-service calls",
+    description="Internal endpoint for CRM service-to-service OAuth token retrieval. Authenticated via x-api-token header.",
+)
+async def get_internal_token(
+    key_id: uuid.UUID,
+    request: Request,
+    db: Session = Depends(get_db),
+):
+    """Internal endpoint for CRM service-to-service token retrieval.
+    Authenticated via EVOAI_CRM_API_TOKEN header.
+    """
+    api_token = request.headers.get("x-api-token")
+    expected = os.getenv("EVOAI_CRM_API_TOKEN", "")
+    if not api_token or api_token != expected:
+        raise HTTPException(status_code=401, detail="Invalid service token")
+
+    access_token, account_id = await get_fresh_token(db, key_id)
+    return {"access_token": access_token, "account_id": account_id}
</code_context>
<issue_to_address>
**issue (bug_risk):** Return an error when no fresh token is available instead of `access_token: null`.

When `get_fresh_token` returns `(None, None)` (e.g. key not found, inactive, or tokens missing), this endpoint still responds with 200 and `{ "access_token": null, "account_id": null }`. That prevents the CRM from distinguishing a valid token from configuration/auth issues and can hide problems like expired/removed keys. Instead, map these cases to explicit HTTP errors (e.g. 404 for missing/inactive key, 400/409 for unavailable tokens) rather than returning a null `access_token`.
</issue_to_address>

