@jay-dhamale
Summary

Fixes #2652 - Makes model parameter optional in Beta AsyncRealtime/Realtime classes to enable transcription mode

Problem

The Beta AsyncRealtime and Realtime classes require the model parameter, but the OpenAI transcription API explicitly rejects requests containing model when intent=transcribe is specified.

Error from API:

{
  "error": {
    "message": "You must not provide a model parameter for transcription sessions.",
    "type": "invalid_request_error",
    "code": "invalid_model"
  }
}

Root Cause

  • Beta API has model: str (required parameter)
  • Standard API correctly has model: str | Omit = omit (optional parameter)
  • Model was always included in URL query params: "model": self.__model

Solution

Made the Beta Realtime API match the Standard Realtime API pattern:

  1. Made model parameter optional: Changed from model: str to model: str | Omit = omit
  2. Conditional URL inclusion: Only include model in query params when provided: **({"model": self.__model} if self.__model is not omit else {})
  3. Azure validation: Added validation for Azure clients which require model
  4. Both sync and async: Applied identical fixes to Realtime and AsyncRealtime
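The conditional-inclusion pattern from steps 1–2 can be sketched as a small stand-alone example. This is a minimal sketch, not the SDK's actual code: the real Omit sentinel lives in openai._types, and build_query_params is an illustrative helper name.

```python
class Omit:
    """Stand-in for the SDK's omit sentinel: distinct from None, always falsy."""

    def __bool__(self) -> bool:
        return False


omit = Omit()


def build_query_params(model=omit, intent=None):
    """Include "model" only when the caller actually supplied one."""
    return {
        **({"model": model} if model is not omit else {}),
        **({"intent": intent} if intent is not None else {}),
    }
```

With model left at its default, the resulting query dict contains no model key at all, which is exactly what the transcription endpoint requires.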

Changes

Modified src/openai/resources/beta/realtime/realtime.py:

  • Added Omit and omit imports from _types
  • Updated Realtime.connect() method signature (line 96)
  • Updated AsyncRealtime.connect() method signature (line 152)
  • Updated RealtimeConnectionManager.__init__ (line 516)
  • Updated RealtimeConnectionManager.__enter__ URL building (lines 549-561)
  • Updated AsyncRealtimeConnectionManager.__init__ (line 330)
  • Updated AsyncRealtimeConnectionManager.__aenter__ URL building (lines 363-375)
  • Added Azure client model validation in both sync and async versions

Backward Compatibility

Zero breaking changes:

  • Existing code that explicitly passes model continues to work
  • Type union str | Omit accepts string values (no type errors)
  • URL building preserves existing behavior when model is provided
  • Azure validation prevents accidental omission in Azure contexts

Testing

✅ All verification checks passed:

  • Linting passes (ruff check)
  • Formatting passes (ruff format)
  • Module imports correctly
  • Azure validation logic added for safety

Usage Examples

Before (fails for transcription):

# This would fail with API error
async with client.beta.realtime.connect(
    model="gpt-4o-realtime-preview",  # Required, but API rejects it
    extra_query={"intent": "transcribe"}
) as conn:
    pass

After (transcription works):

# Transcription mode - no model needed
async with client.beta.realtime.connect(
    extra_query={"intent": "transcribe"}
) as conn:
    # Works! Model parameter not included in URL
    pass

# Standard mode - model still works
async with client.beta.realtime.connect(
    model="gpt-4o-realtime-preview-2024-12-17"
) as conn:
    # Works as before (backward compatible)
    pass

Edge Cases Handled

  • Azure Client: Validates model is required, raises OpenAIError if omitted
  • None vs Omit: Uses the omit sentinel (not None) so an explicitly passed None remains distinguishable from "parameter not provided"
  • Extra Query: Can override model via extra_query if needed
  • Type Safety: str | Omit union maintains type checking
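The Azure guard from the first bullet can be sketched as follows. This is a hedged illustration, not the SDK's exact code: the sentinel, function name, and error message here are assumptions, though the SDK does define an OpenAIError.

```python
_OMIT = object()  # stand-in for the SDK's omit sentinel


class OpenAIError(Exception):
    """Stand-in for openai.OpenAIError."""


def validate_azure_model(is_azure_client, model=_OMIT):
    """Raise early if an Azure client omits model.

    Azure's Realtime endpoint routes requests by deployment/model,
    so omitting it would fail later with a less helpful error.
    """
    if is_azure_client and model is _OMIT:
        raise OpenAIError("model is required for Azure Realtime connections")
```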

Related Issues

  • Aligns Beta API with Standard API behavior
  • Enables transcription functionality in Beta API
  • Maintains full backward compatibility

Fixes openai#2796

The error handling logic for `sse.event == "error"` was incorrectly
nested inside the `if sse.event.startswith("thread.")` condition,
making it logically unreachable (dead code). A string cannot both
equal "error" AND start with "thread." simultaneously.

This was a regression introduced in commit abc2596 which attempted
to fix indentation but accidentally moved the error handler into the
wrong conditional block.

The fix restructures the logic to:
1. Check for error events first (before any event type routing)
2. Handle thread.* events with their special data structure
3. Handle all other events in the else block

This ensures error events are properly caught and raise APIError
with the appropriate error message.
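The corrected branch ordering can be sketched with a hypothetical handler. The SDK's actual streaming code differs; the point is only the order of the checks, with the error branch hoisted out of the thread.* branch.

```python
class APIError(Exception):
    pass


def route_event(event, data):
    if event == "error":
        # 1. Check for error events first, before any event-type routing.
        raise APIError(data.get("message", "unknown error"))
    elif event.startswith("thread."):
        # 2. thread.* events carry their own data structure.
        return ("thread", data)
    else:
        # 3. All other events fall through to the generic path.
        return ("other", data)
```

In the buggy version, the `event == "error"` check sat inside the `startswith("thread.")` branch, so it could never match.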
Fixes openai#2652

The Beta AsyncRealtime and Realtime classes required the model parameter,
but the OpenAI transcription API rejects requests with model when
intent=transcribe is specified.

Changes:
- Made model parameter optional (str | Omit = omit) in all connect() methods
- Conditionally include model in URL query params only when provided
- Added Azure client validation to require model for Azure Realtime API
- Applied fix to both sync (Realtime) and async (AsyncRealtime) versions

This enables transcription mode while maintaining backward compatibility
with existing code that explicitly passes model parameter.
@jay-dhamale jay-dhamale requested a review from a team as a code owner December 25, 2025 17:14

Successfully merging this pull request may close these issues.

AsyncRealtime class incompatible with transcription mode due to required model parameter
