
fix: handle non-hyphenated GPT-5 model names in detection logic#4535

Open
hztBUAA wants to merge 1 commit into crewAIInc:main from hztBUAA:fix/gpt5-model-detection

Conversation


@hztBUAA hztBUAA commented Feb 20, 2026

Summary

Fixes #4478

  • Add "gpt5" prefix to model detection logic alongside "gpt-" so non-hyphenated GPT-5 model names (e.g. gpt5, gpt5nano, gpt5mini) are correctly recognized as OpenAI models
  • Update is_openai_model check in AzureCompletion.__init__ to detect gpt5* variants
  • Update supports_stop_words() to treat gpt5* models the same as gpt-5* models
  • Update _is_model_from_provider() for both openai and azure providers
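The prefix change described above can be sketched roughly as follows. Note this is a minimal illustration, not the actual crewAI source; `OPENAI_MODEL_PREFIXES` and the standalone `is_openai_model` helper are assumed names.

```python
# Illustrative sketch only; OPENAI_MODEL_PREFIXES and is_openai_model
# are assumed names, not the real crewAI internals.
OPENAI_MODEL_PREFIXES = ("gpt-", "gpt5")

def is_openai_model(model: str) -> bool:
    """True if the model name matches a known OpenAI prefix."""
    # str.startswith accepts a tuple, so both naming conventions match.
    return model.lower().startswith(OPENAI_MODEL_PREFIXES)

assert is_openai_model("gpt-5-mini")
assert is_openai_model("gpt5nano")   # recognized only after this fix
assert not is_openai_model("llama-3")
```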

Context

When using Azure OpenAI with a deployment named gpt5nano, gpt5, etc., the model detection logic only checked for the "gpt-" prefix. This caused is_openai_model to be False, which in turn caused response_model (Pydantic structured output) to be ignored.
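A minimal reproduction of that failure mode, under assumed names (`is_openai_model_before_fix` and `build_request` are illustrative, not the actual `AzureCompletion` internals):

```python
# Hypothetical sketch of the bug: the old check only matched "gpt-".
def is_openai_model_before_fix(model: str) -> bool:
    return model.lower().startswith("gpt-")  # misses "gpt5nano"

def build_request(model: str, response_model):
    # Structured output is only wired up for recognized OpenAI models,
    # so an unrecognized deployment name silently drops response_model.
    payload = {"model": model}
    if response_model is not None and is_openai_model_before_fix(model):
        payload["response_format"] = response_model
    return payload

assert "response_format" not in build_request("gpt5nano", dict)  # bug
assert "response_format" in build_request("gpt-5-mini", dict)
```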

Test plan

  • Added test_azure_gpt5_non_hyphenated_model_detection — verifies is_openai_model and supports_function_calling() for gpt5, gpt5nano, gpt5mini
  • Added test_azure_gpt5_non_hyphenated_models_do_not_support_stop_words — verifies stop words are correctly disabled for non-hyphenated GPT-5 names
  • All 59 Azure tests pass
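The stop-word behavior those tests exercise can be illustrated like this (the `supports_stop_words` implementation here is a sketch mirroring the PR description, not the exact crewAI code):

```python
# Illustrative version of the stop-word check: GPT-5 family models,
# hyphenated or not, should not receive the `stop` parameter.
def supports_stop_words(model: str) -> bool:
    name = model.lower()
    return not (name.startswith("gpt-5") or name.startswith("gpt5"))

for model in ("gpt5", "gpt5nano", "gpt5mini", "gpt-5-mini"):
    assert not supports_stop_words(model)
assert supports_stop_words("gpt-4o")
```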

Model name detection for `is_openai_model`, `supports_stop_words`, and
`_is_model_from_provider` only checked for the "gpt-" prefix, missing
non-hyphenated variants like "gpt5", "gpt5nano", and "gpt5mini" that
are used in Azure deployment names. This caused `response_model` to be
ignored for these models.

Add "gpt5" to prefix lists alongside "gpt-" so both naming conventions
are recognized.

Fixes crewAIInc#4478
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.


hztBUAA commented Feb 25, 2026

Thanks for the review and feedback. I am following up on this PR now and will either push the requested changes or reply point-by-point shortly.


hztBUAA commented Feb 25, 2026

Quick follow-up: I am reviewing the feedback and will update this PR shortly.



Development

Successfully merging this pull request may close these issues.

[BUG] LLM call does not adhere to pydantic response_model fails for "gpt5nano"
