Pull request overview
This PR improves compatibility with non-OpenAI APIs that implement OpenAI-compatible interfaces (such as LM Studio) by making the `annotations` field in `OutputTextContent` optional during deserialization. The change uses the `#[serde(default)]` attribute so that if the field is missing from the API response, it defaults to an empty vector instead of causing a deserialization error.
Key Changes
- Added the `#[serde(default)]` attribute to the `annotations` field in the `OutputTextContent` struct, allowing it to be missing from API responses and defaulting to an empty `Vec`
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
```rust
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct OutputTokenDetails {
    /// The number of reasoning tokens.
    pub reasoning_tokens: u32,
}

/// Usage statistics for a response.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
#[serde(default)]
pub struct ResponseUsage {
    /// The number of input tokens.
    pub input_tokens: u32,
    /// A detailed breakdown of the input tokens.
    pub input_tokens_details: InputTokenDetails,
    /// The number of output tokens.
    pub output_tokens: u32,
    /// A detailed breakdown of the output tokens.
    pub output_tokens_details: OutputTokenDetails,
    /// The total number of tokens used.
    pub total_tokens: u32,
}
```
These structs (`InputTokenDetails`, `OutputTokenDetails`, and `ResponseUsage`) already exist in `async-openai/src/types/shared/response_usage.rs` and are re-exported by the responses module in `mod.rs` (lines 23-24, 29).
The duplicate definitions should be removed to avoid:
- Maintenance issues with two versions of the same types
- Potential type conflicts and compilation errors
- Confusion about which version to use
Instead, use the existing shared types that are already imported at the top of this file (line 5: `ResponseUsage`).
Suggested change (removing the duplicated definitions):

```diff
- #[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
- pub struct OutputTokenDetails {
-     /// The number of reasoning tokens.
-     pub reasoning_tokens: u32,
- }
-
- /// Usage statistics for a response.
- #[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
- #[serde(default)]
- pub struct ResponseUsage {
-     /// The number of input tokens.
-     pub input_tokens: u32,
-     /// A detailed breakdown of the input tokens.
-     pub input_tokens_details: InputTokenDetails,
-     /// The number of output tokens.
-     pub output_tokens: u32,
-     /// A detailed breakdown of the output tokens.
-     pub output_tokens_details: OutputTokenDetails,
-     /// The total number of tokens used.
-     pub total_tokens: u32,
- }
```
Non-OpenAI APIs that implement a similar interface don't always include the `output[].content[].annotations` JSON path. Specifically, I ran into this issue with LM Studio, which doesn't include that attribute.

This pull request makes a minor change to the `OutputTextContent` struct in `async-openai/src/types/responses/response.rs`. The change ensures that the `annotations` field now defaults to an empty vector if it is missing during deserialization, which improves robustness when handling responses from non-OpenAI APIs.