Description
Hi.
I'm using google-genai:1.24 and want to create transcriptions with timestamps and speaker diarization.
For that I use gemini-2.5-flash.
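The client is created with an API key against the Gemini Developer API (a sketch of my setup; the builder options are reproduced from memory, so treat them as an assumption):

import com.google.genai.Client;

// Client pointed at the Gemini Developer API via an API key
// (assumption: this is how the client used in the snippet below was built).
Client client = Client.builder()
    .apiKey(System.getenv("GOOGLE_API_KEY"))
    .build();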
My content config looks like this:
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;

GenerateContentConfig contentConfig = GenerateContentConfig.builder()
    .audioTimestamp(true)
    .responseSchema(responseSchema)
    .build();

GenerateContentResponse contentResponse =
    client.models.generateContent(MODEL, requestContent, contentConfig);
But this code throws:
java.lang.IllegalArgumentException: audioTimestamp parameter is not supported in Gemini API.
Why is there an audioTimestamp flag when I can't use it?
I read that this flag is meant for Vertex AI speech-to-text, but I couldn't find any documentation showing where this function, or even this library, is used for that.
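If the flag is only honored on the Vertex AI backend, I'm guessing the client would need to be built against Vertex AI rather than the Developer API, roughly like this (a sketch; the project and location values are placeholders and the builder options are my assumption):

import com.google.genai.Client;

// Guess: a Vertex-AI-backed client, which is presumably where audioTimestamp applies.
Client vertexClient = Client.builder()
    .vertexAI(true)
    .project("my-gcp-project")   // hypothetical project id
    .location("us-central1")     // hypothetical region
    .build();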
Isn't the library for Vertex AI Speech-to-Text "google-cloud-speech"?
I'm really confused by all these Google GenAI libraries.