… on one request with GPT4, whereas GPT3.5 Turbo is SIGNIFICANTLY cheaper, like under $0.01 for the same request and also MUCH faster
…ax tracks weren't being used
…ot tagged as `latest` unless it's master, as it's useful for generating builds on pending PR items
…d such aren't lost to time
Owner

I would instead specify the model at the env variable / docker-compose level rather than exposing it in the UI. What do you think? Thanks for the contribution!
Owner

Any of the models listed at https://github.com/BerriAI/litellm?tab=readme-ov-file#supported-providers-docs can be specified for use. I think we could expose that as an env variable instead of hardcoding it, which would enable the desired level of customization.
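A minimal sketch of that suggestion, assuming an environment variable named `LLM_MODEL` and a cheap default (both names are illustrative, not from this PR):

```python
import os

# Hypothetical sketch: choose the LLM model from an environment variable
# (settable in docker-compose) instead of hardcoding it or exposing it in the UI.
# The variable name LLM_MODEL and the default value are assumptions.
DEFAULT_MODEL = "gpt-3.5-turbo"

def get_llm_model() -> str:
    """Return the model name to use, falling back to an affordable default."""
    return os.environ.get("LLM_MODEL", DEFAULT_MODEL)

# litellm takes the model as a plain string, so any provider it supports could
# then be configured without code changes, e.g.:
# response = litellm.completion(model=get_llm_model(), messages=messages)
```

Because litellm accepts the model as a string, this keeps the service code provider-agnostic while the deployment config decides which model is used.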
LLM Model Selection Feature (depends on #2)
Overview
This PR adds the ability to select different LLM models when generating playlists, allowing users to choose between higher quality (GPT-4) or more affordable (GPT-3.5 Turbo) options. It also fixes a bug with the min/max track count parameters.
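A rough sketch of the min/max fix described above, assuming the bug was that the bounds were accepted but never passed into the prompt (function and parameter names here are illustrative, not taken from the PR's code):

```python
# Hypothetical sketch: validate the track-count bounds and interpolate them
# into the prompt instead of silently ignoring them.
def build_prompt(description: str, min_tracks: int, max_tracks: int) -> str:
    if min_tracks > max_tracks:
        raise ValueError("min_tracks must not exceed max_tracks")
    return (
        f"Generate a playlist of between {min_tracks} and {max_tracks} tracks "
        f"matching: {description}"
    )
```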
This change also embeds the metadata about the generation (prompt, model used, etc.) into the summary of the generated playlist, so that this data isn't lost to time.
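The metadata embedding could look something like the following sketch, assuming the summary is a plain text field; the exact field names and format are assumptions, not the PR's implementation:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: serialize the generation metadata into the playlist
# summary so the prompt and model travel with the playlist itself.
def playlist_summary(prompt: str, model: str) -> str:
    meta = {
        "prompt": prompt,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return "Generated by LLM. Metadata: " + json.dumps(meta)
```

Storing the metadata as JSON inside the summary keeps it human-readable while still being parseable later.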
Changes
Technical Details
- `app/services/llm_service.py`: updated to properly use the min/max track parameters
- `app/models.py`: updated to clarify the model options in the field description

Motivation
GPT-4 requests can be expensive ($0.30+ per request), while GPT-3.5 Turbo is significantly more affordable (under $0.01 for the same request) and faster. This change gives users the flexibility to choose based on their needs and budget constraints.
Testing
Screenshot