fix: switch default AI model to gemini-2.5-flash to avoid quota limits #288
omsherikar wants to merge 1 commit into bubblelabai:main from
Conversation
- Update RECOMMENDED_MODELS (BEST/PRO/FAST) from gemini-3-pro-preview to gemini-2.5-flash
- Update COFFEE_DEFAULT_MODEL from gemini-3-pro-preview to gemini-2.5-flash
- Update BubbleFlowGeneratorWorkflow model from gemini-3-flash-preview to gemini-2.5-flash
- Fix documentation references and comments to reflect new default

This resolves quota limit errors on fresh installations while maintaining quality and reducing costs. Users can still explicitly select other models.
📝 Walkthrough

Configuration updates swap default AI model references from Gemini 3 variants to Gemini 2.5-flash across BubbleFlow generation prompts, workflow configurations, and shared schemas. These changes standardize the default model selection for the generation system without altering control flow or logic.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Pull request overview
This PR updates the project’s default/recommended Gemini model selections to google/gemini-2.5-flash to reduce quota-limit errors on fresh installs while keeping acceptable quality/cost.
Changes:
- Switch `COFFEE_DEFAULT_MODEL` to `google/gemini-2.5-flash`.
- Update `RECOMMENDED_MODELS` (BEST/PRO/FAST) to `google/gemini-2.5-flash`.
- Update BubbleFlowGeneratorWorkflow to use `google/gemini-2.5-flash` and adjust related prompt/documentation text.
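For orientation, the constants this PR touches might end up looking roughly like the following after the change. The object shape and the "was" comments are assumptions for illustration, not code copied from the repository:

```typescript
// Assumed shape of the shared defaults after this PR; the real definitions
// live in packages/bubble-shared-schemas and may differ in structure.
const COFFEE_DEFAULT_MODEL = 'google/gemini-2.5-flash'; // was gemini-3-pro-preview

const RECOMMENDED_MODELS = {
  BEST: 'google/gemini-2.5-flash', // was gemini-3-pro-preview
  PRO: 'google/gemini-2.5-flash',
  FAST: 'google/gemini-2.5-flash',
} as const;
```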
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| packages/bubble-shared-schemas/src/coffee.ts | Updates Coffee agent’s default model constant. |
| packages/bubble-shared-schemas/src/bubbleflow-generation-prompts.ts | Updates shared recommended model constants used in instructions/config. |
| apps/bubblelab-api/src/services/ai/bubbleflow-generator.workflow.ts | Switches BubbleFlow generator agent model to gemini-2.5-flash and changes backup model. |
| apps/bubblelab-api/src/config/bubbleflow-generation-prompts.ts | Updates embedded prompt documentation/examples to reflect the new default model. |
```diff
  model: 'google/gemini-2.5-flash',
  temperature: 0.3,
  backupModel: {
-   model: 'anthropic/claude-sonnet-4-5',
+   model: 'google/gemini-2.5-flash-lite',
    temperature: 0.3,
  },
```
The generator agent’s backupModel was changed to another Google Gemini variant. If the primary model fails due to provider-wide issues or quota/rate limits, falling back to the same provider is unlikely to recover. Consider keeping a cross-provider backup (e.g., the previous Anthropic model) or making the backup configurable so fallback can still succeed when Gemini is unavailable.
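A cross-provider fallback of the kind this comment suggests could be sketched as follows. The field names mirror the diff above, and the specific Anthropic model is illustrative (taken from the removed line), not a confirmed recommendation:

```typescript
// Hypothetical generator config keeping the backup on a different provider,
// so a provider-wide Gemini outage or quota exhaustion can still recover.
const generatorModelConfig = {
  model: 'google/gemini-2.5-flash',
  temperature: 0.3,
  backupModel: {
    model: 'anthropic/claude-sonnet-4-5', // previous backup, cross-provider
    temperature: 0.3,
  },
};
```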
```ts
// add date ranges, or filter by publication type. Currently using gemini-2.5-flash for thorough
// multi-step research; current default is gemini-2.5-flash for balanced speed and cost.
```
The updated example comment repeats the same model name twice and still describes gemini-2.5-flash as being used for “thorough multi-step research”, which can be confusing for readers. Suggest rewriting this to state a single clear rationale for the chosen model (e.g., default/balanced) without repetition.
```diff
- // add date ranges, or filter by publication type. Currently using gemini-2.5-flash for thorough
- // multi-step research; current default is gemini-2.5-flash for balanced speed and cost.
+ // add date ranges, or filter by publication type. Uses gemini-2.5-flash as the default model for
+ // balanced speed and cost.
```
```ts
task: `Find research papers about ${ topic }...`,
model: 'google/gemini-2.5-flash',
```
In the TypeScript example snippet, the template literal interpolation is written as `${ topic }` instead of the usual `${topic}` used elsewhere. Even though it's in a documentation snippet, this odd formatting may confuse users; consider normalizing it.
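A normalized version of that snippet might read as follows; the `topic` value is hypothetical, and only the interpolation spacing differs from the example under review:

```typescript
// Hypothetical task config using the conventional `${topic}` interpolation
// style (no inner spaces), matching the rest of the documentation examples.
const topic = 'quantum computing';

const taskConfig = {
  task: `Find research papers about ${topic}...`,
  model: 'google/gemini-2.5-flash',
};
```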
Summary
Related Issues
Type of Change
Checklist
pnpm check and all tests pass
Screenshots (Required)
For New Bubble Integrations
.integration.flow.ts) covers all operations
Additional Context