
feat(extension): upgrade AI SDK to v3 and improve Ollama integration#124

Merged
lcomplete merged 2 commits into main from dev on Mar 16, 2026

Conversation

@lcomplete (Owner)

Summary

  • Upgrade AI SDK packages from v1 to v3
  • Replace ollama-ai-provider with OpenAI-compatible API for better Ollama support
  • Add thinking mode persistence across sessions
  • Fix streaming preview for new AI SDK chunk types
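As a rough illustration of the thinking-mode persistence bullet, here is a minimal sketch. The helper names (saveThinkingModeEnabled / getThinkingModeEnabled) come from the commit message; the storage key and the injectable storage interface are assumptions, and the real extension presumably calls chrome.storage.sync directly.

```typescript
// Sketch only: the actual extension code likely uses chrome.storage.sync
// directly. The storage area is injected here so the logic is testable
// outside the browser. Key name is an assumption.
interface SyncStorage {
  get(key: string): Promise<Record<string, unknown>>;
  set(items: Record<string, unknown>): Promise<void>;
}

const THINKING_MODE_KEY = "thinkingModeEnabled";

// Persist the toggle so it survives extension restarts.
async function saveThinkingModeEnabled(
  storage: SyncStorage,
  enabled: boolean
): Promise<void> {
  await storage.set({ [THINKING_MODE_KEY]: enabled });
}

// Read the persisted toggle, defaulting to off when unset.
async function getThinkingModeEnabled(storage: SyncStorage): Promise<boolean> {
  const result = await storage.get(THINKING_MODE_KEY);
  return (result[THINKING_MODE_KEY] as boolean | undefined) ?? false;
}
```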

Test plan

  • Test AI provider connection with OpenAI, Anthropic, Google, DeepSeek, Groq
  • Test Ollama with local instance
  • Verify thinking mode toggle persists after extension restart
  • Test streaming preview for reasoning models

🤖 Generated with Claude Code

lcomplete and others added 2 commits March 16, 2026 23:48
- Upgrade @ai-sdk/* packages from v1 to v3
- Remove ollama-ai-provider dependency, use OpenAI-compatible API instead
- Add getOllamaBaseUrl and getOllamaOpenAIBaseUrl for URL normalization
- Update streaming preview to handle text field and reasoning-delta chunks
- Change maxTokens to maxOutputTokens for AI SDK v3 compatibility
- Add thinking mode persistence (save/getThinkingModeEnabled)
- Add tests for Ollama URL normalization
- Upgrade TypeScript to v5.9.3
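The URL normalization helpers named above (getOllamaBaseUrl, getOllamaOpenAIBaseUrl) might look something like the following sketch. The names come from the commit message; the normalization rules shown (trim whitespace, drop trailing slashes, strip a pasted /v1 suffix) are assumptions and the PR's actual implementation may differ.

```typescript
// Drop any trailing slashes from a user-entered URL.
function stripTrailingSlash(url: string): string {
  return url.replace(/\/+$/, "");
}

// Base URL for native Ollama endpoints such as /api/tags.
// Assumption: if the user pasted an OpenAI-compatible URL ending in /v1,
// the suffix is stripped so both endpoint families share one base.
function getOllamaBaseUrl(raw: string): string {
  let url = stripTrailingSlash(raw.trim());
  if (url.endsWith("/v1")) url = url.slice(0, -"/v1".length);
  return url;
}

// Base URL for Ollama's OpenAI-compatible chat endpoint.
function getOllamaOpenAIBaseUrl(raw: string): string {
  return `${getOllamaBaseUrl(raw)}/v1`;
}
```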

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@augmentcode

augmentcode bot commented Mar 16, 2026

🤖 Augment PR Summary

Summary: Upgrades the browser extension’s AI integration to the newer AI SDK provider packages and improves Ollama support via an OpenAI-compatible endpoint.

Changes:

  • Bump @ai-sdk/* provider packages to v3 and update provider model creation to use the new LanguageModel type.
  • Replace ollama-ai-provider usage with createOpenAI pointed at an Ollama OpenAI-compatible /v1 base URL, plus helper URL normalization utilities.
  • Add helper + tests for normalizing Ollama base URLs for both /api/tags and OpenAI-compatible chat endpoints.
  • Persist “thinking mode” toggle state in chrome.storage.sync and wire it into both popup and preview UIs.
  • Update streaming preview handling to account for new chunk shapes (adding chunk.text) and reasoning delta chunk types.
  • Introduce a shared max-output-token constant used across streaming and OpenAI-compatible streaming paths.

Technical Notes: Streaming is processed via fullStream and preview state is derived from incremental chunk application; Ollama model discovery now uses a normalized base URL for /api/tags.
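The model-discovery path mentioned in the technical notes (a normalized base URL plus /api/tags) could be sketched as below. The response shape follows Ollama's tags API; the function names and error handling here are assumptions, not the PR's actual code.

```typescript
// Shape of Ollama's /api/tags response (only the field used here).
interface OllamaTagsResponse {
  models: Array<{ name: string }>;
}

// Build the tags URL from a possibly slash-terminated base URL.
function buildTagsUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, "")}/api/tags`;
}

// List locally installed model names from an Ollama instance.
async function listOllamaModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(buildTagsUrl(baseUrl));
  if (!res.ok) throw new Error(`Ollama tags request failed: ${res.status}`);
  const data = (await res.json()) as OllamaTagsResponse;
  return data.models.map((m) => m.name);
}
```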



@augmentcode augmentcode bot left a comment


Review completed. 1 suggestion posted.


Comment augment review to trigger a new review at any time.

```diff
 ): StreamingPreviewState {
   const includeReasoning = options.includeReasoning ?? true;
-  const textDelta = chunk.textDelta || "";
+  const textDelta = chunk.text || chunk.textDelta || "";
```


streamText().fullStream chunks are documented to use type: "text" (with text) for text deltas; applyStreamingPreviewChunk currently only treats "text-delta" as response text, so "text" chunks would be ignored and the preview may never show the actual answer.

Severity: high


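A chunk handler that tolerates both shapes the review discusses (the older "text-delta" with textDelta, and the newer variants carrying the delta in chunk.text) could look like this sketch. The state shape and the exact set of chunk types handled are assumptions; the PR's real applyStreamingPreviewChunk may differ.

```typescript
// Accumulated preview state derived from incremental stream chunks.
interface StreamingPreviewState {
  text: string;
  reasoning: string;
}

// Minimal chunk shape covering both old and new AI SDK delta fields.
interface StreamChunk {
  type: string;
  text?: string;
  textDelta?: string;
}

// Apply one fullStream chunk, preferring the newer `text` field but
// falling back to the legacy `textDelta`. Unknown chunk types (tool
// calls, finish events, etc.) are ignored.
function applyStreamingPreviewChunk(
  state: StreamingPreviewState,
  chunk: StreamChunk
): StreamingPreviewState {
  const delta = chunk.text ?? chunk.textDelta ?? "";
  switch (chunk.type) {
    case "text":
    case "text-delta":
      return { ...state, text: state.text + delta };
    case "reasoning-delta":
      return { ...state, reasoning: state.reasoning + delta };
    default:
      return state;
  }
}
```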

@lcomplete lcomplete merged commit 46fa354 into main Mar 16, 2026
1 check passed
