fix: per-model context window mapping#23

Closed
tbouquet wants to merge 1 commit intograykode:mainfrom
tbouquet:fix/context-window-mapping
Conversation

Contributor

@tbouquet tbouquet commented Apr 3, 2026

Summary

  • Replace the binary 200K/1M heuristic with a model-aware lookup
  • GPT/O-series models: 128K context window
  • Gemini models: 1M context window
  • Claude models: 200K default (auto-detects 1M when token usage exceeds 200K or a [1m] suffix is present)
  • Unknown models fall back to 200K

Fixes incorrect context percentage display for non-Claude models used via Codex.
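The mapping above could be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code; the function name `context_window` and the exact model-name matching are assumptions.

```rust
// Hypothetical sketch of the per-model context window lookup described
// in the summary. Identifiers and matching rules are illustrative.
fn context_window(model: &str, used_tokens: u64) -> u64 {
    let m = model.to_lowercase();
    if m.starts_with("gpt") || m.starts_with("o1") || m.starts_with("o3") {
        // GPT/O-series models: 128K context window.
        128_000
    } else if m.contains("gemini") {
        // Gemini models: 1M context window.
        1_000_000
    } else if m.contains("claude") {
        // Claude defaults to 200K, but auto-detects 1M when a [1m]
        // suffix is present or usage already exceeds 200K tokens.
        if m.contains("[1m]") || used_tokens > 200_000 {
            1_000_000
        } else {
            200_000
        }
    } else {
        // Unknown models fall back to 200K.
        200_000
    }
}

fn main() {
    assert_eq!(context_window("gpt-4o", 0), 128_000);
    assert_eq!(context_window("gemini-2.0-flash", 0), 1_000_000);
    assert_eq!(context_window("claude-sonnet-4", 0), 200_000);
    assert_eq!(context_window("claude-sonnet-4[1m]", 0), 1_000_000);
    assert_eq!(context_window("claude-sonnet-4", 250_000), 1_000_000);
    println!("ok");
}
```

A lookup keyed on model-name prefixes keeps the fallback behavior explicit while preserving the existing token-based 1M auto-detection for Claude.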

Test plan

  • cargo test — all tests pass, including 4 new model-specific assertions
  • Existing behavior preserved for Claude models
  • [1m] suffix and token-based 1M detection unchanged

Replace the binary 200K/1M heuristic with a model-aware lookup
table. Handles GPT (128K), Gemini (1M), and all Claude variants
correctly. Falls back to 200K for unknown models, and still
auto-detects 1M when token usage exceeds 200K.
@tbouquet tbouquet closed this Apr 3, 2026