feat: add MiniMax as alternative LLM provider for RAG dataflows#1523

Open
octo-patch wants to merge 1 commit into apache:main from octo-patch:feat/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as an alternative LLM provider alongside OpenAI in both faiss_rag and conversational_rag contrib dataflows, using Hamilton's native @config.when pattern for provider switching.

  • Add MiniMax M2.7 (1M token context window) via OpenAI-compatible API
  • Use @config.when_not(provider="minimax") for OpenAI default (backward-compatible)
  • Use @config.when(provider="minimax") for MiniMax provider variant
  • Update valid_configs.jsonl, tags.json, and README.md for both dataflows
  • Add 35 unit tests + 6 integration tests (all passing)

Usage

Switch to MiniMax by setting MINIMAX_API_KEY and passing {"provider": "minimax"} in config:

from hamilton import driver

# assumes faiss_rag (the contrib dataflow module) has already been imported
dr = (
    driver.Builder()
    .with_modules(faiss_rag)
    .with_config({"provider": "minimax"})  # or {} for default OpenAI
    .build()
)

Why MiniMax?

MiniMax offers high-performance models with large context windows (up to 1M tokens) via an OpenAI-compatible API, making it a drop-in alternative to OpenAI in RAG pipelines. The M2.7 model provides strong reasoning capabilities at competitive pricing.

Files Changed (10 files)

faiss_rag (5 files):

  • __init__.py: Multi-provider LLM client + response via @config.when
  • README.md: MiniMax usage docs and config table
  • valid_configs.jsonl: Added minimax config
  • tags.json: Added minimax tag
  • test_faiss_rag.py: 17 unit + 3 integration tests

conversational_rag (5 files):

  • __init__.py: Multi-provider LLM client, standalone_question, response via @config.when
  • README.md: MiniMax usage docs and config table
  • valid_configs.jsonl: Added minimax config
  • tags.json: Added minimax tag
  • test_conversational_rag.py: 18 unit + 3 integration tests

Test plan

  • All 35 unit tests pass (mocked LLM clients)
  • All 6 integration tests pass (real MiniMax API calls)
  • Default config ({}) still resolves to OpenAI (backward compatible)
  • {"provider": "minimax"} correctly resolves to MiniMax M2.7
  • Hamilton driver builds successfully with both configs
  • Verify CI passes on PR

Add MiniMax M2.7 as an alternative LLM provider alongside OpenAI in both
faiss_rag and conversational_rag dataflows using Hamilton's @config.when
pattern.

Changes:
- Use @config.when_not(provider="minimax") for OpenAI (backward-compatible default)
- Use @config.when(provider="minimax") for MiniMax via OpenAI-compatible API
- Update valid_configs.jsonl with minimax configuration
- Update tags.json with minimax tag
- Update README.md with MiniMax usage documentation
- Add 35 unit tests + 6 integration tests

MiniMax M2.7 features:
- 1M token context window
- OpenAI-compatible API at https://api.minimax.io/v1
- Configurable via MINIMAX_API_KEY environment variable