feat: add MiniMax as alternative LLM provider for RAG dataflows #1523
Open
octo-patch wants to merge 1 commit into apache:main from
Conversation
Add MiniMax M2.7 as an alternative LLM provider alongside OpenAI in both faiss_rag and conversational_rag dataflows, using Hamilton's @config.when pattern.

Changes:
- Use @config.when_not(provider="minimax") for OpenAI (backward-compatible default)
- Use @config.when(provider="minimax") for MiniMax via its OpenAI-compatible API
- Update valid_configs.jsonl with the minimax configuration
- Update tags.json with the minimax tag
- Update README.md with MiniMax usage documentation
- Add 35 unit tests + 6 integration tests

MiniMax M2.7 features:
- 1M token context window
- OpenAI-compatible API at https://api.minimax.io/v1
- Configurable via the MINIMAX_API_KEY environment variable
Summary
Add MiniMax as an alternative LLM provider alongside OpenAI in both faiss_rag and conversational_rag contrib dataflows, using Hamilton's native @config.when pattern for provider switching.
Usage
Switch to MiniMax by setting MINIMAX_API_KEY and passing {"provider": "minimax"} in config:
Why MiniMax?
MiniMax offers high-performance models with large context windows (up to 1M tokens) via an OpenAI-compatible API, making it a drop-in alternative for OpenAI in RAG pipelines. The M2.7 model provides strong reasoning capabilities at competitive pricing.
Files Changed (10 files)
faiss_rag (5 files):
conversational_rag (5 files):
Test plan