octo-patch opened a new pull request, #1523:
URL: https://github.com/apache/hamilton/pull/1523

   ## Summary
   
   Add [MiniMax](https://www.minimax.io/) as an alternative LLM provider 
alongside OpenAI in both the faiss_rag and conversational_rag contrib dataflows, 
using Hamilton's native `@config.when` pattern for provider switching.
   
   - Add MiniMax M2.7 (1M-token context window) via its OpenAI-compatible API
   - Use `@config.when_not(provider="minimax")` for the OpenAI default 
(backward-compatible)
   - Use `@config.when(provider="minimax")` for the MiniMax provider variant
   - Update valid_configs.jsonl, tags.json, and README.md for both dataflows
   - Add 35 unit tests + 6 integration tests (all passing)
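   The switching pattern described above can be sketched roughly as follows (a 
minimal illustration, not this PR's actual code — the real functions return 
configured API clients rather than strings):

   ```python
   # Minimal sketch of Hamilton's @config.when provider switching.
   # Both functions resolve to a single node named "llm_client": Hamilton
   # strips the "__suffix", and the driver config decides which body runs.
   from hamilton.function_modifiers import config


   @config.when_not(provider="minimax")
   def llm_client__openai() -> str:
       """Default implementation, used unless provider == 'minimax'."""
       return "openai-client"  # stand-in for a real OpenAI client object


   @config.when(provider="minimax")
   def llm_client__minimax() -> str:
       """Variant selected when the driver config has provider='minimax'."""
       return "minimax-client"  # stand-in for a real MiniMax client object
   ```

   With an empty config, the `when_not` variant is chosen, which is what keeps 
the default OpenAI behavior backward-compatible.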
   
   ### Usage
   
   Switch to MiniMax by setting the `MINIMAX_API_KEY` environment variable and 
passing `{"provider": "minimax"}` in the driver config:
   
   ```python
   from hamilton import driver

   # faiss_rag is the contrib dataflow module; import it however your
   # setup exposes Hamilton contrib dataflows.
   dr = (
       driver.Builder()
       .with_modules(faiss_rag)
       .with_config({"provider": "minimax"})  # or {} for default OpenAI
       .build()
   )
   ```
   
   ### Why MiniMax?
   
   [MiniMax](https://www.minimax.io/) offers high-performance models with large 
context windows (up to 1M tokens) via an OpenAI-compatible API, making it a 
drop-in alternative to OpenAI in RAG pipelines. The M2.7 model provides strong 
reasoning capabilities at competitive pricing.
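   As a concrete illustration of the OpenAI-compatibility claim, the standard 
openai SDK can simply be pointed at a different base URL (the endpoint and 
fallback key below are assumptions for illustration, not taken from this PR — 
check MiniMax's docs for the actual values):

   ```python
   # Sketch: reusing the openai SDK against an OpenAI-compatible endpoint.
   import os

   from openai import OpenAI

   client = OpenAI(
       api_key=os.environ.get("MINIMAX_API_KEY", "dummy-key"),  # fallback for illustration only
       base_url="https://api.minimax.io/v1",  # assumed MiniMax endpoint
   )
   ```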
   
   ### Files Changed (10 files)
   
   **faiss_rag** (5 files):
   - `__init__.py`: Multi-provider LLM client + response via `@config.when`
   - `README.md`: MiniMax usage docs and config table
   - `valid_configs.jsonl`: Added minimax config
   - `tags.json`: Added minimax tag
   - `test_faiss_rag.py`: 17 unit + 3 integration tests
   
   **conversational_rag** (5 files):
   - `__init__.py`: Multi-provider LLM client, standalone_question, response via 
`@config.when`
   - `README.md`: MiniMax usage docs and config table
   - `valid_configs.jsonl`: Added minimax config
   - `tags.json`: Added minimax tag
   - `test_conversational_rag.py`: 18 unit + 3 integration tests
   
   ## Test plan
   
   - [x] All 35 unit tests pass (mocked LLM clients)
   - [x] All 6 integration tests pass (real MiniMax API calls)
   - [x] Default config (`{}`) still resolves to OpenAI (backward-compatible)
   - [x] `{"provider": "minimax"}` correctly resolves to MiniMax M2.7
   - [x] Hamilton driver builds successfully with both configs
   - [ ] Verify CI passes on PR
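   The mocked unit tests could look roughly like the sketch below (`answer` is a 
hypothetical stand-in for the dataflow's LLM-backed response node; the PR's 
real tests exercise the actual faiss_rag/conversational_rag functions):

   ```python
   # Hypothetical sketch of the mocking approach: the LLM client is
   # replaced with a MagicMock so no network call is made.
   from unittest import mock


   def answer(llm_client, question: str) -> str:
       """Stand-in for the dataflow's LLM-backed response function."""
       completion = llm_client.chat.completions.create(
           model="MiniMax-M2.7",  # model name as stated in this PR
           messages=[{"role": "user", "content": question}],
       )
       return completion.choices[0].message.content


   def test_answer_with_mocked_client():
       client = mock.MagicMock()
       client.chat.completions.create.return_value.choices = [
           mock.MagicMock(message=mock.MagicMock(content="42"))
       ]
       assert answer(client, "meaning of life?") == "42"
       client.chat.completions.create.assert_called_once()
   ```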
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
