skrawcz commented on code in PR #1523:
URL: https://github.com/apache/hamilton/pull/1523#discussion_r3005277163
##########
contrib/hamilton/contrib/dagworks/faiss_rag/__init__.py:
##########
@@ -94,11 +114,29 @@ def rag_response(rag_prompt: str, llm_client: openai.OpenAI) -> str:
return response.choices[0].message.content
+@config.when(provider="minimax")
+def rag_response__minimax(rag_prompt: str, llm_client: openai.OpenAI) -> str:
+ """Creates the RAG response using MiniMax M2.7.
+
+ MiniMax M2.7 is a high-performance model with 1M token context window.
+
+ :param rag_prompt: the prompt to send to the LLM.
+ :param llm_client: the LLM client to use.
+ :return: the response from the LLM.
+ """
+ response = llm_client.chat.completions.create(
+ model="MiniMax-M2.7",
Review Comment:
Can we move this (the model name) up as an input parameter and simplify this code a little?
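
One possible reading of the suggestion, sketched outside the PR: hoist the hard-coded model string into a function parameter so a single `rag_response` works for any provider. The `messages` payload and the `Any` typing are assumptions here, since the hunk is truncated before the request body:

```python
from typing import Any


def rag_response(
    rag_prompt: str, llm_client: Any, llm_model: str = "MiniMax-M2.7"
) -> str:
    """Creates the RAG response using the configured LLM.

    :param rag_prompt: the prompt to send to the LLM.
    :param llm_client: the LLM client to use (e.g. an openai.OpenAI instance).
    :param llm_model: the model name, now a plain input parameter.
    :return: the response from the LLM.
    """
    response = llm_client.chat.completions.create(
        model=llm_model,
        messages=[{"role": "user", "content": rag_prompt}],
    )
    return response.choices[0].message.content
```

With the model as an input, the `@config.when` variant and its near-duplicate body could likely be dropped entirely, which seems to be the simplification the reviewer is after.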
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]