try-agaaain opened a new pull request, #277:
URL: https://github.com/apache/incubator-hugegraph-ai/pull/277

   Link to #234
   
   I'm working on replacing the `.env` configuration method with YAML and have 
completed a preliminary implementation. (I'll switch to OmegaConf later)
   
   The generated configuration file looks like this:
   
   <details>
   <summary>config.yaml</summary>
   
   ```yaml
   AdminConfig:
     ADMIN_TOKEN: xxxx
     ENABLE_LOGIN: 'False'
     USER_TOKEN: '4321'
   HugeGraphConfig:
     EDGE_LIMIT_PRE_LABEL: 8
     GRAPH_NAME: hugegraph
     GRAPH_PWD: xxx
     GRAPH_SPACE: null
     GRAPH_URL: 127.0.0.1:8080
     GRAPH_USER: admin
     LIMIT_PROPERTY: 'False'
     MAX_GRAPH_ITEMS: 30
     MAX_GRAPH_PATH: 10
     TOPK_PER_KEYWORD: 1
     TOPK_RETURN_RESULTS: 20
     VECTOR_DIS_THRESHOLD: 0.9
   LLMConfig:
     CHAT_LLM_TYPE: openai
     COHERE_BASE_URL: https://api.cohere.com/v1/rerank
     EMBEDDING_TYPE: openai
     EXTRACT_LLM_TYPE: openai
     LITELLM_CHAT_API_BASE: null
     LITELLM_CHAT_API_KEY: null
     LITELLM_CHAT_LANGUAGE_MODEL: gemini-2.0-flash
     LITELLM_CHAT_TOKENS: 8192
     LITELLM_EMBEDDING_API_BASE: null
     LITELLM_EMBEDDING_API_KEY: null
     LITELLM_EMBEDDING_MODEL: openai/text-embedding-3-small
     LITELLM_EXTRACT_API_BASE: null
     LITELLM_EXTRACT_API_KEY: null
     LITELLM_EXTRACT_LANGUAGE_MODEL: gemini-2.0-flash
     LITELLM_EXTRACT_TOKENS: 256
     LITELLM_TEXT2GQL_API_BASE: null
     LITELLM_TEXT2GQL_API_KEY: null
     LITELLM_TEXT2GQL_LANGUAGE_MODEL: gemini-2.0-flash
     LITELLM_TEXT2GQL_TOKENS: 4096
     OLLAMA_CHAT_HOST: 127.0.0.1
     OLLAMA_CHAT_LANGUAGE_MODEL: null
     OLLAMA_CHAT_PORT: 11434
     OLLAMA_EMBEDDING_HOST: 127.0.0.1
     OLLAMA_EMBEDDING_MODEL: null
     OLLAMA_EMBEDDING_PORT: 11434
     OLLAMA_EXTRACT_HOST: 127.0.0.1
     OLLAMA_EXTRACT_LANGUAGE_MODEL: null
     OLLAMA_EXTRACT_PORT: 11434
     OLLAMA_TEXT2GQL_HOST: 127.0.0.1
     OLLAMA_TEXT2GQL_LANGUAGE_MODEL: null
     OLLAMA_TEXT2GQL_PORT: 11434
     OPENAI_CHAT_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
     OPENAI_CHAT_API_KEY: null
     OPENAI_CHAT_LANGUAGE_MODEL: gemini-2.0-flash
     OPENAI_CHAT_TOKENS: 8192
     OPENAI_EMBEDDING_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
     OPENAI_EMBEDDING_API_KEY: null
     OPENAI_EMBEDDING_MODEL: text-embedding-004
     OPENAI_EXTRACT_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
     OPENAI_EXTRACT_API_KEY: null
     OPENAI_EXTRACT_LANGUAGE_MODEL: gemini-2.0-flash
     OPENAI_EXTRACT_TOKENS: 8192
     OPENAI_TEXT2GQL_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
     OPENAI_TEXT2GQL_API_KEY: null
     OPENAI_TEXT2GQL_LANGUAGE_MODEL: gemini-2.0-flash
     OPENAI_TEXT2GQL_TOKENS: 8192
     QIANFAN_CHAT_ACCESS_TOKEN: null
     QIANFAN_CHAT_API_KEY: null
     QIANFAN_CHAT_LANGUAGE_MODEL: ERNIE-Speed-128K
     QIANFAN_CHAT_SECRET_KEY: null
     QIANFAN_CHAT_URL: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/
     QIANFAN_EMBEDDING_API_KEY: null
     QIANFAN_EMBEDDING_MODEL: embedding-v1
     QIANFAN_EMBEDDING_SECRET_KEY: null
     QIANFAN_EMBED_URL: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/embeddings/
     QIANFAN_EXTRACT_ACCESS_TOKEN: null
     QIANFAN_EXTRACT_API_KEY: null
     QIANFAN_EXTRACT_LANGUAGE_MODEL: ERNIE-Speed-128K
     QIANFAN_EXTRACT_SECRET_KEY: null
     QIANFAN_TEXT2GQL_ACCESS_TOKEN: null
     QIANFAN_TEXT2GQL_API_KEY: null
     QIANFAN_TEXT2GQL_LANGUAGE_MODEL: ERNIE-Speed-128K
     QIANFAN_TEXT2GQL_SECRET_KEY: null
     QIANFAN_URL_PREFIX: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop
     RERANKER_API_KEY: null
     RERANKER_MODEL: null
     RERANKER_TYPE: null
     TEXT2GQL_LLM_TYPE: openai
   ```
   
   </details>
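   Most of the `LLMConfig` keys follow a `PROVIDER_TASK_PARAM` naming pattern, so one option is to fold them into a nested structure at load time. The sketch below is illustrative only (not code from this PR); the provider and task lists are assumptions inferred from the keys shown above, and keys that don't match the pattern fall back to a `general` bucket:

   ```python
   # Illustrative sketch: regroup flat PROVIDER_TASK_PARAM keys into a nested
   # dict, e.g. OPENAI_CHAT_API_BASE -> nested["openai"]["chat"]["api_base"].
   # PROVIDERS/TASKS are assumptions based on the config dump above.
   PROVIDERS = ("OPENAI", "LITELLM", "OLLAMA", "QIANFAN")
   TASKS = ("CHAT", "EMBEDDING", "EXTRACT", "TEXT2GQL")

   def regroup(flat: dict) -> dict:
       nested: dict = {}
       for key, value in flat.items():
           placed = False
           for provider in PROVIDERS:
               prefix = provider + "_"
               if not key.startswith(prefix):
                   continue
               rest = key[len(prefix):]
               task = next((t for t in TASKS if rest.startswith(t + "_")), None)
               if task:
                   param = rest[len(task) + 1:].lower()
                   (nested.setdefault(provider.lower(), {})
                          .setdefault(task.lower(), {}))[param] = value
                   placed = True
               break  # provider matched; don't try other prefixes
           if not placed:
               # keys like CHAT_LLM_TYPE or COHERE_BASE_URL land here
               nested.setdefault("general", {})[key.lower()] = value
       return nested

   flat = {
       "OPENAI_CHAT_API_KEY": None,
       "OPENAI_CHAT_TOKENS": 8192,
       "OLLAMA_EMBEDDING_HOST": "127.0.0.1",
       "CHAT_LLM_TYPE": "openai",
   }
   nested = regroup(flat)
   ```

   With a layout like that, OmegaConf's dotted access (`cfg.openai.chat.tokens`) would replace the long flat names.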
   
   The `LLMConfig` section has a large number of configuration items. Are there any suggestions on how to better organize and manage these configurations?
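   
   For example, one possible layout (purely a suggestion, grouping by task and then provider, with lowercase keys) might look like:
   
   ```yaml
   LLMConfig:
     chat:
       type: openai
       openai:
         api_base: https://generativelanguage.googleapis.com/v1beta/openai
         api_key: null
         language_model: gemini-2.0-flash
         tokens: 8192
       ollama:
         host: 127.0.0.1
         port: 11434
     embedding:
       type: openai
     reranker:
       type: null
       api_key: null
   ```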


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

