coolboywcm commented on issue #297:
URL: https://github.com/apache/incubator-hugegraph-ai/issues/297#issuecomment-3157148264

   I've set ENABLE_LOGIN=False in the .env file and confirmed it's working:
   
   /home/work/hugegraph-llm/.env
   GRAPH_URL=127.0.0.1:8080
   GRAPH_NAME=hugegraph
   GRAPH_USER=admin
   GRAPH_PWD=xxx
   GRAPH_SPACE=
   LIMIT_PROPERTY=False
   MAX_GRAPH_PATH=10
   MAX_GRAPH_ITEMS=30
   EDGE_LIMIT_PRE_LABEL=8
   VECTOR_DIS_THRESHOLD=0.9
   TOPK_PER_KEYWORD=1
   TOPK_RETURN_RESULTS=20
   **ENABLE_LOGIN=False**
   USER_TOKEN=4321
   ADMIN_TOKEN=xxxx
   CHAT_LLM_TYPE=openai
   EXTRACT_LLM_TYPE=openai
   TEXT2GQL_LLM_TYPE=openai
   EMBEDDING_TYPE=openai
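
   For reference, the app only treats the literal string `"true"` (case-insensitive) as enabled, so `ENABLE_LOGIN=False` should indeed disable auth. A minimal sketch of that string-to-bool check (illustrative helper name, not the project's actual config loader):

   ```python
   # Sketch of the check in app.py: only the literal "true" (case-insensitive)
   # enables login, so "False", "0", or an unset variable all disable auth.
   # login_enabled() is a hypothetical helper, not part of hugegraph-llm.
   def login_enabled(env: dict) -> bool:
       return env.get("ENABLE_LOGIN", "false").lower() == "true"

   print(login_enabled({"ENABLE_LOGIN": "False"}))  # False -> auth disabled
   print(login_enabled({"ENABLE_LOGIN": "true"}))   # True  -> auth enabled
   print(login_enabled({}))                         # False -> auth disabled
   ```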
   
   log:
   [root@luqmvbs2g8cf8g ~]# docker logs rag | grep "Authentication is"
                    INFO     llm: (Status) Authentication is disabled  app.py:159
                    INFO     llm: (Status) Authentication is disabled  app.py:159
   
   /home/work/hugegraph-llm/src/hugegraph_llm/demo/rag_demo/app.py
   
   def create_app():
       app = FastAPI(lifespan=lifespan)
       # we don't need to manually check the env now
       # settings.check_env()
       prompt.update_yaml_file()
       auth_enabled = admin_settings.enable_login.lower() == "true"
       log.info("(Status) Authentication is %s now.", "enabled" if auth_enabled else "disabled")  # line 159
       api_auth = APIRouter(dependencies=[Depends(authenticate)] if auth_enabled else [])
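
   With `auth_enabled == False`, the router gets an empty dependency list, so the `authenticate` check never runs for those routes. A stand-in for that mechanism (hypothetical names, not the real FastAPI library):

   ```python
   # Stand-in for FastAPI's dependency mechanism (hypothetical, not the real
   # library): the "router" runs every dependency before the handler, so an
   # empty dependency list means no auth check executes at all.
   class AuthError(Exception):
       pass

   def authenticate():
       raise AuthError("401: token required")

   def handle(query: str, dependencies: list):
       for dep in dependencies:
           dep()  # a dependency can abort the request (e.g. with a 401)
       return {"status": 200, "query": query}

   auth_enabled = False  # mirrors ENABLE_LOGIN=False
   deps = [authenticate] if auth_enabled else []
   print(handle("Who is Sarah?", deps))  # reaches the handler, no 401
   ```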
   
   However, when I submit a query (e.g., "Who is Sarah?"), the system still returns a **401 error** with the message '请先登录' ("please log in first"):
   
   <img width="2176" height="400" alt="Image" src="https://github.com/user-attachments/assets/fcb7692b-9fc7-4575-953b-b5926a79eb18" />

   <img width="516" height="142" alt="Image" src="https://github.com/user-attachments/assets/453d704e-2753-45c1-bd62-493a389b756f" />
   
   log content:
   `[08/06/25 01:01:22] INFO     llm: Token usage: {"completion_tokens":205,"prompt_tokens":429,"total_tokens":634,"completion_tokens_details":{"accepted_prediction_tokens":null,"audio_tokens":null,"reasoning_tokens":199,"rejected_prediction_tokens":null},"prompt_tokens_details":{"audio_tokens":null,"cached_tokens":0}}  openai.py:73
                       DEBUG    llm: Keyword extraction time: 7.59 seconds  keyword_extract.py:65
                       INFO     llm: User Query: Who is Sarah?  keyword_extract.py:72
                                Keywords: ['Sarah']
                       DEBUG    llm: Operator KeywordExtract finished in 7.67 seconds  decorators.py:74
                       DEBUG    llm: Running operator: SemanticIdQuery  decorators.py:68
                       CRITICAL llm: Error code: 401 - {'message': '请先登录', 'status': 10025, 'timestamp': 1754442082654}  rag_block.py:227
   
   Traceback (most recent call last):
     File "/home/work/hugegraph-llm/src/hugegraph_llm/demo/rag_demo/rag_block.py", line 204, in rag_answer_streaming
       context = rag.run(verbose=True, query=text, vector_search=vector_search, graph_search=graph_search)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/utils/decorators.py", line 50, in sync_wrapper
       result = func(*args, **kwargs)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/utils/decorators.py", line 96, in sync_wrapper
       result = func(*args, **kwargs)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/operators/graph_rag_task.py", line 254, in run
       context = self._run_operator(operator, context)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/utils/decorators.py", line 70, in wrapper
       result = func(*args, **kwargs)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/operators/graph_rag_task.py", line 259, in _run_operator
       return operator.run(context)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/operators/index_op/semantic_id_query.py", line 101, in run
       fuzzy_match_vids = self._fuzzy_match_vids(unmatched_vids)
     File "/home/work/hugegraph-llm/src/hugegraph_llm/operators/index_op/semantic_id_query.py", line 78, in _fuzzy_match_vids
       keyword_vector = self.embedding.get_texts_embeddings([keyword])[0]
     File "/home/work/hugegraph-llm/src/hugegraph_llm/models/embeddings/openai.py", line 62, in get_texts_embeddings
       response = self.client.embeddings.create(input=texts, model=self.embedding_model_name)
     File "/home/work/.venv/lib/python3.10/site-packages/openai/resources/embeddings.py", line 125, in create
       return self._post(
     File "/home/work/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1283, in post
       return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
     File "/home/work/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 960, in request
       return self._request(
     File "/home/work/.venv/lib/python3.10/site-packages/openai/_base_client.py", line 1064, in _request
       raise self._make_status_error_from_response(err.response) from None
   openai.AuthenticationError: Error code: 401 - {'message': '请先登录', 'status': 10025, 'timestamp': 1754442082654}

   During handling of the above exception, another exception occurred:

   Traceback (most recent call last):
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
       response = await route_utils.call_process_api(
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
       output = await app.get_blocks().process_api(
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 2103, in process_api
       result = await self.call_function(
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1662, in call_function
       prediction = await utils.async_iteration(iterator)
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/utils.py", line 735, in async_iteration
       return await anext(iterator)
     File "/home/work/.venv/lib/python3.10/site-packages/gradio/utils.py", line 840, in asyncgen_wrapper
       response = await iterator.__anext__()
     File "/home/work/hugegraph-llm/src/hugegraph_llm/demo/rag_demo/rag_block.py", line 228, in rag_answer_streaming
       raise gr.Error(f"An unexpected error occurred: {str(e)}")
   gradio.exceptions.Error: "An unexpected error occurred: Error code: 401 - {'message': '请先登录', 'status': 10025, 'timestamp': 1754`
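
   Note that the first traceback raises `openai.AuthenticationError` from inside `self.client.embeddings.create()`, which suggests the 401 comes from the OpenAI-compatible embedding service rather than from the demo app's own login layer. A hedged diagnostic sketch (stdlib only) that builds the same `POST /embeddings` request directly so the raw response can be inspected; the base URL, model name, and env-var name below are assumptions to be replaced with the values from your `.env`:

   ```python
   import json
   import os
   from urllib import request

   # Hypothetical helper: construct the OpenAI-style embeddings request so
   # you can send it yourself and see which service answers 401. The base
   # URL, model, and OPENAI_API_KEY env var are assumptions, not values
   # taken from the hugegraph-llm config.
   def build_embeddings_request(base_url: str, api_key: str, model: str, text: str) -> request.Request:
       return request.Request(
           base_url.rstrip("/") + "/embeddings",
           data=json.dumps({"model": model, "input": [text]}).encode("utf-8"),
           headers={
               "Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json",
           },
           method="POST",
       )

   req = build_embeddings_request(
       "http://127.0.0.1:8000/v1",               # assumed endpoint
       os.environ.get("OPENAI_API_KEY", ""),     # assumed env var
       "text-embedding-3-small",                 # assumed model
       "Who is Sarah?",
   )
   print(req.full_url, req.get_method())
   # Send with request.urlopen(req) and inspect the body; a 401 with
   # {'message': '请先登录', ...} here would point at the embedding
   # credentials, independent of ENABLE_LOGIN.
   ```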


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

