jason810496 commented on code in PR #64209:
URL: https://github.com/apache/airflow/pull/64209#discussion_r2998790109


##########
shared/providers_discovery/src/airflow_shared/providers_discovery/providers_discovery.py:
##########
@@ -29,12 +30,11 @@
 from time import perf_counter
 from typing import Any, NamedTuple, ParamSpec, Protocol, cast
 
-import structlog
 from packaging.utils import canonicalize_name
 
 from ..module_loading import entry_points_with_dist
 
-log = structlog.getLogger(__name__)
+log = logging.getLogger(__name__)

Review Comment:
   I changed this intentionally: I found that `structlog.getLogger(__name__)` does not respect the `AIRFLOW__LOGGING__LOGGING_LEVEL` log level, while switching from `structlog` to `logging` makes the logging respect `AIRFLOW__LOGGING__LOGGING_LEVEL` again.
   
   Otherwise I get debug logging from `airflow config list --default`, which caused `prek run check-default-configuration --all-files` to fail in the previous CI run.
   
   ---
   
   Root cause: `providers_discovery.py` used `structlog.getLogger()` directly. Before `structlog.configure()` is called (which happens later, in `settings.py:726`), structlog's default `PrintLogger` writes to stdout with no level filtering. So debug logs emitted during early provider discovery pollute the stdout of `airflow config list --default`, corrupting the generated config file.
   
   Fix: switched to stdlib `logging.getLogger()`. Stdlib logging defaults to the WARNING level and writes to stderr, so debug logs are suppressed and stdout stays clean. This is also the correct pattern for shared library code: structlog configuration is the application's responsibility.
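
   The stdlib default behavior described above is easy to verify. A minimal sketch (the logger name below is illustrative, not the actual Airflow module path):

```python
import contextlib
import io
import logging

# An unconfigured stdlib logger inherits the root logger's default
# WARNING level, so debug records are dropped before any handler runs,
# and nothing is ever written to stdout.
log = logging.getLogger("providers_discovery_demo")  # illustrative name

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    log.debug("scanning provider entry points ...")  # below WARNING: suppressed

print(log.getEffectiveLevel() == logging.WARNING)  # True
print(buf.getvalue() == "")                        # True: stdout stayed clean
```

   This is why the generated config file is no longer polluted: the suppression happens at the level check, before any output destination is even consulted.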



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
