rv2931 opened a new issue, #64263:
URL: https://github.com/apache/airflow/issues/64263
### Apache Airflow version
3.1.8
### What happened and how to reproduce it?
I'm struggling to get a Docker Compose stack (Airflow + Celery) working with an internal company proxy configured in `~/.docker/config.json`.
My proxies:
```json
{
  "proxies": {
    "http-proxy": "http://XXX.XXX.XXX.XXX:8080",
    "https-proxy": "http://XXX.XXX.XXX.XXX:8080",
    "no-proxy": "localhost,127.0.0.1,.airflow"
  }
}
```
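As an aside (and possibly unrelated to the bug): the Docker client documentation shows proxy settings in `~/.docker/config.json` nested under a `default` key with camelCase names. Since the proxy variables clearly do reach my containers, the flat form above is evidently honored too, but for reference the documented layout looks like this (same placeholder addresses as above):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://XXX.XXX.XXX.XXX:8080",
      "httpsProxy": "http://XXX.XXX.XXX.XXX:8080",
      "noProxy": "localhost,127.0.0.1,.airflow"
    }
  }
}
```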
In my `docker-compose.yaml`, I've configured a network and a `domainname`:
```yaml
services:
  airflow-apiserver:
    ...
    command: ["api-server"]
    hostname: airflow-apiserver
    domainname: airflow
    networks:
      - airflow
    volumes:
      - dags:/opt/airflow/dags
    ...
  airflow-worker:
    ...
    command: ["celery", "worker"]
    hostname: airflow-worker
    domainname: airflow
    volumes:
      - dags:/opt/airflow/dags
    networks:
      - airflow
    ...
networks:
  airflow:
```
With this configuration, each container resolves the others:
@airflow-worker: `ping airflow-apiserver.airflow => OK`
@airflow-apiserver: `ping airflow-worker.airflow => OK`
But when I start a simple DAG using the Celery executor, I get this error in the UI:
```
Log message source details sources=["Could not read served logs: Invalid URL 'http://:8793/log/dag_id=example_external_python_dag/run_id=manual__2026-03-26T14:30:26.949787+00:00/task_id=run_external_python/attempt=1.log': No host supplied"]
```
As soon as I override `http_proxy`/`https_proxy`/`HTTP_PROXY`/`HTTPS_PROXY` on the
airflow-worker container to clear the variables and disable the proxy, logs are
retrieved correctly:
```yaml
airflow-worker:
  ...
  environment:
    <<: *airflow-common-env
    DUMB_INIT_SETSID: "0"
    http_proxy: ""
    https_proxy: ""
    HTTP_PROXY: ""
    HTTPS_PROXY: ""
  ...
```
I am using the default `AIRFLOW__CORE__HOSTNAME_CALLABLE`:
`airflow.utils.net.getfqdn`.
I instrumented `airflow/utils/net.py` to check the FQDN returned by the
function: in every configuration the hostname returned is the correct one
(`airflow-apiserver.airflow` and `airflow-worker.airflow`), so the hostname is
not empty at that step.
To summarize:
- when the proxy is configured on the Celery worker, the hostname is empty;
- when I specifically unset the proxy variables on the Celery worker, logs are
retrieved correctly;
- the `no_proxy` variable is correct on all containers;
- `getfqdn` returns the correct name in every configuration.
Another point: if this were a problem with the `no_proxy` configuration not
being taken into account, I would expect timeout errors from internal Docker
Compose requests being sent to my company proxy. Here the error is an empty
hostname... I don't see the link between `http_proxy` and hostname resolution.
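To double-check the `no_proxy` matching itself, one can ask the standard library directly: on Linux, `urllib.request.proxy_bypass_environment()` is the suffix-matching helper that `requests` falls back to when deciding whether to bypass the proxy. A small sketch with values mirroring the setup above (the proxy address is a placeholder):

```python
import os
import urllib.request

# Placeholder values mirroring the configuration described above.
os.environ["http_proxy"] = "http://192.0.2.1:8080"
os.environ["no_proxy"] = "localhost,127.0.0.1,.airflow"

# A leading-dot entry like ".airflow" matches any host under that suffix,
# so requests to the worker's served-logs port should bypass the proxy.
print(bool(urllib.request.proxy_bypass_environment("airflow-worker.airflow")))  # → True
print(bool(urllib.request.proxy_bypass_environment("example.com")))             # → False
```

If both lines print what the comments say, the `no_proxy` value itself is well-formed, which supports the observation that the failure is an empty hostname rather than a proxy-routing timeout.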
### What you think should happen instead?
Enabling `http_proxy`/`https_proxy` with `no_proxy` correctly configured for
the `.airflow` domain (using the network + `domainname` setup) should be
transparent: the internal Airflow and Celery services should work identically
with or without a proxy configured.
### Operating System
Debian 12 (bookworm), Docker
### Versions of Apache Airflow Providers
apache-airflow==3.1.8
apache-airflow-core==3.1.8
apache-airflow-providers-celery==3.17.0
apache-airflow-providers-common-compat==1.14.0
apache-airflow-providers-common-io==1.7.1
apache-airflow-providers-common-sql==1.32.0
apache-airflow-providers-fab==3.5.0
apache-airflow-providers-smtp==2.4.2
apache-airflow-providers-standard==1.12.0
apache-airflow-task-sdk==1.1.8
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)