You are right. Daniel did mention putting it into the image. I was resisting
that approach for reasons I will explain below.
I am not new to k8s, but I am new to Airflow, Python, and Helm. I am not just
trying to get this working for a single deployment: I work for SAS, and we are
looking into how Airflow might be made to work seamlessly with a SAS Viya
deployment at a customer site or in a cloud. I don't really want to tell every
customer to build a special Airflow image just to configure it to use the SAS
OAuth provider. I would prefer to handle this by editing a ConfigMap or
Deployment as a pre-deployment step.
The block that I want to edit is the following:
```
# When using OAuth Auth, uncomment to setup provider(s) info
# Google OAuth example:
# OAUTH_PROVIDERS = [{
#     'name': 'google',
#     'token_key': 'access_token',
#     'icon': 'fa-google',
#     'remote_app': {
#         'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
#         'client_kwargs': {
#             'scope': 'email profile'
#         },
#         'access_token_url': 'https://accounts.google.com/o/oauth2/token',
#         'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
#         'request_token_url': None,
#         'client_id': GOOGLE_KEY,
#         'client_secret': GOOGLE_SECRET_KEY,
#     }
# }]
```
Would it be possible to do this configuration in the webserverConfig that you
mention, or is building a custom image the only way?
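If it is possible, I imagine it would look roughly like the sketch below, a
values.yaml fragment assuming the chart exposes a webserver.webserverConfig
string that becomes webserver_config.py (the provider name, URLs, and
credentials are placeholders, not real values):
```
# values.yaml (sketch): assumes the chart renders webserver.webserverConfig
# into webserver_config.py; provider details below are placeholders only
webserver:
  webserverConfig: |
    from airflow.www.fab_security.manager import AUTH_OAUTH

    AUTH_TYPE = AUTH_OAUTH
    OAUTH_PROVIDERS = [{
        'name': 'sas',                      # placeholder provider name
        'token_key': 'access_token',
        'icon': 'fa-id-badge',
        'remote_app': {
            'api_base_url': 'https://<oauth-host>/',                   # placeholder
            'client_kwargs': {'scope': 'openid'},
            'access_token_url': 'https://<oauth-host>/oauth/token',     # placeholder
            'authorize_url': 'https://<oauth-host>/oauth/authorize',    # placeholder
            'request_token_url': None,
            'client_id': '<client-id>',
            'client_secret': '<client-secret>',
        }
    }]
```
If the chart does accept the file contents this way, I assume something like
`helm upgrade --install airflow apache-airflow/airflow --set-file
webserver.webserverConfig=webserver_config.py` would let me keep the Python in
a separate file rather than inline in values.yaml.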
Thanks for your help,
John
From: Jed Cunningham <[email protected]>
Sent: Friday, February 24, 2023 12:59 PM
To: [email protected]
Subject: Re: oauth configuration for airflow in k8s
Daniel means adding the webserver_config via your Dockerfile, so it is always
in any container from the start, e.g.
```
FROM apache/airflow:2.5.1
COPY webserver_config.py /opt/airflow
```
My "don't worry about reload, delete" was more of a general comment. However
you get your webserver_config into the container, don't spend time trying to
reload it in a running webserver, just delete the pod and let a fresh one come
up. You still need to get the webserver_config in the container somehow (and
not via `kubectl cp` or `kubectl exec`, but some way that'll happen in new pods
automatically). If you use `webserverConfig` in the chart, or bake it into an
image and you tag changes, the helm chart will do the restart for you
automatically.
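For the baked-image route, that can look roughly like this in values.yaml
(repository and tag here are placeholders, and I'm assuming the chart's
images.airflow keys):
```
# values.yaml (sketch): point the chart at your custom image built from the
# Dockerfile above; repository and tag below are placeholders
images:
  airflow:
    repository: registry.example.com/my-airflow
    tag: 2.5.1-webserver-config-1   # bump this tag whenever webserver_config.py changes
```
Because a new tag changes the pod template, `helm upgrade` rolls the webserver
pods for you.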
I take it you are new to k8s, so let me say that `kubectl cp` and `kubectl
exec` should only be debugging aids. You really need things to be
durable/resilient when the unexpected happens. A pod could be killed because
the node it's on comes under resource pressure, or the node itself may die.
Fresh pods should be ready to go from their config alone, with no manual
intervention needed. In this case, that means getting your webserver_config
into the image itself, or mounted into the container (via a configmap or
secret).
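The mount-it-yourself option can look roughly like the sketch below; the chart
keys and the ConfigMap name are assumptions on my part, so check them against
the chart's values:
```
# Sketch: mount a ConfigMap you manage yourself as webserver_config.py.
# Assumes a ConfigMap named airflow-webserver-config with a
# webserver_config.py key already exists in the namespace.
webserver:
  extraVolumes:
    - name: webserver-config
      configMap:
        name: airflow-webserver-config
  extraVolumeMounts:
    - name: webserver-config
      mountPath: /opt/airflow/webserver_config.py
      subPath: webserver_config.py
```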
Hope that helps!
(p.s. I'm happy to chat on slack in #helm-chart-official about this stuff as
well)