Hi all,

I'm interested in using the Airflow CLI as a thin client so that I can run
DAG-management commands like pause, unpause, trigger_dag, run, etc. from a
local machine against a remote Airflow cluster (e.g., running in Google
Container Engine).

I have tried pointing [core]sql_alchemy_conn at the remote database, but
without a shared view of the DAGs folder, the different components don't
seem to be able to sync up. For example, list_dags looks at the local DAGs
folder, but not at the database; and using trigger_dag with a local DAG
file seems to put the DAG in the database, but its task instances never
execute, presumably because none of the nodes in the cluster have a copy of
the DAG file.
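
For reference, here's roughly what I tried. The connection string, host,
and DAG name are placeholders, and this obviously won't do anything useful
without an actual remote cluster behind it:

```shell
# In the local airflow.cfg, point the CLI at the remote metadata DB
# (placeholder URI):
#
#   [core]
#   sql_alchemy_conn = postgresql+psycopg2://user:pass@REMOTE_DB_HOST:5432/airflow

# Reads the *local* DAGs folder, not the remote database:
airflow list_dags

# Creates a DagRun in the remote database, but its task instances never
# execute, since no node in the cluster has a copy of the DAG file:
airflow trigger_dag my_local_dag
```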

I think that for the CLI to be usable as a thin client, the database,
rather than the DAGs folder, needs to be the source of truth for DAGs (and
possibly other objects). Can anyone estimate how heavyweight such a change
would be?

I'm also curious what people think about moving the pointer to the current
config file into a higher-level config file that references multiple
configurations and marks one of them as "current".
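
Concretely, I'm imagining something like the following. The file name,
section names, and paths are all hypothetical; this is just a sketch of
the idea, not an existing Airflow feature:

```shell
# ~/.airflow/configs.cfg -- hypothetical meta-config (sketch only)
#
#   [configs]
#   local = /home/me/airflow/airflow.cfg
#   gke-cluster = /home/me/airflow/gke/airflow.cfg
#
#   [current]
#   config = gke-cluster
```

Switching clusters would then just mean changing the "current" pointer,
instead of swapping AIRFLOW_HOME or editing airflow.cfg by hand.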
