If this is a common pattern, then I think it's worth considering the level
of effort to bake this into airflow, both for developer convenience and
safety. If it's baked into airflow, a developer working on dags in the
cluster will be saved from accidentally overriding important callbacks that feed
I'm not opposed to the idea of building on the existing statsd metrics, but
they are really nowhere close to the richness and configurability that I'd
want for my use case.
If you look at the existing metrics, they are all about looking at the
health of an airflow cluster in aggregate
Hey Stephan, not sure if you've seen it or not, but Airflow has some built-in
support for exporting statsd metrics. We run everything in Kubernetes
and are also heavy users of Prometheus. We've had a pretty solid experience
using the statsd-exporter (https://github.com/prometheus/statsd_exporter),
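For anyone following along, turning on that built-in statsd emission is just a few config lines. A minimal sketch (the values are illustrative; port 9125 is statsd-exporter's default ingest port, and depending on your Airflow version these keys live under `[metrics]` or the older `[scheduler]` section):

```ini
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 9125
statsd_prefix = airflow
```

With that in place, statsd-exporter translates the incoming statsd packets into Prometheus metrics you can scrape.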
My answer is similar to his.
It still requires all devs making dags to use the same mechanism, but it's
really easy (assuming you're not already using those hooks).
We have a single python utility function that is set as “on_success_callback”
AND for failure. Set it in “default_args” once for the
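A minimal sketch of that pattern (the function name and metric fields are ours, not from the thread; the callback only relies on well-known keys of the context dict Airflow passes to `on_success_callback` / `on_failure_callback`):

```python
def task_state_callback(context):
    """Emit one metric record per finished task instance.

    `context` is the dict Airflow hands to on_success_callback /
    on_failure_callback; `task_instance` is a documented key.
    """
    ti = context["task_instance"]
    metric = {
        "dag_id": ti.dag_id,
        "task_id": ti.task_id,
        "state": ti.state,
    }
    # Stand-in for your real sink: swap in a statsd/Prometheus push here.
    print(metric)
    return metric


# Set it once in default_args so every task in the DAG inherits it:
default_args = {
    "owner": "data-eng",
    "on_success_callback": task_state_callback,
    "on_failure_callback": task_state_callback,
}
```

Because it's set in `default_args`, individual tasks can still override it, which is exactly the accidental-override risk raised earlier in the thread.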
Hi-
I'm pretty new to airflow, and I'm working on getting
visibility/observability into what airflow is doing.
I'd like to be able to observe things about dag runs and task
instances, and to send metrics to time series databases
(possibly extending the existing airf