When I had a similar issue, it turned out that the way the task(s) were
written, they'd RAPIDLY open a large number of new RDS connections.
AWS RDS - particularly if you're using the cluster endpoint - performs a
DNS lookup (4 hops if I recall correctly) before your connection request
actually reaches the database.
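A minimal sketch of one way to avoid that churn is below - not our actual code; it just assumes SQLAlchemy, and the URL, helper name and pool sizes are made up. The idea is that the pool, not the task code, decides when a brand-new connection (and that DNS resolution) happens:

# Reuse one pooled SQLAlchemy engine per worker process instead of opening a
# brand-new RDS connection (and paying the cluster-endpoint DNS resolution)
# on every call. Everything here is illustrative.
from functools import lru_cache

from sqlalchemy import create_engine, text


@lru_cache(maxsize=1)
def get_engine():
    # One engine per process; its pool hands out existing connections
    # rather than dialing the RDS cluster endpoint each time.
    return create_engine(
        "postgresql+psycopg2://user:password@my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com/mydb",
        pool_size=5,
        max_overflow=2,
        pool_pre_ping=True,  # recycle stale connections instead of failing mid-task
        pool_recycle=300,    # seconds; keep below any idle timeout in front of RDS
    )


def run_query(sql):
    # Borrow a connection from the pool and return it when done.
    with get_engine().connect() as conn:
        return conn.execute(text(sql)).fetchall()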
As a dev manager, we generally ban the use of assertions or similar constructs in
any language.
If you, as an engineer, think you should check a value... then check it and
take an intentional path afterwards. This is a known path; it can be tested
(or, more likely, be seen as never tested because it's so rarely hit).
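A tiny illustration of the distinction (invented example; the point is that the second version has a path you can actually exercise in a test, and it doesn't vanish under python -O):

# Discouraged: stripped out under `python -O`, and the failure path is never exercised.
def charge(amount):
    assert amount > 0
    ...

# Preferred: check the value and take an intentional, testable path.
def charge_checked(amount):
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount!r}")
    ...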
Could we ship a standard set of maintenance DAGs in the examples?
Then it’s easy for most people to not deploy them, but they’re part of the
intro package and could be tested?
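Something like the sketch below is what I'm imagining - just an example, assuming Airflow 2-style imports, with a placeholder cleanup command and retention window:

# Hypothetical maintenance DAG that could ship with the examples.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_maintenance_cleanup",
    schedule_interval="@daily",
    start_date=datetime(2019, 1, 1),
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    clean_old_logs = BashOperator(
        task_id="clean_old_logs",
        # Placeholder: prune task logs older than 30 days.
        bash_command="find $AIRFLOW_HOME/logs -type f -mtime +30 -delete",
    )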
> On Aug 23, 2019, at 4:14 PM, Aizhamal Nurmamat kyzy wrote:
My answer is like his.
It still requires all devs making DAGs to use the same mechanism, but it’s
really easy (assuming you’re not already using those hooks).
We have a single Python utility function that is set as the “on_success_callback”
AND as the failure callback. Set it in “default_args” once for the DAG.
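Roughly this shape - a sketch only, assuming Airflow 2-style imports, and notify_status is an invented name for our utility:

# One utility function used as both the success and the failure callback,
# set once in default_args so every task in the DAG picks it up.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator


def notify_status(context):
    # Airflow passes the task context to callbacks; push it to whatever
    # tracks your state (DB row, metrics, Slack, ...).
    ti = context["task_instance"]
    print(f"{ti.dag_id}.{ti.task_id} finished in state {ti.state}")


default_args = {
    "on_success_callback": notify_status,
    "on_failure_callback": notify_status,
}

with DAG(
    dag_id="example_callbacks",
    schedule_interval=None,
    start_date=datetime(2019, 1, 1),
    default_args=default_args,
) as dag:
    DummyOperator(task_id="do_work")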
My quick guess - both running with the LocalExecutor? That runs all Airflow
processes in one... so you’d have 2 schedulers.
It’s perfectly acceptable to run Airflow across a couple of EC2 instances, but
you have to select which parts of the stack run on which one, and support
multiple Airflow configs in some way.
I’d agree with Ash as well - and the externally triggered DAG model works well
and still allows you to use Airflow for “normal” scheduled tasks.
Admittedly we struggled with this for a while, working really hard to use
schedules and XCom etc. This is really “state management” in my opinion, an
This is actually the majority of our Airflow work.
We use this pattern:
A sensor pings the API (fairly quickly, in a DAG that’s constrained to only run
one instance, every minute).
If the sensor gets a valid response, the next task is a custom operator that
extends Trigger, builds up the DagRun context, and triggers the target DAG.
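A simplified sketch of that pattern - using the stock HttpSensor and TriggerDagRunOperator instead of our custom subclass, assuming Airflow 2-style imports; the endpoint, connection id and conf payload are made up:

from datetime import datetime

from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.providers.http.sensors.http import HttpSensor

with DAG(
    dag_id="poll_and_trigger",
    schedule_interval="* * * * *",  # every minute
    max_active_runs=1,              # only one polling run at a time
    start_date=datetime(2019, 1, 1),
    catchup=False,
) as dag:
    # Ping the upstream API until it says there is work to do.
    wait_for_api = HttpSensor(
        task_id="wait_for_api",
        http_conn_id="upstream_api",  # assumed connection
        endpoint="jobs/ready",        # assumed endpoint
        poke_interval=10,
        timeout=50,
        mode="reschedule",
    )

    # Stand-in for the custom operator: pass whatever context the target needs via conf.
    trigger_target = TriggerDagRunOperator(
        task_id="trigger_target",
        trigger_dag_id="target_dag",
        conf={"source": "poll_and_trigger"},
    )

    wait_for_api >> trigger_target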
You could have 2 “trigger DAGs” on different schedules triggering the one that
you’re interested in.
If you needed task selection in the target based on schedule, you could
branch around tasks based on the DagRun.
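Roughly like this in the target DAG - a sketch assuming Airflow 2-style imports; the "schedule" key in dag_run.conf is just a convention the trigger DAGs would have to agree on, not anything built in:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import BranchPythonOperator


def pick_path(**context):
    # Whichever trigger DAG fired this run put its marker into dag_run.conf,
    # e.g. the hourly one passes {"schedule": "hourly"}.
    conf = context["dag_run"].conf or {}
    return "hourly_only" if conf.get("schedule") == "hourly" else "daily_only"


with DAG(
    dag_id="target_dag",
    schedule_interval=None,  # only ever externally triggered
    start_date=datetime(2019, 1, 1),
) as dag:
    branch = BranchPythonOperator(task_id="branch", python_callable=pick_path)
    hourly_only = DummyOperator(task_id="hourly_only")
    daily_only = DummyOperator(task_id="daily_only")

    branch >> [hourly_only, daily_only]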
> On Jan 13, 2019, at 9
I’m not sure basing the package structure on whether major providers will fund
development is the right approach. My $.02
> On Jan 7, 2019, at 3:44 PM, Tim Swast wrote:
>
> In general it’s easier for cloud providers to fund development of operators