For completeness, there is also support for a DAG run timeout, which is yet
another mechanism; I haven't used it myself, though I believe it was
introduced in 1.7.x.
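As far as I can tell it is just an argument on the DAG itself; a rough
sketch, with the dag id, dates, and timeout value made up:

from datetime import datetime, timedelta
from airflow import DAG

# dagrun_timeout is meant to time out the DAG run as a whole;
# everything below is a placeholder, not a real pipeline.
dag = DAG(
    dag_id='example_dag',
    start_date=datetime(2016, 10, 1),
    schedule_interval='@daily',
    dagrun_timeout=timedelta(hours=4))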
-s
On Fri, Oct 28, 2016 at 9:42 AM, siddharth anand wrote:
1.6.x had an infinite retry problem. If you specified a retry count greater
than 1, the tasks would get retried ad infinitum.
This was fixed in 1.7.x (1.7.1.3 is the most recent release).
We have been using *execution_timeout* for over a year.
build_sender_models_spark_job = BashOperator
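Spelled out, such a task looks roughly like this (the dag, command, and
timeout value below are placeholders rather than our real job):

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG('sender_models', start_date=datetime(2016, 10, 1))

# execution_timeout is meant to fail the task once it has been running
# longer than the given duration; command and duration are placeholders.
build_sender_models_spark_job = BashOperator(
    task_id='build_sender_models_spark_job',
    bash_command='spark-submit build_sender_models.py',
    execution_timeout=timedelta(hours=2),
    dag=dag)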
Hello,
I'm having a big, showstopping problem on my airflow installation. When
a task reaches its execution_timeout, I can see the error message in
the task's log, but it never actually fails the task, leaving it in a
running state forever. This is true of any task that has an
execution_timeout set.
I don't think so.
As I understand it, the way the ExternalTaskSensor works is a simple,
pull-based dependency: it polls the metadata database until its associated
'external' TaskInstance has a state it expects (e.g. "SUCCESS"), so from
the ExternalTaskSensor's point of view a cleared 'external' TaskInstance
just looks like one that hasn't reached that state yet.
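For illustration, a minimal sensor along those lines (the dag ids, task
ids, and dates below are made up):

from datetime import datetime
from airflow import DAG
from airflow.operators.sensors import ExternalTaskSensor

dag = DAG('downstream_dag', start_date=datetime(2016, 10, 1))

# Repeatedly polls the metadata DB until the named task in the other DAG
# reaches an allowed state (success by default) for the same execution date.
wait_for_upstream = ExternalTaskSensor(
    task_id='wait_for_upstream',
    external_dag_id='upstream_dag',
    external_task_id='upstream_task',
    poke_interval=60,
    dag=dag)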
You can set up SSL and encryption for all these connections. For Celery you need
a broker that supports encryption (e.g. RabbitMQ), and for the metadata DB both
Postgres and MySQL support SSL connections.
I don't think a change in Airflow is needed for this, only configuration.
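For example, the relevant airflow.cfg entries would look something like this
(hosts, credentials, and ports are made up; check your broker and database
docs for the exact TLS options):

# airflow.cfg -- all hosts, credentials, and options below are placeholders
[core]
# Postgres metadata DB over SSL
sql_alchemy_conn = postgresql+psycopg2://airflow:secret@db-host:5432/airflow?sslmode=require

[celery]
# RabbitMQ broker over TLS (amqps)
broker_url = amqps://airflow:secret@rabbit-host:5671/airflow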
Bolke