Re: Task is Stuck in Up_For_Retry

2018-10-03 Thread raman gupta
Hi All, On further investigation we found that this issue is reproduced if the task retry_delay is set to a very low number, say 10 seconds. All the retries of a task get the same key from the executor (LocalExecutor), which is composed of dag id, task id and execution date. So we are hitting the scenario where
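A minimal sketch (not Airflow source) of the collision described above, using made-up dag/task ids: a key built from (dag_id, task_id, execution_date) is identical for every retry of the same task instance, so with a 10-second retry_delay the retry can land on the same key the executor is still tracking.

```python
from datetime import datetime

# Illustration only: the executor identifies a task instance by
# (dag_id, task_id, execution_date). A retry of the same task instance
# reuses the same execution_date, so the key does not change.
def make_key(dag_id, task_id, execution_date):
    return (dag_id, task_id, execution_date)

execution_date = datetime(2018, 8, 16, 0, 0)

first_attempt_key = make_key("my_dag", "my_task", execution_date)
retry_attempt_key = make_key("my_dag", "my_task", execution_date)

# Both attempts collide on the same key; with a very short retry_delay the
# retry can be queued while the executor still holds state under this key.
assert first_attempt_key == retry_attempt_key
print(first_attempt_key)
```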

Re: Task is Stuck in Up_For_Retry

2018-08-24 Thread ramandumcs
Hi All, Any pointers on this would be helpful. We have added extra logs and are trying a few things to get to the root cause. But we are getting logs like "Task is not able to run", and we are not getting any resource-usage-related error. Thanks, Raman Gupta On 2018/08/21 16:46:56, ramandu...@gmail.

Re: Task is Stuck in Up_For_Retry

2018-08-21 Thread ramandumcs
Hi All, As per the http://docs.sqlalchemy.org/en/latest/core/connections.html link, the DB engine is not portable across process boundaries: "For a multiple-process application that uses the os.fork system call, or for example the Python multiprocessing module, it’s usually required that a separate Engine
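Following the SQLAlchemy documentation quoted above, here is a hedged sketch of the per-process engine handling it recommends; the connection string and worker function are made up for illustration.

```python
import os
from sqlalchemy import create_engine, text

# Hypothetical connection string, for illustration only.
engine = create_engine("postgresql://user:pass@localhost/airflow")

def child_worker():
    # Per the SQLAlchemy docs: a forked child must not reuse connections
    # checked out by the parent. Disposing the inherited pool makes the
    # child open fresh connections of its own.
    engine.dispose()
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))

pid = os.fork()
if pid == 0:            # child process
    child_worker()
    os._exit(0)
else:                   # parent process
    os.waitpid(pid, 0)
```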

Re: Task is Stuck in Up_For_Retry

2018-08-21 Thread raman gupta
One possibility is the unavailability of a session while calling the self.task_instance._check_and_change_state_before_execution function. (The session is provided via the @provide_session decorator.) On Tue, Aug 21, 2018 at 7:09 PM vardangupta...@gmail.com < vardangupta...@gmail.com> wrote: > Is there any poss
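For context on the @provide_session point, a simplified sketch of that decorator pattern (not the Airflow implementation): a session is created only when the caller does not pass one, so if the session factory is unusable in the worker process, the wrapped state-change call is where that would show up. The session factory and function names below are illustrative.

```python
import functools
from contextlib import closing
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Illustrative session factory; Airflow builds its own in airflow.settings.
engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

def provide_session(func):
    """Simplified sketch: create a session only when the caller did not
    supply one, and commit/close it afterwards."""
    @functools.wraps(func)
    def wrapper(*args, session=None, **kwargs):
        if session is not None:
            return func(*args, session=session, **kwargs)
        with closing(Session()) as new_session:
            result = func(*args, session=new_session, **kwargs)
            new_session.commit()
            return result
    return wrapper

@provide_session
def check_and_change_state(task_id, session=None):
    # In the real code the task instance state would be read and updated
    # here; this just shows that a session is available to the function.
    return session is not None

print(check_and_change_state("my_task"))  # True
```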

Re: Task is Stuck in Up_For_Retry

2018-08-21 Thread vardanguptacse
Is there any possibility that, on the call to the function _check_and_change_state_before_execution at https://github.com/apache/incubator-airflow/blob/v1-9-stable/airflow/jobs.py#L2500, this method is not actually being called https://github.com/apache/incubator-airflow/blob/v1-9-stable/airflow/models.
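A heavily hedged sketch of the "check then change state" guard being discussed, with made-up class and state names rather than Airflow's: if this guard never runs, or refuses the run, the task instance keeps its previous state.

```python
# Sketch of the guard pattern only; this is not the Airflow 1.9 implementation.

RUNNABLE_STATES = {"queued", "scheduled", "up_for_retry"}

class FakeTaskInstance:
    def __init__(self, state):
        self.state = state

    def refresh_from_db(self):
        # In the real code this re-reads the row from the metadata DB.
        pass

    def check_and_change_state_before_execution(self):
        self.refresh_from_db()
        if self.state not in RUNNABLE_STATES:
            # The run is refused; the caller sees False and the state is unchanged.
            return False
        self.state = "running"
        return True

ti = FakeTaskInstance("up_for_retry")
print(ti.check_and_change_state_before_execution(), ti.state)  # True running
```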

Re: Task is Stuck in Up_For_Retry

2018-08-17 Thread ramandumcs
We are getting logs like
{local_executor.py:43} INFO - LocalWorker running airflow run
{models.py:1595} ERROR - Executor reports task instance %s finished (%s) although the task says its %s. Was the task killed externally?
{models.py:1616} INFO - Marking task as UP_FOR_RETRY
It seems that
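A hedged sketch of the reconciliation step that the quoted ERROR line points at (illustrative function, not Airflow source): the scheduler-side check compares the state reported by the executor with the state stored on the task instance and treats a mismatch as a possible external kill.

```python
# Illustration only: compare the state the executor reports for a finished
# attempt with the state stored on the task instance, and log the kind of
# mismatch the ERROR line above describes.

def reconcile(executor_state, ti_state):
    if executor_state in ("success", "failed") and ti_state in ("queued", "running"):
        print(
            "Executor reports task instance finished (%s) although the task "
            "says its %s. Was the task killed externally?" % (executor_state, ti_state)
        )
        return "up_for_retry"   # in this sketch, a mismatch sends the attempt back for retry
    return ti_state

print(reconcile("success", "queued"))   # mismatch  -> up_for_retry
print(reconcile("success", "success"))  # consistent -> success
```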

Re: Task is Stuck in Up_For_Retry

2018-08-17 Thread Matthias Huschle
Hi Raman, Does it happen only occasionally, or can it be easily reproduced? What happens if you start it with "airflow run" or "airflow test"? What is in the logs about it? What is your user process limit ("ulimit -u") on that machine? 2018-08-17 15:39 GMT+02:00 ramandu...@gmail.com : > Thanks
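For the "ulimit -u" question, a small Python equivalent using the standard resource module (Unix only), in case checking from within the Airflow environment is easier:

```python
import resource

# Roughly equivalent to "ulimit -u": the soft/hard limits on the number of
# processes the current user may create. A low soft limit can prevent the
# LocalExecutor from forking new worker processes.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("process limit (soft, hard):", soft, hard)
```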

Re: Task is Stuck in Up_For_Retry

2018-08-17 Thread ramandumcs
Thanks Taylor, We are getting this issue even after a restart. We are observing that the task instance state is transitioned from scheduled->queued->up_for_retry and the dag gets stuck in the up_for_retry state. Behind the scenes the executor keeps on retrying the dag's task, exceeding the max retry limit. In norm
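A hedged sketch of the retry-eligibility check the report suggests is being bypassed (illustrative names, not Airflow source): with retries=2, only the first two retry attempts should be eligible.

```python
# Sketch only. The report is that a task configured with retries=2 keeps
# getting re-queued well past that limit, i.e. a check like this one is
# effectively not being honoured.

def is_eligible_to_retry(try_number, retries):
    # try_number counts retries already attempted; allow at most `retries` of them.
    return try_number <= retries

retries = 2
for try_number in range(1, 6):
    print(try_number, is_eligible_to_retry(try_number, retries))
# Retry attempts 1-2 are eligible; attempts 3+ should not be retried again.
```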

Re: Task is Stuck in Up_For_Retry

2018-08-16 Thread Taylor Edmiston
Does a scheduler restart make a difference? *Taylor Edmiston*

Task is Stuck in Up_For_Retry

2018-08-16 Thread ramandumcs
Hi All, We are using airflow 1.9 with Local Executor mode. Intermittently we are observing that tasks are getting stuck in "up_for_retry" mode and are getting retried again and again, exceeding their configured max retries count. For example, we have configured max retries as 2 but the task is retried 15 times
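For reference, a minimal Airflow 1.9-style DAG showing the retry configuration described here (dag and task names are illustrative). Per the 2018-10-03 follow-up at the top of this thread, a very short retry_delay such as 10 seconds reproduced the issue, hence the longer delay used in this sketch.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.9-style import

default_args = {
    "owner": "airflow",
    "start_date": datetime(2018, 8, 1),
    "retries": 2,                          # the configured max retries from the report
    "retry_delay": timedelta(minutes=5),   # very short delays (e.g. 10s) reproduced the issue
}

dag = DAG(
    dag_id="example_retry_dag",            # illustrative name
    default_args=default_args,
    schedule_interval="@daily",
)

task = BashOperator(
    task_id="example_task",
    bash_command="exit 1",                 # force a failure so retries kick in
    dag=dag,
)
```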