On Fri, 16 Nov 2018 at 21:45, Abhishek Sinha wrote:
Airflow Version 1.8.2, Celery and Postgres
A task (and its downstream nodes) got executed even when its only immediate
parent was in the upstream_failed state. The trigger rule for all nodes was
set to "all_success".
This looks like a bug in version 1.8.2. Can someone help me with this?
Regards,
Found a similar JIRA raised already:
https://issues.apache.org/jira/browse/AIRFLOW-1370
Looks like the issue has been around for some time now.
Regards,
Abhishek | Infoworks.io | M: +91-9035191078
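The expected behaviour described above (a task with trigger rule
"all_success" must not run when a parent is upstream_failed) can be modelled
in a few lines. This is a toy model of the rule's semantics, not Airflow's
internal implementation:

```python
# Toy model of the "all_success" trigger rule (NOT Airflow internals):
# a task may run only when every immediate upstream task finished with
# state "success". A parent in "upstream_failed" must block the child.

def all_success_ready(upstream_states):
    """Return True only when every upstream state is 'success'."""
    return all(state == "success" for state in upstream_states)

# Parent failed upstream, so the child should be blocked:
print(all_success_ready(["upstream_failed"]))     # -> False, child blocked
print(all_success_ready(["success", "success"]))  # -> True, child may run
```

Under these semantics the behaviour reported above (the child running anyway)
would indeed be a scheduler bug rather than a configuration problem.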
On 2 November 2018 at 12:49:40 PM, Abhishek Sinha (abhis...@infoworks.io)
wrote:
Max
On 9 November 2018 at 6:39:30 AM, Abhishek Sinha (abhis...@infoworks.io)
wrote:
Hi,
I get the following error if a script inside bash operator gives out
unicode characters:
Traceback (most recent call last):
  File "/home/ec2-user/resources/python27/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File
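For what it's worth, the usual Python 2 failure mode in this situation is a
byte string of non-ASCII shell output being implicitly coerced to unicode.
A minimal sketch of decoding the bytes explicitly instead (the byte value
below is illustrative, not taken from the actual script):

```python
# -*- coding: utf-8 -*-
# Sketch: non-ASCII bytes a bash script might emit. Under Python 2,
# mixing such byte strings with unicode strings raises UnicodeDecodeError;
# decoding explicitly with a codec and a fallback policy avoids that.
raw = b"caf\xc3\xa9"  # UTF-8 encoded output

# Decode once at the boundary; errors="replace" guarantees no exception
# even if the subprocess emits bytes that are not valid UTF-8.
text = raw.decode("utf-8", errors="replace")
print(text)  # -> café
```

The same pattern works on Python 3, where subprocess output is bytes by
default and the decode step is explicit anyway.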
condition, but doesn't the stack trace prove the existence of a
race condition?
Max
On Fri, Nov 2, 2018 at 10:19 AM Abhishek Sinha
wrote:
> Max,
>
> If check+insert works correctly, then even multiple instances of scheduler
> running in parallel should not throw this error. I am not sure then
fail hard. The scheduler logic that tries to
insert the new task instance should only insert a new one if it doesn't
exist already, and isolate that check+insert inside a database transaction.
Max
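A minimal sketch of the check+insert pattern described above, using sqlite3
purely for illustration (the table and column names mirror the error message
in this thread, but this is not Airflow's actual schema or scheduler code):

```python
# Sketch: the uniqueness check and the insert happen inside one
# transaction, and a concurrent duplicate surfaces as a handled
# IntegrityError instead of crashing the caller.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_instance (
    task_id TEXT, dag_id TEXT, execution_date TEXT,
    PRIMARY KEY (task_id, dag_id, execution_date))""")

def insert_once(conn, task_id, dag_id, execution_date):
    """Insert the row if absent; report a duplicate instead of raising."""
    try:
        with conn:  # one transaction around the insert
            conn.execute(
                "INSERT INTO task_instance VALUES (?, ?, ?)",
                (task_id, dag_id, execution_date))
        return True
    except sqlite3.IntegrityError:
        return False  # row already exists: skip it, don't crash

print(insert_once(conn, "PB_BPNZ", "master_v2", "2018-09-12 03:00:37"))  # -> True
print(insert_once(conn, "PB_BPNZ", "master_v2", "2018-09-12 03:00:37"))  # -> False
```

The point of the pattern is that the database's unique constraint, not a
separate read-then-write, is the final arbiter, so two racing writers cannot
both succeed and the loser degrades gracefully.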
On Fri, Nov 2, 2018 at 5:38 AM Abhishek Sinha
wrote:
Brian,
We use the trigger_dag CLI command to trigger it manually.
Even when you have custom operators, the duplicate key error should not
happen, right? Shouldn't the combination of task_id, dag_id and execution
date be unique?
On 30 October 2018 at 10:23:27 PM, Abhishek Sinha (abhis...@infoworks.io)
wrote:
>> duplicate keys for the dag run and it would fail
>> to kick off.
>>
>> One scheduler, but we saw it repeatedly and have it noted as a thing to
>> watch out for.
>>
>> Brian
>>
>> Sent from a device with less than stellar autocorrect
>>
Attaching the scheduler crash logs as well.
https://pastebin.com/B2WEJKRB
Regards,
Abhishek Sinha | m: +919035191078 | e: abhis...@infoworks.io
On Tue, Oct 30, 2018 at 12:19 AM Abhishek Sinha
wrote:
> Max,
>
> We always trigger the DAG externally. I am not sure if there is
The stack trace seems to be pointing in that direction. I'd check that first.
It seems like it **could** be a race condition with a backfill as well,
unclear.
It's still a bug though, and the scheduler should make sure to handle this
and not raise/crash.
On Mon, Oct 29, 2018, 10:05 AM Abhishek Sinha wrote:
October 2018 at 9:30:56 PM, Maxime Beauchemin (
maximebeauche...@gmail.com) wrote:
Abhishek, are you running more than one scheduler instance at once?
Max
On Mon, Oct 29, 2018 at 8:17 AM Abhishek Sinha
wrote:
The issue is happening more frequently now. Can someone please look into
this?
On 24 September 2018 at 12:42:49 PM, Abhishek Sinha (abhis...@infoworks.io)
wrote:
Can someone please help in looking into this issue? It is critical since it
has come up in one of our production environments. Also, this issue has
appeared only once till now.
Regards,
Abhishek
> On 20-Sep-2018, at 10:18 PM, Abhishek Sinha wrote:
>
Any update on this?
Regards,
Abhishek
> On 18-Sep-2018, at 12:48 AM, Abhishek Sinha wrote:
>
> Pastebin: https://pastebin.com/K6BMTb5K <https://pastebin.com/K6BMTb5K>
>
> Regards,
>
> Abhishek
>
>> On 18-Sep-2018, at 12:31 AM, Stefan Seelmann wrote:
Pastebin: https://pastebin.com/K6BMTb5K
Regards,
Abhishek
> On 18-Sep-2018, at 12:31 AM, Stefan Seelmann wrote:
>
> On 9/17/18 8:19 PM, Abhishek Sinha wrote:
>> Any update on this?
>>
>>> Please find the scheduler error log attached.
>>>
Any update on this?
Regards,
Abhishek
> On 14-Sep-2018, at 6:09 PM, Abhishek Sinha wrote:
>
> Maxime,
>
> Please find the scheduler error log attached.
>
> Regards,
>
> Abhishek
>
> On Thu, Sep 13, 2018 at 10:07 AM Maxime Beauchemin wrote:
Maxime,
Please find the scheduler error log attached.
Regards,
Abhishek
On Thu, Sep 13, 2018 at 10:07 AM Maxime Beauchemin <
maximebeauche...@gmail.com> wrote:
> Can you share the full python stack trace?
>
> On Wed, Sep 12, 2018 at 5:31 PM Abhishek Sinha
>
Got the following error on Airflow 1.8.2 version:
duplicate key value violates unique constraint "task_instance_pkey"
DETAIL: Key (task_id, dag_id, execution_date)=(PB_BPNZ, master_v2, 2018-09-12 03:00:37) already exists.
[SQL: 'INSERT INTO task_instance (task_id, dag_id, execution_date,
fashion), possibly
> check whether the same environment variable is exported on all of them?
> This may explain why the behaviour is random in your environment.
>
> Folks please correct me if I’m wrong. Thanks.
>
>
> XD
>
> On Wed, Aug 8, 2018 at 01:04 Ab
I am trying to use the MySqlToHiveTransfer operator.
From the base_hook code, I see that there is a way to pass the connection URL
via environment variables. The variable needs to be prefixed with AIRFLOW_CONN_
followed by the connection ID.
I have tried exporting this variable, but the behaviour
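For reference, a sketch of the naming convention mentioned above. The
connection ID mysql_default and the URI contents are assumptions for
illustration; the key point (also raised later in this thread) is that the
variable must be present in the environment of every Airflow process:

```python
# Sketch of the AIRFLOW_CONN_ convention: the environment variable name
# is the prefix plus the upper-cased connection ID, and its value is a
# connection URI. Host, credentials, and the ID itself are illustrative.
import os

conn_id = "mysql_default"                       # assumed connection ID
env_var = "AIRFLOW_CONN_" + conn_id.upper()     # -> AIRFLOW_CONN_MYSQL_DEFAULT
os.environ[env_var] = "mysql://user:password@mysql-host:3306/mydb"

print(env_var)  # -> AIRFLOW_CONN_MYSQL_DEFAULT
```

If the scheduler, webserver, and Celery workers are started from different
shells or init scripts, each needs this export, which would explain behaviour
that appears random across task runs.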
Hi,
The context that is being passed to the Python callable on failure of the
task still shows the task state as running. This should ideally give
the state of the task as failed.
In the case of the success callback, the state information is correct (success).
Is there any way to get the
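A sketch of the failure-callback shape being discussed. The FakeTI stand-in
and the workaround of treating the callback invocation itself as proof of
failure are assumptions for illustration, not an official fix:

```python
# Sketch: a failure callback receives a context dict containing the
# task instance. As reported above, on 1.8.x ti.state may still read
# "running" at this point, so the callback relies on the fact that it
# fired at all rather than on the state field.

def on_failure(context):
    ti = context["task_instance"]
    observed = getattr(ti, "state", None)   # may still be "running" here
    return {
        "task_id": ti.task_id,
        "observed_state": observed,
        "effective_state": "failed",        # the callback firing implies failure
    }

class FakeTI(object):
    """Stand-in for a TaskInstance, only for this sketch."""
    task_id = "extract"
    state = "running"

print(on_failure({"task_instance": FakeTI()}))
```

The workaround costs nothing when the bug is fixed: once the context carries
the correct state, observed and effective values simply agree.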
Hi,
We are on version 1.8.0 of Airflow.
Adding new DAGs to the DAG bag folder while the services (webserver and
scheduler) are up, and executing them via the trigger_dag option, does not
work. The scheduler/webserver is not able to recognise the new DAG until the
services are restarted. Has it been fixed
Is log rotation supported via some configuration for Scheduler and
Webserver?
Regards,
Abhishek
Mob: +919035191078
Email: abhis...@infoworks.io
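As far as I know, 1.8.x has no built-in rotation setting; people typically
rotate the files externally (e.g. with logrotate), or, for logging they
control themselves, use a size-based handler from the standard library.
A sketch with illustrative paths and sizes:

```python
# Sketch: size-based rotation with the standard library. When the file
# reaches maxBytes it is rolled over, keeping up to backupCount old
# copies (scheduler.log.1, .2, ...). The path and limits are examples.
import logging
import logging.handlers
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "scheduler.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024 * 1024, backupCount=5)  # 5 files x 1 MiB

logger = logging.getLogger("rotation_demo")
logger.addHandler(handler)
logger.warning("rotating log sketch")

print(os.path.exists(log_path))  # -> True
```

For the logs Airflow writes itself, an external logrotate rule on the log
directory achieves the same effect without touching Airflow's configuration.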