potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-2242284389
No. There is nothing against it.
potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1958098338
> @Taragolis was wondering if you have a patch/workaround for this issue?

Airflow 2.8.2 (RC likely tomorrow) should have the fix from #35653 applied - if your problem @kurtq…
kurtqq commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1951598471
@Taragolis was wondering if you have a patch/workaround for this issue?
xmariachi commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1824683187
> @xmariachi Hi, I also use MWAA and have come across this behaviour. What is the status of the task when it gets stuck? We had the task in a queued state, and noticed the sqs ag…
simonprydden commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-181774
@xmariachi Hi, I also use MWAA and have come across this behaviour. What is the status of the task when it gets stuck? We had the task in a queued state, and noticed the sqs a…
xmariachi commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1806078456
Thank you guys.
I don't have enough knowledge of Airflow internals to chip in much, but your solution sounds sensible.
Taragolis commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1802022263
> It happened a few times in my life that a process did not die after SIGKILL, and that was at times when the whole OS/installation got heavily broken)

![giphy](…)
Taragolis commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1802016155
Seems we know the nature of the problem and have a plan for how to resolve it, so let me pick up this issue then.
potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1801988073
> That is why I suggest also adding a grace period before kill, maybe even configurable

Oh absolutely. The signal "dance" I was mentioning should involve several grace periods. U…
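For illustration, a minimal sketch of such an escalation with a configurable grace period (the function names, default values, and poll interval here are invented for this example, not Airflow's actual API):

```python
import os
import signal
import time


def _alive(pid: int) -> bool:
    """Return True if the process still exists (signal 0 sends nothing)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    return True


def terminate_with_grace(pid: int, term_grace: float = 30.0) -> None:
    """Hypothetical escalation: SIGTERM first, SIGKILL after the grace period."""
    os.kill(pid, signal.SIGTERM)          # polite: handlers can run cleanup
    deadline = time.monotonic() + term_grace
    while time.monotonic() < deadline:
        if not _alive(pid):
            return                        # exited within the grace period
        time.sleep(0.5)
    if _alive(pid):
        os.kill(pid, signal.SIGKILL)      # last resort: cannot be caught or ignored
```

SIGTERM gives the task a chance to clean up; SIGKILL is the final step precisely because it cannot be handled or ignored.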
potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1801908221
Yep, @Taragolis. That would be my idea. It comes from the assumption that in order to REALLY be able to handle all timeouts you need to do it from a separate process - because as y…
Taragolis commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1801897704
I guess this should be implemented on top of the current implementation? Correct me if I am wrong.
1. Try to raise an AirflowTaskTimeout exception
2. Heartbeat checker also ch…
potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1801807403
How about another option. I think we already use (depends on the runner - it could also be spawned and cgroups might be involved - but generally it's the default) fork local task process exe…
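A minimal sketch of how the parent side of such a fork model could enforce a timeout (names, timings, and structure are made up for illustration; this is not Airflow's actual task-runner code):

```python
import os
import signal
import time


def run_with_timeout(task, timeout: float) -> int:
    """Run `task` in a forked child; the parent enforces the timeout.

    Because the parent owns the clock, even a child hung inside a C
    call that never returns to the Python interpreter can be killed.
    """
    pid = os.fork()
    if pid == 0:                          # child: execute the payload and exit
        task()
        os._exit(0)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        done, status = os.waitpid(pid, os.WNOHANG)
        if done:                          # child exited on its own
            return os.waitstatus_to_exitcode(status)
        time.sleep(0.5)
    os.kill(pid, signal.SIGKILL)          # timeout: hard-kill the child
    _, status = os.waitpid(pid, 0)        # reap it to avoid a zombie
    return os.waitstatus_to_exitcode(status)   # -9 when SIGKILLed


if __name__ == "__main__":
    # stand-in for a task stuck forever, like the hung consumer in this issue
    print(run_with_timeout(lambda: time.sleep(3600), timeout=2.0))
```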
Taragolis commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1801592051
I think yes, we need an additional escalation level for the execution timeout. The problem with the current implementation is that we raise an error in a handler function and have no idea…
potiuk commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1800106090
This is likely something we cannot do anything about - you need to raise it with the Kafka developers. The problem is that if you have a C library that hangs and does not periodically check…
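A small sketch of the limitation being described, assuming the usual SIGALRM handler-raises-exception timeout pattern similar in spirit to Airflow's (an illustration, not Airflow's actual code):

```python
import signal
import time


class TaskTimeout(Exception):
    """Stand-in for AirflowTaskTimeout."""


def _on_alarm(signum, frame):
    # CPython only runs this handler between bytecode instructions.
    # If the thread is blocked inside a C call that never returns to
    # the interpreter (a hung C-level call in a Kafka client, for
    # example), the exception below is never raised and the task
    # hangs forever.
    raise TaskTimeout("execution_timeout exceeded")


signal.signal(signal.SIGALRM, _on_alarm)
signal.setitimer(signal.ITIMER_REAL, 1.0)    # deliver SIGALRM in 1 second
try:
    while True:
        time.sleep(0.1)   # returns to the interpreter, so it IS interruptible
except TaskTimeout:
    print("interrupted as expected")
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer
```

Swap the `time.sleep(0.1)` loop for a C call that never returns control to the interpreter: the OS still delivers the signal, but the Python handler (and hence the exception) is deferred until control comes back to Python, which may be never.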
Taragolis commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1799121097
In general we accept bugs which can be reproduced in the open-source implementation on the latest stable version of Airflow; for that purpose you could try to use the provided Docker Compose…
xmariachi commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1798826822
Thanks @jscheffl, will do. However, this is AWS MWAA - is it feasible to do that there as well?
jscheffl commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1796960451
Hi @xmariachi - as debugging your reported problem is hard and it can be caused by multiple circumstances, also outside of Airflow's control, can you check when the node/POD ha…
boring-cyborg[bot] commented on issue #35474:
URL: https://github.com/apache/airflow/issues/35474#issuecomment-1794383797
Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise a PR to address this issue, please do so; no need to wait for ap…
xmariachi opened a new issue, #35474:
URL: https://github.com/apache/airflow/issues/35474
### Apache Airflow version

Other Airflow 2 version (please specify below)

### What happened

Version: 2.5.1
Run env: MWAA on AWS
Summary: Once every ~500-1000 runs approxima…