trlopes1974 commented on issue #41276:
URL: https://github.com/apache/airflow/issues/41276#issuecomment-2272228174

   There is something messed up.
   Today I had a filesystem sensor whose Airflow task failed with a timeout,
   yet its Celery task was reported as successful...
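
   For context, here is a minimal sketch (the DAG id and file path are
   hypothetical) of the kind of filesystem sensor involved, with an explicit
   sensor timeout:

       from datetime import datetime

       from airflow import DAG
       from airflow.sensors.filesystem import FileSensor

       with DAG(
           dag_id="fs_sensor_example",  # hypothetical DAG id
           start_date=datetime(2024, 1, 1),
           schedule=None,
           catchup=False,
       ) as dag:
           wait_for_file = FileSensor(
               task_id="wait_for_file",
               filepath="/data/incoming/ready.flag",  # hypothetical path
               poke_interval=60,  # poke once a minute
               timeout=600,       # sensor task fails with a timeout after 10 minutes
           )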
   
   On Tuesday, 6/08/2024, 20:15, scaoupgrade ***@***.***> wrote:
   
   > Maybe; it makes some sense, as we do not have that setting in the
   > configuration. But what is bothering me is WHY: why did it time out after
   > being queued? We have no exhaustion anywhere, not in CPU, memory, pool
   > slots, or concurrency. I'd say that at the moment we have a very lightly
   > loaded system...
   >
   > So I believe the real question is: why is the task queued but never
   > started?
   >
   > ±1 year of history, with some cleanups: [image]
   >
   > All the timeouts in our config:
   >
   >     dagbag_import_timeout = 120.0
   >     dag_file_processor_timeout = 180
   >     default_task_execution_timeout =
   >     web_server_master_timeout = 120
   >     web_server_worker_timeout = 120
   >     smtp_timeout = 30
   >     operation_timeout = 2
   >     stalled_task_timeout = 0
   >     default_timeout = 604800
   >
   > It's the same situation for me as when the incident happened the other
   > day. Workers are all online, but no task gets executed. I notice
   > something abnormal in the scheduler log when this happens: every task in
   > a DAG was repeatedly queued thousands of times within one second. It
   > looks like the scheduler gets into a strange state.
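
   Regarding "that setting" mentioned above: it presumably refers to
   [scheduler] task_queued_timeout (Airflow >= 2.6, which replaced
   stalled_task_timeout). A minimal sketch, assuming Airflow 2.6+, of how one
   could check the effective value:

       # Hedged sketch: inspect the effective queued-task timeout via the config API.
       from airflow.configuration import conf

       # [scheduler] task_queued_timeout exists in Airflow >= 2.6; 600.0 s is its
       # documented default, used here as a fallback on older versions.
       timeout = conf.getfloat("scheduler", "task_queued_timeout", fallback=600.0)
       print(f"Effective task_queued_timeout: {timeout} s")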
   

