[ 
https://issues.apache.org/jira/browse/SPARK-30325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haiyangyu updated SPARK-30325:
------------------------------
    Attachment: image-2019-12-21-17-11-38-565.png

> Stage retry and executor crashed cause app hung up forever
> ----------------------------------------------------------
>
>                 Key: SPARK-30325
>                 URL: https://issues.apache.org/jira/browse/SPARK-30325
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0, 2.4.4
>            Reporter: haiyangyu
>            Priority: Major
>         Attachments: image-2019-12-21-17-11-38-565.png
>
>
> Kill tasks which succeeded in the origin stage when the new retry stage has 
> already started the same task and it hasn't finished yet.
> This can reduce stage run time and resource cost, and, most importantly, it 
> avoids a bug which can cause the app to hang.
> The bug occurs in the corner case as follows:
> 1. The stage hits a fetchFailed while some tasks haven't finished, so the 
> scheduler resubmits a new retry stage containing those unfinished tasks.
> 2. An unfinished task from the origin stage finishes while the same task on 
> the new retry stage is still running, so the task's partition is marked as 
> successful on the new retry stage.
> 3. The executor running those 'successful' tasks crashes, which causes the 
> taskSetManager to run executorLost and reschedule the tasks on that executor. 
> Because those 'successful' tasks are not actually finished, copiesRunning is 
> decremented twice and ends up at -1.
> 4. 'dequeueTaskFromList' only reschedules a task whose copiesRunning equals 
> 0; since it is now -1, the task can never be rescheduled and the app hangs 
> forever. A simplified sketch of this bookkeeping follows below.
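> Below is a minimal, self-contained sketch of the copiesRunning bookkeeping 
> described above. It is not the real TaskSetManager code; the class and 
> method names are simplified stand-ins. The point is that two decrements for 
> the same not-really-finished task drive copiesRunning to -1, after which a 
> dequeue check of copiesRunning == 0 can never pass again.
> {code:scala}
> import scala.collection.mutable
>
> // Toy model of the bookkeeping only; not Spark's actual scheduler code.
> class ToyTaskSetManager(numTasks: Int) {
>   val copiesRunning: Array[Int] = Array.fill(numTasks)(0)
>   val successful: Array[Boolean] = Array.fill(numTasks)(false)
>   private val pending = mutable.Queue[Int]()
>
>   // The retry stage launches a copy of the task.
>   def launchTask(index: Int): Unit = copiesRunning(index) += 1
>
>   // The origin-stage copy finishes, so the partition is marked successful
>   // on the retry stage and its running copy is counted down (1st decrement).
>   def markPartitionCompleted(index: Int): Unit = {
>     successful(index) = true
>     copiesRunning(index) -= 1
>   }
>
>   // The executor holding the still-running copy crashes; executorLost-style
>   // handling counts the copy down again (2nd decrement) and re-enqueues it.
>   def executorLost(index: Int): Unit = {
>     copiesRunning(index) -= 1
>     successful(index) = false
>     pending.enqueue(index)
>   }
>
>   // dequeueTaskFromList-style check: only tasks with copiesRunning == 0 are
>   // eligible, so a task stuck at -1 is never rescheduled.
>   def dequeueTask(): Option[Int] =
>     pending.dequeueFirst(i => copiesRunning(i) == 0 && !successful(i))
> }
>
> object CopiesRunningHang extends App {
>   val m = new ToyTaskSetManager(numTasks = 1)
>   m.launchTask(0)              // step 1: retry stage runs task 0
>   m.markPartitionCompleted(0)  // step 2: origin-stage copy finishes first
>   m.executorLost(0)            // step 3: executor with the copy crashes
>   println(m.copiesRunning(0))  // -1
>   println(m.dequeueTask())     // None -> task never rescheduled, app hangs
> }
> {code}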



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
