[ https://issues.apache.org/jira/browse/SPARK-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14583560#comment-14583560 ]

Mridul Muralidharan commented on SPARK-8297:
--------------------------------------------


I am not sure why the Spark application would hang if you pull the cable on a 
worker. What exactly was the behavior you observed?

Note that we observe YARN detecting the missing node, deallocating all 
containers for all applications on that node, and notifying the corresponding 
application masters.
In spark-yarn, we clean up the YARN-specific state for those containers; we 
just do not propagate that information to the scheduler backend (which, for 
example, the spark-mesos scheduler does).

To elaborate, the exact scenario we encounter fairly regularly (about once a 
month) is like this:

We run the Spark application on 600+ nodes of a much larger cluster, and 
during the course of the job one or more nodes will fail [1].
The job is typically a cascade of maps followed by reduces, so other than the 
initial tasks, pretty much everything runs at process-local locality level 
(for the maps).
When an executor goes MIA (does not respond to ping, etc. [2]), shuffle fetches 
will fail, causing repeated re-execution attempts and an eventual application 
hang [3].



[1] Not all node failures trigger this issue, which makes reproducing it 
unpredictable; hence the need to rely on logs.

[2] In our specific app, the heartbeat timeouts are increased due to GC issues 
we see in Spark, where executors are repeatedly killed just because they were 
slow to respond to heartbeats (see the config sketch after [3]).

[3] We have high thresholds for task and application failures, since it is a 
long-running job and there are usually frequent transient failures 
(particularly due to YARN aggressively policing the resource limits).
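
For reference, the kind of tuning [2] and [3] refer to looks roughly like the 
following. The values are illustrative rather than the ones from our job, and 
which settings apply (and their formats) varies with the Spark version:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  // [2] Tolerate long GC pauses before an executor is declared dead.
  .set("spark.executor.heartbeatInterval", "30s")
  .set("spark.network.timeout", "600s")
  // [3] Raise the failure threshold for a long-running job with frequent
  // transient task failures.
  .set("spark.task.maxFailures", "20")
  // Extra off-heap headroom (in MB) so YARN is less likely to kill
  // containers for exceeding their memory limit.
  .set("spark.yarn.executor.memoryOverhead", "2048")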



> Scheduler backend is not notified in case node fails in YARN
> ------------------------------------------------------------
>
>                 Key: SPARK-8297
>                 URL: https://issues.apache.org/jira/browse/SPARK-8297
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.2, 1.3.1, 1.4.1, 1.5.0
>         Environment: Spark on yarn - both client and cluster mode.
>            Reporter: Mridul Muralidharan
>            Priority: Critical
>
> When a node crashes, YARN detects the failure and notifies Spark - but this 
> information is not propagated to the scheduler backend (unlike in mesos mode, 
> for example).
> This results in repeated re-execution of stages (due to FetchFailedException 
> on the shuffle side) and eventually in application failure.


