GitHub user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8007#discussion_r37355322
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala ---
    @@ -91,6 +92,68 @@ private[spark] abstract class YarnSchedulerBackend(
       }
     
       /**
    +   * Override the DriverEndpoint to add extra logic for the case when an executor is
    +   * disconnected. We should check with the cluster manager to determine whether the executor
    +   * was lost because YARN force-killed it due to preemption.
    +   */
    +  private class YarnDriverEndpoint(rpcEnv: RpcEnv, sparkProperties: ArrayBuffer[(String, String)])
    +      extends DriverEndpoint(rpcEnv, sparkProperties) {
    +
    +    private val pendingDisconnectedExecutors = new HashSet[String]
    +    private val handleDisconnectedExecutorThreadPool =
    +      ThreadUtils.newDaemonCachedThreadPool("yarn-driver-handle-lost-executor-thread-pool")
    +
    +    /**
    +     * When onDisconnected is received at the driver endpoint, the superclass DriverEndpoint
    +     * handles it by assuming the executor was lost for a bad reason and removes the executor
    +     * immediately.
    +     *
    +     * In YARN's case, however, it is crucial to ask the application master why the executor
    +     * exited. In particular, the executor may have exited because it was preempted. If the
    +     * executor "exited normally" according to the application master, then we pass that
    +     * information down to the TaskSetManager to inform it that tasks on that lost executor
    +     * should not count towards a job failure.
    +     */
    +    override def onDisconnected(rpcAddress: RpcAddress): Unit = {
    +      addressToExecutorId.get(rpcAddress).foreach({ executorId =>
    +        // onDisconnected could be fired multiple times from the same executor while we're
    +        // asynchronously contacting the AM, so keep track of the executors we're trying to
    +        // find loss reasons for and don't duplicate the work.
    +        if (!pendingDisconnectedExecutors.contains(executorId)) {
    +          pendingDisconnectedExecutors.add(executorId)
    +          handleDisconnectedExecutorThreadPool.submit(new Runnable() {
    +            override def run(): Unit = {
    +              // Check for the loss reason and pass it down to the driver endpoint.
    +              val executorLossReason =
    +                yarnSchedulerEndpoint.askWithRetry[Option[ExecutorLossReason]](
    +                  GetExecutorLossReason(executorId))
    +              executorLossReason match {
    +                case Some(reason) =>
    +                  driverEndpoint.askWithRetry[Boolean](RemoveExecutor(executorId, reason))
    +                case None =>
    +                  logWarning(s"Attempted to get executor loss reason" +
    +                    s" for $rpcAddress but got no response. Marking as slave lost.")
    +                  driverEndpoint.askWithRetry[Boolean](RemoveExecutor(executorId, SlaveLost()))
    --- End diff --
    
    Definitely don't think that's thread-safe. It touches things like
    addressToExecutorId, which, as we can see in the onDisconnected method
    itself, is accessed in the event loop.
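    
    For illustration, here's a minimal sketch of one thread-safe alternative:
    confine all the mutable state to a single event-loop thread (standing in
    for the RpcEndpoint message loop) and have the slow AM lookup hop back
    onto that thread when it completes. All names here (EventLoopConfinement,
    askAmForLossReason, removeExecutor) are hypothetical, not Spark's actual
    API:
    
    ```scala
    import java.util.concurrent.{ExecutorService, Executors}
    import scala.collection.mutable
    
    object EventLoopConfinement {
    
      // Only ever read or written from the `eventLoop` thread.
      private val addressToExecutorId = mutable.HashMap[String, String]()
      private val pendingDisconnectedExecutors = mutable.HashSet[String]()
    
      // Single-threaded executor standing in for the endpoint's event loop.
      private val eventLoop: ExecutorService = Executors.newSingleThreadExecutor()
      private val amPool: ExecutorService = Executors.newCachedThreadPool()
    
      def onDisconnected(rpcAddress: String): Unit = eventLoop.submit(new Runnable {
        override def run(): Unit = {
          addressToExecutorId.get(rpcAddress).foreach { executorId =>
            // Dedup repeated disconnects; safe because we're on the event loop.
            if (pendingDisconnectedExecutors.add(executorId)) {
              amPool.submit(new Runnable {
                override def run(): Unit = {
                  val reason = askAmForLossReason(executorId) // slow, blocking call
                  // Hop back onto the event loop instead of touching shared
                  // state from this pool thread.
                  eventLoop.submit(new Runnable {
                    override def run(): Unit = removeExecutor(executorId, reason)
                  })
                }
              })
            }
          }
        }
      })
    
      private def askAmForLossReason(executorId: String): String = "preempted" // stub
    
      private def removeExecutor(executorId: String, reason: String): Unit = {
        pendingDisconnectedExecutors.remove(executorId)
        println(s"Removing executor $executorId: $reason")
      }
    }
    ```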

