agrawaldevesh commented on a change in pull request #29032:
URL: https://github.com/apache/spark/pull/29032#discussion_r457770003



##########
File path: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
##########
@@ -871,7 +871,7 @@ private[deploy] class Master(
         logInfo("Telling app of decommission executors")
         exec.application.driver.send(ExecutorUpdated(
           exec.id, ExecutorState.DECOMMISSIONED,
-          Some("worker decommissioned"), None, workerLost = false))
+          Some("worker decommissioned"), None, workerLost = true))

Review comment:
      On second thought, this is a bigger change than it looks, so I will just explain why 
`workerLost = true` is correct in a code comment. 
   
   The main problem is that the Deploy messages flow between the Master and the 
Driver, while the `ExecutorDecommissionInfo` flows between the Driver and the 
Executors. In any case, each cluster manager would need its own way of 
communicating the decommissioning metadata to the driver, and this 
`workerLost` flag can serve as the Master's internal way of doing so.
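   To make the point concrete, here is a simplified, hypothetical mirror of the Deploy-layer message in question (field types and names are trimmed for illustration and are not the exact Spark definitions): the only decommission metadata the Master can pass to the Driver today is the `workerLost` boolean riding on `ExecutorUpdated`.

```scala
// Simplified stand-ins for the Deploy-layer types discussed above.
// These are illustrative sketches, not the real Spark definitions.
object ExecutorState extends Enumeration {
  val RUNNING, DECOMMISSIONED, LOST = Value
}

// Master -> Driver update: workerLost is the lone piece of
// decommission metadata this message can carry.
case class ExecutorUpdated(
    id: String,
    state: ExecutorState.Value,
    message: Option[String],
    exitStatus: Option[Int],
    workerLost: Boolean)

// With workerLost = true, the driver can treat the executor's worker as
// gone even though the state is DECOMMISSIONED rather than LOST.
val update = ExecutorUpdated(
  "exec-1", ExecutorState.DECOMMISSIONED,
  Some("worker decommissioned"), None, workerLost = true)
```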
   
   The alternatives would be to either introduce a new 
`ExecutorDecommissioned(id, MasterToDriverExecutorDecommissionInfo)` message, or 
enhance `ExecutorUpdated.workerLost` into a full struct/`Any` that can carry the 
`MasterToDriverExecutorDecommissionInfo`. 
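   A minimal sketch of the first alternative, purely for illustration (both `ExecutorDecommissioned` and `MasterToDriverExecutorDecommissionInfo` are hypothetical names from this discussion, not existing Spark types, and the fields are invented):

```scala
// Hypothetical richer decommission metadata the Master could send to the
// Driver, replacing the lone workerLost boolean. Fields are illustrative.
case class MasterToDriverExecutorDecommissionInfo(
    reason: String,
    workerHost: Option[String],
    workerLost: Boolean)

// Hypothetical dedicated message wrapping that struct.
case class ExecutorDecommissioned(
    id: String,
    info: MasterToDriverExecutorDecommissionInfo)

val msg = ExecutorDecommissioned(
  "exec-1",
  MasterToDriverExecutorDecommissionInfo(
    "worker decommissioned", Some("host-1"), workerLost = true))
```

The trade-off the comment describes: this buys a place to grow the metadata later, at the cost of a second message type that every cluster manager integration would have to handle.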
   
   This all seems like a bit too much machinery for now, so I will just document 
the choice inline in the code. 
   
   Thank you for raising this question :-) 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


