Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5479#discussion_r28202997
  
    --- Diff: yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala ---
    @@ -128,10 +128,14 @@ private[spark] class YarnClientSchedulerBackend(
         assert(client != null && appId != null, "Application has not been submitted yet!")
         val t = new Thread {
           override def run() {
    -        val (state, _) = client.monitorApplication(appId, logApplicationReport = false)
    -        logError(s"Yarn application has already exited with state $state!")
    -        sc.stop()
    -        Thread.currentThread().interrupt()
    +        try {
    +          val (state, _) = client.monitorApplication(appId, logApplicationReport = false)
    --- End diff --
    
    Duplicate of https://github.com/apache/spark/pull/5451, so it would have been better to collaborate on that rather than open a new PR. However, this is closer to the right fix, so maybe we can converge on this PR.
    
    Why is `Thread.currentThread().interrupt()` called here? I thought that would only be done to preserve the interrupt status, but then that should only happen in the `catch` block, right? The thread isn't waited on by anything else, and in the non-interrupted code path it terminates on its own anyway.
    
    Also, is it correct not to stop the `SparkContext` in this case?
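    To make the point concrete, here is a minimal, self-contained sketch (plain Scala, not Spark code; `Thread.sleep` stands in for the blocking monitoring call, and the class/variable names are just for illustration) of the convention I have in mind: the interrupt flag is restored only in the `catch` block, and the normal exit path leaves it alone.
    
    object InterruptConvention {
      def main(args: Array[String]): Unit = {
        val monitor = new Thread {
          override def run(): Unit = {
            try {
              // Stands in for the blocking monitoring call.
              Thread.sleep(60000)
              // Normal path: the thread simply finishes here. No interrupt()
              // is needed, since nothing else joins on it and it terminates anyway.
              println("monitoring finished normally")
            } catch {
              case _: InterruptedException =>
                // Only here does restoring the interrupt status make sense,
                // so that callers further up the stack can still observe it.
                Thread.currentThread().interrupt()
                println("interrupted; flag restored in the catch block only")
            }
          }
        }
        monitor.start()
        monitor.interrupt()
        monitor.join()
      }
    }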

