Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16189#discussion_r91671465
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -432,6 +435,57 @@ private[spark] class Executor(
       }
     
       /**
    +   * Supervises the killing / cancellation of a task by sending the interrupted flag, optionally
    +   * sending a Thread.interrupt(), and monitoring the task until it finishes.
    +   */
    +  private class TaskReaper(taskRunner: TaskRunner, interruptThread: Boolean) extends Runnable {
    +
    +    private[this] val killPollingFrequencyMs: Long =
    +      conf.getTimeAsMs("spark.task.killPollingFrequency", "10s")
    +
    +    private[this] val killTimeoutMs: Long = conf.getTimeAsMs("spark.task.killTimeout", "2m")
    +
    +    private[this] val takeThreadDump: Boolean =
    +      conf.getBoolean("spark.task.threadDumpKilledTasks", true)
    +
    +    override def run(): Unit = {
    +      val startTimeMs = System.currentTimeMillis()
    +      def elapsedTimeMs = System.currentTimeMillis() - startTimeMs
    +
    +      while (!taskRunner.isFinished && elapsedTimeMs < killTimeoutMs) {
    +        taskRunner.kill(interruptThread = interruptThread)
    +        taskRunner.synchronized {
    +          Thread.sleep(killPollingFrequencyMs)
    +        }
    +        if (!taskRunner.isFinished) {
    +          logWarning(s"Killed task ${taskRunner.taskId} is still running after $elapsedTimeMs ms")
    +          if (takeThreadDump) {
    +            try {
    +              val threads = Utils.getThreadDump()
    +              threads.find(_.threadName == taskRunner.threadName).foreach { thread =>
    +                logWarning(s"Thread dump from task ${taskRunner.taskId}:\n${thread.stackTrace}")
    +              }
    +            } catch {
    +              case NonFatal(e) =>
    +                logWarning("Exception thrown while obtaining thread dump: ", e)
    +            }
    +          }
    +        }
    +      }
    +      if (!taskRunner.isFinished && killTimeoutMs > 0 && elapsedTimeMs > killTimeoutMs) {
    --- End diff --
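    
    As a side note, here is a minimal, hypothetical sketch of how the three settings introduced in this diff could be tuned; the key names are taken from this revision of the patch and may well be renamed before merge:
    
    ```scala
    import org.apache.spark.SparkConf
    
    object TaskReaperConfSketch {
      // Hypothetical tuning of the kill-monitoring knobs shown in the diff above.
      val conf: SparkConf = new SparkConf()
        .set("spark.task.killPollingFrequency", "5s")    // how often the reaper re-sends the kill and polls
        .set("spark.task.killTimeout", "1m")             // stop monitoring after this long
        .set("spark.task.threadDumpKilledTasks", "true") // log a thread dump for tasks that ignore the kill
    }
    ```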
    
    I thought about this and it seems like there are only two possibilities 
here:
    
    1. We're running in local mode, in which case we don't actually want to throw an exception to kill the JVM; even if we did throw, the JVM would keep running because there's no uncaught exception handler installed here.
    2. We're running in a separate JVM, in which case any exception thrown in this thread and not caught will cause the JVM to exit (see the sketch at the end of this comment). The only place in the body of this code that might actually throw an unexpected exception is the thread dump call (`Utils.getThreadDump()`), which is already wrapped in a `try-catch` block to prevent exceptions from bubbling up.
    
    Thus the only purpose of a `finally` block would be to detect whether it was reached via an exception path and to log a warning that task kill progress will no longer be monitored. Basically, I'm not sure what the `finally` block buys us in terms of actionable / useful logs, and it would only add complexity because we'd then need to be careful not to throw from the `finally` block in case it was entered via an exception, etc.
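    
    To illustrate point 2, here is a minimal, self-contained sketch (not Spark's actual handler, just the general JVM mechanism it relies on) of how an uncaught exception escaping a thread brings down the whole JVM once a default handler that exits is installed:
    
    ```scala
    object UncaughtExitSketch {
      def main(args: Array[String]): Unit = {
        // Install a default handler similar in spirit to what an executor JVM does:
        // any exception that escapes a thread terminates the process.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler {
          override def uncaughtException(t: Thread, e: Throwable): Unit = {
            System.err.println(s"Uncaught exception in ${t.getName}: $e; exiting JVM")
            System.exit(1) // arbitrary nonzero exit code for this sketch
          }
        })
    
        val reaperLike = new Thread(new Runnable {
          override def run(): Unit = {
            // Anything thrown here and not caught reaches the handler above
            // and exits the JVM -- the behavior described in point 2.
            throw new IllegalStateException("task did not exit after kill timeout")
          }
        }, "task-reaper-sketch")
        reaperLike.start()
        reaperLike.join()
      }
    }
    ```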

