Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16189#discussion_r91672368
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -432,6 +458,78 @@ private[spark] class Executor(
       }
     
       /**
    +   * Supervises the killing / cancellation of a task by sending the interrupted flag, optionally
    +   * sending a Thread.interrupt(), and monitoring the task until it finishes.
    +   */
    +  private class TaskReaper(
    +      taskRunner: TaskRunner,
    +      val interruptThread: Boolean)
    +    extends Runnable {
    +
    +    private[this] val taskId: Long = taskRunner.taskId
    +
    +    private[this] val killPollingFrequencyMs: Long =
    +      conf.getTimeAsMs("spark.task.killPollingFrequency", "10s")
    +
    +    private[this] val killTimeoutMs: Long = conf.getTimeAsMs("spark.task.killTimeout", "2m")
    +
    +    private[this] val takeThreadDump: Boolean =
    +      conf.getBoolean("spark.task.threadDumpKilledTasks", true)
    +
    +    override def run(): Unit = {
    +      val startTimeMs = System.currentTimeMillis()
    +      def elapsedTimeMs = System.currentTimeMillis() - startTimeMs
    +      try {
    +        while (!taskRunner.isFinished && (elapsedTimeMs < killTimeoutMs || killTimeoutMs <= 0)) {
    +          taskRunner.kill(interruptThread = interruptThread)
    --- End diff --
    
    In the case where `interruptThread = false`, `taskRunner.kill()` is idempotent
    and subsequent calls have no effect. In the case where we _do_ interrupt,
    however, the introduction of this polling loop means that we'll interrupt the
    same task multiple times. Note that this could, in principle, have happened
    before this change, but in practice I don't think it ever would have.
    
    Are there cases where back-to-back interrupts could cause user code to break
    in really bad ways? I'm wondering about a scenario where a task is interrupted
    while issuing a SQL query, uses a `finally` block to perform some kind of
    rollback, and then has that rollback / cleanup step itself interrupted a
    little while later. If that scenario is a concern, then maybe the subsequent
    polls should only take periodic thread dumps rather than re-interrupt. In that
    case, however, we'd need to make sure that back-to-back
    `killTask(interrupt=true)` calls still interrupt once on the first call, so
    the logic that avoids creating a second TaskReaper would need to change a bit:
    we'd issue a one-time interrupt in the case where we decide that the current
    TaskReaper subsumes the interrupt=true one that we'd otherwise create.
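    
    For concreteness, here's a minimal sketch of what that "interrupt once, then
    only monitor" variant of the polling loop might look like. Names such as
    `taskRunner`, `taskId`, `killPollingFrequencyMs`, `killTimeoutMs`, and
    `takeThreadDump` refer to the fields in this diff; `logThreadDumpForTask` is
    a hypothetical helper standing in for the thread-dump logging, and the
    escalation step at the end is left as a placeholder rather than the PR's
    actual logic:
    
    ```scala
    override def run(): Unit = {
      val startTimeMs = System.currentTimeMillis()
      def elapsedTimeMs: Long = System.currentTimeMillis() - startTimeMs
    
      // Deliver the kill signal (and the optional Thread.interrupt()) exactly once,
      // so that user code is not re-interrupted on every polling iteration.
      taskRunner.kill(interruptThread = interruptThread)
    
      while (!taskRunner.isFinished && (elapsedTimeMs < killTimeoutMs || killTimeoutMs <= 0)) {
        // Subsequent iterations only observe the task: re-set the (idempotent)
        // interrupted flag without interrupting, and optionally take a thread dump.
        taskRunner.kill(interruptThread = false)
        if (takeThreadDump) {
          logThreadDumpForTask(taskId)  // hypothetical helper
        }
        Thread.sleep(killPollingFrequencyMs)
      }
    
      if (!taskRunner.isFinished && killTimeoutMs > 0 && elapsedTimeMs > killTimeoutMs) {
        // Timeout escalation (as in the rest of this diff) would go here.
      }
    }
    ```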

