Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20449#discussion_r165051362
  
    --- Diff: core/src/main/scala/org/apache/spark/shuffle/BlockStoreShuffleReader.scala ---
    @@ -104,9 +104,18 @@ private[spark] class BlockStoreShuffleReader[K, C](
             context.taskMetrics().incMemoryBytesSpilled(sorter.memoryBytesSpilled)
             context.taskMetrics().incDiskBytesSpilled(sorter.diskBytesSpilled)
             context.taskMetrics().incPeakExecutionMemory(sorter.peakMemoryUsedBytes)
    +        // Use completion callback to stop sorter if task was cancelled.
    +        context.addTaskCompletionListener(tc => {
    +          // Note: we only stop sorter if cancelled as sorter.stop wouldn't be called in
    +          // CompletionIterator. Another way would be making sorter.stop idempotent.
    +          if (tc.isInterrupted()) { sorter.stop() }
    --- End diff --
    
    It seems we can remove this `if` if we don't return a `CompletionIterator`.
    
    BTW, I think we need to check all the places that use `CompletionIterator` to see whether they handle job cancellation.
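    
    The alternative the inline comment mentions (making `sorter.stop` idempotent) could be sketched roughly as below. This is only an illustration: `Sorter` and `cleanupCount` are hypothetical stand-ins for the real Spark sorter, showing how an `AtomicBoolean` guard makes `stop()` safe to call from both a `CompletionIterator` completion function and a task-completion listener:
    
    ```scala
    import java.util.concurrent.atomic.AtomicBoolean
    
    // Hypothetical sorter with an idempotent stop(): the cleanup body runs
    // at most once, no matter how many callers invoke stop().
    class Sorter {
      private val stopped = new AtomicBoolean(false)
      var cleanupCount = 0 // visible only to demonstrate idempotence
    
      def stop(): Unit = {
        // compareAndSet flips false -> true exactly once; later calls no-op.
        if (stopped.compareAndSet(false, true)) {
          cleanupCount += 1 // release spill files, memory, etc.
        }
      }
    }
    
    object Demo extends App {
      val s = new Sorter
      s.stop() // e.g. from the CompletionIterator's completion function
      s.stop() // e.g. from the task-completion listener
      println(s.cleanupCount) // prints 1: cleanup ran exactly once
    }
    ```
    
    With this guard in place, the `if (tc.isInterrupted())` check in the diff would no longer be needed, since calling `stop()` unconditionally from the listener would be harmless.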


---
