Github user advancedxy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20449#discussion_r171489737
  
    --- Diff: core/src/test/scala/org/apache/spark/JobCancellationSuite.scala ---
    @@ -40,6 +41,10 @@ class JobCancellationSuite extends SparkFunSuite with Matchers with BeforeAndAft
       override def afterEach() {
         try {
           resetSparkContext()
    +      // Reset semaphores if used by multiple tests.
    +      // Note: if other semaphores are shared by multiple tests, please reset them in this block
    +      JobCancellationSuite.taskStartedSemaphore.drainPermits()
    +      JobCancellationSuite.taskCancelledSemaphore.drainPermits()
    --- End diff ---
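    For context, `Semaphore.drainPermits()` atomically removes and returns all currently available permits, which is what makes it a safe per-test reset. A minimal standalone sketch:

    ```scala
    import java.util.concurrent.Semaphore

    val sem = new Semaphore(0)
    sem.release(3)                      // permits left over from a previous test
    assert(sem.drainPermits() == 3)     // removes every available permit at once
    assert(sem.availablePermits() == 0) // the next acquire() will block again
    ```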
    
    > for simplicity, I'd like to reset all semaphores here, instead of thinking about which ones are shared.
    
    Another way to avoid this problem would be to not reuse semaphores at all, but that's too verbose.
    
    As for your suggestion: if new semaphores are added later, how would the author know they are supposed to reset them here? Maybe a comment is needed at the semaphore declarations, for example as sketched below.
    
    > or we can make all semaphores local, so that we don't need to care about it.
    
    No, a global semaphore is required when it is shared between the driver and an executor (another thread in local mode): even in local mode the task closure is serialized, so a semaphore captured as a local variable would reach the executor as a deserialized copy rather than the instance the driver is waiting on.
    See the related PR https://github.com/apache/spark/pull/4180 for details.
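    A hypothetical sketch of why the object-level reference matters (names like `TaskSyncHolder` and `LocalModeSemaphoreDemo` are illustrative, not from the suite):

    ```scala
    import java.util.concurrent.Semaphore
    import org.apache.spark.{SparkConf, SparkContext}

    // Top-level object: the task closure refers to it statically, so after
    // closure serialization the executor thread still releases the SAME
    // Semaphore instance the driver thread acquires.
    object TaskSyncHolder {
      val taskRan = new Semaphore(0)
    }

    object LocalModeSemaphoreDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local").setAppName("semaphore-demo"))
        try {
          // Runs on the executor thread. Had `taskRan` been a local val
          // captured by this closure, the executor would release a
          // deserialized copy and the acquire() below would block forever.
          sc.parallelize(Seq(1)).foreachAsync { _ =>
            TaskSyncHolder.taskRan.release()
          }
          TaskSyncHolder.taskRan.acquire() // unblocks once the task has run
        } finally {
          sc.stop()
        }
      }
    }
    ```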


