GitHub user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17325#discussion_r107821258
  
    --- Diff: core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala ---
    @@ -481,27 +481,39 @@ class BlockManagerProactiveReplicationSuite extends BlockManagerReplicationBehav
         assert(blockLocations.size === replicationFactor)
     
         // remove a random blockManager
    -    val executorsToRemove = blockLocations.take(replicationFactor - 1)
    +    val executorsToRemove = blockLocations.take(replicationFactor - 1).toSet
         logInfo(s"Removing $executorsToRemove")
    -    executorsToRemove.foreach{exec =>
    -      master.removeExecutor(exec.executorId)
    +    initialStores.filter(bm => executorsToRemove.contains(bm.blockManagerId)).foreach { bm =>
    +      master.removeExecutor(bm.blockManagerId.executorId)
    +      bm.stop()
          // giving enough time for replication to happen and new block be reported to master
    -      Thread.sleep(200)
    +      eventually(timeout(5 seconds), interval(100 millis)) {
    +        val newLocations = master.getLocations(blockId).toSet
    +        assert(newLocations.size === replicationFactor)
    +      }
         }
     
    -    val newLocations = eventually(timeout(5 seconds), interval(10 millis)) {
    +    val newLocations = eventually(timeout(5 seconds), interval(100 millis)) {
           val _newLocations = master.getLocations(blockId).toSet
           assert(_newLocations.size === replicationFactor)
           _newLocations
         }
         logInfo(s"New locations : $newLocations")
    -    // there should only be one common block manager between initial and new locations
    -    assert(newLocations.intersect(blockLocations.toSet).size === 1)
     
    -    // check if all the read locks have been released
    +    // new locations should not contain stopped block managers
    +    assert(newLocations.forall(bmId => !executorsToRemove.contains(bmId)),
    +      "New locations contain stopped block managers.")
    +
    +    // this is to ensure the last read lock gets released before we try to
    +    // check for read-locks. The check for read-locks using the method below is not
    +    // idempotent, and therefore can't be used in an `eventually` block.
    +    Thread.sleep(500)
    --- End diff --
    
    Do you think it's better to just add a private[spark] method to check for read locks? I'm worried this test will still be brittle, and it seems relatively easy to just add that method.
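
    For reference, such a helper might look roughly like this (a sketch only; the method name is made up, and it assumes `BlockInfoManager`'s internal `infos` map with `readerCount` / `writerTask` fields, which the actual implementation may expose differently):

    ```scala
    // Hypothetical addition to BlockInfoManager: report whether any block
    // is still locked. Marked private[spark] so tests can poll it
    // deterministically instead of sleeping.
    private[spark] def hasPendingLocks: Boolean = synchronized {
      // A block is still locked if any task holds a read lock on it,
      // or if a writer task is registered for it.
      infos.values.exists { info =>
        info.readerCount > 0 || info.writerTask != BlockInfo.NO_WRITER
      }
    }
    ```

    The test could then drop the `Thread.sleep(500)` and instead poll something like `eventually { assert(!bm.blockInfoManager.hasPendingLocks) }` for each surviving store, which is idempotent and safe to retry.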

