This is an automated email from the ASF dual-hosted git repository.

jiangxb1987 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new ec80e4b  [SPARK-31784][CORE][TEST] Fix test BarrierTaskContextSuite."share messages with allGather() call"
ec80e4b is described below

commit ec80e4b5f80876378765544e0c4c6a4444f1a704
Author: yi.wu <yi...@databricks.com>
AuthorDate: Thu May 21 23:34:11 2020 -0700

    [SPARK-31784][CORE][TEST] Fix test BarrierTaskContextSuite."share messages with allGather() call"
    
    ### What changes were proposed in this pull request?
    
    Change from `messages.toList.iterator` to `Iterator.single(messages.toList)`.
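    A minimal sketch of the difference, using plain Scala collections as a stand-in for the RDD closure (the sample `messages` value below is illustrative):

    ```scala
    // Hypothetical messages returned by allGather() in a 4-task barrier stage.
    val messages: Array[String] = Array("0", "1", "2", "3")

    // Before: the closure emitted every gathered message as its own record.
    val before: Iterator[String] = messages.toList.iterator

    // After: the closure emits a single record holding the whole gathered list.
    val after: Iterator[List[String]] = Iterator.single(messages.toList)
    ```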
    
    ### Why are the changes needed?
    
    In this test, `rdd2.collect().head` should be `List("0", "1", "2", "3")`, but it is currently `"0"` because the old closure emitted each gathered message as a separate record.
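    A hedged, plain-Scala simulation of what `collect()` sees in each case, assuming four barrier tasks that each gather the same partition ids (no Spark required to run it):

    ```scala
    // Every simulated task "gathers" the same four partition ids.
    val gathered = Array("0", "1", "2", "3")

    // Old closure: each of the 4 tasks emits 4 String records, 16 in total,
    // so the first collected element is just the String "0".
    val oldCollected = (0 until 4).flatMap(_ => gathered.toList.iterator)
    assert(oldCollected.head == "0")

    // Fixed closure: each task emits one record holding the full list,
    // so all 4 collected elements equal List("0", "1", "2", "3").
    val newCollected = (0 until 4).flatMap(_ => Iterator.single(gathered.toList))
    assert(newCollected.length == 4)
    assert(newCollected.forall(_ == List("0", "1", "2", "3")))
    ```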
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Updated test.
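    For reference, a rough, self-contained sketch of the pattern the updated test exercises; the SparkContext setup, master URL, and partition count below are illustrative assumptions rather than the suite's actual configuration:

    ```scala
    import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

    object AllGatherSketch {
      def main(args: Array[String]): Unit = {
        // Barrier mode needs at least as many concurrent task slots as partitions.
        val sc = new SparkContext(
          new SparkConf().setMaster("local[4]").setAppName("allgather-sketch"))
        try {
          val rdd = sc.makeRDD(1 to 10, 4)
          val rdd2 = rdd.barrier().mapPartitions { _ =>
            val context = BarrierTaskContext.get()
            // Each barrier task contributes its partition id and gets all ids back.
            val messages: Array[String] = context.allGather(context.partitionId().toString)
            // Emit exactly one record per task: the complete gathered list.
            Iterator.single(messages.toList)
          }
          val messages = rdd2.collect()
          assert(messages.length == 4)
          assert(messages.forall(_ == List("0", "1", "2", "3")))
        } finally {
          sc.stop()
        }
      }
    }
    ```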
    
    Thanks to WeichenXu123 for reporting this problem.
    
    Closes #28596 from Ngone51/fix_allgather_test.
    
    Authored-by: yi.wu <yi...@databricks.com>
    Signed-off-by: Xingbo Jiang <xingbo.ji...@databricks.com>
    (cherry picked from commit 83d0967dcc6b205a3fd2003e051f49733f63cb30)
    Signed-off-by: Xingbo Jiang <xingbo.ji...@databricks.com>
---
 .../org/apache/spark/scheduler/BarrierTaskContextSuite.scala   | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala
index b5614b2..6191e41 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala
@@ -69,12 +69,12 @@ class BarrierTaskContextSuite extends SparkFunSuite with LocalSparkContext with
       // Pass partitionId message in
       val message: String = context.partitionId().toString
       val messages: Array[String] = context.allGather(message)
-      messages.toList.iterator
+      Iterator.single(messages.toList)
     }
-    // Take a sorted list of all the partitionId messages
-    val messages = rdd2.collect().head
-    // All the task partitionIds are shared
-    for((x, i) <- messages.view.zipWithIndex) assert(x.toString == i.toString)
+    val messages = rdd2.collect()
+    // All the task partitionIds are shared across all tasks
+    assert(messages.length === 4)
+    assert(messages.forall(_ == List("0", "1", "2", "3")))
   }
 
   test("throw exception if we attempt to synchronize with different blocking calls") {


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
