cloud-fan commented on a change in pull request #24892: [SPARK-25341][Core] Support rolling back a shuffle map stage and re-generate the shuffle files
URL: https://github.com/apache/spark/pull/24892#discussion_r299353529
 
 

 ##########
 File path: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
 ##########
 @@ -2751,17 +2746,141 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(failedStages.collect {
       case stage: ResultStage => stage
     }.head.findMissingPartitions() == Seq(0, 1))
-
     scheduler.resubmitFailedStages()
+    (shuffleId1, shuffleId2)
+  }
+
+  test("SPARK-25341: abort stage while using old fetch protocol") {
+    // reset the test context with using old fetch protocol
+    afterEach()
+    val conf = new SparkConf()
+    conf.set(config.SHUFFLE_USE_OLD_FETCH_PROTOCOL.key, "true")
+    init(conf)
+
+    val (shuffleId1, _) = constructIndeterminateStageRetryScenario()
+    // The second task of the `shuffleMapRdd2` failed with fetch failure
 
 Review comment:
   This is a little misleading. `shuffleMapRdd2` only misses its first partition, so it's odd to have the second task fail. Can we fail the first task instead?
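   For illustration only, a minimal sketch of what failing the first task instead might look like, assuming the suite's existing helpers (`runEvent`, `makeCompletionEvent`, `makeBlockManagerId`, `taskSets`) and that `shuffleId1` is in scope; the task-set index and the exact `FetchFailed` argument list are assumptions and may differ in this branch:

   ```scala
   // Hypothetical sketch, not the PR's code: complete the *first* task (index 0)
   // of the retried stage with a FetchFailed on shuffleId1, so the failed task
   // matches the partition that shuffleMapRdd2 is actually missing.
   runEvent(makeCompletionEvent(
     taskSets.last.tasks(0),
     FetchFailed(makeBlockManagerId("hostA"), shuffleId1, 0, 0, "ignored"),
     null))
   ```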

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
