Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18393#discussion_r126273069

--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2277,6 +2277,29 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with Timeou
       (Success, 1)))
   }
 
+  test("task end event should have updated accumulators (SPARK-20342)") {
+    val accumIds = new HashSet[Long]()
+    val listener = new SparkListener() {
+      override def onTaskEnd(event: SparkListenerTaskEnd): Unit = {
+        event.taskInfo.accumulables.foreach { acc => accumIds += acc.id }
+      }
+    }
+    sc.addSparkListener(listener)
+
+    // Try a few times in a loop to make sure. This is not guaranteed to fail when the bug exists,
+    // but it should at least make the test flaky. If the bug is fixed, this should always pass.
+    (1 to 10).foreach { _ =>
+      accumIds.clear()
+
+      val accum = sc.longAccumulator
+      sc.parallelize(1 to 10, 10).foreach { _ =>
--- End diff --

The bug is that a task may lose its accumulator updates, but this test can only fail if all 10 tasks lose their updates. Shall we use fewer partitions to make the test more likely to fail?
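For concreteness, here is a minimal sketch of what the suggested change could look like, reusing `sc` and `accumIds` from the quoted test. The task body, the listener-bus flush, and the final assertion are assumptions on my part, since they fall outside the quoted diff:

    (1 to 10).foreach { _ =>
      accumIds.clear()

      val accum = sc.longAccumulator
      // 1 partition instead of 10: the job runs a single task, so the test
      // fails whenever that one task loses its accumulator updates, rather
      // than only when all 10 tasks lose them.
      sc.parallelize(1 to 10, 1).foreach { _ =>
        accum.add(1)  // assumed task body; elided in the quoted diff
      }
      sc.listenerBus.waitUntilEmpty(1000)  // assumed: drain pending events before asserting
      assert(accumIds.contains(accum.id))  // assumed assertion; outside the quoted diff
    }

The trade-off is fewer chances per job for the race to occur, but a single lost update is now enough to surface the bug.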