Github user JoshRosen commented on the pull request:

https://github.com/apache/spark/pull/3009#issuecomment-63873718

@kayousterhout

> Is this still the expected behavior? (this happened from running "val rdd = sc.parallelize(1 to 10, 2).map((_, 1)).reduceByKey(_ + _)" and then counting the elements twice)

Yes, this is expected (I should probably write a Selenium test that explicitly checks this behavior in order to detect if it inadvertently changes).

Given that we have an overestimate of which stages will be run when the job starts, how would you change this? One approach would be to just prune the stages that weren't run and advance the progress bar to 100% (e.g. it would show 1/1 stages and 2/2 tasks for your example). Another approach would be to enrich the listener API so that the UI can determine which stages are likely to be skipped and display a progress bar that's a potential underestimate. If we do this, though, I think we'd want to have the progress bar update itself to show more remaining tasks once it learns that more stages need to be run. This is going to require a _lot_ more testing to make sure that it doesn't run into any corner cases.
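A minimal spark-shell session reproducing the behavior described in the quote (a sketch; it assumes the shell's built-in SparkContext `sc` and a local master). The second count reuses the shuffle output written by the first, so the shuffle map stage is skipped even though the UI's initial estimate includes it:

    // First job: runs both the shuffle map stage and the result stage.
    val rdd = sc.parallelize(1 to 10, 2).map((_, 1)).reduceByKey(_ + _)
    rdd.count()

    // Second job: the shuffle map stage's output already exists, so only
    // the result stage runs and the other stage shows up as skipped.
    rdd.count()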