Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11720#discussion_r57668558
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -286,6 +286,7 @@ class DAGScheduler(
           case None =>
             // We are going to register ancestor shuffle dependencies
             getAncestorShuffleDependencies(shuffleDep.rdd).foreach { dep =>
    +          assert(!shuffleToMapStage.get(dep.shuffleId).isDefined)
    --- End diff ---
    
    Yes, I tend to agree that the way a Stage that should really be the same Stage gets recreated as a new Stage is undesirable in the DAGScheduler. The more important point, though, is that even though this behavior is not optimal, it does produce correct results, and we can't sacrifice that correctness for the sake of code that merely seems more desirable.
    
    That's not to say that the efforts already begun to fix this kind of behavior while still producing correct results shouldn't be driven to completion.
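    
    For reference, here is a minimal, self-contained sketch of the registration pattern the proposed assert guards, using simplified, hypothetical ShuffleDep/MapStage stand-ins rather than the real DAGScheduler types: when a shuffle dependency misses the shuffleToMapStage cache, each ancestor shuffle dependency returned for it is expected to be unregistered before a stage is created for it.
    
        import scala.collection.mutable
    
        // Hypothetical, simplified stand-ins for the real RDD/Stage machinery.
        case class ShuffleDep(shuffleId: Int, ancestors: Seq[ShuffleDep])
        case class MapStage(shuffleId: Int)
    
        object RegistrationSketch {
          private val shuffleToMapStage = new mutable.HashMap[Int, MapStage]
    
          // Mirrors the shape of the cache-miss path: reuse a cached stage if
          // present, otherwise register all ancestor shuffle dependencies first,
          // then the requested one.
          def getOrCreateMapStage(dep: ShuffleDep): MapStage =
            shuffleToMapStage.get(dep.shuffleId) match {
              case Some(stage) => stage
              case None =>
                dep.ancestors.foreach { ancestor =>
                  // The assert from the diff: an ancestor reached on this path
                  // should not already have a registered map stage.
                  assert(!shuffleToMapStage.contains(ancestor.shuffleId))
                  shuffleToMapStage(ancestor.shuffleId) = MapStage(ancestor.shuffleId)
                }
                val stage = MapStage(dep.shuffleId)
                shuffleToMapStage(dep.shuffleId) = stage
                stage
            }
        }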

