Github user squito commented on the pull request:

    https://github.com/apache/spark/pull/8090#issuecomment-130093251
  
    Hi @andrewor14 , no, that is not quite what I was saying.  I was basically 
just repeating my comments from the earlier PR, about how I was confused about 
what the goal is here, and suggesting that we can probably do something much 
simpler: https://github.com/apache/spark/pull/7770#discussion_r36189276.  
Admittedly I never even considered this interaction w/ skipped stages, but I 
was in general wary of making a change that seemed more complicated than it 
needed to be.
    
    In fact, the questions you are asking are *exactly* my questions about the 
initial approach.  I don't understand when & why we want to reset those values 
-- there are so many seemingly similar cases which result in different 
behavior that I can't figure out what the desired effect for the end user 
really is.  The global value of the accumulator is going to be a mix of 
different stages no matter what, since you can have multiple stages executing 
simultaneously, and even multiple attempts for one stage.  It seems like you 
just want the value you see in the SparkListener `StageInfo`s to reflect just 
that stage.  But that is possible with the much simpler implementation of just 
changing the internal accumulators in `Stage` to:
    
    ```scala
    private var _internalAccumulators: Seq[Accumulator[Long]] = 
InternalAccumulator.create()
    ```
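    To make the semantics of that one-line change concrete, here is a toy 
sketch (plain Scala, with a hypothetical `SimpleAccumulator` and `ToyStage` 
standing in for Spark's `Accumulator[Long]` and `Stage` -- not the real API): 
because the accumulators are created fresh per `Stage` instance, the values 
reported for one stage never mix with another stage's, but all attempts of 
the *same* stage share them, so the value keeps incrementing across attempts.
    
    ```scala
    // Minimal stand-in for Accumulator[Long].
    class SimpleAccumulator {
      var value: Long = 0L
      def add(v: Long): Unit = { value += v }
    }
    
    // Hypothetical stand-in for Stage: accumulators are created once per
    // Stage instance, mirroring
    //   private var _internalAccumulators = InternalAccumulator.create()
    class ToyStage {
      private var _internalAccumulators: Seq[SimpleAccumulator] =
        Seq(new SimpleAccumulator)
    
      def accumulators: Seq[SimpleAccumulator] = _internalAccumulators
    
      // Each attempt adds into the same per-stage accumulators.
      def runAttempt(bytesSpilled: Long): Unit =
        accumulators.head.add(bytesSpilled)
    }
    
    object Sketch {
      def main(args: Array[String]): Unit = {
        val stage1 = new ToyStage
        stage1.runAttempt(10L)
        stage1.runAttempt(10L)   // retry: value keeps growing within the stage
        println(stage1.accumulators.head.value)  // 20
    
        val stage2 = new ToyStage // a different stage gets fresh accumulators
        stage2.runAttempt(10L)
        println(stage2.accumulators.head.value)  // 10, independent of stage1
      }
    }
    ```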
    
    That also passes all your tests, so I'm not sure why we want something more 
complex.  It'll just keep incrementing the value across multiple attempts, but 
unless we clearly define the desired behavior (across multiple attempts, 
partial recomputations, full recomputations, full recomputations that get 
broken out into two attempts, recomputations via a skipped stage, etc.) I 
don't understand doing anything else.  At least this would be doing something 
that is well defined.
    
    "Safe" is an interesting word to use ... the safe thing to do is to not 
touch the DAGScheduler (as it seems we do even when we know it's broken) and 
let the metric be incorrect in the cases we don't properly understand.

