GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/4021#issuecomment-76542760
  
    I think I've figured it out: consider the lifecycle of an accumulator in a 
task, say a ShuffleMapTask. On the executor, each task deserializes its own 
copy of the RDD inside its `runTask` method, so the strong reference to the 
RDD disappears at the end of `runTask`.  In `Executor.run()`, we call 
`Accumulators.values` after `runTask` has exited, so there's a small window in 
which the task's RDD can be GC'd, causing the accumulators to be GC'd as well 
because there are no longer any strong references to them.
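
    For what it's worth, here's a minimal, self-contained sketch of that 
window (illustrative only, not Spark's actual code; the names 
`WeakRefWindowSketch`, `register`, `values`, and `runTaskLike` are made up): 
once the only strong reference goes out of scope, a GC between the end of the 
"task" and the read of the values can clear the weak reference.

```scala
import java.lang.ref.WeakReference
import scala.collection.mutable

object WeakRefWindowSketch {
  // Registry that holds only weak references, analogous to the situation
  // described above (names and structure are illustrative, not Spark's code).
  private val registered = mutable.Map[Long, WeakReference[AnyRef]]()

  def register(id: Long, acc: AnyRef): Unit = {
    registered(id) = new WeakReference(acc)
  }

  // Reads back whatever the weak references still point to.
  def values: Map[Long, Option[AnyRef]] =
    registered.toMap.map { case (id, ref) => id -> Option(ref.get()) }

  private def runTaskLike(): Unit = {
    val acc = new Object      // the only strong reference lives in this scope
    register(1L, acc)
    // ... the task body would update `acc` here ...
  }                           // strong reference is gone once this returns

  def main(args: Array[String]): Unit = {
    runTaskLike()
    // Between the end of the "task" and this read there is a window in which
    // a GC can clear the weak reference, just as described above.
    System.gc()
    println(values)           // may print Map(1 -> None) if the object was collected
  }
}
```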
    
    The fix is to keep strong references in `localAccums`, since we clear that 
map at the end of each task anyway.  I'm glad that I was able to figure out 
precisely _why_ this was necessary, and sorry that I missed it during review; 
I'll submit a fix shortly.  In terms of preventive measures, it might be a 
good idea to write up the lifetime / lifecycle of objects' strong references 
whenever we use WeakReferences, since the process of explicitly writing that 
out would help prevent these sorts of mistakes in the future.
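
    To make that concrete, here's a rough sketch of the shape of the fix 
(again, illustrative names only, not the real `Accumulators` implementation): 
the driver-facing registry can stay weak, but the per-task map holds strong 
references and is cleared explicitly when the task finishes, so reading the 
values can never race with a GC.

```scala
import java.lang.ref.WeakReference
import scala.collection.mutable

// Illustrative sketch only (not the actual Spark code): the global registry
// stays weak so accumulators can eventually be collected, while the per-task
// map holds strong references until it is cleared at the end of the task.
object AccumulatorRegistrySketch {
  private val originals   = mutable.Map[Long, WeakReference[AnyRef]]()
  private val localAccums = mutable.Map[Long, AnyRef]()   // strong references

  def register(id: Long, acc: AnyRef): Unit = {
    originals(id) = new WeakReference(acc)
    localAccums(id) = acc    // keeps acc reachable until clear() runs
  }

  // Safe to call after the task body has returned: the strong references in
  // localAccums keep the accumulators alive through this read.
  def values: Map[Long, AnyRef] = localAccums.toMap

  def clear(): Unit = localAccums.clear()   // called at the end of each task
}
```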

