What do you mean when you say "the overhead of spark shuffles starts to
accumulate"? Could you elaborate?

In newer versions of Spark, shuffle data is cleaned up automatically
when an RDD goes out of scope. It is safe to remove shuffle data at
that point because the RDD can no longer be referenced. If you are
seeing a large build-up of shuffle data, it's possible you are
inadvertently retaining references to older RDDs. Could you explain
what your job is actually doing?

- Patrick

On Mon, Dec 22, 2014 at 2:36 PM, Ganelin, Ilya
<ilya.gane...@capitalone.com> wrote:
> Hi all, I have a long-running job iterating over a huge dataset. Parts of
> this operation are cached. Since the job runs for so long, eventually the
> overhead of Spark shuffles starts to accumulate, culminating in the driver
> starting to swap.
>
> I am aware of the spark.cleaner.ttl parameter that lets me configure
> when cleanup happens, but the issue is that the cleanup isn't done
> safely: I can be in the middle of processing a stage when it runs, and
> my cached RDDs get cleared. This ultimately causes a
> KeyNotFoundException when I try to reference the now-cleared cached RDD.
> This behavior doesn't make much sense to me; I would expect the cached
> RDD either to be regenerated or, at the very least, for there to be an
> option to run this cleanup without deleting those RDDs.
>
> Is there a programmatically safe way of doing this cleanup that doesn't
> break everything?
>
> If I instead tear down the SparkContext and bring up a new one for
> every iteration (assuming that each iteration is sufficiently long-lived),
> would memory get released appropriately?
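
One way to get the "programmatically safe" cleanup asked about above, without
relying on spark.cleaner.ttl, is to unpersist each iteration's RDD explicitly
once its successor has been materialized; the old RDD then becomes
unreferenced and Spark can clean up its shuffle data automatically. A rough
sketch (the loop below is hypothetical, just to show the ordering; it assumes
sc is an existing SparkContext):

import org.apache.spark.SparkContext._   // pair-RDD implicits (needed before Spark 1.3)
import org.apache.spark.rdd.RDD

var current: RDD[(Int, Long)] =
  sc.parallelize(1 to 1000000).map(i => (i % 1000, 1L)).cache()
current.count()                        // materialize the first iteration

for (_ <- 1 to 100) {
  val next = current.reduceByKey(_ + _).cache()
  next.count()                         // materialize BEFORE dropping the old data,
                                       // so no running stage still depends on it
  current.unpersist(blocking = true)   // explicitly free the previous iteration's blocks
  current = next                       // the old RDD is now unreferenced, so its
                                       // shuffle data becomes eligible for cleanup
}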
