Did you unpersist the broadcast objects?
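
If the broadcasts are ones you create yourself with sparkContext.broadcast,
each one pins memory on the executors and keeps a reference on the driver
until it is released. A minimal cleanup sketch (the lookup name, the
local[2] master, and the try/finally shape are illustrative, not taken from
your code):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[2]")            // illustrative test master
      .appName("broadcast-cleanup")
      .getOrCreate()

    // A suite-level broadcast: if it is never released, its blocks can
    // outlive the tests that used it.
    val lookup = spark.sparkContext.broadcast(Map("a" -> 1, "b" -> 2))

    try {
      // ... tests that read lookup.value ...
    } finally {
      lookup.unpersist(blocking = true) // drop the executor-side copies
      lookup.destroy()                  // release the driver-side data too
      spark.stop()
    }

unpersist only removes the executor copies (the variable can still be
rebroadcast later); destroy releases it for good. Broadcasts that Spark
creates internally for broadcast joins are cleaned by the ContextCleaner
once nothing references them, so for those the question is whether the test
harness is still holding onto the plans.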

On Mon, Oct 17, 2016 at 10:02 AM lev <kat...@gmail.com> wrote:

> Hello,
>
> I'm in the process of migrating my application to Spark 2.0.1,
> and I think there are some memory leaks related to broadcast joins.
>
> The application has many unit tests.
> Each individual test suite passes, but when they all run together, the
> run fails with OOM errors.
>
> At the beginning of each suite I create a new Spark session with the
> session builder:
>
>     val spark = sessionBuilder.getOrCreate()
>
> and at the end of each suite, I call the stop method:
>
>     spark.stop()
>
> I attached a profiler to the application, and it looks like broadcast
> objects are taking most of the memory:
> http://apache-spark-user-list.1001560.n3.nabble.com/file/n27910/profiler.png
>
> Since each test suite passes when run by itself,
> I think the broadcasts are leaking between the test suites.
>
> Any suggestions on how to resolve this?
>
> thanks

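For the per-suite create/stop pattern described in the quoted message, note
that getOrCreate() can hand back a previously registered session, which lets
state survive across suites. A minimal ScalaTest sketch of a per-suite
lifecycle that clears the registered sessions on teardown (the suite name,
the ScalaTest 3.x traits, and the local[2] master are assumptions for
illustration):

    import org.apache.spark.sql.SparkSession
    import org.scalatest.BeforeAndAfterAll
    import org.scalatest.funsuite.AnyFunSuite

    class PerSuiteSessionSuite extends AnyFunSuite with BeforeAndAfterAll {

      private var spark: SparkSession = _

      override def beforeAll(): Unit = {
        spark = SparkSession.builder()
          .master("local[2]")
          .appName("per-suite-session")
          .getOrCreate()
      }

      override def afterAll(): Unit = {
        // Clear the registered active/default sessions so the next
        // suite's getOrCreate() builds a fresh session instead of
        // reusing this one.
        SparkSession.clearActiveSession()
        SparkSession.clearDefaultSession()
        spark.stop()
      }

      test("session is usable inside the suite") {
        assert(spark.range(10).count() == 10)
      }
    }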