I guess it "needs" to be this way to benefit from caching of RDDs in
memory. It would be nice, however, if the RDD cache could be dissociated
from the JVM heap, so that in cases where garbage collection is difficult
to tune, one could choose to discard the JVM and run the next operation in
a fresh one.
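
For what it's worth, here is a minimal sketch of what that could look like
with StorageLevel.OFF_HEAP (an assumption on my part: it requires a Spark
build that supports it, which in early releases meant a configured Tachyon
deployment backing the off-heap store; the object and app names below are
made up):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object OffHeapCacheSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("off-heap-cache"))

        val rdd = sc.parallelize(1 to 1000000)

        // Cached blocks are kept outside the garbage-collected heap, so
        // GC-heavy workloads do not churn the cache. With the
        // Tachyon-backed implementation the blocks live in a separate
        // process, so in principle a replaced executor JVM could still
        // read them -- close to what is wished for above.
        rdd.persist(StorageLevel.OFF_HEAP)

        println(rdd.count())
        sc.stop()
      }
    }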


On Mon, May 19, 2014 at 10:06 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:

> They’re tied to the SparkContext (application) that launched them.
>
> Matei
>
> On May 19, 2014, at 8:44 PM, Koert Kuipers <ko...@tresata.com> wrote:
>
> from looking at the source code i see executors run in their own jvm
> subprocesses.
>
> how long do they live for? as long as the worker/slave? or are they tied
> to the sparkcontext and live/die with it?
>
> thx
>
>
>
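
To make the lifecycle concrete, a small sketch of the behavior Matei
describes (the object and app names are made up): executor JVMs are
launched for the application and torn down by the cluster manager when its
SparkContext stops.

    import org.apache.spark.{SparkConf, SparkContext}

    object ExecutorLifetimeDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("executor-lifetime-demo"))

        // Executor JVMs exist now, serving only this application.
        val sum = sc.parallelize(1 to 1000).reduce(_ + _)
        println(s"sum = $sum")

        // Stopping the context ends the application; the executors
        // launched for it are killed, and any heap-cached RDDs go with them.
        sc.stop()
      }
    }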
