I strongly recommend spawning a new process for the Spark jobs. Much
cleaner separation: your driver program won't be clobbered if the Spark job
dies, and it can even watch for failures and restart the job.

In the Scala standard library, the sys.process package has classes for
constructing and interoperating with external processes. Perhaps Java has
something similar these days?
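
Something like this untested sketch, using sys.process to launch
spark-submit and restart on a non-zero exit code (the class name, master
URL, jar, and retry count below are just placeholders):

    import scala.sys.process._

    object SparkJobLauncher {
      // Placeholder spark-submit invocation; substitute your own main class,
      // master URL, and assembly jar.
      val submitCmd = Seq(
        "spark-submit",
        "--class", "com.example.MyJob",
        "--master", "spark://master:7077",
        "my-job-assembly.jar")

      // Run the command, blocking until it exits; retry on failure.
      def runWithRetries(retriesLeft: Int): Int = {
        val exitCode = submitCmd.!   // returns the process exit code
        if (exitCode == 0 || retriesLeft == 0) exitCode
        else {
          Console.err.println(s"Spark job exited with $exitCode; restarting")
          runWithRetries(retriesLeft - 1)
        }
      }

      def main(args: Array[String]): Unit =
        sys.exit(runWithRetries(retriesLeft = 3))
    }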

dean

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Tue, Apr 21, 2015 at 2:15 PM, Steve Loughran <ste...@hortonworks.com>
wrote:

>
>  On 21 Apr 2015, at 17:34, Richard Marscher <rmarsc...@localytics.com>
> wrote:
>
> - There are System.exit calls built into Spark as of now that could kill
> your running JVM. We have shadowed some of the most offensive bits within
> our own application to work around this. You'd likely want to do that or
> maintain your own Spark fork. For example, if the SparkContext can't connect
> to your cluster master node when it is created, it will call System.exit.
>
>
> People can block "errant" System.exit calls by running under a
> SecurityManager. Less than ideal (and there's a small performance hit), but
> possible.
>
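
For reference, here's a rough, untested sketch of that SecurityManager
approach in Scala; the exception type and the try/catch around SparkContext
creation are just illustrative:

    import java.security.Permission

    // Turns System.exit calls into exceptions instead of killing the JVM.
    class ExitTrappedException(val status: Int)
      extends SecurityException(s"System.exit($status) blocked")

    class NoExitSecurityManager extends SecurityManager {
      override def checkExit(status: Int): Unit =
        throw new ExitTrappedException(status)

      // Allow everything else so the rest of the application is unaffected.
      override def checkPermission(perm: Permission): Unit = ()
      override def checkPermission(perm: Permission, context: AnyRef): Unit = ()
    }

    object Driver {
      def main(args: Array[String]): Unit = {
        // Install before creating the SparkContext so an errant System.exit
        // surfaces as a catchable exception in the driver.
        System.setSecurityManager(new NoExitSecurityManager)
        try {
          // ... create the SparkContext and run jobs here ...
        } catch {
          case e: ExitTrappedException =>
            Console.err.println(s"Blocked System.exit(${e.status})")
        }
      }
    }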
