We are starting to use Spark, but we don't have any existing big-data
infrastructure, so we decided to set up the standalone cluster rather than
mess around with YARN or Mesos.

But it appears that the driver program has to stay up on the client for the
full duration of the job ("client mode").

What is the simplest way to set up "cluster" deploy mode, so that our client
boxes can submit jobs and then move on to the other work they need to do,
without keeping a potentially long-running Java process up?
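From my reading of the spark-submit docs, something like the following might
be what we want (the master host, class name, and jar path below are made-up
placeholders for our setup):

```shell
# --deploy-mode cluster asks the standalone master to launch the driver on
# one of the workers, so spark-submit on the client box returns as soon as
# the job is handed off, instead of hosting the driver itself.
# Note: the jar path must be reachable from the workers, not just the client.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --class com.example.OurJob \
  --supervise \
  /shared/path/to/our-job.jar
```

If I understand correctly, the optional --supervise flag additionally tells
the standalone master to restart the driver if it exits with a non-zero
status. Is that roughly right, or is there more setup involved?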

Thanks,
Chris
