2016-03-24 9:54 GMT+01:00 Max Schmidt <m...@datapath.io>:
> we're using the Java API (1.6.0) with a ScheduledExecutor that
> continuously submits a Spark job to a standalone cluster.
I'd recommend Scala.

> After each job we close the JavaSparkContext and create a new one.
Why do that? You can happily reuse it. I'm pretty sure that is also
what causes the other problems: you have a race condition between
waiting for the job to finish and stopping the context.
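
Roughly what I mean, as a sketch (the master URL, app name, and
scheduling interval below are placeholders, not taken from your setup):

    import java.util.Arrays;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RecurringJob {
        public static void main(String[] args) {
            // Create the context once and keep it for the lifetime of
            // the application. "spark://master:7077" is a placeholder.
            SparkConf conf = new SparkConf()
                    .setAppName("recurring-job")
                    .setMaster("spark://master:7077");
            final JavaSparkContext sc = new JavaSparkContext(conf);

            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            // Each run submits a job to the same, long-lived context;
            // the count() here just stands in for your actual job.
            scheduler.scheduleWithFixedDelay(() -> {
                long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
                System.out.println("count = " + count);
            }, 0, 1, TimeUnit.MINUTES);

            // Stop the context exactly once, on shutdown, and only
            // after the scheduler has drained - this avoids the race
            // between a running job and sc.stop().
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                scheduler.shutdown();
                try {
                    scheduler.awaitTermination(1, TimeUnit.MINUTES);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                sc.stop();
            }));
        }
    }

Because the single-threaded scheduler runs the submissions
sequentially, no job can still be in flight when sc.stop() is called
after awaitTermination returns.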
