That was it! Thanks so much, Andreas. I can't believe I had overlooked that
drop-down in the interpreter settings. Mohit and Mich probably assumed I
had already tried that.

Thanks everyone.

Mark

On Thu, Oct 6, 2016 at 8:35 AM, Andreas Lang <andreas.l...@aquilainsight.com> wrote:

> Hi Mark,
>
> You may want to check the Spark interpreter settings. In the most recent
> version of Zeppelin you can set it to shared, isolated, or scoped.
>
> Shared: a single interpreter and Spark context for all notebooks (hence the
> queuing you see)
> Isolated: every notebook has its own interpreter and its own Spark context
> Scoped: every notebook has its own interpreter, but they share a single
> Spark context
> https://zeppelin.apache.org/docs/latest/interpreter/spark.html
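>
> A quick way to confirm which mode you are actually in (assuming the default
> %spark interpreter) is to run the same paragraph in two different notebooks
> and compare the output:
>
> %spark
> // Shared/scoped: both notebooks print the same id (one Spark context).
> // Isolated: each notebook prints a different id (one context per note).
> println(sc.applicationId)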
>
> Isolated is the most stable for what you want to do, and shared is the most
> resource-efficient for the machine you run Zeppelin on.
>
> Mohit's comment might be important if you have
> spark.dynamicAllocation.enabled set to true and no limits on the number of
> executors or their resources.
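>
> In that case you could cap it in the interpreter properties (or in
> spark-defaults.conf). These are standard Spark properties; the values below
> are only illustrative placeholders:
>
> spark.dynamicAllocation.enabled true
> spark.shuffle.service.enabled true
> spark.dynamicAllocation.minExecutors 1
> spark.dynamicAllocation.maxExecutors 20
> spark.executor.memory 4g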
>
> Andreas
>
> On Thu, 6 Oct 2016 at 16:28 Mark Libucha <mlibu...@gmail.com> wrote:
>
>> Mich, thanks for the suggestion. I tried your settings, but they did not
>> solve the problem.
>>
>> I'm running in yarn-client mode, not local or standalone, so the
>> resources in the Spark cluster (which is very large) should not be an
>> issue. Right?
>>
>> The problem seems to be that Zeppelin is not submitting the second job to
>> the Spark cluster.
>>
>
