For 1, you need both Zeppelin web HA and Zeppelin daemon HA.
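Since Zeppelin itself has no built-in failover, one common pattern is to run two instances behind a load balancer in active/backup mode. A minimal HAProxy sketch (zeppelin1/zeppelin2 and port 8080 are placeholders for your own hosts; this is an assumption about your setup, not something Zeppelin ships with):

```
# haproxy.cfg sketch: front two Zeppelin instances, active/backup.
# zeppelin1/zeppelin2 are hypothetical hostnames; 8080 is the
# default Zeppelin web port.
frontend zeppelin_front
    bind *:8080
    mode http
    default_backend zeppelin_back

backend zeppelin_back
    mode http
    server z1 zeppelin1:8080 check
    server z2 zeppelin2:8080 check backup
```

Note that each instance still has its own interpreter processes and notebook cache, so failover is only useful if the notebooks live in shared storage (point 2).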
For 2, I guess you could use HDFS if you implement the notebook storage
interface for HDFS, but I am not sure.
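For illustration, notebook storage is selected in zeppelin-site.xml; the S3 backend is the one mentioned in this thread, and an HDFS-backed implementation of the same interface would be plugged in the same way. The bucket and user values below are placeholders, not defaults:

```xml
<!-- zeppelin-site.xml sketch: choose the notebook storage backend.
     S3NotebookRepo stores notebooks in S3; bucket/user values here
     are placeholders for your own environment. -->
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.S3NotebookRepo</value>
</property>
<property>
  <name>zeppelin.notebook.s3.bucket</name>
  <value>my-zeppelin-bucket</value>
</property>
<property>
  <name>zeppelin.notebook.s3.user</name>
  <value>zeppelin</value>
</property>
```

A custom HDFS repo would presumably be dropped in by pointing `zeppelin.notebook.storage` at its class name instead.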
For 3, I mean that if you connect to an external cluster, for example a
Spark cluster, you need to make sure that cluster is itself HA. Otherwise
Zeppelin will be running, but your notebooks will fail because no Spark
cluster is available.
HTH
Eran


On Tue, 5 Apr 2016 at 20:20 ashish rawat <dceash...@gmail.com> wrote:

> Thanks Eran for your reply.
> For 1) I am assuming that it would be similar to HA of any other web
> application, i.e. running multiple instances and switching to the backup
> server when the master is down. Is that not the case?
> For 2) is it also possible to save them on HDFS?
> Can you please explain 3? Are you referring to the interpreter config? If I
> am using the Spark interpreter and submitting jobs through it, and the
> Zeppelin master node goes down, then what would prevent a slave node from
> pointing to the same cluster and submitting jobs?
>
> Regards,
> Ashish
>
> On Tue, Apr 5, 2016 at 10:08 PM, Eran Witkon <eranwit...@gmail.com> wrote:
>
>> I would say you need to account for these things
>> 1) availability of the Zeppelin daemon
>> 2) availability of the notebook files
>> 3) availability of the interpreters used
>>
>> For 1, I don't know of an out-of-the-box solution.
>> For 2, any HA storage will do: S3 or any HA external mounted disk.
>> For 3, it is up to the interpreter and your big data HA solution.
>>
>> On Tue, 5 Apr 2016 at 19:29 ashish rawat <dceash...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Is there a suggested architecture for running Zeppelin in high
>>> availability mode? The only option I could find was saving notebooks to S3.
>>> Are there any options if one is not using AWS?
>>>
>>> Regards,
>>> Ashish
>>>
>>
>
