Hello Pradeep,

Quoting from https://spark.apache.org/docs/0.9.0/spark-standalone.html:

In order to schedule new applications or add Workers to the cluster, they
need to know the IP address of the current leader. This can be accomplished
by simply passing in a list of Masters where you used to pass in a single
one. For example, you might start your SparkContext pointing to
spark://host1:port1,host2:port2. This would cause your SparkContext to try
registering with both Masters - if host1 goes down, this configuration
would still be correct as we'd find the new leader, host2.
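Concretely, the multi-master list goes straight into the master URL you hand
to SparkContext. A minimal Scala sketch, assuming the hostnames host1/host2
from the quote above and the default master port 7077:

    import org.apache.spark.{SparkConf, SparkContext}

    // List every master in the URL. The context tries to register with
    // both; only the current leader accepts, and the application fails
    // over to the other master if the leader dies.
    val conf = new SparkConf()
      .setMaster("spark://host1:7077,host2:7077")
      .setAppName("ha-example")
    val sc = new SparkContext(conf)

The masters themselves discover each other through ZooKeeper rather than
through any direct master-to-master configuration: per the same page, you
point each master (and each worker's master URL, in the same comma-separated
form) at the ensemble by setting spark.deploy.recoveryMode=ZOOKEEPER and
spark.deploy.zookeeper.url in SPARK_DAEMON_JAVA_OPTS in spark-env.sh.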

Thanks,

Jagat Singh


On Thu, Apr 10, 2014 at 8:08 AM, Pradeep Ch <pradeep.chanum...@gmail.com> wrote:

> Hi,
>
> I want to enable Spark Master HA in Spark. The documentation specifies that
> we can do this with the help of ZooKeeper. But what I am worried about is how
> to configure one master with the other, and similarly how do workers know
> that they have two masters? Where do you specify the multi-master information?
>
> Thanks for the help.
>
> Thanks,
> Pradeep
>
