In my limited understanding, there must be a single leader master in the
cluster. If there are multiple leaders, it will lead to an unstable cluster,
as each master will keep scheduling independently. You should use ZooKeeper
for HA, so that standby masters can vote to elect a new leader if the primary
fails.
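For reference, standalone-mode HA with ZooKeeper is enabled through the documented `spark.deploy.*` properties, typically set in `spark-env.sh` on every master node. The ZooKeeper hostnames and the znode directory below are placeholders; substitute your own ensemble:

```shell
# spark-env.sh on each master node (zk1..zk3 and /spark are placeholders)
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

With this in place, one master is elected leader and the others stand by; workers and applications registered with the leader fail over when it dies.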
Thanks for the response.
But no, this does not answer the question.
The question was: Is there a way (via some API call) to query the number
and type of daemons currently running in the Spark cluster.
Regards
On Sun, Apr 26, 2015 at 10:12 AM, ayan guha guha.a...@gmail.com wrote:
Suppose I have 5 nodes and I wish to maintain 1 Master and 2 Workers on each
node, so in total I will have 5 Masters and 10 Workers.
Now, to maintain that setup, I would like to query Spark via API calls for the
number of Masters and Workers currently available, and then take some action.
The Spark web UI offers a JSON interface with some of this information.
http://stackoverflow.com/a/29659630/877069
It's not an official API, so be warned that it may change unexpectedly
between versions, but you might find it helpful.
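As a minimal sketch of consuming that endpoint: the standalone master's web UI serves a `/json` page mirroring what the UI shows. The field names below (`workers`, `state`, `status`) are assumptions based on what the 1.x UI serves and, as noted, may change between versions, so the code guards for missing fields; the host is a placeholder:

```python
import json
from urllib.request import urlopen

def cluster_summary(master_ui="http://localhost:8080"):
    """Fetch the standalone master's /json page and summarize workers.

    This endpoint is not a stable, documented API, so treat every
    field as optional.
    """
    with urlopen(master_ui + "/json") as resp:
        state = json.load(resp)
    workers = state.get("workers", [])
    return {
        "alive_workers": sum(1 for w in workers if w.get("state") == "ALIVE"),
        "total_workers": len(workers),
        "master_status": state.get("status"),
    }

# Offline illustration with a payload shaped like the UI's response
# (field names are assumptions, not a documented contract):
sample = {
    "status": "ALIVE",
    "workers": [
        {"host": "node1", "state": "ALIVE"},
        {"host": "node2", "state": "DEAD"},
    ],
}
alive = sum(1 for w in sample["workers"] if w["state"] == "ALIVE")
print(alive, "of", len(sample["workers"]), "workers alive")
```

Note there is one such page per master, so with multiple masters you would poll each one and check its `status` to find the current leader.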
Nick
On Sun, Apr 26, 2015 at 9:46 AM
Not sure if there's a Spark-native way, but we've been using Consul for this.
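The Consul approach amounts to registering each daemon as a Consul service and counting the healthy instances via Consul's HTTP health endpoint. A small sketch, assuming Consul's default agent address and a hypothetical service name `spark-worker` (whatever name you registered the daemons under):

```python
import json
from urllib.request import urlopen

def healthy_instances(service, consul="http://localhost:8500"):
    """Count passing instances of a service via Consul's health endpoint.

    The service name (e.g. the hypothetical "spark-worker") is whatever
    the daemons were registered as; ?passing filters to healthy ones.
    """
    url = "{}/v1/health/service/{}?passing".format(consul, service)
    with urlopen(url) as resp:
        return len(json.load(resp))

# Offline illustration: shape of one entry the health endpoint returns
sample = [
    {"Node": {"Node": "node1", "Address": "10.0.0.1"},
     "Service": {"Service": "spark-worker", "Port": 7078},
     "Checks": [{"Status": "passing"}]},
]
print(len(sample), "healthy instance(s)")
```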
M
On Apr 26, 2015, at 5:17 AM, James King jakwebin...@gmail.com wrote:
Understood.
On 26 Apr 2015 19:17, James King jakwebin...@gmail.com wrote: