Thanks Paul,
So your reply prevented me from looking in the wrong direction, but I am
back to my original problem with ZooKeeper:
"Leadership has been revoked -- master shutting down"

Can anyone provide some feedback or add to this?
Regards
Raghvendra
On 20-Jan-2016 2:31 pm, "Paul Leclercq" <paul.lecle...@tabmo.io> wrote:

> Hi Raghvendra and Spark users,
>
> I also have trouble activating my standby master when my first master is
> shut down (via ./sbin/stop-master.sh or via an instance shutdown), and I
> just want to share my thoughts with you.
>
> To answer your question Raghvendra: in *spark-env.sh*, if 2 IPs are set
> for SPARK_MASTER_IP (SPARK_MASTER_IP='W.X.Y.Z,A.B.C.D'), the standalone
> cluster cannot be launched.
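To illustrate (a sketch only; the IPs and the ZOOKEEPER_IP hostname are placeholders), each machine's spark-env.sh would carry only its own master's address, with the HA wiring living entirely in SPARK_DAEMON_JAVA_OPTS:

```shell
# conf/spark-env.sh on master machine 1 (on machine 2, use its own IP).
# A single address only -- a comma-separated list here prevents startup.
SPARK_MASTER_IP=W.X.Y.Z

# Identical on both masters, so they join the same election.
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=ZOOKEEPER_IP:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```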
>
> So I use only one IP there, as the Spark context can learn about the
> other masters another way, as written in the Standalone ZooKeeper HA
> <http://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper>
> doc: "you might start your SparkContext pointing to
> spark://host1:port1,host2:port2".
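For instance (a sketch; host1/host2, the application class, and the jar name are placeholders), an application can be submitted against both masters at once, and the driver registers with whichever one is the current leader:

```shell
# Submit against both masters; the driver fails over to the standby
# if leadership changes. host1/host2 are placeholder hostnames.
./bin/spark-submit \
  --master spark://host1:7077,host2:7077 \
  --class com.example.MyApp \
  my-app.jar
```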
>
> In my opinion, we should not have to set SPARK_MASTER_IP at all, as this
> is stored in ZooKeeper:
>
> you can launch multiple Masters in your cluster connected to the same
>> ZooKeeper instance. One will be elected “leader” and the others will remain
>> in standby mode.
>
> When starting up, an application or Worker needs to be able to find and
>> register with the current lead Master. Once it successfully registers,
>> though, it is “in the system” (i.e., stored in ZooKeeper).
>
>  -
> http://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper
>
> As I understand it, after a ./sbin/start-master.sh on both masters, one
> master will be elected leader, and the other will stand by.
> To launch the workers, we can use ./sbin/start-slave.sh
> spark://MASTER_ELECTED_IP:7077
> I don't think we can use ./sbin/start-all.sh, which uses the slaves
> file to launch workers and masters, as we cannot set 2 master IPs inside
> spark-env.sh
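Bringing the cluster up by hand could then look like this (a sketch, with placeholder hostnames; listing both masters in the worker's URL means the worker can re-register with the standby if the leader goes down):

```shell
# On master machine 1 AND master machine 2
# (both with the same SPARK_DAEMON_JAVA_OPTS ZooKeeper settings):
./sbin/start-master.sh

# On each worker machine, point the worker at both masters:
./sbin/start-slave.sh spark://host1:7077,host2:7077
```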
>
> My SPARK_DAEMON_JAVA_OPTS content:
>
> SPARK_DAEMON_JAVA_OPTS='-Dspark.deploy.recoveryMode=ZOOKEEPER
>> -Dspark.deploy.zookeeper.url=ZOOKEEPER_IP:2181
>> -Dspark.deploy.zookeeper.dir=/spark'
>
>
> A good thing to check whether everything went OK is the /spark folder on
> the ZooKeeper server. I could not find it on my server.
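One way to check this (a sketch; ZOOKEEPER_IP is a placeholder) is with the zkCli.sh client that ships with the ZooKeeper distribution:

```shell
# Connect to the ZooKeeper server and inspect the znodes.
bin/zkCli.sh -server ZOOKEEPER_IP:2181
# Then, at the zk prompt:
#   ls /
#   ls /spark
# If /spark is missing, the masters never registered with ZooKeeper --
# worth re-checking spark.deploy.zookeeper.url and the shell quoting
# of SPARK_DAEMON_JAVA_OPTS.
```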
>
> Thanks for reading,
>
> Paul
>
>
> 2016-01-19 22:12 GMT+01:00 Raghvendra Singh <raghvendra.ii...@gmail.com>:
>
>> Hi, there is one question: in spark-env.sh, should I specify all masters
>> for the parameter SPARK_MASTER_IP? I've already set SPARK_DAEMON_JAVA_OPTS
>> with the ZooKeeper configuration as specified in the Spark documentation.
>>
>> Thanks & Regards
>> Raghvendra
>>
>> On Wed, Jan 20, 2016 at 1:46 AM, Raghvendra Singh <
>> raghvendra.ii...@gmail.com> wrote:
>>
>>> Here's the complete master log on reproducing the error
>>> http://pastebin.com/2YJpyBiF
>>>
>>> Regards
>>> Raghvendra
>>>
>>> On Wed, Jan 20, 2016 at 12:38 AM, Raghvendra Singh <
>>> raghvendra.ii...@gmail.com> wrote:
>>>
>>>> Ok, I will try to reproduce the problem. Also, I don't think this is an
>>>> uncommon problem; I have been searching for it on Google for many days
>>>> and have found lots of questions but no answers.
>>>>
>>>> Do you know what kinds of settings Spark and ZooKeeper allow for
>>>> handling timeouts during leader election, etc., when one node is down?
>>>>
>>>> Regards
>>>> Raghvendra
>>>> On 20-Jan-2016 12:28 am, "Ted Yu" <yuzhih...@gmail.com> wrote:
>>>>
>>>>> Perhaps I don't have enough information to make further progress.
>>>>>
>>>>> On Tue, Jan 19, 2016 at 10:55 AM, Raghvendra Singh <
>>>>> raghvendra.ii...@gmail.com> wrote:
>>>>>
>>>>>> I currently do not have access to those logs, but there were only
>>>>>> about five lines before this error, and they were the same ones that
>>>>>> are usually present when everything works fine.
>>>>>>
>>>>>> Can you still help?
>>>>>>
>>>>>> Regards
>>>>>> Raghvendra
>>>>>> On 18-Jan-2016 8:50 pm, "Ted Yu" <yuzhih...@gmail.com> wrote:
>>>>>>
>>>>>>> Can you pastebin the master log from before the error showed up?
>>>>>>>
>>>>>>> The initial message was posted for Spark 1.2.0.
>>>>>>> Which release of Spark / ZooKeeper do you use?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> On Mon, Jan 18, 2016 at 6:47 AM, doctorx <raghvendra.ii...@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I am facing the same issue, with the given error:
>>>>>>>>
>>>>>>>> ERROR Master:75 - Leadership has been revoked -- master shutting
>>>>>>>> down.
>>>>>>>>
>>>>>>>> Can anybody help? Any clue will be useful. Should I change
>>>>>>>> something in
>>>>>>>> the Spark cluster or ZooKeeper? Is there any setting in Spark
>>>>>>>> which can help me?
>>>>>>>>
>>>>>>>> Thanks & Regards
>>>>>>>> Raghvendra
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> View this message in context:
>>>>>>>> http://apache-spark-user-list.1001560.n3.nabble.com/spark-1-2-0-standalone-ha-zookeeper-tp21308p25994.html
>>>>>>>> Sent from the Apache Spark User List mailing list archive at
>>>>>>>> Nabble.com.
>>>>>>>>
>>>>>>>>
>>>>>>>> ---------------------------------------------------------------------
>>>>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>
>>
>
