Hi.
I'm trying to configure a Spark standalone cluster with three master nodes
(bigdata1, bigdata2 and bigdata3) managed by ZooKeeper.
There seems to be a configuration problem, since every one of them claims to
be the cluster leader:
14/12/30 13:54:59 INFO Master: I have been elected leader! New state: ALIVE
The message above is logged by every master I start.
ZooKeeper is configured identically on all of them, as follows:
dataDir=/spark
The only difference is the myid file in the /spark directory, of course.
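For completeness, an ensemble configuration along these lines would look
roughly like the following sketch; only dataDir and the hostnames come from
the setup described here, while the timing values and the 2888/3888 peer and
election ports are assumptions based on common defaults:

```
# zoo.cfg - identical on all three nodes (sketch, values assumed)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/spark
clientPort=2181
server.1=bigdata1:2888:3888
server.2=bigdata2:2888:3888
server.3=bigdata3:2888:3888
```

Each node's /spark/myid would then hold the matching id (1 on bigdata1, 2 on
bigdata2, 3 on bigdata3). Without the server.N lines, each node runs as a
standalone ZooKeeper instance rather than as an ensemble.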
The masters are started with the following configuration:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoverymode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=bigdata1:2181,bigdata2:2181,bigdata3:2181"
I'm not setting the spark.deploy.zookeeper.dir property, since I'm relying on
its default value, /spark, which is the same path I configured as ZooKeeper's
dataDir, as mentioned above.
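For comparison, the standalone HA documentation spells these properties in
camelCase, and Java system properties are case-sensitive. A sketch of the
documented form, reusing the hosts and ports from the setup above (the
explicit zookeeper.dir line just restates its default):

```shell
# Documented property spelling (note the camelCase "recoveryMode"):
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=bigdata1:2181,bigdata2:2181,bigdata3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

Note that spark.deploy.zookeeper.dir names a znode path inside ZooKeeper,
which is unrelated to the filesystem path given as ZooKeeper's dataDir; the
two do not need to match.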
I would like to know whether there is anything else I have to configure to
make the masters behave correctly (only one master node active at a time).
With the current setup I can connect workers and applications to the whole
cluster; for instance, I can start a worker against it with:

spark-class org.apache.spark.deploy.worker.Worker spark://bigdata1:2181,bigdata2:2181,bigdata3:2181
But the worker registers with each of the masters independently, and if I
stop one of the masters, the worker keeps trying to re-register with it.
The notion of an active master is completely lost.
Do you have any idea?
Thanks a lot.
-Bob
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Standalone-Cluster-not-correctly-configured-tp20909.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.