Hi,

I have set up Spark on a cluster of 4 machines: 2 masters and 2 workers.
ZooKeeper handles leader election between the masters. Here is my
configuration.

conf/spark-env.sh contains:

export SPARK_MASTER_IP=master1,master2

conf/slaves contains:

worker1
worker2

conf/spark-defaults.conf contains:

spark.master=spark://master2:7077,master1:7077
spark.driver.memory=1g
spark.executor.memory=1g
spark.eventLog.enabled=true
spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties
spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties
spark.deploy.recoveryMode=ZOOKEEPER
spark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181
spark.deploy.zookeeper.dir=/spark
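My understanding of the spark.master value (a toy sketch on my part, not Spark's actual parser) is that the HA URL just lists both masters so drivers and workers can try each candidate in turn until they find the live leader:

```python
# Toy sketch (assumption: this mirrors, but is not, Spark's real URL parsing):
# an HA master URL simply enumerates candidate (host, port) pairs to try.
def parse_ha_master_url(url):
    prefix = "spark://"
    assert url.startswith(prefix)
    pairs = []
    for hostport in url[len(prefix):].split(","):
        host, port = hostport.split(":")
        pairs.append((host, int(port)))
    return pairs

print(parse_ha_master_url("spark://master2:7077,master1:7077"))
# [('master2', 7077), ('master1', 7077)]
```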

The spark-masters file contains:

master1
master2

For SSH communication, I have added the following to ~/.ssh/config:

Host 0.0.0.0
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no

Host localhost
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no

Host master2
  #User ubuntu
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no

Host master1
  #User ubuntu
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no

Host worker1
  #User ubuntu
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no

Host worker2
  #User ubuntu
  IdentityFile ~/.ssh/spark.key
  StrictHostKeyChecking no


This configuration worked well with Spark 2.0.2: I was able to submit Spark
jobs and see them in the Spark web UI.

Then, while working on the cluster, I suddenly could no longer submit
anything to Spark, and the web UI became unreachable.

Looking at the master's logs, I saw the following:

17/01/07 10:43:04 INFO Master: Driver submitted org.apache.spark.deploy.worker.DriverWrapper
17/01/07 10:43:04 INFO Master: Launching driver driver-20170107104304-0003 on worker worker-20170106162522-192.168.0.143-41738
17/01/07 10:43:07 INFO Master: Removing driver: driver-20170107104304-0003
17/01/07 11:11:21 INFO Master: Driver submitted org.apache.spark.deploy.worker.DriverWrapper
17/01/07 11:11:21 INFO Master: Launching driver driver-20170107111121-0004 on worker worker-20170106162522-192.168.0.143-41738
17/01/07 11:11:25 INFO Master: Removing driver: driver-20170107111121-0004
17/01/07 11:24:02 INFO Master: Driver submitted org.apache.spark.deploy.worker.DriverWrapper
17/01/07 11:24:02 INFO Master: Launching driver driver-20170107112402-0005 on worker worker-20170106162522-192.168.0.143-41738
17/01/07 11:24:12 INFO Master: Removing driver: driver-20170107112402-0005
17/01/07 11:39:05 INFO ClientCnxn: Client session timed out, have not heard from server in 26678ms for sessionid 0x35975540ed40003, closing socket connection and attempting reconnect
17/01/07 11:39:05 INFO ConnectionStateManager: State change: SUSPENDED
17/01/07 11:39:05 INFO ZooKeeperLeaderElectionAgent: We have lost leadership
17/01/07 11:39:05 ERROR Master: Leadership has been revoked -- master shutting down.
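If I read these lines right, the master was healthy until 11:39:05, when its ZooKeeper client reported 26678 ms of silence and expired the session, which in turn revoked leadership. A toy check of that arithmetic (the session-timeout value is purely my assumption; I could not find the negotiated value in the logs):

```python
# Toy illustration of the timeout arithmetic behind the "lost leadership"
# lines above. silence_ms comes from the ClientCnxn log line; the session
# timeout is an ASSUMPTION, not a value taken from my cluster.
silence_ms = 26678                   # "have not heard from server in 26678ms"
assumed_session_timeout_ms = 20000   # assumption, not from the logs

if silence_ms > assumed_session_timeout_ms:
    # ZooKeeper expires the session, the connection state goes SUSPENDED,
    # and the Spark master gives up leadership and shuts down.
    print("session expired -> leadership revoked")
```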

And the workers' logs show:

17/01/07 11:53:06 INFO Worker: Retrying connection to master (attempt # 15)
17/01/07 11:53:06 INFO Worker: Connecting to master ottawa41:7077...
17/01/07 11:53:06 WARN Worker: Failed to connect to master ottawa41:7077
org.apache.spark.SparkException: Exception thrown in awaitResult
  at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
  at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
  at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
  at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
  at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
  at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
  at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
  at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
  at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96)
  at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1$$anon$2.run(Worker.scala:272)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to connect to ottawa41/192.168.0.141:7077
  at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
  at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
  at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
  at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191)
  at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
  ... 4 more
Caused by: java.net.ConnectException: Connection refused: ottawa41/192.168.0.141:7077
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
  at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
  ... 1 more
17/01/07 11:54:08 ERROR Worker: RECEIVED SIGNAL TERM
17/01/07 11:54:08 INFO ShutdownHookManager: Shutdown hook called
17/01/07 11:54:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea79d13d-fc3e-4a6d-952e-cf76c700829d

This is really confusing, because when I look at ZooKeeper, it is up and
running.
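To be precise about "up and running": the quorum nodes still answer the standard ZooKeeper four-letter-word commands when I probe them from the Spark machines (using nc here is just my habit; any equivalent tool works):

```
echo ruok | nc zk1 2181   # a healthy node answers "imok"
echo stat | nc zk1 2181   # shows the node's mode (leader/follower) and clients
```

so I don't believe the ZooKeeper ensemble itself went down.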


Any clue please?
Thanks.
