[
https://issues.apache.org/jira/browse/ZOOKEEPER-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558594#comment-14558594
]
Alexander Shraer commented on ZOOKEEPER-2172:
---------------------------------------------
Michi, I agree that it's weird, and again I see this 0xffffffffffffffff round
number, which I think causes the other server to ignore the leader's messages
and keep looking for a leader even though server 1 is leading (without timing
out, for some reason). This looks related to ZOOKEEPER-1732 and ZOOKEEPER-1805.
[~fpj], could you take a look?
It also seems like a lot of client sessions are being established and
destroyed (clients connecting and disconnecting).
In particular, when the reconfig adding server 3 happens (12:33:21,797 on
server 1), the client session 0x14d8b08424a0014 (the client that submitted the
reconfig) gets closed in the middle of the operation. Then the connection to
server 2 is suddenly closed with this error (on server 2):
2015-05-25 12:33:25,786 [myid:2] - WARN
[QuorumPeer[myid=2]/10.0.0.2:1300:Follower@92] - Exception when following the
leader java.net.SocketTimeoutException: Read timed out
Could it be that terminating a client session in the middle of an op messes up
the server-to-server connections?
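For reference, the reconfig in question would have been submitted roughly like
this, assuming the 3.5.0 Java client where the incremental reconfig call lives
on org.apache.zookeeper.ZooKeeper (it moved to ZooKeeperAdmin in later 3.5
releases). This is only a sketch of the scenario: the connect string, ports,
and session handling are placeholders, not taken from the attached logs.
{code:java}
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class AddThirdParticipant {
    public static void main(String[] args) throws Exception {
        // Connect to the existing two-server ensemble; this session plays the
        // role of the "client that submitted the reconfig" discussed above.
        ZooKeeper zk = new ZooKeeper("10.0.0.1:2181,10.0.0.2:2181", 30000,
                new Watcher() {
                    @Override
                    public void process(WatchedEvent event) { /* no-op */ }
                });

        // Incremental reconfig: add server 3 as a participant, remove nobody,
        // pass no full membership list, and -1 = don't condition on a
        // particular config version.
        String joining = "server.3=10.0.0.3:2888:3888:participant;2181";
        Stat stat = new Stat();
        byte[] newConfig = zk.reconfig(joining, null, null, -1, stat);
        System.out.println("new config: " + new String(newConfig));

        zk.close();
    }
}
{code}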
> Cluster crashes when reconfig a new node as a participant
> ---------------------------------------------------------
>
> Key: ZOOKEEPER-2172
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2172
> Project: ZooKeeper
> Issue Type: Bug
> Components: leaderElection, quorum, server
> Affects Versions: 3.5.0
> Environment: Ubuntu 12.04 + java 7
> Reporter: Ziyou Wang
> Priority: Critical
> Attachments: node-1.log, node-2.log, node-3.log,
> zoo.cfg.dynamic.10000005d, zoo.cfg.dynamic.next, zookeeper-1.log,
> zookeeper-2.log, zookeeper-3.log
>
>
> The operations are quite simple: start three zk servers one by one, then
> reconfig the cluster to add the new one as a participant. When I add the
> third one, the zk cluster may enter a weird state from which it cannot recover.
>
> I found “2015-04-20 12:53:48,236 [myid:1] - INFO [ProcessThread(sid:1
> cport:-1)::PrepRequestProcessor@547] - Incremental reconfig” in the node-1
> log, so the first node received the reconfig cmd at 12:53:48. Later, it logged
> “2015-04-20 12:53:52,230 [myid:1] - ERROR
> [LearnerHandler-/10.0.0.2:55890:LearnerHandler@580] - Unexpected exception
> causing shutdown while sock still open” and “2015-04-20 12:53:52,231 [myid:1]
> - WARN [LearnerHandler-/10.0.0.2:55890:LearnerHandler@595] - ******* GOODBYE
> /10.0.0.2:55890 ********”. From then on, the first and second nodes rejected
> all client connections and the third node didn’t join the cluster as a
> participant. The whole cluster was down.
>
> When the problem happened, all three nodes were still using the same dynamic
> config file, zoo.cfg.dynamic.10000005d, which only contained the first two
> nodes. But there was another, unused dynamic config file in the node-1
> directory, zoo.cfg.dynamic.next, which already contained all three nodes.
>
> When I extended the waiting time between starting the third node and
> reconfiguring the cluster, the problem didn’t show up again. So it is
> probably a race condition.
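For context, the dynamic configuration files referred to in the quoted
description look roughly like this. The 10.0.0.x hosts match the logs, but the
ports and roles below are illustrative placeholders, not copied from the
attachments:
{noformat}
# zoo.cfg.dynamic.10000005d -- the two-server membership all nodes still used
server.1=10.0.0.1:2888:3888:participant;2181
server.2=10.0.0.2:2888:3888:participant;2181

# zoo.cfg.dynamic.next -- the proposed membership found only on node 1
server.1=10.0.0.1:2888:3888:participant;2181
server.2=10.0.0.2:2888:3888:participant;2181
server.3=10.0.0.3:2888:3888:participant;2181
{noformat}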