[
https://issues.apache.org/jira/browse/ZOOKEEPER-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191710#comment-13191710
]
Patrick Hunt commented on ZOOKEEPER-1367:
-----------------------------------------
I just noticed something that doesn't look good:
I started a 2-server ensemble, connected a client, then stopped/started the
leader. I noticed that:
1) the follower does not create a snapshot after reconnecting to the new leader
(the same server that led epoch 2 is re-elected as the leader for epoch 3)
2) the follower continues to write transactions from epoch 3 into the log file
from epoch 2 (see the log dump and the zxid decoding sketch below)
{noformat}
java -cp '/home/phunt/Downloads/zk1367/jars/*'
org.apache.zookeeper.server.LogFormatter log.200000001
ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
1/23/12 4:39:45 PM PST session 0x1350d25cc790000 cxid 0x0 zxid 0x200000001
createSession 30000
1/23/12 4:41:02 PM PST session 0x1350d25cc790000 cxid 0x2 zxid 0x300000001
create '/foo,#626172,v{s{31,s{'world,'anyone}}},F,1
1/23/12 4:44:10 PM PST session 0x1350d25cc790000 cxid 0x4 zxid 0x300000002
closeSession null
1/23/12 4:44:26 PM PST session 0x1350d283e690000 cxid 0x0 zxid 0x300000003
createSession 30000
1/23/12 4:44:43 PM PST session 0x1350d283e690000 cxid 0x2 zxid 0x300000004
closeSession null
1/23/12 4:44:51 PM PST session 0x2350d283e5c0000 cxid 0x0 zxid 0x300000005
createSession 30000
1/23/12 4:45:10 PM PST session 0x2350d283e5c0000 cxid 0x2 zxid 0x300000006
closeSession null
1/23/12 4:45:17 PM PST session 0x1350d283e690001 cxid 0x0 zxid 0x300000007
createSession 30000
EOF reached after 8 txns.
{noformat}
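For context when reading the dump above: a zxid packs the leader epoch in its
high 32 bits and a per-epoch counter in its low 32 bits, so the 0x3xxxxxxxx
entries are epoch-3 transactions sitting in a log file named for epoch 2
(log.200000001). A minimal standalone sketch of that decoding (ZooKeeper has
its own helper for this; the code below assumes only the bit layout):
{noformat}
// Minimal sketch: split a zxid into leader epoch (high 32 bits) and
// per-epoch counter (low 32 bits). Sample zxids are taken from the dump above.
public class ZxidDecode {
    static long epoch(long zxid)   { return zxid >>> 32; }
    static long counter(long zxid) { return zxid & 0xffffffffL; }

    public static void main(String[] args) {
        long[] zxids = { 0x200000001L, 0x300000001L, 0x300000007L };
        for (long z : zxids) {
            System.out.printf("zxid 0x%x -> epoch %d, counter %d%n",
                    z, epoch(z), counter(z));
        }
        // epoch 2 for the first txn, epoch 3 for the rest -- all in log.200000001
    }
}
{noformat}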
> Data inconsistencies and unexpired ephemeral nodes after cluster restart
> ------------------------------------------------------------------------
>
> Key: ZOOKEEPER-1367
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1367
> Project: ZooKeeper
> Issue Type: Bug
> Components: server
> Affects Versions: 3.4.2
> Environment: Debian Squeeze, 64-bit
> Reporter: Jeremy Stribling
> Priority: Blocker
> Fix For: 3.4.3
>
> Attachments: ZOOKEEPER-1367.tgz
>
>
> In one of our tests, we have a cluster of three ZooKeeper servers. We kill
> all three, and then restart just two of them. Sometimes we notice that on
> one of the restarted servers, ephemeral nodes from previous sessions do not
> get deleted, while on the other server they do. We are effectively running
> 3.4.2, though technically we are running 3.4.1 with the ZOOKEEPER-1333 patch
> applied manually, plus a 3.4.1 C client with the patches for ZOOKEEPER-1163.
> I noticed that when I connected using zkCli.sh to the first node (90.0.0.221,
> zkid 84), I saw only one znode in a particular path:
> {quote}
> [zk: 90.0.0.221:2888(CONNECTED) 0] ls /election/zkrsm
> [nominee0000000011]
> [zk: 90.0.0.221:2888(CONNECTED) 1] get /election/zkrsm/nominee0000000011
> 90.0.0.222:7777
> cZxid = 0x400000027
> ctime = Thu Jan 19 08:18:24 UTC 2012
> mZxid = 0x400000027
> mtime = Thu Jan 19 08:18:24 UTC 2012
> pZxid = 0x400000027
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0xa234f4f3bc220001
> dataLength = 16
> numChildren = 0
> {quote}
> However, when I connected to the second server (90.0.0.222, zkid 251) using
> zkCli.sh, I saw three znodes under that same path:
> {quote}
> [zk: 90.0.0.222:2888(CONNECTED) 2] ls /election/zkrsm
> nominee0000000006 nominee0000000010 nominee0000000011
> [zk: 90.0.0.222:2888(CONNECTED) 2] get /election/zkrsm/nominee0000000011
> 90.0.0.222:7777
> cZxid = 0x400000027
> ctime = Thu Jan 19 08:18:24 UTC 2012
> mZxid = 0x400000027
> mtime = Thu Jan 19 08:18:24 UTC 2012
> pZxid = 0x400000027
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0xa234f4f3bc220001
> dataLength = 16
> numChildren = 0
> [zk: 90.0.0.222:2888(CONNECTED) 3] get /election/zkrsm/nominee0000000010
> 90.0.0.221:7777
> cZxid = 0x30000014c
> ctime = Thu Jan 19 07:53:42 UTC 2012
> mZxid = 0x30000014c
> mtime = Thu Jan 19 07:53:42 UTC 2012
> pZxid = 0x30000014c
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0xa234f4f3bc220000
> dataLength = 16
> numChildren = 0
> [zk: 90.0.0.222:2888(CONNECTED) 4] get /election/zkrsm/nominee0000000006
> 90.0.0.223:7777
> cZxid = 0x200000cab
> ctime = Thu Jan 19 08:00:30 UTC 2012
> mZxid = 0x200000cab
> mtime = Thu Jan 19 08:00:30 UTC 2012
> pZxid = 0x200000cab
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x5434f5074e040002
> dataLength = 16
> numChildren = 0
> {quote}
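> As a quick cross-check of the divergence shown above, one can ask each live
> server for its view of the same path and diff the child lists. A minimal
> sketch using the standard Java client (the host:port values are taken from
> the zkCli.sh prompts above and are an assumption about the client port; the
> path is the one from this report):
> {noformat}
> // Lists /election/zkrsm on both live servers and prints the children that
> // only the second server reports (the stale ephemerals, if the bug hits).
> import org.apache.zookeeper.Watcher;
> import org.apache.zookeeper.ZooKeeper;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.TimeUnit;
>
> public class DiffChildren {
>     public static void main(String[] args) throws Exception {
>         String path = "/election/zkrsm";
>         List<String> a = children("90.0.0.221:2888", path);
>         List<String> b = children("90.0.0.222:2888", path);
>         Set<String> onlyOnB = new HashSet<>(b);
>         onlyOnB.removeAll(a);
>         System.out.println("90.0.0.221 sees: " + a);
>         System.out.println("90.0.0.222 sees: " + b);
>         System.out.println("only on .222   : " + onlyOnB);
>     }
>
>     static List<String> children(String hostPort, String path) throws Exception {
>         CountDownLatch connected = new CountDownLatch(1);
>         ZooKeeper zk = new ZooKeeper(hostPort, 30000, event -> {
>             if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
>                 connected.countDown();
>             }
>         });
>         try {
>             connected.await(10, TimeUnit.SECONDS); // wait for the session
>             return zk.getChildren(path, false);    // no watch, just a read
>         } finally {
>             zk.close();
>         }
>     }
> }
> {noformat}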
> These stale ephemeral nodes never went away for the lifetime of that server,
> for any client connected directly to it. Note that the cluster is still
> configured with all three servers, the third one (90.0.0.223, zkid 162) being
> down.
> I captured the data/snapshot directories for the two live servers. When I
> start a single-node server using each directory, I can briefly see that the
> inconsistent data is present in those logs, though the ephemeral nodes do
> seem to get (correctly) cleaned up soon after I start the server.
> I will upload a tar containing the debug logs and data directories from the
> failure. I think we can reproduce it regularly if you need more info.