I think I've identified the problem.
In CommitProcessor line 203, we set nextPending to null. But we never
set it to null in the else case below, on lines 205-210. It doesn't appear
to be cleared anywhere else in the file, so the processor thinks it is
always waiting for a commit and will never continue.
That seems to match the stack, anyway. I'm not sure yet how this causes the
xid mismatch, though.
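To make that concrete, here is roughly the shape of that branch as I read it
(a paraphrase from memory, not the exact source; "pending" is just a local
name in the sketch):
{noformat}
// Inside CommitProcessor.run(), after polling a request from committedRequests:
Request pending = nextPending.get();
if (pending != null
        && pending.sessionId == request.sessionId
        && pending.cxid == request.cxid) {
    // Commit matches the request we were waiting for:
    // this is the ~line 203 case where nextPending is cleared.
    nextPending.set(null);
    sendToNextProcessor(pending);
} else {
    // Commit for a request we are not waiting on (lines ~205-210):
    // it is forwarded, but nextPending is never reset here, so a request
    // stuck in it leaves the processor "waiting for commit" indefinitely.
    sendToNextProcessor(request);
}
{noformat}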
On Jan 24, 2014 10:35 PM, "Thawan Kooburat (JIRA)" <j...@apache.org> wrote:

>
>     [
> https://issues.apache.org/jira/browse/ZOOKEEPER-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881636#comment-13881636]
>
> Thawan Kooburat commented on ZOOKEEPER-1863:
> --------------------------------------------
>
> I have seen a CommitProcessor getting stuck in our prod (which runs our
> internal branch). I spent a few days digging into the problem but couldn't
> locate the root cause.
>
> The sequence of actions that you put in the description is very unlikely to
> occur in quorum mode.  First, the Follower/ObserverRequestProcessor, which
> sits in front of the CommitProcessor, puts a request into queuedRequests even
> before sending it out to the leader.  It needs at least a network round
> trip (or a full quorum vote) before the same request comes back from the
> leader and gets put into committedRequests.  This is an assumption that even
> the original CommitProcessor (prior to ZOOKEEPER-1505) relied on. However, a
> combination of bad thread scheduling and a long GC pause might break this
> assumption.
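> To illustrate the ordering I mean, the follower side is roughly (a
> paraphrase, not the exact FollowerRequestProcessor code):
> {noformat}
> // The request is handed to the CommitProcessor first...
> nextProcessor.processRequest(request);   // lands in queuedRequests
> // ...and only then forwarded to the leader, so the matching commit cannot
> // show up in committedRequests until at least one round trip (or a full
> // quorum vote) later.
> zks.getFollower().request(request);
> {noformat}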
>
> A sync request is special: unlike other write requests, it doesn't
> require quorum voting. But I still don't think it matters in this case.
>
> Again, I saw this in prod but I am unable to repro it. I did add a
> background thread to detect a request stuck in nextPending for an extended
> period of time and kill the server if that is the case.  I can post the
> patch if we are unable to find the root cause.
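> The watchdog is roughly the following (illustrative only; the timeout
> constant, logging, and thread name are made up here, not the actual patch):
> {noformat}
> // Kill the server if a request sits in nextPending for too long.
> Thread watchdog = new Thread(new Runnable() {
>     public void run() {
>         Request lastSeen = null;
>         long since = System.currentTimeMillis();
>         while (!stopped) {
>             Request pending = nextPending.get();
>             if (pending != lastSeen) {
>                 lastSeen = pending;
>                 since = System.currentTimeMillis();
>             } else if (pending != null
>                     && System.currentTimeMillis() - since > STUCK_TIMEOUT_MS) {
>                 LOG.error("Request stuck in nextPending: " + pending);
>                 System.exit(1);
>             }
>             try {
>                 Thread.sleep(1000);
>             } catch (InterruptedException e) {
>                 return;
>             }
>         }
>     }
> }, "nextPending-watchdog");
> watchdog.setDaemon(true);
> watchdog.start();
> {noformat}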
>
> You can also capture a heap dump of the server to inspect which request gets
> stuck (at nextPending) and correlate it with the possible events.
>
>
>
>
> > Race condition in commit processor leading to out of order request
> completion, xid mismatch on client.
> >
> ------------------------------------------------------------------------------------------------------
> >
> >                 Key: ZOOKEEPER-1863
> >                 URL:
> https://issues.apache.org/jira/browse/ZOOKEEPER-1863
> >             Project: ZooKeeper
> >          Issue Type: Bug
> >          Components: server
> >    Affects Versions: 3.5.0
> >            Reporter: Dutch T. Meyer
> >            Priority: Blocker
> >         Attachments: stack.17512
> >
> >
> > In CommitProcessor.java processor, if we are at the primary request
> handler on line 167:
> > {noformat}
> >                 while (!stopped && !isWaitingForCommit() &&
> >                        !isProcessingCommit() &&
> >                        (request = queuedRequests.poll()) != null) {
> >                     if (needCommit(request)) {
> >                         nextPending.set(request);
> >                     } else {
> >                         sendToNextProcessor(request);
> >                     }
> >                 }
> > {noformat}
> > A request can be handled in this block and be quickly processed and
> completed on another thread. If queuedRequests is empty, we then exit the
> block. Next, before this thread makes any more progress, we can get 2 more
> requests placed on queuedRequests for the processor: a get_children (say),
> and a sync. Then, if we are very unlucky, the sync request can complete
> and this object's commit() routine is called (from
> FollowerZooKeeperServer), which places the sync request on the previously
> empty committedRequests queue. At that point, this thread continues.
> > We reach line 182, which is a check on sync requests.
> > {noformat}
> >                 if (!stopped && !isProcessingRequest() &&
> >                     (request = committedRequests.poll()) != null) {
> > {noformat}
> > Here we are not processing any requests, because the original request
> has completed. We haven't dequeued either the read or the sync request in
> this processor. Next, the poll above will pull the sync request off the
> queue, and in the following block, the sync will get forwarded to the next
> processor.
> > This is a problem because the read request hasn't been forwarded yet, so
> requests are now out of order.
> > I've been able to reproduce this bug reliably by injecting a
> Thread.sleep(5000) between the two blocks above to make the race condition
> far more likely (a sketch of the injection follows the client snippet
> below), and then running the following in a client program:
> > {noformat}
> >         zoo_aget_children(zh, "/", 0, getchildren_cb, NULL);
> >         //Wait long enough for queuedRequests to drain
> >         sleep(1);
> >         zoo_aget_children(zh, "/", 0, getchildren_cb, &th_ctx[0]);
> >         zoo_async(zh, "/", sync_cb, &th_ctx[0]);
> > {noformat}
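> > The server-side injection is just a sleep dropped between the two quoted
> blocks, roughly as follows; it exists only to widen the race window.
> > {noformat}
> >                 // injected between the queuedRequests drain loop and the
> >                 // committedRequests poll, purely to widen the race window
> >                 try {
> >                     Thread.sleep(5000);
> >                 } catch (InterruptedException e) {
> >                     Thread.currentThread().interrupt();
> >                 }
> > {noformat}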
> > When this bug is triggered, 3 things can happen:
> > 1) Clients will see requests complete out of order and fail on xid
> mismatches (a sketch of the client-side check follows this list).
> > 2) Kazoo in particular doesn't handle this runtime exception well, and
> can orphan outstanding requests.
> > 3) I've seen ZooKeeper servers deadlock, likely because the commit
> cannot be completed, which can wedge the commit processor.
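> > The client-side check that trips on the xid mismatch is roughly the
> following (paraphrased from the Java client's reply handling, not the
> exact source):
> > {noformat}
> >         // The client pops the oldest outstanding request and expects the
> >         // reply's xid to match it; an out-of-order completion fails here.
> >         Packet packet = pendingQueue.remove();
> >         if (packet.requestHeader.getXid() != replyHdr.getXid()) {
> >             packet.replyHeader.setErr(
> >                     KeeperException.Code.CONNECTIONLOSS.intValue());
> >             throw new IOException("Xid out of order. Got Xid "
> >                     + replyHdr.getXid() + " expected Xid "
> >                     + packet.requestHeader.getXid());
> >         }
> > {noformat}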
>
>
>