[ https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13179246#comment-13179246 ]

Jimmy Xiang commented on HBASE-5081:
------------------------------------

@Prakash, this patch does not seem to fix the problem.  Assume splitLog fails
and the deleteNode is queued up, then splitLog is retried and
createTaskIfAbsent is called.  The failed task is still in the tasks map, so
its batch info is updated (batch.installed++) and a createNode is queued up.
Now the deleteNode succeeds, and in the deleteNodeSuccess callback the task is
removed from the tasks map, so the retried task becomes an orphan.
batch.installed will never equal batch.done + batch.err, so splitLog will hang.
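
To make the interleaving concrete, here is a minimal, self-contained model of
the sequence above.  This is a sketch only: the Task/TaskBatch classes, the
znode path, and the counter handling are simplified stand-ins, not the actual
SplitLogManager code.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Simplified model of the deleteNode-vs-retry race described above.
    public class DeleteNodeRace {
        static class TaskBatch { int installed, done, err; }
        static class Task { TaskBatch batch; }

        static final ConcurrentMap<String, Task> tasks = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            // Attempt 1: task installed, then fails; an async deleteNode is
            // queued but has NOT run yet.
            TaskBatch batch = new TaskBatch();
            Task t = new Task();
            t.batch = batch;
            tasks.put("/hbase/splitlog/wal-1", t);
            batch.installed++;
            batch.err++;  // the task failed

            // Attempt 2 (retry): createTaskIfAbsent finds the stale entry,
            // reuses it, and bumps the new batch's installed count.
            TaskBatch retryBatch = new TaskBatch();
            Task stale = tasks.get("/hbase/splitlog/wal-1");
            if (stale != null) {          // present only because delete is late
                stale.batch = retryBatch;
                retryBatch.installed++;   // installed = 1
            }

            // The queued delete now completes: deleteNodeSuccess removes the
            // task, so the retried task is orphaned and never counted.
            tasks.remove("/hbase/splitlog/wal-1");

            // waitForSplittingCompletion-style check: never satisfied.
            System.out.printf("installed=%d done=%d err=%d -> hang=%b%n",
                retryBatch.installed, retryBatch.done, retryBatch.err,
                retryBatch.installed != retryBatch.done + retryBatch.err);
        }
    }

Running it prints installed=1 done=0 err=0 -> hang=true, i.e. the retry's
batch can never complete.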

Can we delete the task from the tasks map in the deleteNode call, before it
calls ZooKeeper's delete?
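
Something along these lines is what I have in mind.  This is only a
hypothetical shape of the change (SafeDeleteSketch and its fields are
illustrative, not the real deleteNode), built on the stock async ZooKeeper
delete:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import org.apache.zookeeper.AsyncCallback;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical shape of the suggestion: remove the task from the
    // in-memory map BEFORE the ZooKeeper delete is issued, so a concurrent
    // splitLog retry cannot latch onto a doomed entry.
    class SafeDeleteSketch {
        private final ConcurrentMap<String, Object> tasks = new ConcurrentHashMap<>();
        private final ZooKeeper zk;
        private final AsyncCallback.VoidCallback deleteCallback;

        SafeDeleteSketch(ZooKeeper zk, AsyncCallback.VoidCallback cb) {
            this.zk = zk;
            this.deleteCallback = cb;
        }

        void deleteNode(String path) {
            tasks.remove(path);  // first: leave no stale entry to reuse
            // Then the async ZK delete (version -1 = any version); the
            // success callback no longer needs to touch the tasks map.
            zk.delete(path, -1, deleteCallback, path);
        }
    }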
                
> Distributed log splitting deleteNode races against splitLog retry 
> ------------------------------------------------------------------
>
>                 Key: HBASE-5081
>                 URL: https://issues.apache.org/jira/browse/HBASE-5081
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.92.0, 0.94.0
>            Reporter: Jimmy Xiang
>            Assignee: Prakash Khemani
>             Fix For: 0.92.0
>
>         Attachments: 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> distributed-log-splitting-screenshot.png, hbase-5081-patch-v6.txt, 
> hbase-5081-patch-v7.txt, hbase-5081_patch_for_92_v4.txt, 
> hbase-5081_patch_v5.txt, patch_for_92.txt, patch_for_92_v2.txt, 
> patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found that distributed log splitting
> hangs forever.  Please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One region server died; the ServerShutdownHandler found this out and
> started the distributed log splitting;
> 2. All three tasks failed, so the three tasks were deleted, asynchronously;
> 3. The ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created these three tasks again and put them in a
> hashmap (tasks);
> 5. The asynchronous deletion from step 2 finally happened for one task; in
> the callback, it removed that task from the hashmap;
> 6. One of the newly submitted tasks' ZooKeeper watchers found that the task
> was unassigned and not in the hashmap, so it created a new orphan task;
> 7. All three tasks failed, but the task created in step 6 is an orphan, so
> the batch.err counter was one short, and the log splitting hangs forever,
> waiting for the last task to finish, which is never going to happen.
> So I think the problem is step 2.  The fix is to make the deletion
> synchronous instead of asynchronous, so that the retry has a clean start.
> An async deleteNode will interfere with the split log retry.  In an extreme
> situation, if the async deleteNode doesn't happen soon enough, a node
> created during the retry could be deleted.  deleteNode should be
> synchronous.
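
For reference, the sync-versus-async distinction above maps directly onto the
two delete variants in the stock ZooKeeper client API.  A minimal sketch of
the synchronous form the description argues for (illustrative only, not the
actual HBASE-5081 patch):

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    // With the synchronous delete, the retry cannot start until the old
    // task znode is really gone, giving it the "clean start" described above.
    class SyncDeleteSketch {
        static void deleteTaskNode(ZooKeeper zk, String path)
                throws KeeperException, InterruptedException {
            try {
                zk.delete(path, -1);  // blocks until the znode is deleted
            } catch (KeeperException.NoNodeException e) {
                // Already gone: fine for our purposes, the retry can proceed.
            }
        }
    }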

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
