[ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292589#comment-13292589 ]

stack commented on HBASE-6060:
------------------------------

Rajesh says:
bq. Let's suppose the region server went down after spawning OpenRegionHandler and 
before transitioning to OPENING; then it's SSH's responsibility to assign regions 
in OFFLINE/PENDING_OPEN.

I say:
bq. Shouldn't the region belong to either the master or the regionserver, with no 
gray area in-between, while PENDING_OPEN is going on?

Ram says:

bq. Stack, in this case the region first belongs to the master.  Only after the RS 
changes the znode to OPENING does the region belong to the RS.

So, Rajesh identifies a hole, I claim the hole is murky and underspecified, and Ram 
claims there is really no hole.

Below I argue that there is a hole, and that a small change cleans up the RegionState 
states, making SSH processing cleaner.

Ram, what you say is true if you are looking at znode states only.  If you are 
looking at RegionState, the in-memory reflection of what a region's state is 
according to the master, then what PENDING_OPEN covers, a state that does not 
have a corresponding znode state, is unclear.
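
For reference, here is how the master's in-memory RegionState.State values line up 
with the znode states on the open path (EventType names from the zk event handling; 
the mapping is my summary, not something the code spells out).  PENDING_OPEN is the 
one open-path state with no znode counterpart:

{code}
// RegionState.State (master memory)    znode state (EventType)
// OFFLINE                         <->  M_ZK_REGION_OFFLINE  (set by master)
// PENDING_OPEN                    <->  (none; master memory only)
// OPENING                         <->  RS_ZK_REGION_OPENING (set by regionserver)
// OPEN                            <->  RS_ZK_REGION_OPENED  (set by regionserver)
{code}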

I want to rely on what's in RegionState when figuring what SSH should process 
(Rajesh's latest patch seems to want to walk this path too).

Currently, PENDING_OPEN spans the master's sending of the open rpc.  It is set before 
we do the rpc invocation, so if the regionserver goes down while a region's state 
is PENDING_OPEN, should the region be handled by SSH, or will it get retried by the 
single-assign method?  I can't tell for sure.  If the regionserver went down 
while the rpc was outstanding, the single-assign will retry.  It will actually 
set the RegionState back to OFFLINE temporarily -- which makes it even harder 
to figure out what is going on when looking from another thread.  PENDING_OPEN as-is 
is worse than useless.
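
To make the ambiguity concrete, here is a rough sketch of the current ordering, 
compressed from AssignmentManager.assign() and its retry handling (the shape is 
simplified and the retry call is illustrative, not the literal code):

{code}
// Simplified sketch of the CURRENT ordering; not the literal code.
state.update(RegionState.State.PENDING_OPEN, System.currentTimeMillis(),
    plan.getDestination());                       // set BEFORE the rpc goes out
try {
  serverManager.sendRegionOpen(plan.getDestination(), state.getRegion(),
      versionOfOfflineNode);                      // rpc; target server may be dead
} catch (Throwable t) {
  // Single-assign retry path: the state flips back to OFFLINE temporarily and
  // the assign is re-attempted, so from another thread PENDING_OPEN can mean
  // "rpc in flight", "about to be retried", or "rpc never made it out".
  state.update(RegionState.State.OFFLINE);        // signature abbreviated
  assign(state.getRegion(), false);               // retry (illustrative call)
}
{code}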

How about this, Ram and Rajesh:

{code}
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
@@ -1715,14 +1715,18 @@ public class AssignmentManager extends ZooKeeperListener {
       try {
         LOG.debug("Assigning region " + state.getRegion().getRegionNameAsString() +
           " to " + plan.getDestination().toString());
-        // Transition RegionState to PENDING_OPEN
-        state.update(RegionState.State.PENDING_OPEN, System.currentTimeMillis(),
-            plan.getDestination());
         // Send OPEN RPC. This can fail if the server on other end is not up.
         // Pass the version that was obtained while setting the node to OFFLINE.
         RegionOpeningState regionOpenState = serverManager.sendRegionOpen(plan
             .getDestination(), state.getRegion(), versionOfOfflineNode);
-        if (regionOpenState == RegionOpeningState.ALREADY_OPENED) {
+        if (regionOpenState.equals(RegionOpeningState.OPENED)) {
+          // Transition RegionState to PENDING_OPEN. It covers the period between
+          // the send of the rpc and our getting the callback that sets the region
+          // state to OPENING.  This is an in-memory-only change.  Out in zk the
+          // znode is OFFLINE and we are waiting on the regionserver to assume
+          // ownership by moving it to OPENING.
+          state.update(RegionState.State.PENDING_OPEN, System.currentTimeMillis(),
+            plan.getDestination());
+        } else if (regionOpenState == RegionOpeningState.ALREADY_OPENED) {
           // Remove region from in-memory transition and unassigned node from ZK
           // While trying to enable the table the regions of the table were
           // already enabled.
{code}

Here we set the region to PENDING_OPEN AFTER we send the open rpc.  Now we know 
that a region that is PENDING_OPEN will not be retried by the single-assign, and 
the state is clear; it's that period after the open rpc but before we get the znode 
callback which sets the RegionState to OPENING.
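
With the change, the open-path timeline reads cleanly (the state names are real; 
the timeline annotation is my summary):

{code}
// master creates the unassigned znode           -> znode OFFLINE, RegionState OFFLINE
// master sends open rpc, rpc returns OPENED     -> RegionState PENDING_OPEN
//    (znode still OFFLINE; regionserver now owns moving it forward)
// regionserver transitions znode to OPENING     -> callback sets RegionState OPENING
// regionserver finishes the open, znode OPENED  -> RegionState OPEN
{code}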

Over in SSH, I can safely add PENDING_OPEN regions to the set of those to bulk 
assign if they are against the dead server currently being processed.
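
Something like the following sketch; the names (regionsInTransition, 
isPendingOpen(), getServerName()) are how I'd expect the check to read in 
ServerShutdownHandler, not a tested patch:

{code}
// Hypothetical sketch for the SSH side; illustrative only.
List<HRegionInfo> toAssign = new ArrayList<HRegionInfo>();
synchronized (regionsInTransition) {
  for (RegionState rit : regionsInTransition.values()) {
    // Post-patch, PENDING_OPEN unambiguously means "open rpc already sent to
    // this server, no OPENING callback yet".  If that server is the dead one,
    // nothing will ever move the znode, so SSH must reassign the region.
    if (rit.isPendingOpen() && this.serverName.equals(rit.getServerName())) {
      toAssign.add(rit.getRegion());
    }
  }
}
// Hand the set over for bulk assignment.
this.services.getAssignmentManager().assign(toAssign);
{code}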

What do you fellas think?

I need to look at the OFFLINE state too, to see if OFFLINE regions will always get 
retried by single-assign.  If so, we can leave them out of the SSH recovery.
                
> Regions's in OPENING state from failed regionservers takes a long time to 
> recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, 6060_alternative_suggestion.txt, 
> 6060_suggestion2_based_off_v3.patch, 6060_suggestion_based_off_v3.patch, 
> 6060_suggestion_toassign_rs_wentdown_beforerequest.patch, 
> HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> we have seen a pattern in tests: the regions are stuck in OPENING state 
> for a very long time when the region server that is opening the region fails. 
> My understanding of the process: 
>  
>  - master calls rs to open the region. If rs is offline, a new plan is 
> generated (a new rs is chosen). RegionState is set to PENDING_OPEN (only in 
> master memory, zk still shows OFFLINE). See HRegionServer.openRegion(), 
> HMaster.assign()
>  - RegionServer starts opening a region and changes the state in the znode. But 
> that znode is not ephemeral. (see ZkAssign)
>  - Rs transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process()
>  - rs then opens the region, and changes the znode from OPENING to OPENED
>  - when rs is killed between the OPENING and OPENED states, zk shows OPENING 
> state, and the master just waits for rs to change the region state, but since 
> rs is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which guards exactly 
> against these kinds of conditions. It periodically checks (every 10 sec by 
> default) the regions in transition to see whether they timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 min, 
> which explains what you and I are seeing. 
>  - ServerShutdownHandler in Master does not reassign regions in OPENING 
> state, although it handles other states. 
> Lowering that threshold in the configuration is one option, but still I 
> think we can do better. 
> Will investigate more. 

