[ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291568#comment-13291568 ]

stack commented on HBASE-6060:
------------------------------

@Enis Yeah, it's a big problem.

Ram and Rajeshbabu, I just uploaded a spin on your patch.  It's a bit more basic.

It gets rid of REGION_PLAN_ALREADY_INUSE and returns null out of getRegionPlan 
instead.

If getRegionPlan returns null, we update the no-servers flag when there are no 
servers available; otherwise we presume the region assignment is being done 
elsewhere (in SSH or in the TimeoutMonitor).
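
In toy form, the caller-side handling reads roughly like this (a sketch only; 
the class and the noServersAvailable field are made up for illustration, not 
the actual AssignmentManager code):

{code:java}
import java.util.List;

/** Toy model of the null-plan handling described above; not the real AssignmentManager. */
public class NullPlanHandling {
  private volatile boolean noServersAvailable = false;

  /** Illustrative stand-in for the assign path after getRegionPlan() returns. */
  public void assign(String region, String plannedServer, List<String> onlineServers) {
    if (plannedServer == null) {
      if (onlineServers.isEmpty()) {
        // No servers to assign to: remember it so retries can react.
        noServersAvailable = true;
      }
      // Otherwise assume the assignment is being handled elsewhere
      // (ServerShutdownHandler or the TimeoutMonitor) and do nothing here.
      return;
    }
    // ... otherwise proceed to ask plannedServer to open the region ...
  }
}
{code}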

In AM, I changed processServerShutdown to instead be processServerShutdownStart, 
with a corresponding processServerShutdownFinish.  When the first is called, we 
add the servername being processed to a list.  When the latter is called, we 
remove it.  When getting a region plan, we now exclude servers being processed 
from the set of assignable servers.  This simplifies getRegionPlan.
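
The bookkeeping plus exclusion, sketched with made-up names (illustrative only, 
not the patch itself):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Toy model of tracking servers whose shutdown is being processed. */
public class ShutdownAwarePlanner {
  private final Set<String> serversBeingProcessed =
      Collections.synchronizedSet(new HashSet<String>());

  /** Called when ServerShutdownHandler starts processing a dead server. */
  public void processServerShutdownStart(String serverName) {
    serversBeingProcessed.add(serverName);
  }

  /** Called when ServerShutdownHandler is done with that server. */
  public void processServerShutdownFinish(String serverName) {
    serversBeingProcessed.remove(serverName);
  }

  /** Pick a destination, skipping servers being processed; null if none usable. */
  public String getRegionPlan(List<String> onlineServers) {
    List<String> candidates = new ArrayList<String>(onlineServers);
    candidates.removeAll(serversBeingProcessed);
    if (candidates.isEmpty()) {
      return null; // caller decides what a null plan means (see above)
    }
    return candidates.get(0); // the real code would go through the balancer
  }
}
{code}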

processServerShutdownStart returns the Set of outstanding region plans (plans 
that were related to the server being processed by the shutdown handler) and 
the intersection of regions-in-transition with the set of regions that server 
was carrying.  The patch then does what you were doing, minus removing the 
region from the Set that was kept back in AM.
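
Sketched with stand-in types (Strings instead of RegionPlan/HRegionInfo, and 
not the real method signature):

{code:java}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Toy model of what processServerShutdownStart returns; names and types are illustrative. */
public class ShutdownStartResult {
  /** Plans whose destination was the dead server. */
  public final Set<String> outstandingRegionPlans = new HashSet<String>();
  /** Regions-in-transition that the dead server was carrying. */
  public final Set<String> ritCarriedByDeadServer = new HashSet<String>();

  static ShutdownStartResult processServerShutdownStart(
      String deadServer,
      Map<String, String> regionPlans,         // region -> planned destination server
      Set<String> regionsInTransition,         // regions currently in transition
      Set<String> regionsOnDeadServer) {       // regions the dead server hosted
    ShutdownStartResult result = new ShutdownStartResult();
    for (Map.Entry<String, String> e : regionPlans.entrySet()) {
      if (deadServer.equals(e.getValue())) {
        result.outstandingRegionPlans.add(e.getKey());
      }
    }
    // Intersection of regions-in-transition with the dead server's regions.
    result.ritCarriedByDeadServer.addAll(regionsInTransition);
    result.ritCarriedByDeadServer.retainAll(regionsOnDeadServer);
    return result;
  }
}
{code}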

I've not verified this passes tests yet.  It still needs more work. Just 
putting it up as more feedback on your posted patch.

Regarding Enis' comment, I tried poking around some and it's hard to follow 
state transitions across memory, zk, and then across retries and forced 
repossessions.  Any ideas for how we'd simplify all of this?  I started to pull 
out a RegionsInTransition class.  It would be a 'database' backed by zk.  There 
would be no more in-memory state, just what's out in zk.  You'd go to this new 
RIT db to get current state and to ask it to make transitions.  Just a thought.  
I haven't dug in enough.
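
If we went that way, the surface might start as small as something like this 
(purely speculative; none of these names exist in the codebase today):

{code:java}
import java.util.Set;

/**
 * Speculative sketch of the zk-backed RIT 'database' floated above.
 * All state lives in zk; there is no parallel in-memory copy to drift.
 */
public interface RegionsInTransition {
  /** Assignment states, mirroring what the znodes record. */
  enum State { OFFLINE, PENDING_OPEN, OPENING, OPENED, CLOSING, CLOSED }

  /** Read the current state of a region straight from zk. */
  State getState(String regionName);

  /** Ask the RIT db to make a transition; fails if zk disagrees on the current state. */
  boolean transition(String regionName, State from, State to);

  /** All regions currently in transition, as recorded in zk. */
  Set<String> regionsInTransition();
}
{code}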
                
> Regions in OPENING state from failed regionservers take a long time to 
> recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, 6060_suggestion_based_off_v3.patch, HBASE-6060-92.patch, 
> HBASE-6060-94.patch
>
>
> We have seen a pattern in tests where regions are stuck in the OPENING state 
> for a very long time when the region server that is opening the region fails. 
> My understanding of the process: 
>  
>  - The master calls the rs to open the region. If the rs is offline, a new 
> plan is generated (a new rs is chosen). RegionState is set to PENDING_OPEN 
> (only in master memory; zk still shows OFFLINE). See 
> HRegionServer.openRegion(), HMaster.assign()
>  - The RegionServer starts opening the region and changes the state in the 
> znode. But that znode is not ephemeral. (see ZkAssign)
>  - The rs transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process()
>  - The rs then opens the region and changes the znode from OPENING to OPENED
>  - When the rs is killed between the OPENING and OPENED states, zk shows the 
> OPENING state, and the master just waits for the rs to change the region 
> state, but since the rs is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which exists exactly to 
> guard against these kinds of conditions. It periodically checks (every 10 sec 
> by default) the regions in transition to see whether they have timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 
> min, which explains what you and I are seeing. 
>  - ServerShutdownHandler in the Master does not reassign regions in the 
> OPENING state, although it handles other states. 
> Lowering that threshold from the configuration is one option, but I still 
> think we can do better. 
> Will investigate more. 
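
(A side note on the workaround mentioned at the end of the description: the 
threshold is just a configuration value, normally set in hbase-site.xml; below 
is a quick sketch of overriding it programmatically, where the 180000 ms value 
is an arbitrary example, not a recommendation.)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class LowerAssignmentTimeout {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Default is 30 minutes; 3 minutes (180000 ms) here is an arbitrary example value.
    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 180000);
    System.out.println(conf.get("hbase.master.assignment.timeoutmonitor.timeout"));
  }
}
{code}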
