[ 
https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289971#comment-13289971
 ] 

stack commented on HBASE-6060:
------------------------------

@Ram I looked at Chunhui's suggestion and read it as a fix in SSH that avoided 
our doing assigns in SSH, instead punting to the timeout monitor and letting it 
assign.  I now see after your explanation above that this is a change in 
AM#processServerShutdown.  Pardon me.  Thanks for the reminder on what's going 
on here and also for the reminder on hbase-5816.

The issue in hbase-5816 seems to be the one to fix first.  Making the timeout 
monitor run more promptly will always make us bump up against hbase-5816 (our 
current config just mitigates how many threads can run at one time 
manipulating regions.  It's not a fix by any means).

bq. Now because of the retry we start an assignment and SSH also will try to do 
an assignment (either with the solution in the patch or with Chunhui's 
suggestion). Now this will cause the master abort to happen.

If we are to continue to have multiple threads doing assign, then we need to 
have each claim ownership of the region first by writing zk and then verifying 
the seqid each time they move the region state.  If they fail to grab the 
znode, then someone else is operating on it and they should give up.
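
Roughly what I have in mind, as a bare sketch against the plain ZooKeeper 
client rather than ZKAssign; the class name, znode path handling, and payloads 
here are made up for illustration, and the znode version is standing in for 
the seqid check:

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RegionClaimSketch {
  private final ZooKeeper zk;

  public RegionClaimSketch(ZooKeeper zk) {
    this.zk = zk;
  }

  /** Try to claim the region by creating its znode; false means someone else owns it. */
  boolean claim(String znode, byte[] state) throws KeeperException, InterruptedException {
    try {
      zk.create(znode, state, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      return true;
    } catch (KeeperException.NodeExistsException e) {
      return false;  // another thread/process already grabbed the znode: give up
    }
  }

  /**
   * Move the region state only if the znode version still matches what we last
   * read; a BadVersionException means someone else changed it under us.
   */
  boolean transition(String znode, byte[] newState, int expectedVersion)
      throws KeeperException, InterruptedException {
    try {
      zk.setData(znode, newState, expectedVersion);
      return true;
    } catch (KeeperException.BadVersionException e) {
      return false;  // lost the race: stop operating on this region
    }
  }

  /** Read the current state and version so the next transition can be verified. */
  Stat readStat(String znode) throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    zk.getData(znode, false, stat);
    return stat;
  }
}
{code}

Whoever loses the create or the versioned setData backs off and leaves the 
region to the thread that owns the znode.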

Are you thinking that the patches posted earlier address this issue and 
hbase-5816?

                
> Regions's in OPENING state from failed regionservers takes a long time to 
> recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> we have seen a pattern in tests where regions are stuck in the OPENING state 
> for a very long time when the region server that is opening the region fails. 
> My understanding of the process: 
>  
>  - master calls the rs to open the region. If the rs is offline, a new plan is 
> generated (a new rs is chosen). RegionState is set to PENDING_OPEN (only in 
> master memory, zk still shows OFFLINE). See HRegionServer.openRegion(), 
> HMaster.assign()
>  - The rs starts opening the region and changes the state in the znode. But 
> that znode is not ephemeral. (see ZKAssign)
>  - The rs transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process()
>  - The rs then opens the region and changes the znode from OPENING to OPENED
>  - When the rs is killed between the OPENING and OPENED states, zk shows the 
> OPENING state and the master just waits for the rs to change the region state, 
> but since the rs is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which exactly guards 
> against these kinds of conditions. It periodically checks (every 10 sec by 
> default) the regions in transition to see whether they timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 
> min, which explains what you and I are seeing. 
>  - ServerShutdownHandler in the Master does not reassign regions in OPENING 
> state, although it handles other states. 
> Lowering that threshold from the configuration is one option (see the config 
> sketch below), but still I think we can do better. 
> Will investigate more. 
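
For reference, a minimal sketch of lowering that timeout, assuming the value 
is in milliseconds (the one-minute figure is only an example) and setting it 
programmatically; the same key/value can equally go into hbase-site.xml:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TimeoutMonitorConfigSketch {
  public static void main(String[] args) {
    // Start from the standard HBase configuration (hbase-default.xml + hbase-site.xml).
    Configuration conf = HBaseConfiguration.create();

    // Lower the timeout-monitor threshold from the 30-minute default to one
    // minute (value assumed to be in milliseconds) so regions stuck in
    // OPENING after an rs death get retried much sooner.
    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 60 * 1000);

    System.out.println("timeoutmonitor.timeout = "
        + conf.getInt("hbase.master.assignment.timeoutmonitor.timeout", -1) + " ms");
  }
}
{code}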

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
