[ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291416#comment-13291416 ]

stack commented on HBASE-6060:
------------------------------

I think I understand this patch now.

In AM, we introduce a new Map, a Map of dead servers to their current set of 
region plans and regions-in-transition.

This Map is populated in AM when SSH calls into AM to process a server that 
just died (processServerShutdown).
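
Here is roughly how I picture that state and its lifecycle. All names below 
are mine for illustration -- not necessarily what the patch uses:

{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch only: AM-side map from a dead server's name to the regions
// (plans and RITs) that SSH now owns for that server.
public class DeadServerState {
  private final Map<String, List<String>> deadServerRegions =
      new ConcurrentHashMap<String, List<String>>();

  // SSH calls in when it starts on a just-dead server.
  public void processServerShutdown(String serverName, List<String> regions) {
    deadServerRegions.put(serverName,
        new CopyOnWriteArrayList<String>(regions));
  }

  // Other threads ask this before assigning; true means back off.
  public boolean isProcessedBySSH(String serverName, String regionName) {
    List<String> regions = deadServerRegions.get(serverName);
    return regions != null && regions.contains(regionName);
  }

  // SSH clears its entry on the way out.
  public void removeDeadServerRegions(String serverName) {
    deadServerRegions.remove(serverName);
  }
}
{code}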

If we try to assign a region in another thread at this time, a check by AM into 
this new Map will cause us to return a 'DON'T DO ANYTHING' flag because it's 
being done elsewhere (in SSH eventually -- and we return a flag rather than 
null because null triggers a different kind of special handling).
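
In other words, something like this; the sentinel here is made up and the 
patch may spell it differently:

{code}
// Sketch: getRegionPlan hands back a distinguished instance rather
// than null, because callers already give null its own special
// handling ("no servers to assign to").
public class PlanFlagSketch {
  static final Object REGION_PLAN_ALREADY_INUSE = new Object();

  Object getRegionPlan(String region, String server,
      java.util.Set<String> deadServersBeingProcessed) {
    if (deadServersBeingProcessed.contains(server)) {
      return REGION_PLAN_ALREADY_INUSE;  // the 'DON'T DO ANYTHING' flag
    }
    return region + " -> " + server;  // stand-in for a normal plan
  }
}
{code}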

If SSH is running, it is the one meant to prevail in assigning regions, rather 
than any balance or user assignment that may have been triggered concurrently.

When it finishes, SSH clears the dead server from the AM Map.

Does this sound right?

I have some feedback on details of this patch but I would first like to get 
high-level comments out of the way.

1. Is this patch even handling the root cause of this JIRA: i.e. dealing w/ 
OPENING znodes that were made against the dead server?  In 
AM#processServerShutdown, we iterate the list of regions we get back from 
AM#this.servers, and from this set we remove RIT.  But won't this original set 
of regions be missing regions that are OPENING, or even OPENED but not yet 
handled?  It's only after the OPENED region has updated its znode, and the 
znode has been removed by the OPEN handler, that we add a region to 
AM#this.servers.  Or is the OPENING region handled elsewhere?  (See the first 
sketch after this list.)

2. This patch distributes state, and the updating of that state, across AM and 
SSH.  It makes it tough to follow what is going on (at least, it makes it 
tough for lesser mortals like myself to figure out).  The new Map is populated 
as a side-effect of the call to AM#processServerShutdown.  The dead server is 
cleared explicitly from the AM Map on the tail of SSH on its way out (reading 
SSH alone, you'd be clueless as to why this was being done).  During the 
processing of SSH, we are updating a data structure that is held by this Map 
over in AM.  Is there anything we can do to better encapsulate this new state 
management?  What if we changed the method in AM from processServerShutdown to 
startProcessingServerShutdown and closed it off with a 
finishProcessingServerShutdown?  Do we have to keep around a list of region 
plans?  Can't we just keep a list of dead servers, and put off any attempt at 
getting a region plan for a dead server, or a server being processed as dead, 
because SSH will get to it?  (See the second sketch after this list.)
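
On 1., the hole I am worried about looks like this; field and method names 
are approximate:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of how I read AM#processServerShutdown building its work
// list: the regions come from AM#this.servers, which is only updated
// once the OPENED handler has run.
public class ShutdownScanSketch {
  List<String> regionsToProcess(String deadServer,
      Map<String, List<String>> servers /* AM#this.servers */,
      Set<String> regionsInTransition) {
    List<String> onServer = servers.get(deadServer);
    List<String> regions = new ArrayList<String>(
        onServer == null ? Collections.<String>emptyList() : onServer);
    regions.removeAll(regionsInTransition);  // RIT handled separately
    // A region still OPENING against deadServer is not yet in
    // servers.get(deadServer), so it never shows up here -- the hole
    // item 1 above asks about.
    return regions;
  }
}
{code}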
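
On 2., the shape I have in mind, as a sketch; every name here is a proposal, 
not something in the patch:

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: AM owns the whole lifecycle. SSH just brackets its work
// with start/finish, and assign() only ever asks a yes/no question.
public class EncapsulatedShutdownSketch {
  private final Set<String> deadServersInProcess =
      Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

  public void startProcessingServerShutdown(String serverName) {
    deadServersInProcess.add(serverName);
  }

  // assign()/getRegionPlan() call this; true means defer to SSH.
  public boolean isBeingProcessedAsDead(String serverName) {
    return deadServersInProcess.contains(serverName);
  }

  public void finishProcessingServerShutdown(String serverName) {
    deadServersInProcess.remove(serverName);
  }
}
{code}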

Here are some minor items while they are on my mind:

1. On REGION_PLAN_ALREADY_INUSE, why not return null, and if we get a null 
back from getRegionPlan in assign, check whether 
serverManager.createDestinationServersList(serverToExclude) is empty; if it 
is, update the timeout monitor (sketched after these items).  This'd be 
cleaner than making the RegionState data structure do more than just carry a 
plan?
2. On the tail of SSH, we do 
this.services.getAssignmentManager().removeDeadServerRegions(this.serverName); 
just before that we clear regionsFromRegionPlansForServer and 
regionsInTransition.  Aren't the clearings unnecessary?
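
To make minor item 1 concrete, here is the alternative as I'd write it; the 
timeout-monitor hook is invented for illustration:

{code}
import java.util.List;

// Sketch of the null-return alternative: null from getRegionPlan
// means "no usable plan right now", and assign() then checks whether
// any destination server exists before involving the monitor.
public class NullPlanSketch {
  interface ServerManager {
    List<String> createDestinationServersList(String serverToExclude);
  }
  interface TimeoutMonitor {
    void noteNoDestinations(String region);  // hypothetical hook
  }

  private final ServerManager serverManager;
  private final TimeoutMonitor timeoutMonitor;

  NullPlanSketch(ServerManager sm, TimeoutMonitor tm) {
    this.serverManager = sm;
    this.timeoutMonitor = tm;
  }

  void assign(String region, String serverToExclude) {
    String plan = getRegionPlan(region, serverToExclude);
    if (plan == null) {
      if (serverManager
          .createDestinationServersList(serverToExclude).isEmpty()) {
        timeoutMonitor.noteNoDestinations(region);  // let it retry later
      }
      return;  // nothing to do right now
    }
    // ... otherwise carry on with the plan
  }

  String getRegionPlan(String region, String serverToExclude) {
    return null;  // stub: no flag object needed in this scheme
  }
}
{code}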

Good stuff Ram and Rajeshbabu.  I think this patch can work after a little 
cleanup.  I'll have some more feedback this evening.  Thought I'd send this in 
the meantime so you have something to chew on when you get up this morning.

                
> Regions's in OPENING state from failed regionservers takes a long time to 
> recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> We have seen a pattern in tests where regions are stuck in the OPENING state 
> for a very long time when the region server that is opening the region 
> fails. My understanding of the process: 
>  
>  - The master calls the RS to open the region. If the RS is offline, a new 
> plan is generated (a new RS is chosen). RegionState is set to PENDING_OPEN 
> (only in master memory; zk still shows OFFLINE). See 
> HRegionServer.openRegion(), HMaster.assign()
>  - The RegionServer starts opening the region and changes the state in the 
> znode. But that znode is not ephemeral. (see ZkAssign)
>  - The RS transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process()
>  - The RS then opens the region, and changes the znode from OPENING to OPENED
>  - When the RS is killed between the OPENING and OPENED states, zk shows 
> OPENING, and the master just waits for the RS to change the region state; 
> but since the RS is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which guards against 
> exactly this kind of condition. It periodically checks (every 10 sec by 
> default) the regions in transition to see whether they have timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 
> min, which explains what you and I are seeing. 
>  - ServerShutdownHandler in Master does not reassign regions in OPENING 
> state, although it handles other states. 
> Lowering that threshold in the configuration is one option, but I still 
> think we can do better. 
> Will investigate more. 
