[ https://issues.apache.org/jira/browse/HBASE-19501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290337#comment-16290337 ]
stack commented on HBASE-19501:
-------------------------------

Please do not review! I'm breaking up this patch, adding pieces to other issues, and will close out this one.

The summary is wrong. We had a mechanism for retaining assignment, but its operation was cryptic, and older versions of the HBASE-18946 patch frustrated the old config for retaining assignment. Working on this issue and fixing HBASE-18946 gave me a better understanding of how this all should work. Doc and some fixes from here went to HBASE-18946. Test fixes and a new facility in HTU for testing retention will be added to the parent issue.

On the items raised in the description:
# It is hard to test whether we retain assignments because our little minicluster gives RegionServers new ports on restart, foiling our means of recognizing a new instance of a server by checking hostname+port (and ensuring the startcode is larger). This is so. There is a crazy test in TestRestartCluster#testRetainAssignmentOnRestart that records the old RS port numbers and then starts each of the daemons one-by-one, setting the port individually (see the sketch below). In the parent, I add a means of doing this to HTU.
# Some of our tests, like the parent test, depended on retaining assignment across restarts. They do. Retention doesn't work unless you do crazy stuff like the trick above in TestRestartCluster#testRetainAssignmentOnRestart (a hack in HTU now makes it a little easier to do).
# As said in the parent issue, the Master used to be the last to go down when we did a controlled cluster shutdown. We lost that when we moved to AMv2. When we do a cluster shutdown, the RegionServers close down the Regions, not the Master as is usual in AMv2 (the Master wants to do all assign ops in AMv2). This means the Master is surprised when it gets notification of CLOSE ops that it did not initiate. Usually on CLOSE, the Master updates meta with the CLOSE state; on cluster shutdown we are not doing this. Fixed this over in HBASE-18946 by keeping the Master up until last, so at least the noisy failed deliveries no longer show in the logs. Also documented the shutdown process. It can be improved.
# So, on restart, we read meta and see all regions still in the OPEN state, so we think the cluster crashed and we go and run ServerCrashProcedure, which hoses our ability to retain assignment. This is mostly true. Over in HBASE-18946 we explain what's going on and why we ALWAYS run ServerCrashProcedure just in case, and we add a distinction between creating assigns that retain the old location and assigns that want to be distributed (i.e. new table creation).

Anyway, closing this out as won't-fix; or rather, the issues raised here are addressed elsewhere.
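Roughly, the port-pinning trick looks like the sketch below. To be clear, this is not the actual TestRestartCluster code nor the HTU facility added in the parent issue; the class and method names here (RetainAssignmentSketch, restartWithSamePorts, isRestartedInstance) are made up for illustration, and the HBaseTestingUtility/MiniHBaseCluster wiring is a best guess. The idea is to record each RegionServer's ServerName before shutdown, then restart the Master and the RegionServers one at a time with the old ports forced back into the configuration, so a restarted server differs from its predecessor only by a larger startcode.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.ServerName;

public class RetainAssignmentSketch {

  /**
   * Restart the minicluster, pinning each RegionServer to its old port so that
   * hostname+port survives the restart and only the startcode changes.
   */
  static void restartWithSamePorts(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getHBaseCluster();

    // Remember the old ServerNames (hostname + port + startcode) before shutdown.
    List<ServerName> oldServers = new ArrayList<>();
    for (int i = 0; i < cluster.getRegionServerThreads().size(); i++) {
      oldServers.add(cluster.getRegionServer(i).getServerName());
    }

    // Stop hbase; a plain restart would normally hand out fresh ports.
    util.shutdownMiniHBaseCluster();

    // Bring the Master back first...
    cluster.startMaster();

    // ...then each RegionServer, one at a time, pinned to its old port.
    for (ServerName old : oldServers) {
      util.getConfiguration().setInt(HConstants.REGIONSERVER_PORT, old.getPort());
      cluster.startRegionServer();
    }
  }

  /**
   * The recognition check mentioned in the first item above: a "new instance of
   * an old location" is the same hostname+port but with a larger startcode.
   */
  static boolean isRestartedInstance(ServerName previous, ServerName candidate) {
    return previous.getHostname().equals(candidate.getHostname())
        && previous.getPort() == candidate.getPort()
        && candidate.getStartcode() > previous.getStartcode();
  }
}
{code}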
> [AMv2] Retain assignment across restarts
> -----------------------------------------
>
>                 Key: HBASE-19501
>                 URL: https://issues.apache.org/jira/browse/HBASE-19501
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Region Assignment
>            Reporter: stack
>            Assignee: stack
>             Fix For: 2.0.0-beta-1
>
>         Attachments: HBASE-19501.master.001.patch, HBASE-19501.master.002.patch, HBASE-19501.master.003.patch, HBASE-19501.patch
>
>
> Working with replicas and the parent test in particular, I learned a few interesting things:
> # It is hard to test whether we retain assignments because our little minicluster gives RegionServers new ports on restart, foiling our means of recognizing a new instance of a server by checking hostname+port (and ensuring the startcode is larger).
> # Some of our tests, like the parent test, depended on retaining assignment across restarts.
> # As said in the parent issue, the Master used to be the last to go down when we did a controlled cluster shutdown. We lost that when we moved to AMv2.
> # When we do a cluster shutdown, the RegionServers close down the Regions, not the Master as is usual in AMv2 (the Master wants to do all assign ops in AMv2). This means the Master is surprised when it gets notification of CLOSE ops that it did not initiate. Usually on CLOSE, the Master updates meta with the CLOSE state. On cluster shutdown we are not doing this.
> # So, on restart, we read meta and see all regions still in the OPEN state, so we think the cluster crashed and we go and run ServerCrashProcedure, which hoses our ability to retain assignment.
> Some experiments:
> # I can make the Master stay up so it is the last to go down.
> # This stops us spewing the logs with failed transition messages, which happened because the Master was not up to receive the CLOSE transitions.
> # I hacked in a means of telling the minicluster which ports it should use on start; this helps fake the case of new RS instances.
> # It is hard to tell the difference between a clean shutdown and a crash, and it is dangerous if we get the call wrong. Currently we just let ServerCrashProcedure deal with it -- the safest option. One experiment: when it goes to assign the regions that were on the crashed server, rather than round-robin, we look and see if there is a new instance of the old location and, if so, give it all the regions. That would retain locality. This seems to work. The problem is that SCP is doing the assignment; ideally the balancer would do it.
> Let me put up a patch that retains assignment across restart (somehow).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)