[ https://issues.apache.org/jira/browse/HBASE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13113248#comment-13113248 ]
jirapos...@reviews.apache.org commented on HBASE-4455:
------------------------------------------------------

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2007/#review2037
-----------------------------------------------------------

Great stuff! I have some questions throughout, but it seems like this will make everything more resilient to root/meta servers failing. Is the general approach to always verify / always check rather than relying on cached locations or values? Have you thought about any ways that we could add some better unit tests around this stuff? There's a TestRollingRestart that is obviously not good enough :)


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4598>

    so we always verify the connection now?


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4597>

    why log the cached META server here? didn't we just verify that it was not valid?


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4599>

    why log the cached meta location here? it might be confusing since it doesn't log that we just found this meta location was invalid


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4601>

    nice catch


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4602>

    why do we have two hard-coded timeouts in this area of code? :) this code seems to always sleep 500ms at a time unless you set timeout=0, and then it loops every 50ms? that doesn't seem right... i could set timeout to 100ms and it would still sleep 500ms. sleeping 50ms every time would be better, but there's probably a solution with less overhead (doing remote read queries every 50ms in a loop). could we just notifyAll() on metaAvailable whenever we relocate root?


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
<https://reviews.apache.org/r/2007/#comment4603>

    good


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
<https://reviews.apache.org/r/2007/#comment4604>

    add another * here, so: /** -- that ensures this gets picked up as javadoc


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
<https://reviews.apache.org/r/2007/#comment4605>

    what about this method is specific to the shutdown server? this seems specific to regions in transition. if we only use it in the context of servers being shut down, then maybe name it accordingly? it does seem like a generally useful method, though, and just related to ZK (could put it in a ZK util class?)


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
<https://reviews.apache.org/r/2007/#comment4606>

    this looks like a random debug statement, what does "matchZK, sn: server" mean?


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
<https://reviews.apache.org/r/2007/#comment4607>

    i'm also a bit confused by this. couldn't we just increase the thread pool size to 2? :)


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
<https://reviews.apache.org/r/2007/#comment4608>

    is this normal? should it be a warn? maybe a comment on why this would happen


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java
<https://reviews.apache.org/r/2007/#comment4609>

    why this change? should this be rolled into the ZKNodeTracker rather than overriding the getData() behavior?


http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java
<https://reviews.apache.org/r/2007/#comment4610>

    it seems like you're covering up for bugs in the underlying ZKNodeTracker... can we fix that instead? or, if it's a matter of returning a cached value or not, can we just add a boolean flag for refresh/nocache?


- Jonathan


On 2011-09-23 00:15:11, Ming Ma wrote:
bq.
bq.  -----------------------------------------------------------
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/2007/
bq.  -----------------------------------------------------------
bq.
bq.  (Updated 2011-09-23 00:15:11)
bq.
bq.
bq.  Review request for hbase.
bq.
bq.
bq.  Summary
bq.  -------
bq.
bq.  1. Add more logging.
bq.  2. Clean up CatalogTracker. waitForMeta waits for the "timeout" value. When waitForMetaServerConnectionDefault is called by MetaNodeTracker, the timeout value is large, so it doesn't retry in case -ROOT- is updated; add the proper implementation for CatalogTracker.verifyMetaRegionLocation
bq.  4. Check for the latest -ROOT- and .META. region location during the handling of server shutdown.
bq.  5. Right after assigning -ROOT- or .META. in ServerShutdownHandler, don't block and wait for .META. availability. Resubmit another ServerShutdownHandler for regular regions.
bq.
bq.
bq.  This addresses bug HBASE-4455.
bq.      https://issues.apache.org/jira/browse/HBASE-4455
bq.
bq.
bq.  Diffs
bq.  -----
bq.
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaNodeTracker.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java 1172205
bq.  http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java 1172205
bq.
bq.  Diff: https://reviews.apache.org/r/2007/diff
bq.
bq.
bq.  Testing
bq.  -------
bq.
bq.  Keep Master up all the time, do rolling restart of RSs like this - stop RS1, wait for 2 seconds, stop RS2, start RS1, wait for 2 seconds, stop RS3, start RS2, wait for 2 seconds, etc. The program can run for a couple of hours until it stops. -ROOT- and .META. are available during that time.
bq.
bq.
bq.  Thanks,
bq.
bq.  Ming
bq.


> Rolling restart RSs scenario, -ROOT-, .META.
> regions are lost in AssignmentManager
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-4455
>                 URL: https://issues.apache.org/jira/browse/HBASE-4455
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ming Ma
>            Assignee: Ming Ma
>             Fix For: 0.92.0
>
>
> Keep Master up all the time, do rolling restart of RSs like this - stop RS1, wait for 2 seconds, stop RS2, start RS1, wait for 2 seconds, stop RS3, start RS2, wait for 2 seconds, etc. After a while, you will find that the -ROOT- and .META. regions aren't in "regions in transition" from the AssignmentManager point of view, but they also aren't assigned to any region server. Here are the issues.
> 1. The -ROOT- or .META. location is stale when MetaServerShutdownHandler is invoked to check if the dead server contains the -ROOT- region. That is due to the long delay from ZK notification and the async nature of the system. Here is an example: even though the new root region server sea-lab-1,60020,1316380133656 is set at T2, at T3, during the shutdown process for sea-lab-1,60020,1316380133656, the root location still points to the old server sea-lab-3,60020,1316380037898.
> T1: 2011-09-18 14:08:52,470 DEBUG org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x1327e43175e0000 Retrieved 29 byte(s) of data from znode /hbase/root-region-server and set watcher; sea-lab-3,60020,1316380037898
> T2: 2011-09-18 14:08:57,173 INFO org.apache.hadoop.hbase.catalog.RootLocationEditor: Setting ROOT region location in ZooKeeper as sea-lab-1,60020,1316380133656
> T3: 2011-09-18 14:10:26,393 DEBUG org.apache.hadoop.hbase.master.ServerManager: Added=sea-lab-1,60020,1316380133656 to dead servers, submitted shutdown handler to be executed, root=false, meta=true, current Root Location: sea-lab-3,60020,1316380037898
> T4: 2011-09-18 14:12:37,314 DEBUG org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x1327e43175e0000 Retrieved 29 byte(s) of data from znode /hbase/root-region-server and set watcher; sea-lab-1,60020,1316380133656
> 2. The MetaServerShutdownHandler worker thread that waits for -ROOT- or .META. availability could be blocked. If, meanwhile, the new server that -ROOT- or .META. is being assigned to restarted, another instance of MetaServerShutdownHandler is queued. Eventually, all MetaServerShutdownHandler worker threads are filled up. It looks like HBASE-4245.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
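The reviewer's suggestion in comment4602 - waking waiters with notifyAll() on metaAvailable whenever the root location is relocated, instead of sleeping a hard-coded 500ms per loop iteration - could look roughly like the sketch below. This is a hypothetical illustration, not HBase's actual CatalogTracker; the class name LocationTracker and its methods are invented for the example.

```java
// Hypothetical sketch of the wait/notify pattern the reviewer suggests.
// Callers block on a monitor and are woken the instant a new location is
// published, so the caller's own timeout is honored exactly; there is no
// fixed 500ms (or 50ms) polling interval.
public class LocationTracker {
  private final Object metaAvailable = new Object();
  private String location; // guarded by metaAvailable

  /** Called when the root/meta location is (re)published, e.g. from a ZK watcher. */
  public void setLocation(String newLocation) {
    synchronized (metaAvailable) {
      location = newLocation;
      metaAvailable.notifyAll(); // wake every waiter immediately
    }
  }

  /** Blocks up to timeoutMs; returns null if no location appeared in time. */
  public String waitForLocation(long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    synchronized (metaAvailable) {
      while (location == null) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          return null; // caller's timeout elapsed; no arbitrary extra sleep
        }
        metaAvailable.wait(remaining); // sleeps at most until the deadline
      }
      return location;
    }
  }
}
```

With this shape, a timeout of 100ms waits at most about 100ms, addressing the reviewer's complaint that a 100ms timeout still slept in 500ms increments; the wait(remaining) loop also guards against spurious wakeups.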