[ https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850039#comment-13850039 ]
Liu Shaohui commented on HBASE-9892:
------------------------------------

@stack
{quote}
One thought I had before commit is what happens for say the case where it is an existing cluster and the znode has empty data? Will we use the info port from the configuration?
{quote}
Yes. See the code in RegionServerTracker: if the znode has empty or invalid data, the default infoPort will be 0, and getRegionServerInfoPort in HMaster will return the REGIONSERVER_INFO_PORT from the config.

{quote}
What if we do a rolling restart when the znode has no data in it? Who writes the znode data? Will new servers be able to work if the znode has no data in it?
{quote}
RegionServerTracker handles this situation. For an RS of the old version, its znode has no data and HMaster will use the info port from the config. When a new region server starts up, it writes the RSInfo data to its znode; RegionServerTracker watches this and uses the data from the znode.


> Add info port to ServerName to support multi instances in a node
> -----------------------------------------------------------------
>
>                 Key: HBASE-9892
>                 URL: https://issues.apache.org/jira/browse/HBASE-9892
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Liu Shaohui
>            Assignee: Liu Shaohui
>            Priority: Minor
>             Fix For: 0.98.0, 0.99.0
>
>         Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, HBASE-9892-0.94-v3.diff, HBASE-9892-0.94-v4.diff, HBASE-9892-0.94-v5.diff, HBASE-9892-trunk-v1.diff, HBASE-9892-trunk-v1.patch, HBASE-9892-trunk-v1.patch, HBASE-9892-trunk-v2.patch, HBASE-9892-trunk-v3.diff, HBASE-9892-v5.txt
>
>
> The full GC time of a regionserver with a big heap (> 30G) usually cannot be kept under 30s. At the same time, servers with 64G of memory are common. So we try to deploy multiple RS instances (2-3) on a single node, with the heap of each RS at about 20G ~ 24G.
> Most things work fine, except the HBase web UI: the master gets the RS info port from the conf, which is not suitable for this situation of multiple RS instances on a node. So we add the info port to ServerName.
> a. At startup, the RS reports its info port to HMaster.
> b. For the root region, the RS writes the servername with the info port to the zookeeper root-region-server node.
> c. For meta regions, the RS writes the servername with the info port to the root region.
> d. For user regions, the RS writes the servername with the info port to the meta regions.
> So HMaster and clients can get the info port from the servername.
> To test this feature, I changed the RS number from 1 to 3 in standalone mode, so we can test it in standalone mode.
> I think Hoya (HBase on YARN) will encounter the same problem. Does anyone know how Hoya handles this problem?
> PS: There are different formats for the servername in the zk node and the meta table; I think we need to unify them and refactor the code.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
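
To make the fallback discussed in the comment above concrete, here is a minimal sketch of how a master can fall back to the configured info port when a region server's znode carries no RSInfo data. This is not the committed patch; the class name InfoPortFallbackSketch, the infoPortFromZnode map, and rememberInfoPort are illustrative assumptions, and only the use of the "hbase.regionserver.info.port" setting is taken from the comment.

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Minimal sketch (assumed names, not the actual HBase classes) of the fallback
// described in the comment: a znode with empty/invalid data yields info port 0,
// and the master then uses the statically configured info port instead.
public class InfoPortFallbackSketch {

  private final Configuration conf;
  // Info ports parsed from each region server's znode (as RegionServerTracker would);
  // 0 means the znode was empty or held invalid data, e.g. an old-version RS.
  private final Map<String, Integer> infoPortFromZnode = new HashMap<>();

  public InfoPortFallbackSketch(Configuration conf) {
    this.conf = conf;
  }

  // Called whenever a region server znode is (re-)read; old-version servers
  // never record a port here because their znodes stay empty.
  public void rememberInfoPort(String serverName, int portFromZnode) {
    infoPortFromZnode.put(serverName, portFromZnode);
  }

  // Mirrors the idea of getRegionServerInfoPort in HMaster described above.
  public int getRegionServerInfoPort(String serverName) {
    Integer port = infoPortFromZnode.get(serverName);
    if (port == null || port == 0) {
      // Empty or invalid znode data: fall back to the configured info port
      // ("hbase.regionserver.info.port"); 60030 was the default of that era.
      return conf.getInt("hbase.regionserver.info.port", 60030);
    }
    return port;
  }
}
{code}

Under these assumptions a rolling restart behaves as the comment says: lookups for old-version servers always return the configured port, while upgraded servers get their per-instance port from the data they wrote to their znode.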