[ https://issues.apache.org/jira/browse/HBASE-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack resolved HBASE-2766.
--------------------------

    Resolution: Cannot Reproduce

Resolving as stale and can't reproduce (I've been doing a bunch of rolling restart testing lately, and the cluster comes back for me when one server hosts both meta and root).

> cluster may not recover if RS hosting ROOT and META together crashes
> --------------------------------------------------------------------
>
>                 Key: HBASE-2766
>                 URL: https://issues.apache.org/jira/browse/HBASE-2766
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Eugene Koontz
>            Assignee: Eugene Koontz
>
> If both -ROOT- and .META. are located on the same regionserver, and that
> regionserver crashes, the master is unable to reassign these tables to a new
> regionserver.
> Andrew writes:
> "I ran a webtable scenario up on EC2 using the latest TM-2 and a cluster with
> 1 ZK, 1 master, 5 slaves, and 1 auxiliary node, using the HBase cluster
> scripts at https://tm-files.s3.amazonaws.com/hbase-ec2.tar.bz2. On the aux
> node I uploaded the Faulkner utility --
> https://tm-files.s3.amazonaws.com/faulkner.tar.gz -- and ran the
> 'webtable.sh' script in that tarball. On the master I waited about 30 minutes
> for a fair number of regions to proliferate and then ran:
>
>   # nice -10 hbase shell
>   hbase> count 'TestTable'
>
> and walked away, leaving it to chew on the heavy write load and scan.
> At some point during this test scenario a region server crashed due to a JVM
> segfault. The client (Faulkner) never recovered.
> As far as I can see, in this test scenario the master never reassigns regions
> away from a crashed RS."

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira