Thanks for the write-up, Kevin. Many users will benefit from this.

Regards
Ram
> -----Original Message-----
> From: Kevin O'dell [mailto:kevin.od...@cloudera.com]
> Sent: Tuesday, October 16, 2012 9:42 PM
> To: user@hbase.apache.org
> Subject: Re: hbase can't drop a table
>
> If you get into that situation again:
>
> 1.) Verify that you don't have any remnants of the tables in HDFS:
>       hadoop fs -ls /hbase/
>
> 2.) If you do have any remnants and you don't care about these tables:
>       hadoop fs -mv /hbase/<table_name> /tmp
>
> 3.) ./bin/hbase hbck -fixMeta -fixAssignments
>
> This should clean up META to match your HDFS and allow you to move
> forward. If not, you may need to restart HBase in case you had a RIT or
> something like that hanging around.
>
> On Tue, Oct 16, 2012 at 5:06 AM, 唐 颖 <ivytang0...@gmail.com> wrote:
>
> > After checking the .META. table, the ivytest_deu and deu_ivytest
> > entries do exist:
> >
> > ROW: ivytest_deu,,1348821681817.77eb091b4753dd3b713f29c4c3e0277c.
> >   column=info:regioninfo, timestamp=1348821682041, value={NAME =>
> >     'ivytest_deu,,1348821681817.77eb091b4753dd3b713f29c4c3e0277c.',
> >     STARTKEY => '', ENDKEY => '', ENCODED => 77eb091b4753dd3b713f29c4c3e0277c,}
> >   column=info:server, timestamp=1350032968219, value=ELEX-LA-WEB10:61020
> >   column=info:serverstartcode, timestamp=1350032968219, value=1350032549030
> >
> > ROW: deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.
> >   column=info:regioninfo, timestamp=1348826121970, value={NAME =>
> >     'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.',
> >     STARTKEY => '', ENDKEY => '', ENCODED => 985d6ca9986d7d8cfaf82daf523fcd45,}
> >   column=info:server, timestamp=1348826122164, value=ELEX-LA-WEB10:61020
> >   column=info:serverstartcode, timestamp=1348826122164, value=1348648468768
> >
> > And I deleted them from the .META. table. Things seem OK.
> >
> > The region server won't try to load these regions.
> >
> > Yes, it seems that the HTableDescriptor file got deleted but the META
> > is having the entry.
> >
> > Thanks!
> >
> > On 2012-10-16, at 4:52 PM, "Ramkrishna.S.Vasudevan" <
> > ramkrishna.vasude...@huawei.com> wrote:
> >
> > > What does the 'list' command show? Does it say the table exists or not?
> > >
> > > What I can infer here is that the HTableDescriptor file got deleted but
> > > the META is having the entry. Any chance of the HTD getting accidentally
> > > deleted in your cluster?
> > >
> > > The hbck tool with -fixOrphanTables should at least try to create the
> > > HTableDescriptor file, I suppose. Then restart the cluster and see what
> > > happens.
> > > I will not be able to access the logs even if you add them to pastebin.
> > > But please do it so that someone else who has access can look into it.
> > >
> > > Regards
> > > Ram
> > >> -----Original Message-----
> > >> From: 张磊 [mailto:zhang...@youku.com]
> > >> Sent: Tuesday, October 16, 2012 1:44 PM
> > >> To: 'user@hbase.apache.org'
> > >> Subject: RE: hbase can't drop a table
> > >>
> > >> Hope this can help you!
> > >> https://issues.apache.org/jira/browse/HBASE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13418790#comment-13418790
> > >>
> > >> Fowler Zhang
> > >>
> > >> -----Original Message-----
> > >> From: 唐 颖 [mailto:ivytang0...@gmail.com]
> > >> Sent: October 16, 2012 16:08
> > >> To: user@hbase.apache.org
> > >> Subject: Re: hbase can't drop a table
> > >>
> > >> version 0.94.0, r8547
> > >>
> > >> And the table is ivytest_deu.
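The manual .META. cleanup described above can be sketched as a dry-run script. This is only a sketch: it prints the 0.94-era `hbase shell` statements for the two row keys from the scan rather than executing them, so review the output and then pipe it into `hbase shell` yourself. (On 0.94 the catalog table is named `.META.`, and `deleteall` removes every cell in a row.)

```shell
# Dry-run sketch: prints the hbase shell statements that would remove the
# two stale catalog rows shown in the scan above. Nothing is executed here;
# pipe the output into "hbase shell" to actually apply it.
ROW1="ivytest_deu,,1348821681817.77eb091b4753dd3b713f29c4c3e0277c."
ROW2="deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45."

# printf reuses the format string, emitting one deleteall per row key
printf "deleteall '.META.', '%s'\n" "$ROW1" "$ROW2"
```

Note that `hbck -fixMeta`, mentioned elsewhere in the thread, can often repair these rows without hand-editing .META., which is generally safer than manual deletes.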
> > >>
> > >> On 2012-10-16, at 3:58 PM, "Ramkrishna.S.Vasudevan"
> > >> <ramkrishna.vasude...@huawei.com> wrote:
> > >>
> > >>> Which version of HBase?
> > >>>
> > >>> The logs below that you have attached are about a different table,
> > >>> right: 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
> > >>> And the one you are trying to drop is 'ivytest_deu'.
> > >>>
> > >>> Regards
> > >>> Ram
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: 唐 颖 [mailto:ivytang0...@gmail.com]
> > >>>> Sent: Tuesday, October 16, 2012 1:23 PM
> > >>>> To: user@hbase.apache.org
> > >>>> Subject: hbase can't drop a table
> > >>>>
> > >>>> I disabled the table ivytest_deu, then dropped it. An error occurred:
> > >>>>
> > >>>> ERROR: java.io.IOException: java.io.IOException: HTableDescriptor
> > >>>> missing for ivytest_deu
> > >>>>   at org.apache.hadoop.hbase.master.handler.TableEventHandler.getTableDescriptor(TableEventHandler.java:174)
> > >>>>   at org.apache.hadoop.hbase.master.handler.DeleteTableHandler.<init>(DeleteTableHandler.java:44)
> > >>>>   at org.apache.hadoop.hbase.master.HMaster.deleteTable(HMaster.java:1143)
> > >>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >>>>   at java.lang.reflect.Method.invoke(Method.java:597)
> > >>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> > >>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> > >>>>
> > >>>> Here is some help for this command:
> > >>>> Drop the named table. Table must first be disabled. If table has more
> > >>>> than one region, run a major compaction on .META.:
> > >>>>
> > >>>>   hbase> major_compact ".META."
> > >>>>
> > >>>> The major_compact ".META." doesn't work.
> > >>>> Then I tried to create it, but HBase says:
> > >>>>
> > >>>> ERROR: Table already exists: ivytest_deu!
> > >>>>
> > >>>> After checking the region server log, the region server is always
> > >>>> trying to load this region:
> > >>>>
> > >>>> 2012-10-16 00:00:00,308 INFO
> > >>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Received request
> > >>>> to open region:
> > >>>> deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.
> > >>>> 2012-10-16 00:00:00,313 WARN
> > >>>> org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder
> > >>>> is in HBase's root directory and doesn't contain a table descriptor,
> > >>>> do consider deleting it: deu_ivytest
> > >>>> 2012-10-16 00:00:00,358 DEBUG
> > >>>> org.apache.hadoop.hbase.regionserver.HRegion: Opening region: {NAME =>
> > >>>> 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.',
> > >>>> STARTKEY => '', ENDKEY => '', ENCODED =>
> > >>>> 985d6ca9986d7d8cfaf82daf523fcd45,}
> > >>>> 2012-10-16 00:00:00,358 DEBUG
> > >>>> org.apache.hadoop.hbase.regionserver.HRegion: Registered protocol
> > >>>> handler:
> > >>>> region=deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.
> > >>>> protocol=com.xingcloud.adhocprocessor.hbase.coprocessor.DEUColumnAggregationProtocol
> > >>>> 2012-10-16 00:00:00,358 ERROR
> > >>>> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler:
> > >>>> Failed open of
> > >>>> region=deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.,
> > >>>> starting to roll back the global memstore size.
> > >>>> 2012-10-16 00:00:00,358 INFO
> > >>>> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler:
> > >>>> Opening of region {NAME =>
> > >>>> 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.',
> > >>>> STARTKEY => '', ENDKEY => '', ENCODED =>
> > >>>> 985d6ca9986d7d8cfaf82daf523fcd45,} failed, marking as FAILED_OPEN in
> > >>>> ZK
> > >>>>
> > >>>> And we have an endpoint in HBase. After HBase tried to load this
> > >>>> table ivytest_deu 90,000 times, the endpoint class has also been
> > >>>> loaded 90,000 times.
> > >>>> The JVM memory has been filled.
> > >>>> The gcutil shows:
> > >>>>
> > >>>> S0C     S1C     S0U     S1U  EC       EU       OC        OU        PC      PU      YGC    YGCT     FGC   FGCT      GCT
> > >>>> 34880.0 34880.0 34648.1 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> > >>>> 34880.0 34880.0 34648.1 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> > >>>> 34880.0 34880.0 34648.1 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> > >>>> 34880.0 34880.0 34648.1 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> > >>>> 34880.0 34880.0 34880.0 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24600 28481.974 31930.165
> > >>>> 34880.0 34880.0 34880.0 0.0  209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24600 28481.974 31930.165
> > >>>>
> > >>>> The jmap dump file shows:
> > >>>>
> > >>>> 3982039 instances of class org.apache.hadoop.hbase.KeyValue
> > >>>> 191050 instances of class org.apache.hadoop.fs.Path
> > >>>> 187364 instances of class org.cliffc.high_scale_lib.ConcurrentAutoTable$CAT
> > >>>> 187301 instances of class org.cliffc.high_scale_lib.Counter
> > >>>> 102272 instances of class net.sf.ehcache.concurrent.ReadWriteLockSync
> > >>>> 93652 instances of class org.apache.hadoop.hbase.HRegionInfo
> > >>>> 93650 instances of class com.google.common.collect.MutableClassToInstanceMap
> > >>>> 93650 instances of class DEUColumnAggregationEndpoint
> > >>>>
> > >>>> DEUColumnAggregationEndpoint is our endpoint class.
> > >>>>
> > >>>> We guess that checking this table and loading the endpoint class
> > >>>> 90,000 times led to this memory leak.
> > >>>>
> > >>>> But how to drop this table?
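For reference, GC and heap figures like the two dumps above come from the stock JDK tools. A minimal sketch follows; the pid `12345` is a placeholder for the region server's JVM process id, and the commands are only echoed here so the snippet is safe to run anywhere (drop the `echo`s to run for real):

```shell
# Sketch of how jstat/jmap output like the above is typically gathered.
# RS_PID is a placeholder; substitute the region server's real JVM pid.
RS_PID=12345

# GC utilisation snapshot every 1000 ms (same columns as the gcutil table above)
echo jstat -gcutil $RS_PID 1000

# Instance-count histogram (source of the "instances of class" counts above)
echo jmap -histo $RS_PID
```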
> > >>>>
> > >>>
> > >
>
> --
> Kevin O'Dell
> Customer Operations Engineer, Cloudera
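Kevin's three recovery steps at the top of the thread can be collected into a small dry-run script. This is a sketch, not an official procedure: the table name is hard-coded for this thread's case, and each command is only echoed so you can review it first (remove the `echo`s to execute against a real cluster):

```shell
# Dry-run sketch of the recovery steps from Kevin's reply above.
# TABLE is this thread's table; adjust for your own cluster.
TABLE=ivytest_deu

# 1) Look for leftover table data under HBase's root directory in HDFS
echo hadoop fs -ls /hbase/

# 2) Move the orphaned table directory aside (only if you don't need the data)
echo hadoop fs -mv /hbase/$TABLE /tmp

# 3) Repair .META. and region assignments to match what is now in HDFS
echo ./bin/hbase hbck -fixMeta -fixAssignments
```

Moving the directory to /tmp rather than deleting it keeps a recovery path in case the data turns out to be needed after all.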