@Anil: Good point. @Marco: Since you are on AWS, first make sure that all the instances running your region servers are reachable and that there are no problems with DNS resolution.
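A quick way to check (the hostname below is just the one from your logs; substitute your own nodes) is to run the following on the master and on each region server:

    hostname -f
    getent hosts ip-10-170-150-10.us-west-1.compute.internal

Each region server's hostname should resolve to its private IP, not to 127.0.0.1, so /etc/hosts on every node should look roughly like:

    127.0.0.1       localhost
    10.170.150.10   ip-10-170-150-10.us-west-1.compute.internal

Also, since your hbck output refers to hdfs://localhost:9000, double-check that hbase.rootdir (and fs.default.name in core-site.xml) point at the NameNode's actual hostname rather than localhost.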
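Once the hostnames resolve correctly, the hbck repair options Marco mentioned can clean up the orphaned regions. A cautious sequence on a 0.92/0.94-era hbck (run the report first, and back up .META. and the HDFS data before letting it write anything) would be roughly:

    hbase hbck -details                   # report only, makes no changes
    hbase hbck -fixAssignments            # repair unassigned / incorrectly assigned regions
    hbase hbck -fixAssignments -fixMeta   # also drop .META. rows for regions missing on HDFS,
                                          # and add rows for regions on HDFS but not in .META.

If the leftover directories under /hbase (like the one for "foo") are genuinely unwanted, remove them from HDFS first so that -fixMeta does not re-add them to .META.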
Regards,
Mohammad Tariq

On Sat, Aug 11, 2012 at 4:00 AM, anil gupta <anilgupt...@gmail.com> wrote:

> Are you running a distributed cluster?
> If yes, do you have localhost in /etc/hosts file?
>
> You are getting reference to localhost in hbck output:
> ERROR: Region { meta => null, hdfs =>
> hdfs://localhost:9000/hbase/test2/b0d4a5f294809c94fccb3d4ce10c3b23,
> deployed => } on HDFS, but not listed in META or deployed on any region
> server
>
> ~Anil
>
> On Fri, Aug 10, 2012 at 3:08 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>
>> Here's the output from hbck -details: http://pastebin.com/ZxVZEctY
>>
>> Extract:
>>
>> 6 inconsistencies detected.
>> Status: INCONSISTENT
>>
>> 6 is the number of tables that appear in "list" but cannot be operated on
>> (which btw, includes not being able to run disable/drop on them - both ops
>> say table not found). I also just noticed "foo" does not occur in a table
>> list, although I did create it at one point but was able to clear it from
>> .META. when it also was reporting table not found when trying to
>> disable/drop it. All these come from when I ^C'ed (i.e. killed) table
>> creation when I was trying to get lzo compression working and table
>> creation was hanging.
>>
>> Is there any way to repair this? I see hbck has repair options, but I want
>> to proceed with caution.
>>
>> --
>> Marco Gallotta | Mountain View, California
>> Software Engineer, Infrastructure | Loki Studios
>> fb.me/marco.gallotta | twitter.com/marcog
>> ma...@gallotta.co.za | +1 (650) 417-3313
>>
>> Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
>>
>> On Friday 10 August 2012 at 2:49 PM, anil gupta wrote:
>>
>> > Hi Marco,
>> >
>> > Did anything disastrous happen to cluster?
>> > Can you try using hbck utility of HBase.
>> > Run: 'hbase hbck -help' to get all the available options.
>> >
>> > ~Anil
>> >
>> > On Fri, Aug 10, 2012 at 2:22 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>> >
>> > > Hi there
>> > >
>> > > I have a few tables which show up in a "list" in the shell, but produce
>> > > "table not found" when performing any operation on them. There is no
>> > > reference of them in the .META. table. It seems to be resulting in some of
>> > > the hbase services being killed every so often.
>> > >
>> > > Here are some logs from master (foo is one of the tables not found):
>> > >
>> > > 2012-08-09 20:40:44,301 FATAL org.apache.hadoop.hbase.master.HMaster:
>> > > Master server abort: loaded coprocessors are: []
>> > > 2012-08-09 20:40:44,301 FATAL org.apache.hadoop.hbase.master.HMaster:
>> > > Unexpected state : foo,,1343175078663.527bb34f4bb5e40dd42e82054d7c5485.
>> > > state=PENDING_OPEN, ts=1344570044277,
>> > > server=ip-10-170-150-10.us-west-1.compute.internal,60020,1344559455110 ..
>> > > Cannot transit it to OFFLINE.
>> > >
>> > > There are also a number of the following types of error logs:
>> > >
>> > > 2012-08-09 20:10:04,308 ERROR
>> > > org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment in:
>> > > ip-10-170-150-10.us-west-1.compute.internal,60020,1344559455110 due to
>> > > org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException:
>> > > Received:OPEN for the
>> > > region:foo,,1343175078663.527bb34f4bb5e40dd42e82054d7c5485. ,which we are
>> > > already trying to OPEN.
>> > >
>> > > Any ideas how to find and remove any references to these non-existent
>> > > tables?
>> > >
>> > > --
>> > > Marco Gallotta | Mountain View, California
>> > > Software Engineer, Infrastructure | Loki Studios
>> > > fb.me/marco.gallotta (http://fb.me/marco.gallotta) | twitter.com/marcog (http://twitter.com/marcog)
>> > > ma...@gallotta.co.za (mailto:ma...@gallotta.co.za) | +1 (650) 417-3313
>> > >
>> > > Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
>> >
>> > --
>> > Thanks & Regards,
>> > Anil Gupta
>
> --
> Thanks & Regards,
> Anil Gupta