Can you try rebooting the machine and running the repair again? It might not
sound logical, but I would give it a shot.
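
For what it's worth, the sequence I would try is roughly the following (just a
sketch, assuming the stock Hadoop 1.x and HBase start/stop scripts on a single
node; adjust HADOOP_HOME/HBASE_HOME to your install):

  $HBASE_HOME/bin/stop-hbase.sh
  $HADOOP_HOME/bin/stop-all.sh
  sudo reboot
  # once the machine is back up:
  $HADOOP_HOME/bin/start-all.sh
  $HBASE_HOME/bin/start-hbase.sh
  $HBASE_HOME/bin/hbase hbck            # report only, makes no changes
  $HBASE_HOME/bin/hbase hbck -repair    # only if it still reports inconsistencies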

PS: In my personal experience, HBase and Hadoop have never been reliable in my
standalone environment. I always trust the distributed cluster environment;
AFAIK, these things are tested extensively in distributed mode.

Best Regards,
Anil

On Aug 10, 2012, at 5:00 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:

> Nope, not in safe mode. Gar, this is going nowhere. :/ Thanks for the help so 
> far though!
> 
> Configured Capacity: 211474616320 (196.95 GB)
> Present Capacity: 169354764288 (157.72 GB)
> DFS Remaining: 162401554432 (151.25 GB)
> DFS Used: 6953209856 (6.48 GB)
> DFS Used%: 4.11%
> Under replicated blocks: 436
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> 
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
> 
> Name: 127.0.0.1:50010
> Decommission Status : Normal
> Configured Capacity: 211474616320 (196.95 GB)
> DFS Used: 6953209856 (6.48 GB)
> Non DFS Used: 42119852032 (39.23 GB)
> DFS Remaining: 162401554432(151.25 GB)
> DFS Used%: 3.29%
> DFS Remaining%: 76.79%
> Last contact: Fri Aug 10 16:59:26 PDT 2012
> 
> 
> 
> -- 
> Marco Gallotta | Mountain View, California
> Software Engineer, Infrastructure | Loki Studios
> fb.me/marco.gallotta | twitter.com/marcog
> ma...@gallotta.co.za | +1 (650) 417-3313
> 
> 
> 
> On Friday 10 August 2012 at 4:52 PM, Mohammad Tariq wrote:
> 
>> If your HDFS is in safe mode, you'll get something like this:
>> 
>> cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop dfsadmin -report
>> Safe mode is ON
>> Configured Capacity: 31111143424 (28.97 GB)
>> Present Capacity: 5309755392 (4.95 GB)
>> DFS Remaining: 4799320064 (4.47 GB)
>> DFS Used: 510435328 (486.79 MB)
>> DFS Used%: 9.61%
>> Under replicated blocks: 1
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>> 
>> -------------------------------------------------
>> Datanodes available: 1 (1 total, 0 dead)
>> 
>> Name: 127.0.0.1:50010
>> Decommission Status : Normal
>> Configured Capacity: 31111143424 (28.97 GB)
>> DFS Used: 510435328 (486.79 MB)
>> Non DFS Used: 25801388032 (24.03 GB)
>> DFS Remaining: 4799320064(4.47 GB)
>> DFS Used%: 1.64%
>> DFS Remaining%: 15.43%
>> Last contact: Sat Aug 11 05:19:18 IST 2012
>> 
>> See the "Safe mode is ON" line at the top of the report.
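>> 
>> (You can also query and clear safe mode directly; a quick reference, assuming
>> the standard Hadoop 1.x dfsadmin options:)
>> 
>>   bin/hadoop dfsadmin -safemode get     # prints whether safe mode is ON or OFF
>>   bin/hadoop dfsadmin -safemode leave   # force the NameNode out of safe mode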
>> 
>> Regards,
>> Mohammad Tariq
>> 
>> 
>> On Sat, Aug 11, 2012 at 5:20 AM, Mohammad Tariq <donta...@gmail.com> wrote:
>>> You can use "bin/hadoop dfsadmin -report" to do that. Alternatively, point
>>> your web browser to the NameNode web UI at http://localhost:50070. It'll
>>> show all the details of your HDFS.
>>> 
>>> Regards,
>>> Mohammad Tariq
>>> 
>>> 
>>> On Sat, Aug 11, 2012 at 5:16 AM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>> How do you check that?
>>>> 
>>>> --
>>>> Marco Gallotta | Mountain View, California
>>>> Software Engineer, Infrastructure | Loki Studios
>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>> 
>>>> 
>>>> On Friday 10 August 2012 at 4:44 PM, Mohammad Tariq wrote:
>>>> 
>>>>> This is pretty strange. I mean, everything seems to be in place, but we
>>>>> are stuck. Please check once whether your HDFS is in safe mode.
>>>>> 
>>>>> Regards,
>>>>> Mohammad Tariq
>>>>> 
>>>>> 
>>>>> On Sat, Aug 11, 2012 at 5:13 AM, Mohammad Tariq <donta...@gmail.com> wrote:
>>>>>> What about fs.default.name?
>>>>>> 
>>>>>> Regards,
>>>>>> Mohammad Tariq
>>>>>> 
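>>>>>> (For context: fs.default.name lives in conf/core-site.xml. A minimal
>>>>>> single-node sketch, assuming Hadoop 1.x property names; the host/port
>>>>>> should match what hbase.rootdir points at:)
>>>>>> 
>>>>>>   <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>hdfs://localhost:9000</value>
>>>>>>   </property>
>>>>>> 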
>>>>>> 
>>>>>> On Sat, Aug 11, 2012 at 5:10 AM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>>>>> It's in /var which is persistent across reboots.
>>>>>>> 
>>>>>>> --
>>>>>>> Marco Gallotta | Mountain View, California
>>>>>>> Software Engineer, Infrastructure | Loki Studios
>>>>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>>>>> 
>>>>>>> 
>>>>>>> On Friday 10 August 2012 at 4:31 PM, anil gupta wrote:
>>>>>>> 
>>>>>>>> Where are you storing your HDFS data? Is it /tmp? If it's /tmp and you
>>>>>>>> have rebooted your machine, then you will have problems.
>>>>>>>> 
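>>>>>>>> (For reference, the storage locations are controlled by dfs.name.dir and
>>>>>>>> dfs.data.dir in conf/hdfs-site.xml, falling back to hadoop.tmp.dir when
>>>>>>>> unset. A sketch with made-up paths, assuming Hadoop 1.x property names:)
>>>>>>>> 
>>>>>>>>   <property>
>>>>>>>>     <name>dfs.name.dir</name>
>>>>>>>>     <value>/var/hadoop/dfs/name</value>
>>>>>>>>   </property>
>>>>>>>>   <property>
>>>>>>>>     <name>dfs.data.dir</name>
>>>>>>>>     <value>/var/hadoop/dfs/data</value>
>>>>>>>>   </property>
>>>>>>>> 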
>>>>>>>> On Fri, Aug 10, 2012 at 4:19 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>>>>>> 
>>>>>>>>> It's a pseudo-distributed cluster, as I plan to add more nodes as we
>>>>>>>>> start gathering more data.
>>>>>>>>> 
>>>>>>>>> I get the following error when running hbck -repair, and then it stalls:
>>>>>>>>> 
>>>>>>>>> 12/08/10 16:17:27 INFO util.HBaseFsck: Sleeping 10000ms before re-checking after fix...
>>>>>>>>> Version: 0.94.0
>>>>>>>>> 12/08/10 16:17:37 INFO util.HBaseFsck: Loading regioninfos HDFS
>>>>>>>>> 12/08/10 16:17:37 INFO util.HBaseFsck: Loading HBase regioninfo from HDFS...
>>>>>>>>> Exception in thread "main" java.util.concurrent.RejectedExecutionException
>>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1956)
>>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
>>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
>>>>>>>>>         at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegionDirs(HBaseFsck.java:1059)
>>>>>>>>>         at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:504)
>>>>>>>>>         at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:304)
>>>>>>>>>         at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:377)
>>>>>>>>>         at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3139)
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
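>>>>>>>>> (Side note for the archives: besides the all-in-one -repair, hbck in 0.94
>>>>>>>>> also accepts narrower flags, which can be gentler when a full repair dies
>>>>>>>>> like this. A sketch:)
>>>>>>>>> 
>>>>>>>>>   hbase hbck -fixAssignments   # repair unassigned / incorrectly assigned regions
>>>>>>>>>   hbase hbck -fixMeta          # reconcile .META. with regions present in HDFS
>>>>>>>>> 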
>>>>>>>>> --
>>>>>>>>> Marco Gallotta | Mountain View, California
>>>>>>>>> Software Engineer, Infrastructure | Loki Studios
>>>>>>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>>>>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Friday 10 August 2012 at 4:09 PM, anil gupta wrote:
>>>>>>>>> 
>>>>>>>>>> Is it a standalone installation or pseudo-distributed?
>>>>>>>>>> I faced a similar problem a few days back in a distributed cluster and
>>>>>>>>>> used the hbck -repair option. You might give it a try.
>>>>>>>>>> 
>>>>>>>>>> ~Anil
>>>>>>>>>> 
>>>>>>>>>> On Fri, Aug 10, 2012 at 3:39 PM, Mohammad Tariq <donta...@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Could you please share your /etc/hosts file? Meantime, do a manual
>>>>>>>>>>> compaction and see if it works.
>>>>>>>>>>> 
>>>>>>>>>>> Regards,
>>>>>>>>>>> Mohammad Tariq
>>>>>>>>>>> 
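>>>>>>>>>>> (For reference, a manual major compaction from the HBase shell looks
>>>>>>>>>>> roughly like this; 'test2' stands in for whichever table is affected:)
>>>>>>>>>>> 
>>>>>>>>>>>   hbase shell
>>>>>>>>>>>   hbase(main):001:0> major_compact 'test2'
>>>>>>>>>>> 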
>>>>>>>>>>> 
>>>>>>>>>>> On Sat, Aug 11, 2012 at 4:07 AM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>>>>>>>>>> It's not a distributed cluster. I'm not processing enough data yet.
>>>>>>>>>>>> So the reference to localhost is correct.
>>>>>>>>>>>> 
>>>>>>>>>>>> --
>>>>>>>>>>>> Marco Gallotta | Mountain View, California
>>>>>>>>>>>> Software Engineer, Infrastructure | Loki Studios
>>>>>>>>>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>>>>>>>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Friday 10 August 2012 at 3:30 PM, anil gupta wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Are you running a distributed cluster?
>>>>>>>>>>>>> If yes, do you have localhost in your /etc/hosts file?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> You are getting a reference to localhost in the hbck output:
>>>>>>>>>>>>> ERROR: Region { meta => null, hdfs =>
>>>>>>>>>>>>> hdfs://localhost:9000/hbase/test2/b0d4a5f294809c94fccb3d4ce10c3b23,
>>>>>>>>>>>>> deployed => } on HDFS, but not listed in META or deployed on any
>>>>>>>>>>>>> region server
>>>>>>>>>>>>> 
>>>>>>>>>>>>> ~Anil
>>>>>>>>>>>>> 
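>>>>>>>>>>>>> (A typical single-node /etc/hosts for this kind of setup, as a sketch;
>>>>>>>>>>>>> the second line is inferred from the EC2 hostname in the logs and may
>>>>>>>>>>>>> differ on your box:)
>>>>>>>>>>>>> 
>>>>>>>>>>>>>   127.0.0.1     localhost
>>>>>>>>>>>>>   10.170.150.10 ip-10-170-150-10.us-west-1.compute.internal
>>>>>>>>>>>>> 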
>>>>>>>>>>>>> On Fri, Aug 10, 2012 at 3:08 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Here's the output from hbck -details: http://pastebin.com/ZxVZEctY
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Extract:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 6 inconsistencies detected.
>>>>>>>>>>>>>> Status: INCONSISTENT
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 6 is the number of tables that appear in "list" but cannot be operated on
>>>>>>>>>>>>>> (which btw, includes not being able to run disable/drop on them - both ops
>>>>>>>>>>>>>> say table not found). I also just noticed "foo" does not occur in a table
>>>>>>>>>>>>>> list, although I did create it at one point but was able to clear it from
>>>>>>>>>>>>>> .META. when it also was reporting table not found when trying to
>>>>>>>>>>>>>> disable/drop it. All these come from when I ^C'ed (i.e. killed) table
>>>>>>>>>>>>>> creation when I was trying to get lzo compression working and table
>>>>>>>>>>>>>> creation was hanging.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Is there any way to repair this? I see hbck has repair options, but I want
>>>>>>>>>>>>>> to proceed with caution.
>>>>>>>>>>>>>> 
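>>>>>>>>>>>>>> (One cautious path, as a sketch: hbck can be limited to the affected
>>>>>>>>>>>>>> tables by listing them after the options, e.g. "hbase hbck -details foo",
>>>>>>>>>>>>>> so a later fix pass does not have to touch the whole namespace.)
>>>>>>>>>>>>>> 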
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Marco Gallotta | Mountain View, California
>>>>>>>>>>>>>> Software Engineer, Infrastructure | Loki Studios
>>>>>>>>>>>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>>>>>>>>>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Friday 10 August 2012 at 2:49 PM, anil gupta wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi Marco,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Did anything disastrous happen to the cluster?
>>>>>>>>>>>>>>> Can you try using the hbck utility of HBase?
>>>>>>>>>>>>>>> Run 'hbase hbck -help' to get all the available options.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> ~Anil
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Fri, Aug 10, 2012 at 2:22 PM, Marco Gallotta <ma...@gallotta.co.za> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Hi there
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> I have a few tables which show up in a "list" in the shell, but produce
>>>>>>>>>>>>>>>> "table not found" when performing any operation on them. There is no
>>>>>>>>>>>>>>>> reference to them in the .META. table. It seems to be resulting in some
>>>>>>>>>>>>>>>> of the hbase services being killed every so often.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Here are some logs from master (foo is one of the tables not found):
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 2012-08-09 20:40:44,301 FATAL org.apache.hadoop.hbase.master.HMaster:
>>>>>>>>>>>>>>>> Master server abort: loaded coprocessors are: []
>>>>>>>>>>>>>>>> 2012-08-09 20:40:44,301 FATAL org.apache.hadoop.hbase.master.HMaster:
>>>>>>>>>>>>>>>> Unexpected state : foo,,1343175078663.527bb34f4bb5e40dd42e82054d7c5485.
>>>>>>>>>>>>>>>> state=PENDING_OPEN, ts=1344570044277,
>>>>>>>>>>>>>>>> server=ip-10-170-150-10.us-west-1.compute.internal,60020,1344559455110 ..
>>>>>>>>>>>>>>>> Cannot transit it to OFFLINE.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> There are also a number of the following types of error logs:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 2012-08-09 20:10:04,308 ERROR
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment in:
>>>>>>>>>>>>>>>> ip-10-170-150-10.us-west-1.compute.internal,60020,1344559455110 due to
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException:
>>>>>>>>>>>>>>>> Received:OPEN for the
>>>>>>>>>>>>>>>> region:foo,,1343175078663.527bb34f4bb5e40dd42e82054d7c5485. ,which we
>>>>>>>>>>>>>>>> are already trying to OPEN.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Any ideas how to find and remove any references to these non-existent
>>>>>>>>>>>>>>>> tables?
>>>>>>>>>>>>>>>> 
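>>>>>>>>>>>>>>>> (For anyone searching the archives later: leftover region references can
>>>>>>>>>>>>>>>> be inspected from the HBase shell by scanning .META. directly; a sketch,
>>>>>>>>>>>>>>>> using the 0.94 shell syntax:)
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>   hbase shell
>>>>>>>>>>>>>>>>   hbase(main):001:0> scan '.META.', {COLUMNS => 'info:regioninfo'}
>>>>>>>>>>>>>>>> 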
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Marco Gallotta | Mountain View, California
>>>>>>>>>>>>>>>> Software Engineer, Infrastructure | Loki Studios
>>>>>>>>>>>>>>>> fb.me/marco.gallotta | twitter.com/marcog
>>>>>>>>>>>>>>>> ma...@gallotta.co.za | +1 (650) 417-3313
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Thanks & Regards,
>>>>>>>>>>>>>>> Anil Gupta
>>>>>>>>>>>>> 
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Thanks & Regards,
>>>>>>>>>>>>> Anil Gupta
>>>>>>>>>> 
>>>>>>>>>> --
>>>>>>>>>> Thanks & Regards,
>>>>>>>>>> Anil Gupta
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Thanks & Regards,
>>>>>>>> Anil Gupta
