So you are not using Hadoop at all? HBase is connecting directly to S3? As far as I know, that is not possible. Even if it is possible, I don't recommend it, and we can't provide much help, because in general we don't really recommend using S3 at all (even with HDFS).
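
For what it's worth, HBase never talks to storage directly; it always goes
through Hadoop's FileSystem API, and jets3t is just the library that Hadoop's
s3:// FileSystem uses under the hood. A minimal sketch of the relevant
hbase-site.xml entry (the bucket name is hypothetical; AWS credentials would
go in the fs.s3.* properties):

  <property>
    <name>hbase.rootdir</name>
    <value>s3://your-bucket/hbase</value>
  </property>

So even with an s3:// rootdir, the Hadoop jars are still in play.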

stack wrote:
Your log would seem to say that there are no tables in hbase:

2009-10-26 11:40:13,984 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.metaScanner scan of 0 row(s) of meta region {server:
10.245.82.160:60020, regionname: .META.,,1, startKey: <>} complete

Do as Jon suggests.  Do you see a listing of regions?  If so, it would seem
that edits to the .META. table are not persisting on your S3-backed HDFS.
You might be able to add all the tables back using the bin/loadtable.rb
script; it reads the .regioninfo files in all regions and adds an entry to
.META. for each region (a sketch of the invocation follows).
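
A sketch of how that might be invoked, assuming an HBase 0.20-era layout
(the script is JRuby, run through the hbase launcher; arguments vary by
version, so check the usage it prints when run without any):

  ${HBASE_HOME}/bin/hbase org.jruby.Main ${HBASE_HOME}/bin/loadtable.rb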

St.Ack

On Mon, Oct 26, 2009 at 9:31 AM, Ananth T. Sarathy <
ananth.t.sara...@gmail.com> wrote:

I am confused. Why would I need a Hadoop home if I am using S3 and the
jets3t package to write to S3?
Ananth T Sarathy


On Mon, Oct 26, 2009 at 12:25 PM, Jonathan Gray <jl...@streamy.com> wrote:

Not S3, HDFS.  Can you check the web UI or use the command-line
interface?

$HADOOP_HOME/bin/hadoop dfs -lsr /hbase

...would be a good start
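
If the cluster's state has been persisting, that listing should show a
-ROOT- and a .META. directory plus one directory per table, with each
region directory holding a .regioninfo file. A hypothetical healthy layout
(names made up, per 0.20-era HBase):

  /hbase/-ROOT-
  /hbase/.META.
  /hbase/mytable/1028785192/.regioninfo

If /hbase is empty or missing, the edits never reached the filesystem that
HBase is actually rooted on.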


Ananth T. Sarathy wrote:

I see all my blocks in my S3 bucket.
Ananth T Sarathy


On Mon, Oct 26, 2009 at 12:17 PM, Jonathan Gray <jl...@streamy.com>
wrote:

 Do you see the files/blocks in HDFS?

Ananth T. Sarathy wrote:

I just restarted HBase, and when I go into the shell and type list, none
of my tables are listed, but I see all the data/blocks in S3.

here is the master log when it's restarted

http://pastebin.com/m1ebb7217

This happened once before, but we just started over since it was early in
our process. This time we have a lot of data and need to keep it.


Ananth T Sarathy



