When I open it, it looks like some kind of binary content.
Ananth T Sarathy
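
The binary content is expected: .regioninfo stores a serialized HRegionInfo record, not text. The recovery idea described below, loadtable.rb walking each region directory and re-adding one .META. entry per .regioninfo file found, can be sketched roughly as follows. This is a minimal Ruby illustration with invented paths and table name, not the real script:

```ruby
# Simplified sketch of the loadtable.rb recovery idea (NOT the real script):
# walk a table directory and collect one entry per region directory that
# contains a .regioninfo file. All paths and the table name are invented.
require 'fileutils'
require 'tmpdir'

# Return the region directory names under table_dir that contain a
# .regioninfo file (the regions the real script would re-add to .META.).
def regions_with_info(table_dir)
  Dir.glob(File.join(table_dir, '*', '.regioninfo')).sort.map do |f|
    File.basename(File.dirname(f))
  end
end

# Demo against a faked rootdir layout: <root>/<table>/<region>/.regioninfo
Dir.mktmpdir do |root|
  %w[1234567890 2345678901].each do |region|
    dir = File.join(root, 'mytable', region)
    FileUtils.mkdir_p(dir)
    File.write(File.join(dir, '.regioninfo'), 'serialized HRegionInfo bytes')
  end
  puts regions_with_info(File.join(root, 'mytable')).inspect
  # => ["1234567890", "2345678901"]
end
```

The real loadtable.rb additionally deserializes each HRegionInfo and writes the .META. rows through the HBase client API; this sketch only shows the directory walk.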

On Mon, Oct 26, 2009 at 1:56 PM, stack <[email protected]> wrote:

> If you open the .regioninfo file inside each region, can you see its
> content?  You might be able to make loadtable.rb work by making it go via
> jets3t, but that would take some tinkering on your part.
>
> As to why the regions were lost from .META. in the first place, I'm not
> sure; was there no flush of .META.?  Usually this is frequent, so you
> should have some of your table info.
>
> We're mostly about hdfs in this channel and, speaking for myself, I've
> played with hadoop on s3 but not hbase, so I'm not much good to you I'm
> afraid.
>
> St.Ack
>
>
> On Mon, Oct 26, 2009 at 10:42 AM, Ananth T. Sarathy <
> [email protected]> wrote:
>
> > yeah, I see a folder with my table name, and inside that I see 3 other
> > folders named with a bunch of numbers (which I assume are the regions).
> >
> > Ananth T Sarathy
> >
> >
> > On Mon, Oct 26, 2009 at 1:38 PM, stack <[email protected]> wrote:
> >
> > > If you list the hbase homedir in s3, do you see tables and then, under
> > > that, regions?
> > >
> > > The loadtable.rb script won't work against a filesystem other than hdfs,
> > > unfortunately.
> > > St.Ack
> > >
> > > On Mon, Oct 26, 2009 at 10:36 AM, Ananth T. Sarathy <
> > > [email protected]> wrote:
> > >
> > > > well, where I have hadoop installed (not running), since I'm not
> > > > using it for hdfs
> > > >
> > > >  bin/hadoop fs -lsr /hbase.rootdir
> > > > lsr: Cannot access /hbase.rootdir: No such file or directory.
> > > >
> > > >
> > > >
> > > > Ananth T Sarathy
> > > >
> > > >
> > > > On Mon, Oct 26, 2009 at 1:24 PM, stack <[email protected]> wrote:
> > > >
> > > > > When you do bin/hadoop fs -lsr /hbase.rootdir what happens?
> > > > > St.Ack
> > > > >
> > > > > On Mon, Oct 26, 2009 at 10:05 AM, Ananth T. Sarathy <
> > > > > [email protected]> wrote:
> > > > >
> > > > > > @ the command line? I get a command not found.
> > > > > > Ananth T Sarathy
> > > > > >
> > > > > >
> > > > > > On Mon, Oct 26, 2009 at 1:02 PM, stack <[email protected]> wrote:
> > > > > >
> > > > > > > It's in the head of the 0.20 branch (we should roll a 0.20.2 soon).
> > > > > > >
> > > > > > > What happens if you do a lsr /hbase?
> > > > > > >
> > > > > > > St.Ack
> > > > > > >
> > > > > > > On Mon, Oct 26, 2009 at 9:52 AM, Ananth T. Sarathy <
> > > > > > > [email protected]> wrote:
> > > > > > >
> > > > > > > > I don't have a loadtable.rb
> > > > > > > >
> > > > > > > > is that in 0.20.0?
> > > > > > > > This is what i have in bin
> > > > > > > >
> > > > > > > > > Formatter.rb   hbase-config.sh   regionservers.sh  zookeepers.sh
> > > > > > > > HBase.rb       hbase-daemon.sh   rename_table.rb
> > > > > > > > copy_table.rb  hbase-daemons.sh  start-hbase.sh
> > > > > > > > hbase          hirb.rb           stop-hbase.sh
> > > > > > > >
> > > > > > > > Ananth T Sarathy
> > > > > > > >
> > > > > > > >
> > > > > > > > On Mon, Oct 26, 2009 at 12:45 PM, stack <[email protected]>
> > > wrote:
> > > > > > > >
> > > > > > > > > Your log would seem to say that there are no tables in hbase:
> > > > > > > > >
> > > > > > > > > 2009-10-26 11:40:13,984 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scan of 0 row(s) of meta region {server: 10.245.82.160:60020, regionname: .META.,,1, startKey: <>} complete
> > > > > > > > >
> > > > > > > > > Do as Jon suggests.  Do you see a listing of regions?  If so, it would
> > > > > > > > > seem that edits to the .META. table are not persisting on your s3 hdfs.
> > > > > > > > > You might be able to add back all tables using the bin/loadtable.rb
> > > > > > > > > script; it reads the .regioninfo files in all regions and adds an
> > > > > > > > > entry to .META. per region.
> > > > > > > > >
> > > > > > > > > St.Ack
> > > > > > > > >
> > > > > > > > > On Mon, Oct 26, 2009 at 9:31 AM, Ananth T. Sarathy <
> > > > > > > > > [email protected]> wrote:
> > > > > > > > >
> > > > > > > > > > I am confused; why would I need a hadoop home if I am using s3 and
> > > > > > > > > > the jets3t package to write to s3?
> > > > > > > > > > Ananth T Sarathy
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Mon, Oct 26, 2009 at 12:25 PM, Jonathan Gray <[email protected]> wrote:
> > > > > > > > > >
> > > > > > > > > > > Not S3, HDFS.  Can you check the web ui or use the command-line
> > > > > > > > > > > interface?
> > > > > > > > > > >
> > > > > > > > > > > $HADOOP_HOME/bin/hadoop dfs -lsr /hbase
> > > > > > > > > > >
> > > > > > > > > > > ...would be a good start
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Ananth T. Sarathy wrote:
> > > > > > > > > > >
> > > > > > > > > > >> I see all my blocks in my s3 bucket.
> > > > > > > > > > >> Ananth T Sarathy
> > > > > > > > > > >>
> > > > > > > > > > >>
> > > > > > > > > > >> On Mon, Oct 26, 2009 at 12:17 PM, Jonathan Gray <[email protected]> wrote:
> > > > > > > > > > >>
> > > > > > > > > > >>  Do you see the files/blocks in HDFS?
> > > > > > > > > > >>>
> > > > > > > > > > >>>
> > > > > > > > > > >>> Ananth T. Sarathy wrote:
> > > > > > > > > > >>>
> > > > > > > > > > >>>> I just restarted Hbase and when I go into the shell and type
> > > > > > > > > > >>>> list, none of my tables are listed, but I see all the data/blocks
> > > > > > > > > > >>>> in s3.
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> here is the master log when it's restarted
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> http://pastebin.com/m1ebb7217
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> this happened once before, but we just started over since it was
> > > > > > > > > > >>>> early in our process. This time we have a lot of data, and need
> > > > > > > > > > >>>> to keep it.
> > > > > > > > > > >>>>
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> Ananth T Sarathy
> > > > > > > > > > >>>>
> > > > > > > > > > >>>>
> > > > > > > > > > >>>>
> > > > > > > > > > >>
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
