<property>
    <name>hbase.rootdir</name>
    <value>s3://hbase20:80/hbasedata</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
</property>
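
(For comparison, a sketch of how an S3-backed rootdir is often expressed, with
the jets3t credentials kept in the Hadoop config rather than embedded in the
URI; the bucket name and keys below are placeholders, not values from this
setup:)

    <property>
      <name>hbase.rootdir</name>
      <value>s3://MY_BUCKET/hbasedata</value>
    </property>
    <property>
      <name>fs.s3.awsAccessKeyId</name>
      <value>MY_ACCESS_KEY</value>
    </property>
    <property>
      <name>fs.s3.awsSecretAccessKey</name>
      <value>MY_SECRET_KEY</value>
    </property>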

Ananth T Sarathy


On Mon, Oct 26, 2009 at 12:57 PM, Jonathan Gray <jl...@streamy.com> wrote:

> I think it's only in 0.20.1 and above (might even only be in 0.20 branch).
>
> I'm a bit confused about what you're doing.  Reading the conf file from S3?
>
> What is the configuration value for "hbase.rootdir"?
>
>
> Ananth T. Sarathy wrote:
>
>> I don't have a loadtable.rb.
>>
>> Is that in 0.20.0?
>> This is what I have in bin:
>>
>> Formatter.rb   hbase-config.sh   regionservers.sh  zookeepers.sh
>> HBase.rb       hbase-daemon.sh   rename_table.rb
>> copy_table.rb  hbase-daemons.sh  start-hbase.sh
>> hbase          hirb.rb           stop-hbase.sh
>>
>> Ananth T Sarathy
>>
>>
>> On Mon, Oct 26, 2009 at 12:45 PM, stack <st...@duboce.net> wrote:
>>
>>> Your log would seem to say that there are no tables in HBase:
>>>
>>> 2009-10-26 11:40:13,984 INFO org.apache.hadoop.hbase.master.BaseScanner:
>>> RegionManager.metaScanner scan of 0 row(s) of meta region {server:
>>> 10.245.82.160:60020, regionname: .META.,,1, startKey: <>} complete
>>>
>>> Do as Jon suggests.  Do you see a listing of regions?  If so, it would
>>> seem that edits to the .META. table are not persisting on your S3-backed
>>> filesystem.  You might be able to add back all tables using the
>>> bin/loadtable.rb script; it reads the .regioninfo files in all regions
>>> and adds an entry to .META. for each region.
>>>
>>> St.Ack
>>>
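
(If loadtable.rb ships with your release, HBase's bundled JRuby scripts are
normally run through the hbase wrapper, e.g. as sketched below; the exact
arguments are an assumption, so check the usage the script itself prints:)

    # assumed invocation pattern for a bundled JRuby script
    ${HBASE_HOME}/bin/hbase org.jruby.Main bin/loadtable.rb
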
>>> On Mon, Oct 26, 2009 at 9:31 AM, Ananth T. Sarathy <
>>> ananth.t.sara...@gmail.com> wrote:
>>>
>>>> I am confused; why would I need a Hadoop home if I am using S3 and the
>>>> jets3t package to write to S3?
>>>> Ananth T Sarathy
>>>>
>>>>
>>>> On Mon, Oct 26, 2009 at 12:25 PM, Jonathan Gray <jl...@streamy.com>
>>>> wrote:
>>>>
>>>>> Not S3, HDFS.  Can you check the web UI or use the command-line
>>>>> interface?
>>>>>
>>>>> $HADOOP_HOME/bin/hadoop dfs -lsr /hbase
>>>>>
>>>>> ...would be a good start
>>>>>
>>>>>
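
(Since the rootdir here lives on S3 rather than HDFS, the same recursive
listing can be pointed at a fully-qualified S3 URI, assuming the jets3t
credentials are in the Hadoop config; the bucket name below is a placeholder:)

    # recursively list the HBase root on S3 (bucket name is hypothetical)
    $HADOOP_HOME/bin/hadoop dfs -lsr s3://MY_BUCKET/hbasedata
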
>>>>> Ananth T. Sarathy wrote:
>>>>>
>>>>>> I see all my blocks in my S3 bucket.
>>>>>> Ananth T Sarathy
>>>>>>
>>>>>>
>>>>>> On Mon, Oct 26, 2009 at 12:17 PM, Jonathan Gray <jl...@streamy.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Do you see the files/blocks in HDFS?
>>>>>>>
>>>>>>>
>>>>>>> Ananth T. Sarathy wrote:
>>>>>>>
>>>>>>>> I just restarted HBase and when I go into the shell and type list,
>>>>>>>> none of my tables are listed, but I see all the data/blocks in S3.
>>>>>>>>
>>>>>>>> Here is the master log when it's restarted:
>>>>>>>>
>>>>>>>> http://pastebin.com/m1ebb7217
>>>>>>>>
>>>>>>>> This happened once before, but we just started over since it was
>>>>>>>> early in our process. This time we have a lot of data, and need to
>>>>>>>> keep it.
>>>>>>>>
>>>>>>>>
>>>>>>>> Ananth T Sarathy
>>>>>>>>
