Re: Enabling compression

2012-07-24 Thread Asaf Mesika
You also need to install the Snappy shared object. I've done it using
"yum install snappy" on Fedora Core.

Sent from my iPad

On 25 July 2012, at 04:40, Dhaval Shah wrote:

> Yes, you need to add the Snappy libraries to the HBase path (I think the
> variable to set is called HBASE_LIBRARY_PATH)
>
> --
> On Wed 25 Jul, 2012 3:46 AM IST Mohit Anchlia wrote:
>
>> On Tue, Jul 24, 2012 at 2:04 PM, Dhaval Shah wrote:
>>
>>> I bet that your compression libraries are not available to HBase. Run the
>>> compression test utility and see if it can find LZO.
>>
>> That seems to be the case for SNAPPY. However, I do have snappy installed
>> and it works with hadoop just fine and HBase is running on the same
>> cluster. Is there something special I need to do for HBase?
>>
>>> Regards,
>>> Dhaval
>>>
>>> - Original Message -
>>> From: Mohit Anchlia
>>> To: user@hbase.apache.org
>>> Cc:
>>> Sent: Tuesday, 24 July 2012 4:39 PM
>>> Subject: Re: Enabling compression
>>>
>>> Thanks! I was trying it out and I see this message when I use COMPRESSION,
>>> but it works when I don't use it. Am I doing something wrong?
>>>
>>> hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
>>> => 'LZO'}
>>>
>>> ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
>>> regions are online; retries exhausted.
>>>
>>> hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}
>>>
>>> 0 row(s) in 1.1260 seconds
>>>
>>> On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans wrote:
>>>
>>>> On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari wrote:
>>>>
>>>>> Also, if I understand it correctly, this will enable the compression
>>>>> for the new put but will not compress the actual cells already stored,
>>>>> right? For that, we need to run a major compaction of the table which
>>>>> will rewrite all the cells and so compact them?
>>>>
>>>> Yeah, although you may not want to recompact everything all at once in
>>>> a live system. You can just let it happen naturally through cycles of
>>>> flushes and compactions; it's all fine.
>>>>
>>>> J-D


Re: Enabling compression

2012-07-24 Thread Dhaval Shah


Yes, you need to add the Snappy libraries to the HBase path (I think the
variable to set is called HBASE_LIBRARY_PATH)
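For readers landing on this thread, a minimal sketch of the setup being described (the package name and library path below are assumptions; adjust them for your distro and Hadoop layout):

```shell
# Install the native Snappy shared object (Fedora/RHEL-style systems)
sudo yum install -y snappy

# Point HBase at the directory containing the native codec libraries,
# e.g. in conf/hbase-env.sh, before restarting the region servers
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native
```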

--
On Wed 25 Jul, 2012 3:46 AM IST Mohit Anchlia wrote:

> On Tue, Jul 24, 2012 at 2:04 PM, Dhaval Shah wrote:
>
>> I bet that your compression libraries are not available to HBase. Run the
>> compression test utility and see if it can find LZO.
>
> That seems to be the case for SNAPPY. However, I do have snappy installed
> and it works with hadoop just fine and HBase is running on the same
> cluster. Is there something special I need to do for HBase?
>
>>
>> Regards,
>> Dhaval
>>
>>
>> - Original Message -
>> From: Mohit Anchlia 
>> To: user@hbase.apache.org
>> Cc:
>> Sent: Tuesday, 24 July 2012 4:39 PM
>> Subject: Re: Enabling compression
>>
>> Thanks! I was trying it out and I see this message when I use COMPRESSION,
>> but it works when I don't use it. Am I doing something wrong?
>>
>>
>> hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
>> => 'LZO'}
>>
>> ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
>> regions are online; retries exhausted.
>>
>> hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}
>>
>> 0 row(s) in 1.1260 seconds
>>
>>
>> On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans wrote:
>>
>> > On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari wrote:
>> > > Also, if I understand it correctly, this will enable the compression
>> > > for the new put but will not compresse the actual cells already stored
>> > > right? For that, we need to run a major compaction of the table which
>> > > will rewrite all the cells and so compact them?
>> >
>> > Yeah, although you may not want to recompact everything all at once in
>> > a live system. You can just let it happen naturally through cycles of
>> > flushes and compactions, it's all fine.
>> >
>> > J-D
>> >
>>
>>



Re: Enabling compression

2012-07-24 Thread Mohit Anchlia
On Tue, Jul 24, 2012 at 2:04 PM, Dhaval Shah wrote:

> I bet that your compression libraries are not available to HBase. Run the
> compression test utility and see if it can find LZO.

That seems to be the case for SNAPPY. However, I do have snappy installed
and it works with hadoop just fine and HBase is running on the same
cluster. Is there something special I need to do for HBase?

>
> Regards,
> Dhaval
>
>
> - Original Message -
> From: Mohit Anchlia 
> To: user@hbase.apache.org
> Cc:
> Sent: Tuesday, 24 July 2012 4:39 PM
> Subject: Re: Enabling compression
>
> Thanks! I was trying it out and I see this message when I use COMPRESSION,
> but it works when I don't use it. Am I doing something wrong?
>
>
> hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
> => 'LZO'}
>
> ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
> regions are online; retries exhausted.
>
> hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}
>
> 0 row(s) in 1.1260 seconds
>
>
> On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans wrote:
>
> > On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
> >  wrote:
> > > Also, if I understand it correctly, this will enable the compression
> > > for the new put but will not compresse the actual cells already stored
> > > right? For that, we need to run a major compaction of the table which
> > > will rewrite all the cells and so compact them?
> >
> > Yeah, although you may not want to recompact everything all at once in
> > a live system. You can just let it happen naturally through cycles of
> > flushes and compactions, it's all fine.
> >
> > J-D
> >
>
>


Re: Enabling compression

2012-07-24 Thread Dhaval Shah
I bet that your compression libraries are not available to HBase. Run the
compression test utility and see if it can find LZO.
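The compression test utility mentioned above can be invoked like this (a sketch; the file path is arbitrary and both codec checks are shown only as examples):

```shell
# Writes a small test file with the given codec and reads it back;
# it throws an exception if the native libraries cannot be loaded
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test lzo
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test snappy
```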


Regards,
Dhaval


- Original Message -
From: Mohit Anchlia 
To: user@hbase.apache.org
Cc: 
Sent: Tuesday, 24 July 2012 4:39 PM
Subject: Re: Enabling compression

Thanks! I was trying it out and I see this message when I use COMPRESSION,
but it works when I don't use it. Am I doing something wrong?


hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
=> 'LZO'}

ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
regions are online; retries exhausted.

hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}

0 row(s) in 1.1260 seconds


On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans wrote:

> On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
>  wrote:
> > Also, if I understand it correctly, this will enable the compression
> > for the new put but will not compresse the actual cells already stored
> > right? For that, we need to run a major compaction of the table which
> > will rewrite all the cells and so compact them?
>
> Yeah, although you may not want to recompact everything all at once in
> a live system. You can just let it happen naturally through cycles of
> flushes and compactions, it's all fine.
>
> J-D
>



Re: Enabling compression

2012-07-24 Thread Mohit Anchlia
Thanks! I was trying it out and I see this message when I use COMPRESSION,
but it works when I don't use it. Am I doing something wrong?


hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
=> 'LZO'}

ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
regions are online; retries exhausted.

hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}

0 row(s) in 1.1260 seconds


On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans wrote:

> On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
>  wrote:
> > Also, if I understand it correctly, this will enable the compression
> > for the new put but will not compresse the actual cells already stored
> > right? For that, we need to run a major compaction of the table which
> > will rewrite all the cells and so compact them?
>
> Yeah, although you may not want to recompact everything all at once in
> a live system. You can just let it happen naturally through cycles of
> flushes and compactions, it's all fine.
>
> J-D
>


Re: Enabling compression

2012-07-24 Thread Jean-Daniel Cryans
On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
 wrote:
> Also, if I understand it correctly, this will enable the compression
> for the new put but will not compresse the actual cells already stored
> right? For that, we need to run a major compaction of the table which
> will rewrite all the cells and so compact them?

Yeah, although you may not want to recompact everything all at once in
a live system. You can just let it happen naturally through cycles of
flushes and compactions; it's all fine.

J-D
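If you do want to rewrite everything immediately rather than waiting for natural compaction cycles, a major compaction can be triggered from the HBase shell (a sketch; 'my_table' is a placeholder and this needs a running cluster):

```shell
# Force a major compaction so existing store files are rewritten
# with the newly configured compression codec
echo "major_compact 'my_table'" | hbase shell
```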


Re: Enabling compression

2012-07-24 Thread Jean-Marc Spaggiari
Also, if I understand it correctly, this will enable compression
for new puts but will not compress the cells that are already stored,
right? For that, we need to run a major compaction of the table, which
will rewrite all the cells and so compress them?

I'm not 100% sure about that, so it's half a comment, half a question.

2012/7/24, Rob Roland:
> Yes.
>
> You'll need to disable the table, then alter it.
>
> disable 'my_table'
> alter 'my_table', {NAME => 'my_column_family', COMPRESSION => 'snappy'}
> enable 'my_table'
>
> You don't enable compression for the whole table - you enable it per column
> family.  (At least this is the case on CDH3's HBase)
>
> On Tue, Jul 24, 2012 at 1:28 PM, Mohit Anchlia
> wrote:
>
>> Is it possible to enable compression on an already existing table?
>>
>


Re: Enabling compression

2012-07-24 Thread Rob Roland
Yes.

You'll need to disable the table, then alter it.

disable 'my_table'
alter 'my_table', {NAME => 'my_column_family', COMPRESSION => 'snappy'}
enable 'my_table'

You don't enable compression for the whole table - you enable it per column
family.  (At least this is the case on CDH3's HBase)
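One way to confirm the change took effect after re-enabling the table (a sketch; 'my_table' is a placeholder, and the codec name is typically displayed upper-cased):

```shell
# The column family descriptor should now list the codec,
# e.g. COMPRESSION => 'SNAPPY'
echo "describe 'my_table'" | hbase shell
```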

On Tue, Jul 24, 2012 at 1:28 PM, Mohit Anchlia wrote:

> Is it possible to enable compression on an already existing table?
>


Re: Enabling compression

2012-07-24 Thread Jean-Daniel Cryans
See http://hbase.apache.org/book.html#changing.compression

J-D

On Tue, Jul 24, 2012 at 1:28 PM, Mohit Anchlia  wrote:
> Is it possible to enable compression on an already existing table?