I've tested it extensively; it's safe to use as long as your tables aren't
splitting like mad, e.g. don't try to do it during a massive import
because you forgot to set the memstore size.

J-D

On Thu, May 10, 2012 at 9:11 AM, Amandeep Khurana <ama...@gmail.com> wrote:
> From what I understand, the online schema update feature (0.92.x onwards)
> would allow you to do this without disabling tables. It's experimental in
> 0.92.
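>
> If memory serves, enabling it is a single flag in hbase-site.xml on the
> master (double-check the property name against your release's
> hbase-default.xml):
>
>   <property>
>     <name>hbase.online.schema.update.enable</name>
>     <value>true</value>
>   </property>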
>
> On Thu, May 10, 2012 at 9:02 AM, Jeff Whiting <je...@qualtrics.com> wrote:
>
>> We really need to be able to do this type of thing online.  Taking your
>> table down just so you can change the compression/bloom/whatever isn't very
>> cool for a production cluster.
>>
>> My $0.02
>>
>> ~Jeff
>>
>>
>> On 5/9/2012 10:10 PM, Harsh J wrote:
>>
>>> Jiajun,
>>>
>>> Expanding on Jean's guideline (and perhaps the following can be used
>>> for the manual as well):
>>>
>>> 1. Ensure snappy codec works properly, following the test listed on
>>> http://hbase.apache.org/book/snappy.compression.html (on
>>> all RSes, to be sure)
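>>>
>>> A quick way to do that check (the path is just a scratch location for
>>> the test file, use whatever you like) is the bundled CompressionTest
>>> utility, run on each RS:
>>>
>>>  hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
>>>>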
>>>
>>> 2. Disable the table to prepare for alteration.
>>>
>>>  disable 't'
>>>>
>>> 3. Change the CF properties to use Snappy. Use the alter command
>>> for each of the colfams:
>>>
>>>  alter 'table', {NAME=>'f1', COMPRESSION=>'SNAPPY'}, {NAME=>'f2',
>>>> COMPRESSION=>'SNAPPY'} …
>>>>
>>> 4. Re-enable the table.
>>>
>>>  enable 't'
>>>>
>>> (Make sure to test the table. And do not remove the old codec
>>> immediately: you need to wait until all of the table's regions
>>> have major compacted, leaving no store files encoded with the old codec.
>>> Correct me if I am wrong there!)
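>>>
>>> If you'd rather not wait for compactions to happen on their own, you can
>>> kick one off from the shell and then eyeball a store file's metadata. The
>>> HFile tool flags below are from memory and the store file path is a
>>> made-up placeholder, so adjust both for your setup:
>>>
>>>  major_compact 't'
>>>>
>>>  hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f /hbase/t/SOME_REGION/f1/SOME_STOREFILE
>>>>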
>>>
>>> On Thu, May 10, 2012 at 7:26 AM, Jiajun Chen<cjjvict...@gmail.com>
>>>  wrote:
>>>
>>>> How do I convert an existing table that used LZO compression to Snappy?
>>>>
>>>> On 10 May 2012 05:27, Doug Meil <doug.m...@explorysmedical.com> wrote:
>>>>
>>>>  I'll update the RefGuide with this.  This is a good thing for everybody
>>>>> to
>>>>> know.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 5/9/12 5:08 PM, "Jean-Daniel Cryans"<jdcry...@apache.org>  wrote:
>>>>>
>>>>>>  Just alter the families; the old store files will get converted during
>>>>>> compaction later on.
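>>>>>>
>>>>>> A quick sanity check after the alter (assuming the shell output hasn't
>>>>>> changed in your version) is:
>>>>>>
>>>>>>  describe 't'
>>>>>>
>>>>>> and confirm the family shows COMPRESSION => 'SNAPPY'.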
>>>>>>
>>>>>> J-D
>>>>>>
>>>>>> On Wed, May 9, 2012 at 2:06 PM, Otis Gospodnetic
>>>>>> <otis_gospodne...@yahoo.com>  wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Based on the example on
>>>>>>> http://hbase.apache.org/book/snappy.compression.html and some
>>>>>>> search-hadoop.com searches I'm guessing it's not possible to switch
>>>>>>> an
>>>>>>> existing HBase table to use Snappy.
>>>>>>> i.e. a new table with snappy needs to be created and old data imported
>>>>>>> into this table.
>>>>>>>
>>>>>>> Is this correct?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Otis
>>>>>>> ----
>>>>>>> Performance Monitoring for Solr / ElasticSearch / HBase -
>>>>>>> http://sematext.com/spm
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>
>>>
>> --
>> Jeff Whiting
>> Qualtrics Senior Software Engineer
>> je...@qualtrics.com
>>
>>
