Why can't I use this version:
http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1


On 17 March 2011 17:29, Ferdy Galema <ferdy.gal...@kalooga.com> wrote:

> Updating the lzo libraries resolved the problem. Thanks for pointing it
> out and thanks to Todd Lipcon for his hadoop-lzo-packager.
>
>
> On 03/16/2011 06:35 PM, Stack wrote:
>
>> Poking in our mail archives, does it help?  For example:
>> http://search-hadoop.com/m/QMDV41Sh1GI/lzo+compression&subj=LZO+Compression
>> St.Ack
>>
>> On Wed, Mar 16, 2011 at 10:28 AM, Ferdy Galema <ferdy.galema@kalooga.com> wrote:
>>
>>
>>> We upgraded to Hadoop 0.20.1 and HBase 0.90.1 (both CDH3B4). We are using
>>> 64-bit machines.
>>>
>>> Starting goes great, only right after the first compaction we get this
>>> error:
>>> Uncaught exception in service thread regionserver60020.compactor
>>> java.lang.AbstractMethodError: com.hadoop.compression.lzo.LzoCompressor.reinit(Lorg/apache/hadoop/conf/Configuration;)V
>>>        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:105)
>>>        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
>>>        at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:200)
>>>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.getCompressingStream(HFile.java:397)
>>>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.newBlock(HFile.java:383)
>>>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkBlockBoundary(HFile.java:354)
>>>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:536)
>>>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:501)
>>>        at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:836)
>>>        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:935)
>>>        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:733)
>>>        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:769)
>>>        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:714)
>>>        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:81)
>>>
>>> LZO worked fine before the upgrade. This is how I believe we used it:
>>> # LZO compression in HBase passes through three layers:
>>> # 1) hadoop-gpl-compression-*.jar in the hbase/lib directory; the entry point
>>> # 2) libgplcompression.* in the hbase native lib directory; the native connectors
>>> # 3) liblzo2.so.2 in the hbase native lib directory; the base native library
>>>
>>> Anyway, it would be great if somebody could help us out.
>>>
>>>
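The three layers listed in the quoted message can be sanity-checked with a small script. This is only a sketch: the `HBASE_HOME` default and the `Linux-amd64-64` native directory name are assumptions modeled on a typical 64-bit HBase install, not paths confirmed in this thread, so adjust them to your layout.

```shell
# Sketch: verify the three LZO layers are in place.
# HBASE_HOME and the native dir name are assumptions; adjust to your install.
HBASE_HOME=${HBASE_HOME:-/usr/lib/hbase}
NATIVE_DIR="$HBASE_HOME/lib/native/Linux-amd64-64"

check() {
  # $1 = description, $2 = glob pattern (left unquoted below so it expands)
  if ls $2 >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1 ($2)"
  fi
}

check "layer 1: GPL-compression jar" "$HBASE_HOME/lib/hadoop-*lzo*.jar"
check "layer 2: native connectors"   "$NATIVE_DIR/libgplcompression.*"
check "layer 3: base LZO library"    "$NATIVE_DIR/liblzo2.so.2"
```

If layer 1 is present but layers 2 or 3 are missing, the codec loads and then fails at runtime; if the jar itself predates the `Compressor.reinit(Configuration)` interface method, you get exactly the `AbstractMethodError` shown above, which is why rebuilding against the current Hadoop (e.g. via hadoop-lzo-packager) fixes it.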


-- 

陈加俊 (Chen Jiajun), Project Manager
Youxun Times (Beijing) Network Technology Co., Ltd.
UUWatch, www.uuwatch.com

Address: Room 207, Haohai Building, 7 Shangdi 5th Street, Haidian District, Beijing

Tel: 010-82895510
Fax: 010-82896636
Mobile: 15110038983
Email: cjjvict...@gmail.com
