Please remove LzoCodec (and LzopCodec) from your config. Hadoop tries to load every class listed in io.compression.codecs when the job's input splits are computed, whether or not the job actually reads compressed data, and the hadoop-lzo classes are not on your classpath.

"Cleaning up the staging area" just means the JobClient is deleting the temporary job-submission directory under .staging because the submission failed; it is a side effect of the exception, not a separate problem.
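For io.compression.codecs, something like this should work (a minimal sketch, assuming you only need the built-in codecs; also delete the io.compression.codec.lzo.class property entirely):

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  <description>A list of the compression codec classes that can be used
               for compression/decompression.</description>
</property>

Rerunning the same wordcount command after that should get past split calculation.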

Cheers

On Feb 12, 2014, at 5:12 PM, Li Li <fancye...@gmail.com> wrote:

> <property>
>  <name>io.compression.codecs</name>
>  
> <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
>  <description>A list of the compression codec classes that can be used
>               for compression/decompression.</description>
> </property>
> 
> <property>
>  <name>io.compression.codec.lzo.class</name>
>  <value>com.hadoop.compression.lzo.LzoCodec</value>
> </property>
> 
> On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>> What's the value of the "io.compression.codecs" config parameter?
>> 
>> Thanks
>> 
>> 
>> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fancye...@gmail.com> wrote:
>>> 
>>> I am running the wordcount example but encountered an exception.
>>> I googled and learned that LZO compression's license is incompatible with Apache's,
>>> so it is not built in.
>>> The question is: I am using the default configuration of hadoop 1.2.1, so why
>>> does it need LZO?
>>> Another question: what does "Cleaning up the staging area" mean?
>>> 
>>> 
>>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt /lili/test
>>>
>>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process : 1
>>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>>> java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
>>>        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>>        at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>>        at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>>        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>>        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>>        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>>        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at javax.security.auth.Subject.doAs(Subject.java:415)
>>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>>        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>>        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>>        at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>>        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>>        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>>> Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>>        at java.lang.Class.forName0(Native Method)
>>>        at java.lang.Class.forName(Class.java:264)
>>>        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>>        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>> 
>> 
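To the part of the original question not answered above (why a plain wordcount needs LZO at all): the job never reads LZO data, but TextInputFormat.isSplitable constructs a CompressionCodecFactory during split calculation (visible in the trace), and the factory eagerly instantiates every class named in io.compression.codecs. A minimal Java sketch that reproduces the failure outside MapReduce, assuming the Hadoop 1.2.1 jars are on the classpath but hadoop-lzo is not (the class name CodecLoadCheck is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecLoadCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Same shape as the io.compression.codecs value quoted above,
        // trimmed to the two entries that matter for the reproduction.
        conf.set("io.compression.codecs",
                "org.apache.hadoop.io.compress.DefaultCodec,"
                + "com.hadoop.compression.lzo.LzoCodec");
        // The factory resolves every listed class up front, so this throws
        // "java.lang.IllegalArgumentException: Compression codec
        // com.hadoop.compression.lzo.LzoCodec not found." even though
        // nothing LZO-compressed is being read.
        new CompressionCodecFactory(conf);
    }
}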
