org.apache.hadoop.io.compress.SnappyCodec not found

2014-08-28 Thread arthur.hk.c...@gmail.com
Hi,

I use Hadoop 2.4.1, and I got an "org.apache.hadoop.io.compress.SnappyCodec not found" 
error:

hadoop checknative
14/08/29 02:54:51 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
14/08/29 02:54:51 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /mnt/hadoop/hadoop-2.4.1_snappy/lib/native/Linux-amd64-64/libhadoop.so
zlib:   true /lib64/libz.so.1
snappy: true /mnt/hadoop/hadoop-2.4.1_snappy/lib/native/Linux-amd64-64/libsnappy.so.1
lz4:    true revision:99
bzip2:  false

(The MapReduce smoke test is OK; output and a follow-up note below:)
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen 30 /tmp/teragenout
14/08/29 07:40:41 INFO mapreduce.Job: Running job: job_1409253811850_0002
14/08/29 07:40:53 INFO mapreduce.Job: Job job_1409253811850_0002 running in 
uber mode : false
14/08/29 07:40:53 INFO mapreduce.Job:  map 0% reduce 0%
14/08/29 07:41:00 INFO mapreduce.Job:  map 50% reduce 0%
14/08/29 07:41:01 INFO mapreduce.Job:  map 100% reduce 0%
14/08/29 07:41:02 INFO mapreduce.Job: Job job_1409253811850_0002 completed 
successfully
14/08/29 07:41:02 INFO mapreduce.Job: Counters: 31
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=197312
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=167
HDFS: Number of bytes written=3000
HDFS: Number of read operations=8
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters 
Launched map tasks=2
Other local map tasks=2
Total time spent by all maps in occupied slots (ms)=11925
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=11925
Total vcore-seconds taken by all map tasks=11925
Total megabyte-seconds taken by all map tasks=109900800
Map-Reduce Framework
Map input records=30
Map output records=30
Input split bytes=167
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=22
CPU time spent (ms)=1910
Physical memory (bytes) snapshot=357318656
Virtual memory (bytes) snapshot=1691631616
Total committed heap usage (bytes)=401997824
org.apache.hadoop.examples.terasort.TeraGen$Counters
CHECKSUM=644086318705578
File Input Format Counters 
Bytes Read=0
File Output Format Counters 
Bytes Written=3000
14/08/29 07:41:03 INFO terasort.TeraSort: starting
14/08/29 07:41:03 INFO input.FileInputFormat: Total input paths to process : 2
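
(Note: a plain teragen run like the one above does not necessarily exercise SnappyCodec at all, since nothing in it is compressed. To make the MapReduce smoke test actually touch the codec, one option is to run a job with Snappy map-output compression enabled. This is only a sketch: the wordcount job and the /tmp/wordcount_snappy output path are just examples, and the property names are the Hadoop 2.x ones.)

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  /tmp/teragenout /tmp/wordcount_snappy

If such a job succeeds, the MapReduce side can both load the SnappyCodec class and use the native library.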


However, I got the "org.apache.hadoop.io.compress.SnappyCodec not found" error when 
running the Spark smoke test program:

scala> inFILE.first()
java.lang.RuntimeException: Error in configuring object 
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:158)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:171)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.RDD.take(RDD.scala:983)
at org.apache.spark.rdd.RDD.first(RDD.scala:1015)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
at $iwC$$iwC$$iwC.<init>(<console>:20)
at $iwC$$iwC.<init>(<console>:22)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)

Re: org.apache.hadoop.io.compress.SnappyCodec not found

2014-08-28 Thread Tsuyoshi OZAWA
Hi,

It looks like a classpath problem on the Spark side.
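
One thing to try (just a sketch, not a verified fix: the paths follow the layout from your checknative output, and hadoop-common-2.4.1.jar is where SnappyCodec normally lives) is to add the Hadoop jars and the native directory to Spark's classpath, e.g. in conf/spark-env.sh:

export SPARK_CLASSPATH="$SPARK_CLASSPATH:/mnt/hadoop/hadoop-2.4.1_snappy/share/hadoop/common/hadoop-common-2.4.1.jar:/mnt/hadoop/hadoop-2.4.1_snappy/share/hadoop/common/lib/*"
export SPARK_LIBRARY_PATH="$SPARK_LIBRARY_PATH:/mnt/hadoop/hadoop-2.4.1_snappy/lib/native/Linux-amd64-64"

Then restart spark-shell and re-run inFILE.first(). On Spark 1.x the spark.executor.extraClassPath / spark.executor.extraLibraryPath properties in conf/spark-defaults.conf can be used instead, which may be needed when running on YARN.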

Thanks,
- Tsuyoshi
