Can you read a Snappy-compressed file from HDFS at all? It looks like libsnappy.so is 
not on the Hadoop native library path.
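One way to confirm this, and then work around it, is sketched below. The `/opt/hadoop/lib/native` directory is only an assumed install location; substitute wherever libsnappy.so actually lives on your nodes.

```shell
# 1) Check whether this Hadoop build can load the native Snappy codec.
#    A line like "snappy: true /usr/lib/libsnappy.so.1" means it was
#    found; "snappy: false" is consistent with the UnsatisfiedLinkError.
hadoop checknative -a

# 2) If it reports false, expose the directory containing libsnappy.so
#    to both the Spark driver and the executors (path is an assumption):
spark-shell \
  --driver-library-path /opt/hadoop/lib/native \
  --conf spark.executor.extraLibraryPath=/opt/hadoop/lib/native
```

The same two properties can be set permanently in spark-defaults.conf instead of being passed on every invocation.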

On Thursday, April 2, 2015 at 10:13 AM, Nick Travers wrote:

> Has anyone else encountered the following error when trying to read a snappy
> compressed sequence file from HDFS?
> 
> *java.lang.UnsatisfiedLinkError:
> org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z*
> 
> The following works for me when the file is uncompressed:
> 
> import org.apache.hadoop.io._
> val hdfsPath = "hdfs://nost.name/path/to/folder"
> val file = sc.sequenceFile[BytesWritable,String](hdfsPath)
> file.count()
> 
> but fails when the encoding is Snappy.
> 
> I've seen some stuff floating around on the web about having to explicitly
> enable support for Snappy in spark, but it doesn't seem to work for me:
> http://www.ericlin.me/enabling-snappy-support-for-sharkspark
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-snappy-and-HDFS-tp22349.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org

