I am working on importing a snappy-compressed JSON file into a Spark RDD or
Dataset. However, I run into this error:

    java.lang.UnsatisfiedLinkError:
    org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

I have set the following configuration:

SparkConf conf = new SparkConf()
        .setAppName("normal spark")
        .setMaster("local")
        .set("spark.io.compression.codec",
                "org.apache.spark.io.SnappyCompressionCodec")
        .set("spark.driver.extraLibraryPath",
                "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
        .set("spark.driver.extraClassPath",
                "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
        .set("spark.executor.extraLibraryPath",
                "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
        .set("spark.executor.extraClassPath",
                "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars");

Here D:\Downloads\spark-2.2.0-bin-hadoop2.7 is the directory where I unpacked
Spark, and I can find the snappy jar files snappy-0.2.jar and
snappy-java-1.1.2.6.jar in

D:\Downloads\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\jars\
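To rule out a typo in that path, I also listed the snappy jars in the
directory with a small plain-Java check (the SnappyJarCheck class and its
findSnappyJars helper are just my own diagnostic sketch, not part of the
Spark job):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SnappyJarCheck {
    // Returns the names of snappy*.jar files under the given directory;
    // returns an empty list if the directory does not exist or is empty.
    static List<String> findSnappyJars(String jarsDir) {
        List<String> found = new ArrayList<>();
        File[] files = new File(jarsDir).listFiles();
        if (files == null) {
            return found;
        }
        for (File f : files) {
            if (f.getName().startsWith("snappy") && f.getName().endsWith(".jar")) {
                found.add(f.getName());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // Path from my machine; adjust as needed.
        String jarsDir =
            "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars";
        System.out.println(findSnappyJars(jarsDir));
    }
}
```

On my machine this prints both snappy jars, so the classpath entries do point
at real files.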

However, none of this works, and the error message does not even change.
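Since the stack trace mentions NativeCodeLoader, I suspect the JVM cannot see
a native Hadoop library at all. This plain-Java check (my own diagnostic
sketch; it assumes the native library is named "hadoop", i.e. hadoop.dll on
Windows) shows what is actually on java.library.path:

```java
import java.io.File;

public class NativeLibCheck {
    public static void main(String[] args) {
        String libPath = System.getProperty("java.library.path", "");
        System.out.println("java.library.path = " + libPath);

        // Look for the platform-specific native Hadoop library
        // (hadoop.dll on Windows, libhadoop.so on Linux) in each entry.
        boolean found = false;
        for (String dir : libPath.split(File.pathSeparator)) {
            File lib = new File(dir, System.mapLibraryName("hadoop"));
            if (lib.exists()) {
                System.out.println("Found native Hadoop library: " + lib);
                found = true;
            }
        }
        if (!found) {
            System.out.println("No native Hadoop library on java.library.path");
        }
    }
}
```

In my case it reports that no native Hadoop library is on the path, which
would explain why changing classpath settings alone has no effect.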

How can I fix it?


Stack Overflow reference:
https://stackoverflow.com/questions/47626012/config-snappy-support-for-spark-in-windows



Regards,
Junfeng Chen
