java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path

2015-07-21 Thread stark_summer
*Java:*
java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)

*Scala:*
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

*hadoop cluster:*
51 servers running hadoop-2.3-cdh-5.1.0, with Snappy also set up:
ls /home/cluster/apps/hadoop/lib/native/libsnappy*
/home/cluster/apps/hadoop/lib/native/libsnappyjava.so 
/home/cluster/apps/hadoop/lib/native/libsnappy.so.1
/home/cluster/apps/hadoop/lib/native/libsnappy.so 
/home/cluster/apps/hadoop/lib/native/libsnappy.so.1.1.3

When I submit a Hadoop wordcount job with SnappyCodec, it runs successfully:

./bin/hadoop jar ~/opt/project/hadoop-demo/out/artifacts/hadoop_demo/hadoop-demo.jar \
  com.hadoop.demo.mapreduce.base.WordCountWithSnappyCodec \
  -Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
  /data/hadoop/wordcount/input output13

But when I submit a Spark job in yarn-cluster mode:

/home/dp/spark/spark-1.4/spark-1.4.1-test/bin/spark-submit --master yarn-cluster \
  --executor-memory 3g --driver-memory 1g \
  --class org.apache.spark.examples.SparkPi \
  /home/dp/spark/spark-1.4/spark-1.4.1-test/examples/target/spark-examples_2.10-1.4.1.jar 10

PS: As far as I know, Spark uses snappy-java 1.1.1.7, while hadoop-2.3-cdh-5.1.0
ships snappy-java 1.0.4.1, so I replaced the snappy-java in hadoop-2.3-cdh-5.1.0
with 1.1.1.7, but that does not work either.
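A quick way to confirm which snappy-java version each side actually bundles is
to look at the jar names, which carry the version ($SPARK_HOME and $HADOOP_HOME
below are illustrative stand-ins for the actual install directories):

  # snappy-java bundled with the Spark build
  find $SPARK_HOME -name 'snappy-java-*.jar'
  # snappy-java shipped with the Hadoop distribution
  find $HADOOP_HOME -name 'snappy-java-*.jar'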
The job still fails with the error below:

15/07/22 10:29:55 DEBUG component.AbstractLifeCycle: STARTED o.s.j.s.ServletContextHandler{/metrics/json,null}
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:317)
    at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
    at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
    at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:150)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:69)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:513)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:483)
Caused by: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
    ... 23 more
15/07/22 10:29:55 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:69)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:513)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

Error: no snappyjava in java.library.path

2015-02-26 Thread Dan Dong
Hi, All,
  When I run a small program in spark-shell, I got the following error:
...
Caused by: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
    ... 29 more
...

I see the file is actually there under my hadoop installation dir, e.g.:
./hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce2/lib/snappy-java-1.0.4.1.jar
./hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/lib/snappy-java-1.0.4.1.jar
./hadoop-2.5.0-cdh5.2.0/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/snappy-java-1.0.4.1.jar
./hadoop-2.5.0-cdh5.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar
./hadoop-2.5.0-cdh5.2.0/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar
./hadoop-2.5.0-cdh5.2.0/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/snappy-java-1.0.4.1.jar

But even after I included one of the above paths in $CLASSPATH, the
error is still there. So which *PATH*s do I need to set to resolve it? Thanks!

$ echo $CLASSPATH
/home/ubuntu/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce/lib

Cheers,
Dan


Re: Error: no snappyjava in java.library.path

2015-02-26 Thread Marcelo Vanzin
Hi Dan,

This is a CDH issue, so I'd recommend using cdh-u...@cloudera.org for
those questions.

This issue was fixed in recent CM 5.3 updates; if you're not
using CM, or want a workaround, you can manually configure
spark.driver.extraLibraryPath and spark.executor.extraLibraryPath
to include the path to the $HADOOP_HOME/lib/native/ directory.

(Note this is not a classpath issue, but a native library issue.)
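For example, in spark-defaults.conf, or as --conf flags on spark-submit
(the native-library path below is illustrative; point it at your actual
$HADOOP_HOME/lib/native):

  # spark-defaults.conf
  spark.driver.extraLibraryPath    /opt/hadoop/lib/native
  spark.executor.extraLibraryPath  /opt/hadoop/lib/native

  # or equivalently on the command line:
  spark-submit \
    --conf spark.driver.extraLibraryPath=/opt/hadoop/lib/native \
    --conf spark.executor.extraLibraryPath=/opt/hadoop/lib/native \
    ...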






-- 
Marcelo




no snappyjava in java.library.path

2015-01-12 Thread Dan Dong
Hi,
My Spark job failed with "no snappyjava in java.library.path":
Caused by: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1857)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1119)
    at org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)

I'm running spark-1.1.1 on hadoop2.4. I found that the file is there,
and I have already included it in the CLASSPATH:
../hadoop/share/hadoop/tools/lib/snappy-java-1.0.4.1.jar
../hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar
../hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/snappy-java-1.0.4.1.jar

Did I miss anything, or should I set it in some other way?

Cheers,
Dan


Re: no snappyjava in java.library.path

2015-01-12 Thread David Rosenstrauch
I ran into this recently. It turned out we had an old
org-xerial-snappy.properties file in one of our conf directories
that had this setting:


# Disables loading Snappy-Java native library bundled in the
# snappy-java-*.jar file forcing to load the Snappy-Java native
# library from the java.library.path.
#
org.xerial.snappy.disable.bundled.libs=true

When I switched that to false, it made the problem go away.
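To check for the same thing, one quick way is to hunt for the properties
file in your conf directories (the directories and path below are
illustrative, not from the original thread):

  find /etc/hadoop /etc/spark $HOME -name 'org-xerial-snappy.properties' 2>/dev/null
  # then inspect the offending setting in whatever turns up:
  grep 'disable.bundled.libs' /path/to/conf/org-xerial-snappy.properties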

May or may not be your problem of course, but worth a look.

HTH,

DR








Re: Error while running Streaming examples - no snappyjava in java.library.path

2014-10-20 Thread Akhil Das
It's a known bug involving JDK7 and OS X's library naming convention;
here's how to resolve it:

 1. Get the Snappy jar file from
    http://central.maven.org/maven2/org/xerial/snappy/snappy-java/
 2. Copy the appropriate one to your project's class path (one way to do
    this is sketched below).
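For example, assuming you run the examples through spark-submit, you could
download a newer snappy-java and put it ahead of the bundled one (the
version number and example class here are illustrative):

  wget http://central.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.7/snappy-java-1.1.1.7.jar
  spark-submit \
    --driver-class-path snappy-java-1.1.1.7.jar \
    --jars snappy-java-1.1.1.7.jar \
    --class org.apache.spark.examples.streaming.NetworkWordCount \
    ...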



Thanks
Best Regards





Re: Error while running Streaming examples - no snappyjava in java.library.path

2014-10-20 Thread Buntu Dev
Thanks Akhil.



Error while running Streaming examples - no snappyjava in java.library.path

2014-10-19 Thread bdev
I built the latest Spark project and I'm running into these errors when
attempting to run the streaming examples locally on the Mac. How do I fix
these errors?

java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:170)
    at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
    at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
    at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:81)
    at org.apache.spark.io.SnappyCompressionCodec.compressedOutputStream(CompressionCodec.scala:125)
    at org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager$$anonfun$7.apply(BlockManager.scala:579)
    at org.apache.spark.storage.BlockManager$$anonfun$7.apply(BlockManager.scala:579)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:126)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:192)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:732)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:731)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at org.apache.spark.util.collection.ExternalSorter$IteratorForPartition.foreach(ExternalSorter.scala:789)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:731)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:727)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:727)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:181)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

Thanks!



