Hi,

I have a similar issue to the one the user below reported:
I’m running Spark 0.8.1 (standalone). When I test the streaming 
NetworkWordCount example with local[2] as in the docs, it works fine. But as 
soon as I try to connect to my cluster using [NetworkWordCount master …], it says:
---
Failed to load native Mesos library from /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Exception in thread "main" java.lang.UnsatisfiedLinkError: no mesos in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
        at java.lang.Runtime.loadLibrary0(Runtime.java:849)
        at java.lang.System.loadLibrary(System.java:1088)
        at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:52)
        at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:64)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:260)
        at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:559)
        at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:84)
        at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:76)
        at org.apache.spark.streaming.examples.JavaNetworkWordCount.main(JavaNetworkWordCount.java:50)
---
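For reference, the local invocation that works looks roughly like this (adapted from the 0.8.1 streaming docs; the host/port are just whatever netcat is serving on, and the failing run differs only in the master URL):

```shell
# Terminal 1: a simple text source for the example to read from
nc -lk 9999

# Terminal 2: the working local run; substituting the cluster master URL
# for local[2] is what triggers the error above
./run-example org.apache.spark.streaming.examples.NetworkWordCount local[2] localhost 9999
```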

I built Mesos 0.13 and added the MESOS_NATIVE_LIBRARY entry to spark-env.sh. 
But then I get:
---
A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fed89801ce9, pid=13580, tid=140657358776064
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x632ce9]  jni_GetByteArrayElements+0x89
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/vagrant/hs_err_pid13580.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
---

The error log says:
---
Current thread (0x00007fed8473d000):  JavaThread "MesosSchedulerBackend driver" 
daemon [_thread_in_vm, id=13638, stack(0x00007fed57d7a000,0x00007fed57e7b000)]
…
---
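For reference, the MESOS_NATIVE_LIBRARY entry I added to spark-env.sh looks roughly like this (the path below is just where a default Mesos source build installs libmesos, so treat it as an assumption and adjust to your install prefix):

```shell
# conf/spark-env.sh -- point Spark at the native Mesos library.
# /usr/local/lib is the default `make install` location for a Mesos
# source build; use wherever libmesos.so actually ended up.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
```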

This is on Ubuntu 12.04 in VirtualBox. I tried it with both OpenJDK 6 and 
Oracle Java 7.
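Since core dumps were disabled, I also enabled them in the shell before retrying, as the crash message suggests, so the next crash leaves a core file to inspect:

```shell
# Allow unlimited-size core dumps in this shell, then rerun the example;
# the next SIGSEGV should leave a core file alongside hs_err_pid*.log.
ulimit -c unlimited
```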


Any ideas??
Many thanks.

Christoph


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>Please ignore this error - I found the issue.
>
>Thanks !
>
>
>On Mon, Jan 20, 2014 at 3:14 PM, Manoj Samel <manojsamelt...@gmail.com>wrote:
>
>> Hi
>>
>> I deployed spark 0.8.1 on standalone cluster per
>> https://spark.incubator.apache.org/docs/0.8.1/spark-standalone.html
>>
>> When I start a spark-shell, I get the following error:
>>
>> I thought mesos should not be required for standalone cluster. Do I have
>> to change any parameters in make-distribution.sh that I used to build the
>> spark distribution for this cluster ? I left all to default (and noticed
>> that the default HADOOP version is 1.0.4 which is not my hadoop version -
>> but I am not using Hadoop here).
>>
>> Creating SparkContext...
>> Failed to load native Mesos library from
>> java.lang.UnsatisfiedLinkError: no mesos in java.library.path
>> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1738)
>>  at java.lang.Runtime.loadLibrary0(Runtime.java:823)
>> at java.lang.System.loadLibrary(System.java:1028)
>>  at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:52)
>> at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:64)
>>  at org.apache.spark.SparkContext.<init>(SparkContext.scala:260)
>> at
>> org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:862)
