Hi,

More updates on the case: Spark works fine in the local configuration; however, when
run through Mesos, the previously described behaviour occurs.

my spark-env.sh:

export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_JAVA_OPTS="
-Dspark.serializer=org.apache.spark.serializer.KryoSerializer
-Dspark.local.dir=/mnt/usr/spark/tmp
-Dspark.mesos.coarse=true
-Dspark.executor.memory=512m
-Dspark.ui.port=8775
-Dspark.scheduler.mode=FAIR
-Dspark.logConf=true
"
export SPARK_EXECUTOR_URI=hdfs://stanley/tmp/spark-0.9.0-2.0.0-mr1-cdh4.5.0.tgz
export MASTER=zk://hadoop-zoo-1:2181,hadoop-zoo-2:2181,hadoop-zoo-3:2181/mesos
export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"
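As a cross-check, the same properties can also be set programmatically through
SparkConf when constructing the context, which sidesteps any quoting issues in
SPARK_JAVA_OPTS. A minimal sketch (assumes Spark 0.9 on the classpath; the app
name is illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Mirrors the -D options from spark-env.sh above.
val conf = new SparkConf()
  .setMaster("zk://hadoop-zoo-1:2181,hadoop-zoo-2:2181,hadoop-zoo-3:2181/mesos")
  .setAppName("mesos-test")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.local.dir", "/mnt/usr/spark/tmp")
  .set("spark.mesos.coarse", "true")
  .set("spark.executor.memory", "512m")
  .set("spark.ui.port", "8775")
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.logConf", "true")  // logs the effective config at startup

val sc = new SparkContext(conf)
```

With spark.logConf=true, the driver logs the effective configuration at startup,
which makes it easy to confirm which settings actually reached the Mesos run.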




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Inconsistent-behavior-when-running-spark-on-top-of-tachyon-on-top-of-HDFS-HA-tp1544p1548.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.