What about down-scaling when I use Mesos? Does that really degrade
performance? Otherwise we would probably go for Spark on Mesos on EC2 :)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Down-scaling-Spark-on-EC2-cluster-tp10494p12109.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Any idea about the probable dates for this implementation? I believe it
would be a wonderful (and essential) piece of functionality for gaining more
acceptance in the community.
I am new to Spark and we are developing a data science pipeline based on
Spark on EC2. So far we have been using Python on a Spark standalone
cluster. However, being a newbie, I would like to know more about how I can
do program-level debugging from the Spark logs (is it stderr?). I also find
it a bit unclear what happens to a running task when a worker is removed
during down-scaling (how bad would it be if it is in the middle of a task).
Thanks in advance.
Shubhabrata
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Down-scaling-Spark-on-EC2-cluster-tp10494.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
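On the "is it stderr?" part of the question: in standalone mode, assuming
the default work-directory layout used by the EC2 scripts, each executor
writes its stdout/stderr on the worker machine it ran on, for example:

    /root/spark/work/<app-id>/<executor-id>/stderr
    /root/spark/work/<app-id>/<executor-id>/stdout

Here <app-id> and <executor-id> are placeholders for the values shown in the
cluster UI; each worker's web UI also links directly to these files.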
This is the error from stderr:
Spark Executor Command: java -cp
:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/spark/conf:/root/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop1.0.4.jar
-Djava.library.path=/root/ephemeral-hdfs/lib/native/
To check whether the issue was with the Python API, I ran a Scala
application provided in the examples. Still the same error:
./bin/run-example org.apache.spark.examples.SparkPi
spark://[Master-URL]:7077
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
Moreover, it seems all the workers are registered and have sufficient memory
(2.7 GB, whereas I have asked for only 512 MB). The UI also shows the jobs
running on the slaves. But on the terminal it is still the same error:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient memory
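This last message generally means no worker made an acceptable resource
offer to the application. Two common causes on EC2 standalone clusters are
(a) the application requesting more cores or memory per executor than any
single worker offers, and (b) the workers being unable to connect back to
the driver (security groups / hostname mismatch). A minimal sketch of the
first fix, using standard Spark configuration properties (the values here
are only examples, not recommendations):

    spark.executor.memory   512m
    spark.cores.max         2

In 0.9.x these can be set on the SparkConf before creating the
SparkContext. It is also worth checking that the master URL passed to the
application matches exactly the spark://<host>:7077 string shown at the top
of the master's web UI, since the standalone master compares it literally.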