I also ran into the same issue. What is the solution?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Compiling-Spark-master-284771ef-with-sbt-sbt-assembly-fails-on-EC2-tp11155p11189.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Have you tried the --spark-version flag of spark-ec2?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Installing-Spark-0-9-1-on-EMR-Cluster-tp11084p11096.html
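For reference, a minimal sketch of pinning the Spark version at launch time. The key pair, identity file, and cluster name below are placeholders, and the set of versions spark-ec2 accepts depends on the release of the script you are running:

```shell
# Hypothetical invocation: launch a cluster with a specific Spark version.
# --key-pair, --identity-file, --spark-version, and the launch subcommand
# are standard spark-ec2 arguments; the names here are placeholders.
./spark-ec2 \
    --key-pair=my-keypair \
    --identity-file=my-keypair.pem \
    --spark-version=0.9.1 \
    launch my-cluster
```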
Hi,
It seems that the spark-ec2 script deploys the Tachyon module along with the
rest of the setup.
I am trying to use .persist(OFF_HEAP) for RDD persistence, but on a worker I
see this error:
--
Failed to connect (2) to master localhost/127.0.0.1:19998 :
java.net.ConnectException: Connection refused
--
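For context, a minimal sketch of the persist call in question. This assumes a spark-shell or application with a SparkContext `sc`, and that the cluster's Tachyon master is running and reachable, which the error above suggests it is not; the input path is a placeholder:

```scala
// Sketch: off-heap RDD persistence, which in Spark 1.x is backed by Tachyon.
import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///path/to/input")  // placeholder path
rdd.persist(StorageLevel.OFF_HEAP)
rdd.count()  // an action materializes the RDD into the off-heap store
```

The connection-refused error says the worker cannot reach a Tachyon master at localhost:19998, so the Tachyon deployment itself is worth checking before debugging the persist call.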
I am also running into the modules/mod_authn_alias.so issue on r3.8xlarge when
launching a cluster with ./spark-ec2, so Ganglia is not accessible. From the
posts it seems that Patrick suggested using Ubuntu 12.04. Can you please
provide the name of an AMI that can be used with the -a flag that will not have this
pressure... but I could never figure out the root cause.
--
14/07/02 07:34:45 WARN TaskSetManager: Loss was due to
java.io.FileNotFoundException
java.io.FileNotFoundException:
/var/storage/sda3/nm-local/usercache/nit/appcache/application_1403208801430_0183/spark-local-20140702065054-388d/0e