Thanks a lot.
Worked like a charm.
---
Hi,
I wish to migrate from Shark to the spark-sql shell, but I am facing some
difficulties setting it up.
I cloned "branch-1.0-jdbc" to test out the spark-sql shell, but I am
unable to run it after building the source.
I've tried two methods for building (with Hadoop 1.0.4): sbt/sbt assembly ...
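(For context, the two standard ways of building with a pinned Hadoop version
look roughly like this; the exact flags are a sketch based on the Spark 1.0
build instructions, not necessarily what was run here:)
SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
mvn -Dhadoop.version=1.0.4 -DskipTests clean package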
Rectified the issue by providing the executor URI location on the command
line:
./bin/spark-submit --master mesos://:5050 --class
org.apache.spark.examples.SparkPi --driver-java-options
-Dspark.executor.uri=hdfs://:9000/new/spark-1.0.0-hadoop-2.4.0.tgz
/opt/spark-examples-1.0.0-hadoop2.4.0.jar 10
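(Side note: the same property can also live in the Spark config instead of
being passed through --driver-java-options each time; if I remember right,
in Spark 1.0 a line like the one below can go into conf/spark-defaults.conf,
where <namenode> is a placeholder for the actual HDFS host:)
spark.executor.uri    hdfs://<namenode>:9000/new/spark-1.0.0-hadoop-2.4.0.tgz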
I am st
My setup ---
I have a private cluster running on 4 nodes. I want to use the spark-submit
script to execute Spark applications on the cluster. I am using Mesos to
manage the cluster.
This is the command I ran in local mode, which ran successfully ---
./bin/spark-submit --master local --class org.
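(Presumably the full local-mode invocation was something along these lines;
the class and jar names are assumptions carried over from the earlier
message, not the original text:)
./bin/spark-submit --master local --class org.apache.spark.examples.SparkPi \
  /opt/spark-examples-1.0.0-hadoop2.4.0.jar 10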
Hi,
I am currently running a private Mesos cluster of 1+3 machines for running
Spark and Shark applications. I've installed everything from an admin
account, and I now want to run them from another account while restricting
access to the configuration settings. Any suggestions on how to go about
this?
I do assume that you've added HADOOP_HOME to your environment variables.
Otherwise, you could fill in the actual path to Hadoop on your cluster. Also,
did you update your bash configuration afterwards?
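(Something along these lines, where the install path is just an example and
not necessarily where Hadoop lives on your nodes:)
# in ~/.bashrc - example path only
export HADOOP_HOME=/usr/lib/hadoop
# reload the shell configuration so the variable is visible
source ~/.bashrc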
---
I'm running a manually built cluster on EC2. I have Mesos (0.18.2) and HDFS
(2.0.0-cdh4.5.0) installed on all slaves (3) and masters (3). I have
spark-1.0.0 on one master, and the executor file is on HDFS for the slaves.
Whenever I try to launch a Spark application on the cluster, it starts a
task
I am also getting the exact same error, with the exact same logs, when I run
Spark 1.0.0 in coarse-grained mode.
Coarse-grained mode works perfectly with the earlier versions that I tested
(0.9.1 and 0.9.0), but seems to have undergone some modification in Spark
1.0.0.
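(For anyone trying to reproduce this: coarse-grained mode on Mesos is
controlled by the spark.mesos.coarse property; if I remember right, in Spark
1.0 it can be set in conf/spark-defaults.conf, e.g.:)
spark.mesos.coarse    true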
---
Since $HADOOP_HOME is deprecated, try adding it to the Mesos configuration
instead. Add `export MESOS_HADOOP_HOME=$HADOOP_HOME` to ~/.bashrc and that
should solve your error.
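(After adding that line, the shell configuration still needs to be reloaded,
and the Mesos slave process has to actually see the variable; roughly:)
source ~/.bashrc
# restart the Mesos slave so it picks up MESOS_HADOOP_HOME
# (the exact restart command depends on how the slaves were started)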