Re: SparkSQL Thriftserver in Mesos

2014-09-26 Thread Cheng Lian
You can avoid installing Spark on each node by uploading the Spark distribution tarball to HDFS and setting |spark.executor.uri| to its HDFS location. In this way, Mesos will download and extract the tarball before launching containers. Please refer to this Spark documentation page
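A minimal sketch of the approach Cheng describes, assuming a Hadoop client is available and using example paths and a hypothetical tarball name (any real deployment would substitute its own HDFS path and Spark version):

```shell
# Upload the Spark distribution tarball to a location all Mesos agents can reach.
# The path /apps/spark and the tarball name are illustrative, not prescribed.
hdfs dfs -mkdir -p /apps/spark
hdfs dfs -put spark-1.1.0-bin-hadoop2.4.tgz /apps/spark/

# Then point executors at it in conf/spark-defaults.conf on the driver machine:
#
#   spark.executor.uri   hdfs:///apps/spark/spark-1.1.0-bin-hadoop2.4.tgz
#
# Mesos fetches and unpacks this URI before starting each executor, so no
# per-node Spark install (or shared compile directory) is required.
```

The same property can also be passed at launch time with `--conf spark.executor.uri=...`, which avoids editing the defaults file.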

Re: SparkSQL Thriftserver in Mesos

2014-09-22 Thread Dean Wampler
The Mesos install guide says this: "To use Mesos from Spark, you need a Spark binary package available in a place accessible by Mesos, and a Spark driver program configured to connect to Mesos." For example, putting it in HDFS or copying it to each node in the same location should do the trick.

Re: SparkSQL Thriftserver in Mesos

2014-09-22 Thread John Omernik
Any thoughts on this? On Sat, Sep 20, 2014 at 12:16 PM, John Omernik wrote: > I am running the Thrift server in SparkSQL, and running it on the node I > compiled Spark on. When I run it, tasks only work if they landed on that > node; other executors started on nodes I didn't compile Spark on (a

SparkSQL Thriftserver in Mesos

2014-09-20 Thread John Omernik
I am running the Thrift server in SparkSQL, and running it on the node I compiled Spark on. When I run it, tasks only work if they landed on that node; other executors started on nodes I didn't compile Spark on (and thus don't have the compile directory) fail. Should Spark be distributed properly