I don't want to use YARN or Mesos; I'm just trying the standalone Spark cluster.
We need a way to do seamless submission with the API, which I don't see.
To my surprise I was hit by this issue when I tried running the submit from
another machine; it is crazy that I have to submit the job from the worker
node or play with environment variables. This is the issue I hit:
http://apache-spark-user-list.1001560.n3.nabble.com/executor-failed-cannot-find-compute-classpath-sh-td859.html
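In the meantime, one stopgap is to wrap the submit script from code and build its argument list programmatically. A minimal sketch below; the master URL, class name, and jar path are placeholders for illustration, not values from this thread:

```python
import subprocess

def build_submit_cmd(master, main_class, app_jar, app_args=(),
                     spark_submit="spark-submit"):
    """Assemble a spark-submit command line (placeholder values throughout)."""
    cmd = [spark_submit,
           "--master", master,       # e.g. spark://host:7077 for standalone
           "--class", main_class]    # main class inside the application jar
    cmd.append(app_jar)
    cmd.extend(app_args)
    return cmd

cmd = build_submit_cmd("spark://master:7077", "com.example.MyApp",
                       "/path/to/app.jar", ["input.txt"])
# subprocess.check_call(cmd)  # uncomment to actually launch the job
```

This still shells out to the script rather than being a real API, so it inherits the same classpath/environment problems when run from a machine that isn't part of the cluster.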


On Fri, Sep 5, 2014 at 8:33 AM, Guru Medasani <gdm...@outlook.com> wrote:

> I am able to run Spark jobs and Spark Streaming jobs successfully via YARN
> on a CDH cluster.
>
> When you say YARN isn't quite there yet, do you mean for submitting jobs
> programmatically, or just in general?
>
>
> On Sep 4, 2014, at 1:45 AM, Matt Chu <m...@kabam.com> wrote:
>
> https://github.com/spark-jobserver/spark-jobserver
>
> Ooyala's Spark jobserver is the current de facto standard, IIUC. I just
> added it to our prototype stack, and will begin trying it out soon. Note
> that you can only do standalone or Mesos; YARN isn't quite there yet.
>
> (The repo just moved from https://github.com/ooyala/spark-jobserver, so
> don't trust Google on this one (yet); development is happening in the first
> repo.)
>
>
>
> On Wed, Sep 3, 2014 at 11:39 PM, Vicky Kak <vicky....@gmail.com> wrote:
>
>> I have been able to submit Spark jobs using the submit script, but I
>> would like to do it via code.
>> I have been unable to find anything matching my need.
>> I am thinking of using org.apache.spark.deploy.SparkSubmit to do so; I may
>> have to write some utility that passes the parameters required for this
>> class.
>> I would be interested to know how the community is doing this.
>>
>> Thanks,
>> Vicky
>>
>
>
>
