It's possible using the --queue argument of spark-submit. Unfortunately this is 
not documented on http://spark.apache.org/docs/latest/running-on-yarn.html, but 
it appears in the output of spark-submit --help (or spark-submit run with no 
arguments).
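
For example, a submission to a specific Capacity Scheduler queue might look 
like this (the queue name, class name, and JAR below are just placeholders):

  # "thequeue", com.example.MyApp, and my-app.jar are hypothetical placeholders
  spark-submit \
    --master yarn-cluster \
    --queue thequeue \
    --class com.example.MyApp \
    my-app.jar

YARN then schedules the application's containers according to whatever 
capacity that queue has been given, the same as for a MapReduce job.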

Matei

On Jul 17, 2014, at 2:33 AM, Konstantin Kudryavtsev 
<kudryavtsev.konstan...@gmail.com> wrote:

> Hi all,
> 
> I'm using HDP 2.0 with YARN. I'm running both MapReduce and Spark jobs on this 
> cluster; is it possible to use the Capacity Scheduler to manage Spark jobs 
> as well as MR jobs? That is, I can send an MR job to a specific queue; 
> can I do the same with a Spark job?
> Thank you in advance.
> 
> Thank you,
> Konstantin Kudryavtsev
