From: Matei Zaharia
To: user@spark.apache.org
Date: 07/17/2014 12:41 PM
Subject: Re: Spark scheduling with Capacity scheduler
It's possible using the --queue argument of spark-submit. Unfortunately
this is not documented on
http://spark.apache.org/docs/latest/running-on-yarn.html, but it is shown
if you run spark-submit --help or spark-submit with no arguments.
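For example, a submission to a named Capacity Scheduler queue might look like this (the queue name, class, and jar here are placeholders, not taken from the thread):

```shell
# Submit a Spark job to a specific YARN queue via the Capacity Scheduler.
# "myqueue", com.example.MyApp, and myapp.jar are hypothetical placeholders.
spark-submit \
  --master yarn \
  --queue myqueue \
  --class com.example.MyApp \
  myapp.jar
```

The queue must already be defined in the cluster's capacity-scheduler.xml; otherwise YARN rejects the application.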
Matei
On Jul 17, 2014, at 2:33 AM, Konstantin Kudryavtsev wrote:
> Hi all,
>
> I'm using HDP 2.0 with YARN. I'm running both MapReduce and Spark jobs on
> this cluster. Is it possible to use the Capacity Scheduler to manage Spark
> jobs as well as MR jobs? I mean, I'm able to send an MR job to a specific
> queue; can I do the same with a Spark job?
> thank you in advance
>
> Thank you,
> Konstantin Kudryavtsev