Hi all,

I'm using HDP 2.0 with YARN, and I'm running both MapReduce and Spark jobs on
this cluster. Is it possible to use the Capacity Scheduler to manage Spark
jobs as well as MR jobs? In other words, I can submit an MR job to a specific
queue; can I do the same with a Spark job?
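For reference, here is a minimal sketch of how I route an MR job to a queue
today (the class name, job name, and queue name "etl" are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class QueueExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Route this MR job to a specific Capacity Scheduler queue
        // ("etl" is a placeholder queue name)
        conf.set("mapreduce.job.queuename", "etl");
        Job job = Job.getInstance(conf, "queue-example");
        // ... configure mapper/reducer/input/output paths as usual, then:
        // job.waitForCompletion(true);
      }
    }

I'm hoping there is an analogous setting for Spark on YARN.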

Thanks in advance,
Konstantin Kudryavtsev
