Cody,

Maybe I'm not getting this, but it doesn't look like this page is
describing a priority queue scheduling policy. What this section discusses
is how resources are shared between queues. A weight-1000 pool will get
1000 times more resources allocated to it than a weight-1 pool. Great,
but that's not what I want. I want to be able to define an Ordering on my
jobs representing their priority, and have Spark allocate all resources to
the job that has the highest priority.
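
For reference, here is roughly what the weight-based workaround would look
like (just a sketch; the pool names, the two RDDs, and the fairscheduler.xml
entries are my own assumptions):

  // Assumes spark.scheduler.mode=FAIR and a fairscheduler.xml defining
  // pools "high" (weight 1000) and "low" (weight 1). sc is an existing
  // SparkContext; the pool property is per-thread, so each job is
  // submitted from its own thread.
  new Thread {
    override def run(): Unit = {
      sc.setLocalProperty("spark.scheduler.pool", "high")
      highPriorityRdd.count()  // weight-1000 pool: launches tasks first
    }
  }.start()

  new Thread {
    override def run(): Unit = {
      sc.setLocalProperty("spark.scheduler.pool", "low")
      lowPriorityRdd.count()   // weight-1 pool: gets the leftover slots
    }
  }.start()

Even then the low pool still gets its (tiny) fair share; it's weighted fair
sharing, not a preemptive priority queue, which is exactly my complaint.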

Alex

On Sat, Jan 10, 2015 at 10:11 PM, Cody Koeninger <c...@koeninger.org> wrote:

>
> http://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties
>
> "Setting a high weight such as 1000 also makes it possible to implement
> *priority* between pools—in essence, the weight-1000 pool will always get
> to launch tasks first whenever it has jobs active."
>
> On Sat, Jan 10, 2015 at 11:57 PM, Alessandro Baretta <
> alexbare...@gmail.com> wrote:
>
>> Mark,
>>
>> Thanks, but I don't see how this documentation solves my problem. You are
>> referring me to documentation on fair scheduling, whereas I am asking
>> about as unfair a scheduling policy as can be: a priority queue.
>>
>> Alex
>>
>> On Sat, Jan 10, 2015 at 5:00 PM, Mark Hamstra <m...@clearstorydata.com>
>> wrote:
>>
>>> -dev, +user
>>>
>>> http://spark.apache.org/docs/latest/job-scheduling.html
>>>
>>>
>>> On Sat, Jan 10, 2015 at 4:40 PM, Alessandro Baretta <
>>> alexbare...@gmail.com> wrote:
>>>
>>>> Is it possible to specify a priority level for a job, such that the
>>>> active
>>>> jobs might be scheduled in order of priority?
>>>>
>>>> Alex
>>>>
>>>
>>>
>>
>
