http://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties

"Setting a high weight such as 1000 also makes it possible to implement
*priority* between pools—in essence, the weight-1000 pool will always get
to launch tasks first whenever it has jobs active."
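For concreteness, that docs passage corresponds to declaring a dominant-weight pool in a `fairscheduler.xml` allocation file (used when `spark.scheduler.mode=FAIR` and pointed to by `spark.scheduler.allocation.file`). A minimal sketch — the pool name `priority` and the second pool are illustrative, not from this thread; only the weight-1000 idea comes from the quoted docs:

```xml
<?xml version="1.0"?>
<!-- fairscheduler.xml: a high-weight pool that effectively preempts the others -->
<allocations>
  <!-- Jobs in this pool get ~1000x the scheduling weight of a weight-1 pool,
       so they are offered resources first whenever they have active tasks. -->
  <pool name="priority">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1000</weight>
    <minShare>0</minShare>
  </pool>
  <!-- Ordinary jobs land here. -->
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

A thread then opts its jobs into the high-weight pool with `sc.setLocalProperty("spark.scheduler.pool", "priority")` before submitting them, which approximates a two-level priority queue rather than strict per-job priorities.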

On Sat, Jan 10, 2015 at 11:57 PM, Alessandro Baretta <alexbare...@gmail.com>
wrote:

> Mark,
>
> Thanks, but I don't see how this documentation solves my problem. You are
> referring me to documentation on fair scheduling, whereas I am asking
> about as unfair a scheduling policy as can be: a priority queue.
>
> Alex
>
> On Sat, Jan 10, 2015 at 5:00 PM, Mark Hamstra <m...@clearstorydata.com>
> wrote:
>
>> -dev, +user
>>
>> http://spark.apache.org/docs/latest/job-scheduling.html
>>
>>
>> On Sat, Jan 10, 2015 at 4:40 PM, Alessandro Baretta <
>> alexbare...@gmail.com> wrote:
>>
>>> Is it possible to specify a priority level for a job, such that the
>>> active
>>> jobs might be scheduled in order of priority?
>>>
>>> Alex
>>>
>>
>>
>
