If you set up a number of pools equal to the number of different priority
levels you want, make the relative weights of those pools very different,
and submit a job to the pool representing its priority, I think you'll get
behavior equivalent to a priority queue. Try it and see.
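A minimal sketch of what this suggests, assuming three priority classes; the pool names here are hypothetical, but the file is Spark's standard fairscheduler.xml format:

```xml
<?xml version="1.0"?>
<!-- Hypothetical fairscheduler.xml: one pool per priority class,
     with weights far enough apart that higher pools dominate. -->
<allocations>
  <pool name="low">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
  <pool name="medium">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1000</weight>
    <minShare>0</minShare>
  </pool>
  <pool name="high">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1000000</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

A job is then routed to a pool by setting the local property on the submitting thread before the action runs, e.g. sc.setLocalProperty("spark.scheduler.pool", "high").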
Cody,
I might be able to improve the scheduling of my jobs by using a few
different pools with weights equal to, say, 1, 1e3, and 1e6, effectively
getting a small handful of priority classes. Still, this is really not
quite what I am describing. This is why my original post was on the dev list.
-dev, +user
http://spark.apache.org/docs/latest/job-scheduling.html
On Sat, Jan 10, 2015 at 4:40 PM, Alessandro Baretta alexbare...@gmail.com
wrote:
Is it possible to specify a priority level for a job, such that the active
jobs might be scheduled in order of priority?
Alex
http://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties
Setting a high weight such as 1000 also makes it possible to implement
*priority* between pools—in essence, the weight-1000 pool will always get
to launch tasks first whenever it has jobs active.
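Taken literally, the quoted claim describes strict-priority behavior. Here is a toy model of that claim (not Spark's actual scheduler, which shares proportionally): whenever a slot opens, it goes to the highest-weight pool that still has pending tasks.

```python
# Toy model of the quoted claim, NOT Spark's real fair scheduler:
# each open slot goes to the highest-weight pool with pending tasks.

def launch_order(pools):
    """pools: dict of pool name -> (weight, pending task count).
    Returns the order in which tasks would be launched."""
    weights = {name: w for name, (w, _) in pools.items()}
    pending = {name: tasks for name, (_, tasks) in pools.items()}
    order = []
    while any(pending.values()):
        # The highest-weight pool with work always wins the next slot.
        name = max((n for n in pending if pending[n] > 0),
                   key=lambda n: weights[n])
        order.append(name)
        pending[name] -= 1
    return order

print(launch_order({"high": (1000, 2), "low": (1, 2)}))
# → ['high', 'high', 'low', 'low']
```

Under this model a weight-1000 pool starves the weight-1 pool until it is empty, which is what a priority queue would do; the question in this thread is whether the fair scheduler really behaves this way.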
Cody,
Maybe I'm not getting this, but it doesn't look like this page is
describing a priority-queue scheduling policy. What this section discusses
is how resources are shared between pools. A weight-1000 pool will get
1000 times more resources allocated to it than a weight-1 pool. Great,
but
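The proportional-sharing point can be checked with quick arithmetic: under weighted fair sharing, each pool's share is its weight divided by the total weight, so the low-weight pool is throttled but never stopped.

```python
# Weighted fair sharing: each pool's share of the cluster is
# weight / total_weight. A weight-1000 pool gets 1000x the resources
# of a weight-1 pool -- but the weight-1 pool still gets a share,
# which is what distinguishes this from a strict priority queue.

def fair_shares(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = fair_shares({"heavy": 1000, "light": 1})
print(round(shares["heavy"] / shares["light"]))  # → 1000
print(shares["light"] > 0)                       # → True
```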
Mark,
Thanks, but I don't see how this documentation solves my problem. You are
referring me to documentation of fair scheduling, whereas I am asking
about as unfair a scheduling policy as can be: a priority queue.
Alex
On Sat, Jan 10, 2015 at 5:00 PM, Mark Hamstra m...@clearstorydata.com