Dear William,

Thank you, this worked. I checked how many tickets the job at the top of the queue had and then gave the other jobs more override tickets to move them above it. I simply had not used enough override tickets before.

Best wishes

Marlies

On 11/11/2015 07:20 PM, William Hay wrote:
On Wed, Nov 11, 2015 at 09:48:21AM +1000, Marlies Hankel wrote:
Thanks everyone for the advice.

I have tried qalter -p <num> jobid and also qalter -ot <num> jobid, but neither gets the jobs to the top of the queue; I can only get them a bit higher.
I'd increase weight_priority (the posix-priority weight) to several times weight_ticket, since the priority derived from it will be the same for all jobs you haven't boosted (unless a user deliberately lowers their own priority), so it doesn't affect their relative scheduling.

Also, how large are the numbers you are using with the above commands? The priority derived from posix priority scales linearly with that number up to the maximum, so the numbers need to be fairly large. Likewise with override tickets: you need to hand out more than other jobs are getting from the other ticket policies if you want them to get to the top.
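The combination William describes can be sketched numerically. The formula below follows the description in sge_priority(5) (overall priority = weighted sum of the normalized posix-priority, urgency, and ticket contributions); the helper name and the example numbers are illustrative, with the weights taken from the config posted later in this thread.

```python
# Sketch of how Grid Engine combines the policy contributions into the
# "prior" value shown by qstat (per sge_priority(5)). The weights match
# the qconf -ssconf output below; the example inputs are invented.

def overall_priority(npprior, nurg, ntckts,
                     weight_priority=100.0, weight_urgency=0.1,
                     weight_ticket=100.0):
    """npprior, nurg, ntckts are the normalized (0..1) posix-priority,
    urgency and ticket contributions reported by qstat -pri."""
    return (weight_priority * npprior
            + weight_urgency * nurg
            + weight_ticket * ntckts)

# An unboosted job: posix priority 0 normalizes to ~0.5 on the
# -1023..1024 scale.
base = overall_priority(npprior=0.5, nurg=0.5, ntckts=0.05)

# The same job after qalter -p 1024: npprior goes to 1.0, adding
# weight_priority * 0.5 = 50 to the overall priority.
boosted = overall_priority(npprior=1.0, nurg=0.5, ntckts=0.05)

print(base, boosted)
```

This is why a small -p value barely moves a job: with weight_priority equal to weight_ticket, even the maximum posix boost only adds half of weight_priority over an unboosted job.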
Maybe I have got the scheduler configuration wrong, so I am posting the
configs here. I would like to have a fair share where past usage as well as
time in the queue is taken into account.

I should also say that I see some high priority values in the queue (see below), and I have always wondered whether this is OK. In general the policy is working: high-frequency users can fill up the cluster when it is empty, and occasional or new users come out at the top of the queue when they submit jobs.
If you use qstat -u '*' -ext -pri, it will show you how tickets are being allocated and the priority of each job according to each of the policies. These are then combined according to the weights you've assigned to produce the overall priority.

Best wishes

Marlies

[root@queue ~]# qconf -ssconf
algorithm                         default
schedule_interval                 0:0:15
maxujobs                          0
queue_sort_method                 load
job_load_adjustments              np_load_avg=0.50
load_adjustment_decay_time        0:7:30
load_formula                      np_load_avg
schedd_job_info                   true
flush_submit_sec                  0
flush_finish_sec                  0
params                            none
reprioritize_interval             0:0:0
halftime                          168
usage_weight_list                 cpu=1.000000,mem=0.000000,io=0.000000
compensation_factor               5.000000
weight_user                       0.250000
weight_project                    0.250000
weight_department                 0.250000
weight_job                        0.250000
weight_tickets_functional         1000
weight_tickets_share              10000
share_override_tickets            TRUE
share_functional_shares           TRUE
max_functional_jobs_to_schedule   200
report_pjob_tickets               TRUE
max_pending_tasks_per_job         50
halflife_decay_list               none
policy_hierarchy                  OFS
weight_ticket                     100.000000
weight_waiting_time               50.000000
weight_deadline                   1000.000000
weight_urgency                    0.100000
weight_priority                   100.000000
max_reservation                   0
default_duration                  INFINITY
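Given the ticket pools in this config (weight_tickets_share=10000, weight_tickets_functional=1000), some back-of-the-envelope arithmetic shows why the 500 override tickets mentioned below were not enough. The per-job ticket counts here are invented for illustration; only the pool sizes come from the config.

```python
# Rough arithmetic using the ticket pools from the qconf -ssconf output
# above. A job's total tickets are its share-tree + functional tickets
# plus any override tickets granted by the admin; the job with the most
# tickets gets the largest ticket contribution to its priority.

share_pool = 10000       # weight_tickets_share
functional_pool = 1000   # weight_tickets_functional

# Suppose the job currently at the top collects most of both pools
# (hypothetical split):
top_job_tickets = 8000 + 600

# 500 override tickets (as tried for job 1103) barely move a job that
# holds few share/functional tickets:
boosted_small = 200 + 500

# An override grant larger than both pools combined is guaranteed to put
# the job on top no matter how the pools are split:
boosted_large = 200 + share_pool + functional_pool + 1

print(boosted_small > top_job_tickets)  # False
print(boosted_large > top_job_tickets)  # True
```

In other words, to be safe the override grant should exceed share_pool + functional_pool, i.e. more than 11000 tickets with this configuration.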

------------------------------------

[root@queue ~]# qconf -sstree
id=0
name=Root
type=0
shares=1
childnodes=1
id=1
name=default
type=0
shares=2000
childnodes=NONE

-------------------------------------

job-ID  prior     name       user   state submit/start at     slots
-------------------------------------------------------------------
  1203 150.00751 tbab_283k1 user1  qw    11/09/2015 13:24:21    40
  1102 120.10001 as         user2  qw    10/28/2015 09:32:01    80
  1103 105.10000 as         user2  qw    10/28/2015 09:33:09    80
  1188  50.05451 as         user2  qw    11/03/2015 09:05:50    60
  1189  50.05450 as         user2  qw    11/03/2015 09:06:27    60
  1187  50.05450 as         user2  qw    11/03/2015 09:00:32    40
  1195  50.03751 as         user2  qw    11/05/2015 14:52:45    80
  1196  50.03749 as         user2  qw    11/05/2015 14:57:10    80
  1200  50.00843 GCN4pore   user3  qw    11/09/2015 10:23:14    20
  1201  50.00842 GCN4poreA1 user3  qw    11/09/2015 10:25:19    20
  1202  50.00841 GCN4poreA2 user3  qw    11/09/2015 10:26:16    20
  1204  50.00000 GCN4poreAR user3  qw    11/10/2015 13:00:13    20
  1199  48.59359 GCN3pore1  user3  qw    11/09/2015 10:21:47    20


I want to move the user2 jobs up the queue. Jobs 1102 and 1103 have a posix priority of +1024 and override tickets of 500 (job 1103) and 2000 (job 1102).

---------------------------------------------------


On 11/10/2015 11:08 PM, Mark Dixon wrote:
On Tue, 10 Nov 2015, Marlies Hankel wrote:
...
We are using OGS/Grid Engine 2011.11. I have recently implemented a fair-share policy, which seems to work OK. However, on occasion, when a user approaches a deadline, I would like to advance their jobs up the queue. Previously I could just change their priority, but this is now often not enough. At the moment past usage is taken into account as well as wait time.

Is there a way to put certain jobs on top of the queue when a fair share
policy is implemented?
...

Hi,

In addition to the other methods suggested, the admin can add "override
tickets" to particular jobs (qalter -ot <num>, I think), which should move
them up the queue if there's an "O" in your policy hierarchy ("qconf
-ssconf").

All the best,

Mark
--

------------------

Dr. Marlies Hankel
Research Fellow, Theory and Computation Group
Australian Institute for Bioengineering and Nanotechnology (Bldg 75)
eResearch Analyst, Research Computing Centre and Queensland Cyber 
Infrastructure Foundation
The University of Queensland
Qld 4072, Brisbane, Australia
Tel: +61 7 334 63996 | Fax: +61 7 334 63992 | mobile:0404262445
Email: [email protected] | www.theory-computation.uq.edu.au


Notice: If you receive this e-mail by mistake, please notify me,
and do not make any use of its contents. I do not waive any
privilege, confidentiality or copyright associated with it. Unless
stated otherwise, this e-mail represents only the views of the
Sender and not the views of The University of Queensland.


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users



