Hi,

Am 01.05.2014 um 21:13 schrieb Michael Stauffer:

> OGS/GE 2011.11p1 (Rocks 6.1)
> 
> I'm trying to understand the output from qstat after setting up resource 
> limits for queue memory requests. 
> 
> I've got h_vmem and s_vmem set as consumables, with the default set to 3.9G 
> per job, e.g.:
> 
> [root@compute-0-18 ~]# qconf -sc | grep h_vmem
> h_vmem              h_vmem     MEMORY      <=    YES         JOB        3900M    0
> 
> Previously, using 'qstat -F h_vmem', I was seeing the amount of this resource 
> remaining after whatever running jobs had claimed either the default or 
> requested amount. 
> 
> But now, after setting up the following queue limits, the all.q output shows 
> only the queue per-job limit, i.e. the 'qf:' value. Is that intentional? qhost 
> still shows the remaining consumable resource amounts. Just curious about the 
> rationale, really.

As the limit is defined both per job in the queue and on the host level, the 
tighter one will be displayed - the prefix will be either qf: or hc:. Meaning: 
you can submit a new job which requests up to the displayed value - either the 
limit of the queue or the remaining free memory on the host.
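
For example, taking the numbers from your output below: on [email protected] the 
queue limit qf:h_vmem=7.800G is tighter than the host's remaining 
hc:h_vmem=51.573G, hence qstat -F shows the qf: value there. A new job could 
still request up to that amount, e.g. (with a hypothetical job script):

$ qsub -q all.q -l h_vmem=7.8G job.sh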

You could in addition define a limit under complex_values of the queue for 
h_vmem; the output will then change to qc: if that is the actual constraint.
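
For instance (the 60G being a hypothetical value), you could edit the queue 
with 'qconf -mq all.q' and set:

    complex_values        h_vmem=60G

or, assuming h_vmem isn't listed under the queue's complex_values yet, add it 
non-interactively:

# qconf -aattr queue complex_values h_vmem=60G all.q

As soon as this queue consumable is the tightest of the limits, qstat -F will 
show qc:h_vmem instead.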

-- Reuti


> [root@compute-0-18 ~]# qconf -sq all.q
> <snip>
> h_rss                 INFINITY
> s_vmem                7.6G
> h_vmem                7.8G
> 
> [root@compute-0-18 ~]# qstat -F h_vmem,s_vmem
> [email protected]        BP    0/3/16         2.72     linux-x64     
>         qf:h_vmem=7.800G
>         qf:s_vmem=7.600G
> <snip>
> 
> [root@compute-0-18 ~]# qhost -F h_vmem -h compute-0-0
> HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
> -------------------------------------------------------------------------------
> global                  -               -     -       -       -       -       -
> compute-0-0             linux-x64      16  2.75   63.0G    2.2G   31.2G     0.0
>     Host Resource(s):      hc:h_vmem=51.573G
> 
> -M


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
