OK, that seems to be working, but it changes the submission-time `-l h_vmem=` request to be per slot instead of per job. Not a big deal, just a little annoying to have to calculate it each time.
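
For the archives, the arithmetic is just total memory divided by slots, rounded down so the product stays under the host limit. A rough sketch (the smp PE name and the 24-slot count are placeholders for this example, not from my actual scripts):

# With consumable=YES the request is multiplied by the granted slots,
# so ask per slot: 95G / 24 slots ~= 3.95G, and 24 * 3.95G <= 95G.
qsub -pe smp 24 -l h_vmem=3.95G bt_script.sh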

Thanks

Brett Taylor
Systems Administrator
Center for Systems and Computational Biology

The Wistar Institute
3601 Spruce St.
Room 214
Philadelphia PA 19104
Tel: 215-495-6914
Sending me a large file? Use my secure dropbox:
https://cscb-filetransfer.wistar.upenn.edu/dropbox/[email protected]

From: Ian Kaufman [mailto:[email protected]]
Sent: Wednesday, December 19, 2012 11:37 AM
To: Brett Taylor
Cc: [email protected]
Subject: Re: [gridengine users] vmem allocation

Hi Brett,
Try setting consumable to YES instead of JOB.
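
Roughly, that's a one-column change in the complex definition (edited via qconf -mc), something like:

#name               shortcut   type        relop requestable consumable default  urgency
h_vmem              h_vmem     MEMORY      <=    YES         YES        4G       0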

Ian

On Wed, Dec 19, 2012 at 8:05 AM, Brett Taylor <[email protected]> wrote:
Hello,

I seem to be having some issues with the h_vmem setting and proper allocation. I was under the impression that it would function much like the CPU slots: once all of the h_vmem on a host had been allocated, that host would not accept more jobs. That, however, does not seem to be the case. I have two queues running on the same nodes.

My hosts are defined:

complex_values        slots=48,h_vmem=95G
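
(For reference, this line is set per exec host via qconf -me; node1 here is taken from the qstat output below:)

qconf -me node1
# in the editor that opens, the relevant line is:
complex_values        slots=48,h_vmem=95G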

complex config:

#name               shortcut   type        relop requestable consumable default  urgency
#----------------------------------------------------------------------------------------
h_vmem              h_vmem     MEMORY      <=    YES         JOB        4G       0

queue config, in both queues:

slots                 24
h_vmem                INFINITY
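
Both queue instances can be double-checked with something like the following (A.q is the queue named in the submission below; substitute the second queue's actual name):

qconf -sq A.q | egrep 'slots|h_vmem'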

So if I submit one job to A.q, using `-l h_vmem=95G`, then `qstat -f -u "*" -F h_vmem` shows:

---------------------------------------------------------------------------------
[email protected]       BIP   0/24/24        1.02     lx26-amd64
        hc:h_vmem=0.000
   3970 0.60500 bt_script. btaylor      r     12/17/2012 11:26:45    24
---------------------------------------------------------------------------------
[email protected]        BP    0/0/24         1.02     lx26-amd64
        hc:h_vmem=0.000

but if I then submit to the second queue, h_vmem goes negative and both jobs are allowed to run at the same time:

---------------------------------------------------------------------------------
[email protected]       BIP   0/24/24        1.01     lx26-amd64
        hc:h_vmem=-95.000G
   3970 0.60500 bt_script. btaylor      r     12/17/2012 11:26:45    24
---------------------------------------------------------------------------------
[email protected]        BP    0/24/24        1.01     lx26-amd64
        hc:h_vmem=-95.000G
   4012 0.60500 bt_script. btaylor      r     12/19/2012 11:03:04    24


Is this not supposed to act like the CPU slots? Any ideas on how I might be able to treat available vmem the same as the CPU slots?

Thanks,
Brett



Brett Taylor
Systems Administrator
Center for Systems and Computational Biology

The Wistar Institute
3601 Spruce St.
Room 214
Philadelphia PA 19104
Tel: 215-495-6914
Sending me a large file? Use my secure dropbox:
https://cscb-filetransfer.wistar.upenn.edu/dropbox/[email protected]





--
Ian Kaufman
Research Systems Administrator
UC San Diego, Jacobs School of Engineering
ikaufman AT ucsd DOT edu



_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
