Well, that is what you asked for: setting consumable to YES means the request
is applied per slot, per the man pages.

What kind of behavior are you really looking for? If consumable is set to JOB,
the resource is debited from the queue resource, which in your case has no
limit (h_vmem is set to INFINITY). So, since there is no queue limit set, SGE
ignores the host limits when consumable is JOB, and you can oversubscribe.

Ian


On Wed, Dec 19, 2012 at 8:57 AM, Brett Taylor <[email protected]> wrote:

> Ok, that seems to be working, but it changes the submission-time `-l
> h_vmem=` request to be per thread instead of per job.  Not a big deal, just
> a little annoying to have to calculate each time.
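>
> (Roughly, the calculation I mean: to let a job use a whole 95G host across
> 24 slots I now have to request about 95G / 24 per slot, e.g. with a
> hypothetical parallel environment named smp and a placeholder job script:
>
>     qsub -pe smp 24 -l h_vmem=3900M my_job.sh
>
> which then gets debited as 24 x 3900M, roughly 91G, against the host's
> h_vmem.)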
>
> Thanks
>
> Brett Taylor
> Systems Administrator
> Center for Systems and Computational Biology
> The Wistar Institute
> 3601 Spruce St.
> Room 214
> Philadelphia PA 19104
> Tel: 215-495-6914
>
> Sending me a large file? Use my secure dropbox:
> https://cscb-filetransfer.wistar.upenn.edu/dropbox/[email protected]
>
> From: Ian Kaufman [mailto:[email protected]]
> Sent: Wednesday, December 19, 2012 11:37 AM
> To: Brett Taylor
> Cc: [email protected]
> Subject: Re: [gridengine users] vmem allocation
>
> Hi Brett,
>
> Try setting consumable to YES instead of JOB.
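>
> Something like this, sketched from the complex line you posted below (run
> qconf -mc and change the consumable column for h_vmem from JOB to YES,
> leaving the rest as is):
>
>     h_vmem              h_vmem     MEMORY      <=    YES         YES        4G       0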
>
> Ian
>
> On Wed, Dec 19, 2012 at 8:05 AM, Brett Taylor <[email protected]> wrote:
>
> Hello,
>
> I seem to be having some issues with the h_vmem setting and proper
> allocation.  I was under the impression that it would function much like
> the CPU slots: once all of the h_vmem on a host had been allocated, that
> host would not accept more jobs.  This, however, does not seem to be the
> case.  I have two queues running on the same nodes.
>
> My hosts are defined:
>
> complex_values        slots=48,h_vmem=95G
>
> complex config:
>
> #name               shortcut   type        relop requestable consumable default  urgency
> #----------------------------------------------------------------------------------------
> h_vmem              h_vmem     MEMORY      <=    YES         JOB        4G       0
>
> queue config, in both queues:
>
> slots                 24
> h_vmem                INFINITY
>
> So if I submit one job to A.q using `-l h_vmem=95G`, then `qstat -f -u "*"
> -F h_vmem` shows:
>
> ---------------------------------------------------------------------------------
> [email protected]       BIP   0/24/24        1.02     lx26-amd64
>         hc:h_vmem=0.000
>    3970 0.60500 bt_script. btaylor      r     12/17/2012 11:26:45    24
> ---------------------------------------------------------------------------------
> [email protected]        BP    0/0/24         1.02     lx26-amd64
>         hc:h_vmem=0.000
>
> but if I then submit to the 2nd queue, the h_vmem count goes negative and
> both jobs are allowed to run at the same time:
>
> ---------------------------------------------------------------------------------
> [email protected]       BIP   0/24/24        1.01     lx26-amd64
>         hc:h_vmem=-95.000G
>    3970 0.60500 bt_script. btaylor      r     12/17/2012 11:26:45    24
> ---------------------------------------------------------------------------------
> [email protected]        BP    0/24/24        1.01     lx26-amd64
>         hc:h_vmem=-95.000G
>    4012 0.60500 bt_script. btaylor      r     12/19/2012 11:03:04    24
>
> Is this not supposed to act like the CPU slots?  Any ideas on how I might
> be able to treat available vmem the same way as the CPU slots?
>
> Thanks,
> Brett


-- 
Ian Kaufman
Research Systems Administrator
UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu