Hi,

On 19.12.2011 at 05:36, mahbube rustaee wrote:

> 
> 
> On Sun, Dec 18, 2011 at 9:12 PM, Reuti <[email protected]> wrote:
> Hi,
> 
> On 18.12.2011 at 12:16, Mohamed Adel wrote:
> 
> > Dear All,
> >
> > I'm using a cluster of 8 cores per compute node and 8GB main memory per 
> > compute node, which gives 1GB per core.
> > I want to submit an MPI job which needs 2GB memory per core, so my question 
> > is:
> > Can I request more memory per core than is available, and will the 
> > scheduler reduce the number of running processes per compute node to 
> > fulfill the high memory request?
> 
> just set up the available memory per node:
> 
> http://www.gridengine.info/tag/virtual_free/
> 
> 1) at this document, Reuti refer to h_vmem and virtual_free but not mem_free, 
> why? 

You could even ask: why not create a custom complex of my own for this 
purpose? The advantage of using an already defined complex which has a load 
sensor attached internally is that the lower of two values will be used as 
the constraint: the internal bookkeeping of the consumable, or the value 
detected by the load sensor. This is true for both virtual_free and mem_free.

If you use just mem_free, the detected value will never equal the installed 
memory, as the OS always uses some memory itself. Often this can be 
neglected, as a little bit of swapping won't hurt the process. So only with 
virtual_free can you use the complete installed memory for the user 
processes and allow a little bit of swapping. As processes often don't use 
the granted memory over their whole lifetime, it might even be possible to 
use all installed memory and never face swapping - it depends. To summarize:

== custom complex of type memory

[x] will help SGE to schedule
[ ] will enforce a memory limit per slot
[ ] will be compared to the found value by a load sensor
[ ] will always be lower than the installed memory due to the consumption of 
the OS

== h_vmem made consumable

[x] will help SGE to schedule
[x] will enforce a memory limit per slot
[ ] will be compared to the found value by a load sensor
[ ] will always be lower than the installed memory due to the consumption of 
the OS

== mem_free made consumable

[x] will help SGE to schedule
[ ] will enforce a memory limit per slot
[x] will be compared to the found value by a load sensor
[x] will always be lower than the installed memory due to the consumption of 
the OS

== virtual_free made consumable

[x] will help SGE to schedule
[ ] will enforce a memory limit per slot
[x] will be compared to the found value by a load sensor
[ ] will always be lower than the installed memory due to the consumption of 
the OS


You can check in your cluster by:

$ qhost -F mem_free,virtual_free
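
Once virtual_free is consumable, a job like the one described could be 
submitted along these lines (the PE name "mpi" and the script name are just 
placeholders; for parallel jobs the -l request is counted per granted slot):

```
$ qsub -pe mpi 16 -l virtual_free=2G job.sh
```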


> Then request the 2GB in the job submission.
> 1) How can the memory request per slot (h_vmem or virtual_free) be checked 
> on heterogeneous nodes? (e.g. some nodes have 2G per slot and some nodes 
> have 1G per slot, and users shouldn't request more than 2G h_vmem or 
> virtual_free for the former and 1G for the latter.)

I suggest not to think in memory per core. It's just the total amount and any 
process can use it. If you request 2 GB it's per slot, and if there is less 
memory in a node, then fewer processes will be scheduled to it automatically.
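
To illustrate that last point, here is a toy model (plain Python, not SGE 
code) of the consumable bookkeeping: with a per-slot request, each node can 
host at most floor(free memory / request) processes, capped by its slot count:

```python
def slots_per_node(free_mem_gb: float, slots: int, request_gb: float) -> int:
    """Processes of a job requesting request_gb per slot that fit on a node."""
    return min(slots, int(free_mem_gb // request_gb))

# An 8-slot node with 8 GB and a 2 GB-per-slot request: only 4 slots usable.
print(slots_per_node(8, 8, 2))   # -> 4
# A heterogeneous 16 GB node can still run all 8 processes.
print(slots_per_node(16, 8, 2))  # -> 8
```

So on the original cluster the 2 GB job would simply spread over twice as 
many nodes, with half the slots used on each.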

-- Reuti

NB: Which email client are you using? In Apple Mail I can't spot the lines 
you added, hence the flow of conversation is lost and it's just one plain 
message.


> 2) Is there any way for users to request such dynamic memory that can be 
> checked by the scheduler?
> 
> > Can I request a certain number of slots per node, i.e. 4 cores instead of 
> > 8 (although the queue is configured to provide 8 slots per host)?
> 
> This depends on the defined PE. But it's not on a user level as it has to be 
> set up by the SGE admin in the PE beforehand. Requesting memory should give 
> you the proper distribution.
> 
> -- Reuti
> 
> 
> >
> > Thanks in advance,
> > --ma
> > _______________________________________________
> > users mailing list
> > [email protected]
> > https://gridengine.org/mailman/listinfo/users
> 
> 
> 

