Am 15.06.2012 um 18:20 schrieb Joseph Farran:

> On 06/15/2012 09:06 AM, Reuti wrote:
>> 
>> This is the number of used slots across all queues. So, if one queue 
>> instance on an exechost uses them all up, a job in the superordinated queue 
>> will never start. In this case the slot count needs to be set in the queue 
>> definition (which could be arbitrary if the exechost setting is limiting it 
>> to 64, but in your case 64 in the queue definition should do).
>> 
>> Subordination will not free any resources like memory or alike. The job is 
>> still on the node, just stopped and hanging around.
>> 
>> -- Reuti
> 
> Hi Reuti.
> 
> Sorry, I don't follow.   Assume I am an OGE newbie :-).    I am not new to the 
> concept, as I currently have the setup I described working under Torque/Maui; 
> now I am trying to duplicate the same setup under OGE.

Do you define the slots in the host definition and not the queue setting in 
Torque?

In SGE the number of slots for each queue instance residing on a machine is 
defined in the queue definition. It should be set to 64 in your case. If you 
have one and only one queue, there is no need to set any slot count in the 
exechost definition.
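As a sketch (the queue name `all.q` is a placeholder; substitute your own), the per-queue slot count can be set non-interactively with `qconf`:

```shell
# Set 64 slots for every instance of the queue all.q
# (all.q is a placeholder queue name)
qconf -mattr queue slots 64 all.q

# Verify the setting in the queue definition
qconf -sq all.q | grep slots
```

These commands require a running cluster and manager privileges, so treat them as a configuration sketch rather than something to paste blindly.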

If you have two queue instances on a machine, and want to limit the number of 
slots across both queue instances to avoid oversubscription, you need to define 
the overall limit in the exechost definition (to be complete: or in an RQS, but 
ignore it for now).
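For illustration (the host name `node01` is a placeholder), such a host-wide cap is set via the `complex_values` attribute of the exechost:

```shell
# Cap the total slots across all queue instances on node01 at 64
qconf -mattr exechost complex_values slots=64 node01

# Inspect the resulting host configuration
qconf -se node01
```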

With a limit defined in the exechost definition, you can be sure that the 
node will never run more processes than the defined number of slots (often set 
to the actual number of cores in the machine).

==

In your case: define 64 slots in each queue, and don't limit the number of 
slots across all queues in the exechost definition.
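To tie it together, a minimal two-queue subordination setup might look like the following (queue names `high.q` and `low.q` are placeholders; the subordination semantics are as described above, i.e. jobs in the subordinate queue are suspended, not evicted):

```shell
# 64 slots in each queue instance, no exechost-wide slot limit
qconf -mattr queue slots 64 high.q
qconf -mattr queue slots 64 low.q

# Suspend low.q on a host as soon as at least 1 slot
# of high.q is in use on that host
qconf -mattr queue subordinate_list "low.q=1" high.q
```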

-- Reuti


> I *DO* understand that a suspended job will keep pages in memory.    As long 
> as the job is not taking up CPU cycles, all is good.
> 
> So the setup I described is *not* possible with OGE?
> 


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
