On 31.01.2012 at 06:33, Rayson Ho wrote:

> On Mon, Jan 30, 2012 at 11:33 PM, Tom Bryan <tom...@cisco.com> wrote:
> >> For our use, yes, spawn_multiple makes sense.  We won't be spawning lots and
> >> lots of jobs in quick succession.  We're using MPI as a robust way to get
> >> IPC as we spawn multiple child processes, while using SGE to help us with
> >> load balancing across our compute nodes.
> 
> Note that spawn_multiple is not going to buy you anything as SGE and
> Open Grid Scheduler (and most other batch systems) do not handle
> dynamic slot allocation. There is no way to change the number of slots
> that are used by a job once it's running.
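
To make that point concrete, here is a minimal SGE job script (the parallel environment name `orte` and the slot count of 4 are only assumptions for illustration): the slot count is requested once at submission time via `-pe` and stays fixed for the job's entire lifetime, which is why spawn_multiple cannot grow the allocation later.

```shell
#!/bin/sh
# Hypothetical SGE job script. The "#$" lines are SGE directives,
# read at qsub time; they are plain comments to the shell itself.
#$ -pe orte 4    # request 4 slots -- fixed for the whole job lifetime
#$ -cwd          # run in the submission directory

# SGE sets NSLOTS to the granted slot count; outside SGE we fall
# back to the requested value of 4 for this sketch.
echo "slots fixed at: ${NSLOTS:-4}"
```

Any MPI_Comm_spawn / spawn_multiple call inside such a job can only place children within those already-granted slots.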

Agreed. The first problem is phrasing it in a submission command, like: I need
2 cores for 2 hours, 4 cores for one hour, and finally 1 core for 8 hours. And
then the application must act accordingly. This all sounds more like a real-time
queuing system and application, where it can be guaranteed to happen on time.
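A small Python sketch of the mismatch, using the hypothetical phased request above (the numbers are just the example from the preceding paragraph): since a batch system grants a fixed slot count for the whole runtime, it would have to reserve the peak core count for all 11 hours, even though the phased profile only consumes a fraction of those core-hours.

```python
# Hypothetical time-varying request: (cores, hours) per phase.
phases = [(2, 2), (4, 1), (1, 8)]

# Core-hours the application actually needs across all phases.
used = sum(cores * hours for cores, hours in phases)

# A static allocation must hold the peak core count for the
# total duration, since the slot count cannot change mid-job.
peak = max(cores for cores, _ in phases)
total_hours = sum(hours for _, hours in phases)
reserved = peak * total_hours

print(used)      # 16 core-hours actually used
print(reserved)  # 44 core-hours a fixed allocation would reserve
```

So roughly 28 core-hours would sit idle under a static allocation, which is exactly the gap a dynamic (real-time) scheduler would need to close.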

-- Reuti


> For this reason, I don't recall seeing any users using spawn_multiple
> (and also, IIRC, the call was introduced in MPI-2)... and you might
> want to make sure that normal MPI jobs work before debugging a
> spawn_multiple() job.
> 
> Rayson
> 
> =================================
> Grid Engine / Open Grid Scheduler
> http://gridscheduler.sourceforge.net/
> 
> Scalable Grid Engine Support Program
> http://www.scalablelogic.com/
> 
> 
>> 
>>> Anyway:
>>> do you see on the master node of the parallel job in:
>> 
>> Yes, I should have included that kind of output.  I'll have to run it again
>> with the cols option, but I used pstree to see that I have mpitest --child
>> processes as children of orted by way of sge_shepherd and sge_execd.
>> 
>> Thanks,
>> ---Tom
>> 
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> -- 
> Rayson
> 
> ==================================================
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> 

