Good suggestion; done.
Thanks!
> -Original Message-
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Donohue
> Sent: Monday, June 05, 2006 9:31 AM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] Oversubscription/Scheduling Bug
>
> I also noticed another bug in the scheduler:
> hostfile:
> A slots=2 max-slots=2
> B slots=2 max-slots=2
> 'mpirun -np 5' quits with an over-subscription error
> 'mpirun -np 3 --host B' hangs and just chews up CPU cycles forever
>
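A rough reproduction sketch of the above, assuming a stock Open MPI install (A and B stand in for the two single-processor hosts, and "hostname" is just a cheap test program):

  $ cat myhosts
  A slots=2 max-slots=2
  B slots=2 max-slots=2

  # 5 ranks exceed the 4 max slots in total -> over-subscription error, as expected
  $ mpirun --hostfile myhosts -np 5 hostname

  # 3 ranks confined to host B (max-slots=2) should also be refused,
  # but instead this hangs and chews CPU forever
  $ mpirun --hostfile myhosts -np 3 --host B hostname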
Just as a quick followup on the 'hang' seen above. This was
> You make a good point about the values in that file, though -- I'll add
> some information to the FAQ that such config files are only valid on the
> nodes where they can be seen (i.e., that mpirun does not bundle up all
> these files and send them to remote nodes during mpirun). Sorry for the
> confusion!
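To make that point concrete: the usual per-user location for such a file is $HOME/.openmpi/mca-params.conf, and it is read by the Open MPI runtime on whichever node each process starts on, so it has to exist on every node (the parameter shown is only an example):

  # $HOME/.openmpi/mca-params.conf -- one "name = value" pair per line;
  # read locally on each node, mpirun does not copy it to remote nodes
  mpi_yield_when_idle = 1

Copying the file to the same path on each remote node (with scp, for example) is what actually propagates the setting.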
> -Original Message-
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Donohue
> Sent: Monday, June 05, 2006 8:50 AM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] Oversubscription/Scheduling Bug
Sorry Brian and Jeff - I sent you chasing after something of a red herring...
After much more testing and banging my head on the desk trying to figure this
one out, it turns out '--mca mpi_yield_when_idle 1' on the command line does
actually work properly for me... The one or two times I had pr
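For the record, the command-line form in question looks roughly like this (the application name and host are placeholders):

  # explicitly request yielding (Degraded mode) regardless of what the
  # scheduler decides about oversubscription
  $ mpirun --mca mpi_yield_when_idle 1 -np 2 --host A ./my_mpi_app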
On Fri, 26 May 2006, Brian W. Barrett wrote:
On Fri, 26 May 2006, Jeff Squyres (jsquyres) wrote:
You can see this by slightly modifying your test command -- run "env"
instead of "hostname". You'll see that the environment variable
OMPI_MCA_mpi_yield_when_idle is set to the value that you passed in on
the mpirun command line, regardless of
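A quick sketch of that check (the host name is a placeholder):

  # run "env" as the MPI program and look for the MCA variable that
  # mpirun exports into each process's environment
  $ mpirun --mca mpi_yield_when_idle 1 -np 1 --host B env | grep OMPI_MCA_mpi_yield_when_idle
  OMPI_MCA_mpi_yield_when_idle=1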
> -Original Message-
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Donohue
> Sent: Wednesday, May 24, 2006 10:27 AM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] Oversubscription/Scheduling Bug
>
> I'm u
> > Since I have single-processor nodes, the obvious solution
> > would be to set slots=0 for each of my nodes, so that using 1
> > slot for every run causes the nodes to be oversubscribed.
> > However, it seems that slots=0 is treated like
> > slots=infinity, so my processes run in Aggressive mode.
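One way to get the yielding behaviour on single-processor nodes without relying on slots=0, sketched here on the assumption that forcing the yield parameter is acceptable (hostnames and application name are placeholders):

  $ cat myhosts
  A slots=1
  B slots=1

  # declare the real processor count, then request Degraded mode
  # explicitly instead of depending on oversubscription detection
  $ mpirun --hostfile myhosts --mca mpi_yield_when_idle 1 -np 2 ./my_mpi_app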
Paul --
Many thanks for your detailed report. I apparently missed a whole
boatload of e-mails on 2 May due to a problem with my mail client. Deep
apologies for missing this mail! :-(
More information below.
> -Original Message-
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org]