Very interesting! Appreciate the info. My numbers are slightly better
- as I've indicated, there is an NxN message exchange currently in the
system that needs to be removed (with N processes that exchange costs
on the order of N^2 messages, which is what hurts scaling). With that
commented out, the system scales roughly linearly with the number of
processes.
At 04:31 PM 7/28/2005, you wrote:
All,
I have removed the ompi_ignores from the new bproc components I have been
working on and they are now the default for bproc. These new components
have several advantages over the old bproc component, but mainly:
- we now provide pty support for standard I/O
- it should work better with threads
Greg,
Thanks for tracking this down!
Tim
Greg Watson wrote:
Hi all,
To recap: the problem was that if orted was launched from Eclipse (on
OS X) then subsequent attempts to run a program (using mpirun or
whatever) returned immediately. If orted was launched from anywhere
else (java, command line, etc.), things worked as expected.
Using the mvapi btl, you can now set OMPI_MCA_btl_mvapi_use_srq=1, which
will cause mvapi to use a shared receive queue. This allows much
better scaling, as receives are posted per interface port and not per
queue pair. Note: older versions of Mellanox firmware may see a
substantial performance hit.
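For example (just a sketch; the application name and process count
below are made up), the parameter can be set in the environment before
launching:

  % setenv OMPI_MCA_btl_mvapi_use_srq 1
  % mpirun -np 16 ./ring_test

or, equivalently, passed straight to mpirun:

  % mpirun -mca btl_mvapi_use_srq 1 -np 16 ./ring_test

(The setenv form is for csh/tcsh; under sh/bash it would be
"export OMPI_MCA_btl_mvapi_use_srq=1".)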
Thanks for reporting this. I just committed code to the rsh pls to
specifically check $bindir if the orted is not found in your path (on
the local node). If orted is still not found, it'll now issue a
friendly error message:
[7:58] vogon:~/mpi % mpirun -np 1 hello
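If you do hit that message, the usual remedy is to make sure the
directory containing orted is on your PATH on the nodes involved in the
launch; for example (the install prefix here is just a placeholder for
wherever Open MPI was installed):

  % set path = (/opt/openmpi/bin $path)

or under sh/bash:

  % export PATH=/opt/openmpi/bin:$PATH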