I'll provide new numbers soon with the --mca mpi_leave_pinned 1 option.
I'm curious, though: how does this affect real application performance? This is, of course, a synthetic test using NetPIPE. What about regular apps that move decent amounts of data but care more about low latency?
Will those be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:

Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that is the
main reason you're seeing such a degradation in performance.
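
For example (a minimal sketch; the NPmpi binary name and the per-user
parameter file location are assumptions based on a default NetPIPE and
Open MPI install, not taken from your setup):

   # on the mpirun command line
   mpirun --mca mpi_leave_pinned 1 -np 2 ./NPmpi

   # or persistently, via $HOME/.openmpi/mca-params.conf
   mpi_leave_pinned = 1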

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".

   Thanks,
     george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:

I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI.
At one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like MPICH, and further on you
see a decrease in bandwidth for OMPI on gm.
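
(For reference, a typical two-process NetPIPE run looks roughly like
the lines below; the NPmpi binary name and host list are illustrative,
not the exact invocation behind these numbers:)

   # Open MPI 1.0.2
   mpirun -np 2 -host node1,node2 ./NPmpi

   # mpich-1.2.6 (mpich-gm)
   mpirun.ch_gm -np 2 ./NPmpi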

I have attached a PNG and the outputs from the tests (there are
two for OMPI):
<bwMyrinet.png>
<bwOMPI.o1969>
<bwOMPI.o1979>
<bwMPICH.o1978>

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985

