On Jun 13, 2006, at 11:07 AM, Brock Palen wrote:

Here are the results with --mca mpi_leave_pinned 1.
The results are as expected: they now match the MPICH results almost exactly. Thank you for the help. I have attached a plot with all three runs and the raw data for anyone's viewing pleasure.

We never doubted that :)


I'm still curious: does leaving out mpi_leave_pinned affect real jobs? For the most part, messages as large as the ones used in this test will never be seen, so the effect should be negligible? Could you clarify? Note that I am an admin, not an MPI programmer, and have very little experience with real code.

If you can run some real applications with and without this flag and compare them, that would be more than useful. We never went deeper than some benchmarks on this topic. Additional information is welcome...
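For instance, a rough comparison could be as simple as timing the same job both ways (./your_app and the process count here are just placeholders for your own application):

   # Baseline: Open MPI's default, memory is not left pinned
   time mpirun -np 16 ./your_app

   # Same job with the pinned-memory cache enabled
   time mpirun -np 16 --mca mpi_leave_pinned 1 ./your_app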

  Thanks,
    george.

<bwmyirnet.png>
<bwMCA.o1985>
<bwMPICH.o1978>
<bwOMPI.o1979>

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:38 AM, Brock Palen wrote:

I'll provide new numbers soon with --mca mpi_leave_pinned 1.
I'm curious how this affects real application performance. This
is of course a synthetic test using NetPIPE. Will regular apps,
which move decent amounts of data but care more about low latency,
be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:

Unlike MPICH-GM, Open MPI does not keep memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file,
as described in the FAQ (performance section). I think that is the main reason you are seeing such a degradation in performance.
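Concretely, either of these should work (the application name is just a placeholder; the per-user parameter file lives in $HOME/.openmpi/mca-params.conf):

   # One-off, on the mpirun command line:
   mpirun -np 2 --mca mpi_leave_pinned 1 ./your_app

   # Or persistently, by adding this line to $HOME/.openmpi/mca-params.conf:
   mpi_leave_pinned = 1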

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all"?
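You can also check the value this parameter actually took with:

   ompi_info --param all all | grep leave_pinned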

   Thanks,
     george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:

I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI.
At one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like MPICH, and further on
you see a decrease in bandwidth for OMPI on GM.

I have attached a PNG and the outputs from the test (there are
two for OMPI):
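For reference, the runs were roughly along these lines (NPmpi is NetPIPE's MPI test binary; the host file name is a placeholder, and the exact mpich-gm launcher may differ on other installs):

   # Open MPI 1.0.2 over GM
   mpirun -np 2 --hostfile hosts NPmpi

   # mpich-1.2.6 with GM, using its own launcher
   mpirun.ch_gm -np 2 -machinefile hosts NPmpi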
<bwMyrinet.png>
<bwOMPI.o1969>
<bwOMPI.o1979>
<bwMPICH.o1978>

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
