Greetings,

I recently purchased and set up a new blade cluster using Xeon 5560 CPUs, Mellanox DDR ConnectX cards, running CentOS 5.2. I use the cluster to run a large FORTRAN 90 fluid model. I have been using OpenMPI on my other clusters for years, and it is my default MPI environment.

I downloaded and installed the latest OpenMPI 1.3.3 release with the following:

./configure FC=ifort F77=ifort F90=ifort --prefix=/share/apps/openmpi-1.3.3-intel --with-openib=/opt/ofed --with-openib-libdir=/opt/ofed/lib64 --with-tm=/opt/torque/
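
After the install I spot-checked that the build actually picked up the Fortran compiler, the openib btl, and the TORQUE (tm) support with a few crude ompi_info greps -- these are just illustrative queries, not my full output:

(machine:~)% ompi_info | grep -i fortran
(machine:~)% ompi_info | grep openib
(machine:~)% ompi_info | grep tm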

To show the configuration, I ran:

(machine:~)% mpicc -v
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
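
(For completeness: since mpicc -v mostly shows the underlying GNU compiler, I believe the wrappers can also print the exact command line they would invoke, e.g.:

(machine:~)% mpicc --showme
(machine:~)% mpif90 --showme:command

but I have not pasted that output here.)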

I then ran a large number of tests using one of my typical model domain configurations (which are relatively expensive) to see how well the system was performing. I didn't want to use "benchmarking" code, but rather the code I actually use the cluster for. Remarkably, it scaled linearly up to about 8 nodes (using 8 cores per node).
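
For reference, each case is launched inside its TORQUE allocation with something along these lines (the executable name and core count here are just placeholders):

(machine:~)% mpirun -np 64 ./model

Since OpenMPI was built with --with-tm, I don't pass a machinefile; the node list comes from TORQUE.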

I decided -- out of curiosity -- to see how this compared with MVAPICH2. I downloaded the 1.4rc2 code and compiled it with the following:

./configure FC=ifort F77=ifort F90=ifort --prefix=/share/apps/mvapich2-1.4-intel --enable-f90 --with-ib-libpath=/opt/ofed/lib64 --with-rdma=gen2 --with-ib-include=/opt/ofed/include

This was confirmed with:

(machine:~)% mpicc -v
mpicc for 1.4.0rc2
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
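
(Similarly, I believe the MPICH-style wrappers accept -show to print the underlying compile line without running it:

(machine:~)% mpif90 -show

again not pasted here.)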

I ran the same tests as before, now using MVAPICH2 rather than OpenMPI. To my astonishment, the MVAPICH2 runs were -- on average -- 20% faster in terms of wall-clock time. I tried a number of domain configurations (over 1-16 nodes, with various numbers of processors per node), and the improvement ranged from 7.7 to 35.2 percent, depending on the configuration.

Because the result was so surprising, I reran a number of my OpenMPI tests, and they were consistent with the originals. I read through the FAQ <http://www.open-mpi.org/faq/?category=openfabrics> and tried a number of the RDMA-related options (the messages passed in the code I run are -- I believe -- rather small). That improved the OpenMPI results by about 3%, but still nowhere near what I was getting with MVAPICH2.
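
The sort of thing I tried looks roughly like this -- the parameter values and the executable name are only illustrative, not a record of my exact runs:

(machine:~)% mpirun --mca btl openib,sm,self --mca btl_openib_use_eager_rdma 1 --mca btl_openib_eager_limit 32768 -np 64 ./model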

I ran a final test which I find very strange: the same test case on 1 CPU. The MVAPICH2 case was 23% faster!?!? This makes little sense to me. Both builds use ifort as the underlying compiler for mpif90, with *identical* optimization flags, etc. I don't understand how the results could be different.

All of these cases are run with myself as the only user of the cluster, and each test is run alone (without any other interference on the machine). I am running TORQUE, so each job is submitted to the queue, and the run time reported by the queue -- the actual wall-clock time for the job to finish -- is used as the measure. Some may discount that metric; however, it is what I am most concerned with: if I have to wait 2 hours to run a job with OpenMPI but only 1:36 with MVAPICH2, that is a significant advantage to me.
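
To be clear about where the number comes from, I read it straight from TORQUE's record for the job, e.g. something like (<jobid> being the job in question):

(machine:~)% qstat -f <jobid> | grep resources_used.walltime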

That said, MVAPICH2 has its own problems, such as hung mpd processes lingering on the nodes. I prefer to use OpenMPI, so my question is:

What does the list suggest I modify in order to improve the OpenMPI performance?

I have played with the RDMA parameters to increase their thresholds, but little was gained. I am happy to provide the output of ompi_info if needed, but it is long, so I didn't want to include it in the initial post. I apologize for my naivete regarding the internals of MPI hardware utilization.
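
If specific parameters are of interest, I can also pull out just the relevant section, e.g.:

(machine:~)% ompi_info --param btl openib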

Thank you in advance.

Cheers,
Brian
