Hello again,

Hopefully this is an easier question...

My cluster uses InfiniBand interconnects (Mellanox InfiniHost III and some
ConnectX).  I'm seeing terrible and sporadic latency (on the order of ~1000
microseconds) as measured by the subounce code
(http://sourceforge.net/projects/subounce/), but the bandwidth is as expected.
I'm used to seeing only 1-2 microseconds with MVAPICH, and I'm wondering
whether Open MPI isn't performing as well here or simply doesn't play well
with how subounce measures latency (by timing 0-byte messages).  I've tried
playing with a few parameters with no success.  Here's how the build is
configured:

myflags="-O3 -xSSE2"
./configure --prefix=/part0/apps/MPI/intel/openmpi-1.4.1 \
            --disable-dlopen --with-wrapper-ldflags="-shared-intel" \
            --enable-orterun-prefix-by-default \
            --with-openib --enable-openib-connectx-xrc --enable-openib-rdmacm \
            CC=icc CXX=icpc F77=ifort FC=ifort \
            CFLAGS="$myflags" FFLAGS="$myflags" CXXFLAGS="$myflags" \
            FCFLAGS="$myflags" \
            OBJC=gcc OBJCFLAGS="-O3"
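
In case it helps with the diagnosis, below is the kind of sanity check I've
been thinking of running to confirm the traffic is actually going over the
openib BTL rather than silently falling back to TCP.  The host names and the
./subounce path are just placeholders, and the eager-RDMA setting is only a
guess on my part, not something I've verified:

# confirm the openib BTL was actually built into this install
ompi_info | grep -i openib

# list the openib BTL parameters available for tuning
ompi_info --param btl openib

# force the openib/sm/self BTLs (no TCP fallback) and re-run the
# latency test; node01/node02 and ./subounce are placeholders
mpirun --mca btl openib,sm,self -np 2 -host node01,node02 ./subounce

# same run, but explicitly enabling eager RDMA for small messages
mpirun --mca btl openib,sm,self \
       --mca btl_openib_use_eager_rdma 1 \
       -np 2 -host node01,node02 ./subounce

If forcing the BTL list drops the numbers to a few microseconds, that would
point at the TCP BTL being selected rather than at the openib tuning itself.
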
Any ideas?

Thanks,
Steve

