This didn't help. osu_bibw still reports a max bidirectional bandwidth in the 1600 MB/s range; it should be in the 1900s. I looked back at my notes: OFED 1.0 rc4 delivered the desired max bidirectional bandwidth. Did the uDAPL IB MTU change since rc4?
$ mpiexec -genv I_MPI_DAPL_PROVIDER OpenIB-scm -genv I_MPI_DEBUG 3 \
    -genv I_MPI_DEVICE rdssm -genv LD_LIBRARY_PATH .../lib -n 2 ../osu_bibw.x
I_MPI: [0] set_up_devices(): will use device: libmpi.rdssm.so
I_MPI: [0] set_up_devices(): will use DAPL provider: OpenIB-cma
I_MPI: [0] set_up_devices(): will use device: libmpi.rdssm.so
I_MPI: [0] set_up_devices(): will use DAPL provider: OpenIB-cma
# OSU MPI Bidirectional Bandwidth Test (Version 2.1)
# Size          Bi-Bandwidth (MB/s)
1               0.813478
2               1.637650
4               3.260333
8               6.627831
16              12.168080
32              25.683379
64              50.580351
128             95.035855
256             174.132061
512             310.656179
1024            513.066433
2048            726.685587
4096            877.233753
8192            973.311995
16384           1040.096136
32768           849.790165
65536           1088.723063
131072          1296.584344
262144          1428.176271
524288          1540.248671
1048576         1579.665660
2097152         1608.765475
4194304         1628.157462

Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems

> -----Original Message-----
> From: Arlin Davis [mailto:[EMAIL PROTECTED]
> Sent: Friday, June 09, 2006 11:38 AM
> To: Scott Weitzenkamp (sweitzen)
> Cc: Tziporet Koren; [EMAIL PROTECTED]; Davis, Arlin R;
>     Lentini, James; openib-general
> Subject: Re: [openib-general] IB MTU tunable for uDAPL and/or Intel MPI?
>
> Scott Weitzenkamp (sweitzen) wrote:
>
> > While we're talking about MTUs, is the IB MTU tunable in uDAPL and/or
> > Intel MPI via env var or config file?
> >
> > Looks like Intel MPI 2.0.1 uses 2K for the IB MTU like MVAPICH does in
> > OFED 1.0 rc4 and rc6; I'd like to try 1K with Intel MPI.
> >
> > Scott
>
> There is no mechanism for me to modify the MTU using rdma_cm, so whatever
> is returned in the path record is what you get with the OpenIB-cma
> provider. However, you could use the OpenIB-scm provider, which is hard
> coded for a 1K MTU, as a comparison. Can you run with "-genv
> I_MPI_DAPL_PROVIDER OpenIB-scm" on your cluster?
> -arlin
>
> > ------------------------------------------------------------------------
> > *From:* [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] *On Behalf Of *Scott Weitzenkamp (sweitzen)
> > *Sent:* Thursday, June 08, 2006 4:38 PM
> > *To:* Tziporet Koren; [EMAIL PROTECTED]
> > *Cc:* openib-general
> > *Subject:* RE: [openib-general] OFED-1.0-rc6 is available
> >
> > The MTU change undoes the changes for bug 81, so I have reopened
> > bug 81 (http://openib.org/bugzilla/show_bug.cgi?id=81).
> >
> > With rc6, PCI-X osu_bw and osu_bibw performance is bad, and PCI-E
> > osu_bibw performance is bad. I've enclosed some performance data;
> > look at rc4 vs rc5 vs rc6 for Cougar/Cheetah/LionMini.
> >
> > Are there other benchmarks driving the changes in rc6 (and rc4)?
> >
> > Scott Weitzenkamp
> > SQA and Release Manager
> > Server Virtualization Business Unit
> > Cisco Systems
> >
> > *OSU MPI:*
> >
> >   * Added mpi_alltoall fine tuning parameters
> >
> >   * Added default configuration/documentation file
> >     $MPIHOME/etc/mvapich.conf
> >
> >   * Added shell configuration files
> >     $MPIHOME/etc/mvapich.csh, $MPIHOME/etc/mvapich.sh
> >
> >   * Default MTU was changed back to 2K for InfiniHost III Ex and
> >     InfiniHost III Lx HCAs. For the InfiniHost card the recommended
> >     value is: VIADEV_DEFAULT_MTU=MTU1024
>
> _______________________________________________
> openib-general mailing list
> openib-general@openib.org
> http://openib.org/mailman/listinfo/openib-general
>
> To unsubscribe, please visit
> http://openib.org/mailman/listinfo/openib-general
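
Arlin's suggestion above amounts to re-running the benchmark once per uDAPL provider, so the hard-coded 1K MTU (OpenIB-scm) and the path-record MTU (OpenIB-cma) results can be compared side by side. A minimal sketch of that loop follows; the binary path and flags are taken from the transcript above, and it only prints the command lines (a dry run), since whether both providers are installed on a given cluster is an assumption:

```shell
# Sketch: build the osu_bibw command line once per uDAPL provider
# so the OpenIB-cma and OpenIB-scm results can be compared.
# Dry run: echoes the commands instead of executing them.
for provider in OpenIB-cma OpenIB-scm; do
    cmd="mpiexec -genv I_MPI_DAPL_PROVIDER $provider -genv I_MPI_DEVICE rdssm -genv I_MPI_DEBUG 3 -n 2 ../osu_bibw.x"
    echo "$cmd"   # on a real cluster, replace echo with: eval "$cmd"
done
```

On a real run, the `I_MPI_DEBUG 3` output confirms which DAPL provider was actually selected, which matters given that the transcript above requested OpenIB-scm but reported OpenIB-cma.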