[OMPI devel] mpif.h on Intel build when run with OMPI_FC=gfortran

2016-03-03 Thread Dave Turner
... alternate mpif.h include file. This looks to be a bug to me, but please let me know if I missed a config flag somewhere. Dave Turner. Selene> cat bugtest.F ! Program to illustrate bug when Open MPI is compiled with Intel ! compilers but run using OMPI_FC=gfortran. PROGRAM ...
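A minimal reproduction of the scenario described in this thread (a sketch; the binary names are placeholders, and the Open MPI tree is assumed to have been configured with ifort):

    # Compile with the wrapper as configured (underlying compiler: ifort):
    mpif90 bugtest.F -o bugtest_intel

    # Override the underlying Fortran compiler at wrapper time; the
    # wrapper still hands gfortran the mpif.h generated for ifort:
    OMPI_FC=gfortran mpif90 bugtest.F -o bugtest_gfortran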

Re: [OMPI devel] mpif.h on Intel build when run with OMPI_FC=gfortran

2016-03-03 Thread Dave Turner
... use a different Fortran compiler to build Open MPI. Intel Fortran compilers have the right stuff, so mpif-sizeof.h is usable, and you get something very different. Cheers, Gilles. On 3/4/2016 10:17 AM, Dave Turner wrote: ...
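One quick way to see which flavor of mpif-sizeof.h a given build produced (a sketch; the install prefix is a placeholder, and the grep pattern is an assumption about the generated interfaces):

    # A usable mpif-sizeof.h contains explicit MPI_SIZEOF interfaces; a
    # build whose Fortran compiler lacks the needed features gets a stub:
    grep -ci "interface" /opt/openmpi/include/mpif-sizeof.h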

Re: [OMPI devel] mpif.h on Intel build when run with OMPI_FC=gfortran

2016-03-03 Thread Dave Turner
..., not because of Open MPI. Larry Baker, US Geological Survey, 650-329-5608, ba...@usgs.gov. On 3 Mar 2016, at 6:39 PM, Dave Turner wrote: Gilles, I don't see the point of having the OMPI_CC and OMPI_FC environment variables at ...
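For reference, the wrapper-compiler overrides under discussion work like this (a sketch; the source file names are placeholders):

    # Show the compiler and flags the wrapper will invoke:
    mpicc --showme
    mpif90 --showme

    # Swap the underlying compiler for a single invocation:
    OMPI_CC=gcc mpicc hello.c -o hello
    OMPI_FC=gfortran mpif90 hello.f90 -o hello_f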

Re: [OMPI devel] mpif.h on Intel build when run with OMPI_FC=gfortran

2016-03-03 Thread Dave Turner
... transparent to our users, and allows us to present a single build tree that works for both compilers. Cheers, Ben. From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Dave Turner. Sent: Friday, 4 March 2016 2:2...
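A common way a site makes one Open MPI tree serve two compiler families is to set the overrides in the environment users load (a hypothetical sketch, e.g. for a GNU-toolchain environment module):

    # e.g. in a shell profile or environment module:
    export OMPI_CC=gcc
    export OMPI_CXX=g++
    export OMPI_FC=gfortran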

Re: [OMPI devel] rdmacm and udcm for 2.0.1 and RoCE

2017-01-06 Thread Dave Turner
...32 (all the rest of the command line args) and see if it then works? Howard. 2017-01-04 16:37 GMT-07:00 Dave Turner: ... No OpenFabrics connection scheme...
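For context, forcing the connection scheme for RoCE looks roughly like this (a sketch; process count and binary are placeholders). RoCE has traditionally required the rdmacm connection manager in the openib BTL, while udcm works on plain InfiniBand:

    mpirun -np 2 --mca pml ob1 --mca btl openib,self \
           --mca btl_openib_cpc_include rdmacm ./a.out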

Re: [OMPI devel] rdmacm and udcm for 2.0.1 and RoCE

2017-01-11 Thread Dave Turner
... args) and see if it then works? Howard. 2017-01-04 16:37 GMT-07:00 Dave Turner: ... No OpenFabrics connection schemes reported that they were able to be ...
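To see which connection schemes a given build supports, the openib BTL's parameters can be inspected (a sketch):

    # List the openib BTL parameters, including the cpc (connect) options:
    ompi_info --param btl openib --level 9 | grep -i cpc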

[OMPI devel] NetPIPE performance curves

2017-05-02 Thread Dave Turner
... worst-case scenario that the --nocache measurements represent, I could certainly see large bioinformatics runs being affected, as the message lengths are not going to be multiples of 8 bytes. Dave Turner. Work: davetur...@ksu.edu, (785) 532-7791, 2219 Engineering Hall, ...
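The measurements discussed here come from NetPIPE runs along these lines (a sketch; see http://netpipe.cs.ksu.edu for the benchmark itself):

    # Standard ping-pong measurement between two ranks:
    mpirun -np 2 ./NPmpi

    # Walk each iteration through fresh buffers so messages are never
    # re-sent from cache:
    mpirun -np 2 ./NPmpi --nocache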

Re: [OMPI devel] NetPIPE performance curves

2017-05-04 Thread Dave Turner
... --mca pml ob1 --mca btl openib,self --mca btl_openib_get_limit $((1024*1024)) --mca btl_openib_put_limit $((1024*1024)) ./NPmpi --nocache --start 100. George. On Wed, May 3, 2017 at 4:27 PM, Dave Turner wrote: George, Our lo...
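George's suggestion, reassembled from the preview above (the leading mpirun invocation and process count are assumed):

    # Cap RDMA get/put fragments at 1 MB to probe where the protocol
    # switch hurts the --nocache curve:
    mpirun -np 2 --mca pml ob1 --mca btl openib,self \
           --mca btl_openib_get_limit $((1024*1024)) \
           --mca btl_openib_put_limit $((1024*1024)) \
           ./NPmpi --nocache --start 100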

[OMPI devel] Poor performance when compiling with --disable-dlopen

2018-01-23 Thread Dave Turner
... Replacing --disable-dlopen with --disable-mca-dso showed good performance. Replacing --disable-dlopen with --enable-static showed good performance. So it's only --disable-dlopen that leads to poor performance. http://netpipe.cs.ksu.edu Dave Turner. Work: davetur...@ksu.edu (785...
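The three configurations being compared, sketched from the report (the install prefixes are placeholders):

    # Slow case reported: dlopen support disabled entirely
    ./configure --prefix=/opt/ompi-nodlopen --disable-dlopen

    # Fast alternatives reported:
    ./configure --prefix=/opt/ompi-nodso --disable-mca-dso
    ./configure --prefix=/opt/ompi-static --enable-static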

Re: [OMPI devel] Poor performance when compiling with --disable-dlopen

2018-01-23 Thread Dave Turner
... glance, that looks pretty odd, and I'll have a look at it. Which benchmark are you using to measure the bandwidth? Does your benchmark call MPI_Init_thread(MPI_THREAD_MULTIPLE)? Have you tried without --enable-mpi-thread-multiple? Cheers, Gilles
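Whether a build was configured with full threading support can be checked without recompiling anything (a sketch):

    # Reports a line like "Thread support: posix (MPI_THREAD_MULTIPLE: yes, ...)":
    ompi_info | grep -i "thread support"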

Re: [OMPI devel] Poor performance when compiling with --disable-dlopen

2018-02-12 Thread Dave Turner
... on our system. Dave Turner. On Wed, Jan 24, 2018 at 1:00 PM, wrote: Send devel mailing list submissions to devel@lists.open-mpi.org ...

[OMPI devel] Seeing message failures in OpenMPI 4.0.1 on UCX

2019-04-16 Thread Dave Turner
... additional tests I can run. Dave Turner. CentOS 7 on Intel processors; QDR IB and 40 GbE tests. UCX 1.5.0 installed from the tarball according to the docs on the webpage. OpenMPI-4.0.1 configured for verbs with: ./configure F77=ifort FC=ifort --prefix=/homes/daveturner/libs/openmpi-...
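For comparison, a UCX-enabled build of the same release would be configured along these lines (a sketch; the UCX install path is a placeholder):

    # Point Open MPI at the UCX install so the ucx PML gets built:
    ./configure FC=ifort --prefix=$HOME/libs/openmpi-4.0.1 \
                --with-ucx=/usr/local/ucx-1.5.0
    make -j 8 && make install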

[OMPI devel] mlx4 QP operation err

2015-01-28 Thread Dave Turner
... report them to Mellanox? I've attached some files with more detailed information on this problem. Dave Turner. Work: davetur...@ksu.edu, (785) 532-7791, 118 Nichols Hall, Manhattan KS 66502. Home: drdavetur...@gmail.com, cell: (785) 770-5929

Re: [OMPI devel] devel Digest, Vol 2905, Issue 1

2015-01-31 Thread Dave Turner
The Mellanox 2.33.5100 firmware upgrade that came out a few days ago did indeed fix the problem we were seeing with the mlx4 errors. Thanks for pointing us in that direction. Dave Turner. On Thu, Jan 29, 2015 at 11:00 AM, wrote: Send devel mailing list submissi...
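Verifying the running firmware before and after such an upgrade can be done with the standard InfiniBand tools (a sketch; the device name matches the mlx4 adapter from this thread):

    # Report HCA status, including the firmware version:
    ibstat | grep -i firmware

    # Or query the device directly:
    ibv_devinfo -d mlx4_0 | grep fw_ver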

[OMPI devel] RoCE plus QDR IB tunable parameters

2015-02-06 Thread Dave Turner
...10 Gbps TCP for large messages. However, I do think these issues will come up more in the future. With the low latency of RoCE matching IB, there are more opportunities to do channel bonding or to allow multiple interfaces to carry aggregate traffic at even smaller message sizes. Dave Turner ...
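In Open MPI terms, using both fabrics at once means enabling both BTLs and letting ob1 stripe large messages across them (a sketch; the interface name is a placeholder and nothing here is tuned):

    # Allow the verbs and TCP transports simultaneously; ob1 stripes
    # large messages across them by their relative bandwidths:
    mpirun -np 2 --mca pml ob1 --mca btl openib,tcp,self \
           --mca btl_tcp_if_include eth2 ./NPmpi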

Re: [OMPI devel] RoCE plus QDR IB tunable parameters

2015-02-06 Thread Dave Turner
... RoCE on these nodes and go with QDR IB plus 10 Gbps TCP for large messages. However, I do think these issues will come up more in the future. With the low latency of RoCE matching IB, there are more opportunities to do ...

Re: [OMPI devel] OMPI devel] RoCE plus QDR IB tunable parameters

2015-02-09 Thread Dave Turner
... values, and IMHO they should be 327680 and 81920 because of the 8/10 encoding (and, that being said, that should not change the measured performance). Also, could you try again by forcing the same btl_tcp_latency and btl_openib_latency? Cheers, Gilles ...
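Forcing those parameters explicitly would look like this (a sketch; pairing 327680 with openib and 81920 with tcp is an inference from the 8/10 encoding remark, and the common latency value is purely illustrative):

    mpirun -np 2 --mca btl openib,tcp,self \
           --mca btl_openib_bandwidth 327680 --mca btl_tcp_bandwidth 81920 \
           --mca btl_openib_latency 1 --mca btl_tcp_latency 1 ./NPmpi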

Re: [OMPI devel] devel Digest, Vol 2917, Issue 1

2015-02-19 Thread Dave Turner
... logic (the bandwidth part). I just pushed a fix in master: https://github.com/open-mpi/ompi/commit/e173f9b0c0c63c3ea24b8d8bc0ebafe1f1736acb. Once validated, this should be moved over to the 1.8 branch. Dave, do you think it is possible to renew your experiment...
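Renewing the experiment against master, as asked, amounts to building from the git tree (a sketch; the prefix is a placeholder):

    git clone https://github.com/open-mpi/ompi.git
    cd ompi && ./autogen.pl
    ./configure --prefix=$HOME/ompi-master && make -j 8 install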

Re: [OMPI devel] OMPI devel] RoCE plus QDR IB tunable parameters

2015-02-23 Thread Dave Turner
... What kind of performance do you get when you use MXM? (e.g., the yalla PML on master). On Feb 19, 2015, at 6:41 PM, Dave Turner wrote: I've downloaded the OpenMPI master as suggested and rerun all my aggregate tests ...
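Selecting the MXM-based path explicitly for that comparison (a sketch; assumes a master-era build with MXM support):

    # Use the MXM-backed yalla PML instead of ob1 over the openib BTL:
    mpirun -np 2 --mca pml yalla ./NPmpi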

Re: [OMPI devel] Seeing message failures in OpenMPI 4.0.1 on UCX

2019-06-03 Thread Dave Turner via devel
I've rerun my NetPIPE tests using --mca btl ^uct as Yossi suggested, and that does indeed get rid of the message failures. I don't see any difference in performance, but wanted to check if there is any downside to doing the build without uct as suggested. Dave Turner ...
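The run-time workaround and a build-time counterpart, side by side (a sketch; omitting the component at configure time via --enable-mca-no-build is an assumption about how one would build without uct):

    # Run-time: exclude the uct BTL:
    mpirun -np 2 --mca pml ucx --mca btl ^uct ./NPmpi

    # Build-time: leave the btl/uct component out entirely (assumed):
    ./configure --with-ucx=/usr/local/ucx --enable-mca-no-build=btl-uct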