Re: [OMPI users] Hybrid MPI+OpenMP benchmarks (looking for)

2017-10-10 Thread Peter Kjellström
HPGMG-FV is easy to build and to run serial, MPI-only, OpenMP-only, and hybrid MPI+OpenMP. /Peter On Mon, 9 Oct 2017 17:54:02 + "Sasso, John (GE Digital, consultant)" wrote: > I am looking for a decent hybrid MPI+OpenMP benchmark utility which I > can easily build and run with OpenMPI 1.6.5 (at least) …
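A minimal hybrid launch sketch (the mapping flags are Open MPI syntax; the HPGMG-FV problem-size arguments "7 8" are an assumption for illustration, so check its README for the real ones):

```shell
# Sketch: run a benchmark in hybrid mode, 8 MPI ranks x 4 OpenMP threads.
export OMP_NUM_THREADS=4              # threads per MPI rank
ranks=8
echo "total cores requested: $((ranks * OMP_NUM_THREADS))"
# Bind each rank to 4 cores so its OpenMP threads have somewhere to run
# (needs the actual cluster, so shown commented; "7 8" stands in for
#  HPGMG-FV's own problem-size arguments -- an assumption):
# mpirun -np "$ranks" --map-by socket:PE="$OMP_NUM_THREADS" ./hpgmg-fv 7 8
```

The point of `--map-by ...:PE=N` is to hand each rank N processing elements; without it the OpenMP threads of one rank can end up pinned to a single core.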

[OMPI users] alltoallv

2017-10-10 Thread Michael Di Domenico
I'm getting stuck trying to run some fairly large IMB-MPI1 alltoall tests under Open MPI 2.0.2 on RHEL 7.4. I have two different clusters, one running Mellanox FDR10 and one running QLogic QDR. If I issue mpirun -n 1024 ./IMB-MPI1 -npmin 1024 -iter 1 -mem 2.001 alltoallv, the job just stalls after …
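One thing worth ruling out before blaming the fabric is plain buffer exhaustion: an alltoall-style exchange needs a send slot and a receive slot for every peer on every rank, so the footprint grows linearly with the process count. A back-of-the-envelope sketch (the 4 MiB per-peer message size is illustrative, not taken from the post):

```shell
# Per-rank buffer footprint of an alltoall at large scale (illustrative).
nprocs=1024
msg_bytes=$((4 * 1024 * 1024))          # 4 MiB per peer (assumed)
per_rank=$((2 * nprocs * msg_bytes))    # one send + one recv slot per peer
echo "per-rank buffers: $((per_rank / 1024 / 1024 / 1024)) GiB"
# -> 8 GiB per rank at this message size, well past a 2 GB per-process cap.
```

If the numbers come out near the limit requested with -mem, the stall may simply be the benchmark or allocator struggling at the largest message sizes rather than an interconnect problem.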

Re: [OMPI users] OpenMPI 3.0.0, compilation using Intel icc 11.1 on Linux, error when compiling pmix_mmap

2017-10-10 Thread Ted Sussman
Hello all, Thank you for your responses. I worked around the issue by building and installing pmix-1.1.1 separately, to directory /opt/pmix-1.1.1, then using --with-pmix=/opt/pmix-1.1.1 when configuring OpenMPI 3.0.0. Sincerely, Ted Sussman On 2 Oct 2017 at 19:30, Jeff Squyres (jsquyres) w…

[OMPI users] RoCE device performance with large message size

2017-10-10 Thread Brendan Myers
Hello All, I have a RoCE interoperability event starting next week and I was wondering if anyone had any ideas to help me with a new vendor I am trying to help get ready. I am using:
* Open MPI 2.1
* Intel MPI Benchmarks 2018
* OFED 3.18 (requirement from vendor)
* …

Re: [OMPI users] RoCE device performance with large message size

2017-10-10 Thread Jeff Squyres (jsquyres)
Probably want to check to make sure that lossless Ethernet is enabled everywhere (that's a common problem I've seen); otherwise, you end up in timeouts and retransmissions. Check with your vendor on how to do layer-0 diagnostics, etc. Also, if this is a new vendor, they should probably try running …
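A hedged checklist for verifying lossless operation along these lines (the command names come from the lldpad, Mellanox tools, and perftest packages; the interface and device names are placeholders, and whether each tool applies depends on the vendor's stack):

```shell
# Recipe only -- these need the RoCE hardware, so they are shown commented.
# 1. Confirm PFC is negotiated on the RoCE interface (lldpad's lldptool):
# lldptool -t -i eth2 -V PFC
# 2. On Mellanox NICs, dump the QoS/priority-flow-control configuration:
# mlnx_qos -i eth2
# 3. Take MPI out of the picture: measure raw large-message RDMA bandwidth
#    with perftest before re-running the IMB suite:
# ib_write_bw -d mlx5_0 -s 4194304                  # on the server node
# ib_write_bw -d mlx5_0 -s 4194304 <server-ip>      # on the client node
```

If ib_write_bw already degrades at large message sizes, the problem is below MPI (PFC, MTU, or firmware) and the vendor's layer-0 diagnostics are the right next step.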