Re: [OMPI users] MPI test suite

2020-07-24 Thread Zhang, Junchao via users
I know the OSU micro-benchmarks, but they are not an extensive test suite. Thanks --Junchao Zhang On Jul 23, 2020, at 2:00 PM, Marco Atzeri via users <users@lists.open-mpi.org> wrote: On 23.07.2020 20:28, Zhang, Junchao via users wrote: Hello

Re: [OMPI users] MPI test suite

2020-07-23 Thread Zhang, Junchao via users
I know the OSU micro-benchmarks, but they are not an extensive test suite. Thanks --Junchao Zhang > On Jul 23, 2020, at 2:00 PM, Marco Atzeri via users > wrote: > > On 23.07.2020 20:28, Zhang, Junchao via users wrote: >> Hello, >> Does OMPI have a test suite that

[OMPI users] MPI test suite

2020-07-23 Thread Zhang, Junchao via users
Hello, does OMPI have a test suite that lets me validate MPI implementations from other vendors? Thanks --Junchao Zhang

Re: [OMPI users] CUDA mpi question

2019-11-27 Thread Zhang, Junchao via users
hreads[i], NULL)) { fprintf(stderr, "Error joining thread\n"); return 2; } } cudaDeviceReset(); MPI_Finalize(); } From: users <users-boun...@lists.open-mpi.org> On Behalf Of Zhang, Junchao via users Sent: Wednesday, November 27,

Re: [OMPI users] CUDA mpi question

2019-11-27 Thread Zhang, Junchao via users
toh or htod) should I use to insert kernels producing send data and kernels using received data? I imagine MPI uses GPUDirect RDMA to move data directly from GPU to NIC. Why do we need to bother with dtoh or htod streams? George. On Wed, Nov 27, 2019 at 4:02 PM Zhang, Junchao via users <users@

Re: [OMPI users] CUDA mpi question

2019-11-27 Thread Zhang, Junchao via users
I use to insert kernels producing send data and kernels using received data? I imagine MPI uses GPUDirect RDMA to move data directly from GPU to NIC. Why do we need to bother with dtoh or htod streams? George. On Wed, Nov 27, 2019 at 4:02 PM Zhang, Junchao via users <users@lists.open-mpi
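
For context, a hedged sketch (not from the thread) of what a device-to-host (dtoh) staging stream refers to when GPUDirect RDMA or CUDA-aware transfers are not available; the helper name staged_send, the buffer names, and the stream argument are illustrative assumptions, and the function would be called from inside an MPI+CUDA program with h_sbuf pointing at pinned host memory (cudaMallocHost).

#include <mpi.h>
#include <cuda_runtime.h>

/* Staged send path: copy the GPU send buffer to pinned host memory on a
   dedicated dtoh stream, wait for the copy, then hand the host buffer to MPI.
   With GPUDirect RDMA the library can read the device pointer directly and
   this staging step disappears. */
void staged_send(const double *d_sbuf, double *h_sbuf, int n, int dest,
                 cudaStream_t dtoh_stream, MPI_Comm comm)
{
    cudaMemcpyAsync(h_sbuf, d_sbuf, n * sizeof(double),
                    cudaMemcpyDeviceToHost, dtoh_stream);
    cudaStreamSynchronize(dtoh_stream);   /* h_sbuf is now valid on the host */
    MPI_Send(h_sbuf, n, MPI_DOUBLE, dest, 0, comm);
}

Using a separate dtoh stream lets the staging copy overlap with kernels running on other streams; an htod stream plays the same role on the receive side.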

[OMPI users] CUDA mpi question

2019-11-27 Thread Zhang, Junchao via users
Hi, suppose I have this piece of code and I use CUDA-aware MPI: cudaMalloc(&sbuf,sz); Kernel1<<<...,stream>>>(...,sbuf); MPI_Isend(sbuf,...); Kernel2<<<...,stream>>>(); Do I need to call cudaStreamSynchronize(stream) before MPI_Isend() to make sure the data in sbuf is ready
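
A minimal sketch of the pattern in the question, assuming a CUDA-aware Open MPI build; the buffer size N, the receive buffer, the two-rank peer exchange, and the Kernel1 body are illustrative placeholders not taken from the thread. Calling cudaStreamSynchronize() before MPI_Isend() is the conservative choice, since the MPI library does not know what work is still queued on the application's stream.

#include <mpi.h>
#include <cuda_runtime.h>

__global__ void Kernel1(double *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = (double)i;            /* produce the send data */
}

int main(int argc, char **argv)
{
    const int N = 1 << 20;
    int rank;
    double *sbuf, *rbuf;
    cudaStream_t stream;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;                      /* assumes exactly 2 ranks */

    cudaStreamCreate(&stream);
    cudaMalloc((void **)&sbuf, N * sizeof(double));
    cudaMalloc((void **)&rbuf, N * sizeof(double));

    MPI_Irecv(rbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);

    Kernel1<<<(N + 255) / 256, 256, 0, stream>>>(sbuf, N);
    /* MPI is unaware of `stream`, so make sure Kernel1 has finished writing
       sbuf before the (CUDA-aware) library starts reading from it. */
    cudaStreamSynchronize(stream);

    MPI_Isend(sbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
    /* Kernel2 could be launched here, as long as it does not touch sbuf or
       rbuf until the requests below have completed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    cudaFree(sbuf);
    cudaFree(rbuf);
    cudaStreamDestroy(stream);
    MPI_Finalize();
    return 0;
}

The same reasoning applies on the receive side: kernels consuming received data should not be launched until the corresponding MPI request has completed.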

Re: [OMPI users] CUDA supported APIs

2019-08-19 Thread Zhang, Junchao via users
: users <users-boun...@lists.open-mpi.org> on behalf of Zhang, Junchao via users <users@lists.open-mpi.org> Sent: August 15, 2019, 11:52:56 AM To: Open MPI Users <users@lists.open-mpi.org> Cc: Zhang, Junchao <jczh...@mcs.anl.gov> Subject: Re: [OMPI users] CUDA s

Re: [OMPI users] CUDA supported APIs

2019-08-15 Thread Zhang, Junchao via users
Another question: if MPI_Allgatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm) is CUDA-aware, are recvcounts and displs in CPU memory or GPU memory? --Junchao Zhang On Thu, Aug 15, 201
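
A hedged sketch of the convention the question is about, under the assumption that only the message buffers may live in device memory: sendbuf/recvbuf are cudaMalloc'ed, while recvcounts[] and displs[] are ordinary host arrays read by the library on the CPU. The contribution size n and the uniform layout are illustrative choices.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    const int n = 4;                              /* doubles contributed per rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *d_send, *d_recv;                      /* device (GPU) buffers */
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMalloc((void **)&d_recv, (size_t)size * n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    int *recvcounts = (int *)malloc(size * sizeof(int));  /* host metadata */
    int *displs     = (int *)malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        recvcounts[i] = n;
        displs[i]     = i * n;
    }

    MPI_Allgatherv(d_send, n, MPI_DOUBLE,
                   d_recv, recvcounts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    free(recvcounts); free(displs);
    cudaFree(d_send); cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}

Keeping the count and displacement arrays on the host is always safe; the CUDA-aware path only needs to recognize the data pointers.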

[OMPI users] CUDA supported APIs

2019-08-15 Thread Zhang, Junchao via users
Hi, is the list of APIs at https://www.open-mpi.org/faq/?category=runcuda#mpi-apis-cuda up to date? I could not find MPI_Neighbor_xxx and MPI_Reduce_local. Thanks. --Junchao Zhang

Re: [OMPI users] OpenMPI 2.1.1 bug on Ubuntu 18.04.2 LTS

2019-08-02 Thread Zhang, Junchao via users
-dev:i386: >>> p 2.1.1-8 bionic 500 >>> >>> $ sudo apt-get install libopenmpi-dev=2.1.6 >>> Reading package lists... Done >>> Building dependency tree >>> Reading state information... Done >>> E: Version '2.1.6' fo

Re: [OMPI users] OpenMPI 2.1.1 bug on Ubuntu 18.04.2 LTS

2019-08-02 Thread Zhang, Junchao via users
>> $ sudo apt-get install libopenmpi-dev=2.1.6 >> Reading package lists... Done >> Building dependency tree >> Reading state information... Done >> E: Version '2.1.6' for 'libopenmpi-dev' was not found >> >> --Junchao Zhang >> >> >

Re: [OMPI users] OpenMPI 2.1.1 bug on Ubuntu 18.04.2 LTS

2019-08-01 Thread Zhang, Junchao via users
... Done E: Version '2.1.6' for 'libopenmpi-dev' was not found --Junchao Zhang On Thu, Aug 1, 2019 at 1:15 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote: Does the bug exist in Open MPI v2.1.6? > On Jul 31, 2019, at 2:19 PM, Zhang, Junchao via users >

[OMPI users] OpenMPI 2.1.1 bug on Ubuntu 18.04.2 LTS

2019-07-31 Thread Zhang, Junchao via users
Hello, I ran into a bug with OpenMPI 2.1.1 as distributed in the latest Ubuntu 18.04.2 LTS. It happens with self-to-self send/recv using MPI_ANY_SOURCE for message matching. See the attached test code. You can reproduce it even with one process. It is a severe bug. Since this Ubuntu is widely used
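
The attached test code is not in the archive; the following is only a guess at the kind of pattern described (a self send matched by a receive posted with MPI_ANY_SOURCE), runnable with a single process.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, sendval, recvval = -1;
    MPI_Request rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sendval = rank + 42;

    /* Post a receive that matches on MPI_ANY_SOURCE, then send to ourselves.
       The self send should always be matched by this receive. */
    MPI_Irecv(&recvval, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &rreq);
    MPI_Send(&sendval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}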

Re: [OMPI users] How to know how OpenMPI was built?

2019-07-31 Thread Zhang, Junchao via users
Did not find "Configure command line" in OpenMPI 2.1.1, but found it in 3.1.4. But that is OK. Thanks. --Junchao Zhang On Tue, Jul 30, 2019 at 5:10 PM Jeff Squyres (jsquyres) mailto:jsquy...@cisco.com>> wrote: Run ompi_info. > On Jul 30, 2019, at 5:57 PM, Zhang

[OMPI users] How to know how OpenMPI was built?

2019-07-30 Thread Zhang, Junchao via users
Hello, on a system with a pre-installed OpenMPI, how can I find out the configure options used to build it (so that I can build from source myself with the same options)? Thanks --Junchao Zhang

[OMPI users] Compilation errors with SunOS and Sun CC

2019-07-09 Thread Zhang, Junchao via users
Hello, I compiled OpenMPI 4.0.1 & 3.1.4 on "SunOS 5.11 illumos-a22312a201 i86pc i386 i86pc" with "cc: Sun C 5.10 SunOS_i386 2009/06/03". I ran into many errors, including "evutil_rand.c", line 68: void function cannot return value cc: acomp failed for evutil_rand.c gmake[5]: *** [Makefile:772: evutil

Re: [OMPI users] Possible bugs in MPI_Neighbor_alltoallv()

2019-06-28 Thread Zhang, Junchao via users
s (and the alltoallw variant as well) Meanwhile, you can manually download and apply the patch at https://github.com/open-mpi/ompi/pull/6782.patch Cheers, Gilles On 6/28/2019 1:10 PM, Zhang, Junchao via users wrote: > Hello, > When I do MPI_Neighbor_alltoallv or MPI_Ineighbor_alltoa

[OMPI users] Possible bugs in MPI_Neighbor_alltoallv()

2019-06-27 Thread Zhang, Junchao via users
Hello, when I do MPI_Neighbor_alltoallv or MPI_Ineighbor_alltoallv, I find that when either the outdegree or the indegree is zero, OpenMPI returns an error. The suspicious code is in pneighbor_alltoallv.c / pineighbor_alltoallv.c: 101 } else if ((NULL == sendcounts) || (NULL == sdispls) || 102
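
The reporter's actual code is not shown; below is only a guess at the kind of call that could trip the quoted NULL checks: every rank declares zero in- and out-degree in the distributed graph and therefore passes NULL count/displacement arrays. The dummy buffer and placeholder array names are illustrative assumptions.

#include <mpi.h>
#include <stddef.h>

int main(int argc, char **argv)
{
    MPI_Comm graph;
    int none[1] = {0};   /* placeholder; unused because both degrees are 0 */
    char dummy = 0;      /* never dereferenced; nothing is exchanged */

    MPI_Init(&argc, &argv);

    /* Every rank declares zero incoming and zero outgoing neighbors. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   0, none, MPI_UNWEIGHTED,   /* sources      */
                                   0, none, MPI_UNWEIGHTED,   /* destinations */
                                   MPI_INFO_NULL, 0, &graph);

    /* With no neighbors there is nothing to exchange, so NULL arrays are a
       natural choice here -- which, according to the report, is what the
       quoted parameter check rejects on affected versions. */
    MPI_Neighbor_alltoallv(&dummy, NULL, NULL, MPI_BYTE,
                           &dummy, NULL, NULL, MPI_BYTE, graph);

    MPI_Comm_free(&graph);
    MPI_Finalize();
    return 0;
}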