Subject: Re: [OMPI users] MPI test suite
I know the OSU micro-benchmarks, but they are not an extensive test suite.
Thanks
--Junchao Zhang
On Jul 23, 2020, at 2:00 PM, Marco Atzeri via users <users@lists.open-mpi.org> wrote:
> On 23.07.2020 20:28, Zhang, Junchao via users wrote:
Hello,
Does OMPI have a test suite that I can use to validate MPI implementations from
other vendors?
Thanks
--Junchao Zhang
    if (pthread_join(threads[i], NULL)) {
      fprintf(stderr, "Error joining thread\n");
      return 2;
    }
  }
  cudaDeviceReset();
  MPI_Finalize();
}
From: users <users-boun...@lists.open-mpi.org> On Behalf Of Zhang, Junchao via users
Sent: Wednesday, November 27, 2019
Which stream (dtoh or htod) should I use to insert kernels producing send data and kernels using received data? I imagine MPI uses GPUDirect RDMA to move data directly from the GPU to the NIC. Why do we need to bother with dtoh or htod streams?
George.
On Wed, Nov 27, 2019 at 4:02 PM Zhang, Junchao via users <users@lists.open-mpi.org> wrote:
Hi,
Suppose I have this piece of code and I use cuda-aware MPI,
cudaMalloc(&sbuf,sz);
Kernel1<<<...,stream>>>(...,sbuf);
MPI_Isend(sbuf,...);
Kernel2<<<...,stream>>>();
Do I need to call cudaStreamSynchronize(stream) before MPI_Isend() to make
sure the data in sbuf is ready?
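Below is a minimal sketch of the pattern being asked about, assuming a CUDA-aware Open MPI build and exactly two ranks; the kernel, buffer size, tag, and peer rank are illustrative placeholders, not taken from the thread.

/* sketch.cu -- hedged illustration, not from the thread */
#include <mpi.h>
#include <cuda_runtime.h>

__global__ void produce(double *buf, int n)   /* stands in for Kernel1 */
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = (double)i;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, n = 1 << 20;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = rank ^ 1;                      /* assumes exactly 2 ranks */

    double *sbuf, *rbuf;
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMalloc((void **)&sbuf, n * sizeof(double));
    cudaMalloc((void **)&rbuf, n * sizeof(double));

    produce<<<(n + 255) / 256, 256, 0, stream>>>(sbuf, n);
    /* The MPI calls know nothing about CUDA streams, so the kernel filling
     * sbuf is synchronized here before MPI_Isend is allowed to read it. */
    cudaStreamSynchronize(stream);

    MPI_Request reqs[2];
    MPI_Irecv(rbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    cudaFree(sbuf);
    cudaFree(rbuf);
    cudaStreamDestroy(stream);
    MPI_Finalize();
    return 0;
}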
From: users <users-boun...@lists.open-mpi.org> on behalf of Zhang, Junchao via users <users@lists.open-mpi.org>
Sent: August 15, 2019, 11:52:56 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Zhang, Junchao <jczh...@mcs.anl.gov>
Subject: Re: [OMPI users] CUDA s
Another question: if MPI_Allgatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm)
is CUDA-aware, are recvcounts and displs in CPU memory or GPU memory?
--Junchao Zhang
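As a hedged illustration (not an answer quoted from the thread): only the data buffers are candidates for device residency in a CUDA-aware build, so in the sketch below recvcounts and displs are ordinary host arrays while sbuf/rbuf live on the GPU; the counts and datatype are made up.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendcount = rank + 1;                             /* rank-dependent contribution */
    int *recvcounts = (int *)malloc(size * sizeof(int));  /* host memory */
    int *displs     = (int *)malloc(size * sizeof(int));  /* host memory */
    int total = 0;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = i + 1;
        displs[i] = total;
        total += recvcounts[i];
    }

    double *sbuf, *rbuf;                                  /* device memory */
    cudaMalloc((void **)&sbuf, sendcount * sizeof(double));
    cudaMalloc((void **)&rbuf, total * sizeof(double));
    cudaMemset(sbuf, 0, sendcount * sizeof(double));

    MPI_Allgatherv(sbuf, sendcount, MPI_DOUBLE,
                   rbuf, recvcounts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    cudaFree(sbuf);
    cudaFree(rbuf);
    free(recvcounts);
    free(displs);
    MPI_Finalize();
    return 0;
}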
On Thu, Aug 15, 2019, Zhang, Junchao via users wrote:
Hi,
Is the list of APIs at https://www.open-mpi.org/faq/?category=runcuda#mpi-apis-cuda
up to date? I could not find MPI_Neighbor_xxx and MPI_Reduce_local.
Thanks.
--Junchao Zhang
libopenmpi-dev:i386:
  p 2.1.1-8 bionic 500

$ sudo apt-get install libopenmpi-dev=2.1.6
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Version '2.1.6' for 'libopenmpi-dev' was not found

--Junchao Zhang
On Thu, Aug 1, 2019 at 1:15 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
Does the bug exist in Open MPI v2.1.6?
> On Jul 31, 2019, at 2:19 PM, Zhang, Junchao via users wrote:
Hello,
I ran into a bug in the OpenMPI 2.1.1 distributed with the latest Ubuntu 18.04.2 LTS.
It happens with self-to-self send/recv using MPI_ANY_SOURCE for message
matching. See the attached test code; you can reproduce it even with one
process.
It is a severe bug, and this Ubuntu release is widely used.
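The attachment is not preserved in this digest. The following is only a hedged guess at a minimal reproducer for the described pattern (a rank sending to itself and receiving with MPI_ANY_SOURCE), runnable with a single process:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, sendval = 42, recvval = -1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request req;
    /* Send to self... */
    MPI_Isend(&sendval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
    /* ...and match the message with MPI_ANY_SOURCE instead of an explicit source. */
    MPI_Recv(&recvval, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}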
I did not find "Configure command line" in the ompi_info output of OpenMPI 2.1.1, but found it in 3.1.4.
That is OK.
Thanks.
--Junchao Zhang
On Tue, Jul 30, 2019 at 5:10 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
Run ompi_info.
> On Jul 30, 2019, at 5:57 PM, Zhang, Junchao via users wrote:
Hello,
On a system with a pre-installed OpenMPI, how can I find the configure options
that were used to build it (so that I can build from source myself with the same
options)?
Thanks
--Junchao Zhang
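As a hedged illustration of the ompi_info suggestion above, assuming a build recent enough to report the configure line (per the follow-up, 3.1.4 does and 2.1.1 does not):

$ ompi_info --all | grep -i "configure command line"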
Hello,
I compiled OpenMPI 4.0.1 & 3.1.4 on "SunOS 5.11 illumos-a22312a201 i86pc i386
i86pc" with "cc: Sun C 5.10 SunOS_i386 2009/06/03". I met many errors, including
"evutil_rand.c", line 68: void function cannot return value
cc: acomp failed for evutil_rand.c
gmake[5]: *** [Makefile:772: evutil
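For context, that diagnostic typically points at a pattern like the hedged illustration below (not the actual libevent source): a return statement with an expression inside a void function, which gcc tolerates when the expression itself has void type but which Sun cc rejects.

void reseed_rng(void);

void wrapper(void)
{
    /* 'return <void expression>;' is valid C++ and accepted by gcc in C,
     * but a stricter C compiler such as Sun cc diagnoses it:
     * "void function cannot return value". */
    return reseed_rng();
}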
(and the alltoallw variant as well)
Meanwhile, you can manually download and apply the patch at
https://github.com/open-mpi/ompi/pull/6782.patch
Cheers,
Gilles
On 6/28/2019 1:10 PM, Zhang, Junchao via users wrote:
Hello,
When I do MPI_Neighbor_alltoallv or MPI_Ineighbor_alltoallv, I find that when
either outdegree or indegree is zero, OpenMPI returns an error. The
suspicious code is in pneighbor_alltoallv.c / pineighbor_alltoallv.c:
101 } else if ((NULL == sendcounts) || (NULL == sdispls) ||
102
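A hedged sketch (not the poster's code) of the zero-degree scenario described above: a distributed-graph communicator in which the calling rank has no neighbors, so the count and displacement arrays are legitimately empty/NULL.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm graph;
    /* indegree = outdegree = 0: this rank has no neighbors, so the
     * sources/destinations arrays are never read. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   0, NULL, MPI_UNWEIGHTED,
                                   0, NULL, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &graph);
    MPI_Comm_set_errhandler(graph, MPI_ERRORS_RETURN);

    /* With zero neighbors the count/displacement arrays are empty; passing
     * NULL here is what the affected releases rejected. */
    double sbuf[1], rbuf[1];
    int err = MPI_Neighbor_alltoallv(sbuf, NULL, NULL, MPI_DOUBLE,
                                     rbuf, NULL, NULL, MPI_DOUBLE, graph);
    printf("MPI_Neighbor_alltoallv returned %d\n", err);

    MPI_Comm_free(&graph);
    MPI_Finalize();
    return 0;
}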