Re: [OMPI users] Quality and details of implementation for Neighborhood collective operations

2022-06-08 Thread Michael Thomadakis via users
...architecture-aware. The only 2 components that provide support for neighborhood collectives are basic (for the blocking version) and libnbc (for the non-blocking versions). George. On Wed, Jun 8, 2022 at 1:27 PM Michael Thomadakis via users <users@lists...

[OMPI users] Quality and details of implementation for Neighborhood collective operations

2022-06-08 Thread Michael Thomadakis via users
Hello OpenMPI, I was wondering if the MPI_Neighbor_x calls have received any special design and optimizations in OpenMPI 4.1.x+ for these patterns of communication. For instance, these could benefit from proximity awareness and intra- vs. inter-node communications. However, even single-node communication...
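
A minimal sketch of the pattern under discussion (illustrative only, not taken from the thread): a 1-D periodic Cartesian communicator in which each rank exchanges one integer with its two neighbors via MPI_Neighbor_allgather. The topology shape and payload are assumptions; reorder = 1 is what would allow a topology-aware implementation to renumber ranks for proximity.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int size, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* 1-D periodic ring: every rank has exactly two neighbors.        */
        /* reorder = 1 lets the implementation renumber ranks so that      */
        /* neighbors land close to each other (node/socket proximity).     */
        int dims[1]    = { size };
        int periods[1] = { 1 };
        MPI_Comm ring;
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 1, &ring);
        MPI_Comm_rank(ring, &rank);

        int sendval = rank;
        int recvvals[2];                    /* one entry per neighbor */
        MPI_Neighbor_allgather(&sendval, 1, MPI_INT,
                               recvvals, 1, MPI_INT, ring);

        printf("rank %d received %d and %d from its neighbors\n",
               rank, recvvals[0], recvvals[1]);

        MPI_Comm_free(&ring);
        MPI_Finalize();
        return 0;
    }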

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-20 Thread Michael Thomadakis
This discussion started getting into an interesting question: ABI standardization for portability by language. It makes sense to have ABI standardization for portability of objects across environments. At the same time, it does mean that everyone follows the exact same recipe for low-level implementation...

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Michael Thomadakis
...Cheers, Gilles. On Tue, Sep 19, 2017 at 5:57 AM, Michael Thomadakis wrote: > Thanks for the note. How about OMP runtimes though? > Michael. On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users <users@lists.open-mpi.org...

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Michael Thomadakis
...you use Fortran bindings (use mpi and use mpi_f08), and you'd better keep yourself out of trouble with C/C++ and mpif.h. Cheers, Gilles. On Tue, Sep 19, 2017 at 5:57 AM, Michael Thomadakis wrote: > Thanks for the note. How about OMP runtimes though?...

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Michael Thomadakis
...Windows. Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone. ---- Original message ---- From: Michael Thomadakis. Date: 9/18/17 3:51 PM (GMT-05:00). To: users@lists.open-mpi.org. Subject: [OMPI users] Question concerning com...

[OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Michael Thomadakis
Dear OpenMPI list, As far as I know, when we build OpenMPI itself with GNU or Intel compilers, we expect that the subsequent MPI application binary will use the same compiler set and run-times. Would it be possible to build OpenMPI with the GNU toolchain but then subsequently instruct the OpenMPI...

Re: [OMPI users] Strange behavior of OMPI 1.8.3

2014-10-07 Thread Michael Thomadakis
...wrote: > Hi Michael, If you do not include --enable-ipv6 in the config line, do you still observe the problem? Is it possible that one or more interfaces on nodes H1 and H2 do not have ipv6 enabled? Howard. 2014-10-06 16:51 GMT-06:00 Michael...

[OMPI users] Strange behavior of OMPI 1.8.3

2014-10-06 Thread Michael Thomadakis
Hello, I've configured OpenMPI 1.8.3 with the following command line: $ AXFLAGS="-xSSE4.2 -axAVX,CORE-AVX-I,CORE-AVX2" $ myFLAGS="-O2 ${AXFLAGS}" ; $ ./configure --prefix=${proot} --with-lsf --with-cma --enable-peruse --enable-branch-probabilities --enable-mpi-fortran=all...

[OMPI users] Planned support for Intel Phis

2014-02-02 Thread Michael Thomadakis
Hello OpenMPI, I was wondering what support is being implemented for the Intel Phi platforms. That is, would we be able to run MPI code in "symmetric" fashion, where some ranks run on the cores of the multicore hosts and some on the cores of the Phis, in a multinode cluster environment.

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-09 Thread Michael Thomadakis
...have not built Open MPI for Xeon Phi for your interconnect, but it seems to me that it should work. -Tom. Cheers...

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-08 Thread Michael Thomadakis
...But I think if you have a compiler for Xeon Phi (Intel Compiler or GCC) and an interconnect for it, you should be able to build an Open MPI that works on Xeon Phi. Cheers, Tom Elken. Thanks... Michael...

Re: [OMPI users] Question on handling of memory for communications

2013-07-08 Thread Michael Thomadakis
| Remember that the point of IB and other operating-system bypass devices is that the driver is not involved in the fast path of sending / receiving. One of the side-effects of that design point is that userspace does all the allocation of send / receive buffers. That's a good point. It was not...

Re: [OMPI users] Question on handling of memory for communications

2013-07-08 Thread Michael Thomadakis
On old AMD platforms (and modern Intels with big GPUs), issues are not that uncommon (we've seen up to 40% DMA bandwidth difference there). Brice. On 08/07/2013 19:44, Michael Thomadakis wrote: > Hi Brice, thanks for testing this out...

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-08 Thread Michael Thomadakis
...Rolf will have to answer the question on level of support. The CUDA code is not in the 1.6 series as it was developed after that series went "stable". It is in the 1.7 series, although the level of support will likely be incrementally increasing as that "feature"...

Re: [OMPI users] Question on handling of memory for communications

2013-07-08 Thread Michael Thomadakis
...overload the CPU even more because of the additional copies. Brice. On 08/07/2013 18:27, Michael Thomadakis wrote: > People have mentioned that they experience unexpected slowdowns in PCIe gen3 I/O when the pages map to a socket different from the...

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-08 Thread Michael Thomadakis
...the 1.6 series as it was developed after that series went "stable". It is in the 1.7 series, although the level of support will likely be incrementally increasing as that "feature" series continues to evolve. On Jul 6, 2013, at 12:06 PM, Michael...

Re: [OMPI users] Question on handling of memory for communications

2013-07-08 Thread Michael Thomadakis
...Jul 8, 2013, at 11:35 AM, Michael Thomadakis wrote: > The issue is that when you read or write PCIe gen3 data to non-local NUMA memory, Sandy Bridge will use the inter-socket QPIs to get this data across to the other socket. I think there is considerable limitation in...

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-08 Thread Michael Thomadakis
...users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2. There was discussion of this on a prior email thread on the OMPI devel mailing list: http://www.open-mpi.org/community/lists/devel/2013/05/12354.php ...

Re: [OMPI users] Question on handling of memory for communications

2013-07-08 Thread Michael Thomadakis
...does anything special with memory mapping to work around this, and whether with Ivy Bridge (or Haswell) the situation has improved. Thanks, Mike. On Mon, Jul 8, 2013 at 9:57 AM, Jeff Squyres (jsquyres) wrote: > On Jul 6, 2013, at 4:59 PM, Michael Thomadakis wrote: > > When your stack runs...

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Michael Thomadakis
...of support. The CUDA code is not in the 1.6 series as it was developed after that series went "stable". It is in the 1.7 series, although the level of support will likely be incrementally increasing as that "feature" series continues to evolve. On...

[OMPI users] Question on handling of memory for communications

2013-07-06 Thread Michael Thomadakis
Hello OpenMPI, when your stack runs on Sandy Bridge nodes attached to HCAs over PCIe *gen 3*, do you pay any special attention to the memory buffers according to which socket/memory controller their physical memory belongs to? For instance, if the HCA is attached to the PCIe gen3 lanes of socket 1, do you...
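
An aside for anyone who wants to see where their send buffers actually live: the sketch below illustrates how an application (not Open MPI itself) can query the NUMA node backing each page of a buffer, using Linux move_pages() in query mode (nodes == NULL). The buffer size is an arbitrary assumption; link with -lnuma.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <numaif.h>                     /* move_pages(); link with -lnuma */

    int main(void)
    {
        size_t page   = (size_t)sysconf(_SC_PAGESIZE);
        size_t npages = 4;                  /* small illustrative buffer */
        char *buf = aligned_alloc(page, npages * page);
        memset(buf, 0, npages * page);      /* touch pages so they are really allocated */

        void *pages[4];
        int   status[4];
        for (size_t i = 0; i < npages; i++)
            pages[i] = buf + i * page;

        /* With nodes == NULL, move_pages() moves nothing and instead reports */
        /* in status[i] the NUMA node that currently backs page i.            */
        if (move_pages(0, npages, pages, NULL, status, 0) != 0) {
            perror("move_pages");
            return 1;
        }
        for (size_t i = 0; i < npages; i++)
            printf("page %zu is on NUMA node %d\n", i, status[i]);

        free(buf);
        return 0;
    }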

[OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2

2013-07-06 Thread Michael Thomadakis
Hello OpenMPI, I am wondering what level of support there is for CUDA and GPUDirect in OpenMPI 1.6.5 and 1.7.2. I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it seems that with the v1.6.5 configure it was ignored. Can you identify GPU memory and send messages from it directly...
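
For concreteness, this is the usage pattern the question is about, sketched under the assumption of a CUDA-aware Open MPI build (e.g. a 1.7.x build configured with --with-cuda): device pointers are handed directly to MPI point-to-point calls. Buffer size, tag, and ranks are illustrative only.

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        double *dbuf = NULL;
        cudaMalloc((void **)&dbuf, n * sizeof(double));   /* GPU memory, no host staging */

        if (rank == 0) {
            cudaMemset(dbuf, 0, n * sizeof(double));
            /* With a CUDA-aware build the device pointer goes straight to MPI; */
            /* otherwise it would first have to be cudaMemcpy'd to host memory. */
            MPI_Send(dbuf, n, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(dbuf, n, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }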

Re: [OMPI users] How to select specific out of multiple interfaces for communication and support for heterogeneous fabrics

2013-07-05 Thread Michael Thomadakis
Great ... thanks. We will try it out as soon as the common backbone IB is in place. Cheers, Michael. On Fri, Jul 5, 2013 at 6:10 PM, Ralph Castain wrote: > As long as the IB interfaces can communicate to each other, you should be fine. > On Jul 5, 2013, at 3:26 PM, Michael...

Re: [OMPI users] How to select specific out of multiple interfaces for communication and support for heterogeneous fabrics

2013-07-05 Thread Michael Thomadakis
...processes cannot communicate. HTH, Ralph. On Jul 5, 2013, at 2:34 PM, Michael Thomadakis wrote: > Hello OpenMPI, we are seriously considering deploying OpenMPI 1.6.5 for production (and 1.7.2 for testing) on HPC clusters which consist of nodes with *different types of networking interfaces*...

[OMPI users] How to select specific out of multiple interfaces for communication and support for heterogeneous fabrics

2013-07-05 Thread Michael Thomadakis
Hello OpenMPI, we are seriously considering deploying OpenMPI 1.6.5 for production (and 1.7.2 for testing) on HPC clusters which consist of nodes with *different types of networking interfaces*. 1) Interface selection: we are using OpenMPI 1.6.5 and were wondering how one would go about selecting...
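
Not an answer from the developers, but background on the mechanism the question is about: transport and interface selection in Open MPI is driven by MCA parameters, normally given on the mpirun command line (--mca <name> <value>) or as OMPI_MCA_* environment variables. The sketch below only illustrates that mechanism by seeding two such parameters from inside the program before MPI_Init; the BTL list and the interface name "ib0" are placeholders, not recommendations.

    #include <stdlib.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Two independent examples of MCA parameters that control selection.  */
        /* Open MPI picks up OMPI_MCA_* environment variables at MPI_Init time; */
        /* overwrite = 0 keeps any value already supplied by mpirun or the      */
        /* launch environment.                                                  */
        setenv("OMPI_MCA_btl", "openib,sm,self", 0);       /* restrict which transports (BTLs) are used */
        setenv("OMPI_MCA_btl_tcp_if_include", "ib0", 0);   /* if the TCP BTL is used, pin it to one interface */

        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("interface/transport selection parameters seeded before MPI_Init\n");

        MPI_Finalize();
        return 0;
    }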