[OMPI users] question about the Open-MPI ABI

2023-02-01 Thread Jeff Hammond via users
0012cf80 B ompi_mpi_info_null 00116038 D ompi_mpi_info_null_addr 00133720 B ompi_mpi_op_null 001163c0 D ompi_mpi_op_null_addr 00135740 B ompi_mpi_win_null 00117c80 D ompi_mpi_win_null_addr 0012d080 B ompi_request_null 00116040 D ompi_request_null_addr

Re: [OMPI users] Disabling barrier in MPI_Finalize

2022-09-09 Thread Jeff Hammond via users

Re: [OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-05 Thread Jeff Hammond via users
--with-memalign=64 and OpenMPI configure options: '--prefix=/scinet/niagara/software/2022a/opt/gcc-11.2.0/openmpi/4.1.2+ucx-1.11.2' '--enable-mpi-cxx' '--enable-mpi1-compatibility' '--with-hwloc=internal' '--with-knem=/opt/knem-1.1.3.90mlnx1' '--with-libevent=internal' '--with-platform=contrib/platform/mellanox/optimized' '--with-pmix=internal' '--with-slurm=/opt/slurm' '--with-ucx=/scinet/niagara/software/2022a/opt/gcc-11.2.0/ucx/1.11.2' I am then wondering: 1) Is the UCX library considered "stable" for production use with very large problems? 2) Is there a way to "bypass" UCX at runtime? 3) Any idea for debugging this? Of course, I do not yet have a "minimum reproducer" that triggers the bug, since it happens only on "large" problems, but I think I could export the data for a 512-process reproducer with the PARMetis call only... Thanks for helping, Eric

[OMPI users] please fix your attributes implementation in v5.0.0rc3+, which is broken by GCC 11

2022-04-30 Thread Jeff Hammond via users
https://jenkins.open-mpi.org/jenkins/job/open-mpi.build.compilers/8370/ indicates you are not testing GCC 11. Please test this compiler. https://github.com/open-mpi/ompi/pull/10343 has details. Jeff

Re: [OMPI users] cross-compilation documentation seems to be missing

2021-09-07 Thread Jeff Hammond via users
RISC-V node. It will generate a config.cache file. Then you can grep ^ompi_cv_fortran_ config.cache to generate the file you can pass to --with-cross when cross compiling on your x86 system. Cheers, Gilles

[OMPI users] cross-compilation documentation seems to be missing

2021-09-07 Thread Jeff Hammond via users

[OMPI users] how to suppress "libibverbs: Warning: couldn't load driver ..." messages?

2021-06-23 Thread Jeff Hammond via users
I am running on a single node and do not need any network support. I am using the NVIDIA build of Open-MPI 3.1.5. How do I tell it to never use anything related to IB? It seems that ^openib is not enough. Thanks, Jeff $ OMP_NUM_THREADS=1 /proj/nv/Linux_aarch64/21.5/comm_libs/openmpi/openmpi-3

Re: [OMPI users] Books/resources to learn (open)MPI from

2020-08-20 Thread Jeff Hammond via users
Assuming you want to learn about MPI (and not the Open MPI internals), the books by Bill Gropp et al. are the reference: https://www.mcs.anl.gov/research/projects/mpi/usingmpi/ (Using MPI 3rd edition is affordable on Amazon). Thanks! Yes, this is what I was after. However, if I wanted to learn about OpenMPI internals, what would be the go-to resource?

Re: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture

2020-08-20 Thread Jeff Hammond via users
rently than traditional MPI codes in a NUMA context and it is worth mentioning it explicitly if you are using NWChem, GAMESS, MOLPRO, or other code that uses GA or DDI. If you are running VASP, CP2K, or other code that uses MPI in a more conventional manner, don't worry about it. Jeff

Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll

2020-01-25 Thread Jeff Hammond via users
fully built OpenMPI > > 4.0.2 with GCC, Intel and AOCC compilers, all using the same options. > > > > hcoll is provided by MLNX_OFED 4.7.3 and configure is run with > > > > --with-hcoll=/opt/mellanox/hcoll > > > > > > -- > Ake Sandgren, HPC2N,

Re: [OMPI users] problem with cancelling Send-Request

2019-10-02 Thread Jeff Hammond via users
s not work, so I’m wondering whether this is a current limitation or are we not supposed to end up in this specific …_request_cancel implementation? Thank you in advance! Christian

Re: [OMPI users] problem with cancelling Send-Request

2019-10-02 Thread Jeff Hammond via users
requests by now */ > *return* OMPI_SUCCESS; > } > > The man page for MPI_Cancel does not mention that cancelling Send requests > does not work, so I’m wondering, > whether this is a current limitation or are we not supposed to end up in > this specific …_request_c

Re: [OMPI users] silent failure for large allgather

2019-08-11 Thread Jeff Hammond via users
that's not really > cool. > It sounds like Open-MPI doesn't properly support the maximum transfer size of PSM2. One way to work around this is to wrap your MPI collective calls and do <4G chunking yourself. Jeff > Could the error reporting in this case be
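
A minimal sketch of that chunking idea (illustrative, not code from the thread): split a large byte-count allgather into pieces far below the 2 GiB / 4 GiB limits, using a resized receive datatype so each rank's piece lands at the right offset in the final buffer. The function name and the 1 GiB chunk size are assumptions for the example.

    #include <mpi.h>
    #include <stddef.h>

    /* Gather total_bytes from every rank without any single MPI call
     * moving more than ~1 GiB.  Assumes total_bytes is identical on all
     * ranks and recvbuf holds nranks * total_bytes bytes. */
    int allgather_chunked(const char *sendbuf, char *recvbuf,
                          size_t total_bytes, MPI_Comm comm)
    {
        const size_t chunk = (size_t)1 << 30;   /* stay far below INT_MAX */
        for (size_t off = 0; off < total_bytes; off += chunk) {
            size_t n = total_bytes - off < chunk ? total_bytes - off : chunk;

            /* n contiguous bytes, but with extent total_bytes so that
             * rank r's piece is placed at recvbuf + off + r*total_bytes */
            MPI_Datatype contig, placed;
            MPI_Type_contiguous((int)n, MPI_BYTE, &contig);
            MPI_Type_create_resized(contig, 0, (MPI_Aint)total_bytes, &placed);
            MPI_Type_commit(&placed);

            int rc = MPI_Allgather(sendbuf + off, (int)n, MPI_BYTE,
                                   recvbuf + off, 1, placed, comm);
            MPI_Type_free(&placed);
            MPI_Type_free(&contig);
            if (rc != MPI_SUCCESS) return rc;
        }
        return MPI_SUCCESS;
    }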

Re: [OMPI users] When is it save to free the buffer after MPI_Isend?

2019-08-11 Thread Jeff Hammond via users
"req" should have been turned into MPI_REQUEST_NULL if flag==true. -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI users] Issues compiling HPL with OMPIv4.0.0

2019-04-03 Thread Jeff Hammond
he nature of the problem or why it works with the old OMPI version and not with the new. Any help or pointer would be appreciated. Thanks. AFernandez

Re: [OMPI users] Cannot catch std::bac_alloc?

2019-04-03 Thread Jeff Hammond
tes >> > memory, I'm able to catch bad_alloc as I expected. It seems that I am >> > misunderstanding something. Could you please help? Thanks a lot. >> > >> > >> > >> > Best regards, >> > Zhen >> > >> >

Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-17 Thread Jeff Hammond
s. Would you > recommend that I report this issue on the developer's mailing list or open > a GitHub issue? > > Best wishes, > Thomas Pak > > On Mar 16 2019, at 7:40 pm, Jeff Hammond wrote: > > Is there perhaps a different way to solve your problem that doesn’t spaw

Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-16 Thread Jeff Hammond
; } // Finalize MPI_Finalize(); } """ Thanks in advance and best wishes, Thomas Pak

Re: [OMPI users] Best way to send on mpi c, architecture dependent data type

2019-03-13 Thread Jeff Hammond
r is 4 bytes. To solve it, instead of using long you need to use int, which has the same size on both architectures. Another option could be to serialize the long. So my question is: is there any way to pass data that doesn't depend on the architecture?

[OMPI users] please fix RMA before you ship 4.0.0

2019-01-23 Thread Jeff Hammond
appear with shared-memory, which is a pretty important conduit. Jeff

Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Jeff Hammond

Re: [OMPI users] Querying/limiting OpenMPI memory allocations

2018-12-20 Thread Jeff Hammond

Re: [OMPI users] filesystem-dependent failure building Fortran interfaces

2018-12-05 Thread Jeff Hammond
cpp, though I don't know how robust it is these days in GNU Fortran.]

Re: [OMPI users] [version 2.1.5] invalid memory reference

2018-10-11 Thread Jeff Hammond

Re: [OMPI users] RDMA over Ethernet in Open MPI - RoCE on AWS?

2018-09-11 Thread Jeff Hammond
net NICs can handle RDMA requests directly? Or am I misunderstanding RoCE / how Open MPI's RoCE transport works? Ben

Re: [OMPI users] know which CPU has the maximum value

2018-08-11 Thread Jeff Hammond
heap 2 cents from a user. > Gus Correa > > > On 08/10/2018 01:52 PM, Jeff Hammond wrote: > >> This thread is a perfect illustration of why MPI Forum participants >> should not flippantly discuss feature deprecation in discussion with >> users. Users who are not famil

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Jeff Hammond

Re: [OMPI users] OSHMEM: shmem_ptr always returns NULL

2018-06-01 Thread Jeff Hammond
11-2:105590] base/spml_base_select.c:194 - mca_spml_base_select() >>> select: component ucx selected >>> [c11-2:105590] spml_ucx.c:82 - mca_spml_ucx_enable() *** ucx ENABLED >>> [c11-1:36522] spml_ucx.c:305 - mca_spml_

Re: [OMPI users] MPI Windows: performance of local memory access

2018-05-23 Thread Jeff Hammond

Re: [OMPI users] User-built OpenMPI 3.0.1 segfaults when storing into an atomic 128-bit variable

2018-05-04 Thread Jeff Hammond
ill be 128b-aligned if the base is. Noncontiguous is actually worse in that the implementation could allocate the segment for each process with only 64b alignment. Jeff > -Nathan > > On May 3, 2018, at 9:43 PM, Jeff Hammond wrote: > > Given that this seems to break user exper

Re: [OMPI users] User-built OpenMPI 3.0.1 segfaults when storing into an atomic 128-bit variable

2018-05-03 Thread Jeff Hammond
(poll2.h:46) > >> ==22815==by 0x583B4A7: poll_dispatch (poll.c:165) > >> ==22815==by 0x5831BDE: opal_libevent2022_event_base_loop > (event.c:1630) > >> ==22815==by 0x57F210D: progress_engine (in > /usr/local/lib/libopen-pal.so.40.1.0) > >> ==22815==

Re: [OMPI users] libmpi_cxx.so doesn't exist in lib path when installing 3.0.1

2018-04-08 Thread Jeff Hammond

Re: [OMPI users] libmpi_cxx

2018-03-29 Thread Jeff Hammond
bindings. Hence the deprecation in 2009 and the removal in 2012. -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI users] Concerning the performance of the one-sided communications

2018-02-16 Thread Jeff Hammond

Re: [OMPI users] Using OpenSHMEM with Shared Memory

2018-02-06 Thread Jeff Hammond
elp / error messages. I tried fiddling with the MCA command-line settings, but didn't have any luck. Is it possible to do this? Can anyone point me to some documentation? Thanks, Ben

Re: [OMPI users] Oversubscribing

2018-01-24 Thread Jeff Hammond

Re: [OMPI users] Custom datatype with variable length array

2018-01-16 Thread Jeff Hammond
at you are doing. It's possible that a two-phase implementation wins when the specific usage allows you to use a more efficient collective algorithm. > I'm open to any good and elegant suggestions! I won't guarantee that any of my suggestions satisfied either property :-) Best, Jeff

Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Jeff Hammond
get an AMD chip in your computer. > > On Thursday, January 4, 2018, r...@open-mpi.org wrote: > >> Yes, please - that was totally inappropriate for this mailing list. >> Ralph >> >> >> On Jan 4, 2018, at 4:33 PM, Jeff Hammond wrote: >> >> Can we

Re: [OMPI users] latest Intel CPU bug

2018-01-04 Thread Jeff Hammond
at shouldn’t happen >> all that frequently, and so I would naively expect the impact to be at the >> lower end of the reported scale for those environments. TCP-based systems, >> though, might be on the other end. >> >> >> >> Probably something we’ll only reall

Re: [OMPI users] Possible memory leak in opal_free_list_grow_st

2017-12-04 Thread Jeff Hammond
hout replacing a lot of Boost code by a hand-coded equivalent. Any suggestions welcome. Thanks, Philip

Re: [OMPI users] How can I send an unsigned long long recvcounts and displs using MPI_Allgatherv()

2017-11-28 Thread Jeff Hammond
algorithm is > quite big and I am afraid that this will create further delays. Actually, > this is the reason I am trying to replace Bcast() and try other things. > > I am using Open MPI 2.1.2 and testing on a single computer with 7 MPI > processes. The ompi_info is the attac

Re: [OMPI users] How can I measure synchronization time of MPI_Bcast()

2017-10-20 Thread Jeff Hammond

Re: [OMPI users] Hybrid MPI+OpenMP benchmarks (looking for)

2017-10-09 Thread Jeff Hammond

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-22 Thread Jeff Hammond
here may be unnecessarily restrictive at times. > > > > On Wed, Sep 20, 2017 at 4:45 PM, Jeff Hammond wrote: > > > > > > On Wed, Sep 20, 2017 at 5:55 AM, Dave Love wrote: > > Jeff Hammond writes: > > > > > Please separate C and C++ here. C has a s

Re: [OMPI users] Multi-threaded MPI communication

2017-09-21 Thread Jeff Hammond
tantially decrease the overall execution time? Hoping to get your help soon. Sorry for the long question. Regards, Saiyedul Islam PS: Specifications of the cluster: GCC 5.10, OpenMP 2.0.1, CentOS 6.5 (as part of Rockscluster).

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-20 Thread Jeff Hammond
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love wrote: > Jeff Hammond writes: > > > Please separate C and C++ here. C has a standard ABI. C++ doesn't. > > > > Jeff > > [For some value of "standard".] I've said the same about C++, but the > curr

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-20 Thread Jeff Hammond
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond > wrote: > > > Fortran is a legit problem, although if somebody builds a standalone > Fortran > > 2015 implementation of

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond
In general, what is a clean way to build OpenMPI with a GNU compiler set but then instruct the wrappers to use the Intel compiler set? Thanks! Michael

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond

Re: [OMPI users] Issues with Large Window Allocations

2017-09-08 Thread Jeff Hammond
higher communication latencies as well). > >> > >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix > >> component that uses shmem_open to create a POSIX shared memory object > >> instead of a file on disk, which is then mmap'ed. Unfortuna

Re: [OMPI users] Issues with Large Window Allocations

2017-09-04 Thread Jeff Hammond
uld it be possible to use anonymous shared memory > mappings to avoid the backing file for large allocations (maybe above a > certain threshold) on systems that support MAP_ANONYMOUS and distribute the > result of the mmap call among the processes on the node? > > Thanks, > Joseph &g

Re: [OMPI users] Issues with Large Window Allocations

2017-08-29 Thread Jeff Hammond
gt; supports allocations up to 60GB, so my second point reported below may be > invalid. Number 4 seems still seems curious to me, though. > > Best > Joseph > > On 08/25/2017 09:17 PM, Jeff Hammond wrote: > >> There's no reason to do anything special for shared memory

Re: [OMPI users] Issues with Large Window Allocations

2017-08-25 Thread Jeff Hammond

Re: [OMPI users] How to get a verbose compilation?

2017-08-05 Thread Jeff Hammond

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Jeff Hammond

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-14 Thread Jeff Hammond
nMPI's mpirun in the following >>> >>> way >>> >>> mpirun -np 4 cfd_software >>> >>> and I get double free or corruption every single time. >>> >>> I have two questions - >>> >>> 1) I am unable to captu

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-13 Thread Jeff Hammond
Has this error, i.e. double free or corruption, been reported by others? Is there a bug fix available? Regards, Ashwin.

Re: [OMPI users] "undefined reference to `MPI_Comm_create_group'" error message when using Open MPI 1.6.2

2017-06-08 Thread Jeff Hammond
Is there any way that I can compile and link the code using Open MPI 1.6.2? Thanks, Arham Amouei

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Jeff Hammond
326E4DD5D Unknown Unknown Unknown a.out 000000403769 Unknown Unknown Unknown -- Siva Srinivas Kolukula, PhD (INCOIS)

Re: [OMPI users] MPI I/O gives undefined behavior if the amount of bytes described by a filetype reaches 2^32

2017-05-02 Thread Jeff Hammond
> Indeed, this is a problem. There is an effort to fix the API in MPI-4 (see https://github.com/jeffhammond/bigmpi-paper) but as you know, there are implementation defects that break correct MPI-3 programs that use datatypes to workaround the limits of C int. We were able to find a bunch of proble

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-27 Thread Jeff Hammond
prior emails in this thread: "As always, experiment to find the best for your hardware and jobs." ;-) -- Jeff Squyres

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres) wrote: > On Mar 15, 2017, at 8:25 PM, Jeff Hammond wrote: > > > > I couldn't find the docs on mpool_hints, but shouldn't there be a way to > disable registration via MPI_Info rather than patching the source

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
d via IB is not a solution for > >> multi-node jobs, huh). > > > > But it works OK with libfabric (ofi mtl). Is there a problem with > > libfabric? > > > > Has anyone reported this issue to the cp2k people? I know it's not > > their problem, but I

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-03-07 Thread Jeff Hammond

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-21 Thread Jeff Hammond

Re: [OMPI users] Rounding errors and MPI

2017-01-18 Thread Jeff Hammond

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
a60f50479 >> >> The problem seems to have been one with the Xcode configuration: >> >> "It turns out my Xcode was messed up as I was missing /usr/include/. >> After rerunning xcode-select --install it works now." >> >> On my OS X 10.11.6,

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-28 Thread Jeff Hammond
ents of the typical effects of spinning and ameliorations on some sort of "representative" system? None that are published, unfortunately. Best, Jeff

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
well (http://www.pgroup.com/userforum/viewtopic.php?t=5413&start=0) since I'm not sure. But, no matter what, does anyone have thoughts on how to solve this? Thanks, Matt -- Matt Thompson

Re: [OMPI users] Cast MPI inside another MPI?

2016-11-27 Thread Jeff Hammond
Have you tried subcommunicators? MPI is well-suited to hierarchical parallelism since MPI-1 days. Additionally, MPI-3 enables MPI+MPI as George noted. Your question is probably better suited for Stack Overflow, since it's not implementation-specific... Jeff On Fri, Nov 25, 2016 at 3:34 AM Diego
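
For readers following this suggestion, a small sketch (not from the thread) of the usual hierarchical pattern: one communicator per shared-memory node plus a communicator of node leaders.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank, node_rank;
        MPI_Comm node_comm, leader_comm;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* ranks that share a node / shared-memory domain (MPI-3) */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);

        /* one leader per node communicates across nodes */
        MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                       world_rank, &leader_comm);

        /* ... intra-node work on node_comm, inter-node work on leader_comm ... */

        if (leader_comm != MPI_COMM_NULL) MPI_Comm_free(&leader_comm);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }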

Re: [OMPI users] Follow-up to Open MPI SC'16 BOF

2016-11-22 Thread Jeff Hammond
> 1. MPI_ALLOC_MEM integration with memkind. It would make sense to prototype this as a standalone project that is integrated with any MPI library via PMPI. It's probably a day or two of work to get that going. Jeff
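
A rough sketch of what such a PMPI-based prototype could look like (purely illustrative: memkind availability and the bookkeeping of which pointers came from it are glossed over).

    #include <mpi.h>
    #include <memkind.h>

    /* Intercept MPI_Alloc_mem so buffers come from high-bandwidth memory;
     * fall back to the MPI library's allocator if memkind fails. */
    int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
    {
        void *p = memkind_malloc(MEMKIND_HBW, (size_t)size);
        if (p == NULL)
            return PMPI_Alloc_mem(size, info, baseptr);
        *(void **)baseptr = p;
        return MPI_SUCCESS;
    }

    int MPI_Free_mem(void *base)
    {
        /* a real wrapper must track which pointers were memkind-allocated;
         * this sketch assumes all of them were */
        memkind_free(MEMKIND_HBW, base);
        return MPI_SUCCESS;
    }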

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-07 Thread Jeff Hammond
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love wrote: > > [Some time ago] > Jeff Hammond writes: > > > If you want to keep long-waiting MPI processes from clogging your CPU > > pipeline and heating up your machines, you can turn blocking MPI > > collectives into nice
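
The general idea being referred to, as a sketch (illustrative only, not the exact code from the thread): replace the blocking collective with its nonblocking form and poll with a short sleep, so a waiting rank stops spinning at 100% CPU at the cost of some wakeup latency.

    #include <mpi.h>
    #include <unistd.h>   /* usleep */

    /* A "nicer" broadcast: poll the nonblocking request and yield the core
     * between tests instead of busy-waiting inside MPI_Bcast. */
    int nice_bcast(void *buf, int count, MPI_Datatype type, int root,
                   MPI_Comm comm)
    {
        MPI_Request req;
        int flag = 0;
        int rc = MPI_Ibcast(buf, count, type, root, comm, &req);
        if (rc != MPI_SUCCESS) return rc;
        while (!flag) {
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            if (!flag) usleep(100);   /* give the core back briefly */
        }
        return MPI_SUCCESS;
    }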

Re: [OMPI users] OMPI users] Fortran and MPI-3 shared memory

2016-10-27 Thread Jeff Hammond
world written in Fortran that could > benefit greatly from this MPI-3 capability. My own background is in > numerical weather prediction, and I know it would be welcome in that > community. Someone knowledgeable in both C and Fortran should be able to > get to bottom of it. > > T

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-25 Thread Jeff Hammond
ion 15.0.2.164, OPEN-MPI 2.0.1. T. Rosmond

Re: [OMPI users] Performing partial calculation on a single node in an MPI job

2016-10-17 Thread Jeff Hammond
George: http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm Jeff On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca wrote: > Vahid, > You cannot use Fortran's vector subscript with MPI.

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-10-16 Thread Jeff Hammond
e mpi world, then only start the mpi framework once it's needed? Regards,

Re: [OMPI users] job distribution issue

2016-09-21 Thread Jeff Hammond

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jeff Hammond
as the same option. I never need stdin to run MiniDFT (i.e. QE-lite). Since both codes you name already have the correct workaround for stdin, I would not waste any time debugging this. Just do the right thing from now on and enjoy having your applications wo

Re: [OMPI users] mpi_f08 Question: set comm on declaration error, and other questions

2016-08-21 Thread Jeff Hammond
is invalid. > > > > > > Huh. I guess I'd assumed that the MPI Standard would have made sure a > declared communicator that hasn't been filled would have been an error to > use. > > > > When I get back on Monday, I'll try out some other compil

[OMPI users] ompi_info -c does not print configure arguments

2016-07-23 Thread Jeff Hammond
AX_INFO_KEY: 36 MPI_MAX_INFO_VAL: 256 MPI_MAX_PORT_NAME: 1024 MPI_MAX_DATAREP_STRING: 128 How do I extract configure arguments from an OpenMPI installation? I am trying to reproduce a build exactly and I do not have access to config.log from the original build. Thanks, Jeff

Re: [OMPI users] Using Open MPI as a communication library

2016-07-08 Thread Jeff Hammond

Re: [OMPI users] Continuous integration question...

2016-06-22 Thread Jeff Hammond
same latest_snapshot.txt thing there: wget https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt then wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat latest_snapshot.txt`.tar.bz2

Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond
was assuming KMP_AFFINITY was used > > > so let me put it this way : > > do *not* use KMP_AFFINITY with mpirun -bind-to none, otherwise, you will > very likely end up doing time sharing ... > > > Cheers, > > > Gilles > > On 6/22/2016 5:07 PM, Jeff Hammond wrote:

Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond

Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond

Re: [OMPI users] max buffer size

2016-06-05 Thread Jeff Hammond

Re: [OMPI users] Broadcast faster than barrier

2016-05-30 Thread Jeff Hammond
explained. There's a nice paper on self-consistent performance of MPI implementations that has lots of details. Jeff

Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-21 Thread Jeff Hammond
g them? If we exclude GPU or other non-MPI solutions, and cost being a primary factor, what is the progression path from 2 boxes to a cloud-based solution (Amazon and the like...)? Regards, MM

Re: [OMPI users] Porting MPI-3 C-program to Fortran

2016-04-18 Thread Jeff Hammond
MPI uses void** arguments to pass pointer by reference so it can be updated. In Fortran, you always pass by reference so you don't need this. Just pass your Fortran pointer argument. There are MPI-3 shared memory examples in Fortran somewhere. Try Using Advanced MPI (latest edition) or MPI Trac
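
For reference, a sketch of the C side being described (error handling omitted; names are illustrative): MPI_Win_allocate_shared writes the base address back through a void* that is effectively a pointer-to-pointer, whereas the Fortran argument is simply passed by reference (a TYPE(C_PTR) with mpi_f08).

    #include <mpi.h>

    /* Allocate a shared window on a per-node communicator and look up a
     * peer's base address within the same node. */
    void shared_window_demo(MPI_Comm node_comm, MPI_Aint nbytes)
    {
        double *base = NULL;
        MPI_Win win;

        /* &base is the void** idiom: MPI fills in the local base pointer */
        MPI_Win_allocate_shared(nbytes, sizeof(double), MPI_INFO_NULL,
                                node_comm, &base, &win);

        MPI_Aint size;
        int disp_unit;
        double *peer = NULL;
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &peer);

        MPI_Win_free(&win);
    }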

Re: [OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Jeff Hammond
ing MPI-3 shared memory features? T. Rosmond

Re: [OMPI users] resolution of MPI_Wtime

2016-04-08 Thread Jeff Hammond

Re: [OMPI users] resolution of MPI_Wtime

2016-04-07 Thread Jeff Hammond
om this. It is extremely important to application developers that MPI_Wtime represent a "best effort" implementation on every platform. Other implementations of MPI have very accurate counters. Jeff

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond
at was posted, each thread uses its own communicator, so it > complies with the above lines. > I didn't see the attachment. Sorry. Reading email at the beach appears to be a bad idea on multiple levels :-) You are right that duping the comm for the second file makes this a correct
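
A sketch of the pattern under discussion (illustrative, not the code from the attachment): give each concurrently used file its own duplicated communicator, so collective MPI-IO calls issued from different threads never share one communicator.

    #include <mpi.h>

    /* Duplicate the parent communicator once per file/thread.  MPI_Comm_dup
     * is collective, so call it from one thread per rank, e.g. before the
     * file is handed to a worker thread. */
    void open_on_private_comm(const char *fname, MPI_Comm parent,
                              MPI_Comm *my_comm, MPI_File *fh)
    {
        MPI_Comm_dup(parent, my_comm);
        MPI_File_open(*my_comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, fh);
    }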

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond

Re: [OMPI users] Existing and emerging interconnects for commodity PCs

2016-03-21 Thread Jeff Hammond
'good', say, 10 years down the road? Thanks, Durga. We learn from history that we never learn from history.

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
On Mon, Mar 21, 2016 at 1:37 PM, Brian Dobbins wrote: > > Hi Jeff, > > On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond > wrote: > >> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf >> to see the status of all implementations w.r.t. MPI-3 as

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf to see the status of all implementations w.r.t. MPI-3 as of one year ago. Jeff On Mon, Mar 21, 2016 at 1:14 PM, Jeff Hammond wrote: > Call MPI from C code, where you will have all the preprocessor support you > need. Wrap that C code with F
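
A sketch of that suggestion (the helper name is hypothetical): keep the version-dependent check in a tiny C file, where MPI_VERSION and MPI_SUBVERSION are preprocessor macros, and call it from Fortran via ISO_C_BINDING.

    #include <mpi.h>

    /* Returns 1 if the MPI library advertises MPI-3 (e.g. shared-memory
     * windows, nonblocking collectives), 0 otherwise.  Callable from
     * Fortran through an interface with BIND(C). */
    int have_mpi3(void)
    {
    #if MPI_VERSION >= 3
        return 1;
    #else
        return 0;
    #endif
    }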

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
the mpif90/mpif77 commands provide them a terrible, terrible idea? Or any other suggestions? Thanks, - Brian

Re: [OMPI users] Fault tolerant feature in Open MPI

2016-03-16 Thread Jeff Hammond
rate process running in node A to other node, let's say to > node C. > is there a way to do this with open MPI ? thanks. > > Regards, > > Husen > > > > > On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond > wrote: > >> Why do you need OpenMPI to do th

Re: [OMPI users] Fault tolerant feature in Open MPI

2016-03-16 Thread Jeff Hammond
his. And by the way, is Open MPI able to checkpoint or restart an MPI application/GROMACS automatically? Please, I really need help. Regards, Husen
