0012cf80 B ompi_mpi_info_null
00116038 D ompi_mpi_info_null_addr
00133720 B ompi_mpi_op_null
001163c0 D ompi_mpi_op_null_addr
00135740 B ompi_mpi_win_null
00117c80 D ompi_mpi_win_null_addr
0012d080 B ompi_request_null
00116040 D ompi_request_null_addr
--
Je
r it?
>
>
>
> Thanks,
>
> Kurt
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--with-memalign=64
>>
>> and OpenMPI configure options:
>>
>>
>> '--prefix=/scinet/niagara/software/2022a/opt/gcc-11.2.0/openmpi/4.1.2+ucx-1.11.2'
>> '--enable-mpi-cxx'
>> '--enable-mpi1-compatibility'
>> '--with-hwloc=internal'
>> '--with-knem=/opt/knem-1.1.3.90mlnx1'
>> '--with-libevent=internal'
>> '--with-platform=contrib/platform/mellanox/optimized'
>> '--with-pmix=internal'
>> '--with-slurm=/opt/slurm'
>> '--with-ucx=/scinet/niagara/software/2022a/opt/gcc-11.2.0/ucx/1.11.2'
>>
>> I am then wondering:
>>
>> 1) Is the UCX library considered "stable" for production use with very
>> large problems?
>>
>> 2) Is there a way to "bypass" UCX at runtime?
>>
>> 3) Any idea for debugging this?
>>
>> Of course, I do not yet have a "minimum reproducer" that fails, since the
>> problem happens only on "large" problems, but I think I could export the data
>> for a 512-process reproducer with a ParMETIS call only...
>>
>> Thanks for helping,
>>
>> Eric
>>
>> --
>>
>> Eric Chamberland, ing., M. Ing
>>
>> Research professional
>>
>> GIREF/Université Laval
>>
>> (418) 656-2131, ext. 41 22 42
>>
>>
>
> --
> Josh Hursey
> IBM Spectrum MPI Developer
>
> --
> Eric Chamberland, ing., M. Ing
> Research professional
> GIREF/Université Laval
> (418) 656-2131, ext. 41 22 42
>
> --
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
https://jenkins.open-mpi.org/jenkins/job/open-mpi.build.compilers/8370/
indicates you are not testing GCC 11. Please test this compiler.
https://github.com/open-mpi/ompi/pull/10343 has details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
RISC-V node. It will generate a config.cache file.
>
> Then you can
>
> grep ^ompi_cv_fortran_ config.cache
>
> to generate the file you can pass to --with-cross when cross compiling
> on your x86 system
>
>
> Cheers,
>
>
> Gilles
>
>
> On 9/7/2021 7:35 PM, Jeff Ham
relevant.
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
I am running on a single node and do not need any network support. I am
using the NVIDIA build of Open-MPI 3.1.5. How do I tell it to never use
anything related to IB? It seems that ^openib is not enough.
Thanks,
Jeff
$ OMP_NUM_THREADS=1
/proj/nv/Linux_aarch64/21.5/comm_libs/openmpi/openmpi-3
mpi.org> wrote:
>> >>
>> >> Assuming you want to learn about MPI (and not the Open MPI internals),
>> >> the books by Bill Gropp et al. are the reference :
>> >> https://www.mcs.anl.gov/research/projects/mpi/usingmpi/
>> >>
>> >> (Using MPI 3rd edition is affordable on amazon)
>> >
>> >
>> > Thanks! Yes, this is what I was after. However, if I wanted to learn
>> about OpenMPI internals, what would be the go-to resource?
>>
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
rently than traditional MPI codes in
a NUMA context and it is worth mentioning it explicitly if you are using
> NWChem, GAMESS, MOLPRO, or another code that uses GA or DDI. If you are
running VASP, CP2K, or other code that uses MPI in a more conventional
manner, don't worry about it.
Jeff
fully built OpenMPI
> > 4.0.2 with GCC, Intel and AOCC compilers, all using the same options.
> >
> > hcoll is provided by MLNX_OFED 4.7.3 and configure is run with
> >
> > --with-hcoll=/opt/mellanox/hcoll
> >
> >
>
> --
> Ake Sandgren, HPC2N,
s not work, so I’m wondering,
> whether this is a current limitation or are we not supposed to end up in
> this specific …_request_cancel implementation?
>
> Thank you in advance!
>
> Christian
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
requests by now */
> *return* OMPI_SUCCESS;
> }
>
> The man page for MPI_Cancel does not mention that cancelling Send requests
> does not work, so I’m wondering,
> whether this is a current limitation or are we not supposed to end up in
> this specific …_request_c
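For reference, the portable way to find out whether a cancel actually took
effect is to complete the request and then query the status; a minimal sketch
(the helper name is illustrative), which with the behavior quoted above will
simply report that the send was not cancelled:

#include <mpi.h>
#include <stdio.h>

/* Complete the request after MPI_Cancel, then ask MPI_Test_cancelled whether
 * the cancel took effect.  With the implementation behavior quoted above, a
 * send request will report "not cancelled". */
void try_cancel_send(MPI_Request *req)
{
    MPI_Status status;
    int cancelled = 0;
    MPI_Cancel(req);
    MPI_Wait(req, &status);                 /* a cancel only resolves at Wait/Test */
    MPI_Test_cancelled(&status, &cancelled);
    printf("send cancelled: %s\n", cancelled ? "yes" : "no");
}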
that's not really
> cool.
>
It sounds like Open-MPI doesn't properly support the maximum transfer size
of PSM2. One way to work around this is to wrap your MPI collective calls
and do <4G chunking yourself.
Jeff
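A minimal sketch of that chunking wrapper (the 1 GiB chunk size and the helper
name are illustrative; the receive side would be wrapped the same way):

#include <mpi.h>
#include <stddef.h>

/* Broadcast an arbitrarily large byte buffer in chunks that stay well below
 * the transport's 4 GiB limit. */
static int chunked_bcast(void *buf, size_t total_bytes, int root, MPI_Comm comm)
{
    const size_t chunk = (size_t)1 << 30;   /* 1 GiB per MPI_Bcast call */
    char *p = (char *)buf;
    while (total_bytes > 0) {
        int count = (int)(total_bytes < chunk ? total_bytes : chunk);
        int rc = MPI_Bcast(p, count, MPI_BYTE, root, comm);
        if (rc != MPI_SUCCESS)
            return rc;
        p += count;
        total_bytes -= (size_t)count;
    }
    return MPI_SUCCESS;
}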
> Could the error reporting in this case be
"req" should have been turned into
>> MPI_REQUEST_NULL if flag==true.
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
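A small illustration of the semantics described above (for non-persistent
requests; the helper name is illustrative):

#include <mpi.h>
#include <assert.h>

/* Poll a request to completion; once MPI_Test reports flag == 1, the library
 * has reset the handle to MPI_REQUEST_NULL (non-persistent requests only). */
static void wait_by_polling(MPI_Request *req)
{
    int flag = 0;
    while (!flag)
        MPI_Test(req, &flag, MPI_STATUS_IGNORE);
    assert(*req == MPI_REQUEST_NULL);
}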
he nature of the problem or why it works with the old
> OMPI version and not with the new. Any help or pointer would be appreciated.
> Thanks.
> AFernandez
>
>
tes
>> > memory, I'm able to catch bad_alloc as I expected. It seems that I am
>> > misunderstanding something. Could you please help? Thanks a lot.
>> >
>> >
>> >
>> > Best regards,
>> > Zhen
>> >
>> >
s. Would you
> recommend that I report this issue on the developer's mailing list or open
> a GitHub issue?
>
> Best wishes,
> Thomas Pak
>
> On Mar 16 2019, at 7:40 pm, Jeff Hammond wrote:
>
> Is there perhaps a different way to solve your problem that doesn’t spaw
;
> }
>
> // Finalize
> MPI_Finalize();
>
> }
> """
>
> Thanks in advance and best wishes,
> Thomas Pak
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
r is
> 4 bytes. To solve this, instead of using long, you need to use int, which has
> the same size on both architectures. Another option could be to serialize the
> long values.
>
> So my question is: is there any way to pass data in a way that doesn't depend
> on the architecture?
>
>
>
> __
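A hedged sketch of the usual answer to this: use fixed-width C types together
with their matching MPI datatypes so both architectures agree on the size (the
function name is illustrative):

#include <mpi.h>
#include <stdint.h>

/* Use a fixed-width type and its matching MPI datatype so 32-bit and 64-bit
 * architectures exchange exactly 8-byte integers. */
static void bcast_counts(int64_t *counts, int n, int root, MPI_Comm comm)
{
    MPI_Bcast(counts, n, MPI_INT64_T, root, comm);
}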
appear with shared-memory, which is a pretty important conduit.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
cpp, though I don't know how
> robust it is these days in GNU Fortran.]
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffham
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
net NICs can
> handle RDMA requests directly? Or am I misunderstanding RoCE and/or how
> Open MPI's RoCE transport works?
>
> Ben
>
heap 2 cents from a user.
> Gus Correa
>
>
> On 08/10/2018 01:52 PM, Jeff Hammond wrote:
>
>> This thread is a perfect illustration of why MPI Forum participants
>> should not flippantly discuss feature deprecation in discussion with
>> users. Users who are not famil
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
11-2:105590] base/spml_base_select.c:194 - mca_spml_base_select()
>>> select: component ucx selected
>>> [c11-2:105590] spml_ucx.c:82 - mca_spml_ucx_enable() *** ucx ENABLED
>>> [c11-1:36522] spml_ucx.c:305 - mca_spml_
mance Computing Center Stuttgart (HLRS)
> Nobelstr. 19
> D-70569 Stuttgart
>
> Tel.: +49(0)711-68565890
> Fax: +49(0)711-6856832
> E-Mail: schuch...@hlrs.de
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ill be
128b-aligned if the base is. Noncontiguous is actually worse in that the
implementation could allocate the segment for each process with only 64b
alignment.
Jeff
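One workaround sketch for the alignment concern discussed here: over-allocate
the shared segment and round each rank's base pointer up to the desired
boundary. The 128-byte ALIGN value and the helper name are illustrative, not
from the thread:

#include <mpi.h>
#include <stdint.h>

#define ALIGN 128   /* desired alignment in bytes; any power of two works */

/* Allocate a per-rank shared segment with padding, then return a pointer
 * rounded up to the ALIGN boundary.  The window still owns the memory. */
static double *aligned_shared_segment(MPI_Aint nelem, MPI_Comm node_comm,
                                      MPI_Win *win)
{
    double *base;
    MPI_Aint bytes = nelem * (MPI_Aint)sizeof(double) + ALIGN;
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &base, win);
    uintptr_t p = ((uintptr_t)base + (ALIGN - 1)) & ~(uintptr_t)(ALIGN - 1);
    return (double *)p;
}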
> -Nathan
>
> On May 3, 2018, at 9:43 PM, Jeff Hammond wrote:
>
> Given that this seems to break user exper
(poll2.h:46)
> >> ==22815==by 0x583B4A7: poll_dispatch (poll.c:165)
> >> ==22815==by 0x5831BDE: opal_libevent2022_event_base_loop
> (event.c:1630)
> >> ==22815==by 0x57F210D: progress_engine (in
> /usr/local/lib/libopen-pal.so.40.1.0)
> >> ==22815==
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
> bindings. Hence the deprecation in 2009 and the removal in 2012.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
--
Jef
Best regards
> Ahmed
>
elp / error messages
>
>
> I tried fiddling with the MCA command-line settings, but didn't have any
> luck. Is it possible to do this? Can anyone point me to some
> documentation?
>
> Thanks,
>
> Ben
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
at you are doing.
It's possible that a two-phase implementation wins when the specific usage
allows you to use a more efficient collective algorithm.
> I'm open to any good and elegant suggestions!
>
I won't guarantee that any of my suggestions satisfies either property :-)
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
get an AMD chip in your computer.
>
> On Thursday, January 4, 2018, r...@open-mpi.org wrote:
>
>> Yes, please - that was totally inappropriate for this mailing list.
>> Ralph
>>
>>
>> On Jan 4, 2018, at 4:33 PM, Jeff Hammond wrote:
>>
>> Can we
at shouldn’t happen
>> all that frequently, and so I would naively expect the impact to be at the
>> lower end of the reported scale for those environments. TCP-based systems,
>> though, might be on the other end.
>> >>
>> >> Probably something we’ll only reall
hout replacing a lot of Boost code by a
> hand-coded equivalent.
>
> Any suggestions welcome.
>
> Thanks,
>
> Philip
algorithm is
> quite big and I am afraid that this will create further delays. Actually,
> this is the reason I am trying to replace Bcast() and try other things.
>
> I am using Open MPI 2.1.2 and testing on a single computer with 7 MPI
> processes. The ompi_info is the attac
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
here may be unnecessarily restrictive at times.
> >
> > On Wed, Sep 20, 2017 at 4:45 PM, Jeff Hammond
wrote:
> >
> >
> > On Wed, Sep 20, 2017 at 5:55 AM, Dave Love
wrote:
> > Jeff Hammond writes:
> >
> > > Please separate C and C++ here. C has a s
tantially decrease the overall execution time?*
>
> Hoping to get your help soon. Sorry for the long question.
>
> Regards,
> Saiyedul Islam
>
> PS: Specifications of the cluster: GCC 5.10, Open MPI 2.0.1, CentOS 6.5 (as
> part of a Rocks cluster).
> ______
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love
wrote:
> Jeff Hammond writes:
>
> > Please separate C and C++ here. C has a standard ABI. C++ doesn't.
> >
> > Jeff
>
> [For some value of "standard".] I've said the same about C++, but the
> curr
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond
> wrote:
>
> > Fortran is a legit problem, although if somebody builds a standalone
> Fortran
> > 2015 implementation of
t;>
>> >> In general, what is the clean way to build OpenMPI with a GNU compiler set
>> but
>> >> then instruct the wrappers to use Intel compiler set?
>> >>
>> >> Thanks!
>> >> Michael
>> >>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
higher communication latencies as well).
> >>
> >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix
> >> component that uses shmem_open to create a POSIX shared memory object
> >> instead of a file on disk, which is then mmap'ed. Unfortuna
uld it be possible to use anonymous shared memory
> mappings to avoid the backing file for large allocations (maybe above a
> certain threshold) on systems that support MAP_ANONYMOUS and distribute the
> result of the mmap call among the processes on the node?
>
> Thanks,
> Joseph
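For context, a minimal sketch of the shm_open/mmap mechanism the POSIX shmem
component description above refers to (the object name and size are
illustrative; error handling is trimmed, and -lrt may be needed on older
systems):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/shmem_demo_segment";   /* illustrative name */
    size_t len = 1 << 20;

    /* Create and size a named POSIX shared-memory object... */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    /* ...then map it; other processes on the node open the same name. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use the segment ... */

    munmap(p, len);
    close(fd);
    shm_unlink(name);   /* remove the name once all users are done */
    return 0;
}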
> supports allocations up to 60GB, so my second point reported below may be
> invalid. Number 4 seems still seems curious to me, though.
>
> Best
> Joseph
>
> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
>
>> There's no reason to do anything special for shared memory
eded.
>>>
>>> Best
>>> Joseph
>>> --
>>> Dipl.-Inf. Joseph Schuchart
>>> High Performance Computing Center Stuttgart (HLRS)
>>> Nobelstr. 19
>>> D-70569 Stuttgart
>>>
>>> Tel.: +49(0)711-68565890
>>> Fax: +49(0)711-6856832
>>> E-Mail: schuch...@hlrs.de
>>>
>
> --
> Dipl.-Inf. Joseph Schuchart
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstr. 19
> D-70569 Stuttgart
>
> Tel.: +49(0)711-68565890
> Fax: +49(0)711-6856832
> E-Mail: schuch...@hlrs.de
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
. Thanks
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
nMPI's mpirun in the following
>>>
>>> way
>>>
>>> mpirun -np 4 cfd_software
>>>
>>> and I get double free or corruption every single time.
>>>
>>> I have two questions -
>>>
>>> 1) I am unable to captu
Has this error, i.e. double free or corruption, been reported by others?
>> Is there a bug fix available?
>>
>>
>> Regards,
>>
>> Ashwin.
>>
>>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
> Is there any way that I can compile and link the code using Open MPI 1.6.2?
>
> Thanks,
> Arham Amouei
--
Jeff Hammond
jeff.scie...@gmail
326E4DD5D Unknown Unknown Unknown
> a.out 000000403769 Unknown Unknown Unknown
>
> _
> *SAVE WATER ** ~ **SAVE ENERGY**~ **~ **SAVE EARTH *[image:
> Earth-22-june.gif (7996 bytes)]
>
> http://sites.google.com/site/kolukulasivasrinivas/
>
> Siva Srinivas Kolukula, PhD
> *Scientist - B*
> Indian Tsunami Early Warning Centre (ITEWC)
> Advisory Services and Satellite Oceanography Group (ASG)
> Indian National Centre for Ocean Information Services (INCOIS)
> "Ocean Valley"
> Pragathi Nagar (B.O)
> Nizampet (S.O)
> Hyderabad - 500 090
> Telangana, INDIA
>
> Office: 040 23886124
>
>
> *Cell: +91 9381403232; +91 8977801947*
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>
Indeed, this is a problem. There is an effort to fix the API in MPI-4 (see
https://github.com/jeffhammond/bigmpi-paper) but as you know, there are
implementation defects that break correct MPI-3 programs that use datatypes
to work around the limits of C int. We were able to find a bunch of
proble
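For reference, this is roughly what the datatype workaround looks like; it is
exactly the kind of code the implementation defects mentioned above can break.
The block size and function name are illustrative, and the receiver must build
a matching type:

#include <mpi.h>
#include <stddef.h>

/* Send 'total' doubles (possibly more than INT_MAX) by packing them into a
 * contiguous derived datatype so the count argument stays small. */
static int send_large(const double *buf, size_t total, int dest, int tag,
                      MPI_Comm comm)
{
    const size_t block = (size_t)1 << 20;    /* 2^20 doubles per element */
    size_t nblocks = total / block;
    size_t rest    = total % block;

    MPI_Datatype blocktype;
    MPI_Type_contiguous((int)block, MPI_DOUBLE, &blocktype);
    MPI_Type_commit(&blocktype);

    int rc = MPI_Send(buf, (int)nblocks, blocktype, dest, tag, comm);
    if (rc == MPI_SUCCESS && rest > 0)        /* remainder as plain doubles */
        rc = MPI_Send(buf + nblocks * block, (int)rest, MPI_DOUBLE,
                      dest, tag, comm);

    MPI_Type_free(&blocktype);
    return rc;
}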
prior emails in this
> thread: "As always, experiment to find the best for your hardware and
> jobs." ;-)
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres)
wrote:
> On Mar 15, 2017, at 8:25 PM, Jeff Hammond wrote:
> >
> > I couldn't find the docs on mpool_hints, but shouldn't there be a way to
> disable registration via MPI_Info rather than patching the source
d via IB is not a solution for
> >> multi-node jobs, huh).
> >
> > But it works OK with libfabric (ofi mtl). Is there a problem with
> > libfabric?
> >
> > Has anyone reported this issue to the cp2k people? I know it's not
> > their problem, but I
>>>>
>>>> Cheers
>>>> Joseph
>>>>
>>>> --
>>>> Dipl.-Inf. Joseph Schuchart
>>>> High Performance Computing Center Stuttgart (HLRS)
>>>> Nobelstr. 19
>>>> D-70569 Stuttgart
>>>>
h Performance Computing Center Stuttgart (HLRS)
> > Nobelstr. 19
> > D-70569 Stuttgart
> >
> > Tel.: +49(0)711-68565890
> > Fax: +49(0)711-6856832
> > E-Mail: schuch...@hlrs.de
> >
>
Center
> Lattes: http://lattes.cnpq.br/0796232840554652
>
>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
a60f50479
>>
>> The problem seems to have been one with the Xcode configuration:
>>
>> "It turns out my Xcode was messed up as I was missing /usr/include/.
>> After rerunning xcode-select --install it works now."
>>
>> On my OS X 10.11.6,
ents of the typical effects of spinning and
> ameliorations on some sort of "representative" system?
>
>
None that are published, unfortunately.
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
well (http://www.pgroup.com/userforum/viewtopic.php?t=5413&start=0) since
> I'm not sure. But, no matter what, does anyone have thoughts on how to
> solve this?
>
> Thanks,
> Matt
>
> --
> Matt Thompson
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
Have you tried subcommunicators? MPI is well-suited to hierarchical
parallelism since MPI-1 days.
Additionally, MPI-3 enables MPI+MPI as George noted.
Your question is probably better suited for Stack Overflow, since it's not
implementation-specific...
Jeff
On Fri, Nov 25, 2016 at 3:34 AM Diego
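A minimal sketch of the subcommunicator approach (names are illustrative):
split MPI_COMM_WORLD into per-node communicators, then form a communicator of
the node leaders for the upper level of the hierarchy.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* One communicator per shared-memory node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Rank 0 of each node joins the inter-node "leader" communicator. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}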
>
>
>
>1. MPI_ALLOC_MEM integration with memkind
>
> It would make sense to prototype this as a standalone project that is
integrated with any MPI library via PMPI. It's probably a day or two of
work to get that going.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http:
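A rough sketch of that PMPI interception idea. Plain malloc stands in for the
memkind allocation, and the "alloc_kind" info key is purely hypothetical; a
matching MPI_Free_mem wrapper would be needed as well.

#include <mpi.h>
#include <stdlib.h>

/* Intercept MPI_Alloc_mem: if the caller asked for a special kind of memory
 * via an info key, allocate it here (memkind would go where malloc is);
 * otherwise defer to the MPI library through PMPI. */
int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
{
    int flag = 0;
    char value[MPI_MAX_INFO_VAL];
    if (info != MPI_INFO_NULL)
        MPI_Info_get(info, "alloc_kind", MPI_MAX_INFO_VAL - 1, value, &flag);

    if (!flag)
        return PMPI_Alloc_mem(size, info, baseptr);

    void *p = malloc((size_t)size);   /* replace with memkind_malloc(...) */
    if (p == NULL)
        return MPI_ERR_NO_MEM;
    *(void **)baseptr = p;
    return MPI_SUCCESS;
}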
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love wrote:
>
> [Some time ago]
> Jeff Hammond writes:
>
> > If you want to keep long-waiting MPI processes from clogging your CPU
> > pipeline and heating up your machines, you can turn blocking MPI
> > collectives into nice
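Assuming the truncated sentence above refers to the usual trick of polling a
nonblocking equivalent and sleeping between tests, a sketch for a barrier
looks like this (the sleep interval and helper name are illustrative):

#include <mpi.h>
#include <unistd.h>

/* A "nice" barrier: poll MPI_Ibarrier and yield the core between tests
 * instead of spinning inside MPI_Barrier. */
static void nice_barrier(MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;
    MPI_Ibarrier(comm, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            usleep(1000);   /* sleep instead of busy-waiting */
    }
}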
world written in Fortran that could
> benefit greatly from this MPI-3 capability. My own background is in
> numerical weather prediction, and I know it would be welcome in that
> community. Someone knowledgeable in both C and Fortran should be able to
> get to bottom of it.
>
> T
ion 15.0.2.164
> OPEN-MPI 2.0.1
>
>
> T. Rosmond
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
George:
http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
Jeff
On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca wrote:
> Vahid,
>
> You cannot use Fortran's vector subscripts with MPI.
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeff
e mpi world, then only start the mpi framework once
it's needed?
>
> Regards,
>
--
Jeff Hammond
jeff.scie...
ral Biology and Bioinformatics Division*
>> *CSIR-Indian Institute of Chemical Biology*
>>
>> *Kolkata 700032*
>>
>> *INDIA*
>>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
as the same option. I
never need stdin to run MiniDFT (i.e. QE-lite).
Since both codes you name already have the correct workaround for stdin, I
would not waste any time debugging this. Just do the right thing from now
on and enjoy having your applications wo
is invalid.
>
>
>
>
>
> Huh. I guess I'd assumed that the MPI Standard would have made sure a
> declared communicator that hasn't been filled would have been an error to
> use.
>
>
>
> When I get back on Monday, I'll try out some other compil
AX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
How do I extract configure arguments from an OpenMPI installation? I am
trying to reproduce a build exactly and I do not have access to config.log
from the original build.
Thanks,
Jeff
--
Jeff Hammond
jeff.sc
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
same latest_snapshot.txt thing there:
>>
>> wget
>> https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt
>> wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat
>> latest_snapshot.txt`.tar.bz2
>>
>>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
was assuming KMP_AFFINITY was used
>
>
> so let me put it this way :
>
> do *not* use KMP_AFFINITY with mpirun -bind-to none, otherwise, you will
> very likely end up doing time sharing ...
>
>
> Cheers,
>
>
> Gilles
>
> On 6/22/2016 5:07 PM, Jeff Hammond wrote:
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
explained.
There's a nice paper on self-consistent performance of MPI implementations
that has lots of details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
g them?
>
> If we exclude GPU or other nonMPI solutions, and cost being a primary
> factor, what is progression path from 2boxes to a cloud based solution
> (amazon and the like...)
>
> Regards,
> MM
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
MPI uses void** arguments to pass pointer by reference so it can be updated. In
Fortran, you always pass by reference so you don't need this. Just pass your
Fortran pointer argument.
There are MPI-3 shared memory examples in Fortran somewhere. Try Using Advanced
MPI (latest edition) or MPI Trac
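A C-side sketch of the pattern being described: the last argument of
MPI_Win_allocate_shared and MPI_Win_shared_query is an output pointer (a
void** in disguise), which a Fortran caller receives through ordinary
pass-by-reference plus C_F_POINTER. Sizes and names here are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    /* Each rank contributes a segment; 'mine' is returned via the
     * output-pointer argument. */
    double *mine;
    MPI_Win win;
    MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node, &mine, &win);

    /* Query rank 0's segment; 'peer' ends up pointing into shared memory. */
    double *peer;
    MPI_Aint size;
    int disp;
    MPI_Win_shared_query(win, 0, &size, &disp, &peer);

    int r;
    MPI_Comm_rank(node, &r);
    if (r == 0) peer[0] = 42.0;
    MPI_Barrier(node);
    printf("rank %d sees %g\n", r, peer[0]);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}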
ing MPI-3 shared memory features?
>
> T. Rosmond
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
om this. It
is extremely important to application developers that MPI_Wtime represent a
"best effort" implementation on every platform.
Other implementations of MPI have very accurate counters.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
at was posted, each thread uses its own communicator, so it
> complies with the above lines.
>
I didn't see the attachment. Sorry. Reading email at the beach appears to be a
bad idea on multiple levels :-)
You are right that duping the comm for the second file makes this a correct
> Boltzmannstrasse 3, 85748 Garching, Germany
> http://www5.in.tum.de/
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
'good', say, 10 years down the road?
>>
>> Thanks
>> Durga
>>
>> We learn from history that we never learn from history.
>>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
On Mon, Mar 21, 2016 at 1:37 PM, Brian Dobbins wrote:
>
> Hi Jeff,
>
> On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond
> wrote:
>
>> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
>> to see the status of all implementations w.r.t. MPI-3 as
://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf to
see the status of all implementations w.r.t. MPI-3 as of one year ago.
Jeff
On Mon, Mar 21, 2016 at 1:14 PM, Jeff Hammond
wrote:
> Call MPI from C code, where you will have all the preprocessor support you
> need. Wrap that C code with F
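A sketch of that approach (names and the macro are illustrative): the MPI call
and any preprocessor logic live in C, and the Fortran side binds to the plain
C function via ISO_C_BINDING, passing its communicator as an integer handle.

#include <mpi.h>
#include <stdio.h>

/* Simple error-check macro: an example of the preprocessor support that is
 * easy in C but awkward in Fortran. */
#define CHECK_MPI(call)                                        \
    do {                                                       \
        int rc_ = (call);                                      \
        if (rc_ != MPI_SUCCESS)                                \
            fprintf(stderr, "MPI error %d at %s:%d\n",         \
                    rc_, __FILE__, __LINE__);                  \
    } while (0)

/* Callable from Fortran; the Fortran communicator handle is converted with
 * MPI_Comm_f2c. */
void c_allreduce_sum(double *sendbuf, double *recvbuf, int count, MPI_Fint fcomm)
{
    MPI_Comm comm = MPI_Comm_f2c(fcomm);
    CHECK_MPI(MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, comm));
}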
the mpif90/mpif77 commands provide them a terrible,
> terrible idea?
>
> Or any other suggestions?
>
> Thanks,
> - Brian
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
rate process running on node A to another node, let's say to
> node C.
> Is there a way to do this with Open MPI? Thanks.
>
> Regards,
>
> Husen
>
>
>
>
> On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond > wrote:
>
>> Why do you need OpenMPI to do th
his.
>
> and by the way, does Open MPI able to checkpoint or restart mpi
> application/GROMACS automatically ?
> Please, I really need help.
>
> Regards,
>
>
> Husen
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/