Re: [OMPI devel] Open MPI v5.0.0rc13 is available for testing

2023-10-04 Thread Tomislav Janjusic via devel
From: devel on behalf of Christoph Niethammer via devel Sent: Wednesday, October 4, 2023 2:40 AM To: Austen Lauria Cc: Christoph Niethammer ; Open MPI Developers Subject: Re: [OMPI devel] [Open MPI Announce] Open MPI v5.0.0rc13 is available for testing External email: Use caution opening li

Re: [OMPI devel] [Open MPI Announce] Open MPI v5.0.0rc13 is available for testing

2023-10-04 Thread Christoph Niethammer via devel
Hello Austen, Unfortunately I could not attend the last telco and therefore I do not know if this was discussed. So I'd like to bring attention to https://github.com/mpi-forum/mpi-issues/issues/765 It is not voted on yet but seems to have support to go in. As Partitioned communication is int

Re: [OMPI devel] Open MPI 4.1.6rc1 release candidate posted

2023-09-06 Thread Orion Poplawski via devel
On 8/8/23 07:34, Jeff Squyres (jsquyres) via devel wrote: There have been enough minor fixes to warrant a 4.1.6 release.  We've posted 4.1.6rc1 tarballs in the usual location: https://www.open-mpi.org/software/ompi/v4.1/ .. We would welcome and

Re: [OMPI devel] Open MPI v5.0.0rc9 is available for testing

2022-11-25 Thread Gilles Gouaillardet via devel
Well, --enable-mca-no-build=io-romio341 works strictly speaking (it does not build the io/romio341 component). That being said, it does not prevent 3rd-party/romio from being built, and this is what fails with gcc 4.8. I will file an issue in order to keep track of that. Cheers, Gilles On Fri,

Re: [OMPI devel] Open MPI v5.0.0rc9 is available for testing

2022-11-25 Thread Gilles Gouaillardet via devel
This is very odd ... I dumped DISABLE_io_romio341, and it is *not* set (!) Anyway, let's follow-up at https://github.com/open-mpi/ompi/issues/11088 Cheers, Gilles On Fri, Nov 18, 2022 at 5:37 PM Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > Let me take a step back... > > Looki

Re: [OMPI devel] Open MPI v5.0.0rc9 is available for testing

2022-11-25 Thread Gilles Gouaillardet via devel
Let me take a step back... Looking at the code, it should work ... except it does not :-( $ configure --enable-mca-no-build=io-romio341 [...] checking which components should be disabled... io-romio341 [...] +++ Configuring MCA framework io checking for no configure components in framework io...

Re: [OMPI devel] Open MPI v5.0.0rc9 is available for testing

2022-11-17 Thread Barrett, Brian via devel
--enable-mca-no-build=io-romio341 should still work. Or just --disable-io-romio. No comment around the RHEL7 part; that's pretty old, but I don't think we've officially said it is too old. Probably something worth filing a ticket for so that we can run it to ground before the 5.0 release. Oddly, CI

Re: [OMPI devel] Open MPI v5.0.0rc9 is available for testing

2022-11-14 Thread Gilles Gouaillardet via devel
Folks, I tried to build on a RHEL7-like system, and it fails at make time because ROMIO requires stdatomic.h (it seems this file is only available from GCC 4.9). Are we supposed to be able to build Open MPI 5 with GCC 4.8 (e.g. the stock RHEL7 compiler)? --enable-mca-no-build=io-romio314 cannot hel

Re: [OMPI devel] Open MPI Java MPI bindings

2022-08-10 Thread Gilles Gouaillardet via devel
Hi, Same impression here. There are bug reports once in a while (both the Open MPI mailing list and Stack Overflow). I remember one user who used the Java bindings to teach MPI. So my gut feeling is the number of active users is small but not zero. Cheers, Gilles On Wed, Aug 10, 2022 at 8:48 PM t-kawashim

Re: [OMPI devel] Open MPI Java MPI bindings

2022-08-10 Thread t-kawashima--- via devel
Hi, Fujitsu MPI, which is currently based on Open MPI v4.0.x, supports Java bindings. It is because one of our customers requested to support it a decade or so ago. I don't know if users are still using it. The Fujitsu MPI development team regularly receives trouble/request reports but does not receive

Re: [OMPI devel] Open MPI source RPM / specfile

2022-04-11 Thread Jeff Squyres (jsquyres) via devel
via devel Sent: Monday, April 4, 2022 4:51 PM To: Open MPI Developers Cc: Zhang, Wei Subject: Re: [OMPI devel] Open MPI source RPM / specfile Hi Jeff, The AWS EFA team also uses the RPM specfile to build openmpi RPM, and distribute the RPM. We would also prefer that you continue to maintain it

Re: [OMPI devel] Open MPI source RPM / specfile

2022-04-04 Thread Zhang, Wei via devel
Hi Jeff, The AWS EFA team also uses the RPM specfile to build the openmpi RPM, and distributes the RPM. We would also prefer that you continue to maintain it. Sincerely, Wei Zhang On 4/4/22, 1:47 PM, "devel on behalf of Goldman, Adam via devel" wrote: CAUTION: This email originated from ou

Re: [OMPI devel] Open MPI source RPM / specfile

2022-04-04 Thread Goldman, Adam via devel
Hi Jeff, We (Intel IEFS Team) do not use the SRPM, but we do use the RPM specfile and scripts to build our own RPMs. We would prefer if you can continue to maintain this going forward. Regards, Adam Goldman Intel Corporation -Original Message- From: devel On Behalf Of Jeff Squyres (j

Re: [OMPI devel] Open MPI v5.0.x branch created

2021-06-29 Thread Austen W Lauria via devel
github.com/open-mpi/ompi/issues/new If that is problematic, you can just send the details here and I can open it. Thanks! Austen Lauria - Original message - From: "Eric Chamberland via devel" Sent by: "devel" To: "Open MPI Developers" Cc: "Eric Chamberla

Re: [OMPI devel] Open MPI v5.0.x branch created

2021-06-28 Thread Eric Chamberland via devel
Hi, I just checked out the 5.0.x branch and gave it a try. Is it ok to report problems or shall we wait until an official rc1? Thanks, Eric ps: I have a bug with MPI_File_open... On 2021-03-11 1:24 p.m., Geoffrey Paulsen via devel wrote: Open MPI developers, We've created the Open MPI v

Re: [OMPI devel] Open MPI documentation

2020-11-19 Thread luis Cebamanos via devel
Hi Jeff, It sounds like music to me. The OpenMPI doc needs some sort of boost and Sphinx is the ideal package for this. Cheers, Luis Cebamanos On 16/11/2020 21:02, Jeff Squyres (jsquyres) via devel wrote: > Over the past few months, I've been musing about Open MPI's documentation. > > Short vers

Re: [OMPI devel] Open MPI 3rd party packaging changes

2020-10-08 Thread Orion Poplawski via devel
On 10/1/20 1:43 PM, Barrett, Brian via devel wrote: All - Only 6 months after I promised the code would be done, the changes we discussed in February around 3rd party packages (Libevent, HWLOC, PMIx, and PRRTE) are merged to master. With these changes, Open MPI will prefer an external version

Re: [OMPI devel] [Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-07-06 Thread Jeff Squyres (jsquyres) via devel
Gentle reminder that part 2 of "The ABCs of Open MPI" will be this Wednesday, 8 July, 2020 at: - 8am US Pacific time - 11am US Eastern time - 3pm UTC - 5pm CEST Ralph and I will be continuing our discussion and explanations of the Open MPI ecosystem. The Webex link to join is on the event wiki

Re: [OMPI devel] [Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-06-22 Thread Jeff Squyres (jsquyres) via devel
After assembling the content for this online presentation (based on questions and comments from the user community), we have so much material to cover that we're going to split it into two sessions. The first part will be **this Wednesday (24 June 2020)** at: - 8am US Pacific time - 11am US Ea

Re: [OMPI devel] Open MPI BTL TCP interface mapping

2020-01-09 Thread Zhang, William via devel
for the first graph was simulated using the num links variable. Please let me know if I misunderstood something. Thanks, William Zhang From: George Bosilca Date: Thursday, January 9, 2020 at 12:02 PM To: Open MPI Developers Cc: "Zhang, William" Subject: Re: [OMPI devel] Open M

Re: [OMPI devel] Open MPI BTL TCP interface mapping

2020-01-09 Thread George Bosilca via devel
Will, The 7134 issue is complex in its interactions with the rest of the TCP BTL, and I could not find the time to look at it carefully enough (or test it on AWS). But maybe you can address my main concern here. #7134 interface selection will have an impact on the traffic distribution among the dif

Re: [OMPI devel] Open MPI BTL TCP interface mapping

2020-01-09 Thread Zhang, William via devel
Hello devel, Thanks George for reviewing: https://github.com/open-mpi/ompi/pull/7167 Can I get a review (not from Brian) for this patch as well: https://github.com/open-mpi/ompi/pull/7134 These PRs fix common matching bugs that users utilizing the tcp btl encounter. It has been proven to fix

Re: [OMPI devel] Open MPI BTL TCP interface mapping review request

2019-12-17 Thread Zhang, William via devel
Oh sorry, https://github.com/open-mpi/ompi/pull/7167 https://github.com/open-mpi/ompi/pull/7134 Thanks for pointing it out 😊 From: "Heinz, Michael William" Date: Tuesday, December 17, 2019 at 11:19 AM To: Open MPI Developers Cc: "Zhang, William" Subject: RE: Open MPI BTL TCP interface mapping

Re: [OMPI devel] Open MPI BTL TCP interface mapping review request

2019-12-17 Thread Heinz, Michael William via devel
William, You seem to have posted the same pull request twice? From: devel On Behalf Of Zhang, William via devel Sent: Tuesday, December 17, 2019 2:16 PM To: devel@lists.open-mpi.org Cc: Zhang, William Subject: [OMPI devel] Open MPI BTL TCP interface mapping review request Hello devel, Can so

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-13 Thread Ralph Castain via devel
m/open-mpi/ompi/issues/6613 MPIR was broken in 4.0.1 due to a race condition in PMIx. It was patched, it looks to me, for 4.0.2. Here is the openpmix issue: https://github.com/openpmix/openpmix/issues/1189 I think this lines up - 4.0.2 should be good with a fix. John DelSignore ---11/12/2019 02:25:1

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-13 Thread Ralph Castain via devel
1189 I think this lines up - 4.0.2 should be good with a fix. John DelSignore ---11/12/2019 02:25:14 PM---Hi Austen, Thanks for the reply. What I am seeing is consistent with your thought, in that when I se From: John DelSignore To: Open MPI Developers Cc: Austen W Lauria, devel Date: 11/12/2019 0

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-13 Thread John DelSignore via devel
From: John DelSignore <jdelsign...@perforce.com> To: Open MPI Developers <devel@lists.open-mpi.org> Cc: Austen W Lauria <awlau...@us.ibm.com>, devel <devel-boun...@lists.open-mpi.org> Date: 11/12/2019 02:25 PM Subject: [EXTERNAL] Re: [OMPI dev

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Ralph Castain via devel
Developers Cc: Austen W Lauria , devel Date: 11/12/2019 02:25 PM Subject: [EXTERNAL] Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView Hi Austen, Thanks for the reply. What I am seeing is consistent with yo

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread John DelSignore via devel
vel <devel-boun...@lists.open-mpi.org> Date: 11/12/2019 02:25 PM Subject: [EXTERNAL] Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView Hi Austen, Thanks for the reply. What I am seeing is consistent with

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Ralph Castain via devel
mething the volatile keyword doesn't do on its own. I think it's also much cleaner as it eliminates an arbitrary sleep from the code - which I see as a good thing as well. "Ralph Castain via devel" ---11/12/2019 09:24:23 AM---> On

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread George Bosilca via devel
>On the other end, this would require the thread updating >this variable to: > >pthread_mutex_lock(&lock); >flg = new_val; >pthread_cond_signal(&cond); >pthread_mutex_unlock(&lock);

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Austen W Lauria via devel
ink it's also much cleaner as it eliminates an arbitrary sleep from the code - which I see as a good thing as well. "Ralph Castain via devel" ---11/12/2019 09:24:23 AM---> On Nov 11, 2019, at 4:53 PM, Gilles Gouaillardet via

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread George Bosilca via devel
ting this variable to: > > pthread_mutex_lock(&lock); > flg = new_val; > pthread_cond_signal(&cond); > pthread_mutex_unlock(&lock); > > This provides the memory barrier for the thread polling on the flag to see > the update - something the volatile keyword doesn't
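
For reference, the condition-variable pattern quoted in this thread, expanded into a minimal compilable C sketch: flg, lock, cond, and the lock/signal/unlock sequence come from the messages above, while the surrounding scaffolding (the updater thread and main) is illustrative rather than Open MPI source.

    /* Sketch of the signaling pattern quoted above. Compile with:
     *   gcc cond_sketch.c -o cond_sketch -pthread
     * Only flg/lock/cond and the lock-signal-unlock sequence are from
     * the thread; everything else is illustrative scaffolding. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int flg = 0;

    static void *updater(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        flg = 1;                      /* flg = new_val; */
        pthread_cond_signal(&cond);   /* wake the waiting thread */
        pthread_mutex_unlock(&lock);  /* unlock doubles as the memory barrier */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, updater, NULL);

        /* The waiter blocks on the condition instead of spinning on a
         * volatile flag. */
        pthread_mutex_lock(&lock);
        while (!flg)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        puts("flag update observed");
        return 0;
    }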

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Austen W Lauria via devel
- 4.0.2 should be good with a fix. From: John DelSignore To: Open MPI Developers Cc: Austen W Lauria, devel Date: 11/12/2019 02:25 PM Subject: [EXTERNAL] Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Austen W Lauria via devel
Cc: "Ralph Castain" Date: 11/12/2019 01:28 PM Subject:[EXTERNAL] Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView Sent by:"devel" Just to be clear as well: you cannot use the pthread metho

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread John DelSignore via devel
; On Nov 11, 2019, at 4:53 PM, Gilles Gouaillardet via devel <devel@lists.open-mpi.org> wrote: > From: "Ralph Castain via devel" <devel@lists.open-mpi.org> To: "OpenMPI Devel" <devel@lists.open-mpi.org> Cc: "Ralph Castain"

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Ralph Castain via devel
devel" ---11/12/2019 09:24:23 AM---> On Nov 11, 2019, at 4:53 PM, Gilles Gouaillardet via devel mailto:devel@lists.open-mpi.org> > wrote: > From: "Ralph Castain via devel" mailto:devel@lists.open-mpi.org> > To: "OpenMPI Devel" mailto:devel@lists.open-mpi.o

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread George Bosilca via devel
---> On Nov 11, 2019, at 4:53 PM, Gilles Gouaillar]"Ralph > Castain via devel" ---11/12/2019 09:24:23 AM---> On Nov 11, 2019, at 4:53 > PM, Gilles Gouaillardet via devel wrote: > > > From: "Ralph Castain via devel" > To: "OpenMPI Devel" >

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Austen W Lauria via devel
devel" To: "OpenMPI Devel" Cc: "Ralph Castain" Date: 11/12/2019 09:24 AM Subject:[EXTERNAL] Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView Sent by:"devel" &

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-12 Thread Ralph Castain via devel
> On Nov 11, 2019, at 4:53 PM, Gilles Gouaillardet via devel > wrote: > > John, > > OMPI_LAZY_WAIT_FOR_COMPLETION(active) > > > is a simple loop that periodically checks the (volatile) "active" condition, > that is expected to be updated by another thread. > So if you set your breakpoint

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-11 Thread Gilles Gouaillardet via devel
John, OMPI_LAZY_WAIT_FOR_COMPLETION(active) is a simple loop that periodically checks the (volatile) "active" condition, that is expected to be updated by another thread. So if you set your breakpoint too early, and **all** threads are stopped when this breakpoint is hit, you might experienc
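
A minimal sketch of the lazy-wait behavior Gilles describes, assuming nothing about the real macro beyond what is stated above (a loop that periodically re-reads a volatile flag set by another thread); it also shows why a breakpoint that freezes all threads inside the loop hangs, since the thread that would clear the flag never runs:

    /* Illustrative stand-in for the polling loop described above; not
     * the actual OMPI_LAZY_WAIT_FOR_COMPLETION macro.
     * Compile with: gcc lazy_wait_sketch.c -o lazy_wait_sketch -pthread */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile int active = 1;

    static void *progress_thread(void *arg)
    {
        (void)arg;
        usleep(1000);   /* simulate pending work */
        active = 0;     /* the update the polling loop waits for */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, progress_thread, NULL);

        /* Lazy wait: re-check the volatile flag, yielding in between.
         * If a debugger stops all threads here, progress_thread can
         * never clear "active", and the loop spins forever. */
        while (active)
            sched_yield();

        pthread_join(t, NULL);
        puts("completed");
        return 0;
    }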

Re: [OMPI devel] Open MPI v4.0.1: Process is hanging inside MPI_Init() when debugged with TotalView

2019-11-11 Thread Ralph Castain via devel
Hi John Sorry to say, but there is no way to really answer your question as the OMPI community doesn't actively test MPIR support. I haven't seen any reports of hangs during MPI_Init from any release series, including 4.x. My guess is that it may have something to do with the debugger interacti

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-24 Thread Bert Wesarg via devel
FYI, Debian libtool packages take care of this with this patch: https://git.launchpad.net/ubuntu/+source/libtool/tree/debian/patches/link_all_deplibs.patch Best, Bert On Mon, Nov 19, 2018 at 12:01 AM Christopher Samuel wrote: > > Hi Brian, > > On 17/11/18 5:13 am, Barrett, Brian via devel wrote

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-18 Thread Christopher Samuel
Hi Brian, On 17/11/18 5:13 am, Barrett, Brian via devel wrote: > Unfortunately, I don’t have a good idea of what to do now. We already > did the damage on the 3.x series. Our backwards compatibility testing > (as lame as it is) just links libmpi, so it’s all good. But if anyone > uses libtool,

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-16 Thread Barrett, Brian via devel
Gilles - Look at the output of Chris’s libtool link line; you can see it’s explicitly adding a dependency on libopen-pal.so to the test binary. Once it does that, it’s game over, the OS linking system will, rightly, complain about us changing the c:r:a in the libtool version system in a way th

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-14 Thread Gilles Gouaillardet
Chris, I am a bit puzzled by your logs. As far as I understand, ldd libhhgttg.so.1 reports that libopen-rte.so.40 and libopen-pal.so.40 are both dependencies, but that does not say anything about who is depending on them. They could be directly needed by libhhgttg.so.1 (I hope / do not think it is

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-14 Thread Christopher Samuel
On 15/11/18 12:10 pm, Christopher Samuel wrote: > I wonder if it's because they use libtool instead? Yup, it's libtool - using it to compile my toy example shows the same behaviour with "readelf -d" pulling in the private libraries directly. :-( [csamuel@farnarkle2 libtool]$ cat hhgttg.c int answer
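
The transcript above is cut off right after "cat hhgttg.c int answer"; a hedged reconstruction of that toy source, with the function body and build lines guessed, would look like the following. The point is only that a library whose link line names nothing but -lmpi still ends up with direct NEEDED entries on the private libraries when libtool expands the dependency list.

    /* hhgttg.c -- hedged reconstruction of the cut-off toy example;
     * the function body and the libtool lines below are guesses.
     * Built as a shared library against nothing but libmpi, e.g.:
     *   libtool --mode=compile gcc -c hhgttg.c
     *   libtool --mode=link gcc -version-info 1:0:0 \
     *       -o libhhgttg.la hhgttg.lo -lmpi -rpath /usr/local/lib
     * "readelf -d libhhgttg.so.1" then also lists libopen-rte.so.40 /
     * libopen-pal.so.40 as direct dependencies -- the over-linking
     * discussed in this thread. */
    int answer(void)
    {
        return 42;
    }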

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-14 Thread Christopher Samuel
On 15/11/18 11:45 am, Christopher Samuel wrote: > Unfortunately that's not the case, just creating a shared library > that only links in libmpi.so will create dependencies on the private > libraries too in the final shared library. :-( Hmm, I might be misinterpreting the output of "ldd", it looks

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-14 Thread Christopher Samuel
On 15/11/18 2:16 am, Barrett, Brian via devel wrote: > In practice, this should not be a problem. The wrapper compilers (and > our instructions for linking when not using the wrapper compilers) > only link against libmpi.so (or a set of libraries if using Fortran), > as libmpi.so contains the pub

Re: [OMPI devel] Open-MPI backwards compatibility and library version changes

2018-11-14 Thread Barrett, Brian via devel
Chris - When we look at ABI stability for Open MPI releases, we look only at the MPI and SHMEM interfaces, not the interfaces used internally by Open MPI. libopen-pal.so is an internal library, and we do not guarantee ABI stability across minor releases. In 3.0.3, there was a backwar

Re: [OMPI devel] Open MPI vs msys2

2018-10-23 Thread Santiago Serebrinsky
This seems quite undesirable. Do you have any idea if this is going to be fixed? I look forward to your update. Cheers, Santiago On Tue, Oct 23, 2018 at 9:57 AM Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > Meanwhile I found gcc from the gcc package looks for headers in > /usr/in

Re: [OMPI devel] Open MPI vs msys2

2018-10-23 Thread Gilles Gouaillardet
Meanwhile I found gcc from the gcc package looks for headers in /usr/include, but gcc from mingw does not (!) I found a few misc issues and will hopefully post an update soon. Cheers, Gilles Santiago Serebrinsky wrote: >Gilles, > > >You are right, I have > > >$ pacman -Ql msys2-runtime-devel | gre

Re: [OMPI devel] Open MPI vs msys2

2018-10-23 Thread Santiago Serebrinsky
Gilles, You are right, I have $ pacman -Ql msys2-runtime-devel | grep stat msys2-runtime-devel /usr/include/cygwin/stat.h msys2-runtime-devel /usr/include/sys/stat.h msys2-runtime-devel /usr/include/sys/statfs.h msys2-runtime-devel /usr/include/sys/statvfs.h So I wouldn't know why the error mess

Re: [OMPI devel] Open MPI vs msys2

2018-10-22 Thread Gilles Gouaillardet
Santiago, I downloaded and installed msys2 from https://www.msys2.org, and here is what I have on my system gilles@gilles-PC MINGW32 ~ $ uname -a MINGW32_NT-6.1-WOW gilles-PC 2.11.1(0.329/5/3) 2018-09-10 13:25 i686 Msys gilles@gilles-PC MINGW32 ~ $ pacman -Qi msys2-runtime-devel Name

Re: [OMPI devel] Open MPI website borked up?

2018-09-04 Thread Jeff Squyres (jsquyres) via devel
Yes, there was a problem for a short while last week; it was fixed. > On Sep 1, 2018, at 4:55 PM, Ralph H Castain wrote: > > I suspect this is a stale message - I’m not seeing any problem with the > website > > >> On Aug 29, 2018, at 12:55 PM, Howard Pritchard wrote: >> >> Hi Folks, >> >>

Re: [OMPI devel] Open MPI website borked up?

2018-09-01 Thread Ralph H Castain
I suspect this is a stale message - I’m not seeing any problem with the website > On Aug 29, 2018, at 12:55 PM, Howard Pritchard wrote: > > Hi Folks, > > Something seems to be borked up about the OMPI website. Got to website and > you'll > get some odd parsing error appearing. > > Howard >

Re: [OMPI devel] Open MPI v2.1.4rc1

2018-08-10 Thread Jeff Squyres (jsquyres) via devel
Thanks Geoffroy. I don't think I'm worried about this for v2.1.4, and the UCX community hasn't responded. So I'm going to release 2.1.4 as-is. > On Aug 9, 2018, at 3:33 PM, Vallee, Geoffroy R. wrote: > > Hi, > > I tested on Summitdev here at ORNL and here are my comments (but I only have >

Re: [OMPI devel] Open MPI v2.1.4rc1

2018-08-09 Thread Pavel Shamis
Adding Alina and Yossi. On Thu, Aug 9, 2018 at 2:34 PM Vallee, Geoffroy R. wrote: > Hi, > > I tested on Summitdev here at ORNL and here are my comments (but I only > have a limited set of data for summitdev so my feedback is somewhat > limited): > - netpipe/mpi is showing a slightly lower bandwi

Re: [OMPI devel] Open MPI v2.1.4rc1

2018-08-09 Thread Vallee, Geoffroy R.
Hi, I tested on Summitdev here at ORNL and here are my comments (but I only have a limited set of data for summitdev so my feedback is somewhat limited): - netpipe/mpi is showing a slightly lower bandwidth than the 3.x series (I do not believe it is a problem). - I am facing a problem with UCX,

Re: [OMPI devel] Open MPI 3.1.1rc1 posted

2018-07-01 Thread Vallee, Geoffroy R.
Hi, Sorry for the slow feedback, but hopefully I now have what I need to give feedback in a more timely manner... I tested the RC on Summitdev at ORNL (https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/) by running a simple test (I will be running more tests for

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-07-01 Thread Gilles Gouaillardet
I was unable to reproduce this on Ubuntu 14.04.5. Note the default gcc is 4.8; gcc-4.9 can be installed, but no g++ nor gfortran. Did you build Open MPI with the same compiler used to build libUtils.so and a.out? What do "type gcc", "ls -l /usr/bin/gcc", "gcc --version" and "g++ --version" say? On top of the info

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread Jeff Squyres (jsquyres) via devel
"libmpi.so" but through the library "libMyUtils.so". This makes the usage of > parameters "-L/home/dummy/openmpi/build/lib -lmpi" not possible. > > Any more suggestions? > > Thanks, > > L. > > > > > Sent: Fri

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread lille stor
Sent: Friday, June 22, 2018 at 10:17 PM From: "Jeff Squyres (jsquyres)" To: "lille stor" Cc: "Open MPI Developers List" Subject: Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork On Jun 22, 2018, at 4:09 PM, lille stor wrote: > > I tr

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread Jeff Squyres (jsquyres) via devel
On Jun 22, 2018, at 4:09 PM, lille stor wrote: > > I tried to compile the C++ program using "mpic++" like you suggested but > unfortunately g++ still throws the same error > ("/home/dummy/openmpi/build/lib/libopen-pal.so.20: undefined reference to > pthread_atfork"). > > I suspect that the pro
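
The missing symbol here is the classic sign of a final link that lacks the pthread library; mpicc/mpic++ add the needed flags automatically, which is why they were suggested. A hedged, standalone reproduction follows: the file name and compile lines are illustrative, and the failing link assumes a glibc of that era, where pthread_atfork lived outside libc.

    /* atfork_demo.c -- illustrative reproduction, not from the thread.
     *   gcc atfork_demo.c -o atfork_demo           # may fail: undefined
     *                                              # reference to pthread_atfork
     *   gcc atfork_demo.c -o atfork_demo -pthread  # links cleanly
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void before_fork(void) { puts("prepare"); }

    int main(void)
    {
        /* Register a fork handler -- the same call libopen-pal.so.20
         * was reported as leaving unresolved. */
        pthread_atfork(before_fork, NULL, NULL);
        if (fork() == 0)
            _exit(0);
        return 0;
    }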

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread lille stor
pers List" Cc: "Jeff Squyres (jsquyres)" Subject: Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork I think Ralph is a little confused -- 2.1.3 is recent enough. :-) Are you using "mpic++" to compile your application? That should add in all the relevant flag

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread Jeff Squyres (jsquyres) via devel
I think Ralph is a little confused -- 2.1.3 is recent enough. :-) Are you using "mpic++" to compile your application? That should add in all the relevant flags that are needed to compile an Open MPI C++ application. > On Jun 22, 2018, at 3:29 PM, r...@open-mpi.org wrote: > > OMPI 2.1.3??? Is

Re: [OMPI devel] Open MPI: Undefined reference to pthread_atfork

2018-06-22 Thread r...@open-mpi.org
OMPI 2.1.3??? Is there any way you could update to something more recent? > On Jun 22, 2018, at 12:28 PM, lille stor wrote: > > Hi, > > > When compiling a C++ source file named test.cpp that needs a shared library > named libUtils.so (which in its turn needs Open MPI shared library, hence th

Re: [OMPI devel] Open MPI 3.1.0rc4 posted

2018-04-17 Thread Gilles Gouaillardet
The fix is required but not sufficient for PMIx v1.2.5 iirc. It is true no one seemed to care for 6 months, but in all fairness, v3.1 was never released to end users who might use SLURM with PMIx v2.0 The changes are very minimal if PMIx >= v2.1 is used imho, but this is not my call to make. C

Re: [OMPI devel] Open MPI 3.1.0rc4 posted

2018-04-17 Thread r...@open-mpi.org
I’ll let you decide about 3.1.0. FWIW: I think Gilles' fix should work for external PMIx v1.2.5 as well. > On Apr 17, 2018, at 7:56 AM, Barrett, Brian via devel > wrote: > > Do we honestly care for 3.1.0? I mean, we went 6 months without it working > and no one cared. We can’t fix all bugs,

Re: [OMPI devel] Open MPI 3.1.0rc4 posted

2018-04-17 Thread Barrett, Brian via devel
Do we honestly care for 3.1.0? I mean, we went 6 months without it working and no one cared. We can’t fix all bugs, and I’m a little concerned about making changes right before release. Brian > On Apr 17, 2018, at 7:49 AM, Gilles Gouaillardet > wrote: > > Brian, > > https://github.com/ope

Re: [OMPI devel] Open MPI 3.1.0rc4 posted

2018-04-17 Thread Gilles Gouaillardet
Brian, https://github.com/open-mpi/ompi/pull/5081 fixes support for external PMIx v2.0 Support for external PMIx v1 is broken (same in master) and extra dev would be required to fix it. The easiest path, if acceptable, is to simply drop support for PMIx v1 Cheers, Gilles "Barrett, Brian vi

Re: [OMPI devel] Open MPI 3.0.1rc2 available for testing

2018-02-06 Thread Paul Hargrove
All (that I have) looks good to me. I don't have Intel compiler tests today due to planned maintenance at NERSC. My Mac OSX High Sierra system has a failed SSD. The UCX and PSM(1) test systems I was using appear to be retired. I also don't have all the slow ARM and MIPS emulator results yet. Howe

Re: [OMPI devel] Open-MPI killing nodes with mlx5 drivers?

2017-11-05 Thread Christopher Samuel
On 30/10/17 14:07, Christopher Samuel wrote: > We have an issue where codes compiled with Open-MPI kill nodes with > ConnectX-4 and ConnectX-5 cards connected to Mellanox Ethernet switches > using the mlx5 driver from the latest Mellanox OFED For the record, this crash is fixed in Mellanox OFED 4

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-09 Thread Marco Atzeri
On 07/09/2017 16:38, Marco Atzeri wrote: On 07/09/2017 16:29, Jeff Squyres (jsquyres) wrote: On Sep 7, 2017, at 9:09 AM, Marco Atzeri wrote: further issue on cygwin, in addition to an easy libevent issue that I already patched. Is that a patch we should: a) submit upstream to libevent b)

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-09 Thread Marco Atzeri
On 07/09/2017 16:29, Jeff Squyres (jsquyres) wrote: On Sep 7, 2017, at 9:09 AM, Marco Atzeri wrote: Yes, "patcher" replaces the old malloc hooks. I don't think we looked at how patcher would function on Cygwin at all -- it probably isn't relevant on Windows, because it's only necessary fo

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-07 Thread Marco Atzeri
On 07/09/2017 16:29, Jeff Squyres (jsquyres) wrote: On Sep 7, 2017, at 9:09 AM, Marco Atzeri wrote: further issue on cygwin, in addition to an easy libevent issue that I already patched. Is that a patch we should: a) submit upstream to libevent b) patch locally in Open MPI ? Hi Jeff, id

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-07 Thread Jeff Squyres (jsquyres)
On Sep 7, 2017, at 9:09 AM, Marco Atzeri wrote: > > further issue on cygwin, in addition to an easy libevent issue that I already > patched. Is that a patch we should: a) submit upstream to libevent b) patch locally in Open MPI ? > it seems "opal/mca/memory/patcher" is new compared to 1.10.x

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-07 Thread Marco Atzeri
30/08/2017 22:48, Howard Pritchard wrote: Hi Folks, Open MPI 2.1.2rc3 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ further issue on cygwin, in addition to an easy libevent issue that I already patched. it seems "opal/mca/memory/patche

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-05 Thread Jeff Squyres (jsquyres)
On Sep 5, 2017, at 1:00 PM, Marco Atzeri wrote: > > I noticed the removal of autogen.sh but not the replacement with > autogen.pl so I was using the standard autoreconf. Gotcha. Yes, autogen.pl will do Moar Things than a plain vanilla autoreconf. > I will test and will let you know, unfortunat

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-05 Thread Marco Atzeri
On 05/09/2017 17:35, Jeff Squyres (jsquyres) wrote: Marco - Remind me: are you running autogen.pl for a Cygwin-specific reason? I'm unable to replicate your issue: - I downloaded the 2.1.2rc3 tarball - I ran autogen.pl in the tarball - I built the tarball (tried both normal build and a VPATH b

Re: [OMPI devel] Open MPI 3.1 Feature List

2017-09-05 Thread r...@open-mpi.org
We currently have PMIx v2.1.0beta in OMPI master. This includes cross-version support - i.e., OMPI v3.1 would be able to run against an RM using any PMIx version. At the moment, the shared memory (or dstore) support isn’t working across versions, but I’d consider that a “bug” that will hopefully

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-05 Thread Jeff Squyres (jsquyres)
Marco - Remind me: are you running autogen.pl for a Cygwin-specific reason? I'm unable to replicate your issue: - I downloaded the 2.1.2rc3 tarball - I ran autogen.pl in the tarball - I built the tarball (tried both normal build and a VPATH build) I can confirm, too, that config/opal_get_versio

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-02 Thread Gilles Gouaillardet
Marco, can you please detail how you built Open MPI? I guess you downloaded a tarball and built from that. In this case, there is no need to run autogen.pl --force, and unless something is wrong with the timestamps of the tarball, autoreconf should never be invoked. Cheers, Gilles On Sat, Se

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-09-02 Thread Marco Atzeri
On 30/08/2017 22:48, Howard Pritchard wrote: Hi Folks, Open MPI 2.1.2rc3 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ Fixes since rc2: Issue #4122: CMA compilation error in SM BTL. Thanks to Paul Hargrove for catching this. Issue #4

Re: [OMPI devel] Open MPI 3.0.0rc4 available

2017-08-31 Thread Peter Kjellström
On Tue, 29 Aug 2017 17:55:22 + "Barrett, Brian via devel" wrote: > The fourth release candidate for Open MPI 3.0.0 is now available for > download. Changes since rc2 include: ... > https://www.open-mpi.org/software/ompi/v3.0/ TLDR: worked fine on a few different systems for me. I took

Re: [OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-08-30 Thread Paul Hargrove
Pretty much the same report as I gave for 3.0.0rc4 about 24 hours ago: I have nearly completed my normal suite of tests. > Only slow emulated 32-bit ARM and MIPS remain. This time around I've dropped big-endian PPC (because Open MPI did). > However, I've added Apple's public betas of Mac OSX High

Re: [OMPI devel] Open MPI 3.0.0rc4 available

2017-08-30 Thread Jeff Squyres (jsquyres)
Excellent. Many thanks, Paul! > On Aug 30, 2017, at 12:17 AM, Paul Hargrove wrote: > > I have nearly completed my normal suite of tests. > Only slow emulated 32-bit ARM and MIPS remain. > > This time around I've dropped big-endian PPC (because Open MPI did). > However, I've added Apple's publi

Re: [OMPI devel] Open MPI 3.0.0rc4 available

2017-08-29 Thread Paul Hargrove
I have nearly completed my normal suite of tests. Only slow emulated 32-bit ARM and MIPS remain. This time around I've dropped big-endian PPC (because Open MPI did). However, I've added Apple's public betas of Mac OSX High Sierra and Xcode 9. I have no new issues to report, and the ones I raised

Re: [OMPI devel] Open MPI v2.1.2rc1 available

2017-08-16 Thread Jeff Squyres (jsquyres)
Thanks Paul! > On Aug 16, 2017, at 1:22 AM, Paul Hargrove wrote: > > I have not yet had a chance to run this RC through all its paces. > > However, I can say that I have successfully built and run this RC on a system > with Apple's latest public Betas of Mac OS High Sierra and Xcode 9. > > -

Re: [OMPI devel] Open MPI v2.1.2rc1 available

2017-08-15 Thread Paul Hargrove
I have not yet had a chance to run this RC through all its paces. However, I can say that I have successfully built and run this RC on a system with Apple's latest public Betas of Mac OS High Sierra and Xcode 9. -Paul On Thu, Aug 10, 2017 at 11:47 AM, Howard Pritchard wrote: > Hi Folks, > > > O

Re: [OMPI devel] Open MPI 3.0.0 first release candidate posted

2017-06-29 Thread Howard Pritchard
Brian, Things look much better with this patch. We need it for the 3.0.0 release. The patch from 3794 applied cleanly from master. Howard 2017-06-29 16:51 GMT-06:00 r...@open-mpi.org : > I tracked down a possible source of the oob/tcp error - this should > address it, I think: https://github.com/o

Re: [OMPI devel] Open MPI 3.0.0 first release candidate posted

2017-06-29 Thread r...@open-mpi.org
I tracked down a possible source of the oob/tcp error - this should address it, I think: https://github.com/open-mpi/ompi/pull/3794 > On Jun 29, 2017, at 3:14 PM, Howard Pritchard wrote: > > Hi Brian, > > I tested this rc using both srun native lau

Re: [OMPI devel] Open MPI 3.0.0 first release candidate posted

2017-06-29 Thread Howard Pritchard
Hi Brian, I tested this rc using both srun native launch and mpirun on the following systems: - LANL CTS-1 systems (haswell + Intel OPA/PSM2) - LANL network testbed system (haswell + connectX5/UCX and OB1) - LANL Cray XC I am finding some problems with mpirun on the network testbed system. For

Re: [OMPI devel] Open MPI 3.x branch naming

2017-06-02 Thread Josh Hursey
+1 from IBM on removing the v3.x branch sooner rather than later. I've switched our MTT and Jenkins setups - so we should be in good shape. On Wed, May 31, 2017 at 9:57 AM, Barrett, Brian via devel < devel@lists.open-mpi.org> wrote: > > > On May 31, 2017, at 7:52 AM, r...@open-mpi.org wrote: > >

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Barrett, Brian via devel
> On May 31, 2017, at 7:52 AM, r...@open-mpi.org wrote: > >> On May 31, 2017, at 7:48 AM, Jeff Squyres (jsquyres) >> wrote: >> >> On May 30, 2017, at 11:37 PM, Barrett, Brian via devel >> wrote: >>> >>> We have now created a v3.0.x branch based on today’s v3.x branch. I’ve >>> reset all o

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread r...@open-mpi.org
> On May 31, 2017, at 7:48 AM, Jeff Squyres (jsquyres) > wrote: > > On May 30, 2017, at 11:37 PM, Barrett, Brian via devel > wrote: >> >> We have now created a v3.0.x branch based on today’s v3.x branch. I’ve >> reset all outstanding v3.x PRs to the v3.0.x branch. No one has permissions

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
On May 30, 2017, at 11:37 PM, Barrett, Brian via devel wrote: > > We have now created a v3.0.x branch based on today’s v3.x branch. I’ve reset > all outstanding v3.x PRs to the v3.0.x branch. No one has permissions to > pull into the v3.x branch, although I’ve left it in place for a couple o

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
On May 31, 2017, at 9:59 AM, Jeff Squyres (jsquyres) wrote: > >https://www.open-mpi.org/nightly/v3.0/ Ah, you actually already moved it to: https://www.open-mpi.org/nightly/v3.0.x/ Got it; thanks. -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
Brian -- Do we need to change the nightly snapshot URL from https://www.open-mpi.org/nightly/v3.x/ to https://www.open-mpi.org/nightly/v3.0/ > On May 30, 2017, at 11:37 PM, Barrett, Brian via devel > wrote: > > We have now created a v3.0.x branch based on today’s v3.x branch. I’

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-30 Thread Barrett, Brian via devel
We have now created a v3.0.x branch based on today’s v3.x branch. I’ve reset all outstanding v3.x PRs to the v3.0.x branch. No one has permissions to pull into the v3.x branch, although I’ve left it in place for a couple of weeks so that people can slowly update their local git repositories.
