Re: [OMPI devel] External PMIx/PRRTE and "make dist"

2021-11-12 Thread Heinz, Michael William via devel
Nevermind - I see you haven't actually pushed to ompi/master yet.

I've been hitting this issue so I'll give your branch a try.

-Original Message-
From: devel  On Behalf Of Heinz, Michael 
William via devel
Sent: Friday, November 12, 2021 4:40 PM
To: Open MPI Developers 
Cc: Heinz, Michael William 
Subject: Re: [OMPI devel] External PMIx/PRRTE and "make dist"

Brian, just a heads up - I still see

=== Submodule: 3rd-party/openpmix
==> ERROR: Missing

The submodule "3rd-party/openpmix" is missing.

Perhaps you forgot to "git clone --recursive ...", or you need to
"git submodule update --init --recursive"...?

Even though I specified --with-pmix=/usr/local.

-Original Message-
From: devel  On Behalf Of Barrett, Brian via 
devel
Sent: Friday, November 12, 2021 3:35 PM
To: Open MPI Developers 
Cc: Barrett, Brian 
Subject: [OMPI devel] External PMIx/PRRTE and "make dist"

Just a quick heads up that I just committed
https://github.com/open-mpi/ompi/pull/9649, which changes Open MPI's behavior
around PMIx/PRRTE and external builds.  Previously, the configure scripts for
the internally packaged PMIx and PRRTE were always run.  Now, if the user
specifies --with-{pmix,prrte}={external,[path]}, Open MPI's configure will not
run the sub-configure for the package that the user has asked to be an external
dependency.  This has the side effect of breaking "make dist" in those
situations.  So, going forward, if you add --with-pmix=external or
--with-prrte=external on master (and likely soon 5.0), you will *not* be able
to successfully run "make dist" in that build tree.  If you need to run
"make dist", run Open MPI's configure with no pmix/prrte arguments.  Given the
general split in use cases between linking against an external PMIx/PRRTE and
building a distribution tarball, this is not anticipated to be a problem in
practice.

Thanks,

Brian
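
For reference, a hedged sketch of the two configure paths Brian describes
above (only the --with-{pmix,prrte} flags come from his message; the make
invocations and the bare configure line are illustrative):

    # External PMIx/PRRTE build -- "make dist" will NOT work in this tree:
    ./configure --with-pmix=external --with-prrte=external
    make -j8

    # Tarball-building tree -- omit the pmix/prrte arguments so the internal
    # sub-configures run and "make dist" stays functional:
    ./configure
    make dist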



Re: [OMPI devel] External PMIx/PRRTE and "make dist"

2021-11-12 Thread Heinz, Michael William via devel
Brian, just a heads up - I still see

=== Submodule: 3rd-party/openpmix
==> ERROR: Missing

The submodule "3rd-party/openpmix" is missing.

Perhaps you forgot to "git clone --recursive ...", or you need to
"git submodule update --init --recursive"...?

Even though I specified --with-pmix=/usr/local.
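
For anyone else hitting this before the fix lands, a hedged sketch of the
usual workaround -- populating the submodules so the check passes, then still
pointing configure at the external install (commands are the ones quoted in
the error text plus OMPI's standard autogen.pl; /usr/local is just the prefix
from this message):

    # In an existing clone, populate the 3rd-party submodules so the check passes:
    git submodule update --init --recursive

    # Or start over from a fresh recursive clone:
    #   git clone --recursive https://github.com/open-mpi/ompi.git

    # Then regenerate and configure against the external PMIx anyway:
    ./autogen.pl
    ./configure --with-pmix=/usr/local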

-Original Message-
From: devel  On Behalf Of Barrett, Brian via 
devel
Sent: Friday, November 12, 2021 3:35 PM
To: Open MPI Developers 
Cc: Barrett, Brian 
Subject: [OMPI devel] External PMIx/PRRTE and "make dist"

Just a quick heads up that I just committed
https://github.com/open-mpi/ompi/pull/9649, which changes Open MPI's behavior
around PMIx/PRRTE and external builds.  Previously, the configure scripts for
the internally packaged PMIx and PRRTE were always run.  Now, if the user
specifies --with-{pmix,prrte}={external,[path]}, Open MPI's configure will not
run the sub-configure for the package that the user has asked to be an external
dependency.  This has the side effect of breaking "make dist" in those
situations.  So, going forward, if you add --with-pmix=external or
--with-prrte=external on master (and likely soon 5.0), you will *not* be able
to successfully run "make dist" in that build tree.  If you need to run
"make dist", run Open MPI's configure with no pmix/prrte arguments.  Given the
general split in use cases between linking against an external PMIx/PRRTE and
building a distribution tarball, this is not anticipated to be a problem in
practice.

Thanks,

Brian



Re: [OMPI devel] Support for AMD M100?

2021-02-11 Thread Heinz, Michael William via devel
That’s what I thought. Thanks.

From: Jeff Squyres (jsquyres) 
Sent: Thursday, February 11, 2021 1:11 PM
To: Heinz, Michael William 
Cc: Open MPI Developers List 
Subject: Re: [OMPI devel] Support for AMD M100?

There's not really any generic "accelerator" infrastructure in Open MPI itself 
-- there's a bunch of explicit CUDA support.

But even some of that moved downward into both Libfabric and UCX and (at least 
somewhat) out of OMPI.

That being said, we just added the AVX MPI_Op component -- equivalent
components could be added for CUDA and/or AMD's GPU (what API does it use --
OpenCL?).  Even so, I would imagine that the data inputs would need to be very
large to make it worthwhile (in wall-clock terms) to offload MPI_Op operations
to a discrete GPU on the other side of the PCI bus.
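
As a hedged illustration of where that component lives (the output format and
component set vary by build, and the benchmark binary name is just a
placeholder):

    # List the MPI_Op ("op" framework) components this build provides; the
    # avx component mentioned above shows up here on builds that include it:
    ompi_info | grep "MCA op"

    # Restrict the op framework to one component while testing, e.g.:
    mpirun --mca op avx -np 2 ./reduce_test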




On Feb 11, 2021, at 1:02 PM, Heinz, Michael William
<michael.william.he...@cornelisnetworks.com> wrote:

Pretty much, yeah.

From: Jeff Squyres (jsquyres) <jsquy...@cisco.com>
Sent: Thursday, February 11, 2021 12:58 PM
To: Open MPI Developers List <devel@lists.open-mpi.org>
Cc: Heinz, Michael William <michael.william.he...@cornelisnetworks.com>
Subject: Re: [OMPI devel] Support for AMD M100?

On Feb 11, 2021, at 12:23 PM, Heinz, Michael William via devel
<devel@lists.open-mpi.org> wrote:

Has the subject of supporting AMD’s new GPUs come up?

We’re discussing supporting it in PSM2 but it occurred to me that that won’t 
help much if higher-level APIs don’t support it, too…

You mean supporting the AMD GPU in the same way that we have CUDA support for 
NVIDIA GPUs?

--
Jeff Squyres
jsquy...@cisco.com


--
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI devel] Support for AMD M100?

2021-02-11 Thread Heinz, Michael William via devel
Pretty much, yeah.

From: Jeff Squyres (jsquyres) 
Sent: Thursday, February 11, 2021 12:58 PM
To: Open MPI Developers List 
Cc: Heinz, Michael William 
Subject: Re: [OMPI devel] Support for AMD M100?

On Feb 11, 2021, at 12:23 PM, Heinz, Michael William via devel
<devel@lists.open-mpi.org> wrote:

Has the subject of supporting AMD’s new GPUs come up?

We’re discussing supporting it in PSM2 but it occurred to me that that won’t 
help much if higher-level APIs don’t support it, too…

You mean supporting the AMD GPU in the same way that we have CUDA support for 
NVIDIA GPUs?

--
Jeff Squyres
jsquy...@cisco.com



[OMPI devel] Support for AMD M100?

2021-02-11 Thread Heinz, Michael William via devel
Has the subject of supporting AMD's new GPUs come up?

We're discussing supporting it in PSM2 but it occurred to me that that won't 
help much if higher-level APIs don't support it, too...

---
Michael Heinz
Fabric Software Engineer, Cornelis Networks



Re: [OMPI devel] v3.0.6rc2 and v3.1.6rc2 available for testing

2020-01-31 Thread Heinz, Michael William via devel
I've run the 3.1.6rc2 and 4.0.3rc3 src rpms through some smoke tests and they 
both built and ran properly on RHEL 8.
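
For anyone repeating this, a rough, hedged sketch of the kind of smoke test
meant here (exact package and file names depend on the rc and the build host;
hello_c.c is the example program shipped in the Open MPI tarball's examples/
directory):

    # Rebuild the source RPM and install the resulting binary package:
    rpmbuild --rebuild openmpi-3.1.6rc2-1.src.rpm
    sudo dnf install ~/rpmbuild/RPMS/x86_64/openmpi-3.1.6rc2*.rpm

    # Compile and run a trivial MPI program as the smoke test:
    mpicc examples/hello_c.c -o hello_c
    mpirun -np 2 ./hello_c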

> -Original Message-
> From: devel  On Behalf Of Jeff Squyres
> (jsquyres) via devel
> Sent: Thursday, January 30, 2020 3:39 PM
> To: Open MPI Developers List 
> Cc: Jeff Squyres (jsquyres) 
> Subject: [OMPI devel] v3.0.6rc2 and v3.1.6rc2 available for testing
> 
> Minor updates since rc1:
> 
> 3.0.6rc2 and 3.1.6rc2:
> - Fix run-time linker issues with OMPIO on newer Linux distros.
> 
> 3.1.6rc2 only:
> - Fix issue with zero-length blockLength in MPI_TYPE_INDEXED.
> 
> Please test:
> 
>https://www.open-mpi.org/software/ompi/v3.0/
>https://www.open-mpi.org/software/ompi/v3.1/
> 
> --
> Jeff Squyres
> jsquy...@cisco.com



Re: [OMPI devel] v4.0.3rc3 ready for testing

2020-01-31 Thread Heinz, Michael William via devel
I’ve run the 3.1.6rc2 and 4.0.3rc3 src rpms through some smoke tests and they 
both built and ran properly on RHEL 8.

From: devel  On Behalf Of Geoffrey Paulsen 
via devel
Sent: Wednesday, January 29, 2020 7:03 PM
To: devel@lists.open-mpi.org
Cc: Geoffrey Paulsen 
Subject: [OMPI devel] v4.0.3rc3 ready for testing

Please test v4.0.3rc3:
   https://www.open-mpi.org/software/ompi/v4.0/

Changes since v4.0.2 include:

  4.0.3 -- January, 2020
  
- Add support for Mellanox Connectx6.
- Fix a problem with Fortran compiler wrappers ignoring use of
  disable-wrapper-runpath configure option.  Thanks to David
  Shrader for reporting.
- Fixed an issue with trying to use mpirun on systems where neither
  ssh nor rsh is installed.
- Address some problems found when using XPMEM for intra-node message
  transport.
- Improve dimensions returned by MPI_Dims_create for certain
  cases.  Thanks to @aw32 for reporting.
- Fix an issue when sending messages larger than 4GB. Thanks to
  Philip Salzmann for reporting this issue.
- Add ability to specify alternative module file path using
  Open MPI's RPM spec file.  Thanks to @jschwartz-cray for reporting.
- Clarify use of --with-hwloc configuration option in the README.
  Thanks to Marcin Mielniczuk for raising this documentation issue.
- Fix an issue with shmem_atomic_set.  Thanks to Sameh Sharkawi for reporting.
- Fix a problem with MPI_Neighbor_alltoall(v,w) for cartesian communicators
  with cyclic boundary conditions.  Thanks to Ralph Rabenseifner and
  Tony Skjellum for reporting.
- Fix an issue using Open MPIO on 32 bit systems.  Thanks to
  Orion Poplawski for reporting.
- Fix an issue with NetCDF test deadlocking when using the vulcan
  Open MPIO component.  Thanks to Orion Poplawski for reporting.
- Fix an issue with the mpi_yield_when_idle parameter being ignored
  when set in the Open MPI MCA parameter configuration file.
  Thanks to @iassiour for reporting.
- Address an issue with Open MPIO when writing/reading more than 2GB
  in an operation.  Thanks to Richard Warren for reporting.


---
Geoffrey Paulsen
Software Engineer, IBM Spectrum MPI
Email: gpaul...@us.ibm.com



Re: [OMPI devel] Open MPI BTL TCP interface mapping review request

2019-12-17 Thread Heinz, Michael William via devel
William,

You seem to have posted the same pull request twice?

From: devel  On Behalf Of Zhang, William via 
devel
Sent: Tuesday, December 17, 2019 2:16 PM
To: devel@lists.open-mpi.org
Cc: Zhang, William 
Subject: [OMPI devel] Open MPI BTL TCP interface mapping review request

Hello devel,

Can somebody review these two patches before they get lost in the mires of 
time? https://github.com/open-mpi/ompi/pull/7167   
https://github.com/open-mpi/ompi/pull/7167

These PRs fix some common issues with the Open MPI TCP BTL.

Thanks,
William Zhang


Re: [OMPI devel] Intel OPA and Open MPI

2019-04-24 Thread Heinz, Michael William via devel
So, 

Would it be worthwhile for us to start doing test builds now? Is the code ready 
for that at this time?
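
For reference, a hedged sketch of the kind of comparison Nathan suggests in
the quoted message below (the component names are the ones he mentions;
availability depends on the build, and the OSU latency benchmark is just an
example workload):

    # Force the btl/ofi path (what Nathan expects to perform better):
    mpirun --mca pml ob1 --mca btl self,ofi -np 2 ./osu_latency

    # Force the mtl/ofi path for comparison:
    mpirun --mca pml cm --mca mtl ofi -np 2 ./osu_latency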

> -Original Message-
> From: devel [mailto:devel-boun...@lists.open-mpi.org] On Behalf Of
> Nathan Hjelm via devel
> Sent: Friday, April 12, 2019 11:19 AM
> To: Open MPI Developers 
> Cc: Nathan Hjelm ; Castain, Ralph H
> ; Yates, Brandon 
> Subject: Re: [OMPI devel] Intel OPA and Open MPI
> 
> That is accurate. We expect to support OPA with the btl/ofi component. It
> should give much better performance than osc/pt2pt + mtl/ofi. What would
> be good for you to do on your end is to verify that everything works as
> expected and that the performance is on par with what you expect.
> 
> -Nathan
> 
> > On Apr 12, 2019, at 9:11 AM, Heinz, Michael William
>  wrote:
> >
> > Hey guys,
> >
> > So, I’ve watched the videos, dug through the release notes, and
> participated in a few of the weekly meetings and I’m feeling a little more
> comfortable about being a part of Open MPI - and I’m looking forward to it.
> >
> > But I find myself needing to look for some direction for my participation
> over the next few months.
> >
> > First - a little background. Historically, I’ve been involved with IB/OPA
> development for 17+ years now, but for the past decade or so I’ve been
> entirely focused on fabric management rather than application-level stuff.
> (Heck, if you ever wanted to complain about why OPA management
> datagrams are different from IB MADs, feel free to point the finger at me,
> I’m happy to explain why the new ones are better… ;-) ) However, it was only
> recently that the FM team was given the additional responsibility for
> maintaining / participating in our MPI efforts, with very little opportunity
> for a transfer of information from the prior team.
> >
> > So, while I’m looking forward to this new role I’m feeling a bit
> overwhelmed - not least of which because I will be unavailable for about 8
> weeks this summer…
> >
> > In particular, I found an issue in our internal tracking systems that says
> (and I may have mentioned this before…)
> >
> > OMPI v5.0.0 will remove the osc/pt2pt component, which is the only component
> that the MTLs (PSM2 and OFI) use. OMPI v5.0.0 is planned to be released during
> summer 2019 (no concrete dates):
> https://github.com/open-mpi/ompi/wiki/5.0.x-FeatureList. The implication is
> that none of the MTLs used for Omni-Path will support one-sided MPI APIs (RMA).
> >
> > Is this still accurate? The current feature list says:
> >
> > If osc/rdma supports all possible scenarios (e.g., all BTLs support the RDMA
> methods osc/rdma needs), this should allow us to remove osc/pt2pt (i.e.,
> 100% migrated to osc/rdma).
> >
> > If this is accurate, I’m going to need help from the other maintainers to
> understand the reason this is being done, the scope of this effort and where
> we need to focus our attention. To deal with the lack of coverage over the
> summer, I’ve asked a co-worker, Brandon Yates, to start sitting in on the
> weekly meetings with me.
> >
> > Again, I’m looking forward to both the opportunity of working with an open
> source team, and the chance to focus on the users of our software instead of
> just the management of the fabric - I’m just struggling at the moment to get
> a handle on this potential deadline.
> >
> > ---
> > Mike Heinz
> > Networking Fabric Software Engineer
> > Intel Corporation
> >
___
devel mailing list
devel@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/devel