Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Ralph Castain via users
Sure - but then we aren't talking about containers any more, just vendor vs OMPI. I'm not getting in the middle of that one! On Jan 27, 2022, at 6:28 PM, Gilles Gouaillardet via users <users@lists.open-mpi.org> wrote: Thanks Ralph, Now I get what you had in mind. Strictly speaking,

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Gilles Gouaillardet via users
Thanks Ralph, Now I get what you had in mind. Strictly speaking, you are assuming that Open MPI's performance matches the system MPI's performance. This is generally true for common interconnects and/or those that have providers for libfabric or UCX, but not so for "exotic"
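
A quick way to see whether a given Open MPI build can actually drive the host interconnect is to list the transport components it was compiled with. A minimal sketch, assuming ompi_info (shipped with Open MPI) and fi_info (shipped with libfabric) are on the PATH; component names and output will vary by build:

    # List the point-to-point transport components in this Open MPI build
    ompi_info | grep -E 'btl|mtl|pml'

    # Check specifically for UCX and OFI (libfabric) support
    ompi_info | grep -iE 'ucx|ofi'

    # If libfabric is installed, list the fabric providers visible on this node
    fi_info -l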

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Ralph Castain via users
See inline. Ralph. On Jan 27, 2022, at 10:05 AM, Brian Dobbins <bdobb...@gmail.com> wrote: Hi Ralph, Thanks again for this wealth of information - we've successfully run the same container instance across multiple systems without issues, even surpassing 'native' performance in edge

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Brian Dobbins via users
Hi Ralph, Thanks again for this wealth of information - we've successfully run the same container instance across multiple systems without issues, even surpassing 'native' performance in edge cases, presumably because the native host MPI is either older or simply tuned differently (e.g., 'eager

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Ralph Castain via users
Just to complete this - there is always a lingering question regarding shared memory support. There are two ways to resolve that one: * run one container per physical node, launching multiple procs in each container. The procs can then utilize shared memory _inside_ the container. This is the
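
For comparison, a minimal sketch of the widely used one-container-instance-per-rank launch pattern, assuming Apptainer/Singularity and a hypothetical image app.sif; whether same-node ranks can still use shared memory this way depends on the container runtime exposing a common /dev/shm, which Apptainer does by default. Rank counts and paths below are illustrative only:

    # 64 ranks total, 32 per node, one container instance per rank
    mpirun -np 64 --map-by ppr:32:node \
        apptainer exec app.sif /opt/app/bin/solver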

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Ralph Castain via users
> Fair enough Ralph! I was implicitly assuming a "build once / run everywhere" > use case, my bad for not making my assumption clear. > If the container is built to run on a specific host, there are indeed other > options to achieve near native performances. > Err...that isn't actually what I

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Jeff Squyres (jsquyres) via users
This is part of the challenge of HPC: there are general solutions, but no specific silver bullet that works in all scenarios. In short: everyone's setup is different. So we can offer advice, but not necessarily a 100%-guaranteed solution that will work in your environment. In general, we

Re: [OMPI users] Gadget2 error 818 when using more than 1 process?

2022-01-27 Thread Jeff Squyres (jsquyres) via users
I'm afraid that without any further details, it's hard to help. I don't know why Gadget2 would complain about its parameters file. From what you've stated, it could be a problem with the application itself. Have you talked to the Gadget2 authors? -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Diego Zuccato via users
Sorry for the noob question, but: what should I configure for OpenMPI "to perform on the host cluster"? Any link to a guide would be welcome! Slightly extended rationale for the question: I'm currently using "unconfigured" Debian packages and getting some strange behaviour... Maybe it's just
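
As a starting point, the usual advice for getting Open MPI "to perform on the host cluster" is to build it against the host's fabric and launcher libraries rather than relying on generic distribution defaults. A minimal, illustrative sketch only, assuming UCX, PMIx, and Slurm development packages are installed in default locations (paths, versions, and options are site-specific assumptions, not a recipe taken from this thread):

    ./configure --prefix=/opt/openmpi \
        --with-ucx=/usr \
        --with-pmix=/usr \
        --with-slurm
    make -j all && make install

    # Confirm the UCX components made it into the build
    /opt/openmpi/bin/ompi_info | grep -i ucx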