Re: [OMPI users] Q: Getting MPI-level memory use from OpenMPI?

2023-04-17 Thread Brian Dobbins via users
. The second might > be a little more generic, but depends on external tools and might take a > little time to set up. > > George. > > > On Fri, Apr 14, 2023 at 3:31 PM Brian Dobbins via users <users@lists.open-mpi.org> wrote: > >> >> Hi all, >>

[OMPI users] Q: Getting MPI-level memory use from OpenMPI?

2023-04-14 Thread Brian Dobbins via users
Hi all, I'm wondering if there's a simple way to get statistics from OpenMPI as to how much memory the *MPI* layer in an application is taking. For example, I'm running a model and I can get the RSS size at various points in the code, and that reflects the user data for the application, *plus*,
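For reference, the kind of measurement described above (sampling RSS at various points in the code) can be done from inside the program with getrusage(). A minimal sketch, not specific to OpenMPI and only a rough bound, since it captures MPI_Init-time allocations rather than buffers grown later by communication; file name and output format are illustrative:

  /* rss_probe.c - rough estimate of memory added by MPI_Init,
   * using the process high-water RSS from getrusage().
   * Note: on Linux, ru_maxrss is reported in kilobytes.
   * Build:  mpicc rss_probe.c -o rss_probe
   * Run:    mpirun -np 4 ./rss_probe
   */
  #include <stdio.h>
  #include <sys/resource.h>
  #include <mpi.h>

  static long peak_rss_kb(void)
  {
      struct rusage ru;
      getrusage(RUSAGE_SELF, &ru);
      return ru.ru_maxrss;          /* high-water mark, not current RSS */
  }

  int main(int argc, char **argv)
  {
      long before = peak_rss_kb();

      MPI_Init(&argc, &argv);

      long after = peak_rss_kb();
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      printf("rank %d: peak RSS %ld kB before MPI_Init, %ld kB after (+%ld kB)\n",
             rank, before, after, after - before);

      MPI_Finalize();
      return 0;
  }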

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-27 Thread Brian Dobbins via users
Hi Ralph, Thanks again for this wealth of information - we've successfully run the same container instance across multiple systems without issues, even surpassing 'native' performance in edge cases, presumably because the native host MPI is either older or simply tuned differently (e.g., 'eager li

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Brian Dobbins via users
ence of IMPI doesn't preclude > using OMPI containers so long as the OMPI library is fully contained in > that container. Choice of launch method just depends on how your system is > setup. > > Ralph > > > On Jan 26, 2022, at 3:17 PM, Brian Dobbins wrote: >

Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Brian Dobbins via users
Hi Ralph, Afraid I don't understand. If your image has the OMPI libraries installed > in it, what difference does it make what is on your host? You'll never see > the IMPI installation. > > We have been supporting people running that way since Singularity was > originally released, without any pr

Re: [OMPI users] Issues with compilers

2021-01-22 Thread Brian Dobbins via users
As a work-around, but not a 'solution', it's worth pointing out that the (new) Intel compilers are now *usable* for free - no licensing cost or login needed. (As are the MKL, Intel MPI, etc). Link: https://software.intel.com/content/www/us/en/develop/tools/oneapi/all-toolkits.html They've got Yu

Re: [OMPI users] Q: Binding to cores on AWS?

2017-12-22 Thread Brian Dobbins
e bound by orted/mpirun before they are execv'ed, I have > a hard time understanding how not binding MPI tasks to > memory can have a significant impact on performance as long as they > are bound to cores. > > Cheers, > > Gilles > > > On Sat, Dec 23, 2017 at 7

Re: [OMPI users] Q: Binding to cores on AWS?

2017-12-22 Thread Brian Dobbins
ri, Dec 22, 2017 at 2:14 PM, r...@open-mpi.org wrote: > I honestly don’t know - will have to defer to Brian, who is likely out for > at least the extended weekend. I’ll point this one to him when he returns. > > > On Dec 22, 2017, at 1:08 PM, Brian Dobbins wrote: > > >

Re: [OMPI users] Q: Binding to cores on AWS?

2017-12-22 Thread Brian Dobbins
> binding pattern by adding --report-bindings to your cmd line. > > > On Dec 22, 2017, at 11:58 AM, Brian Dobbins wrote: > > > Hi all, > > We're testing a model on AWS using C4/C5 nodes and some of our timers, > in a part of the code with no communication, show

[OMPI users] Q: Binding to cores on AWS?

2017-12-22 Thread Brian Dobbins
Hi all, We're testing a model on AWS using C4/C5 nodes and some of our timers, in a part of the code with no communication, show really poor performance compared to native runs. We think this is because we're not binding to a core properly and thus not caching, and a quick 'mpirun --bind-to cor
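Binding can also be verified from inside the application itself, independently of mpirun's own reporting. A minimal Linux-only sketch (sched_getaffinity is a glibc extension; file name is illustrative) that prints each rank's allowed CPUs, where an unbound rank typically reports every CPU on the node:

  /* affinity_check.c - print the CPUs each MPI rank is allowed to run on.
   * Linux-only: sched_getaffinity() requires _GNU_SOURCE.
   * Build:  mpicc affinity_check.c -o affinity_check
   * Run:    mpirun --bind-to core -np 4 ./affinity_check
   */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <string.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      cpu_set_t mask;
      CPU_ZERO(&mask);
      if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
          perror("sched_getaffinity");
          MPI_Abort(MPI_COMM_WORLD, 1);
      }

      /* Build a list of the CPU ids in this rank's affinity mask. */
      char cpus[1024] = "";
      for (int c = 0; c < CPU_SETSIZE; c++) {
          if (CPU_ISSET(c, &mask)) {
              char buf[16];
              snprintf(buf, sizeof(buf), "%d ", c);
              strncat(cpus, buf, sizeof(cpus) - strlen(cpus) - 1);
          }
      }

      printf("rank %d allowed on CPUs: %s\n", rank, cpus);

      MPI_Finalize();
      return 0;
  }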

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Brian Dobbins
Hi Dave, With which compiler, and even optimized? > > $ `mpif90 --showme` --version | head -n1 > GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17) > $ cat a.f90 > use mpi > if (mpi_version == 3) call undefined() > print *, mpi_version > end > $ mpif90 a.f90 && ./a.out >
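For contrast with the Fortran test above (and since the thread subject mentions #defines): in C, mpi.h exposes MPI_VERSION and MPI_SUBVERSION as preprocessor constants, so version-dependent code can be excluded before compilation rather than relying on dead-code elimination; the Fortran mpi module only provides them as integer parameters. A small C illustration, not from the thread itself:

  /* version_check.c - compile-time MPI version selection in C.
   * Build:  mpicc version_check.c -o version_check
   */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

  #if MPI_VERSION >= 3
      /* Only compiled when the implementation advertises MPI-3 or newer. */
      MPI_Request req;
      MPI_Ibarrier(MPI_COMM_WORLD, &req);
      MPI_Wait(&req, MPI_STATUS_IGNORE);
      printf("MPI-%d.%d: used nonblocking barrier\n", MPI_VERSION, MPI_SUBVERSION);
  #else
      MPI_Barrier(MPI_COMM_WORLD);
      printf("MPI-%d.%d: fell back to blocking barrier\n", MPI_VERSION, MPI_SUBVERSION);
  #endif

      MPI_Finalize();
      return 0;
  }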

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Brian Dobbins
Hi Jeff, On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond wrote: > You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf > to see the status of all implementations w.r.t. MPI-3 as of one year ago. > Thank you - that's something I was curious about, and it's incredibly helpful.

[OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Brian Dobbins
Hi everyone, This isn't really a problem, per se, but rather a search for a more elegant solution. It also isn't specific to OpenMPI, but I figure the experience and knowledge of people here makes it a suitable place to ask: I'm working on some code that'll be used and downloaded by others on

Re: [OMPI users] MPI-IO Inconsistency over Lustre using OMPI 1.3

2009-03-03 Thread Brian Dobbins
Hi Nathan, I just ran your code here and it worked fine - CentOS 5 on dual Xeons w/ IB network, and the kernel is 2.6.18-53.1.14.el5_lustre.1.6.5smp. I used an OpenMPI 1.3.0 install compiled with Intel 11.0.081 and, independently, one with GCC 4.1.2. I tried a few different times with varying

Re: [OMPI users] Problem with feupdateenv

2008-12-07 Thread Brian Dobbins
Hi Sangamesh, I think the problem is that you're loading a different version of OpenMPI at runtime: *[master:17781] [ 1] /usr/lib64/openmpi/libmpi.so.0 [0x34b19544b8]* .. The path there is to '/usr/lib64/openmpi', which is probably a system-installed GCC version. You want to use your versio
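One way to confirm which libmpi the process actually resolved at runtime, before digging through LD_LIBRARY_PATH, is to ask the dynamic linker where an MPI symbol came from. A small Linux/glibc sketch using dladdr(), not from the original thread and with an illustrative file name:

  /* which_mpi.c - report which shared object MPI_Init was resolved from.
   * Uses dladdr(), a glibc extension (requires _GNU_SOURCE; link with -ldl
   * on older glibc).
   * Build:  mpicc which_mpi.c -o which_mpi -ldl
   * Run:    mpirun -np 1 ./which_mpi
   */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      Dl_info info;
      if (dladdr((void *)MPI_Init, &info) && info.dli_fname) {
          /* If this prints /usr/lib64/openmpi/... instead of your own build,
           * the runtime picked up the system installation. */
          printf("MPI_Init resolved from: %s\n", info.dli_fname);
      } else {
          printf("could not determine which library provided MPI_Init\n");
      }

      MPI_Finalize();
      return 0;
  }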

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Brian Dobbins
Hi guys, On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote: > Actually I had much different results, > > gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7 > pgi/7.2 > mpich2 gcc > For some reason, the difference in minutes didn't come through, it seems, but I would gue

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Brian Dobbins
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote: > On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: > >> OpenMPI : 120m 6s >> MPICH2 : 67m 44s >> > > That seems to indicate that something else is going on -- with -np 1, there > should be no MPI communication, right? I wonder if the memory all
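If the memory allocator is the suspect, one crude way to probe it (hypothetical, not something from the thread) is to time a plain malloc/touch/free loop in an otherwise idle MPI program, rebuild against each stack, and compare the -np 1 timings; a sketch with illustrative sizes:

  /* alloc_bench.c - crude check of allocator overhead inside an MPI program.
   * A large gap between stacks at -np 1 would point at the allocator
   * rather than communication.
   * Build:  mpicc -O2 alloc_bench.c -o alloc_bench
   * Run:    mpirun -np 1 ./alloc_bench
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      const int iters = 10000;
      const size_t bytes = 1 << 20;        /* 1 MiB per allocation */

      double t0 = MPI_Wtime();
      for (int i = 0; i < iters; i++) {
          char *p = malloc(bytes);
          if (!p) { fprintf(stderr, "malloc failed\n"); MPI_Abort(MPI_COMM_WORLD, 1); }
          memset(p, 0, bytes);             /* touch the pages so the work is real */
          free(p);
      }
      double t1 = MPI_Wtime();

      printf("%d x %zu-byte malloc/memset/free: %.3f s\n", iters, bytes, t1 - t0);

      MPI_Finalize();
      return 0;
  }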

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brian Dobbins
anyone can download and run? Cheers, - Brian Brian Dobbins Yale Engineering HPC

[OMPI users] Q: OpenMPI's use of /tmp and hanging apps via FS problems?

2008-08-16 Thread Brian Dobbins
DOES succeed sometimes) is about 25% right now. My best guess is that this is because the file system is overloaded, thus not allowing timely I/O or access to OpenMPI's files, but I wanted to get a quick understanding of how these files are used by OpenMPI and whether the FS does indeed seem

Re: [OMPI users] Problem with WRF and pgi-7.2

2008-07-23 Thread Brian Dobbins
rally pretty responsive. (And if you don't, I will, since it'd be nice to see it work without a hybrid MPI installation!) Cheers, - Brian Brian Dobbins Yale Engineering HPC On Wed, Jul 23, 2008 at 12:09 PM, Brock Palen wrote: > Not yet, if you have no ideas I will op

Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-17 Thread Brian Dobbins
Hi Marco and Jeff, My own knowledge of OpenMPI's internals is limited, but I thought I'd add my less-than-two-cents... > I've found only one way to have TCP connections bound only to > > the eth1 interface, using both of the following MCA directives on the > > command line: > > > > mpirun

Re: [OMPI users] Q: Problems launching MPMD applications? ('mca_oob_tcp_peer_try_connect' error 103)

2007-12-05 Thread Brian Dobbins
this again to be sure. Again, many thanks for the help! With best wishes, - Brian Brian Dobbins Yale University HPC

Re: [OMPI users] Q: Problems launching MPMD applications? ('mca_oob_tcp_peer_try_connect' error 103)

2007-12-05 Thread Brian Dobbins
st missing a crucial mca parameter? Thanks very much, - Brian Brian Dobbins Yale University HPC

[OMPI users] Q: Problems launching MPMD applications? ('mca_oob_tcp_peer_try_connect' error 103)

2007-12-05 Thread Brian Dobbins
ry duplicating the mca parameters after the colon since I figured they might not propagate, thus perhaps it was trying to use the wrong interface, but that didn't help either. Thanks very much, - Brian Brian Dobbins Yale University HPC

Re: [OMPI users] OpenIB problems

2007-11-21 Thread Brian Dobbins
before on a 32-bit Xeon with gigabit links, so while there are still lots of variables, it should help me narrow things down. Cheers, - Brian Brian Dobbins Yale University HPC

Re: [OMPI users] OpenIB problems

2007-11-21 Thread Brian Dobbins
r up to 20 (from 7), but that didn't fix it. Cheers, - Brian Brian Dobbins Yale University HPC

Re: [OMPI users] [Fwd: MPI question/problem] including code attachments

2007-06-27 Thread Brian Dobbins
Hi guys, I just came across this thread while googling when I faced a similar problem with a certain code - after scratching my head for a bit, it turns out the solution is pretty simple. My guess is that Jeff's code has its own copy of 'mpif.h' in its source directory, and in all likelihood, i