. The second might
> be a little more generic, but depends on external tools and might take a
> little time to set up.
>
> George.
>
>
> On Fri, Apr 14, 2023 at 3:31 PM Brian Dobbins via users <
> users@lists.open-mpi.org> wrote:
>
Hi all,
I'm wondering if there's a simple way to get statistics from OpenMPI as
to how much memory the *MPI* layer in an application is taking. For
example, I'm running a model and I can get the RSS size at various points
in the code, and that reflects the user data for the application, *plus*,
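One low-tech way to approximate this is to sample the process's resident set
size around the points of interest and reduce across ranks. A minimal sketch,
assuming a Linux /proc filesystem (the rss_kb helper is illustrative, not an
Open MPI facility):

    /* Sketch: sum each rank's VmRSS (from /proc/self/status) on rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    static long rss_kb(void) {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        long kb = -1;
        if (!f) return -1;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "VmRSS: %ld", &kb) == 1) break;
        fclose(f);
        return kb;
    }

    int main(int argc, char **argv) {
        int rank;
        long mine, total;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        mine = rss_kb();
        MPI_Reduce(&mine, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("Total RSS across ranks: %ld kB\n", total);
        MPI_Finalize();
        return 0;
    }

Sampling before and after MPI_Init gives a rough upper bound on what the MPI
layer itself adds, though shared libraries and buffers blur the picture.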
Hi Ralph,
Thanks again for this wealth of information - we've successfully run the
same container instance across multiple systems without issues, even
surpassing 'native' performance in edge cases, presumably because the
native host MPI is either older or simply tuned differently (e.g., 'eager
limits').
> The presence of IMPI doesn't preclude
> using OMPI containers so long as the OMPI library is fully contained in
> that container. Choice of launch method just depends on how your system is
> set up.
>
> Ralph
>
>
> On Jan 26, 2022, at 3:17 PM, Brian Dobbins wrote:
>
Hi Ralph,
> Afraid I don't understand. If your image has the OMPI libraries installed
> in it, what difference does it make what is on your host? You'll never see
> the IMPI installation.
>
> We have been supporting people running that way since Singularity was
> originally released, without any problems.
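For reference, the launch pattern being described (host mpirun driving a
container whose Open MPI is self-contained) usually looks something like this,
with image and binary names as placeholders:

    $ mpirun -np 4 singularity exec ./myapp.sif /opt/app/bin/mpi_app

The host mpirun and the in-container libmpi do need to speak a compatible
launch protocol (e.g., matching PMIx generations), which is the usual caveat.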
As a work-around, but not a 'solution', it's worth pointing out that the
(new) Intel compilers are now *usable* for free - no licensing cost or
login needed (as are MKL, Intel MPI, etc.).
Link:
https://software.intel.com/content/www/us/en/develop/tools/oneapi/all-toolkits.html
They've got Yu
e bound by orted/mpirun before they are execv'ed, I have
> a hard time understanding how not binding MPI tasks to
> memory can have a significant impact on performance as long as they
> are bound to cores.
>
> Cheers,
>
> Gilles
>
>
> On Sat, Dec 23, 2017 at 7
On Fri, Dec 22, 2017 at 2:14 PM, r...@open-mpi.org wrote:
> I honestly don’t know - will have to defer to Brian, who is likely out for
> at least the extended weekend. I’ll point this one to him when he returns.
>
>
> On Dec 22, 2017, at 1:08 PM, Brian Dobbins wrote:
>
>
>
> binding pattern by adding --report-bindings to your cmd line.
>
>
> On Dec 22, 2017, at 11:58 AM, Brian Dobbins wrote:
Hi all,
We're testing a model on AWS using C4/C5 nodes and some of our timers, in
a part of the code with no communication, show really poor performance
compared to native runs. We think this is because we're not binding to a
core properly and thus not caching effectively, and a quick 'mpirun --bind-to core'
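For anyone hitting the same symptom, the relevant mpirun flags are along
these lines (process count and binary are placeholders):

    $ mpirun -np 36 --bind-to core --map-by core --report-bindings ./model

--report-bindings prints each rank's core mask at startup, so it's easy to
confirm the binding actually took effect.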
Hi Dave,
With which compiler, and even optimized?
>
> $ `mpif90 --showme` --version | head -n1
> GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)
> $ cat a.f90
> use mpi
> if (mpi_version == 3) call undefined()
> print *, mpi_version
> end
> $ mpif90 a.f90 && ./a.out
>
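In C the equivalent check can even happen at preprocessing time, since mpi.h
defines the standard version as macros; a minimal sketch:

    /* Sketch: compile-time MPI standard-version check via standard macros. */
    #include <mpi.h>
    #include <stdio.h>

    int main(void) {
    #if MPI_VERSION >= 3
        puts("Built against an MPI-3 (or newer) mpi.h");
    #endif
        printf("MPI standard: %d.%d\n", MPI_VERSION, MPI_SUBVERSION);
        return 0;
    }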
Hi Jeff,
On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond
wrote:
> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
> to see the status of all implementations w.r.t. MPI-3 as of one year ago.
>
Thank you - that's something I was curious about, and it's incredibly
helpful.
Hi everyone,
This isn't really a problem, per se, but rather a search for a more
elegant solution. It also isn't specific to OpenMPI, but I figure the
experience and knowledge of people here make it a suitable place to ask:
I'm working on some code that'll be used and downloaded by others on
Hi Nathan,
I just ran your code here and it worked fine - CentOS 5 on dual Xeons w/
IB network, and the kernel is 2.6.18-53.1.14.el5_lustre.1.6.5smp. I used an
OpenMPI 1.3.0 install compiled with Intel 11.0.081 and, independently, one
with GCC 4.1.2. I tried a few different times with varying
Hi Sangamesh,
I think the problem is that you're loading a different version of OpenMPI
at runtime:
*[master:17781] [ 1] /usr/lib64/openmpi/libmpi.so.0 [0x34b19544b8]*
.. The path there is to '/usr/lib64/openmpi', which is probably a
system-installed GCC version. You want to use your version instead.
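The quick way to confirm and fix that sort of mismatch (paths below are
illustrative) is to see which libmpi the binary actually resolves and put the
intended install first on the search path:

    $ ldd ./my_app | grep libmpi
    $ export LD_LIBRARY_PATH=/path/to/your/openmpi/lib:$LD_LIBRARY_PATH
    $ which mpirun    # should point into the same install

mpirun's --prefix option can also help when remote nodes don't inherit the
environment.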
Hi guys,
On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
> Actually I had much different results,
>
> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
> pgi/7.2
> mpich2 gcc
>
For some reason, the difference in minutes didn't come through, it seems,
but I would guess
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote:
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>> OpenMPI : 120m 6s
>> MPICH2 : 67m 44s
>>
>
> That seems to indicate that something else is going on -- with -np 1, there
> should be no MPI communication, right? I wonder if the memory all
anyone can download and run?
Cheers,
- Brian
Brian Dobbins
Yale Engineering HPC
DOES succeed sometimes) is about 25% right now. My
best guess is that this is because the file system is overloaded, thus not
allowing timely I/O or access to OpenMPI's files, but I wanted to get a
quick understanding of how these files are used by OpenMPI and whether the
FS does indeed seem
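If a slow shared filesystem is the culprit, one common mitigation is to point
Open MPI's session directory at node-local storage; in the orted-based
releases the knob is the orte_tmpdir_base MCA parameter (the path here is
just an example):

    $ mpirun --mca orte_tmpdir_base /tmp -np 64 ./app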
generally pretty responsive. (And if you don't, I
will, since it'd be nice to see it work without a hybrid MPI installation!)
Cheers,
- Brian
Brian Dobbins
Yale Engineering HPC
On Wed, Jul 23, 2008 at 12:09 PM, Brock Palen wrote:
> Not yet; if you have no ideas, I will op
Hi Marco and Jeff,
My own knowledge of OpenMPI's internals is limited, but I thought I'd add
my less-than-two-cents...
> > I've found only one way to have TCP connections bound only to
> > the eth1 interface, using both of the following MCA directives on the
> > command line:
> >
> > mpirun
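The snippet cuts the command off, but the two directives in question are
presumably the interface-include parameters for the TCP BTL and the
out-of-band layer, something like:

    $ mpirun --mca btl_tcp_if_include eth1 --mca oob_tcp_if_include eth1 -np 4 ./app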
this again to be
sure.
Again, many thanks for the help!
With best wishes,
- Brian
Brian Dobbins
Yale University HPC
just missing a crucial MCA parameter?
Thanks very much,
- Brian
Brian Dobbins
Yale University HPC
I tried
duplicating the MCA parameters after the colon, since I figured they might
not propagate and it was perhaps trying to use the wrong interface, but
that didn't help either.
Thanks very much,
- Brian
Brian Dobbins
Yale University HPC
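For context, the colon syntax referred to above launches several app contexts
under one mpirun, and --mca settings given once on the command line apply to
the whole job (at least in the releases I've used), which is why duplicating
them after the colon shouldn't be needed. Binary names here are placeholders:

    $ mpirun --mca btl tcp,self -np 2 ./frontend : -np 4 ./worker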
before on a 32-bit Xeon
with gigabit links, so while there are still lots of variables, it
should help me narrow things down.
Cheers,
- Brian
Brian Dobbins
Yale University HPC
r up to 20 (from 7), but
that didn't fix it.
Cheers,
- Brian
Brian Dobbins
Yale University HPC
Hi guys,
I just came across this thread while googling when I faced a similar
problem with a certain code - after scratching my head for a bit, it
turns out the solution is pretty simple. My guess is that Jeff's code
has its own copy of 'mpif.h' in its source directory, and in all
likelihood, it's being picked up in place of the Open MPI one.
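A quick way to check for (and sidestep) a stale header like that, with
placeholder paths:

    $ find . -name mpif.h              # any hit here shadows the real header
    $ mv src/mpif.h src/mpif.h.bak     # let the compiler wrapper's include path win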