-mpi.org/en/v5.0.x/launching-apps/ssh.html#finding-open-mpi-executables-and-libraries.
From: T Brouns
Sent: Sunday, May 5, 2024 4:37 PM
To: users@lists.open-mpi.org
Cc: Jeff Squyres (jsquyres) ; hear...@gmail.com
Subject: Re: [OMPI users] Fwd: Unable to run basic
, you could prefix your LD_LIBRARY_PATH environment variable
with the libdir from the Open MPI installation you just created.
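Concretely, if the new install lives under a prefix like the one below (the path is an assumption; substitute whatever --prefix you gave configure), the prefixing looks like:

```shell
# Hypothetical Open MPI install prefix -- substitute your actual --prefix
OMPI_PREFIX="$HOME/openmpi-5.0.x"
# Prepend the new libdir so it wins over any system-wide Open MPI
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Putting the new libdir first matters: the dynamic loader searches LD_LIBRARY_PATH left to right.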
From: T Brouns
Sent: Saturday, May 4, 2024 10:56 AM
To: Jeff Squyres (jsquyres) ; users@lists.open-mpi.org
Subject: Re: [OMPI users
Your config.log file shows that you are trying to build Open MPI 2.1.6 and that
configure failed.
I'm not sure how to square this with the information that you provided in your
message... did you upload the wrong config.log?
Can you provide all the information from
(sorry this is so long – it's a bunch of explanations followed by 2 suggestions
at the bottom)
One additional thing worth mentioning is that your mpirun command line does not
seem to explicitly be asking for the "ucx" PML component, but the error message
you're getting indicates that you
No worries – glad you figured it out!
From: users on behalf of afernandez via
users
Sent: Wednesday, January 31, 2024 10:56 AM
To: Open MPI Users
Cc: afernandez
Subject: Re: [OMPI users] Seg error when using v5.0.1
Hello,
I'm sorry as I totally messed up
Cool!
I dimly remember this project; it was written independently of the main Open
MPI project.
It looks like it supports the TCP OOB and TCP BTL.
The TCP OOB has since moved from Open MPI's "ORTE" sub-project to the
independent PRRTE project. Regardless, TCP OOB traffic is effectively about
We develop and build with clang on macOS frequently; it would be surprising if
it didn't work.
That being said, I was able to replicate both errors reported here. On macOS
Sonoma with Xcode 15.x and the OneAPI compilers:
* configure fails in the PMIx libevent section, complaining about how
ion as
well, but chances are: if you have a question, others have the same question.
So submit your question to
us<https://docs.google.com/forms/d/e/1FAIpQLSefccrJaKOjkEDLroO3Fq4fvn7o8v6N5WNSIaQ9VbSY16x_Rw/viewform>
so that we can include them in the presentation!
Hope to see you in Denver!
--
Jeff Squyres
Volker --
If that doesn't work, send all the information requested here:
https://docs.open-mpi.org/en/v5.0.x/getting-help.html
From: users on behalf of Volker Blum via
users
Sent: Saturday, October 28, 2023 8:47 PM
To: Matt Thompson
Cc: Volker Blum ; Open MPI
From: caitlin lamirez
Sent: Wednesday, October 25, 2023 1:17 PM
To: Jeff Squyres (jsquyres)
Subject: Re: [OMPI users] MPI4Py Only Using Rank 0
Hi Jeff,
After getting that error, I did reinstall MPI4py using conda remove mpi4py and
conda install mpi4py. However, I am still getting th
This usually means that you have accidentally switched to using a different
MPI implementation under the covers somehow. E.g., did you somehow
accidentally start using mpiexec from MPICH instead of Open MPI? Or did MPI4Py
somehow get upgraded or otherwise re-build itself for MPICH, but
In addition to what Gilles mentioned, I'm curious: is there a reason you have
hardware threads enabled? You could disable them in the BIOS, and then each of
your MPI processes can use the full core, not just a single hardware thread.
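Whether hardware threads are enabled can be checked from the OS before touching the BIOS; a quick sketch (Linux-only, assumes `lscpu` is available):

```shell
# Show how many hardware threads share each physical core.
# A value of 2 (or more) means SMT / hyper-threading is enabled;
# 1 means each MPI process already gets a full core.
lscpu | grep -i 'thread(s) per core'
```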
From: users on behalf of
application that replicates the
issue? That would be something we could dig into and investigate.
From: Aziz Ogutlu
Sent: Wednesday, August 9, 2023 10:31 AM
To: Jeff Squyres (jsquyres) ; Open MPI Users
Subject: Re: [OMPI users] Segmentation fault
Hi Jeff,
I'm
.
From: Aziz Ogutlu
Sent: Wednesday, August 9, 2023 10:08 AM
To: Jeff Squyres (jsquyres) ; Open MPI Users
Subject: Re: [OMPI users] Segmentation fault
Hi Jeff,
I also tried with OpenMPI 4.1.5, I got same error.
On 8/9/23 17:05, Jeff Squyres (jsquyres) wrote:
I'm afraid I
I'm afraid I don't know anything about the SU2 application.
You are using Open MPI v4.0.3, which is fairly old. Many bug fixes have been
released since that version. Can you upgrade to the latest version of Open MPI
(v4.1.5)?
From: users on behalf of Aziz
MPI_Allreduce should work just fine, even with negative numbers. If you are
seeing something different, can you provide a small reproducer program that
shows the problem? We can dig deeper into it if we can reproduce the problem.
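A reproducer along these lines is what's being asked for; this is a hedged sketch (file name assumed, guarded so it only compiles and runs where an MPI toolchain is installed):

```shell
# Small reproducer: MPI_Allreduce summing negative integers across ranks.
cat > allreduce_neg.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int local = -(rank + 1);   /* each rank contributes a negative value */
    int sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %d\n", sum);   /* with -np 4: -1-2-3-4 = -10 */
    MPI_Finalize();
    return 0;
}
EOF
# Guard: skip gracefully on machines without an MPI compiler wrapper
if command -v mpicc >/dev/null 2>&1; then
    mpicc allreduce_neg.c -o allreduce_neg && mpirun -np 4 ./allreduce_neg
else
    echo "mpicc not found; skipping"
fi
```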
mpirun's exit status can't distinguish between MPI processes
It's not clear if that message is being emitted by Open MPI.
It does say it's falling back to a different behavior if libnuma.so is not
found, so it appears that it's treating it as a warning, not an error.
From: users on behalf of Luis Cebamanos via
users
Sent: Tuesday, July 18, 2023 12:51 PM
To: Jeff Squyres (jsquyres)
Cc: Open MPI Users
Subject: Re: [OMPI users] Error build Open MPI 4.1.5 with GCC 11.3
As soon as you pointed out /usr/lib/gcc/x86_64-linux-gnu/9/include/float.h
that made me think of the previous build.
I did "make clean" a _
if you had run "make clean" and then re-run configure, it probably would
have built ok. But deleting the whole source tree and re-configuring +
re-building also works.
From: Jeffrey Layton
Sent: Tuesday, July 18, 2023 11:38 AM
To: Jeff Squyres (jsquyres)
Cc: Ope
That's a little odd. Usually, the specific .h files that are listed as
dependencies came from somewhere -- usually as part of the GNU Autotools
dependency analysis.
I'm guessing that /usr/lib/gcc/x86_64-linux-gnu/9/include/float.h doesn't
actually exist on your system -- but then how did
From: George Bosilca
Sent: Wednesday, July 12, 2023 2:26 PM
To: Open MPI Users
Cc: Jeff Squyres (jsquyres) ; Elad Cohen
Subject: Re: [OMPI users] OMPI compilation error in Making all datatypes
I can't replicate this on my setting, but I am not using the tar archive from
the OMPI website (I use the git ta
The output you sent (in the attached tarball) doesn't really make much sense:
libtool: link: ar cru .libs/libdatatype_reliable.a
.libs/libdatatype_reliable_la-opal_datatype_pack.o
.libs/libdatatype_reliable_la-opal_datatype_unpack.o
libtool: link: ranlib .libs/libdatatype_reliable.a
plications in the "examples" directory.
From: 深空探测
Sent: Tuesday, June 13, 2023 8:59 PM
To: Open MPI Users
Cc: John Hearns ; Jeff Squyres (jsquyres)
; gilles.gouaillar...@gmail.com
; t...@pasteur.fr
Subject: Re: [OMPI users] Issue with Running MPI Job on Cent
Your steps are generally correct, but I cannot speak for whether your
/home/wude/.bashrc file is executed for both non-interactive and interactive
logins. If /home/wude is your $HOME, it probably is, but I don't know about
your specific system.
Also, you should be aware that MPI applications
Greetings Macro. I think you directed this email to the wrong mailing list --
this list is for users of the MPI Testing Tool, which is a specific tool that
we use in the development and testing of Open MPI itself.
General user errors should likely be reported either to the user's mailing list
is selected make my above
comment moot. Sorry for any confusion!
From: users on behalf of Jeff Squyres
(jsquyres) via users
Sent: Monday, March 6, 2023 10:40 AM
To: Chandran, Arun ; Open MPI Users
Cc: Jeff Squyres (jsquyres)
Subject: Re: [OMPI users] What
is an open question to George / the UCX team)
From: Chandran, Arun
Sent: Monday, March 6, 2023 10:31 AM
To: Jeff Squyres (jsquyres) ; Open MPI Users
Subject: RE: [OMPI users] What is the best choice of pml and btl for intranode
communication
[Public]
Hi,
Yes, i
If this run was on a single node, then UCX probably disabled itself since it
wouldn't be using InfiniBand or RoCE to communicate between peers.
Also, I'm not sure your command line was correct:
perf_benchmark $ mpirun -np 32 --map-by core --bind-to core ./perf --mca pml
ucx
You probably
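The ordering issue being hinted at: mpirun stops parsing its own options at the executable name, so anything after ./perf goes to the application. A hedged sketch of the likely intent (the benchmark name and flag values come from the quoted command, not verified here):

```shell
# Wrong: "--mca pml ucx" is handed to ./perf, not to mpirun
#   mpirun -np 32 --map-by core --bind-to core ./perf --mca pml ucx
# Likely intended: MCA parameters before the executable
mpirun --mca pml ucx -np 32 --map-by core --bind-to core ./perf
```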
Open MPI's libraries and from what's available in
$prefix/lib/openmpi [the latter is the default]). If ompi_info doesn't show
any output with "ucx", "openib", and/or "psm", then your Open MPI does not
contain any IB support.
--
Jeff Squyres
jsquy...@cisco.com
is continuing to investigate. If it turns into a problem with Open MPI,
we'll report back here.
--
Jeff Squyres
jsquy...@cisco.com
From: Jeff Squyres (jsquyres)
Sent: Wednesday, November 30, 2022 7:42 AM
To: timesir ; Open MPI Users
Subject: Re: mpi program gets stuck
Ok
Can you try steps 1-3 in
https://docs.open-mpi.org/en/v5.0.x/validate.html#testing-your-open-mpi-installation
?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Blaze Kort via
users
Sent: Saturday, December 3, 2022 5:52 AM
To: users@lists.open
plm_base_verbose 100 --mca
rmaps_base_verbose 100 --mca ras_base_verbose 100 --prtemca
grpcomm_base_verbose 5 --prtemca state_base_verbose 5 ./ring_c
And please send the output back here to the list.
--
Jeff Squyres
jsquy...@cisco.com
From: timesir
Sent: Tuesday
Python MPI program?
That would just eliminate a few more variables from the troubleshooting process.
In the "examples" directory in the tarball I provided are trivial "hello world"
and "ring" MPI programs. A "make" should build them all. Try running hello_
More specifically, Gilles created a skeleton "ceph" component in this draft
pull request: https://github.com/open-mpi/ompi/pull/11122
If anyone has any cycles to work on it and develop it beyond the skeleton that
is currently there, that would be great!
--
Jeff Squyres
jsquy...
ent. You
might want to investigate the suggestion from the help message to set the
memlock limits correctly, and see if using the qib0 interfaces would yield
better performance.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Gilles Gouaillardet
via use
Ok, this is a good / consistent output. That being said, I don't grok what is
happening here: it says it finds 2 slots, but then it tells you it doesn't have
enough slots.
Let me dig deeper and get back to you...
--
Jeff Squyres
jsquy...@cisco.com
From
"em dash", or somesuch.
--
Jeff Squyres
jsquy...@cisco.com
From: timesir
Sent: Friday, November 18, 2022 8:59 AM
To: Jeff Squyres (jsquyres) ; users@lists.open-mpi.org
; gilles.gouaillar...@gmail.com
Subject: Re: users Digest, Vol 4818, Issue 1
The
tfile" component altogether.
How did you install Open MPI? Can you send the information from "Run time
problems" on
https://docs.open-mpi.org/en/v5.0.x/getting-help.html#for-run-time-problems ?
--
Jeff Squyres
jsquy...@cisco.com
From: timesir
Sent:
I see 2 config.log files -- can you also send the other information requested
on that page? I.e., the version you're using (I think you said in a prior
email that it was 5.0rc9, but I'm not 100% sure), and the output from
"ompi_info --all".
--
Jeff Squyres
jsquy...@cisco.com
of detailed internal function call tracing inside
Open MPI itself, due to performance considerations. You might want to look
into flamegraphs, or something similar...?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of arun c via users
Sent: Saturday
Yes, somehow I'm not seeing all the output that I expect to see. Can you
ensure that if you're copy-and-pasting from the email, that it's actually using
"dash dash" in front of "mca" and "machinefile" (vs. a copy-and-pasted "em
dash").
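One quick way to check a pasted command for that substitution; a small sketch (the mangled command below is a made-up example):

```shell
# Detect a pasted em dash (U+2014) where "--" was intended
cmd='mpirun —mca btl self,tcp ./a.out'   # example of a mangled paste
if printf '%s' "$cmd" | grep -q '—'; then
  echo "em dash detected: retype the -- by hand"
fi
```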
asked you to run with 2 variables last time -- can you re-run
with "mpirun --mca rmaps_base_verbose 100 --mca ras_base_verbose 100 ..."?
Turning on the RAS verbosity should show us what the hostfile component is
doing.
--
Jeff Squyres
jsquy...@cisco.com
Sorry for the delay in replying.
To tie up this thread for the web mail archives: this same question was
cross-posted over in the devel list; I replied there.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of mrlong via users
Sent: Sunday, October
n get some
debugging output and see why the slots aren't working for you? Show the full
output, like I did above (e.g., cat the hostfile, and then mpirun with the MCA
param and all the output). Thanks!
--
Jeff Squyres
jsquy...@cisco.com
From: devel on behalf of
" is in this file? If that's the
case, then that's where Open MPI is getting these CLI arguments.
--
Jeff Squyres
jsquy...@cisco.com
From: Jeffrey D. (JD) Tamucci
Sent: Wednesday, October 5, 2022 5:16 PM
To: Jeff Squyres (jsquyres)
Cc: Open MPI Users ; Pritchard Jr.,
configure+build with just one of those two options, does it work?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Pritchard Jr.,
Howard via users
Sent: Wednesday, October 5, 2022 11:47 AM
To: Jeffrey D. (JD) Tamucci
Cc: Pritchard Jr., Howard ; Open MPI
.deps/signal.Tpo -c \
../../../../../../opal/mca/event/libevent2022/libevent/signal.c -fPIC \
-DPIC -E > signal-preprocessed.c
--
Jeff Squyres
jsquy...@cisco.com
From: Zilore Mumba
Sent: Wednesday, September 28, 2022 1:50 AM
To: Jeff Squyres (jsquyres)
Cc: users@lists.op
&& ./foo
NSIG is 65
You can see that NSIG is definitely defined for me.
It's likely that until the above trivial program can compile properly, Open MPI
won't compile properly, either.
--
Jeff Squyres
jsquy...@cisco.com
From: Zilore Mumba
Sent: Tuesday, Septe
Can you re-try with the latest Open MPI v4.1.x release (v4.1.4)? There have
been many bug fixes since v4.1.0.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Zilore Mumba via
users
Sent: Tuesday, September 27, 2022 5:10 AM
To: users@lists.open
Just to follow up for the email web archives: this issue was followed up in
https://github.com/open-mpi/ompi/issues/10841.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Rob Kudyba via
users
Sent: Thursday, September 22, 2022 2:15 PM
To: users
). This will allow the MPI processes to
use shared memory for on-node communication.
--
Jeff Squyres
jsquy...@cisco.com
From: Jeff Squyres (jsquyres)
Sent: Tuesday, September 13, 2022 10:08 AM
To: Open MPI Users
Cc: Gilles Gouaillardet
Subject: Re: [OMPI users
ons. It's a surprisingly complicated topic.
In the v4.x series, note that you can use "mpirun --report-bindings ..." to see
exactly where Open MPI thinks it has bound each process. Note that this
binding occurs before each MPI process starts; it's nothing that the
application itself n
No, it does not, sorry.
What are you trying to do?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Mccall, Kurt E.
(MSFC-EV41) via users
Sent: Friday, September 9, 2022 2:30 PM
To: OpenMpi User List (users@lists.open-mpi.org)
Cc: Mccall, Kurt E
MPI message passing (and ignore the "normal" Ethernet interfaces).
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Harutyun Umrshatyan
via users
Sent: Tuesday, September 6, 2022 2:58 AM
To: Open MPI Users
Cc: Harutyun Umrshatyan
Subject:
param defaults to:
rc_verbs,ud_verbs,rc_mlx5,dc_mlx5,ud_mlx5,cuda_ipc,rocm_ipc
(you'll need to ask the UCX community what each of those do/are)
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Bernstein, Noam CIV
USN NRL (6393) Washington DC (USA) via
Fair point.
If there's anyone out there who's unwilling to reply publicly, please feel free
to reply directly to me.
Specifically: we want to know if Open MPI v5.0.0 dropping support for SLURM
versions older than 2017.11 is going to be a problem.
--
Jeff Squyres
jsquy...@cisco.
These are great data points!
I'd love to hear from others, too.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Andrew Reid via
users
Sent: Tuesday, August 16, 2022 10:21 AM
To: Open MPI Users
Cc: Andrew Reid
Subject: Re: [OMPI users] Oldest
before they are refreshed, which
fits nicely within that 5-year window. But in less well-funded institutions,
HPC clusters could have lifetimes longer than 5 years.
Do any of you run versions of SLURM that are more than 5 years old?
--
Jeff Squyres
jsquy...@cisco.com
Thanks for the feedback! I made a follow-up PR
https://github.com/open-mpi/ompi/pull/10652 incorporating your feedback and
feedback from Harmen Stoppels.
I would have @mentioned you in the PR, but it doesn't appear that you have a
Github ID (or, I couldn't find it, at least).
--
Jeff Squyres
Reuti -- thanks for the comments+fix about missing "-Wl," (oops!). In addition
to yours, some more came in on https://github.com/open-mpi/ompi/pull/10624
after it was merged. I'll make a follow-on PR with these suggestions.
--
Jeff Squyres
jsquy...
, or
know of anyone who is using them.
Thank you!
--
Jeff Squyres
jsquy...@cisco.com
for your environment, but you might
want to check the output of "readelf -d ..." to be sure.
Does that additional text help explain things?
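The readelf check mentioned above can look like this; a sketch (/bin/sh is a stand-in -- point it at your actual MPI application or library):

```shell
# Inspect the dynamic section of a binary: which libraries it needs,
# and whether an RPATH/RUNPATH is baked in
readelf -d /bin/sh | grep -E 'RPATH|RUNPATH|NEEDED'
```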
--
Jeff Squyres
jsquy...@cisco.com
From: Jeff Squyres (jsquyres)
Sent: Saturday, August 6, 2022 9:36 AM
To: Open
I can't see the image that you sent; it seems to be broken.
But I think you're asking about this:
https://www.open-mpi.org/faq/?category=building#installdirs
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Sebastian Gutierrez
via users
Sent
Reuti --
See my disclaimers on other posts about apologies for taking so long to reply!
This code was written forever ago; I had to dig through it a bit, read the
comments and commit messages, and try to remember why it was done this way.
What I thought would be a 5-minute search turned into
Can you send the full output of "ifconfig" (or "ip addr") from one of your
compute nodes?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of George Johnson via
users
Sent: Monday, July 4, 2022 11:06 AM
To: users@lists.ope
. Hence: the handle is
meaningless to the application -- it's just an opaque value that the user
program can pass around.
User applications *can* compare it to the value for MPI_COMM_NULL, but that's
about it.
--
Jeff Squyres
jsquy...@cisco.com
From
(connect/accept/etc.) has always been a bit shaky; they have been tested to
work in very, very specific conditions, and not made super robust to work in
many different / generalized cases. Is there a chance you can orient your app
to not use the MPI dynamic APIs?
--
Jeff Squyres
jsquy
of the hostname). I'm surprised that using the naive module
(instead of the fwd module) doesn't solve your problem. ...oh shoot, I see
why. It's because I had a typo in what I suggested to you.
Please try: mpirun --mca regx naive ...
(i.e., "regx", not "regex")
--
Jeff Squyre
"fwd" regex component is selected by default, but it has
certain expectations about the format of hostnames. Try using the "naive"
regex component, instead.
--
Jeff Squyres
jsquy...@cisco.com
From: Patrick Begou
Sent: Thursday, June 16, 2
What exactly is the error that is occurring?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Patrick Begou via
users
Sent: Thursday, June 16, 2022 3:21 AM
To: Open MPI Users
Cc: Patrick Begou
Subject: [OMPI users] OpenMPI and names
obituaries/downers-grove-il/ewing-lusk-10754811/amp
--
Jeff Squyres
jsquy...@cisco.com
ts
source code repo: https://github.com/openpmix/openpmix/. It's a different
project than Open MPI, but you can certainly ask questions on their mailing
lists, too.
--
Jeff Squyres
jsquy...@cisco.com
From: victor sv
Sent: Tuesday, May 17, 2022 4:00 AM
To: Je
, by definition, the OS won't have
visibility of the packets). Regardless, all of those structs are defined in
their BTL / MTL / PML / etc. components. We don't have formal documentation of
any of them, sorry!
--
Jeff Squyres
jsquy...@cisco.com
From
k and Open MPI transport are you looking to sniff?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of victor sv via users
Sent: Sunday, May 15, 2022 3:55 PM
To: users@lists.open-mpi.org
Cc: victor sv
Subject: [OMPI users] Network traff
necessary on some M1s (e.g., Scott's)
but not others (e.g., George's).
We'll put a guard in against the "unlimited" case in future releases.
See https://github.com/open-mpi/ompi/issues/10358 for more details, but I
figured I'd put the workaround out here on the mailing list.
-
Scott and I conversed a bit off list, and I got more data. I posted everything
in https://github.com/open-mpi/ompi/issues/10358 -- let's follow up on this
issue there.
--
Jeff Squyres
jsquy...@cisco.com
From: George Bosilca
Sent: Thursday, May 5, 2022
You can use "lldb -p PID" to attach to a running process.
--
Jeff Squyres
jsquy...@cisco.com
From: Scott Sayres
Sent: Thursday, May 5, 2022 11:22 AM
To: Jeff Squyres (jsquyres)
Cc: Open MPI Users
Subject: Re: [OMPI users] mpirun hangs on m1 mac
you posted earlier implies that
the parent mpirun hadn't even finished its fork/exec sequence (i.e., mpirun
itself is still in the "do_parent()" function, which implies that it didn't
complete the pipe handshake that happens immediately after forking the child
process... which is weird).
-
start seeing output, good! If it completes, better!
If it hangs, and/or if you don't see any output at all, do this:
ps auxwww | egrep 'mpirun|foo.sh'
It should show mpirun and 2 copies of foo.sh (and probably a grep). Does it?
--
Jeff Squyres
jsquy...@cisco.com
. E.g.:
./configure CFLAGS=-g ...
make -j 8 all
[sudo] make install
(put whatever other configure flags you want in there, such as a custom prefix,
... etc.)
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of George Bosilca via
users
Sent
ples
make
mpirun -np 4 hello_c
mpirun -np 4 ring_c
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Scott Sayres via
users
Sent: Tuesday, May 3, 2022 1:07 PM
To: users@lists.open-mpi.org
Cc: Scott Sayres
Subject: [OMPI users] mpirun hangs on m1 ma
Can you send all the information listed under "For compile problems" (please
compress!):
https://www.open-mpi.org/community/help/
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Cici Feng via users
Sent: Friday, April 22, 20
else? It might be useful to compile Open MPI (and/or
other libraries that you're using) with -g so that you can get more meaningful
stack traces upon error -- that might give some insight into where / why the
failure is occurring.
--
Jeff Squyres
jsquy...@cisco.com
A little more color on Gilles' answer: I believe that we had some Open MPI
community members work on adding M1 support to Open MPI, but Gilles is
absolutely correct: the underlying compiler has to support the M1, or you won't
get anywhere.
--
Jeff Squyres
jsquy...@cisco.com
Thanks for the poke! Sorry we missed replying to your github issue. Josh
replied to it this morning.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Bernstein, Noam CIV
USN NRL (6393) Washington DC (USA) via users
Sent: Tuesday, March 15
occur.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Crni Gorac via
users
Sent: Tuesday, February 22, 2022 7:37 AM
To: users@lists.open-mpi.org
Cc: Crni Gorac
Subject: [OMPI users] handle_wc() in openib and
IBV_WC_DRIVER2/MLX5DV_WC_RAW_WQE
got the
message. From back in my IB days, the typical first place to look for errors
like this is to check the layer 0 and layer 1 networking with Nvidia-level
diagnostics to ensure that the network itself is healthy.
--
Jeff Squyres
jsquy...@cisco.com
). If you can upgrade to Open MPI v4.1.2 and the latest UCX, see
if you are still getting those MXM error messages.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Angel de Vicente
via users
Sent: Friday, February 18, 2022 5:46 PM
To: Gilles
It's used for compressing the startup time messages in PMIx. I.e., the traffic
for when you "mpirun ...".
It's mostly beneficial when launching very large MPI jobs. If you're only
launching across several nodes, the performance improvement isn't really
noticeable.
--
Jeff Squ
and 2 are probably the most relevant, and you can
probably skip the parts about PMIx (circle back to that later for more advanced
knowledge).
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Diego Zuccato via
users
Sent: Thursday, January
I'm afraid that without any further details, it's hard to help. I don't know
why Gadget2 would complain about its parameters file. From what you've stated,
it could be a problem with the application itself.
Have you talked to the Gadget2 authors?
--
Jeff Squyres
jsquy...@cisco.com
I'm afraid I don't know anything about Gadget, so I can't comment there. How
exactly does the application fail?
Can you try upgrading to Open MPI v4.1.2?
What networking are you using?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Diego
fixed everything yet.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Paul Kapinos via users
Sent: Tuesday, January 4, 2022 4:27 AM
To: Jeff Squyres (jsquyres) via users
Cc: Paul Kapinos
Subject: Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI
I filed https://github.com/open-mpi/ompi/issues/9795 to track the issue; let's
followup there.
I tried to tag everyone on this thread; feel free to subscribe to the issue if
I didn't guess your github ID properly.
--
Jeff Squyres
jsquy...@cisco.com
, you should be able to find the corresponding .m4
file for the test source code.
--
Jeff Squyres
jsquy...@cisco.com
From: Matt Thompson
Sent: Thursday, December 30, 2021 4:01 PM
To: Jeff Squyres (jsquyres)
Cc: Wadud Miah; Open MPI Users
Subject: Re
that process is fairly tricky, and
somewhat fragile (because we have to patch Libtool _after_ it is created). But
if someone wants to make a PR, we can evaluate it.
--
Jeff Squyres
jsquy...@cisco.com
From: Matt Thompson
Sent: Thursday, December 30, 2021 3:55
are... complicated.
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Matt Thompson via
users
Sent: Thursday, December 23, 2021 11:41 AM
To: Wadud Miah
Cc: Matt Thompson; Open MPI Users
Subject: Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI
The conclusion we came to on that issue was that this was an issue with Intel
ifort. Was anyone able to raise this with Intel ifort tech support?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Matt Thompson via
users
Sent: Thursday
o a single
MPI process, who then gathers and emits them (i.e., if there's only
stdout/stderr coming from a single MPI process, the output won't get
interleaved with anything else).
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Fisher
/stderr lines can just
get munged together.
Can you check for convergence a different way?
--
Jeff Squyres
jsquy...@cisco.com
From: users on behalf of Fisher (US), Mark S
via users
Sent: Thursday, December 2, 2021 10:48 AM
To: users@lists.open-mpi.org
Cc