, but it has certain expectations about the format of hostnames. Try using the
"naive" regex component instead.
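In Open MPI 4.x the hostname parsing referred to here lives in the `regx` framework, so the suggestion can be tried directly on the command line. A minimal sketch (assuming a build that ships the `naive` component; `myhosts` and `my_app` are placeholders):

```shell
# The "naive" regx component skips the prefix+number hostname compression,
# so node names containing "_" are passed through as-is (an assumption based
# on the advice above; verify against your own build).
mpirun --mca regx naive -hostfile myhosts -np 4 ./my_app

# List the regx components your build actually provides:
ompi_info --all | grep -i regx
```

This requires an Open MPI installation and cluster to run, so it is shown only as a command sketch.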
--
Jeff Squyres
jsquy...@cisco.com
________
From: Patrick Begou
Sent: Thursday, June 16, 2022 9:48 AM
To: Jeff Squyres (jsquyres); Open MPI
that is occurring?
From: users on behalf of Patrick Begou via
users
Sent: Thursday, June 16, 2022 3:21 AM
To: Open MPI Users
Cc: Patrick Begou
Subject: [OMPI users] OpenMPI and names of the nodes in a cluster
Hi all,
we are facing a serious problem with OpenMPI (4.0.2) that we have
deployed on a cluster. We do not manage this large cluster, and the names
of the nodes do not comply with Internet hostname standards: they
contain a "_" (underscore) character.
So OpenMPI complains about this and
btl?
If the former, is it built with multi-threading support?
If the latter, I suggest you give UCX — built with multi-threading
support — a try and see how it goes.
Cheers,
Gilles
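For reference, a sketch of what "UCX built with multi-threading support" might look like (all paths, and the configure flags shown, are assumptions to be checked against your versions' documentation):

```shell
# Build UCX with multi-threading support (--enable-mt)
cd ucx-src
./configure --prefix=$HOME/ucx-mt --enable-mt
make -j && make install

# Rebuild Open MPI against that UCX
cd ../ompi-src
./configure --prefix=$HOME/ompi-ucx --with-ucx=$HOME/ucx-mt
make -j && make install

# Request the UCX pml explicitly at run time
mpirun --mca pml ucx -np 4 ./my_app
```

This is a build sketch, not runnable without the source trees, so no expected output is shown.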
On Thu, Mar 24, 2022 at 5:43 PM Patrick Begou via users
wrote:
On 28/02/2022 at 17:56, Patrick Begou via users wrote:
Hi,
I am encountering a performance problem with OpenMPI on my cluster. In some
situations my parallel code is really slow (the same binary running on a
different mesh).
To investigate, the Fortran code is built with the profiling option
(mpifort -p -O3) and launched on 91 cores.
One mon.out file
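With `-p`, each profiled process writes `mon.out` into its working directory, so ranks sharing a directory overwrite one another. A common workaround (a sketch, assuming a GNU toolchain and switching from `-p`/prof to `-pg`/gprof, since `GMON_OUT_PREFIX` only applies to gprof-style profiles) is to give each rank its own profile file:

```shell
# Build with gprof-style instrumentation (assumption: -pg instead of -p)
mpifort -pg -O3 -o my_code my_code.f90

# glibc honors GMON_OUT_PREFIX: each rank then writes gmon.out.<pid>
mpirun -np 91 -x GMON_OUT_PREFIX=gmon.out ./my_code

# Inspect one rank's profile (pick a pid that was actually written)
gprof ./my_code gmon.out.<pid>
```

`-x` is Open MPI's flag for exporting an environment variable to all ranks; this sketch needs an MPI install to run.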
>
> if I had to guess (totally pulling junk from the air), there's probably
> something incompatible with PSM and OPA when running specifically on Debian
> (likely due to library versioning). I don't know how common that is, so it's
> not clear how fleshed out and tested it is
e if it's supposed to stop at some point
>
> I'm running RHEL 7, GCC 10.1, Open MPI 4.0.5rc2, with-ofi,
> without-{psm,ucx,verbs}
>
> On Tue, Jan 26, 2021 at 3:44 PM Patrick Begou via users
> wrote:
> >
> > Hi Michael
> >
that reproduces
> the problem? I can't think of another way I can give you more help
> without being able to see what's going on. It's always possible
> there's a bug in the PSM2 MTL, but it would be surprising at this point.
>
> Sent from my iPad
>
>> On Jan 26, 2021, at 1:1
Hi all,
I ran many tests today. I saw that an older 4.0.2 version of OpenMPI
packaged with Nix was running using openib, so I added the --with-verbs
option to set up this module.
What I can see now is that:
mpirun -hostfile $OAR_NODEFILE --mca mtl psm -mca btl_openib_allow_ib true
-
t expect 4007
but it fails too.
Patrick
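When experimenting with `mtl`/`btl`/`pml` selections like the mpirun line above, it can help to make Open MPI log which components it actually selects (a generic debugging sketch, not a command from the thread):

```shell
# Verbose component selection: logs which pml/mtl/btl each rank picks
mpirun -np 2 --mca pml_base_verbose 10 --mca mtl_base_verbose 10 \
       --mca btl_base_verbose 10 ./my_app 2>&1 | grep -i "select"
```

Again, this needs an MPI installation, so it is shown only as a sketch.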
On 25/01/2021 at 19:34, Ralph Castain via users wrote:
> I think you mean add "--mca mtl ofi" to the mpirun cmd line
>
>
>> On Jan 25, 2021, at 10:18 AM, Heinz, Michael William via users
>> wrote:
>>
>> What
Hi Howard and Michael,
thanks for your feedback. I did not want to write a too-long mail with
non-pertinent information, so I just showed how the two different builds
give different results. I'm using a small test case based on my large
code, the same one used to show the memory leak with MPI_Alltoallv.
Hi,
I'm trying to deploy OpenMPI 4.0.5 on the university's supercomputer:
* Debian GNU/Linux 9 (stretch)
* Intel Corporation Omni-Path HFI Silicon 100 Series [discrete] (rev 11)
and for several days I have been chasing a bug (wrong results using
MPI_Alltoallw) on this server when using Omni-Path.
--
===
| Equipe M.O.S.T.     |                                      |
| Patrick BEGOU       | mailto:patrick.be...@grenoble-inp.fr |
| LEGI                |                                      |
| BP 53 X             | Tel 04 76 82 51 35                   |
| 38041 GRENOBLE CEDEX| Fax 04 76 82 52 71                   |
===
multiple --enable-debug --enable-mem-debug
Any help appreciated
Patrick
Solved.
Strange conflict (not explained) after several compilation tests of OpenMPI with
gcc7. Solved by removing the destination directory before any new "make install"
command.
Patrick
Patrick Begou wrote:
I am compiling openmpi-3.1.2 on CentOS 7 with GCC 7.3 installed in /opt
f5e7ae1b000)
libc.so.6 => /lib64/libc.so.6 (0x7f5e7aa4e000)
/lib64/ld-linux-x86-64.so.2 (0x7f5e7b945000)
e/PROJECTS/...
Patrick
Cheers,
Gilles
On Friday, September 18, 2015, Patrick Begou
<patrick.be...@legi.grenoble-inp.fr
<mailto:patrick.be...@legi.grenoble-inp.fr>> wrote:
Gilles Gouaillardet wrote:
Patrick,
by the way, this will work when running on a single node.
I do not kn
than cpus on a resource:
   Bind to:     CORE
   Node:        frog5
   #processes:  2
   #cpus:       1
You can override this protection by adding the "overload-allowed"
option to your binding directive.
Cheers,
Gilles
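The override mentioned in the error message is appended to the binding directive itself; a minimal sketch (syntax as in Open MPI 1.8+, to be checked against your version):

```shell
# Allow binding more processes than cores on the node
mpirun -np 2 --bind-to core:overload-allowed ./my_app
```

This is a command fragment that needs a cluster to run.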
On 9/18/2015 4:54 PM, Patrick Begou wrote:
Ralph Castain wrote:
Patrick
On Sep 16, 2015, at 1:00 AM, Patrick Begou
<patrick.be...@legi.grenoble-inp.fr
<mailto:patrick.be...@legi.grenoble-inp.fr>> wrote:
Thanks all for your answers, I've added some details about the tests I have
run. See below.
Ralph Castain wrote:
Not precisely correct
icitly sets "max_slots" equal to the "slots" value for each node.
It also looks like -map-by has a way to implement it as well (see man page).
Thanks for letting me/us know about this. On a system of mine I sort of
depend on the -nooversubscribe behavior!
Matt
On Tu
this yet.
Patrick
Jeff Squyres (jsquyres) wrote:
Can you manually install a recent version of hwloc
(http://www.open-mpi.org/projects/hwloc/) on kareline, and run lstopo on it?
Send the output here.
What kind of machine is kareline?
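Once a recent hwloc is installed, the dump asked for here can be produced like this (a sketch; option names are per current lstopo and may differ from 2013-era releases):

```shell
lstopo --of console        # print the machine topology as text on stdout
lstopo topo.xml            # output format inferred from the extension
```

The XML form is convenient to attach to a reply.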
On Oct 21, 2013, at 11:09 AM, Patrick Begou <patrick.be...@legi.greno
then you need to add that directive to
your default MCA param file:
/etc/openmpi-mca-params.conf
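That file takes one `parameter = value` pair per line. A runnable sketch writing to a temporary copy (the real file is `/etc/openmpi-mca-params.conf`; the parameter names shown are common binding defaults, not necessarily the directive discussed in this thread):

```shell
# Append example MCA parameters to a scratch copy of the system-wide file
cat >> /tmp/openmpi-mca-params.conf <<'EOF'
hwloc_base_binding_policy = core
rmaps_base_mapping_policy = core
EOF

cat /tmp/openmpi-mca-params.conf
```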
On Oct 21, 2013, at 3:17 AM, Patrick Begou <patrick.be...@legi.grenoble-inp.fr>
wrote:
I am compiling OpenMPI 1.7.3 and 1.7.2 with GCC 4.8.1, but I'm unable to
activate some binding policies.
--
===
| Equipe M.O.S.T.     | http://most.hmg.inpg.fr          |
| Patrick BEGOU       |                                  |
| LEGI                | mailto:patrick.be...@hmg.inpg.fr |
| BP 53 X             | Tel 04 76 82 51 35               |
| 38041 GRENOBLE CEDEX| Fax 04 76 82 52 71               |
===
?
Patrick
for the suggestion — I was stuck on a syntax error in my config...
Patrick
On Oct 26, 2011, at 3:11 AM, Patrick Begou wrote:
I need to change, system-wide, how OpenMPI launches jobs on the nodes of my
cluster.
Setting:
export OMPI_MCA_plm_rsh_agent=oarsh
works fine, but I would like this config
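One way to make that setting system-wide is the default MCA parameter file; a sketch showing the per-session form from the mail next to the file-based equivalent (file path per the standard Open MPI layout):

```shell
# Per-session form, as in the mail:
export OMPI_MCA_plm_rsh_agent=oarsh

# System-wide equivalent: one line in /etc/openmpi-mca-params.conf:
#   plm_rsh_agent = oarsh

echo "$OMPI_MCA_plm_rsh_agent"
```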
Open MPI SVN revision: r23834
Open MPI release date: Oct 05, 2010
Thanks
Patrick