Sounds like I need to resync the PMIx lustre configury with the OMPI one - I'll
do that.
On Feb 4, 2021, at 11:56 AM, Gabriel, Edgar via devel <devel@lists.open-mpi.org> wrote:
I have a weird problem running configure on master on our cluster. Basically,
configure fails when I request lustre support, but not from ompio but from
openpmix. [...]
undefined reference to symbol 'sem_open@@GLIBC_2.2.5'
//usr/lib64/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
The test passes on v4.1.x because the '-pthread' flag is passed
Cheers,
Gilles
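For readers hitting the same thing, here is a minimal standalone reproducer of this class of link failure (my own sketch, not the conftest that configure actually generates). On older glibc (before 2.34), sem_open lives in libpthread.so, so the test program only links when '-pthread' (or '-lpthread') is on the command line; whether the failure shows up as a plain 'undefined reference' or as the 'DSO missing from command line' variant above depends on whether libpthread is already pulled in indirectly by another library on the link line.

/* Hypothetical reproducer; not the actual configure conftest.
 *
 *   gcc sem_check.c -o sem_check            -> link error on older glibc
 *   gcc sem_check.c -o sem_check -pthread   -> links cleanly
 */
#include <fcntl.h>      /* O_CREAT */
#include <semaphore.h>  /* sem_open, sem_close, sem_unlink, SEM_FAILED */

int main(void)
{
    /* Referencing sem_open forces the linker to resolve it. */
    sem_t *s = sem_open("/ompi_sem_check", O_CREAT, 0600, 0);
    if (s != SEM_FAILED) {
        sem_close(s);
        sem_unlink("/ompi_sem_check");
    }
    return 0;
}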
On Sat, Feb 6, 2021, [...] wrote:
The library that contains sem_init and sem_open, the two functions that we
check for in the configure script, is libpthread [...]
From: Jeff Squyres (jsquyres)
Sent: Friday, February 5, 2021 7:31 PM
To: Open MPI Developers List
Cc: Gabriel, Edgar
Subject: Re: [OMPI devel] configure problem on master
Which library was being linked before, and is not anymore?
Thanks
Edgar
From: devel On Behalf Of Gabriel, Edgar via devel
Sent: Thursday, February 4, 2021 2:15 PM
To: Open MPI Developers
Cc: Gabriel, Edgar
Subject: Re: [OMPI devel] configure problem on master
excellent, thanks! In the meantime I have formed a more detailed suspicion:
--
looking f[...]
[OMPI devel] configure problem on master
I have a weird problem running configure on master on our cluster. Basically,
configure fails when I request lustre support, but not from ompio but from
openpmix. What makes our cluster setup maybe a bit special is that the lustre
libraries are not installed in the standard path, but in /opt, and thus the
path has to be passed to configure explicitly. [...]
yes please, file a bug report and I will check it out in the next few days.
Thanks!
Edgar
From: devel On Behalf Of Dave Taflin via devel
Sent: Thursday, December 3, 2020 1:05 PM
To: devel@lists.open-mpi.org
Cc: Dave Taflin
Subject: [OMPI devel] Apparent bug in MPI_File_seek() using MPI_SEEK_END
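For context, this is the call pattern the subject line refers to. A hedged, minimal sketch (the file name and the expectation in the comments are mine, not the reporter's actual test case): seek relative to the end of the file, then read the position back.

/* Hypothetical illustration of MPI_SEEK_END; assumes data.bin exists. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File   fh;
    MPI_Offset pos;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_seek(fh, 0, MPI_SEEK_END);  /* position at end of file */
    MPI_File_get_position(fh, &pos);     /* expected: the file size */
    printf("position after MPI_SEEK_END: %lld\n", (long long)pos);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}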
I performed some tests on our Omnipath cluster, and I have a mixed bag of
results with 4.0.0rc1:
1. Good news: the problems with the psm2 mtl that I reported in June/July
seem to be fixed. However, I still get a warning every time I run a job with
4.0.0, e.g.:
compute-1-1.local.4351PSM2 [...]
I recently had problems running ompi master on our Omnipath cluster; 3.0 and
3.1, however, work without problems. After some digging, I found that I have to
set the environment variable PSM2_MULTI_EP for master to work at all for us.
Not sure whether this is intended or an inadvertent consequence [...]
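A hedged sketch of the workaround as described above: the value "1" is an assumption, since the report only says the variable has to be set. Setting it in the process environment before MPI_Init is equivalent to exporting it at launch time (for example with mpirun's -x option).

/* Workaround sketch; the value "1" is an assumption, the report only
 * says PSM2_MULTI_EP must be set for master to work. */
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    /* Must happen before MPI_Init so the PSM2 MTL sees it. */
    setenv("PSM2_MULTI_EP", "1", 1);

    MPI_Init(&argc, &argv);
    /* ... application ... */
    MPI_Finalize();
    return 0;
}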
Well, I am still confused. What is different on nixOS vs. other Linux distros
that makes this error appear, and is it relevant enough for the backport, or
should we just go forward for 4.0? Is it again an RTLD_GLOBAL issue, as it was
back in 2014? And last but not least, I raised on the GitHub discussion [...]
> (among other abstraction violations)
> >
> > What about following up in github ?
> >
> > Cheers,
> >
> > Gilles
> >
> > On Tuesday, June 12, 2018, Gabriel, Edgar wrote:
So, I am still surprised to see this error message: if you look at, let's say,
just one error message (and all the others are the same):
[orc-login2:107400] mca_base_component_repository_open: unable to open
mca_fcoll_individual: .../lib/openmpi/mca_fcoll_individual.so:
undefined symbol: mc[...]
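That text comes straight from the dynamic loader. A stripped-down model of what mca_base_component_repository_open does (simplified, with an illustrative path; the real code goes through more indirection) shows how an unresolved symbol in a component surfaces as exactly this message:

/* Simplified model of the component-open path; compile with -ldl.
 * The path is illustrative, not the reporter's install layout. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* RTLD_NOW forces every symbol in the plugin to resolve at load
     * time; one that it expects from libmpi but cannot see yields
     * "undefined symbol: ..." via dlerror(). */
    void *handle = dlopen("lib/openmpi/mca_fcoll_individual.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "unable to open component: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}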
I wanted to add one item before I forget (although I agree with what Jeff
said): the error messages shown remind me of the problem that we had with
ompio in the 1.8/1.10 series, when the RTLD_GLOBAL option was not correctly
set. However, that was fixed in the 2.0 series and going forward, so if this [...]
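To unpack the RTLD_GLOBAL remark: when libmpi is loaded with RTLD_LOCAL (the dlopen default, e.g. by an embedding interpreter), its symbols stay invisible to components opened afterwards, which then fail exactly as above. A sketch of the distinction, assuming this is the same class of problem (library paths illustrative, not Open MPI's actual loader code):

/* Symbol-visibility sketch; compile with -ldl. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* RTLD_GLOBAL publishes libmpi's symbols to later dlopen() calls;
     * with RTLD_LOCAL (the default) a component loaded next fails with
     * "undefined symbol: ..." for anything it expects from libmpi. */
    void *mpi = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
    if (mpi == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    void *comp = dlopen("lib/openmpi/mca_fcoll_individual.so", RTLD_NOW);
    if (comp == NULL)
        fprintf(stderr, "%s\n", dlerror());
    return 0;
}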