Hi
I am trying to run Open MPI 1.3.3 between a Linux box running Ubuntu
Server 9.04 and a Macintosh. I have configured Open MPI with the
following options:
./configure --prefix=/usr/local/ --enable-heterogeneous --disable-shared
--enable-static
When both the machines are connected to the netwo
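A minimal launch sketch for a build like this, assuming made-up hostnames ubuntu-box and mac-box and a test binary ./hello_world compiled natively on each machine:

# hostfile (hypothetical hostnames)
ubuntu-box slots=1
mac-box    slots=1

mpirun --prefix /usr/local --hostfile hostfile -np 2 ./hello_world

Both sides need the same Open MPI version configured with --enable-heterogeneous, and --prefix points the remote side at the matching installation.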
Hi Jeff,
Sorry about the ambiguity. I just had another conversation with our
TotalView person and the problem -seems- to be unrelated to OMPI.
Guess I jumped the gun...
Thanks,
Samuel K. Gutierrez
On Sep 21, 2009, at 8:58 AM, Jeff Squyres wrote:
Can you more precisely define "not work
As a workaround, Lisandro could just pre-seed the cache variables of the
respective configure tests that come out wrong.
./configure lt_cv_dlopen_self=yes lt_cv_dlopen_self_static=yes
HTH.
Cheers,
Ralf
* Jeff Squyres wrote on Mon, Sep 21, 2009 at 02:45:28PM CEST:
> Ick; I appreciate Lisandro'
Can you more precisely define "not working properly"?
On Sep 21, 2009, at 10:26 AM, Samuel K. Gutierrez wrote:
Hi,
According to our TotalView person, PGI and Intel versions of OMPI
1.3.3 are not working properly. She noted that 1.2.8 and 1.3.2 work
fine.
Thanks,
Samuel K. Gutierrez
On Sep
Hi,
According to our TotalView person, PGI and Intel versions of OMPI
1.3.3 are not working properly. She noted that 1.2.8 and 1.3.2 work
fine.
Thanks,
Samuel K. Gutierrez
On Sep 21, 2009, at 7:19 AM, Terry Dontje wrote:
Ralph Castain wrote:
I see it declared "extern" in orte/tools/ort
Jeff Squyres wrote:
> Do you just want to wait for the ummunotify stuff in OMPI? I'm half
> done making a merged "linux" memory component (i.e., it merges the
> ptmalloc2 component with the new ummunotify stuff).
>
> It won't help for kernels <2.6.32, of course. :-)
Yeah, that's another solution
Do you just want to wait for the ummunotify stuff in OMPI? I'm half
done making a merged "linux" memory component (i.e., it merges the
ptmalloc2 component with the new ummunotify stuff).
It won't help for kernels <2.6.32, of course. :-)
On Sep 21, 2009, at 9:11 AM, Brice Goglin wrote:
J
Ralph Castain wrote:
I see it declared "extern" in orte/tools/orterun/debuggers.h, but not
DECLSPEC'd
FWIW: LANL uses Intel compilers + TotalView on a regular basis, and I
have yet to hear of an issue.
It actually will work if you attach to the job or if you are not relying
on the MPIR_Brea
Does declspec matter for executables? (I don't recall)
On Sep 21, 2009, at 9:15 AM, Ralph Castain wrote:
I see it declared "extern" in orte/tools/orterun/debuggers.h, but not
DECLSPEC'd
FWIW: LANL uses Intel compilers + TotalView on a regular basis, and I
have yet to hear of an issue.
On Sep
I see it declared "extern" in orte/tools/orterun/debuggers.h, but not
DECLSPEC'd
FWIW: LANL uses Intel compilers + TotalView on a regular basis, and I
have yet to hear of an issue.
On Sep 21, 2009, at 7:03 AM, Terry Dontje wrote:
I was kind of amazed no one else managed to run into this bu
Jeff Squyres wrote:
> On Sep 21, 2009, at 5:50 AM, Brice Goglin wrote:
>
>> I am playing with mx__regcache_clean() in Open-MX so as to have OpenMPI
>> clean up the Open-MX regcache when needed. It causes some deadlocks since
>> OpenMPI intercepts Open-MX's own free() calls. Is there a "safe" way to
>
I was kind of amazed no one else managed to run into this, but it was
brought to my attention that when compiling OMPI with Intel compilers and
visibility enabled, the MPIR_Breakpoint symbol was not being exposed. I am
assuming this is due to MPIR_Breakpoint not being ORTE_DECLSPEC'd or
OMPI_DECLSPEC'd.
Do other
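As a stand-alone illustration of what a DECLSPEC-style macro does here (MY_DECLSPEC and the file name are made up; this is not the real ORTE_DECLSPEC/OMPI_DECLSPEC definition): with GCC- or Intel-style visibility support, the attribute below is what keeps a symbol globally visible when the rest of the code is compiled with -fvisibility=hidden.

/* visibility_sketch.c -- illustrative only
 * build: icc -fvisibility=hidden visibility_sketch.c -o demo   (or gcc)
 * check: nm demo | grep -i breakpoint   (expect 'T' global vs 't' local)
 */
#if defined(__GNUC__) || defined(__INTEL_COMPILER)
#  define MY_DECLSPEC __attribute__((visibility("default")))
#else
#  define MY_DECLSPEC
#endif

/* Keeps default visibility despite -fvisibility=hidden, so it stays a
 * global symbol in the linked binary.                                  */
MY_DECLSPEC void MPIR_Breakpoint(void) { /* debugger hook, empty on purpose */ }

/* No attribute: -fvisibility=hidden demotes this one to a local symbol. */
void some_internal_helper(void) { }

int main(void) { MPIR_Breakpoint(); some_internal_helper(); return 0; }

Whether that attribute is actually needed for a symbol living in the orterun executable itself, as opposed to one of the libraries, is the question raised above ("Does declspec matter for executables?").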
Ick; I appreciate Lisandro's quandary, but don't quite know what to do.
How about keeping libltdl -fvisibility=hidden inside mpi4py?
On Sep 17, 2009, at 11:16 AM, Josh Hursey wrote:
So I started down this road a couple of months ago. I was using
lt_dlopen() and friends in the OPAL CRS self mod
On Sep 21, 2009, at 5:50 AM, Brice Goglin wrote:
I am playing with mx__regcache_clean() in Open-MX so as to have OpenMPI
clean up the Open-MX regcache when needed. It causes some deadlocks since
OpenMPI intercepts Open-MX's own free() calls. Is there a "safe" way to
have Open-MX free/munmap ca
You were faster to fix the bug than I was to send my bug report :-)
So I confirm: this fixes the problem.
Thanks!
Sylvain
On Mon, 21 Sep 2009, Edgar Gabriel wrote:
What version of OpenMPI did you use? Patch #21970 should have fixed this
issue on the trunk...
Thanks
Edgar
Sylvain Jeaugey
What version of OpenMPI did you use? Patch #21970 should have fixed this
issue on the trunk...
Thanks
Edgar
Sylvain Jeaugey wrote:
Hi list,
We are currently experiencing deadlocks when using communicators other
than MPI_COMM_WORLD. So we made a very simple reproducer (Comm_create
then MPI_B
Hi list,
We are currently experiencing deadlocks when using communicators other
than MPI_COMM_WORLD. So we made a very simple reproducer (Comm_create then
MPI_Barrier on the communicator - see end of e-mail).
We can reproduce the deadlock only with openib and with at least 8 cores
(no succes
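A minimal sketch of that kind of reproducer (file name, rank split, and output are arbitrary choices here, not taken from the original mail): build a sub-group of MPI_COMM_WORLD, MPI_Comm_create a communicator from it, then MPI_Barrier on the new communicator.

/* comm_create_barrier.c -- reproducer sketch; run with e.g.
 *   mpirun -np 8 ./comm_create_barrier
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Sub-group: the first half of MPI_COMM_WORLD (needs at least 2 ranks). */
    MPI_Group world_group, sub_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    int range[1][3] = { { 0, size / 2 - 1, 1 } };
    MPI_Group_range_incl(world_group, 1, range, &sub_group);

    /* Collective over MPI_COMM_WORLD; ranks outside the group get MPI_COMM_NULL. */
    MPI_Comm sub_comm;
    MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);

    if (sub_comm != MPI_COMM_NULL) {
        MPI_Barrier(sub_comm);              /* the call reported to hang */
        MPI_Comm_free(&sub_comm);
    }

    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
    if (rank == 0) printf("done\n");
    MPI_Finalize();
    return 0;
}

Per the report above, the hang only shows up with openib and at least 8 cores, so a sketch like this will not necessarily deadlock on other transports.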
Hello,
I am playing with mx__regcache_clean() in Open-MX so as to have OpenMPI
clean up the Open-MX regcache when needed. It causes some deadlocks since
OpenMPI intercepts Open-MX's own free() calls. Is there a "safe" way to
have Open-MX free/munmap calls not invoke OpenMPI interception hooks? Or is
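The failure mode being described is re-entrancy: the free() interception hook calls into the library's regcache cleanup, and that cleanup itself calls free(), which lands back in the hook. A small self-contained sketch of that shape (all names made up; this is not Open MPI's ptmalloc2 hook nor Open-MX's real cleanup code):

/* free_hook_deadlock.c -- illustrative only */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t regcache_lock = PTHREAD_MUTEX_INITIALIZER;

static void intercepted_free(void *ptr);

/* Stand-in for the network library's cache cleanup: it releases some of
 * its own bookkeeping memory, which goes back through the hook below.  */
static void fake_regcache_clean(void *ptr)
{
    (void) ptr;
    void *bookkeeping = malloc(32);
    intercepted_free(bookkeeping);        /* re-enters the hook */
}

/* Stand-in for the MPI layer's free() interception hook. */
static void intercepted_free(void *ptr)
{
    pthread_mutex_lock(&regcache_lock);   /* deadlocks when re-entered while held */
    fake_regcache_clean(ptr);             /* notify the library of the free */
    pthread_mutex_unlock(&regcache_lock);
    free(ptr);
}

int main(void)
{
    void *buf = malloc(64);
    intercepted_free(buf);                /* typically never returns */
    puts("not reached");
    return 0;
}

With a default (non-recursive) pthread mutex the second lock call typically never returns, which is the same shape of hang that motivates the question of letting Open-MX's own free()/munmap() calls bypass the interception.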