On Jan 30, 2008, at 4:43 AM, Thomas Ropars wrote:

After running autogen.sh, the file opal/libltdl/loaders/dlopen.c doesn't exist and more generally the directory opal/libltdl/loaders/ doesn't exist. That's why I need to add the RTLD_GLOBAL flag after running autogen.sh.

I'm using the following version of the autotools.

autoconf (GNU Autoconf) 2.61
automake (GNU automake) 1.10
libtoolize (GNU libtool) 1.5.22

If you're running LT 1.5, that makes sense -- the loaders/ directory is new in the LT 2.x series, IIRC. However, 1.5.22 should *default* to RTLD_GLOBAL; the switch to RTLD_LOCAL was a change in the LT 2.x series.

Indeed, when I use 1.5.22, I see the following in my sys_dl_open():

  lt_module   module   = dlopen (filename, LT_GLOBAL | LT_LAZY_OR_NOW);

And LT_GLOBAL is a #define for RTLD_GLOBAL.

Are you seeing something different?

What OS are you using?

Thomas
Jeff Squyres wrote:
We are testing for a specific line when looking for this patch:

            test ! -z "`grep 'filename, LT_LAZY_OR_NOW' opal/libltdl/loaders/dlopen.c`"; then

If this line is different in your dlopen.c, then it doesn't find it
and therefore autogen.sh doesn't patch it.

Did you already patch dlopen.c, perchance, or is your original
dlopen.c different from this?


On Jan 29, 2008, at 9:35 AM, Thomas Ropars wrote:

I've solved the problem by adding the RTLD_GLOBAL flag to the call to
dlopen() in the function "sys_dl_open (loader_data, filename)"
(opal/libltdl/ltdl.c).

It seems that I need this flag. However, when I run autogen.sh, I get
the following:
** Adjusting libltdl for OMPI :-(
++ patching for argz bugfix in libtool 1.5
   -- your libtool doesn't need this! yay!
++ patching 64-bit OS X bug in ltmain.sh
   -- your libtool doesn't need this! yay!
++ RTLD_GLOBAL in libltdl
   -- your libltdl doesn't need this! yay!

Thomas

Thomas Ropars wrote:

Hi,

I have the same error message when fault tolerance is activated.
I'm using gcc version 4.1.3, with Ubuntu 7.10 (i686) (kernel
2.6.22-14-generic)

Thomas

Aurelien Bouteiller wrote:

If you want to use the pessimist message logging, you have to use the
"-mca vprotocol pessimist" flag on your command line. This should work
despite the bug because, if I understand correctly, the issue you are
experiencing should occur only when fault tolerance is disabled.
I am having trouble reproducing the particular bug you are experiencing.
What compiler and what architecture are you using?

Aurelien
On Dec 13, 2007, at 07:58, Thomas Ropars wrote:

I still have the same error after update (r16951).

I have the lib/openmpi/mca_pml_v.so file in my build, and the command
line I use is: mpirun -np 4 my_application

Thomas


Aurelien Bouteiller wrote:

I could reproduce and fix the bug. It will be corrected in trunk as
soon as the svn is online again. Thanks for reporting the problem.

Aurelien

On Dec 11, 2007, at 15:02, Aurelien Bouteiller wrote:

I cannot reproduce the error. Please make sure you have the
lib/openmpi/mca_pml_v.so file in your build. If you don't, maybe you
forgot to run autogen.sh at the root of the trunk when you removed
.ompi_ignore.

If this does not fix the problem, please let me know your command-line
options to mpirun.

Aurelien

On Dec 11, 2007, at 14:36, Aurelien Bouteiller wrote:

Mmm, I'll investigate this today.

Aurelien
On Dec 11, 2007, at 08:46, Thomas Ropars wrote:

Hi,

I've tried to test the message logging component vprotocol pessimist
(svn checkout, revision 16926).
When I run an MPI application, I get the following error:

mca: base: component_find: unable to open vprotocol pessimist:
/local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined
symbol: pml_v_output (ignored)


Regards

Thomas
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




--
Dr. Aurelien Bouteiller, Sr. Research Associate
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996



--
Jeff Squyres
Cisco Systems

