Re: [hwloc-users] Creating a D wrapper around hwloc
Hi Jim Burnes,

if D is Digital Mars D 1.0 you might want to know that I already did a wrapper, and I have been using it for quite some time; it is part of blip, which is available under an Apache 2.0 license.

http://dsource.org/projects/blip

ciao
Fawzi

On 16-apr-10, at 22:17, Jim Burnes wrote:

> Hi,
>
> I'm creating a D wrapper around hwloc and so far it's going well, but I need some advice. One of the last issues I'm running into is at link time. Since a number of functions (especially in helper.h) are defined as "static __inline", they are essentially macros; this is why they don't appear in the compiled libraries. I can make these available to D in several different ways, but I need to know the true intent of marking them as "static __inline":
>
> 1. Are they marked that way simply to increase performance?
> 2. Are they marked that way to avoid some sort of thread-safety issue?
>
> If the answer is (1), then I can safely remove their "static __inline" markup and compile them into the library. If the answer is closer to (2) and you truly need them inlined into the source code where they are referenced, then I can create a template mixin in D for them and include them like that.
>
> This is a cool library. Thanks for the extensive work.
>
> J Burnes

___
hwloc-users mailing list
hwloc-us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/hwloc-users
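[Editor's sketch] The link-time symptom above can be reproduced without hwloc at all. The file and function names below are made up for the sketch (they are not hwloc's): a `static __inline`-style helper leaves no symbol in the object file, while a plain C shim function re-exports it as something a foreign-language wrapper can link against.

```shell
# Hypothetical stand-ins for hwloc's helper.h inlines: a header-only
# "static inline" function produces no external symbol, so a wrapper in
# another language (D, here) has nothing to link against.
work=$(mktemp -d); cd "$work"

cat > helper.h <<'EOF'
/* stand-in for a "static __inline" helper: compiled into each
   translation unit that uses it, never exported from a library */
static __inline__ int helper_twice(int x) { return 2 * x; }
EOF

cat > shim.c <<'EOF'
#include "helper.h"
/* plain extern wrapper: this one DOES become a linkable symbol */
int shim_twice(int x) { return helper_twice(x); }
EOF

cc -O2 -c shim.c

nm -g shim.o | grep shim_twice        # exported: the shim is linkable
nm -g shim.o | grep helper_twice || \
  echo "helper_twice: no external symbol (inlined away)"
```

Whatever hwloc's intent behind the markup, a shim translation unit like `shim.c` is a standard way to expose inline helpers to languages that link against C symbols; the D-side template mixin mentioned above is the alternative when you want the code inlined on the D side as well.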
Re: [OMPI users] mpirun links wrong library with BLACS tester
On 28-gen-10, at 12:35, Jeff Squyres (jsquyres) wrote:

> What was blacs compiled against, lam or ompi? What is your LD_LIBRARY_PATH set to? Are you ensuring to use ompi's mpirun (vs., for example, lam's mpirun)?

Yes, everything was ok. I had tried everything I could think of (rpath, --prefix, ...) and was really getting mad. I spent an inordinate amount of time on this, and now that I have realized what it was I just want to hit myself.

On that machine someone had installed all the blacs and scalapack tests in /usr/bin. I was doing

    mpirun -np 8 xFbtest_MPI-LINUX-0

instead of

    mpirun -np 8 ./xFbtest_MPI-LINUX-0

so mpirun was using the version in /usr/bin, but

    mpirun -np 1 env
    mpirun -np 1 ldd xFbtest_MPI-LINUX-0

and so on did return correct things. As I initially tested only the blacs and scalapack things, it took me a long time to figure it out. Yesterday, before posting, I tested the ompi examples and to my surprise they did work. Probably those times when it worked I had typed ./ without really realizing it.

Anyway, sorry for the noise; a really stupid mistake that had nothing to do with ompi.

ciao
Fawzi

> -jms
> Sent from my PDA. No type good.
>
> ----- Original Message -----
> From: users-boun...@open-mpi.org
> To: us...@open-mpi.org
> Sent: Wed Jan 27 21:11:21 2010
> Subject: [OMPI users] mpirun links wrong library with BLACS tester
>
> I have installed openmpi 1.4.1 locally for one user on a cluster, where some other MPIs were installed. When I try to run an executable through mpirun (I am running the BLACS tester) I get
>
>     xFbtest_MPI-LINUX-0: error while loading shared libraries: liblam.so.0: cannot open shared object file: No such file or directory
>
> If I run the executable directly it works. ldd always shows the correct libraries (even when run under mpirun) and no liblam; the environment also looks normal in both cases (both PATH and LD_RUN_PATH have the installation as first path).
>
> I did try to set -rpath to */lib and */lib/openmpi, and generally to reduce the environment to a basic one and use that in all the shells, both when compiling and running, but to no avail. The examples in the openmpi directory seem to work without problems. I did manage to run the blacs tester, but in no reproducible way (I really don't know what I did to make it work, and it stopped working again quickly, with the same binary). The same setup works on another machine (and I think the BLACS flags are ok).
>
> This is driving me really crazy; any pointer to what else I could try would be greatly appreciated.
>
> gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42)
> G95 (GCC 4.0.3 (g95 0.92!) Jun 24 2009)
>
> thanks
> Fawzi
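[Editor's sketch] The pitfall Fawzi describes, a bare executable name being resolved through PATH rather than to the freshly built binary in the current directory, can be reproduced without MPI at all. The file names below are hypothetical stand-ins:

```shell
# Two copies of the "same" test program: a stale one on PATH (standing in
# for the copies someone installed in /usr/bin) and a fresh one in the
# build directory.
demo=$(mktemp -d)
mkdir -p "$demo/bin"
printf '#!/bin/sh\necho stale\n' > "$demo/bin/xFbtest"
printf '#!/bin/sh\necho fresh\n' > "$demo/xFbtest"
chmod +x "$demo/bin/xFbtest" "$demo/xFbtest"

cd "$demo"
PATH="$demo/bin:$PATH"
export PATH

xFbtest      # bare name: PATH lookup silently picks the stale copy
./xFbtest    # explicit ./: runs the binary you actually built
```

This is exactly why `mpirun -np 8 xFbtest_MPI-LINUX-0` launched the /usr/bin copy (linked against LAM) while `mpirun -np 8 ./xFbtest_MPI-LINUX-0` worked; running `command -v xFbtest_MPI-LINUX-0` is a quick check for this kind of shadowing.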