Hello Brian,

sure, attached is the output of ompi_info -a on:
  model name : AMD Opteron(tm) Processor 246
  Linux c3-19 2.4.21-OC_NUMA_fix #4 SMP Tue Nov 30 16:03:38 CET 2004 x86_64 unknown

It's a SuSE SLES8 distribution with the following libc:
  hpcraink@c3-19:~ > /lib64/libc.so.6
  GNU C Library stable release version 2.2.5, by Roland McGrath et al.
  Copyright (C) 1992-2001, 2002 Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.
  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
  PARTICULAR PURPOSE.
  Compiled by GNU CC version 3.2.2 (SuSE Linux).
  Compiled on a Linux 2.4.19 system on 2003-03-27.
  Available extensions:
    GNU libio by Per Bothner
    crypt add-on version 2.1 by Michael Glad and others
    Berkeley DB glibc 2.1 compat library by Thorsten Kukuk
    linuxthreads-0.9 by Xavier Leroy
    BIND-8.2.3-T5B
    libthread_db work sponsored by Alpha Processor Inc
    NIS(YP)/NIS+ NSS modules 0.19 by Thorsten Kukuk
  Report bugs using the `glibcbug' script to <b...@gnu.org>.

Compilation was done with pgcc-5.2-4.

CU,
ray

On Thursday 18 August 2005 20:05, Brian Barrett wrote:
> Just to double check, can you run ompi_info and send me the results?
>
> Thanks,
>
> Brian
>
> On Aug 18, 2005, at 10:45 AM, Rainer Keller wrote:
> > Hello,
> > I see the "same" (well, probably not exactly the same) thing here on an
> > Opteron with 64bit (-g and so on); I get:
> >
> >   #0  0x0000000040085160 in orte_sds_base_contact_universe ()
> >       at ../../../../../orte/mca/sds/base/sds_base_interface.c:29
> >   29          return orte_sds_base_module->contact_universe();
> >   (gdb) where
> >   #0  0x0000000040085160 in orte_sds_base_contact_universe ()
> >       at ../../../../../orte/mca/sds/base/sds_base_interface.c:29
> >   #1  0x0000000040063e95 in orte_init_stage1 ()
> >       at ../../../orte/runtime/orte_init_stage1.c:185
> >   #2  0x0000000040017e7d in orte_system_init ()
> >       at ../../../orte/runtime/orte_system_init.c:38
> >   #3  0x00000000400148f5 in orte_init () at ../../../orte/runtime/orte_init.c:46
> >   #4  0x000000004000dfc7 in main (argc=4, argv=0x7fbfffe8a8)
> >       at ../../../../orte/tools/orterun/orterun.c:291
> >   #5  0x0000002a95c0c017 in __libc_start_main () from /lib64/libc.so.6
> >   #6  0x000000004000bf2a in _start ()
> >   (gdb)
> >
> > This is within mpirun; orte_sds_base_module is NULL here.
> > This is without a persistent orted, just mpirun...
> >
> > CU,
> > ray
> >
> > On Thursday 18 August 2005 16:57, Nathan DeBardeleben wrote:
> >> FYI, this only happens when I let OMPI compile 64bit on Linux. When I
> >> throw in CFLAGS=FFLAGS=CXXFLAGS=-m32, orted, my myriad of test codes,
> >> mpirun, registry subscription codes, and JNI all work like a champ.
> >> Something's wrong with the 64bit build, it appears to me.
> >>
> >> -- Nathan
> >> Correspondence
> >> ---------------------------------------------------------------------
> >> Nathan DeBardeleben, Ph.D.
> >> Los Alamos National Laboratory
> >> Parallel Tools Team
> >> High Performance Computing Environments
> >> phone: 505-667-3428
> >> email: ndeb...@lanl.gov
> >> ---------------------------------------------------------------------
> >>
> >> Tim S. Woodall wrote:
> >>> Nathan,
> >>>
> >>> I'll try to reproduce this sometime this week - but I'm pretty swamped.
> >>> Is Greg also seeing the same behavior?
> >>>
> >>> Thanks,
> >>> Tim
> >>>
> >>> Nathan DeBardeleben wrote:
> >>>> To expand on this further, orte_init() seg faults on both bluesteel
> >>>> (32bit linux) and sparkplug (64bit linux) equally.
> >>>> The required condition is that orted must be running first (which of
> >>>> course we require for our work - a persistent orte daemon and registry).
> >>>>
> >>>>> [bluesteel]~/ptp > ./dump_info
> >>>>> Segmentation fault
> >>>>> [bluesteel]~/ptp > gdb dump_info
> >>>>> GNU gdb 6.1
> >>>>> Copyright 2004 Free Software Foundation, Inc.
> >>>>> GDB is free software, covered by the GNU General Public License, and
> >>>>> you are welcome to change it and/or distribute copies of it under
> >>>>> certain conditions.
> >>>>> Type "show copying" to see the conditions.
> >>>>> There is absolutely no warranty for GDB. Type "show warranty" for details.
> >>>>> This GDB was configured as "x86_64-suse-linux"...Using host
> >>>>> libthread_db library "/lib64/tls/libthread_db.so.1".
> >>>>>
> >>>>> (gdb) run
> >>>>> Starting program: /home/ndebard/ptp/dump_info
> >>>>>
> >>>>> Program received signal SIGSEGV, Segmentation fault.
> >>>>> 0x0000000000000000 in ?? ()
> >>>>> (gdb) where
> >>>>> #0  0x0000000000000000 in ?? ()
> >>>>> #1  0x000000000045997d in orte_init_stage1 () at orte_init_stage1.c:419
> >>>>> #2  0x00000000004156a7 in orte_system_init () at orte_system_init.c:38
> >>>>> #3  0x00000000004151c7 in orte_init () at orte_init.c:46
> >>>>> #4  0x0000000000414cbb in main (argc=1, argv=0x7fbffff298) at dump_info.c:185
> >>>>> (gdb)
> >>>>
> >>>> -- Nathan
> >>>> Correspondence
> >>>> ---------------------------------------------------------------------
> >>>> Nathan DeBardeleben, Ph.D.
> >>>> Los Alamos National Laboratory
> >>>> Parallel Tools Team
> >>>> High Performance Computing Environments
> >>>> phone: 505-667-3428
> >>>> email: ndeb...@lanl.gov
> >>>> ---------------------------------------------------------------------
> >>>>
> >>>> Nathan DeBardeleben wrote:
> >>>>> Just to clarify:
> >>>>> 1: no orted started (meaning the mpirun or registry programs will
> >>>>> start one by themselves) causes those programs to lock up.
> >>>>> 2: starting orted by hand (trying to get these programs to connect to
> >>>>> a centralized one) causes the connecting programs to seg fault.
> >>>>>
> >>>>> -- Nathan
> >>>>> Correspondence
> >>>>> ---------------------------------------------------------------------
> >>>>> Nathan DeBardeleben, Ph.D.
> >>>>> Los Alamos National Laboratory
> >>>>> Parallel Tools Team
> >>>>> High Performance Computing Environments
> >>>>> phone: 505-667-3428
> >>>>> email: ndeb...@lanl.gov
> >>>>> ---------------------------------------------------------------------
> >>>>>
> >>>>> Nathan DeBardeleben wrote:
> >>>>>> So I dropped an .ompi_ignore into that directory, reconfigured, and
> >>>>>> the compile worked (yay!).
> >>>>>> However, not a lot of progress: mpirun locks up, and all my registry
> >>>>>> test programs lock up as well. If I start the orted by hand, then any
> >>>>>> of my registry-calling programs cause a segfault:
> >>>>>>> [sparkplug]~/ptp > gdb sub_test
> >>>>>>> GNU gdb 6.1
> >>>>>>> Copyright 2004 Free Software Foundation, Inc.
> >>>>>>> GDB is free software, covered by the GNU General Public License, and
> >>>>>>> you are welcome to change it and/or distribute copies of it under
> >>>>>>> certain conditions.
> >>>>>>> Type "show copying" to see the conditions.
> >>>>>>> There is absolutely no warranty for GDB. Type "show warranty" for details.
> >>>>>>> This GDB was configured as "x86_64-suse-linux"...Using host
> >>>>>>> libthread_db library "/lib64/tls/libthread_db.so.1".
> >>>>>>>
> >>>>>>> (gdb) run
> >>>>>>> Starting program: /home/ndebard/ptp/sub_test
> >>>>>>>
> >>>>>>> Program received signal SIGSEGV, Segmentation fault.
> >>>>>>> 0x0000000000000000 in ?? ()
> >>>>>>> (gdb) where
> >>>>>>> #0  0x0000000000000000 in ?? ()
> >>>>>>> #1  0x00000000004598a5 in orte_init_stage1 () at orte_init_stage1.c:419
> >>>>>>> #2  0x00000000004155cf in orte_system_init () at orte_system_init.c:38
> >>>>>>> #3  0x00000000004150ef in orte_init () at orte_init.c:46
> >>>>>>> #4  0x00000000004148a1 in main (argc=1, argv=0x7fbffff178) at sub_test.c:60
> >>>>>>> (gdb)
> >>>>>>
> >>>>>> Yes, I recompiled everything.
> >>>>>>
> >>>>>> Here's an example of me trying something a little more complicated
> >>>>>> (which I believe locks up for the same reason - something borked with
> >>>>>> the registry interaction).
> >>>>>>
> >>>>>>>> [sparkplug]~/ompi-test > bjssub -s 10000 -n 10 -i bash
> >>>>>>>> Waiting for interactive job nodes.
> >>>>>>>> (nodes 18 16 17 18 19 20 21 22 23 24 25)
> >>>>>>>> Starting interactive job.
> >>>>>>>> NODES=16,17,18,19,20,21,22,23,24,25
> >>>>>>>> JOBID=18
> >>>>>>>
> >>>>>>> so i got my nodes
> >>>>>>>
> >>>>>>>> ndebard@sparkplug:~/ompi-test> export OMPI_MCA_ptl_base_exclude=sm
> >>>>>>>> ndebard@sparkplug:~/ompi-test> export OMPI_MCA_pls_bproc_seed_priority=101
> >>>>>>>
> >>>>>>> and set these envvars like we need to use Greg's bproc; without the
> >>>>>>> 2nd export the machine's load maxes and locks up.
> >>>>>>>
> >>>>>>>> ndebard@sparkplug:~/ompi-test> bpstat
> >>>>>>>> Node(s)   Status   Mode         User      Group
> >>>>>>>> 100-128   down     ----------   root      root
> >>>>>>>> 0-15      up       ---x------   vchandu   vchandu
> >>>>>>>> 16-25     up       ---x------   ndebard   ndebard
> >>>>>>>> 26-27     up       ---x------   root      root
> >>>>>>>> 28-30     up       ---x--x--x   root      root
> >>>>>>>> ndebard@sparkplug:~/ompi-test> env | grep NODES
> >>>>>>>> NODES=16,17,18,19,20,21,22,23,24,25
> >>>>>>>
> >>>>>>> yes, i really have the nodes
> >>>>>>>
> >>>>>>>> ndebard@sparkplug:~/ompi-test> mpicc -o test-mpi test-mpi.c
> >>>>>>>> ndebard@sparkplug:~/ompi-test>
> >>>>>>>
> >>>>>>> recompile for good measure
> >>>>>>>
> >>>>>>>> ndebard@sparkplug:~/ompi-test> ls /tmp/openmpi-sessions-ndebard*
> >>>>>>>> /bin/ls: /tmp/openmpi-sessions-ndebard*: No such file or directory
> >>>>>>>
> >>>>>>> proof that there's no left over old directory
> >>>>>>>
> >>>>>>>> ndebard@sparkplug:~/ompi-test> mpirun -np 1 test-mpi
> >>>>>>>
> >>>>>>> it never responds at this point - but I can kill it with ^C.
> >>>>>>>
> >>>>>>>> mpirun: killing job...
> >>>>>>>> Killed
> >>>>>>>> ndebard@sparkplug:~/ompi-test>
> >>>>>>
> >>>>>> -- Nathan
> >>>>>> Correspondence
> >>>>>> ---------------------------------------------------------------------
> >>>>>> Nathan DeBardeleben, Ph.D.
> >>>>>> Los Alamos National Laboratory
> >>>>>> Parallel Tools Team
> >>>>>> High Performance Computing Environments
> >>>>>> phone: 505-667-3428
> >>>>>> email: ndeb...@lanl.gov
> >>>>>> ---------------------------------------------------------------------
> >>>>>>
> >>>>>> Jeff Squyres wrote:
> >>>>>>> Is this what Tim Prins was working on?
> >>>>>>>
> >>>>>>> On Aug 16, 2005, at 5:21 PM, Tim S. Woodall wrote:
> >>>>>>>> I'm not sure why this is even building...
> >>>>>>>> Is someone working on this?
> >>>>>>>> I thought we had .ompi_ignore files in this directory.
> >>>>>>>>
> >>>>>>>> Tim
> >>>>>>>>
> >>>>>>>> Nathan DeBardeleben wrote:
> >>>>>>>>> So I'm seeing all these nice emails about people developing on OMPI
> >>>>>>>>> today, yet I can't get it to compile. Am I out here in limbo on this,
> >>>>>>>>> or are others in the same boat? The errors I'm seeing are about some
> >>>>>>>>> bproc code calling undefined functions, and they are linked again below.
> >>>>>>>>>
> >>>>>>>>> -- Nathan
> >>>>>>>>> Correspondence
> >>>>>>>>> ---------------------------------------------------------------------
> >>>>>>>>> Nathan DeBardeleben, Ph.D.
> >>>>>>>>> Los Alamos National Laboratory
> >>>>>>>>> Parallel Tools Team
> >>>>>>>>> High Performance Computing Environments
> >>>>>>>>> phone: 505-667-3428
> >>>>>>>>> email: ndeb...@lanl.gov
> >>>>>>>>> ---------------------------------------------------------------------
> >>>>>>>>>
> >>>>>>>>> Nathan DeBardeleben wrote:
> >>>>>>>>>> Back from training and trying to test this, but now OMPI doesn't
> >>>>>>>>>> compile at all:
> >>>>>>>>>>> gcc -DHAVE_CONFIG_H -I. -I. -I../../../../include
> >>>>>>>>>>>   -I../../../../include -I../../../.. -I../../../..
> >>>>>>>>>>>   -I../../../../include -I../../../../opal -I../../../../orte
> >>>>>>>>>>>   -I../../../../ompi -g -Wall -Wundef -Wno-long-long -Wsign-compare
> >>>>>>>>>>>   -Wmissing-prototypes -Wstrict-prototypes -Wcomment -pedantic
> >>>>>>>>>>>   -Werror-implicit-function-declaration -fno-strict-aliasing -MT
> >>>>>>>>>>>   ras_lsf_bproc.lo -MD -MP -MF .deps/ras_lsf_bproc.Tpo -c
> >>>>>>>>>>>   ras_lsf_bproc.c -o ras_lsf_bproc.o
> >>>>>>>>>>> ras_lsf_bproc.c: In function `orte_ras_lsf_bproc_node_insert':
> >>>>>>>>>>> ras_lsf_bproc.c:32: error: implicit declaration of function `orte_ras_base_node_insert'
> >>>>>>>>>>> ras_lsf_bproc.c: In function `orte_ras_lsf_bproc_node_query':
> >>>>>>>>>>> ras_lsf_bproc.c:37: error: implicit declaration of function `orte_ras_base_node_query'
> >>>>>>>>>>> make[4]: *** [ras_lsf_bproc.lo] Error 1
> >>>>>>>>>>> make[4]: Leaving directory `/home/ndebard/ompi/orte/mca/ras/lsf_bproc'
> >>>>>>>>>>> make[3]: *** [all-recursive] Error 1
> >>>>>>>>>>> make[3]: Leaving directory `/home/ndebard/ompi/orte/mca/ras'
> >>>>>>>>>>> make[2]: *** [all-recursive] Error 1
> >>>>>>>>>>> make[2]: Leaving directory `/home/ndebard/ompi/orte/mca'
> >>>>>>>>>>> make[1]: *** [all-recursive] Error 1
> >>>>>>>>>>> make[1]: Leaving directory `/home/ndebard/ompi/orte'
> >>>>>>>>>>> make: *** [all-recursive] Error 1
> >>>>>>>>>>> [sparkplug]~/ompi >
> >>>>>>>>>>
> >>>>>>>>>> Clean SVN checkout this morning with configure:
> >>>>>>>>>>> [sparkplug]~/ompi > ./configure --enable-static --disable-shared
> >>>>>>>>>>>   --without-threads --prefix=/home/ndebard/local/ompi
> >>>>>>>>>>>   --with-devel-headers
> >>>>>>>>>>
> >>>>>>>>>> -- Nathan
> >>>>>>>>>> Correspondence
> >>>>>>>>>> ---------------------------------------------------------------------
> >>>>>>>>>> Nathan DeBardeleben, Ph.D.
> >>>>>>>>>> Los Alamos National Laboratory
> >>>>>>>>>> Parallel Tools Team
> >>>>>>>>>> High Performance Computing Environments
> >>>>>>>>>> phone: 505-667-3428
> >>>>>>>>>> email: ndeb...@lanl.gov
> >>>>>>>>>> ---------------------------------------------------------------------
> >>>>>>>>>>
> >>>>>>>>>> Brian Barrett wrote:
> >>>>>>>>>>> This is now fixed in SVN. You should no longer need the
> >>>>>>>>>>> --build=i586... hack to compile 32 bit code on Opterons.
> >>>>>>>>>>>
> >>>>>>>>>>> Brian
> >>>>>>>>>>>
> >>>>>>>>>>> On Aug 12, 2005, at 3:17 PM, Brian Barrett wrote:
> >>>>>>>>>>>> On Aug 12, 2005, at 3:13 PM, Nathan DeBardeleben wrote:
> >>>>>>>>>>>>> We've got a 64bit Linux (SUSE) box here. For a variety of
> >>>>>>>>>>>>> reasons (Java, JNI, linking in with OMPI libraries, etc. which I
> >>>>>>>>>>>>> won't get into) I need to compile OMPI 32 bit (or get 64bit
> >>>>>>>>>>>>> versions of a lot of other libraries).
> >>>>>>>>>>>>> I get various compile errors when I try different things, but
> >>>>>>>>>>>>> first let me explain the system we have:
> >>>>>>>>>>>>
> >>>>>>>>>>>> <snip>
> >>>>>>>>>>>>
> >>>>>>>>>>>>> This goes on and on and on actually. And the 'is incompatible
> >>>>>>>>>>>>> with i386:x86-64 output' looks to be repeated for every line
> >>>>>>>>>>>>> before this error, which actually caused the Make to bomb.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Any suggestions at all? Surely someone must have tried to force
> >>>>>>>>>>>>> OMPI to build in 32bit mode on a 64bit machine.
> >>>>>>>>>>>>
> >>>>>>>>>>>> I don't think anyone has tried to build 32 bit on an Opteron,
> >>>>>>>>>>>> which is the cause of the problems...
> >>>>>>>>>>>>
> >>>>>>>>>>>> I think I know how to fix this, but won't happen until later in
> >>>>>>>>>>>> the weekend. I can't think of a good workaround until then.
> >>>>>>>>>>>> Well, one possibility is to set the target like you were doing
> >>>>>>>>>>>> and disable ROMIO. Actually, you'll also need to disable
> >>>>>>>>>>>> Fortran 77. So something like:
> >>>>>>>>>>>>
> >>>>>>>>>>>> ./configure [usual options] --build=i586-suse-linux --disable-io-romio --disable-f77
> >>>>>>>>>>>>
> >>>>>>>>>>>> might just do the trick.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Brian
> >>>>>>>>>>>>
> >>>>>>>>>>>> --
> >>>>>>>>>>>> Brian Barrett
> >>>>>>>>>>>> Open MPI developer
> >>>>>>>>>>>> http://www.open-mpi.org/

--
---------------------------------------------------------------------
Dipl.-Inf. Rainer Keller          email: kel...@hlrs.de
  High Performance Computing      Tel: ++49 (0)711-685 5858
  Center Stuttgart (HLRS)         Fax: ++49 (0)711-678 7626
  Nobelstrasse 19, R. O0.030      http://www.hlrs.de/people/keller
  70550 Stuttgart
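A note on the failure signature in the backtraces above: frame #0 sitting at
0x0000000000000000 and entered from orte_init_stage1(), together with Rainer's
observation that orte_sds_base_module is NULL at the point of the call, looks
like the usual pattern of calling through a module/function pointer that was
never filled in because component selection did not complete. The sketch below
is illustration only, not Open MPI's actual code or fix; the example_* names
are made up. It just reproduces that crash pattern and shows the kind of
defensive check that turns the segfault into an error message.

    /* Illustration only (hypothetical example_* names, not Open MPI code).
     * A framework keeps a pointer to the selected component's module; if
     * selection never happened, the pointer stays NULL and calling through
     * it crashes exactly like the traces above (frame #0 at 0x0). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int (*contact_universe)(void);
    } example_module_t;

    /* Stays NULL when no component has been selected. */
    static example_module_t *example_base_module = NULL;

    static int example_contact_universe(void)
    {
        /* Defensive check: fail with an error instead of a segfault. */
        if (NULL == example_base_module ||
            NULL == example_base_module->contact_universe) {
            fprintf(stderr, "error: no module selected\n");
            return -1;
        }
        return example_base_module->contact_universe();
    }

    int main(void)
    {
        /* Without the NULL check, this call would dereference a NULL
         * pointer (or jump to address 0x0), as in the reported crash. */
        return (0 == example_contact_universe()) ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Whether the missing selection is itself the bug or only a symptom of the
64-bit build problem is exactly what this thread is trying to pin down.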
Open MPI: 1.0a1r6896 Open MPI SVN revision: r6896 Open RTE: 1.0a1r6896 Open RTE SVN revision: r6896 OPAL: 1.0a1r6896 OPAL SVN revision: r6896 MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0) MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0) MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0) MCA coll: basic (MCA v1.0, API v1.0, Component v1.0) MCA coll: self (MCA v1.0, API v1.0, Component v1.0) MCA io: romio (MCA v1.0, API v1.0, Component v1.0) MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0) MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0) MCA pml: teg (MCA v1.0, API v1.0, Component v1.0) MCA pml: uniq (MCA v1.0, API v1.0, Component v1.0) MCA ptl: self (MCA v1.0, API v1.0, Component v1.0) MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0) MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0) MCA ptl: gm (MCA v1.0, API v1.0, Component v1.0) MCA btl: self (MCA v1.0, API v1.0, Component v1.0) MCA btl: sm (MCA v1.0, API v1.0, Component v1.0) MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0) MCA topo: unity (MCA v1.0, API v1.0, Component v1.0) Prefix: /strider/rus/rus/raink/ompi-pgcc Bindir: /strider/rus/rus/raink/ompi-pgcc/bin Libdir: /strider/rus/rus/raink/ompi-pgcc/lib Incdir: /strider/rus/rus/raink/ompi-pgcc/include Pkglibdir: /strider/rus/rus/raink/ompi-pgcc/lib/openmpi Sysconfdir: /strider/rus/rus/raink/ompi-pgcc/etc Configured architecture: x86_64-unknown-linux-gnu Configured by: hpcraink Configured on: Thu Aug 18 16:48:09 CEST 2005 Configure host: strider Built by: hpcraink Built on: Thu Aug 18 17:04:52 CEST 2005 Built host: strider C bindings: yes C++ bindings: yes Fortran77 bindings: yes (all) Fortran90 bindings: no C compiler: pgcc C compiler absolute: /opt/pgi/5.2.4/linux86-64/5.2/bin/pgcc C char size: 1 C bool size: 1 C short size: 2 C int size: 4 C long size: 8 C float size: 4 C double size: 8 C pointer size: 8 C char align: 1 C bool align: 1 C int align: 4 C float align: 4 C double align: 8 C++ compiler: pgCC C++ compiler absolute: /opt/pgi/5.2.4/linux86-64/5.2/bin/pgCC Fortran77 compiler: pgf90 Fortran77 compiler abs: /opt/pgi/5.2.4/linux86-64/5.2/bin/pgf90 Fortran90 compiler: none Fortran90 compiler abs: none Fort integer size: 4 Fort have integer1: yes Fort have integer2: yes Fort have integer4: yes Fort have integer8: yes Fort have integer16: no Fort have real4: yes Fort have real8: yes Fort have real16: no Fort have complex8: yes Fort have complex16: yes Fort have complex32: no Fort integer1 size: 1 Fort integer2 size: 2 Fort integer4 size: 4 Fort integer8 size: 8 Fort integer16 size: -1 Fort real size: 4 Fort real4 size: 4 Fort real8 size: 8 Fort real16 size: -1 Fort dbl prec size: 4 Fort cplx size: 4 Fort dbl cplx size: 4 Fort cplx8 size: 8 Fort cplx16 size: 16 Fort cplx32 size: -1 Fort integer align: 4 Fort integer1 align: 1 Fort integer2 align: 2 Fort integer4 align: 4 Fort integer8 align: 8 Fort integer16 align: -1 Fort real align: 4 Fort real4 align: 4 Fort real8 align: 8 Fort real16 align: -1 Fort dbl prec align: 4 Fort cplx align: 4 Fort dbl cplx align: 4 Fort cplx8 align: 4 Fort cplx16 align: 8 Fort cplx32 align: -1 C profiling: yes C++ profiling: yes Fortran77 profiling: yes Fortran90 profiling: no C++ exceptions: no Thread support: posix (mpi: no, progress: no) Build CFLAGS: -g Build CXXFLAGS: -g Build FFLAGS: Build FCFLAGS: Build LDFLAGS: -export-dynamic Build LIBS: -laio -lm -lutil -lpthread Wrapper extra CFLAGS: Wrapper extra CXXFLAGS: Wrapper extra FFLAGS: Wrapper extra FCFLAGS: Wrapper extra LDFLAGS: -L/opt/gm/default/lib64 Wrapper 
extra LIBS: -lutil -lpthread -lgm -Wl,--export-dynamic -laio -lm -lutil -lpthread -ldl Internal debug support: yes MPI parameter check: runtime Memory profiling support: no Memory debugging support: no Memory hook support: yes libltdl support: 1 MCA mca: parameter "mca_param_files" (current value: "/strider/rus/rus/raink/.openmpi/mca-params.conf:/strider/rus/rus/raink/ompi-pgcc/etc/openmpi-mca-params.conf") Path for MCA configuration files containing default parameter values MCA mca: parameter "mca_component_path" (current value: "/strider/rus/rus/raink/ompi-pgcc/lib/openmpi:/strider/rus/rus/raink/.openmpi/components") Path where to look for Open MPI and ORTE components MCA mca: parameter "mca_verbose" (current value: <none>) Top-level verbosity parameter MCA mca: parameter "mca_component_show_load_errors" (current value: "1") Whether to show errors for components that failed to load or not MCA mca: parameter "mca_component_disable_dlopen" (current value: "0") Whether to attempt to disable opening dynamic components or not MCA mpi: parameter "mpi_param_check" (current value: "1") Whether you want MPI API parameters checked at run-time or not. Possible values are 0 (no checking) and 1 (perform checking at run-time) MCA mpi: parameter "mpi_signal" (current value: <none>) If a signal is received, display the stack trace frame MCA mpi: parameter "mpi_yield_when_idle" (current value: "0") Yield the processor when waiting for MPI communication (for MPI processes, will default to 1 when oversubscribing nodes) MCA mpi: parameter "mpi_event_tick_rate" (current value: "-1") How often to progress TCP communications (0 = never, all positive integers [N] indicate a fraction of progression time that is devoted to TCP progression [i.e., 1/N]) MCA mpi: parameter "mpi_show_handle_leaks" (current value: "0") Whether MPI_FINALIZE shows all MPI handles that were not freed or not MCA mpi: parameter "mpi_no_free_handles" (current value: "0") Whether to actually free MPI objects when their handles are freed MCA mpi: parameter "mpi_show_mca_params" (current value: "0") Whether to show all MCA parameter value during MPI_INIT or not (good for reproducability of MPI jobs) MCA mpi: parameter "mpi_show_mca_params_file" (current value: <none>) If mpi_show_mca_params is true, setting this string to a valid filename tells Open MPI to dump all the MCA parameter values into a file suitable for reading via the mca_param_files parameter (good for reproducability of MPI jobs) MCA mpi: parameter "mpi_leave_pinned" (current value: "0") leave_pinned MCA memory: parameter "memory_base_verbose" (current value: "0") MCA memory: parameter "memory_base" (current value: <none>) memory MCA memory: parameter "memory_malloc_hooks_priority" (current value: "0") MCA paffinity: parameter "paffinity_base" (current value: <none>) paffinity MCA allocator: parameter "allocator_base_verbose" (current value: "0") MCA allocator: parameter "allocator_base" (current value: <none>) allocator MCA allocator: parameter "allocator_basic_priority" (current value: "0") MCA allocator: parameter "allocator_bucket_num_buckets" (current value: "30") MCA allocator: parameter "allocator_bucket_priority" (current value: "0") MCA coll: parameter "coll_base_verbose" (current value: "0") MCA coll: parameter "coll_base" (current value: <none>) coll MCA coll: parameter "coll_basic_priority" (current value: "10") MCA coll: parameter "coll_self_priority" (current value: "75") MCA io: parameter "io_base_freelist_initial_size" (current value: "16") Initial MPI-2 IO 
request freelist size MCA io: parameter "io_base_freelist_max_size" (current value: "64") Max size of the MPI-2 IO request freelist MCA io: parameter "io_base_freelist_increment" (current value: "16") Increment size of the MPI-2 IO request freelist MCA io: parameter "io_base_verbose" (current value: "0") MCA io: parameter "io_base" (current value: <none>) io MCA io: parameter "io_romio_priority" (current value: "10") Priority of the io romio component MCA io: parameter "io_romio_delete_priority" (current value: "10") Delete priority of the io romio component MCA mpool: parameter "mpool_base_verbose" (current value: "0") MCA mpool: parameter "mpool_base" (current value: <none>) mpool MCA mpool: parameter "mpool_sm_size" (current value: "536870912") MCA mpool: parameter "mpool_sm_allocator" (current value: "bucket") MCA mpool: parameter "mpool_sm_priority" (current value: "0") MCA pml: parameter "pml_base_verbose" (current value: "0") MCA pml: parameter "pml_base" (current value: <none>) pml MCA pml: parameter "pml_ob1_free_list_num" (current value: "256") MCA pml: parameter "pml_ob1_free_list_max" (current value: "-1") MCA pml: parameter "pml_ob1_free_list_inc" (current value: "256") MCA pml: parameter "pml_ob1_priority" (current value: "0") MCA pml: parameter "pml_ob1_eager_limit" (current value: "131072") MCA pml: parameter "pml_ob1_send_pipeline_depth" (current value: "3") MCA pml: parameter "pml_ob1_recv_pipeline_depth" (current value: "4") MCA pml: parameter "pml_teg_free_list_num" (current value: "256") MCA pml: parameter "pml_teg_free_list_max" (current value: "-1") MCA pml: parameter "pml_teg_free_list_inc" (current value: "256") MCA pml: parameter "pml_teg_poll_iterations" (current value: "100000") MCA pml: parameter "pml_teg_priority" (current value: "1") MCA pml: parameter "pml_uniq_free_list_num" (current value: "256") MCA pml: parameter "pml_uniq_free_list_max" (current value: "-1") MCA pml: parameter "pml_uniq_free_list_inc" (current value: "256") MCA pml: parameter "pml_uniq_poll_iterations" (current value: "100000") MCA pml: parameter "pml_uniq_priority" (current value: "0") MCA pml: parameter "pml" (current value: <none>) MCA ptl: parameter "ptl_sm_free_list_num" (current value: "256") MCA ptl: parameter "ptl_sm_free_list_max" (current value: "-1") MCA ptl: parameter "ptl_sm_free_list_inc" (current value: "256") MCA ptl: parameter "ptl_sm_max_procs" (current value: "-1") MCA ptl: parameter "ptl_sm_sm_extra_procs" (current value: "2") MCA ptl: parameter "ptl_sm_mpool" (current value: "sm") MCA ptl: parameter "ptl_sm_eager_limit" (current value: "1024") MCA ptl: parameter "ptl_sm_max_frag_size" (current value: "8192") MCA ptl: parameter "ptl_sm_size_of_cb_queue" (current value: "128") MCA ptl: parameter "ptl_sm_cb_lazy_free_freq" (current value: "120") MCA ptl: parameter "ptl_base_verbose" (current value: "0") MCA ptl: parameter "ptl_base" (current value: <none>) ptl MCA ptl: parameter "ptl_self_buffer_size" (current value: "65536") MCA ptl: parameter "ptl_self_nonblocking" (current value: "1") MCA ptl: parameter "ptl_self_priority" (current value: "0") MCA ptl: parameter "ptl_sm_first_frag_free_list_num" (current value: "256") MCA ptl: parameter "ptl_sm_first_frag_free_list_max" (current value: "-1") MCA ptl: parameter "ptl_sm_first_frag_free_list_inc" (current value: "256") MCA ptl: parameter "ptl_sm_second_frag_free_list_num" (current value: "256") MCA ptl: parameter "ptl_sm_second_frag_free_list_max" (current value: "-1") MCA ptl: parameter 
"ptl_sm_second_frag_free_list_inc" (current value: "256") MCA ptl: parameter "ptl_sm_first_fragment_size" (current value: "1024") MCA ptl: parameter "ptl_sm_max_fragment_size" (current value: "8192") MCA ptl: parameter "ptl_sm_fragment_alignment" (current value: "128") MCA ptl: parameter "ptl_sm_priority" (current value: "0") MCA ptl: parameter "ptl_tcp_if_include" (current value: <none>) MCA ptl: parameter "ptl_tcp_if_exclude" (current value: "lo") MCA ptl: parameter "ptl_tcp_free_list_num" (current value: "256") MCA ptl: parameter "ptl_tcp_free_list_max" (current value: "-1") MCA ptl: parameter "ptl_tcp_free_list_inc" (current value: "256") MCA ptl: parameter "ptl_tcp_sndbuf" (current value: "131072") MCA ptl: parameter "ptl_tcp_rcvbuf" (current value: "131072") MCA ptl: parameter "ptl_tcp_exclusivity" (current value: "0") MCA ptl: parameter "ptl_tcp_first_frag_size" (current value: "65536") MCA ptl: parameter "ptl_tcp_min_frag_size" (current value: "65536") MCA ptl: parameter "ptl_tcp_max_frag_size" (current value: "-1") MCA ptl: parameter "ptl_tcp_frag_size" (current value: "65536") MCA ptl: parameter "ptl_tcp_priority" (current value: "0") MCA ptl: parameter "ptl_gm_port_name" (current value: "OMPI_GM") MCA ptl: parameter "ptl_gm_max_ports_number" (current value: "16") MCA ptl: parameter "ptl_gm_max_boards_number" (current value: "4") MCA ptl: parameter "ptl_gm_max_ptl_modules" (current value: "1") MCA ptl: parameter "ptl_gm_segment_size" (current value: "32768") MCA ptl: parameter "ptl_gm_first_frag_size" (current value: "32720") MCA ptl: parameter "ptl_gm_min_frag_size" (current value: "65536") MCA ptl: parameter "ptl_gm_max_frag_size" (current value: "268435456") MCA ptl: parameter "ptl_gm_eager_limit" (current value: "131072") MCA ptl: parameter "ptl_gm_rndv_burst_limit" (current value: "524288") MCA ptl: parameter "ptl_gm_rdma_frag_size" (current value: "131072") MCA ptl: parameter "ptl_gm_free_list_num" (current value: "256") MCA ptl: parameter "ptl_gm_free_list_inc" (current value: "32") MCA ptl: parameter "ptl_gm_priority" (current value: "0") MCA ptl: parameter "ptl_base_include" (current value: <none>) MCA ptl: parameter "ptl_base_exclude" (current value: <none>) MCA btl: parameter "btl_base_debug" (current value: "0") If btl_base_debug is 1 standard debug is output, if > 1 verbose debug is output MCA btl: parameter "btl_base_verbose" (current value: "0") MCA btl: parameter "btl_base" (current value: <none>) btl MCA btl: parameter "btl_self_free_list_num" (current value: "256") MCA btl: parameter "btl_self_free_list_max" (current value: "-1") MCA btl: parameter "btl_self_free_list_inc" (current value: "256") MCA btl: parameter "btl_self_eager_limit" (current value: "131072") MCA btl: parameter "btl_self_max_send_size" (current value: "262144") MCA btl: parameter "btl_self_max_rdma_size" (current value: "2147483647") MCA btl: parameter "btl_self_exclusivity" (current value: "65536") MCA btl: parameter "btl_self_flags" (current value: "2") MCA btl: parameter "btl_self_priority" (current value: "0") MCA btl: parameter "btl_sm_priority" (current value: "0") MCA btl: parameter "btl_tcp_if_include" (current value: <none>) MCA btl: parameter "btl_tcp_if_exclude" (current value: "lo") MCA btl: parameter "btl_tcp_free_list_num" (current value: "8") MCA btl: parameter "btl_tcp_free_list_max" (current value: "1024") MCA btl: parameter "btl_tcp_free_list_inc" (current value: "32") MCA btl: parameter "btl_tcp_sndbuf" (current value: "131072") MCA btl: parameter "btl_tcp_rcvbuf" (current 
value: "131072") MCA btl: parameter "btl_tcp_exclusivity" (current value: "0") MCA btl: parameter "btl_tcp_eager_limit" (current value: "65536") MCA btl: parameter "btl_tcp_min_send_size" (current value: "65536") MCA btl: parameter "btl_tcp_max_send_size" (current value: "262144") MCA btl: parameter "btl_tcp_min_rdma_size" (current value: "262144") MCA btl: parameter "btl_tcp_max_rdma_size" (current value: "2147483647") MCA btl: parameter "btl_tcp_flags" (current value: "268435458") MCA btl: parameter "btl_tcp_priority" (current value: "0") MCA btl: parameter "btl_base_include" (current value: <none>) MCA btl: parameter "btl_base_exclude" (current value: <none>) MCA topo: parameter "topo_base_verbose" (current value: "0") MCA topo: parameter "topo_base" (current value: <none>) topo MCA errmgr: parameter "errmgr_base" (current value: <none>) errmgr MCA gpr: parameter "gpr_base_maxsize" (current value: "2147483647") MCA gpr: parameter "gpr_base_blocksize" (current value: "512") MCA gpr: parameter "gpr_base" (current value: <none>) gpr MCA iof: parameter "iof_base_window_size" (current value: "4096") MCA iof: parameter "iof_base_service" (current value: "0.0.0") MCA iof: parameter "iof_base" (current value: <none>) iof MCA ns: parameter "ns_base" (current value: <none>) ns MCA oob: parameter "oob_base_verbose" (current value: "0") MCA oob: parameter "oob_base" (current value: <none>) oob MCA ras: parameter "ras_base" (current value: <none>) ras MCA rds: parameter "rds_base" (current value: <none>) rds MCA rmaps: parameter "rmaps_base" (current value: <none>) rmaps MCA rmgr: parameter "rmgr_base" (current value: <none>) rmgr MCA rml: parameter "rml_base_verbose" (current value: "0") MCA rml: parameter "rml_base" (current value: <none>) rml MCA pls: parameter "pls_base" (current value: <none>) pls MCA sds: parameter "sds_base_verbose" (current value: "0") MCA sds: parameter "sds_base" (current value: <none>) sds MCA soh: parameter "soh_base" (current value: <none>) soh
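For what it's worth, the ompi_info output above reports a 64-bit build
(C long size: 8, C pointer size: 8, pgcc from linux86-64). When juggling the
-m32 and --build=i586-suse-linux builds discussed earlier in the thread, a
quick way to confirm what word size a given compiler-and-flags combination
actually produces is a tiny sizeof check. A minimal sketch, assuming you
compile it with the same compiler and flags you pass to Open MPI:

    /* Word-size check, for comparison with the "C long size" and
     * "C pointer size" lines in the ompi_info output above.
     * A 32-bit (-m32) build should print 4 and 4; an x86_64 build 8 and 8. */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(long)   = %lu\n", (unsigned long) sizeof(long));
        printf("sizeof(void *) = %lu\n", (unsigned long) sizeof(void *));
        return 0;
    }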