Hello,

I have built Open MPI 1.4.5rc2 with the Intel 12.1 compilers in an HPC
environment.  We are running RHEL 5 (kernel 2.6.18-238) on Intel Xeon
X5660 CPUs; you can find my build options below.  To test the Open MPI
build, I compiled "Hello world" with an MPI_Init call in both C and
Fortran.  Running either version with mpirun on a single node results
in a segfault; I have attached the pertinent portion of gdb's output
from the "Hello world" core dump.  Submitting a parallel "Hello world"
job through Torque results in segfaults across the respective nodes.
However, if I rerun mpirun on the C or Fortran "Hello world"
immediately after a segfault, the program exits successfully.
Additionally, if I run mpirun under strace, "Hello world" runs
successfully both on a single node and on multiple nodes in parallel.
I am unsure how to proceed; any help would be greatly appreciated.
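For reference, the C test case is essentially the canonical MPI "Hello
world" (the Fortran version is analogous); the exact source isn't
reproduced here, but a minimal sketch of what was compiled and run is:

```c
/* hello.c -- minimal MPI test case, a sketch of the program described
 * above (compiled with mpicc hello.c -o hello, run via mpirun). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Nothing beyond MPI_Init/MPI_Finalize and the two basic communicator
queries is involved, so the crash appears to happen inside the MPI
runtime rather than in user code.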


Thank you in advance,

Dan Milroy


Build options:

        source /ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/iccvars.sh intel64
        source /ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/ifortvars.sh intel64
        export CC=/ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/intel64/icc
        export CXX=/ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/intel64/icpc
        export F77=/ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/intel64/ifort
        export F90=/ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/intel64/ifort
        export FC=/ics_2012.0.032/composer_xe_2011_sp1.6.233/bin/intel64/ifort
        ./configure --prefix=/openmpi-1.4.5rc2_intel-12.1 \
            --with-tm=/torque-2.5.8/ --enable-shared --enable-static --without-psm

Attachment: GDB_hello.c_core_dump
