Intel compiler 11.0.074
OpenMPI 1.4.1

Two different OSes: CentOS 5.4 (2.6.18 kernel) and Fedora 12 (2.6.32 kernel).
Two different CPUs: Opteron 248 and Opteron 8356.

Same binary for OpenMPI, and same binary for the user code (VASP compiled for
the older architecture).

When I supply a rankfile, the result depends on the OS/CPU combination:

CentOS + Opt8356: works
CentOS + Opt248:  works
Fedora + Opt8356: works
Fedora + Opt248:  fails

The rankfile (in the Opt248 case) is:

rank 0=node014 slot=1
rank 1=node014 slot=0
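
For manual testing, such a rankfile can also be passed explicitly, e.g. with
something like the line below (just a sketch; normally the rankfile comes from
my Torque patch, and -rf/--rankfile is the OpenMPI rankfile option, adjust
names/paths to your setup):

$ mpirun -rf ./rankfile -np 2 vasp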

I tried playing with the rankfile format and leaving only one slot (and
starting only one process); it doesn't change the result.
Without a rankfile it works on all combinations.
Just in case: all of this happens inside a cpuset that always contains all the
slots given in the rankfile (I use Torque with cpusets and my custom Torque
patch, which also creates the rankfile for OpenMPI; this way MPI tasks are
bound to particular cores and multithreaded codes are limited to the given
cpuset).
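
(A quick way to double-check the binding and the cpuset from another shell on
the node, with plain Linux tools; <pid> stands for the PID of a vasp process:)

$ taskset -cp <pid>               # affinity list the process is actually bound to
$ grep Cpus_allowed /proc/<pid>/status
$ cat /proc/<pid>/cpuset          # cpuset the process belongs to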

As far as I remember, it also works without problems on both hardware setups
with 1.3.x/1.4.0 and the 2.6.30 kernel from openSUSE 11.1.

Strangely, when I run the OSU benchmarks (osu_bw etc.), they work without any
problems.


And finally, two error logs (starting 1 and 2 processes):

$ mpirun -mca paffinity_base_verbose 8  -np 1 vasp
[node014:26373] mca:base:select:(paffinity) Querying component [linux]
[node014:26373] mca:base:select:(paffinity) Query of component [linux] set priority to 10
[node014:26373] mca:base:select:(paffinity) Selected component [linux]
[node014:26373] paffinity slot assignment: slot_list == 1
[node014:26373] paffinity slot assignment: rank 0 runs on cpu #1 (#1)
[node014:26374] mca:base:select:(paffinity) Querying component [linux]
[node014:26374] mca:base:select:(paffinity) Query of component [linux] set priority to 10
[node014:26374] mca:base:select:(paffinity) Selected component [linux]
[node014:26374] paffinity slot assignment: slot_list == 1
[node014:26374] paffinity slot assignment: rank 0 runs on cpu #1 (#1)
[node014:26374] *** An error occurred in MPI_Comm_rank
[node014:26374] *** on a NULL communicator
[node014:26374] *** Unknown error
[node014:26374] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
libmpi.so.0        00002ACC26BB36C3  Unknown               Unknown  Unknown
libmpi.so.0        00002ACC26BA0EB8  Unknown               Unknown  Unknown
libmpi.so.0        00002ACC26BA0B4B  Unknown               Unknown  Unknown
libmpi.so.0        00002ACC26BCF77E  Unknown               Unknown  Unknown
libmpi_f77.so.0    00002ACC269528FB  Unknown               Unknown  Unknown
vasp               000000000046FE66  Unknown               Unknown  Unknown
vasp               0000000000486102  Unknown               Unknown  Unknown
vasp               000000000042A1AB  Unknown               Unknown  Unknown
vasp               000000000042A02C  Unknown               Unknown  Unknown
libc.so.6          000000364DE1EB1D  Unknown               Unknown  Unknown
vasp               0000000000429F29  Unknown               Unknown  Unknown
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 26374 on
node node014 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

$ mpirun -mca paffinity_base_verbose 8  -np 2 vasp
[node014:26402] mca:base:select:(paffinity) Querying component [linux]
[node014:26402] mca:base:select:(paffinity) Query of component [linux] set priority to 10
[node014:26402] mca:base:select:(paffinity) Selected component [linux]
[node014:26402] paffinity slot assignment: slot_list == 1
[node014:26402] paffinity slot assignment: rank 0 runs on cpu #1 (#1)
[node014:26402] paffinity slot assignment: slot_list == 0
[node014:26402] paffinity slot assignment: rank 1 runs on cpu #0 (#0)
[node014:26403] mca:base:select:(paffinity) Querying component [linux]
[node014:26403] mca:base:select:(paffinity) Query of component [linux] set priority to 10
[node014:26403] mca:base:select:(paffinity) Selected component [linux]
[node014:26404] mca:base:select:(paffinity) Querying component [linux]
[node014:26404] mca:base:select:(paffinity) Query of component [linux] set priority to 10
[node014:26404] mca:base:select:(paffinity) Selected component [linux]
[node014:26403] paffinity slot assignment: slot_list == 1
[node014:26403] paffinity slot assignment: rank 0 runs on cpu #1 (#1)
[node014:26403] *** An error occurred in MPI_Comm_rank
[node014:26403] *** on a NULL communicator
[node014:26403] *** Unknown error
[node014:26403] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[node014:26404] paffinity slot assignment: slot_list == 0
[node014:26404] paffinity slot assignment: rank 1 runs on cpu #0 (#0)
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
libmpi.so.0        00002B06529E76C3  Unknown               Unknown  Unknown
libmpi.so.0        00002B06529D4EB8  Unknown               Unknown  Unknown
libmpi.so.0        00002B06529D4B4B  Unknown               Unknown  Unknown
libmpi.so.0        00002B0652A0377E  Unknown               Unknown  Unknown
libmpi_f77.so.0    00002B06527868FB  Unknown               Unknown  Unknown
vasp               000000000046FE66  Unknown               Unknown  Unknown
vasp               0000000000486102  Unknown               Unknown  Unknown
vasp               000000000042A1AB  Unknown               Unknown  Unknown
vasp               000000000042A02C  Unknown               Unknown  Unknown
libc.so.6          000000364DE1EB1D  Unknown               Unknown  Unknown
vasp               0000000000429F29  Unknown               Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
libmpi.so.0        00002B808D8266C3  Unknown               Unknown  Unknown
libmpi.so.0        00002B808D813EB8  Unknown               Unknown  Unknown
libmpi.so.0        00002B808D813B4B  Unknown               Unknown  Unknown
libmpi.so.0        00002B808D84277E  Unknown               Unknown  Unknown
libmpi_f77.so.0    00002B808D5C58FB  Unknown               Unknown  Unknown
vasp               000000000046FE66  Unknown               Unknown  Unknown
vasp               0000000000486102  Unknown               Unknown  Unknown
vasp               000000000042A1AB  Unknown               Unknown  Unknown
vasp               000000000042A02C  Unknown               Unknown  Unknown
libc.so.6          000000364DE1EB1D  Unknown               Unknown  Unknown
vasp               0000000000429F29  Unknown               Unknown  Unknown
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 26403 on
node node014 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[node014:26402] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
[node014:26402] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages



Anton
