I'm using version 1.6.4 on all nodes.
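
In case it is useful for checking this, a small per-rank probe along the following lines can confirm which node and which library each rank actually picks up. This is only a sketch using the standard MPI_GET_VERSION and MPI_GET_PROCESSOR_NAME calls; it reports the MPI standard level and the host name, not the exact Open MPI release.

include 'mpif.h'
character(len=MPI_MAX_PROCESSOR_NAME) :: host
integer :: irank, iver, isub, ilen, ierr

call mpi_init(ierr)
call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)

! MPI standard level supported by the library this rank linked against.
call mpi_get_version(iver, isub, ierr)

! Name of the node this rank is actually running on.
call mpi_get_processor_name(host, ilen, ierr)

write(*,*) 'rank', irank, ' on ', host(1:ilen), &
           ', MPI standard ', iver, '.', isub

call mpi_finalize(ierr)
end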

(2013/05/15 7:10), Jeff Squyres (jsquyres) wrote:
Are you sure that you have exactly the same version of Open MPI on all your 
nodes?


On May 14, 2013, at 11:39 AM, Hayato KUNIIE <kuni...@oita.email.ne.jp> wrote:

Hello, I'm kuni255.

I built a Beowulf-type PC cluster (CentOS release 6.4), and I am studying
MPI (Open MPI version 1.6.4). I tried the following sample, which uses
MPI_REDUCE.

Then the error below occurred.

This cluster consists of one head node and two slave nodes. The head node's
home directory is shared with the slave nodes via NFS, and Open MPI is
installed on each node.

When I run this program on the head node alone, it runs correctly and prints
the result. But when I run it on a slave node alone, the error below occurs.

Please let me know if you have any ideas. :)

Error message
[bwslv01:30793] *** An error occurred in MPI_Reduce: the reduction
operation MPI_SUM is not defined on the MPI_INTEGER datatype
[bwslv01:30793] *** on communicator MPI_COMM_WORLD
[bwslv01:30793] *** MPI_ERR_OP: invalid reduce operation
[bwslv01:30793] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 30793 on
node bwslv01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[bwhead.clnet:02147] 1 more process has sent help message
help-mpi-errors.txt / mpi_errors_are_fatal
[bwhead.clnet:02147] Set MCA parameter "orte_base_help_aggregate" to 0
to see all help / error messages




Fortran 90 source code:
include 'mpif.h'
parameter (nmax=12)
integer n(nmax)

call mpi_init(ierr)
call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)

! Block decomposition: each rank owns nmax/isize consecutive elements.
ista = irank*(nmax/isize) + 1
iend = ista + (nmax/isize - 1)

! Each rank fills and sums its own block.
isum = 0
do i = ista, iend
   n(i) = i
   isum = isum + n(i)
end do

! Combine the partial sums into itmp on rank 0.
call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM, &
                0, MPI_COMM_WORLD, ierr)

if (irank == 0) then
   isum = itmp
   write(*,*) isum
end if

call mpi_finalize(ierr)
end
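
(For reference, if it helps to see the error inside the program instead of the
MPI_ERRORS_ARE_FATAL abort, the reduce call can be wrapped as in the sketch
below. It uses the standard MPI_COMM_SET_ERRHANDLER and MPI_ERROR_STRING calls
and is not part of the sample above.)

include 'mpif.h'
character(len=MPI_MAX_ERROR_STRING) :: emsg
integer :: isum, itmp, ilen, ierr, ierr2

call mpi_init(ierr)

! Have MPI calls on this communicator return an error code
! instead of aborting the whole job.
call mpi_comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

isum = 1
call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM, &
                0, MPI_COMM_WORLD, ierr)

if (ierr /= MPI_SUCCESS) then
   call mpi_error_string(ierr, emsg, ilen, ierr2)
   write(*,*) 'MPI_Reduce failed: ', emsg(1:ilen)
end if

call mpi_finalize(ierr)
end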
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users