Hi Ralph, thank you for your comment.

I understand what you mean. As you pointed out, I have one process sleep
before the finalize sequence, so MUMPS finalize might indeed affect the behavior.
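
Just to make sure we are talking about the same scenario, here is a
minimal sketch of what you describe below. The MPI_BARRIER only stands
in for whatever collective MUMPS finalize might issue - that is my
assumption, not confirmed MUMPS behavior:

      INCLUDE 'mpif.h'
c     sketch only: rank 0 sleeps while the other ranks wait in a
c     collective.  the barrier is an assumed stand-in for whatever
c     MUMPS finalize may do; with busy-polling progress the waiting
c     ranks would show the high cpu usage.
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
      IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! RANK 0 SLEEPS 180 SEC.
      CALL MPI_BARRIER( MPI_COMM_WORLD, IERR ) ! OTHER RANKS POLL HERE
      CALL MPI_FINALIZE(IERR)
      END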

I will remove the MUMPS finalize (and/or initialize) calls from my testing
program and try again next Monday to make my point clear.
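
The stripped-down test would look roughly like this (just a sketch of
the plan; it is the same program as before with the DMUMPS calls
removed):

      INCLUDE 'mpif.h'
c     planned test: identical to the original program quoted below,
c     but with the DMUMPS initialize/finalize calls removed, so only
c     MPI_Init / sleep / MPI_Finalize remain.
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
      IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
      CALL MPI_FINALIZE(IERR)
      END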

Regards, tmishima

> I'm not sure - just fishing for possible answers. When we see high cpu
> usage, it usually occurs during MPI communications - when a process is
> waiting for a message to arrive, it polls at a high rate to keep the
> latency as low as possible. Since you have one process "sleep" before
> calling the finalize sequence, it could be that the other process is
> getting held up on a receive and thus eating the cpu.
>
> There really isn't anything special going on during Init/Finalize, and
> OMPI itself doesn't have any MPI communications in there. I'm not
> familiar with MUMPS, but if MUMPS finalize is doing something like an
> MPI_Barrier to ensure the procs finalize together, then that would
> explain what you see. The docs I could find imply there is some MPI
> embedded in MUMPS, but I couldn't find anything specific about finalize.
>
>
> On Oct 25, 2012, at 6:43 PM, tmish...@jcity.maeda.co.jp wrote:
>
> >
> >
> > Hi Ralph,
> >
> > do you really mean "MUMPS finalize"? I don't think it has much
> > relation to this behavior.
> >
> > Anyway, I'm just a MUMPS user. I have to ask the MUMPS developers
> > about what MUMPS initialize and finalize do.
> >
> > Regards,
> > tmishima
> >
> >> Out of curiosity, what does MUMPS finalize do? Does it send a
> >> message or do a barrier operation?
> >>
> >>
> >> On Oct 25, 2012, at 5:53 PM, tmish...@jcity.maeda.co.jp wrote:
> >>
> >>>
> >>>
> >>> Hi,
> >>>
> >>> I find that the system CPU time of openmpi-1.7rc1 is quite
> >>> different from that of openmpi-1.6.2, as shown in the attached
> >>> ganglia display.
> >>>
> >>> About 2 years ago, I reported a similar behavior with openmpi-1.4.3.
> >>> The testing method is the same one I used at that time.
> >>> (please see my post entitled "SYSTEM CPU with OpenMPI 1.4.3")
> >>>
> >>> Is this due to a pre-release version's check routine, or is
> >>> something going wrong?
> >>>
> >>> Best regards,
> >>> Tetsuya Mishima
> >>>
> >>> ------------------
> >>> Testing program:
> >>>     INCLUDE 'mpif.h'
> >>>     INCLUDE 'dmumps_struc.h'
> >>>     TYPE (DMUMPS_STRUC) MUMPS_PAR
> >>> c
> >>>     MUMPS_PAR%COMM = MPI_COMM_WORLD
> >>>     MUMPS_PAR%SYM = 1
> >>>     MUMPS_PAR%PAR = 1
> >>>     MUMPS_PAR%JOB = -1 ! INITIALIZE MUMPS
> >>>     CALL MPI_INIT(IERR)
> >>>     CALL DMUMPS(MUMPS_PAR)
> >>> c
> >>>     CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
> >>>     IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
> >>> c
> >>>     MUMPS_PAR%JOB = -2 ! FINALIZE MUMPS
> >>>     CALL DMUMPS(MUMPS_PAR)
> >>>     CALL MPI_FINALIZE(IERR)
> >>> c
> >>>     END
> >>> ( This does nothing but call the initialize & finalize
> >>> routines of MUMPS & MPI)
> >>>
> >>> command line : mpirun -host node03 -np 16 ./testrun
> >>>
> >>> (See attached file: openmpi17rc1-cmp.bmp)
