Alberto,
I mean, Álvaro,
I mean, Fernando,
I mean, Ricardo Reis ...

Greetings, man!

I don't think MPI guarantees that the output will come out ordered
according to process rank, as in your expected output list.
Even MPI_Barrier doesn't synchronize the output, I suppose.
It only synchronizes the processes at that point in the program,
and you actually have no communication in your code!
(Other than the barriers themselves, of course.)
A barrier returns once every rank has entered it;
it says nothing about when each rank's buffered output
actually reaches the terminal.

Each process has its own stdout buffer,
and the processes probably compete for access
to the (single) output stream
when they hit "call flush", I would guess.
The Linux scheduler may referee the game here,
deciding who comes in first, second, third, and so on.
But I'm not knowledgeable about these internals,
I'm just guessing.
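
(As an aside, and if I remember right, Open MPI's mpirun has a
"--tag-output" option that prefixes each line of output with the
rank that wrote it. It won't order the lines, but it does make
the interleaving visible.)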

Note that both lists you sent have exactly the same lines,
though in a different order.
To me this says there is nothing wrong
with MPI_Barrier or with your code.
A shuffled output order is to be expected, no more, no less.
And the order will probably vary from run to run, right?

Also, your outer loop runs istep from 1 to nprocs (4 here),
and rank zero prints an asterisk at each outer-loop iteration.
Hence, I think four asterisks, not three, should be expected, right?
Four asterisks is what I see in your first list (the shuffled one),
but not in the ordered one.

Now, the question is how to produce the
ordered output you want.

One way would be to send every message to process 0
and let it print them in rank order, à la "hello_world",
but this would be kind of cheating.
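
A minimal sketch of that idea, assuming one fixed-width line per
rank and using MPI_Gather to collect them at rank 0 (the names
"line" and "lines" and the line width are my choices, not anything
from your code):

program ordered_print

  implicit none

  include 'mpif.h'

  integer, parameter :: linelen = 80
  character(len=linelen) :: line
  character(len=linelen), allocatable :: lines(:)
  integer :: ierr, my_rank, nprocs, irank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! each rank formats its message into a fixed-width buffer
  write(line, '("my_rank ",I5)') my_rank

  ! rank 0 receives one line per rank, already ordered by rank
  allocate(lines(nprocs))
  call MPI_Gather(line,  linelen, MPI_CHARACTER, &
                  lines, linelen, MPI_CHARACTER, &
                  0, MPI_COMM_WORLD, ierr)

  ! only rank 0 touches stdout, so the print order is deterministic
  if (my_rank == 0) then
     do irank = 1, nprocs
        print '(A)', trim(lines(irank))
     end do
  end if

  deallocate(lines)
  call MPI_Finalize(ierr)

end program ordered_print

Since rank 0 is the only process that ever writes to stdout, the
order is deterministic; the price is that every message must fit
the fixed width and travel through rank 0.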
Maybe there is also a solution with MPI-IO,
where each process writes to a shared file at a rank-based offset,
so the file comes out concatenated the way you want,
and is flushed at the end.
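
Again just a sketch, assuming fixed-width, newline-terminated
records and a hypothetical file name "ordered.txt":

program ordered_file

  implicit none

  include 'mpif.h'

  integer, parameter :: linelen = 32
  character(len=linelen) :: line
  integer :: ierr, my_rank, nprocs, fh
  integer(kind=MPI_OFFSET_KIND) :: offset

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! fixed-width record, newline in the last column, so offsets line up
  write(line, '("my_rank ",I5)') my_rank
  line(linelen:linelen) = char(10)

  call MPI_File_open(MPI_COMM_WORLD, 'ordered.txt', &
       MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)

  ! each rank's record lands at rank * linelen, so no two ranks overlap
  offset = int(my_rank, MPI_OFFSET_KIND) * linelen
  call MPI_File_write_at(fh, offset, line, linelen, MPI_CHARACTER, &
       MPI_STATUS_IGNORE, ierr)

  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)

end program ordered_file

The fixed record width is what makes the offsets computable;
variable-length lines would need an extra step, e.g. gathering the
line lengths first to work out per-rank offsets.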

**

Argh, merciless binary logic!

"Onde pode acolher-se um fraco humano,
Onde tera' segura a curta vida,
Que na~o se arme e se indigne o Ce'u sereno
contra um bicho da terra ta~o pequeno?"

Tell me?

Regards,
Gus
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
Ricardo Reis wrote:

 Hi

I'm testing this on a Debian box, Open MPI 1.3-2, compiled with the GCC suite (all from packages). After compiling and running the code I'm baffled by the output: it seems MPI_Barrier is not working. Maybe I'm making such a basic error that I can't figure it out... See the code below, the output it gives (one of several, because it's a bit erratic), and what I would expect as output. Any help would be appreciated...

 Code was compiled with

 mpif90 -O0 -g -fbounds-check -Wall test_mpi.f90 -o test_mpi

 - > code - cut here ----------------------

program testmpi

  use iso_fortran_env

  implicit none

  include 'mpif.h'

  integer, parameter :: ni=16,nj=16,nk=16

  integer, parameter :: stdout=output_unit, stderr=error_unit, &
       stdin=input_unit

  integer :: istep,  idest, idx, &
       ierr, my_rank, world, nprocs

  ! > CODE STARTS ----------------------------------------------- *

  call MPI_Init(ierr)

  world = MPI_COMM_WORLD
  call MPI_comm_rank(world, my_rank, ierr)
  call MPI_comm_size(world, nprocs, ierr)

  call MPI_Barrier(world, ierr)

  do istep=1, nprocs

     idest=ieor(my_rank, istep)   ! XOR partner of my_rank for this step

     if(my_rank.eq.0) print '("*",/)'
     call flush(stdout)

     call MPI_Barrier(world, ierr)

     do idx=0,nprocs-1

        if(idx.eq.my_rank .and. idest.lt.nprocs)then
           print '("ISTEP",I2," IDX",I2," my_rank ",I5," idest ",I5)', &
               istep, idx, my_rank, idest
           call flush(stdout)
        endif

        call MPI_Barrier(world, ierr)
     enddo

     call MPI_Barrier(world, ierr)

  enddo


  call MPI_Barrier(world, ierr)
  call MPI_Finalize(ierr)


end program testmpi

 - < code - cut here ----------------------

 - > output - cut here ----------------------

*

ISTEP 1 IDX 1 my_rank     1 idest     0
ISTEP 2 IDX 1 my_rank     1 idest     3
ISTEP 1 IDX 2 my_rank     2 idest     3
ISTEP 2 IDX 2 my_rank     2 idest     0
ISTEP 1 IDX 3 my_rank     3 idest     2
ISTEP 1 IDX 0 my_rank     0 idest     1
*

ISTEP 2 IDX 0 my_rank     0 idest     2
ISTEP 2 IDX 3 my_rank     3 idest     1
ISTEP 3 IDX 3 my_rank     3 idest     0
ISTEP 3 IDX 1 my_rank     1 idest     2
ISTEP 3 IDX 2 my_rank     2 idest     1
*

ISTEP 3 IDX 0 my_rank     0 idest     3
*

 - < output - cut here ----------------------



 - > expected output - cut here ----------------------

*

ISTEP 1 IDX 0 my_rank     0 idest     1
ISTEP 1 IDX 1 my_rank     1 idest     0
ISTEP 1 IDX 2 my_rank     2 idest     3
ISTEP 1 IDX 3 my_rank     3 idest     2

*

ISTEP 2 IDX 0 my_rank     0 idest     2
ISTEP 2 IDX 1 my_rank     1 idest     3
ISTEP 2 IDX 2 my_rank     2 idest     0
ISTEP 2 IDX 3 my_rank     3 idest     1

*

ISTEP 3 IDX 0 my_rank     0 idest     3
ISTEP 3 IDX 1 my_rank     1 idest     2
ISTEP 3 IDX 2 my_rank     2 idest     1
ISTEP 3 IDX 3 my_rank     3 idest     0

 - < expected output - cut here ----------------------

 Ricardo Reis

 'Non Serviam'

 PhD candidate @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 http://www.lasef.ist.utl.pt

 Cultural Instigator @ Rádio Zero
 http://www.radiozero.pt

 Keep them Flying! Ajude a/help Aero Fénix!

 http://www.aeronauta.com/aero.fenix

 http://www.flickr.com/photos/rreis/

