Thanks for the reminder.
Note that from a standards perspective, MPI_REDUCE *does* require at
least one element -- MPI-2.2 p163:34-35:
"Each process can provide one element, or a sequence of elements..."
So I think that George's assertion is correct: your test code is incorrect.
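For reference, a minimal conformant call has every process contribute at
least one element. A quick sketch (the buffers here are hypothetical, not
from your test code):

    int ierr, in = 1, out = 0;  /* count >= 1, distinct send/recv buffers */
    ierr = MPI_Reduce(&in, &out, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);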
But that's not what is causing your example to fail. Here's the issue in
OMPI's MPI_Reduce:
    } else if ((ompi_comm_rank(comm) != root && MPI_IN_PLACE == sendbuf) ||
               (ompi_comm_rank(comm) == root &&
                ((MPI_IN_PLACE == recvbuf) || (sendbuf == recvbuf)))) {
        err = MPI_ERR_ARG;
The "sendbuf == recvbuf" check is what causes the MPI exception. I would say
that we're not consistent about disallowing that (e.g., such checks are not in
MPI_SCAN and the others you cited).
But FWIW, we do have special-case logic in there because a popular benchmark
(IMB) gets it wrong and calls MPI_REDUCE with a zero count (or at least, it
used to -- I don't know if it has since been fixed). This is a case where we
were backed into a corner: users kept complaining that OMPI was broken because
it would fail to run IMB (although the opposite was actually true). So even
though we didn't want to add the exception, we pretty much had to. :-\
Hence, we're not failing your example because of a 0 count -- we're failing
your example because you didn't use MPI_IN_PLACE. For example, the following
works (because of the IMB zero-count exception):
    ierr = MPI_Reduce(
        (void*) 1, (void*) 2,
        0,
        MPI_INT,
        MPI_SUM,
        0,
        MPI_COMM_WORLD);
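And for completeness, here is a sketch of the MPI_IN_PLACE variant that
George suggested ('buf' is a hypothetical buffer): the root passes
MPI_IN_PLACE as sendbuf, so sendbuf never aliases recvbuf and the check
above is never tripped:

    int ierr, rank, buf = 1;  /* hypothetical data */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* root: MPI_IN_PLACE means "take my contribution from recvbuf" */
        ierr = MPI_Reduce(MPI_IN_PLACE, &buf, 1, MPI_INT, MPI_SUM,
                          0, MPI_COMM_WORLD);
    } else {
        /* non-root: the recvbuf argument is ignored */
        ierr = MPI_Reduce(&buf, NULL, 1, MPI_INT, MPI_SUM,
                          0, MPI_COMM_WORLD);
    }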
On Feb 9, 2010, at 5:01 PM, Lisandro Dalcín wrote:
> BUMP. See http://code.google.com/p/mpi4py/issues/detail?id=14
>
>
> On 12 December 2009 00:31, Lisandro Dalcin <[email protected]> wrote:
> > On Thu, Dec 10, 2009 at 4:26 PM, George Bosilca <[email protected]> wrote:
> >> Lisandro,
> >>
> >> This code is not correct from the MPI standard's perspective. The reason
> >> is independent of the datatype or count; it is solely that MPI_Reduce
> >> cannot accept a sendbuf equal to the recvbuf (one has to use MPI_IN_PLACE
> >> instead).
> >>
> >
> > George, I have to disagree. Zero-length buffers are a very special
> > case, and the MPI standard is not very explicit about this edge case. Try
> > the code pasted at the end.
> >
> > 1) In Open MPI, the only one of these calls that fails for
> > sbuf == rbuf == NULL is MPI_Reduce().
> >
> > 2) For reference, all of these calls succeed in MPICH2.
> >
> >
> >
> > #include <mpi.h>
> > #include <stdlib.h>
> >
> > int main(int argc, char **argv) {
> >   int ierr;
> >   MPI_Init(&argc, &argv);
> >   /* Zero-count reductions with sbuf == rbuf == NULL: */
> >   ierr = MPI_Scan(NULL, NULL, 0, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
> >   ierr = MPI_Exscan(NULL, NULL, 0, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
> >   ierr = MPI_Allreduce(NULL, NULL, 0, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
> > #if 1
> >   /* The only one of the four that fails under Open MPI: */
> >   ierr = MPI_Reduce(NULL, NULL, 0, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
> > #endif
> >   MPI_Finalize();
> >   return 0;
> > }
> >
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594
>
--
Jeff Squyres
[email protected]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/