On 28 September 2014 at 19:13, George Bosilca <bosi...@icl.utk.edu> wrote:
> Lisandro,
>
> Good catch. Indeed, MPI_Ireduce_scatter was not covering the case where
> MPI_IN_PLACE was used over a communicator with a single participant. I
> pushed a patch and scheduled it for 1.8.4. Check
> https://svn.open-mpi.org/trac/ompi/ticket/4924 for more info.
>

While your change fixed the failure when MPI_IN_PLACE is used, 1.8.4
now seems to fail when MPI_IN_PLACE is *not* used: the non-blocking
MPI_Ireduce_scatter produces a wrong result on a single-process run.
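
For reference, here is a minimal sketch of the single-participant
in-place case your patch addressed (MPI_COMM_SELF and the buffer
contents are my assumptions, not taken from the ticket); it goes
inside the usual MPI_Init/MPI_Finalize pair:

  int buf[1] = {1}, rcounts[1] = {1};
  MPI_Request request;
  /* MPI_IN_PLACE: the input operand is taken from the receive buffer */
  MPI_Ireduce_scatter(MPI_IN_PLACE, buf, rcounts, MPI_INT,
                      MPI_SUM, MPI_COMM_SELF, &request);
  MPI_Wait(&request, MPI_STATUS_IGNORE);
  /* buf[0] should still be 1: the sum over a single rank */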

Please try the attached example:

$ mpicc -DNBCOLL=0 ireduce_scatter.c
$ mpiexec -n 2 ./a.out
[0] rbuf[0]= 2  expected: 2
[0] rbuf[1]= 0  expected: 0
[1] rbuf[0]= 2  expected: 2
[1] rbuf[1]= 0  expected: 0
$ mpiexec -n 1 ./a.out
[0] rbuf[0]= 1  expected: 1


$ mpicc -DNBCOLL=1 ireduce_scatter.c
$ mpiexec -n 2 ./a.out
[0] rbuf[0]= 2  expected: 2
[0] rbuf[1]= 0  expected: 0
[1] rbuf[0]= 2  expected: 2
[1] rbuf[1]= 0  expected: 0
$ mpiexec -n 1 ./a.out
[0] rbuf[0]= 0  expected: 1

The last result is wrong. I'm not sure what's going on. Am I missing something?
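
For reference, with rcounts[i] = 1 for every i, reduce-scatter should
leave rank i holding the sum of sendbuf[i] across all ranks, so
recvbuf[0] = size and every other entry stays 0. With one process that
sum is just sendbuf[0] = 1, yet the non-blocking run prints 0, as if
the send buffer were never reduced into the result.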


-- 
Lisandro Dalcin
============
Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Numerical Porous Media Center (NumPor)
King Abdullah University of Science and Technology (KAUST)
http://numpor.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 4332
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459
/* ireduce_scatter.c -- compile with -DNBCOLL=0 for the blocking
   MPI_Reduce_scatter, or with -DNBCOLL=1 (the default) for the
   non-blocking MPI_Ireduce_scatter */
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
  int i,size,rank;
  int sendbuf[] = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1};
  int recvbuf[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
  int rcounts[] = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1};
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (size > 16) MPI_Abort(MPI_COMM_WORLD, 1); /* buffers hold at most 16 ranks */
#ifndef NBCOLL
#define NBCOLL 1
#endif
#if NBCOLL
  {
    MPI_Request request;
    MPI_Ireduce_scatter(sendbuf, recvbuf, rcounts, MPI_INT,
                        MPI_SUM, MPI_COMM_WORLD, &request);
    MPI_Wait(&request,MPI_STATUS_IGNORE);
  }
#else
  MPI_Reduce_scatter(sendbuf, recvbuf, rcounts, MPI_INT,
                     MPI_SUM, MPI_COMM_WORLD);
#endif
  printf("[%d] rbuf[%d]=%2d  expected:%2d\n", rank, 0, recvbuf[0], size);
  for (i=1; i<size; i++) {
    printf("[%d] rbuf[%d]=%2d  expected:%2d\n", rank, i, recvbuf[i], 0);
  }
  MPI_Finalize();
  return 0;
}
