Gabriele,
Can you clarify: are those timings what is reported for the reduction
call specifically, not the total execution time?
If so, then the difference is, to a first approximation, the time you
spend sitting idly by doing absolutely nothing waiting at the barrier.
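
One way to see this directly is to time the barrier and the reduction
separately with MPI_Wtime. A minimal sketch (plain MPI C, not taken from
your code; the buffers and the sum operation are just placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double data = (double)rank;   /* dummy contribution */
    double result = 0.0;

    /* Time spent waiting for the other ranks to arrive. */
    double t0 = MPI_Wtime();
    MPI_Barrier(MPI_COMM_WORLD);
    double t_barrier = MPI_Wtime() - t0;

    /* Time spent in the reduction itself, now that the ranks are synchronised. */
    t0 = MPI_Wtime();
    MPI_Reduce(&data, &result, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t_reduce = MPI_Wtime() - t0;

    printf("rank %d: barrier wait %.6f s, reduce %.6f s\n",
           rank, t_barrier, t_reduce);

    MPI_Finalize();
    return 0;
}

With this split, load imbalance shows up in t_barrier and the reduction
itself in t_reduce, which is essentially what the Scalasca numbers are
telling you.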
Ciao
Terry
--
Dr. Ter
On 8 Sep 2010, at 10:21, Gabriele Fatigati wrote:
> So, in my opinion, it is better to put an MPI_Barrier before any MPI_Reduce to
> mitigate the "asynchronous" behaviour of MPI_Reduce in OpenMPI. I suspect the
> same for other collective communications. Can someone explain to me why
> MPI_Reduce has this behaviour?
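
The pattern suggested in the quoted message would look roughly like the
following (a sketch with my own naming, not the actual code):

#include <mpi.h>

/* Reduce with an explicit barrier in front, so that any load imbalance is
 * accounted to the barrier rather than to MPI_Reduce itself. */
void reduce_with_barrier(double *sendbuf, double *recvbuf,
                         int count, MPI_Comm comm)
{
    MPI_Barrier(comm);
    MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, 0, comm);
}

Note that this does not make the program faster; it only moves the waiting
time out of MPI_Reduce and into MPI_Barrier, which can make profiles easier
to read.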
Doing Reduce without a Barrier first allows one process to call Reduce and
exit immediately without waiting for the other processes to call Reduce.
This allows one process to advance faster than the others.
I suspect the 2671 second result is the difference between the fastest and
slowest processes.
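
A quick way to see that behaviour (this assumes nothing about the actual
application; the sleep just simulates one slow rank):

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Simulate load imbalance: the last rank arrives 3 s late. */
    if (rank == size - 1)
        sleep(3);

    double data = 1.0, sum = 0.0;
    double t0 = MPI_Wtime();
    MPI_Reduce(&data, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t_reduce = MPI_Wtime() - t0;

    /* Typically the root reports roughly 3 s here (it has to wait for the
     * late contribution), the late rank reports almost nothing, and other
     * leaf ranks may return quickly: the imbalance is charged to whichever
     * ranks end up waiting inside the reduction. */
    printf("rank %d spent %.3f s in MPI_Reduce\n", rank, t_reduce);

    MPI_Finalize();
    return 0;
}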
Dear OpenMPI users,
I'm using OpenMPI 1.3.3 on an InfiniBand 4x interconnect. My
parallel application makes intensive use of MPI_Reduce communication over a
communicator created with MPI_Comm_split.
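For reference, the reduction pattern looks roughly like this (a generic
sketch; the even/odd split and the buffers are invented for illustration,
not my real code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split the world into two groups (even/odd ranks), just for illustration. */
    int color = world_rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    double local = (double)world_rank, sum = 0.0;
    /* Reduce only within the sub-communicator; rank 0 of each group is the root. */
    MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, subcomm);

    int sub_rank;
    MPI_Comm_rank(subcomm, &sub_rank);
    if (sub_rank == 0)
        printf("group %d: sum = %f\n", color, sum);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}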
I've noted strange behaviour during execution. My code is instrumented with
Scalasca 1.3 to report s