Thanks,
Ernesto.
I have an "extreme" case below, for the sake of example.
Suppose one is running an MPI job with N >= 2 ranks, and at a certain moment the
code does the following:
// ...
if (rank == 0) {
  MPI_Bcast(...);
}
// ...
std::cout << "Here A, rank = " << rank << std::endl;
MPI_Barrier(...);
std::cout << "Here B, rank = " << rank << std::endl;
[...] with matching but different signatures.
Cheers,
Gilles
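For illustration, a hypothetical, self-contained version of Gilles's sketch; the
broadcast payload, count, root, and communicator are filled in here and are not
from the original message:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double payload = 42.0;  // hypothetical payload for the broadcast
    if (rank == 0) {
        // Only rank 0 posts the broadcast; no other rank ever matches it.
        MPI_Bcast(&payload, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }

    std::cout << "Here A, rank = " << rank << std::endl;

    // Rank 0 is still inside MPI_Bcast when the other ranks reach the
    // barrier, so the collectives on MPI_COMM_WORLD are mismatched: the
    // program typically hangs or fails instead of printing "Here B".
    MPI_Barrier(MPI_COMM_WORLD);

    std::cout << "Here B, rank = " << rank << std::endl;

    MPI_Finalize();
    return 0;
}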
On Mon, Mar 14, 2022 at 4:09 PM Ernesto Prudencio via users
<users@lists.open-mpi.org> wrote:
Thanks, Gilles.
In the case of the application I am working on, all ranks call MPI with the
same signature / types of variables.
I do not [...]
[...]
- compilers
- MPICH (or a derivative such as Intel MPI)
- PETSc 3.16.5
=> a success would strongly point to Open MPI
Cheers,
Gilles
On Mon, Mar 14, 2022 at 2:56 PM Ernesto Prudencio via users
<users@lists.open-mpi.org> wrote:
Forgot to mention that in all 3 situations, mpirun is [...]
[...] command line in order to make situation 2 successful?
Thanks,
Ernesto.
From: users On Behalf Of Ernesto Prudencio
via users
Sent: Monday, March 14, 2022 12:39 AM
To: Open MPI Users
Cc: Ernesto Prudencio
Subject: Re: [OMPI users] [Ext] Re: Call to MPI_Allreduce() returning [...]
[...] and one of your processes calls a different MPI_Allreduce on the same
communicator.
There is no simple way to get more information about this issue. If you have a
version of OMPI compiled in debug mode, you can increase the verbosity of the
collective framework to see if you get more interesting information.
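As a hedged aside, not from the thread itself: the standard MPI_Error_string
call turns a return code into readable text, and in Open MPI error code 15
corresponds to MPI_ERR_TRUNCATE. A minimal sketch:

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Decode the return code being reported (15). With Open MPI this
    // prints a message about a truncated message (MPI_ERR_TRUNCATE).
    char msg[MPI_MAX_ERROR_STRING];
    int len = 0;
    MPI_Error_string(15, msg, &len);
    std::cout << "rc 15 means: " << msg << std::endl;

    MPI_Finalize();
    return 0;
}

The collective-framework verbosity mentioned above is usually raised through an
MCA parameter, for example something like mpirun --mca coll_base_verbose 100
(the exact parameter name may vary with the Open MPI version).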
Hello all,
The very simple code below returns mpiRC = 15.
const std::array< double, 2 > rangeMin { minX, minY };
std::array< double, 2 > rangeTempRecv { 0.0, 0.0 };
int mpiRC = MPI_Allreduce( rangeMin.data(), rangeTempRecv.data(),
                           rangeMin.size(), MPI_DOUBLE, MPI_MIN,
                           PETSC_COMM_WORLD );
Some i[...]
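For context, a hypothetical reproducer of the failure mode described above;
this is illustrative code, not the original application. When one rank passes a
different count to MPI_Allreduce on the same communicator, the ranks whose
receive buffer is too small typically get MPI_ERR_TRUNCATE back:

#include <mpi.h>
#include <array>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Ask MPI to return error codes instead of aborting, so the mismatch
    // below can surface as a return code such as MPI_ERR_TRUNCATE.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::array< double, 2 > sendBuf { 1.0, 2.0 };
    std::array< double, 2 > recvBuf { 0.0, 0.0 };

    // Erroneous on purpose: rank 0 reduces 2 doubles, all other ranks
    // reduce 1, on the same communicator. Per the MPI standard this is
    // invalid, so it may also hang or abort depending on the implementation.
    const int count = (rank == 0) ? 2 : 1;
    int rc = MPI_Allreduce( sendBuf.data(), recvBuf.data(), count,
                            MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD );

    std::cout << "rank " << rank << ": rc = " << rc << std::endl;

    MPI_Finalize();
    return 0;
}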