The overhead of cleanup doesn't go away; the MPI runtime would need to
create a similar cleanup list and process it. It looks to me like the
performance problem might actually be caused by the Ibarrier not making
asynchronous progress while the application is busy with its own work.
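If the Ibarrier really is stalled because the implementation only advances nonblocking collectives inside MPI calls, one common workaround is to poll the request with MPI_Test between chunks of application work. A minimal sketch of that pattern (my illustration, not code from this thread; do_application_work is a placeholder):

```c
/* Sketch: driving progress on a nonblocking barrier by polling with
 * MPI_Test. Many MPI implementations only make progress on nonblocking
 * collectives inside MPI calls, so interleaving MPI_Test with
 * application work is one way to approximate asynchronous progress. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Request req;
    MPI_Ibarrier(MPI_COMM_WORLD, &req);

    int done = 0;
    while (!done) {
        /* do_application_work();  -- placeholder for the app's own work */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* each call can advance
                                                      the barrier */
    }

    MPI_Finalize();
    return 0;
}
```

Whether this helps depends on the implementation; some MPI libraries also offer a progress thread that makes the polling unnecessary.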
~Jim.
Sorry, we seem to have lost the mailing list for the last couple messages
below (my fault).
The text on MPI_FINALIZE does not mandate “no pending communication”; it
requires “all MPI calls needed to complete its involvement …”
"Before an MPI process invokes MPI_FINALIZE, the process must perform
all MPI calls needed to complete its involvement …"
Hi Dan,
I believe that Pavan was referring to my conversation with him about
MPI_Request_free. Here’s my situation: I’d like to use MPI_Ibarrier as a form
of “memory fence” between some of the metadata reads and writes in HDF5.
Here’s some [very] simplified pseudocode for what I’d
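The quoted pseudocode is cut off above. As a rough reconstruction of the pattern being described (an MPI_Ibarrier used as a fence between collective metadata writes and later reads, with the request freed via MPI_Request_free rather than waited on), one might sketch it like this. The write_metadata/read_metadata names are placeholders, not HDF5 API calls:

```c
/* Sketch (my reconstruction, not the original pseudocode): using
 * MPI_Ibarrier as a "memory fence" between metadata writes and reads,
 * and freeing the request instead of waiting on it. Note that whether
 * MPI_Request_free is permitted on a nonblocking-collective request is
 * exactly the point under discussion in this thread. */
#include <mpi.h>

void metadata_fence_example(MPI_Comm comm)
{
    MPI_Request req;

    /* write_metadata();            -- all ranks finish their writes */

    MPI_Ibarrier(comm, &req);       /* start the fence */
    MPI_Request_free(&req);         /* don't wait; let the barrier
                                       complete in the background */

    /* read_metadata();             -- reads that must follow writes */
}
```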
Hi all,
I just want to remind everyone one last time to register for next week’s
virtual meeting. We have enough orgs registered to meet quorum, but a few
orgs that usually attend have not yet registered. Remember that
the first voting block is on the first day, so if you