Hello,
This patch fixes a potential loss of a lock request in function
ompi_osc_rdma_passive_unlock_complete(). A new pending request is taken
from the m_locks_pending list. If m_lock_status is not equal to 0, this
new entry is then set to NULL and thus lost. This can lead to a
deadlock, since the process whose request was dropped is never granted
the lock.
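To make the pattern concrete, here is a minimal self-contained model of
the behaviour described above. The stand-in types and helper are NOT
the actual osc/rdma structures; only the names m_locks_pending,
m_lock_status and new_pending mirror the report.

/* simplified model of the lost pending-lock request */
#include <stdio.h>
#include <stddef.h>

struct pending_lock {
    struct pending_lock *next;
    int origin_rank;                 /* rank still waiting for the lock */
};

static struct pending_lock *m_locks_pending = NULL; /* pending requests */
static int m_lock_status = 1;                       /* lock still held  */

static struct pending_lock *list_remove_first(struct pending_lock **head)
{
    struct pending_lock *item = *head;
    if (NULL != item) {
        *head = item->next;
    }
    return item;
}

int main(void)
{
    struct pending_lock req = { NULL, 3 };
    m_locks_pending = &req;

    /* the entry is dequeued unconditionally ... */
    struct pending_lock *new_pending = list_remove_first(&m_locks_pending);

    /* ... but while the lock is still held the pointer is simply
     * cleared: the request is neither granted nor put back on the
     * list, so rank 3 waits forever.  A fix must keep the entry,
     * e.g. re-queue it or only dequeue once m_lock_status == 0. */
    if (0 != m_lock_status) {
        new_pending = NULL;
    }

    printf("dequeued request: %s, pending list: %s\n",
           NULL == new_pending ? "dropped" : "kept",
           NULL == m_locks_pending ? "empty" : "non-empty");
    return 0;
}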
Hello,
In function ompi_osc_rdma_passive_unlock_complete(), an object named
copy_unlock_acks is constructed but never destroyed. The following
patch adds its destruction.
Tested on Open MPI v1.5.
Regards,
Guillaume
---
diff --git a/ompi/mca/osc/rdma/osc_rdma_sync.c b/ompi/mca/osc/rdma/osc_rdma_sync.c
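For context, here is a hedged sketch of the construct/destruct pairing
in question, meant to be read inside the Open MPI tree rather than
compiled standalone. The variable name copy_unlock_acks comes from the
report above; its opal_list_t type and the surrounding body are
assumptions, not the actual osc_rdma_sync.c code.

#include "opal/class/opal_list.h"

/* illustrative only: a locally constructed list object must be
 * OBJ_DESTRUCT'ed on every exit path of the function */
static void unlock_acks_example(opal_list_t *unlock_acks)
{
    opal_list_t copy_unlock_acks;
    opal_list_item_t *item;

    /* build a private copy of the ack list */
    OBJ_CONSTRUCT(&copy_unlock_acks, opal_list_t);
    while (NULL != (item = opal_list_remove_first(unlock_acks))) {
        opal_list_append(&copy_unlock_acks, item);
    }

    /* ... process the copied acks ... */
    while (NULL != (item = opal_list_remove_first(&copy_unlock_acks))) {
        OBJ_RELEASE(item);
    }

    /* the missing piece the patch adds: destroy the list object itself */
    OBJ_DESTRUCT(&copy_unlock_acks);
}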
Hello,
It seems that a return value is not updated during the setup of
process affinity in function ompi_mpi_init():
ompi/runtime/ompi_mpi_init.c:459
The problem is in the following piece of code:
[... here ret == OPAL_SUCCESS ...]
phys_cpu = opal_paffinity_base_get_physical_processor_i
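Here is a minimal self-contained model of the pattern. The helper
get_physical_processor_id() and the error-label layout are stand-ins,
not the actual ompi_mpi_init() code; only the names ret and phys_cpu
mirror the report.

#include <stdio.h>

#define OPAL_SUCCESS 0
#define OPAL_ERROR  (-1)

static int get_physical_processor_id(int rank)
{
    (void) rank;
    return -1;                 /* simulate the lookup failing */
}

static int init_affinity_example(int rank, const char **errmsg)
{
    int ret = OPAL_SUCCESS;    /* here ret == OPAL_SUCCESS */
    int phys_cpu;

    phys_cpu = get_physical_processor_id(rank);
    if (0 > phys_cpu) {
        *errmsg = "Could not get the physical processor id";
        ret = OPAL_ERROR;      /* the missing update: without this line
                                * the caller still sees OPAL_SUCCESS
                                * after the failed lookup */
        goto error;
    }

    /* ... bind the process to phys_cpu ... */
    return OPAL_SUCCESS;

 error:
    return ret;
}

int main(void)
{
    const char *errmsg = "ok";
    int rc = init_affinity_example(0, &errmsg);
    printf("rc = %d (%s)\n", rc, errmsg);
    return 0;
}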
Hello,
When calling MPI_Comm_get_name() on the communicator returned by
MPI_Comm_get_parent() after a call to MPI_Comm_spawn(), we expect the
predefined name MPI_COMM_PARENT, as stated in the MPI Standard 2.2.
In practice, MPI_Comm_get_name() returns an empty string. As far as I
understand the problem,
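A small reproducer sketch along these lines can be used to check the
behaviour: the same binary acts as parent (spawning one copy of itself)
and as child (querying the name of the communicator returned by
MPI_Comm_get_parent()). On a conforming MPI 2.2 implementation the
child should print MPI_COMM_PARENT; per the description above, an empty
string is returned instead. The self-spawning layout is only an
assumption for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, child;
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (MPI_COMM_NULL == parent) {
        /* parent side: spawn one copy of this executable */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
    } else {
        /* child side: the name should be "MPI_COMM_PARENT" */
        MPI_Comm_get_name(parent, name, &len);
        printf("parent communicator name: \"%s\" (len=%d)\n", name, len);
    }

    MPI_Finalize();
    return 0;
}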