I see your point about setting MPI_ERR_PENDING on the internal status
versus the status returned by MPI_Waitall. As I mentioned, the reason
I chose to do that is to support the ompi_errhandler_request_invoke()
function. I could not think of a better way to fix this, so I'm open
to ideas.
My point was that MPI_ERR_PENDING should never be set on a specific request.
MPI_ERR_PENDING should only be returned in the array of statuses attached to
MPI_Waitall. Thus, there is no need to remove it from any request.
In addition, there is another reason why this is unnecessary (and I was too
In the patch for errhandler_invoke.c, you can see that we need to
check for MPI_ERR_PENDING to make sure that we do not free the request
when we are trying to decide if we should invoke the error handler. So
setting the internal req->req_status.MPI_ERROR to MPI_ERR_PENDING made
it possible to check for that case.
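For readers following along, here is a rough sketch of the kind of check
being described. Everything except the req_status.MPI_ERROR field mentioned
above is a simplified stand-in, not the actual ompi_errhandler_request_invoke()
code:

#include <mpi.h>

/* Hypothetical stand-in for the internal request type; only the field
 * discussed above is modeled. */
typedef struct {
    MPI_Status req_status;
} sketch_request_t;

/* Sketch of the decision described above: a request whose internal status
 * still carries MPI_ERR_PENDING has neither completed nor failed, so it is
 * neither freed nor has its error handler invoked. */
static int sketch_errhandler_request_invoke(int count, sketch_request_t **reqs)
{
    for (int i = 0; i < count; ++i) {
        if (MPI_ERR_PENDING == reqs[i]->req_status.MPI_ERROR) {
            continue;                    /* still pending: leave it alone */
        }
        if (MPI_SUCCESS != reqs[i]->req_status.MPI_ERROR) {
            /* ... invoke the error handler for this request ... */
        }
        /* ... release the completed request ... */
    }
    return MPI_SUCCESS;
}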
Josh,
I don't agree that these changes are required. In the current standard (2.2),
MPI_ERR_PENDING is only allowed to be returned by MPI_WAITALL, in some very
specific conditions. Here is the snippet from the MPI standard clarifying this
behavior.
> When one or more of the communications comp
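To make the snippet concrete, this is roughly how that behavior surfaces at
the application level (a minimal sketch; it assumes the communicator's error
handler has been set to MPI_ERRORS_RETURN so that MPI_Waitall can return an
error code instead of aborting):

#include <mpi.h>
#include <stdio.h>

/* MPI_ERR_PENDING never comes back as the return code of MPI_Waitall itself;
 * it can only appear in statuses[i].MPI_ERROR when MPI_Waitall returns
 * MPI_ERR_IN_STATUS. */
static void wait_and_report(int n, MPI_Request reqs[], MPI_Status stats[])
{
    int rc = MPI_Waitall(n, reqs, stats);

    if (MPI_ERR_IN_STATUS == rc) {
        for (int i = 0; i < n; ++i) {
            if (MPI_ERR_PENDING == stats[i].MPI_ERROR) {
                printf("request %d neither completed nor failed\n", i);
            } else if (MPI_SUCCESS != stats[i].MPI_ERROR) {
                printf("request %d failed with error %d\n",
                       i, stats[i].MPI_ERROR);
            }
        }
    }
}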
What: Change coll tuned default to pairwise exchange
Why: The linear algorithm does not scale to any reasonable number of PEs
When: Timeout in 2 days (Fri)
Is there any reason the default should not be changed?
-Nathan
HPC-3, LANL
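For anyone unfamiliar with the two algorithms, here is a minimal sketch of
the pairwise-exchange idea (illustrative only, not the coll tuned
implementation): each rank exchanges with exactly one partner per step, so
the number of in-flight messages stays bounded as the PE count grows, whereas
a linear algorithm posts operations to every peer at once.

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Pairwise-exchange alltoall: at step s, rank r sends its block for
 * (r + s) % size and receives the block from (r - s + size) % size. */
static void alltoall_pairwise(const int *sendbuf, int *recvbuf,
                              int count, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Our own block is just a local copy. */
    memcpy(recvbuf + (size_t)rank * count, sendbuf + (size_t)rank * count,
           (size_t)count * sizeof(int));

    for (int step = 1; step < size; ++step) {
        int sendto   = (rank + step) % size;
        int recvfrom = (rank - step + size) % size;
        MPI_Sendrecv(sendbuf + (size_t)sendto * count, count, MPI_INT,
                     sendto, 0,
                     recvbuf + (size_t)recvfrom * count, count, MPI_INT,
                     recvfrom, 0, comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 4;                       /* ints per peer */
    int *sendbuf = malloc((size_t)size * count * sizeof(int));
    int *recvbuf = malloc((size_t)size * count * sizeof(int));
    for (int i = 0; i < size * count; ++i) sendbuf[i] = i;

    alltoall_pairwise(sendbuf, recvbuf, count, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}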
Sorry, the below cc line is for Solaris Studio compilers; if you have gcc,
replace "-G" with "-shared".
thanks,
--td
On 3/21/2012 11:32 AM, TERRY DONTJE wrote:
I ran into a problem on a Suse 10.1 system and was wondering if anyone
has a version of Suse newer than 10.1 that can try the following test
and send me the results.
Hello,
I have a problem using Open MPI on my linux system (pandaboard running
Ubuntu precise). A call to MPI_Init_thread with the following parameters
hangs:
MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided);
It seems that we are stuck in this loop in the function
opal_condition_wait():
whil
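Unrelated to the hang itself, but worth checking while debugging:
MPI_Init_thread may grant a lower thread level than requested, so the
'provided' value should always be verified. Here is a minimal sketch
(passing NULL/NULL for argc/argv is equivalent to the 0, 0 above); if I
remember correctly, Open MPI in that timeframe also had to be built with
thread-multiple support enabled at configure time for the full level to be
granted:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* NULL/NULL for argc/argv is legal in MPI-2 and later. */
    MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    printf("thread level provided: %d\n", provided);
    MPI_Finalize();
    return 0;
}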