I didn't run that specific test, but I did run a test that calls MPI_Abort. I
found a bug this morning, though (reported by Sam), that was causing the state
of remote procs to be incorrectly reported.
Try with r23048 or higher.
On Apr 27, 2010, at 9:15 AM, Rolf vandeVaart wrote:
Ralph, did you get a chance to run the ibm/final test to see if these
changes fixed the problem? I just rebuilt the trunk and tried it and I
still get an exit status of 0 back. I will run it again to make sure I
have not made a mistake.
Rolf
On 04/26/10 23:43, Ralph Castain wrote:
Okay, this should finally be fixed. See the commit message for r23045 for an
explanation.
It really wasn't anything in the cited changeset that caused the problem. The
root cause is that $#@$ abort file we dropped in the session dir to indicate
you called MPI_Abort vs trying to thoroughly clean
The ibm/final test does not call MPI_Abort directly. It is calling
MPI_Barrier after MPI_Finalize is called, which is a no-no. This is
detected and eventually the library calls ompi_mpi_abort(). This is
very similar to MPI_Abort() which ultimately calls ompi_mpi_abort as
well. So, I guess I
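The erroneous sequence described above can be sketched as a hypothetical minimal reproducer (this is not the actual ibm/final source, just the pattern it exercises):

```c
/* Hypothetical reproducer: calling any MPI function after MPI_Finalize
 * is erroneous. Open MPI detects this and ends up in ompi_mpi_abort(),
 * the same path MPI_Abort() takes, so mpirun should exit non-zero. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    MPI_Barrier(MPI_COMM_WORLD);  /* erroneous: MPI already finalized */
    return 0;
}
```

Launched under mpirun, the barrier call after finalize should trip the error path and produce a non-zero exit status from the launcher.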
I'll try to keep it in mind as I continue the errmgr work. I gather these tests
all call MPI_Abort?
On Apr 26, 2010, at 12:31 PM, Rolf vandeVaart wrote:
With our MTT testing we have noticed a problem that has cropped up in
the trunk. There are some tests that are supposed to return a non-zero
status because they are getting errors, but are instead returning 0.
This problem does not exist in r23022 but does exist in r23023.
One can use the