Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-15 Thread Kawashima, Takahiro
George, Thanks for your reply. Yes, as you explained, the case of specifying an inactive request to MPI_Test is OK. But the problem is the case of specifying an inactive request to MPI_Wait, MPI_Waitall, or MPI_Testall, as I explained in my first mail. See the code below: /* make an inactive reques
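A minimal sketch of the situation being discussed, assuming a persistent request that is never started (this is not the code from the original mail): per MPI, MPI_Wait on such an inactive request should return immediately with an empty status.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int buf = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    /* Create a persistent request but never start it, so it stays inactive. */
    MPI_Send_init(&buf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);

    /* Waiting on an inactive request must return immediately with an
       empty status: MPI_SOURCE == MPI_ANY_SOURCE, MPI_TAG == MPI_ANY_TAG. */
    MPI_Wait(&req, &status);
    printf("source=%d tag=%d error=%d\n",
           status.MPI_SOURCE, status.MPI_TAG, status.MPI_ERROR);

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}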

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-15 Thread George Bosilca
Takahiro, I fail to see the cases your patch addresses. I recognize I did not have the time to look over all the instances where we deal with persistent inactive requests, but at the first occurrence, the one in req_test.c line 68, the case you exhibit there is already covered by the test "requ

Re: [OMPI devel] MPI_Reduce() is losing precision

2012-10-15 Thread N.M. Maclaren
On Oct 15 2012, Iliev, Hristo wrote: Numeric differences are to be expected with parallel applications. The basic reason for that is that on many architectures floating-point operations are performed using higher internal precision than that of the arguments and only the final result is round

Re: [OMPI devel] MPI_Reduce() is losing precision

2012-10-15 Thread Iliev, Hristo
Hi Santhosh, Numeric differences are to be expected with parallel applications. The basic reason for that is that on many architectures floating-point operations are performed using higher internal precision than that of the arguments and only the final result is rounded back to the lower output
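An illustration of that point (a generic sketch, not code from the thread): floating-point addition is not associative, so combining the same values in a different order, which is effectively what a parallel reduction does, can change the result.

#include <stdio.h>

int main(void)
{
    /* Summing the same three values in a different grouping gives a
       different single-precision result. */
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    float left  = (a + b) + c;   /* 1.0f */
    float right = a + (b + c);   /* 0.0f: b + c rounds back to -1.0e8f */
    printf("left=%f right=%f\n", left, right);
    return 0;
}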

[OMPI devel] MPI_Reduce() is losing precision

2012-10-15 Thread Santhosh Kokala
Hi All, I am having a strange problem with floating-point precision. I get correct precision when I launch with one process, but when the same code is launched with 2 or more processes I lose precision in the MPI_Reduce(..., MPI_FLOAT, MPI_SUM, ...) call. Output from my code (admin)host:~$ mpirun -n
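A minimal sketch of the kind of call being described, with placeholder values rather than the original code; because the reduction order across ranks is not fixed, the single-precision sum can differ from the one-process run, and accumulating in MPI_DOUBLE is the usual way to reduce that sensitivity.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one partial value to a single-precision sum. */
    float local = 0.1f * (rank + 1);
    float sum   = 0.0f;
    MPI_Reduce(&local, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.8f\n", size, sum);

    MPI_Finalize();
    return 0;
}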

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-15 Thread Kawashima, Takahiro
Hi Open MPI developers, Any comments on my updated patch? If there is another concern, I'll try to update it. > > > > The bugs are: > > > > > > > > (1) MPI_SOURCE of MPI_Status for a null request must be MPI_ANY_SOURCE. > > > > > > > > (2) MPI_Status for an inactive request must be an empty status. > >
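For context, a sketch of what bug (1) requires, assuming the values the MPI standard specifies for an empty status (this is a test case, not the patch itself):

#include <assert.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    /* Waiting on a null request must return an empty status; in
       particular MPI_SOURCE must be MPI_ANY_SOURCE and MPI_TAG
       must be MPI_ANY_TAG. */
    MPI_Wait(&req, &status);
    assert(status.MPI_SOURCE == MPI_ANY_SOURCE);
    assert(status.MPI_TAG == MPI_ANY_TAG);

    MPI_Finalize();
    return 0;
}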

[OMPI devel] Question about collective communication optimization for shared memory

2012-10-15 Thread Shigang Li
Dear Sir or Madam, I'm running an application on SMP clusters and want to get good performance for collective communications by utilizing the shared-memory feature. I browsed the official website of Open MPI and saw that Open MPI can automatically find the best network according to the hardware architecture, for