-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Grismer,Matthew J Civ USAF AFMC AFRL/RBAT
Sent: Monday, January 03, 2011 11:18 AM
To: Open MPI Users
Subject: Re: [OMPI users] Using MPI_Put/Get correctly?
Unfortunately correcting the integer type did not fix the problem. Any
suggestions on how to proceed?
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Grismer,Matthew J Civ USAF AFMC AFRL/RBAT
Sent: Wednesday, December 29, 2010 1:42 PM
To: Open MPI Users
Subject: Re: [OMPI users] Using MPI_Put/Get correctly?
Someone correctly pointed out the bug in my examples: in the MPI_Put I
pass a 0 for both the origin and target.
To build: mpif90 putoneway.f90
To run: mpiexec -np 2 a.out
Matt
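For readers without the attachment, a hypothetical minimal sketch of the kind of fenced MPI_Put program the build/run lines above suggest (the actual putoneway.f90 is not shown in this thread, so the variable names, counts, and 4-byte default-integer size are all assumptions):

```fortran
! Hypothetical sketch, not the original putoneway.f90: rank 0 puts
! n default integers (assumed 4 bytes each) into rank 1's window,
! with MPI_Win_fence providing the synchronization on both sides.
program putoneway_sketch
  use mpi
  implicit none
  integer, parameter :: n = 10
  integer :: ierr, rank, win, i
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  integer :: buf(n), tgt(n)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  tgt = 0
  buf = (/ (i, i = 1, n) /)

  ! Every rank exposes tgt; the window size is given in bytes.
  winsize = n * 4
  call MPI_Win_create(tgt, winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, &
                      win, ierr)

  call MPI_Win_fence(0, win, ierr)
  if (rank == 0) then
     disp = 0
     ! origin buf/count/type, then target rank, displacement, count, type
     call MPI_Put(buf, n, MPI_INTEGER, 1, disp, n, MPI_INTEGER, win, ierr)
  end if
  call MPI_Win_fence(0, win, ierr)   ! the Put is complete after this fence

  if (rank == 1) print *, 'target now holds:', tgt

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program putoneway_sketch
```

This builds and runs with the same commands given above: mpif90 putoneway.f90, then mpiexec -np 2 a.out.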
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of James Dinan
Sent: Thursday, December 16, 2010 10:09 AM
To: Open MPI Users
Subject: Re: [OMPI users] Using MPI_Put/Get correctly
On 12/16/2010 08:34 AM, Jeff Squyres wrote:
> Additionally, since MPI-3 is updating the semantics of the one-sided
> stuff, it might be worth waiting for all those clarifications before
> venturing into the MPI one-sided realm. One-sided semantics are much
> more subtle and complex than two-sided semantics.
Open MPI uses RDMA under the covers for send/receive when it makes sense. See
these FAQ entries for more details:
http://www.open-mpi.org/faq/?category=openfabrics#large-message-tuning-1.2
http://www.open-mpi.org/faq/?category=openfabrics#large-message-tuning-1.3
I found a presentation on the web that showed significant performance
benefits for one-sided communication; I presumed they came from the hardware
RDMA support that the one-sided calls could take advantage of. But I gather
from your question that is not necessarily the case. Are you aware of ...
Is there a reason to convert your code from send/receive to put/get?
The performance may not be significantly different, and as you have noted,
the MPI-2 put/get semantics are a total nightmare to understand (I personally
advise people not to use them -- MPI-3 is cleaning up the put/get semantics).
I am trying to modify the communication routines in our code to use
MPI_Put's instead of sends and receives. This worked fine for several
variable Put's, but now I have one that is causing seg faults. Reading
through the MPI documentation it is not clear to me if what I am doing
is permissible or not.
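The two-sided pattern being replaced can be sketched as follows (a hypothetical example, not the poster's code). When converting it to MPI_Put, the receive buffer must sit inside a window whose size covers every byte the Put will write; a too-small window size or a bad target displacement is a common cause of segfaults on the target rank.

```fortran
! Hypothetical two-sided version of the same exchange: rank 0 sends,
! rank 1 receives.  A one-sided rewrite replaces the Send/Recv pair
! with a window exposing buf on the target plus a fenced MPI_Put.
program twosided_sketch
  use mpi
  implicit none
  integer, parameter :: n = 10
  integer :: ierr, rank, i
  integer :: buf(n), stat(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) then
     buf = (/ (i, i = 1, n) /)
     call MPI_Send(buf, n, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(buf, n, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, stat, ierr)
     print *, 'received:', buf
  end if

  call MPI_Finalize(ierr)
end program twosided_sketch
```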