> Comparable behavior would be seen with MPI_Ssend(). Now, if you really
> don't want to see the sender affected by the receiver load, you need to
> move to the non-blocking call MPI_Isend().
>
> _MAC
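A minimal sketch of the non-blocking pattern _MAC describes (the buffer,
count, peer rank, and tag are made-up placeholders):

    /* Sketch: decouple the sender from receiver load with MPI_Isend().
       The send returns immediately; completion is collected later. */
    #include <mpi.h>

    void send_without_stalling(const double *buf, int count, int peer)
    {
        MPI_Request req;
        /* Returns at once even if the receiver is busy. */
        MPI_Isend(buf, count, MPI_DOUBLE, peer, /*tag=*/0,
                  MPI_COMM_WORLD, &req);

        /* ... overlap useful work here; buf must stay untouched ... */

        /* Complete the send when convenient. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }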
Sorry, forgot the attachments.
On Thu, Aug 11, 2016 at 5:06 PM, Xiaolong Cui wrote:
> Thanks! I tried it, but it didn't solve my problem. Maybe the reason is
> not eager/rndv.
>
> The reason why I want to always use eager mode is that I want to avoid a
> sender being slowed down by the receiver load. [...]
> PSM2_MQ_RNDV_HFI_THRESH -> Largest supported value.
>
> Regards,
>
> _MAC
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Xiaolong Cui
> Sent: Wednesday, August 10, 2016 7:19 PM
> To: Open MPI Users
> [...]ize = 16 -> does not apply to PSM2
>
> btl_openib_receive_queues = P,128,256,192,128:S,2048,1024,1008,64:S,12288,1024,1008,64:S,16,1024,512,512 -> does not apply to PSM2.
>
> Thanks,
>
> Regards,
>
> _MAC
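A sketch of how these suggestions might be combined on an Omni-Path
cluster; the threshold value and application name are placeholders, and
whether a given PSM2 build honors the variable should be verified:

    # Sketch: select the cm PML and PSM2 MTL, and raise the PSM2
    # rendezvous threshold so larger messages still go eagerly
    # (4194304 is illustrative, not a recommendation).
    mpirun --mca pml cm --mca mtl psm2 \
           -x PSM2_MQ_RNDV_HFI_THRESH=4194304 \
           -np 2 ./my_app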
> BTW, should [...]
I used to tune the performance of Open MPI on InfiniBand by changing the
MCA parameters of the openib component (see
https://www.open-mpi.org/faq/?category=openfabrics). Now I have migrated to a
new cluster that deploys Intel's Omni-Path interconnect, and my previous
approach no longer works.
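For context, the kind of tuning meant here is an MCA parameter file such
as $HOME/.openmpi/mca-params.conf; the values below are illustrative, and
the btl_openib_* entries have no effect on an Omni-Path/PSM2 cluster:

    # $HOME/.openmpi/mca-params.conf -- openib/InfiniBand tuning sketch.
    # Values are illustrative; btl_openib_* is ignored on Omni-Path/PSM2.
    btl = openib,vader,self
    btl_openib_eager_limit = 12288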
Sorry, the figures do not display. They are attached to this message.
On Wed, May 18, 2016 at 3:24 PM, Xiaolong Cui wrote:
> Hi Nathan,
>
> I got one more question. I am measuring the number of messages that can be
> eagerly sent with a given SRQ. Again, as illustrated below, my [...]
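One way such a measurement could be implemented (a hypothetical harness,
not the poster's actual code): rank 0 issues many small non-blocking sends
while rank 1 delays posting receives; sends that complete immediately must
have gone out eagerly.

    /* Hypothetical harness: count how many small sends complete before
       the receiver posts any receives; eager sends complete at once. */
    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NMSG 1000

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char buf[64] = {0};

        if (rank == 0) {
            MPI_Request req[NMSG];
            int done = 0;
            for (int i = 0; i < NMSG; i++)
                MPI_Isend(buf, sizeof buf, MPI_CHAR, 1, 0,
                          MPI_COMM_WORLD, &req[i]);
            /* Test (without blocking) how many sends already finished. */
            for (int i = 0; i < NMSG; i++) {
                int flag = 0;
                MPI_Test(&req[i], &flag, MPI_STATUS_IGNORE);
                done += flag;
            }
            printf("sends completed before any receive: %d\n", done);
            MPI_Waitall(NMSG, req, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            sleep(5); /* hold back receives so only eager sends finish */
            for (int i = 0; i < NMSG; i++)
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }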
> I know the per-peer queue pair will prevent totally asynchronous
> connections even in 2.x, but SRQ/XRC-only should work.
>
> -Nathan
>
> On Tue, May 17, 2016 at 11:31:01AM -0400, Xiaolong Cui wrote:
> > I think it is the connection manager that blocks the first message.
> > If I add a pair [...]
> > > https://www.open-mpi.org/software/ompi/v2.x/
On Tue, May 17, 2016 at 11:00 AM, Nathan Hjelm wrote:
>
> If it is blocking on the first message then it might be blocked by the
> connection manager. Removing the per-peer queue pair might help in that
> case.
>
> -Nathan
>
> On Mon, May 16, 2016 at 10:11:29PM -0400, Xiaolong Cui wrote:
> Additionally, if you are using InfiniBand I recommend against adding a
> per-peer queue pair to btl_openib_receive_queues. We have not seen any
> performance benefit to using per-peer queue pairs, and they do not
> scale.
>
> -Nathan Hjelm
> HPC-ENV, LANL
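To make Nathan's suggestion concrete, here is a sketch of running without
the per-peer (P) queue, reusing the shared-receive-queue (S) entries quoted
earlier in the thread; the application name is a placeholder:

    # Sketch: btl_openib_receive_queues with the per-peer (P) entry
    # removed, keeping only the shared receive queues (S) quoted above.
    mpirun --mca btl openib,vader,self \
           --mca btl_openib_receive_queues \
               S,2048,1024,1008,64:S,12288,1024,1008,64:S,16,1024,512,512 \
           -np 2 ./my_app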
>
> On Mon, May 16, 2016, Xiaolong Cui wrote:
So does anyone know the reason? My runtime configuration is also attached.
Thanks!
Sincerely,
Michael
--
Xiaolong Cui (Michael)
Department of Computer Science
Dietrich School of Arts & Science
University of Pittsburgh
Pittsburgh, PA 15260
btl = openib,vader,self
#btl_base_verbose = 10
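For completeness, a sketch of how this attached file might change on the
Omni-Path cluster, where the openib BTL does not apply; whether these
components are available depends on the build:

    # Sketch: on Omni-Path, select the cm PML and PSM2 MTL instead of
    # the openib BTL.
    pml = cm
    mtl = psm2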