Re: [OMPI devel] RDMA and OMPI implementation

2022-04-21 Thread Masoud Hemmatpour via devel
Hi Tomislav,

Thank you very much for your answer! Sure, I will ask my question on the UCX mailing list.

Thanks,

On Thu, 21 Apr 2022 at 16:27, Tomislav Janjusic wrote:
> Hi Masoud,
>
> > I would say how can I see a complete list of such factors like message
> > size, memory map, ... etc
> For UCX, depe

Re: [OMPI devel] RDMA and OMPI implementation

2022-04-21 Thread Tomislav Janjusic via devel
Hi Masoud,

> I would say how can I see a complete list of such factors like message size,
> memory map, ... etc

For UCX, depending on where you have it installed, you'll find 'ucx_info' which will list all available tuning parameters. For general ompi tuning I would start with ompi_info -a, and
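The two tools mentioned above can be combined into a quick inspection script. A minimal sketch, assuming `ucx_info` and `ompi_info` are on the PATH (the fallback branches simply report when they are missing, so the script is safe to run on any machine):

```shell
#!/bin/sh
# Sketch: dump the tuning knobs discussed above, if the tools are installed.

# UCX: 'ucx_info -c' prints the UCX_* configuration variables with their
# current values ('-f' adds full descriptions of each variable).
if command -v ucx_info >/dev/null 2>&1; then
  ucx_info -c | head -n 20
else
  echo "ucx_info not found (is UCX installed and on PATH?)"
fi

# Open MPI: 'ompi_info -a' lists all MCA parameters; grepping for 'pml'
# narrows the output to point-to-point messaging layer settings.
if command -v ompi_info >/dev/null 2>&1; then
  ompi_info -a | grep -i 'pml' | head -n 10
else
  echo "ompi_info not found (is Open MPI installed and on PATH?)"
fi
```

Both commands are read-only queries, so they are safe to run on a production cluster.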

Re: [OMPI devel] RDMA and OMPI implementation

2022-04-21 Thread Jeff Squyres (jsquyres) via devel
In UCX's case, the choice is almost entirely driven by the UCX library. You'll need to look at the UCX code and/or ask NVIDIA.

--
Jeff Squyres
jsquy...@cisco.com

From: Masoud Hemmatpour
Sent: Thursday, April 21, 2022 7:57 AM
To: Jeff Squyres (jsquyres)

Re: [OMPI devel] RDMA and OMPI implementation

2022-04-21 Thread Masoud Hemmatpour via devel
Thanks again for your answer, and I hope I don't bother you with my questions! If I may ask my last question here: I would say how can I see a complete list of such factors like *message size, memory map, ... etc*? Is there any reading, or should I look at the code? If any, could you please give me a

Re: [OMPI devel] RDMA and OMPI implementation

2022-04-21 Thread Jeff Squyres (jsquyres) via devel
It means that your underlying network transport supports RDMA. To be clear, if you built Open MPI with UCX support, and you run on a system with UCX-enabled network interfaces (such as IB), Open MPI should automatically default to using those UCX interfaces. This means you'll get all the benefi
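The automatic selection Jeff describes can also be made explicit on the `mpirun` command line, which is useful for verifying that UCX is actually being used. A hedged sketch, assuming an Open MPI build with the UCX PML; `./my_app` is a placeholder for your MPI program:

```shell
# Request the UCX PML explicitly instead of relying on auto-selection;
# mpirun aborts with an error if the UCX PML is unavailable, which is
# itself a useful diagnostic.
mpirun --mca pml ucx -np 2 ./my_app

# Restrict which UCX transports may be used via the UCX_TLS environment
# variable (e.g. rc/ud/dc over InfiniBand, tcp as a fallback) to see how
# transport choice affects behavior:
UCX_TLS=rc,self mpirun --mca pml ucx -np 2 ./my_app
```

These invocations are a sketch for experimentation, not a recommended production configuration; on a correctly configured UCX-enabled system, plain `mpirun -np 2 ./my_app` should select UCX on its own, as noted above.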