Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Saliya Ekanayake
Thank you George. This is what I was trying to find out after your reply yesterday.

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread George Bosilca
The sm collective module has a priority of 0, which guarantees that it never gets called. If you want to give it a try, you should set coll_sm_priority to any value over 30. George
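For example, a minimal sketch of raising that priority at run time (the process count and executable name are only placeholders):

    mpirun --mca coll_sm_priority 90 -np 8 ./my_mpi_app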

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Gilles Gouaillardet
Saliya, the btl is a point-to-point thing only. Collectives are implemented by the coll mca. The sm coll mca is optimized for shared memory, but supports intra-node communicators only. The ml and hierarch coll modules have some optimizations for intra-node communications. As far as I know, none of these are used by default.
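A quick way to see which coll components a given build provides (standard ompi_info usage; the exact output depends on how Open MPI was configured):

    ompi_info | grep "MCA coll"
    ompi_info --param coll sm    # parameters of the sm coll module; newer releases may need --level 9 to show them all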

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread George Bosilca
Without going into too much detail, collective communications can be implemented as a collection of point-to-point messages. Open MPI uses point-to-point messages for collective communications inside node boundaries, so if your intra-node BTL is vader you will benefit from it not only for point-to-point communications but also for collectives.
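As a hedged illustration of that point: restricting a single-node run to the vader (plus self) btl routes both the point-to-point traffic and the collectives built on top of it through vader. The process count and executable name are placeholders, and this selection only suits a single-node job, since no inter-node btl is left:

    mpirun --mca btl vader,self -np 8 ./my_mpi_app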

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Saliya Ekanayake
One more question. I found this blog post from Jeff [1] on vader, and I got the impression that it's used only for point-to-point communications and not for collectives. Is this true or did I misunderstand? [1] http://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Gilles Gouaillardet
You can try mpirun --mca btl_base_verbose 100 ... or you can simply blacklist the btl you do *not* want to use, for example mpirun --mca btl ^sm if you want to use vader. You can run ompi_info --all | grep vader to check the btl parameters. Of course, reading the source code is the best way to understand what is really going on.
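Putting those suggestions together (process counts and the executable name are placeholders):

    # show which btl gets selected for each peer
    mpirun --mca btl_base_verbose 100 -np 4 ./my_mpi_app
    # exclude the sm btl so vader handles the intra-node traffic
    mpirun --mca btl ^sm -np 4 ./my_mpi_app
    # list the vader btl parameters
    ompi_info --all | grep vader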

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Saliya Ekanayake
Thank you Gilles. Is there some documentation on the vader btl, and how can I check which one (sm or vader) is being used?

Re: [OMPI users] OpenMPI optimizations for intra-node process communication

2015-09-01 Thread Gilles Gouaillardet
Saliya, Open MPI uses a btl for point-to-point communication, and automatically selects the best one per pair. Typically, the openib or tcp btl is used for inter-node communication, and the sm or vader btl for intra-node. Note that the vader btl uses the knem kernel module, when available, for even more efficient intra-node communication.
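To check what a given installation actually provides (standard ompi_info queries; exact parameter names vary across Open MPI versions, so the knem grep is only a hint):

    ompi_info | grep "MCA btl"       # btl components built into this installation
    ompi_info --all | grep -i knem   # knem-related parameters appear only if knem support was compiled in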