Re: [OMPI users] COLL-ML ATTENTION

2018-07-05 Thread Deva
Mellanox HCOLL requires a valid IPoIB setup to use IB MCAST capabilities. You can disable the MCAST features with -x HCOLL_ENABLE_MCAST_ALL=0. On Wed, Jul 4, 2018 at 7:00 PM, larkym via users wrote: > Good evening, > > Can someone help me understand the following error I am getting? > >
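
For example, the variable can be exported to all ranks on the mpirun command line (an illustrative sketch; the process count and application name are placeholders):

  mpirun -np 4 -x HCOLL_ENABLE_MCAST_ALL=0 ./your_app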

Re: [OMPI users] An equivalent to btl_openib_include_if when MXM over Infiniband ?

2016-08-19 Thread Deva
Hi Martin, MXM's default transport is UD (MXM_TLS=ud,shm,self), which is scalable when running large applications. RC (MXM_TLS=rc,shm,self) is recommended for microbenchmarks and very small-scale applications. Yes, the max seg size setting is too small. Did you check any message rate
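
For example, a message-rate microbenchmark run over RC might look like this (an illustrative sketch; osu_mbw_mr from the OSU Micro-Benchmarks and the host names are assumptions, not from the original thread):

  mpirun -np 2 -H node01,node02 -x MXM_TLS=rc,shm,self ./osu_mbw_mr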

Re: [OMPI users] An equivalent to btl_openib_include_if when MXM over Infiniband ?

2016-08-19 Thread Deva
Hi Martin, Can you check if it is any better with "-x MXM_TLS=rc,shm,self"? -Devendar On Tue, Aug 16, 2016 at 11:28 AM, Audet, Martin wrote: > Hi Josh, > > Thanks for your reply. I did try setting MXM_RDMA_PORTS=mlx4_0:1 for all > my MPI processes > and it did

Re: [OMPI users] Open MPI 1.8.8 and hcoll in system space

2015-08-12 Thread Deva
g those, but forgot about them. I am curious, though, why > using '-mca coll ^ml' wouldn't work for me. > > We'll watch for the next HPCX release. Is there an ETA on when that > release may happen? Thank you for the help! > David > > > On 08/12/2015 04:04 PM, Deva wrote: >

Re: [OMPI users] Open MPI 1.8.8 and hcoll in system space

2015-08-12 Thread Deva
for duty > > This implies to me that some other library is being used instead of > /usr/lib64/libhcoll.so, but I am not sure how that could be... > > Thanks, > David > > On 08/12/2015 03:30 PM, Deva wrote: > > Hi David, > > I tried the same tarball on OFED-1.5.4
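
One way to confirm which libhcoll.so is actually resolved at run time is to inspect the hcoll component with ldd (an illustrative sketch; the component path depends on your Open MPI installation prefix and is an assumption here):

  ldd /usr/lib64/openmpi/lib/openmpi/mca_coll_hcoll.so | grep libhcoll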

Re: [OMPI users] Open MPI 1.8.8 and hcoll in system space

2015-08-12 Thread Deva
c4 ompi_mpi_init() ??:0
> 13 0x00092ea0 PMPI_Init() ??:0
> 14 0x004009b6 main() ??:0
> 15 0x0001ed5d __libc_start_main() ??:0
> 16 0x004008c9 _start() ??:0
> ===
> --
> mpirun noticed that process rank 1 with PID 14678 on node zo-fe1 exited on

Re: [OMPI users] Open MPI 1.8.8 and hcoll in system space

2015-08-12 Thread Deva
Hi David, This issue comes from the hcoll library, most likely because of a symbol conflict with the ml module; this was recently fixed in HCOLL. Can you try "-mca coll ^ml" and see if this workaround works in your setup? -Devendar On Wed, Aug 12, 2015 at 9:30 AM, David Shrader
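
For example (an illustrative sketch; the process count and application name are placeholders):

  mpirun -np 16 -mca coll ^ml ./your_app

The caret excludes the ml component from the coll framework, so the conflicting component is never opened.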

Re: [OMPI users] Max Registerable Memory Warning

2015-02-08 Thread Deva
What OFED version are you running? If it is not the latest, is it possible to upgrade? Otherwise, can you try the latest OMPI release (>= v1.8.4), where this warning is ignored on older OFEDs? -Devendar On Sun, Feb 8, 2015 at 12:37 PM, Saliya Ekanayake wrote: > Hi, > >
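
If OFED is installed, its version can usually be checked with (illustrative; ofed_info ships with OFED distributions):

  ofed_info -s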

Re: [OMPI users] Icreasing OFED registerable memory

2015-01-06 Thread Deva
will display the current configuration. -- -Devendar On Tue, Jan 6, 2015 at 1:37 PM, Deva <devendar.bure...@gmail.com> wrote: > Hi Waleed, > > -- > Memlock limit: 65536 > -- > > such a low limit should be due to per-user lock me

Re: [OMPI users] Icreasing OFED registerable memory

2015-01-06 Thread Deva
Hi Waleed, -- Memlock limit: 65536 -- Such a low limit is likely due to the per-user locked memory limit. Can you make sure it is set to "unlimited" on all nodes ("ulimit -l unlimited")? -Devendar On Tue, Jan 6, 2015 at 3:42 AM, Waleed Lotfy wrote: >
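
To make the limit persistent across logins, it is commonly raised in /etc/security/limits.conf (a sketch, assuming PAM applies limits in your launch environment; remote daemons such as sshd or a resource manager may need their own configuration):

  * soft memlock unlimited
  * hard memlock unlimited

Afterwards, "ulimit -l" on each node should report "unlimited".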

Re: [OMPI users] Icreasing OFED registerable memory

2014-12-29 Thread Deva
Hi Waleed, It is highly recommended to upgrade to the latest OFED. Meanwhile, can you try the latest OMPI release (v1.8.4), where this warning is ignored on older OFEDs? -Devendar On Sun, Dec 28, 2014 at 6:03 AM, Waleed Lotfy wrote: > I have a bunch of 8 GB memory nodes in a
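
If upgrading OFED is not an option right away, the amount of registerable memory on mlx4 HCAs is commonly increased through driver module parameters (a sketch following the Open MPI FAQ guidance; the values shown are examples, and should be sized so that 2^log_num_mtt * 2^log_mtts_per_seg * page_size covers at least twice the node's RAM):

  # /etc/modprobe.d/mlx4_core.conf  (reload the driver or reboot afterwards)
  options mlx4_core log_num_mtt=24 log_mtts_per_seg=0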