Re: [OMPI devel] New binding warnings in master

2015-03-20 Thread Ralph Castain
In fixing a problem for Mellanox, I noted that we had somehow lost our default 
binding policy intentions - i.e., we were no longer automatically binding by 
default. I fixed that, so you'll now see this warning if the numactl and 
numactl-devel libs are missing.

I suspect we should suppress those warnings when we are binding by default 
rather than by explicit request?
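
For example, a couple of ways to check or silence this without installing the 
NUMA packages (just a sketch using the standard binding options; the real fix 
is presumably to install numactl/numactl-devel and rebuild with that support):

  mpirun --report-bindings -np 2 hostname   # show where each rank actually lands
  mpirun --bind-to none -np 2 hostname      # disable binding entirely, so no warning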

On the other matter: that has been the agreed-upon behavior for some time now. 
If you don't specify anything, we launch a number of procs equal to the number 
of slots, with the slots auto-detected using hwloc and set equal to the number 
of discovered cores.

However, if you use -host, then we assume slots=1 for each time you mention 
the host name; this overrides any auto-discovery.
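
To make the slot accounting concrete (a sketch following the behavior described 
above, reusing Rolf's node name rather than anything verified on his system):

  mpirun hostname                  # nothing specified: slots = cores found by hwloc
  mpirun -host ivy0 hostname       # host listed once  -> 1 slot  -> 1 proc
  mpirun -host ivy0,ivy0 hostname  # host listed twice -> 2 slots -> 2 procs

That is why you see 20 copies of the hostname in the first case below and only 
one in the second.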

HTH
Ralph


> On Mar 20, 2015, at 8:16 AM, Rolf vandeVaart  wrote:
> 
> Greetings:
>  
> I am now seeing the following message for all my calls to mpirun on ompi 
> master.  This started with last night’s MTT run.  Is this intentional?
>  
> [rvandevaart@ivy0 ~]$ mpirun -np 1 hostname
> --------------------------------------------------------------------------
> WARNING: a request was made to bind a process. While the system
> supports binding the process itself, at least one node does NOT
> support binding memory to the process location.
> 
>   Node:  ivy0
> 
> This usually is due to not having the required NUMA support installed
> on the node. In some Linux distributions, the required support is
> contained in the libnumactl and libnumactl-devel packages.
> This is a warning only; your job will continue, though performance may be 
> degraded.
> --------------------------------------------------------------------------
> ivy0.nvidia.com 
>  
> 
> On another note, I noticed on both 1.8 and master that we get a different 
> number of processes if we specify the hostname.  This is not too big a deal, 
> but it surprised me.
> 
> [rvandevaart@ivy0 ~]$ /opt/openmpi/v1.8.4/bin/mpirun hostname
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> ivy0.nvidia.com 
> [rvandevaart@ivy0 ~]$ /opt/openmpi/v1.8.4/bin/mpirun -host ivy0 hostname
> ivy0.nvidia.com 
> [rvandevaart@ivy0 ~]$ 
> 


[OMPI devel] New binding warnings in master

2015-03-20 Thread Rolf vandeVaart
Greetings:

I am now seeing the following message for all my calls to mpirun on ompi 
master.  This started with last night's MTT run.  Is this intentional?


[rvandevaart@ivy0 ~]$ mpirun -np 1 hostname
--------------------------------------------------------------------------
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.

  Node:  ivy0

This usually is due to not having the required NUMA support installed
on the node. In some Linux distributions, the required support is
contained in the libnumactl and libnumactl-devel packages.
This is a warning only; your job will continue, though performance may be 
degraded.
--------------------------------------------------------------------------
ivy0.nvidia.com



On another note, I noticed on both 1.8 and master that we get a different number 
of processes if we specify the hostname.  This is not too big a deal, but it 
surprised me.

[rvandevaart@ivy0 ~]$ /opt/openmpi/v1.8.4/bin/mpirun hostname
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
ivy0.nvidia.com
[rvandevaart@ivy0 ~]$ /opt/openmpi/v1.8.4/bin/mpirun -host ivy0 hostname
ivy0.nvidia.com
[rvandevaart@ivy0 ~]$
