Re: [OMPI users] warning message for process binding with openmpi-dev-4010-g6c9d65c

2016-05-07 Thread Siegmar Gross

Hi Gilles,

"loki" is a machine in our new lab and I tried "--slot-list 0:0-5,1:0-5"
the first time, so that I don't know if it worked before. I can ask our
admin on Monday, if numactl-devel is installed.


Kind regards

Siegmar


On 05/07/16 12:10, Gilles Gouaillardet wrote:

Siegmar,

did you upgrade your OS recently, or change hyper-threading settings ?
this error message typically appears when the numactl-devel rpm is not installed
(numactl-devel on Red Hat; the package name might differ on SLES)

if not, would you mind retesting from scratch a previous tarball that used to
work without any warning ?

Cheers,

Gilles
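
The numactl-devel check Gilles suggests can be sketched as a quick shell
probe (the header path and package names below are assumptions:
numactl-devel on Red Hat, libnuma-devel on SLES):

```shell
# Probe for the libnuma development header that hwloc needs at build time.
# /usr/include/numaif.h and the package names are assumptions.
if [ -e /usr/include/numaif.h ]; then
    echo "numaif.h present"
else
    echo "numaif.h missing - install the numa devel package (e.g. zypper install libnuma-devel)"
fi
```

If the header is missing, installing the devel package and rebuilding
Open MPI (and its embedded hwloc) should make the warning go away.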


On Saturday, May 7, 2016, Siegmar Gross wrote:

Hi,

yesterday I installed openmpi-dev-4010-g6c9d65c on my "SUSE Linux
Enterprise Server 12 (x86_64)" with Sun C 5.13 and gcc-5.3.0.
Unfortunately I get the following warning message.

loki hello_1 128 ompi_info | grep -e "OPAL repo revision" -e "C compiler
absolute"
  OPAL repo revision: dev-4010-g6c9d65c
 C compiler absolute: /opt/solstudio12.4/bin/cc
loki hello_1 129 mpiexec -np 3 --host loki --slot-list 0:0-5,1:0-5 
hello_1_mpi
--
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.

  Node:  loki

Open MPI uses the "hwloc" library to perform process and memory
binding. This error message means that hwloc has indicated that
processor binding support is not available on this machine.

On OS X, processor and memory binding is not available at all (i.e.,
the OS does not expose this functionality).

On Linux, lack of the functionality can mean that you are on a
platform where processor and memory affinity is not supported in Linux
itself, or that hwloc was built without NUMA and/or processor affinity
support. When building hwloc (which, depending on your Open MPI
installation, may be embedded in Open MPI itself), it is important to
have the libnuma header and library files available. Different linux
distributions package these files under different names; look for
packages with the word "numa" in them. You may also need a developer
version of the package (e.g., with "dev" or "devel" in the name) to
obtain the relevant header files.

If you are getting this message on a non-OS X, non-Linux platform,
then hwloc does not support processor / memory affinity on this
platform. If the OS/platform does actually support processor / memory
affinity, then you should contact the hwloc maintainers:
https://github.com/open-mpi/hwloc.

This is a warning only; your job will continue, though performance may
be degraded.
--
Process 0 of 3 running on loki
Process 2 of 3 running on loki
Process 1 of 3 running on loki


Now 2 slave tasks are sending greetings.

Greetings from task 1:
  message type:3
...



loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 122 ls -l 
/usr/lib64/*numa*
-rwxr-xr-x 1 root root 48256 Nov 24 16:29 /usr/lib64/libnuma.so.1
loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 123 grep numa
log.configure.Linux.x86_64.64_cc
checking numaif.h usability... no
checking numaif.h presence... yes
configure: WARNING: numaif.h: present but cannot be compiled
configure: WARNING: numaif.h: check for missing prerequisite headers?
configure: WARNING: numaif.h: see the Autoconf documentation
configure: WARNING: numaif.h: section "Present But Cannot Be Compiled"
configure: WARNING: numaif.h: proceeding with the compiler's result
checking for numaif.h... no
loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 124
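
The "present but cannot be compiled" result above can be reproduced
outside of configure with a one-file probe run through the same
compiler (a sketch; `CC` is an assumption and should be set to the Sun
Studio cc used for the build):

```shell
# Reproduce configure's numaif.h compile test with the build compiler.
# ${CC:-cc} is an assumption -- point it at the compiler you configured with.
cat > conftest.c <<'EOF'
#include <numaif.h>
int main(void) { return 0; }
EOF
if ${CC:-cc} -c conftest.c -o conftest.o 2>conftest.err; then
    echo "numaif.h compiles"
else
    echo "numaif.h present but cannot be compiled:"
    cat conftest.err
fi
rm -f conftest.c conftest.o conftest.err
```

The compiler errors it prints should show which prerequisite header or
compiler flag the Sun compiler is missing.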



I didn't get the warning for openmpi-v1.10.2-176-g9d45e07 and
openmpi-v2.x-dev-1404-g74d8ea0, as you can see in my previous emails,
although I have the same messages in log.configure.*. I would be
grateful if somebody could fix the problem, if it is indeed a problem
and not an intended message. Thank you very much for any help in
advance.


Kind regards

Siegmar



___
users mailing list
us...@open-mpi.org
Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2016/05/29131.php


[OMPI users] warning message for process binding with openmpi-dev-4010-g6c9d65c

2016-05-07 Thread Siegmar Gross

Hi,

yesterday I installed openmpi-dev-4010-g6c9d65c on my "SUSE Linux
Enterprise Server 12 (x86_64)" with Sun C 5.13  and gcc-5.3.0.
Unfortunately I get the following warning message.

loki hello_1 128 ompi_info | grep -e "OPAL repo revision" -e "C compiler 
absolute"
  OPAL repo revision: dev-4010-g6c9d65c
 C compiler absolute: /opt/solstudio12.4/bin/cc
loki hello_1 129 mpiexec -np 3 --host loki --slot-list 0:0-5,1:0-5 hello_1_mpi
--
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.

  Node:  loki

Open MPI uses the "hwloc" library to perform process and memory
binding. This error message means that hwloc has indicated that
processor binding support is not available on this machine.

On OS X, processor and memory binding is not available at all (i.e.,
the OS does not expose this functionality).

On Linux, lack of the functionality can mean that you are on a
platform where processor and memory affinity is not supported in Linux
itself, or that hwloc was built without NUMA and/or processor affinity
support. When building hwloc (which, depending on your Open MPI
installation, may be embedded in Open MPI itself), it is important to
have the libnuma header and library files available. Different linux
distributions package these files under different names; look for
packages with the word "numa" in them. You may also need a developer
version of the package (e.g., with "dev" or "devel" in the name) to
obtain the relevant header files.

If you are getting this message on a non-OS X, non-Linux platform,
then hwloc does not support processor / memory affinity on this
platform. If the OS/platform does actually support processor / memory
affinity, then you should contact the hwloc maintainers:
https://github.com/open-mpi/hwloc.

This is a warning only; your job will continue, though performance may
be degraded.
--
Process 0 of 3 running on loki
Process 2 of 3 running on loki
Process 1 of 3 running on loki


Now 2 slave tasks are sending greetings.

Greetings from task 1:
  message type:3
...



loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 122 ls -l /usr/lib64/*numa*
-rwxr-xr-x 1 root root 48256 Nov 24 16:29 /usr/lib64/libnuma.so.1
loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 123 grep numa 
log.configure.Linux.x86_64.64_cc

checking numaif.h usability... no
checking numaif.h presence... yes
configure: WARNING: numaif.h: present but cannot be compiled
configure: WARNING: numaif.h: check for missing prerequisite headers?
configure: WARNING: numaif.h: see the Autoconf documentation
configure: WARNING: numaif.h: section "Present But Cannot Be Compiled"
configure: WARNING: numaif.h: proceeding with the compiler's result
checking for numaif.h... no
loki openmpi-dev-4010-g6c9d65c-Linux.x86_64.64_cc 124



I didn't get the warning for openmpi-v1.10.2-176-g9d45e07 and
openmpi-v2.x-dev-1404-g74d8ea0 as you can see in my previous emails,
although I have the same messages in log.configure.*. I would be
grateful, if somebody can fix the problem if it is a problem
and not an intended message. Thank you very much for any help in
advance.


Kind regards

Siegmar
/* An MPI-version of the "hello world" program, which delivers some
 * information about its machine and operating system.
 *
 *
 * Compiling:
 *   Store executable(s) into local directory.
 * mpicc -o  
 *
 *   Store executable(s) into predefined directories.
 * make
 *
 *   Make program(s) automatically on all specified hosts. You must
 *   edit the file "make_compile" and specify your host names before
 *   you execute it.
 * make_compile
 *
 * Running:
 *   LAM-MPI:
 * mpiexec -boot -np  
 * or
 * mpiexec -boot \
 *	 -host  -np   : \
 *	 -host  -np  
 * or
 * mpiexec -boot [-v] -configfile 
 * or
 * lamboot [-v] []
 *   mpiexec -np  
 *	 or
 *	 mpiexec [-v] -configfile 
 * lamhalt
 *
 *   OpenMPI:
 * "host1", "host2", and so on can all have the same name,
 * if you want to start a virtual computer with some virtual
 * cpu's on the local host. The name "localhost" is allowed
 * as well.
 *
 * mpiexec -np  
 * or
 * mpiexec --host  \
 *	 -np  
 * or
 * mpiexec -hostfile  \
 *	 -np  
 * or
 * mpiexec -app 
 *
 * Cleaning:
 *   local computer:
 * rm 
 * or
 * make clean_all
 *   on all specified computers (you must edit the file "make_clean_all"
 *   and specify your host names before you execute it.
 * make_clean_all
 *
 *
 * File: hello_1_mpi.c		   	Author: S. Gross
 * Date: 01.10.2012
 *
 */

#include <stdio.h>		/* header names reconstructed; the originals
				 * were lost in extraction		*/
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/utsname.h>
#include "mpi.h"

#define	BUF_SIZE	255		/* message buffer size		*/
#define	MAX_TASKS	12		/* max. number