Thanks, Ralph, for this answer. Maybe I wasn't very clear (my English is not so
good...).
I do not want bind-to-core to be the default. For hybrid codes (OpenMP +
MPI) I need to bind to the socket. But at the moment, I am unable to request the
--bind-to-core option:
[begou@kareline OARTEST]$ mpirun -n 4 --bind-to-core ./hellompi172
--------------------------------------------------------------------------
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.
Node: kareline
This is a warning only; your job will continue, though performance may
be degraded.
and without "--bind-to-core", some processes share the same core, each running
at only 50% load.
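What I actually want for the hybrid runs is socket-level binding, something like
the following sketch (I am assuming here that the new "--bind-to <level>" syntax
matches the hwloc_base_binding_policy values shown by ompi_info in my earlier
message; I have not been able to test it yet):

```
# sketch, assumed 1.7-series syntax: bind each MPI rank to a socket so
# that its OpenMP threads can spread over that socket's cores
mpirun -n 4 --bind-to socket ./hellompi172
```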
With an older MPI version (OpenMPI-1.6.3 built with GCC 4.4) it works on the
same server. At the moment I am trying to build OpenMPI-1.7.3 and 1.7.2 with
GCC 4.8.1 and openib support.
Browsing the developers' list (not this user list), I found a thread where you
discuss libnuma for a similar problem. libnuma was installed, but not the
development package (numactl-devel-2.0.7-6.el6.x86_64), so I have just added
that package and am now trying to rebuild OpenMPI.
Patrick
Ralph Castain wrote:
We never set binding "on" by default, and there is no configure option that
will do so. Never has been, to my knowledge.
If you truly want it to bind by default, then you need to add that directive to
your default MCA param file:
<prefix>/etc/openmpi-mca-params.conf
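For example, to make socket binding the default for your hybrid jobs, a line
such as the following should work (parameter name and valid values taken from
the ompi_info output in your message):

```
# <prefix>/etc/openmpi-mca-params.conf
# binding policy parameter as listed by ompi_info -a;
# valid values include none, hwthread, core, socket, numa, board
hwloc_base_binding_policy = socket
```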
On Oct 21, 2013, at 3:17 AM, Patrick Begou <patrick.be...@legi.grenoble-inp.fr>
wrote:
I am compiling OpenMPI 1.7.3 and 1.7.2 with GCC 4.8.1, but I'm unable to
activate a binding policy at compile time.
ompi_info -a shows:
MCA hwloc: parameter "hwloc_base_binding_policy" (current value: "", data
source: default, level: 9 dev/all, type: string)
Policy for binding processes [none (default) |
hwthread | core | l1cache | l2cache | l3cache | socket | numa | board]
(supported qualifiers: overload-allowed,if-supported)
MCA hwloc: parameter "hwloc_base_bind_to_core" (current value: "false", data
source: default, level: 9 dev/all, type: bool)
Bind processes to cores
Valid values: 0: f|false|disabled, 1: t|true|enabled
MCA hwloc: parameter "hwloc_base_bind_to_socket" (current value: "false", data
source: default, level: 9 dev/all, type: bool)
Bind processes to sockets
Valid values: 0: f|false|disabled, 1: t|true|enabled
So clearly it is not activated.
I've tried passing this option to ./configure, but it doesn't help:
--enable-mca-direct=hwloc_base_bind_to_core,hwloc_base_bind_to_socket
I know it should work because it works out of the box with the OpenMPI-1.6.3
build I compiled several months ago.
I think I've messed something up, but where?
Thanks for your advice
Patrick
--
===================================================================
| Equipe M.O.S.T. | |
| Patrick BEGOU | mailto:patrick.be...@grenoble-inp.fr |
| LEGI | |
| BP 53 X | Tel 04 76 82 51 35 |
| 38041 GRENOBLE CEDEX | Fax 04 76 82 52 71 |
===================================================================
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users