Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-26 Thread Dave Love
Jed Brown writes: > It would be folly for PETSc to ship with a hard dependency on MPI-3. > You wouldn't be able to package it with ompi-1.6, for example. But that > doesn't mean PETSc's configure can't test for MPI-3 functionality and > use it when available. Indeed, it does (though for different …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-25 Thread Jed Brown
Dave Love writes: > PETSc can't be using MPI-3 because I'm in the process of fixing rpm > packaging for the current version and building it with ompi 1.6. It would be folly for PETSc to ship with a hard dependency on MPI-3. You wouldn't be able to package it with ompi-1.6, for example. But that doesn't mean PETSc's configure can't test for MPI-3 functionality and use it when available. …
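
A minimal sketch of the kind of optional MPI-3 use being discussed here: guard the MPI-3 call behind the standard MPI_VERSION macro from mpi.h, with a blocking fallback for older implementations such as Open MPI 1.6. This is illustrative only, not PETSc's actual configure-time detection.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int x = 1, sum = 0;
#if MPI_VERSION >= 3
    /* MPI-3 path: non-blocking collective, allows overlap */
    MPI_Request req;
    MPI_Iallreduce(&x, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
#else
    /* Pre-MPI-3 fallback (e.g. Open MPI 1.6): blocking collective */
    MPI_Allreduce(&x, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
#endif
    printf("sum = %d, built against MPI %d.%d\n", sum, MPI_VERSION, MPI_SUBVERSION);
    MPI_Finalize();
    return 0;
}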

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-25 Thread Dave Love
Jeff Hammond writes: > MPC is a great idea, although it poses some challenges w.r.t. globals and > such (however, see below). Unfortunately, "MPC conforms to the POSIX > Threads, OpenMP 3.1 and MPI 1.3 standards" ( > http://mpc.hpcframework.paratools.com/), it does not do me much good (I'm a …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-21 Thread Jeff Hammond
On Thu, Jan 21, 2016 at 4:07 AM, Dave Love wrote: > > Jeff Hammond writes: > > > Just using Intel compilers, OpenMP and MPI. Problem solved :-) > > > > (I work for Intel and the previous statement should be interpreted as a > > joke, > > Good! > > > although Intel OpenMP and MPI interoperate as well as any > > > implementations of which I am aware.) …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-21 Thread Dave Love
Jeff Hammond writes: > Just using Intel compilers, OpenMP and MPI. Problem solved :-) > > (I work for Intel and the previous statement should be interpreted as a > joke, Good! > although Intel OpenMP and MPI interoperate as well as any > implementations of which I am aware.) Better than MPC …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Jeff Hammond
On Wed, Jan 6, 2016 at 4:36 PM, Matt Thompson wrote: > On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet > wrote: > >> FWIW, >> >> there has been one attempt to set the OMP_* environment variables within >> OpenMPI, and that was aborted >> because that caused crashes with a prominent commercial compiler. …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Matt Thompson
On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet wrote: > FWIW, > > there has been one attempt to set the OMP_* environment variables within > OpenMPI, and that was aborted > because that caused crashes with a prominent commercial compiler. > > also, i'd like to clarify that OpenMPI does bind MPI tasks (e.g. processes), and it is up to the OpenMP runtime to bind …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Gilles Gouaillardet
FWIW, there has been one attempt to set the OMP_* environment variables within OpenMPI, and that was aborted because that caused crashes with a prominent commercial compiler. also, i'd like to clarify that OpenMPI does bind MPI tasks (e.g. processes), and it is up to the OpenMP runtime to bind …
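
One way to see the two layers Gilles describes is to ask Open MPI to print the mask it gives each rank, and let the OpenMP 4.0 controls place the threads inside it. A sketch only, reusing the hello-hybrid.x test program and the mapping worked out later in this thread; whether OMP_PLACES/OMP_PROC_BIND are honoured depends on the OpenMP runtime:

$ env OMP_NUM_THREADS=7 OMP_PLACES=cores OMP_PROC_BIND=close \
    mpirun -np 4 --map-by ppr:2:socket:pe=7 --report-bindings ./hello-hybrid.x

--report-bindings reports what Open MPI bound each MPI task to; placing the 7 threads within that mask is then entirely the OpenMP runtime's job.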

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
Thanks for the clarification. :) 2016-01-07 0:48 GMT+01:00 Jeff Hammond : > KMP_AFFINITY is an Intel OpenMP runtime setting, not an MKL option, > although MKL will respect it since MKL uses the Intel OpenMP runtime (by > default, at least). > > The OpenMP 4.0 equivalents of KMP_AFFINITY are OMP_PROC_BIND and > OMP_PLACES. …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Jeff Hammond
KMP_AFFINITY is an Intel OpenMP runtime setting, not an MKL option, although MKL will respect it since MKL uses the Intel OpenMP runtime (by default, at least). The OpenMP 4.0 equivalents of KMP_AFFINITY are OMP_PROC_BIND and OMP_PLACES. I do not know how many OpenMP implementations support these …
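
For reference, a rough pairing of the Intel-specific setting used elsewhere in this thread with its portable OpenMP 4.0 counterpart; the two are only approximately equivalent, since exact placement semantics differ between runtimes:

export KMP_AFFINITY=compact     # Intel OpenMP runtime only
export OMP_PLACES=cores         # OpenMP 4.0: one place per core
export OMP_PROC_BIND=close      # OpenMP 4.0: pack threads onto nearby places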

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
Ok, thanks :) 2016-01-06 22:03 GMT+01:00 Ralph Castain : > Not really - just consistent with the other cmd line options. > > On Jan 6, 2016, at 12:58 PM, Nick Papior wrote: > > It was just that when I started using map-by I didn't get why: > ppr:2 > but > PE=2 > I would at least have expected: > ppr=2:PE=2 > or > ppr:2:PE:2 …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Ralph Castain
Not really - just consistent with the other cmd line options. > On Jan 6, 2016, at 12:58 PM, Nick Papior wrote: > > It was just that when I started using map-by I didn't get why: > ppr:2 > but > PE=2 > I would at least have expected: > ppr=2:PE=2 > or > ppr:2:PE:2 > ? > Does this have a reason? …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
It was just that when I started using map-by I didn't get why: ppr:2 but PE=2 I would at least have expected: ppr=2:PE=2 or ppr:2:PE:2 ? Does this have a reason? 2016-01-06 21:54 GMT+01:00 Ralph Castain : > ah yes, “r” = “resource”!! Thanks for the reminder :-) > > The difference in delimiter is just to simplify parsing …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Ralph Castain
ah yes, “r” = “resource”!! Thanks for the reminder :-) The difference in delimiter is just to simplify parsing - we can “split” the string on colons to separate out the options, and then use “=” to set the value. Nothing particularly significant about the choice. > On Jan 6, 2016, at 12:48 PM, Nick Papior wrote: …
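
Putting Ralph's two points together, the shape used later in this thread is ppr:<N>:<resource> for the mapping itself, with qualifiers such as pe=<M> appended after a further colon, e.g. -map-by ppr:2:socket:pe=7.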

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Ralph Castain
Hmmm…let me see if I can remember :-) Procs-per-object is what it does, of course, but I honestly forget what that last “r” stands for! So what your command line is telling us is: map 2 processes on each socket, binding each process to 7 cpu’s (“pe” = processing element). In this case, we have …
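
Concretely, on the 28-core node described in this thread (two 14-core sockets; assuming logical CPUs 0-13 sit on socket 0 and 14-27 on socket 1), ppr:2:socket:pe=7 with 4 ranks works out to:

rank 0 -> socket 0, CPUs 0-6      rank 1 -> socket 0, CPUs 7-13
rank 2 -> socket 1, CPUs 14-20    rank 3 -> socket 1, CPUs 21-27

Each rank then runs its 7 OpenMP threads inside its own 7-CPU mask.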

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
You are correct. "socket" means that the resource is socket, "ppr:2" means 2 processes per resource. PE= is Processing Elements per process. Perhaps the devs can shed some light on why PE uses "=" and ppr has ":" as delimiter for resource request? This "old" slide show from Jeff shows the usage …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Matt Thompson
A ha! The Gurus know all. The map-by was the magic sauce:
(1176) $ env OMP_NUM_THREADS=7 KMP_AFFINITY=compact mpirun -np 4 -map-by ppr:2:socket:pe=7 ./hello-hybrid.x | sort -g -k 18
Hello from thread 0 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
Ah, yes, my example was for 10 cores per socket, good catch :) 2016-01-06 21:19 GMT+01:00 Ralph Castain : > I believe he wants two procs/socket, so you’d need ppr:2:socket:pe=7 > > > On Jan 6, 2016, at 12:14 PM, Nick Papior wrote: > > I do not think KMP_AFFINITY should affect anything in OpenMPI, it is an MKL > > env setting? Or am I wrong? …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Ralph Castain
I believe he wants two procs/socket, so you’d need ppr:2:socket:pe=7 > On Jan 6, 2016, at 12:14 PM, Nick Papior wrote: > > I do not think KMP_AFFINITY should affect anything in OpenMPI, it is an MKL > env setting? Or am I wrong? > > Note that these are used in an environment where openmpi automatically gets the host-file. Hence they are not present. …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Matt Thompson
Sure. Here's the basic one:
(1159) $ env OMP_NUM_THREADS=7 mpirun -np 4 ./hello-hybrid.x | sort -g -k 18
Hello from thread 3 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process 0 out of 4 on borgo035 on CPU 1
Hello from thread 4 out of 7 from process 0 …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
I do not think KMP_AFFINITY should affect anything in OpenMPI, it is an MKL env setting? Or am I wrong? Note that these are used in an environment where openmpi automatically gets the host-file. Hence they are not present. With intel mkl and openmpi I got the best performance using these, rather …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Erik Schnetter
Setting KMP_AFFINITY will probably override anything that OpenMPI sets. Can you try without? -erik On Wed, Jan 6, 2016 at 2:46 PM, Matt Thompson wrote: > Hello Open MPI Gurus, > > As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to do > things to get the same behavior in various stacks. …

[OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Matt Thompson
Hello Open MPI Gurus, As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to do things to get the same behavior in various stacks. For example, I have a 28-core node (2 14-core Haswells), and I'd like to run 4 MPI processes and 7 OpenMP threads. Thus, I'd like the processes to be 2 per socket …
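
For reference, a minimal C sketch of a hello-hybrid.x-style test program that prints lines in the format quoted earlier in this thread. The program actually used here may differ; sched_getcpu() is Linux-specific, and the compile line is typically something like "mpicc -fopenmp hello-hybrid.c -o hello-hybrid.x" (the OpenMP flag spelling varies by compiler).

#define _GNU_SOURCE   /* for sched_getcpu() */
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    /* Threads exist, but only the master thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

#pragma omp parallel
    {
        /* sched_getcpu() reports the CPU this thread is running on right now */
        printf("Hello from thread %d out of %d from process %d out of %d on %s on CPU %d\n",
               omp_get_thread_num(), omp_get_num_threads(), rank, size, host, sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}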