Jed Brown writes:
> It would be folly for PETSc to ship with a hard dependency on MPI-3.
> You wouldn't be able to package it with ompi-1.6, for example. But that
> doesn't mean PETSc's configure can't test for MPI-3 functionality and
> use it when available. Indeed, it does (though for …
Dave Love writes:
> PETSc can't be using MPI-3 because I'm in the process of fixing rpm
> packaging for the current version and building it with ompi 1.6.
It would be folly for PETSc to ship with a hard dependency on MPI-3.
You wouldn't be able to package it with ompi-1.6, for example. But that
doesn't mean PETSc's configure can't test for MPI-3 functionality and
use it when available. Indeed, it does (though for …
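The kind of guarded use described above can be sketched with the MPI_VERSION
macro from mpi.h. A minimal sketch follows; the feature picked here
(MPI_Comm_split_type with MPI_COMM_TYPE_SHARED) is just one example of an
MPI-3 call and not necessarily what PETSc's configure actually probes for.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
#if MPI_VERSION >= 3
    /* MPI-3 path: e.g. split COMM_WORLD into shared-memory islands. */
    MPI_Comm shm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shm);
    printf("MPI-3 features available\n");
    MPI_Comm_free(&shm);
#else
    /* Pre-MPI-3 library (e.g. Open MPI 1.6): skip the optional path. */
    printf("falling back to the MPI-2 code path\n");
#endif
    MPI_Finalize();
    return 0;
}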
Jeff Hammond writes:
> MPC is a great idea, although it poses some challenges w.r.t. globals and
> such (however, see below). Unfortunately, since "MPC conforms to the POSIX
> Threads, OpenMP 3.1 and MPI 1.3 standards"
> (http://mpc.hpcframework.paratools.com/), it does not do me much good …
On Thu, Jan 21, 2016 at 4:07 AM, Dave Love wrote:
>
> Jeff Hammond writes:
>
> > Just using Intel compilers, OpenMP and MPI. Problem solved :-)
> >
> > (I work for Intel and the previous statement should be interpreted as a
> > joke,
>
> Good!
>
> > although Intel OpenMP and MPI interoperate as well as any …
Jeff Hammond writes:
> Just using Intel compilers, OpenMP and MPI. Problem solved :-)
>
> (I work for Intel and the previous statement should be interpreted as a
> joke,
Good!
> although Intel OpenMP and MPI interoperate as well as any
> implementations of which I am aware.)
Better than MPC (…
On Wed, Jan 6, 2016 at 4:36 PM, Matt Thompson wrote:
> On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet wrote:
>
>> FWIW,
>>
>> there has been one attempt to set the OMP_* environment variables within
>> OpenMPI, and that was aborted
>> because that caused crashes with a prominent commercial compiler. …
On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet wrote:
> FWIW,
>
> there has been one attempt to set the OMP_* environment variables within
> OpenMPI, and that was aborted
> because that caused crashes with a prominent commercial compiler.
>
> also, i'd like to clarify that OpenMPI does bind MPI tasks …
FWIW,
there has been one attempt to set the OMP_* environment variables within
OpenMPI, and that was aborted
because that caused crashes with a prominent commercial compiler.
also, i'd like to clarify that OpenMPI does bind MPI tasks (i.e.
processes), and it is up to the OpenMP runtime to bind …
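To make that division of labor concrete: a launch along the following lines
(a sketch only; OMP_PROC_BIND/OMP_PLACES assume an OpenMP 4.0 runtime, and
hello-hybrid.x stands in for any hybrid binary) has mpirun bind each rank to
a block of 7 cores, while the OpenMP runtime then pins the 7 threads inside
that block:

  env OMP_NUM_THREADS=7 OMP_PROC_BIND=close OMP_PLACES=cores \
      mpirun -np 4 -map-by ppr:2:socket:pe=7 ./hello-hybrid.x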
Thanks for the clarification. :)
2016-01-07 0:48 GMT+01:00 Jeff Hammond:
> KMP_AFFINITY is an Intel OpenMP runtime setting, not an MKL option,
> although MKL will respect it since MKL uses the Intel OpenMP runtime (by
> default, at least).
>
> The OpenMP 4.0 equivalents of KMP_AFFINITY are OMP_PROC_BIND and
> OMP_PLACES. …
KMP_AFFINITY is an Intel OpenMP runtime setting, not an MKL option,
although MKL will respect it since MKL uses the Intel OpenMP runtime (by
default, at least).
The OpenMP 4.0 equivalents of KMP_AFFINITY are OMP_PROC_BIND and
OMP_PLACES. I do not know how many OpenMP implementations support these …
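For reference, the rough correspondence (assuming an OpenMP 4.0-aware
runtime; KMP_AFFINITY has more knobs than the standard variables, so this is
only approximate):

  KMP_AFFINITY=compact                    (Intel OpenMP runtime only)
  OMP_PROC_BIND=close OMP_PLACES=cores    (portable OpenMP 4.0, roughly the same)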
Ok, thanks :)
2016-01-06 22:03 GMT+01:00 Ralph Castain:
> Not really - just consistent with the other cmd line options.
>
> On Jan 6, 2016, at 12:58 PM, Nick Papior wrote:
>
> It was just that when I started using map-by I didn't get why:
> ppr:2
> but
> PE=2
> I would at least have expected:
> ppr=2:PE=2 …
Not really - just consistent with the other cmd line options.
> On Jan 6, 2016, at 12:58 PM, Nick Papior wrote:
>
> It was just that when I started using map-by I didn't get why:
> ppr:2
> but
> PE=2
> I would at least have expected:
> ppr=2:PE=2
> or
> ppr:2:PE:2
> ?
> Does this have a reason?
It was just that when I started using map-by I didn't get why:
ppr:2
but
PE=2
I would at least have expected:
ppr=2:PE=2
or
ppr:2:PE:2
?
Does this have a reason?
2016-01-06 21:54 GMT+01:00 Ralph Castain:
> ah yes, “r” = “resource”!! Thanks for the reminder :-)
>
> The difference in delimiter is just to simplify parsing …
ah yes, “r” = “resource”!! Thanks for the reminder :-)
The difference in delimiter is just to simplify parsing - we can “split” the
string on colons to separate out the options, and then use “=” to set the
value. Nothing particularly significant about the choice.
> On Jan 6, 2016, at 12:48 PM, Nick Papior wrote: …
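Applied to the string used elsewhere in this thread, that parse goes roughly
like this (an informal illustration, not the actual Open MPI parser):

  ppr:2:socket:pe=7
    split on ":"        ->  ppr | 2 | socket | pe=7
    ppr / 2 / socket    ->  2 processes per resource, resource = socket
    pe=7, split on "="  ->  option pe (processing elements), value 7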
Hmmm…let me see if I can remember :-)
Procs-per-object is what it does, of course, but I honestly forget what that
last “r” stands for!
So what your command line is telling us is:
map 2 processes on each socket, binding each process to 7 cpus (“pe” =
processing element)
In this case, we have …
You are correct. "socket" means that the resource is the socket, and "ppr:2"
means 2 processes per resource.
PE= is the number of processing elements per process.
Perhaps the devs can shed some light on why PE uses "=" while ppr uses ":" as
the delimiter for the resource request?
This "old" slide show from Jeff shows the usage
A ha! The Gurus know all. The map-by was the magic sauce:
(1176) $ env OMP_NUM_THREADS=7 KMP_AFFINITY=compact mpirun -np 4 -map-by
ppr:2:socket:pe=7 ./hello-hybrid.x | sort -g -k 18
Hello from thread 0 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process …
Ah, yes, my example was for 10 cores per socket, good catch :)
2016-01-06 21:19 GMT+01:00 Ralph Castain:
> I believe he wants two procs/socket, so you’d need ppr:2:socket:pe=7
>
>
> On Jan 6, 2016, at 12:14 PM, Nick Papior wrote:
>
> I do not think KMP_AFFINITY should affect anything in OpenMPI …
I believe he wants two procs/socket, so you’d need ppr:2:socket:pe=7
> On Jan 6, 2016, at 12:14 PM, Nick Papior wrote:
>
> I do not think KMP_AFFINITY should affect anything in OpenMPI; it is an MKL
> env setting, isn't it? Or am I wrong?
>
> Note that these are used in an environment where openmpi automatically gets
> the host-file. …
Sure. Here's the basic one:
(1159) $ env OMP_NUM_THREADS=7 mpirun -np 4 ./hello-hybrid.x | sort -g -k 18
Hello from thread 3 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process 0 out of 4 on borgo035 on CPU 1
Hello from thread 4 out of 7 from process 0 …
I do not think KMP_AFFINITY should affect anything in OpenMPI; it is an MKL
env setting, isn't it? Or am I wrong?
Note that these are used in an environment where openmpi automatically gets
the host-file. Hence they are not present.
With Intel MKL and OpenMPI I got the best performance using these, rather …
Setting KMP_AFFINITY will probably override anything that OpenMPI
sets. Can you try without?
-erik
On Wed, Jan 6, 2016 at 2:46 PM, Matt Thompson wrote:
> Hello Open MPI Gurus,
>
> As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to do
> things to get the same behavior in various stacks. …
Hello Open MPI Gurus,
As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to do
things to get the same behavior in various stacks. For example, I have a
28-core node (2 14-core Haswells), and I'd like to run 4 MPI processes with
7 OpenMP threads each. Thus, I'd like the processes to be 2 per socket …
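For completeness, a minimal hybrid hello-world along the lines of the
hello-hybrid.x used above could look like the sketch below (the actual test
code is not shown in this thread; sched_getcpu() assumes Linux/glibc). Build
with, e.g., mpicc -fopenmp hello-hybrid.c -o hello-hybrid.x.

#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>   /* sched_getcpu(), Linux-specific */
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &namelen);

    /* Each OpenMP thread reports which CPU it is currently running on. */
    #pragma omp parallel
    {
        printf("Hello from thread %d out of %d from process %d out of %d on %s on CPU %d\n",
               omp_get_thread_num(), omp_get_num_threads(),
               rank, size, host, sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}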