Re: [OMPI users] OpenMP & Open MPI

2016-03-13 Thread Nick Papior
For instance, instead of launching 8 MPI processes you may launch 2 MPI processes, each forking 4 threads. > On Sun, Mar 13, 2016 at 5:59 PM, Nick Papior wrote: > 2016-03-13 22:02 GMT+01:00 Matthew Larkin: >> Hello, >> My und…
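A minimal launch sketch of that 2x4 hybrid split, in line with the mapping flags used elsewhere in these threads (assuming one process per socket; ./app is a placeholder binary):

  mpirun -np 2 -x OMP_NUM_THREADS=4 --map-by ppr:1:socket:pe=4 ./app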

Re: [OMPI users] OpenMP & Open MPI

2016-03-13 Thread Nick Papior
2016-03-13 22:02 GMT+01:00 Matthew Larkin: > Hello, > My understanding is Open MPI can utilize shared and/or distributed memory architectures (parallel programming model). OpenMP is solely for shared memory systems. > I believe Open MPI incorporates OpenMP, from some of the other archives I…
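A minimal hybrid sketch of how the two coexist (MPI ranks may span nodes; OpenMP threads live inside one rank), assuming an MPI compiler wrapper with OpenMP enabled, e.g. mpif90 -fopenmp:

  program hybrid
    use mpi
    use omp_lib
    implicit none
    integer :: ierr, rank, nt
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    nt = omp_get_max_threads()   ! threads this rank would fork
    print '(a,i0,a,i0,a)', 'rank ', rank, ' runs ', nt, ' threads'
    call MPI_Finalize(ierr)
  end program hybrid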

Re: [OMPI users] Open MPI backwards incompatibility issue in statically linked program

2016-02-13 Thread Nick Papior
…using an intel-suite of compilers (for instance). > Thanks, > Kim > On Sat, Feb 13, 2016 at 10:45 PM, Nick Papior wrote: >> You may be interested in reading: >> https://www.open-mpi.org/software/ompi/versions/ >> 2016-02-13 22:30 GMT+01:00 K…

Re: [OMPI users] Open MPI backwards incompatibility issue in statically linked program

2016-02-13 Thread Nick Papior
You may be interested in reading: https://www.open-mpi.org/software/ompi/versions/ 2016-02-13 22:30 GMT+01:00 Kim Walisch: > Hi, > In order to make my users' lives easier I have built a fully statically linked version of my primecount program. So the program also statically links against…

Re: [OMPI users] build failure with NAG Fortran

2016-01-26 Thread Nick Papior
Try adding this flag to the nagfor compiler: -width=90. It seems it may be related to a line-length limit? 2016-01-26 16:26 GMT+01:00 Dave Love: > Building 1.10.2 with the NAG Fortran compiler version 6.0 fails with > libtool: compile: nagfor -I../../../../ompi/include -I../../../../ompi/in…
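A sketch of where such a flag would be injected when rebuilding (assuming the build honors FCFLAGS; whether -width=90 actually cures this failure is the open question of the thread):

  ./configure FC=nagfor FCFLAGS="-width=90"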

Re: [OMPI users] how to benchmark a server with openmpi?

2016-01-24 Thread Nick Papior
*All* codes scale differently, so you should do these tests with your own code, and not a different code (such as MM). 2016-01-24 15:38 GMT+01:00 Ibrahim Ikhlawi: > Hello, > I am working on a server and run Java codes with OpenMPI. I want to know which number of processes is the fastest t…

Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Nick Papior
The status field should be: integer :: stat(MPI_STATUS_SIZE). Perhaps n is located on the stack just after the stat variable, which the receive then overwrites. 2016-01-22 15:37 GMT+01:00 Paweł Jarzębski: > Hi, > I wrote this code: > program hello > implicit none > include 'mpif.h'…
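A minimal sketch of the fix (ranks and tags are placeholders; the point is that the status argument is an array of size MPI_STATUS_SIZE, so MPI_Recv cannot clobber neighboring stack variables):

  program hello
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, n
    integer :: stat(MPI_STATUS_SIZE)   ! correct: an array, not a scalar
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) then
      n = 42
      call MPI_Send(n, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
      call MPI_Recv(n, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, stat, ierr)
      print *, 'rank 1 got', n
    end if
    call MPI_Finalize(ierr)
  end program hello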

Re: [OMPI users] MPI, Fortran, and GET_ENVIRONMENT_VARIABLE

2016-01-15 Thread Nick Papior
Wouldn't this be partially available via https://github.com/open-mpi/ompi/pull/326 in the trunk? Of course the switch is not gathered from this, but it might work as an initial step towards what you seek, Matt? 2016-01-15 17:27 GMT+01:00 Ralph Castain: > Yes, we don't propagate envars ourselves…
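For context, a hedged sketch of explicit propagation with mpirun -x paired with the Fortran 2003 intrinsic (MY_VAR and its value are placeholders):

  mpirun -x MY_VAR=hello -np 2 ./app

  ! inside the Fortran program:
  character(len=64) :: val
  integer :: stat
  call get_environment_variable('MY_VAR', val, status=stat)
  if (stat == 0) print *, trim(val)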

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
…the portable equivalents of KMP_AFFINITY are OMP_PROC_BIND and OMP_PLACES. I do not know how many OpenMP implementations support these two options, but Intel and GCC should. > Best, > Jeff > On Wed, Jan 6, 2016 at 1:04 PM, Nick Papior wrote: >> Ok, thanks :) …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
Ok, thanks :) 2016-01-06 22:03 GMT+01:00 Ralph Castain: > Not really - just consistent with the other cmd line options. > On Jan 6, 2016, at 12:58 PM, Nick Papior wrote: > It was just that when I started using map-by I didn't get why it is: > ppr:2 > but > PE=2 …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
…the difference in delimiter is just to simplify parsing - we can "split" the string on colons to separate out the options, and then use "=" to set the value. Nothing particularly significant about the choice. > On Jan 6, 2016, at 12:48 PM, Nick Papior wrote: > You are correct. …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
…seems to be: > ppr:2:socket means 2 processes per socket? And pe=7 means leave 7 processes between them? Is that about right? > Matt > On Wed, Jan 6, 2016 at 3:19 PM, Ralph Castain wrote: >> I believe he wants two procs/socket, so you'd need ppr:2:socket:pe=7 …

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
Ah, yes, my example was for 10 cores per socket, good catch :) 2016-01-06 21:19 GMT+01:00 Ralph Castain: > I believe he wants two procs/socket, so you'd need ppr:2:socket:pe=7 > On Jan 6, 2016, at 12:14 PM, Nick Papior wrote: > I do not think KMP_AFFINITY should…
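The thread's conclusion as a single hedged launch line (a sketch for a 14-core socket, i.e. 2 processes of 7 cores each; ./app is a placeholder):

  mpirun --map-by ppr:2:socket:pe=7 -x OMP_NUM_THREADS=7 --report-bindings ./app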

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Nick Papior
I do not think KMP_AFFINITY should affect anything in Open MPI; it is an MKL env setting? Or am I wrong? Note that these are used in an environment where Open MPI automatically gets the host-file, hence they are not present. With Intel MKL and Open MPI I got the best performance using these, rather…

Re: [OMPI users] Trying to map to socket #0 on each node

2015-12-07 Thread Nick Papior
Couldn't it be that the slot list should be 0,1,2,3? It depends on the setup. You can get some more information about _what it does_ by using --report-bindings (when/if it succeeds). 2015-12-07 16:18 GMT+01:00 Carl Ponder: > On 12/06/2015 11:07 AM, Carl Ponder wrote: > I'm trying to run a m…

Re: [OMPI users] MPI_AllReduce vs MPI_IAllReduce

2015-11-27 Thread Nick Papior
Try doing a variable amount of work for every process; I see non-blocking as a way to speed up communication when processes arrive at the call individually. Please always keep this in the back of your mind when doing this: non-blocking surely has overhead, and if the communication time is low, so will t…
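A minimal overlap sketch (assuming an MPI 3 library; the local work is a stand-in for whatever can proceed before the result is needed):

  program iallreduce_overlap
    use mpi
    implicit none
    integer :: ierr, req, rank
    real(8) :: x, total
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    x = real(rank + 1, 8)
    call MPI_Iallreduce(x, total, 1, MPI_REAL8, MPI_SUM, MPI_COMM_WORLD, req, ierr)
    ! ... do local work that does not need 'total' here ...
    call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
    if (rank == 0) print *, 'sum =', total
    call MPI_Finalize(ierr)
  end program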

Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-19 Thread Nick Papior
Maybe I can chip in: we use OpenMPI 1.10.1 with Intel 2016.1.0.423501 without problems. I could not get 1.10.0 to work; one reason is: http://www.open-mpi.org/community/lists/users/2015/09/27655.php On a side note, please note that if you require ScaLAPACK you may need to follow this approach: …

Re: [OMPI users] Setting bind-to none as default via environment?

2015-11-05 Thread Nick Papior
2015-11-05 18:51 GMT+01:00 Dave Love: > Nick Papior writes: >> This is what I do to successfully get the best performance for my application using OpenMP and OpenMPI: >> (note this is for 8 cores per socket) >> mpirun -x OMP_PROC_BIND=t…

Re: [OMPI users] Setting bind-to none as default via environment?

2015-11-03 Thread Nick Papior
This is what I do to successfully get the best performance for my application using OpenMP and OpenMPI (note this is for 8 cores per socket): mpirun -x OMP_PROC_BIND=true --report-bindings -x OMP_NUM_THREADS=8 --map-by ppr:1:socket:pe=8. It assigns 8 cores per MPI process, separated by sockets,…

Re: [OMPI users] Seg fault in MPI_FINALIZE

2015-10-16 Thread Nick Papior
@Jeff, Kevin: shouldn't Kevin wait for 1.10.1 with the Intel 16 compiler? A bugfix for Intel 16 has been committed with fb49a2d71ed9115be892e8a22643d9a1c069a8f9. (At least I am anxiously awaiting 1.10.1 because I cannot get my builds to complete successfully.) 2015-10-16 19:33 GMT+00:00 Jeff…

Re: [OMPI users] fatal error: openmpi-v2.x-dev-415-g5c9b192 and openmpi-dev-2696-gd579a07

2015-10-15 Thread Nick Papior
2015-10-15 13:51 GMT+02:00 Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de>: > Hi Gilles, > thank you very much for your help to locate the problem. >> In the meantime, and as a workaround, you can make sure CPPFLAGS is not set in your environment (or set it to ""), and then invoke…

Re: [OMPI users] Setting bind-to none as default via environment?

2015-10-01 Thread Nick Papior
You can define default MCA parameters in this file: /etc/openmpi-mca-params.conf 2015-10-01 16:57 GMT+02:00 Grigory Shamov: > Hi All, > A perhaps naive question: is it possible to set 'mpiexec --bind-to none' as a system-wide default in 1.10, like, by setting an OMPI_xxx variable?
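A sketch of such a system-wide default (assuming hwloc_base_binding_policy is the parameter behind --bind-to in the 1.8/1.10 series; ompi_info can confirm the exact name):

  # /etc/openmpi-mca-params.conf
  hwloc_base_binding_policy = none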

Re: [OMPI users] understanding mpi_gather-mpi_gatherv

2015-09-30 Thread Nick Papior
Gather receives messages of _one_ length; hence all arrays have to be of the same length (not exactly; see below). Hence 625 should be 175. See the example on the documentation site: https://www.open-mpi.org/doc/v1.8/man3/MPI_Gather.3.php You should use gatherv for varying message lengths, or use gat…
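A minimal gatherv sketch (hypothetical sizes: each rank contributes rank+1 elements; the counts and displacements only have to be meaningful on the root):

  program gatherv_demo
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, i, n
    integer, allocatable :: sendbuf(:), recvbuf(:), counts(:), displs(:)
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    n = rank + 1                       ! per-rank message length varies
    allocate(sendbuf(n)); sendbuf = rank
    allocate(counts(nprocs), displs(nprocs))
    counts = [(i, i=1,nprocs)]         ! root must know each length
    displs(1) = 0
    do i = 2, nprocs
      displs(i) = displs(i-1) + counts(i-1)
    end do
    allocate(recvbuf(sum(counts)))
    call MPI_Gatherv(sendbuf, n, MPI_INTEGER, recvbuf, counts, displs, &
                     MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    if (rank == 0) print *, recvbuf
    call MPI_Finalize(ierr)
  end program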

Re: [OMPI users] Problem with using MPI in a Python extension

2015-09-17 Thread Nick Papior
Depending on your exact usage and the data contained in the CDF-5 files, I guess netcdf4-python would work for reading the files (if the underlying netcdf library is compiled against pnetcdf). However, this will not immediately yield MPI features. Yet, reading different segments of files could be ma…

Re: [OMPI users] Problem with using MPI in a Python extension

2015-09-17 Thread Nick Papior
FYI, you can also see what they have done in mpi4py to bypass this problem. I would actually highly recommend using mpi4py rather than implementing this from scratch yourself ;) 2015-09-17 15:21 GMT+00:00 Jeff Squyres (jsquyres): > Short version: > The easiest way to do this is to confi…

Re: [OMPI users] difference between OPENMPI e Intel MPI (DATATYPE)

2015-09-03 Thread Nick Papior
…Dear Nick, Dear all, > I use mpi. > I recompile everything, every time. > I do not understand what I shall do. > Thanks again > Diego > On 3 September 2015 at 16:52, Nick Papior wrote: >> When you change environment, tha…

Re: [OMPI users] difference between OPENMPI e Intel MPI (DATATYPE)

2015-09-03 Thread Nick Papior
When you change environment, that is, change between OpenMPI and Intel MPI, or change compiler, it is recommended that you recompile everything. "use mpi" pulls in a module, and you cannot mix modules between compilers/environments; sadly the Fortran specification does not enforce a strict module format, which is why t…

Re: [OMPI users] Passing a rank specific argument to the JVM

2015-07-19 Thread Nick Papior
Wrap the call in a bash script or the like; there are several examples on this mailing list. I am sorry, I am not at my computer so I cannot find them. On 19 Jul 2015 06:34, "Saliya Ekanayake" wrote: > Hi, > I am trying to profile one of our applications and would like each rank to report to a…
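A sketch of such a wrapper (Open MPI exports OMPI_COMM_WORLD_RANK into each process's environment; the java class and the profiler property are hypothetical):

  #!/bin/bash
  # launched as: mpirun -np 4 ./wrap.sh
  exec java -Dprofile.out="rank${OMPI_COMM_WORLD_RANK}.log" MyApp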

Re: [OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
> On Tue, Apr 14, 2015 at 02:41:27PM -0400, Andy Riebs wrote: > Nick, > You may have more luck looking into the OSHMEM layer of Open MPI; SHMEM is designed for one-sided communications. > BR, > Andy

Re: [OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
Sorry, never mind. It seems it has been generalized (found on the wiki). Thanks for the help. 2015-04-14 20:50 GMT+02:00 Nick Papior Andersen: > Thanks Andy! I will discontinue my hunt in openmpi then ;) > Isn't SHMEM related only to shared memory nodes? > Or am I wrong?

Re: [OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
…designed for one-sided communications. > BR, > Andy > On 04/14/2015 02:36 PM, Nick Papior Andersen wrote: > Dear all, > I am trying to implement some features using a one-sided communication scheme. > The problem is that I understand the different one-sided communicati…

[OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
Dear all, I am trying to implement some features using a one-sided communication scheme. The problem is that I understand the different one-sided communication schemes as this (in basic words): MPI_Get) fetches remote window memory into a local memory space; MPI_Get_accumulate) 1. fetches remote window…
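For reference, a hedged sketch of the kind of one-sided atomic this thread circles around (MPI 3's MPI_Fetch_and_op, here incrementing a counter owned by rank 0; whether this is the exact call that was later found is not stated in the snippet):

  program fetch_op
    use mpi
    implicit none
    integer :: ierr, rank, win, one, old, counter
    integer(MPI_ADDRESS_KIND) :: winsize, disp
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    counter = 0
    winsize = 4                     ! bytes of one default integer
    call MPI_Win_create(counter, winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)
    call MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win, ierr)
    one = 1; disp = 0
    ! atomically fetch rank 0's counter and add 1 to it
    call MPI_Fetch_and_op(one, old, MPI_INTEGER, 0, disp, MPI_SUM, win, ierr)
    call MPI_Win_unlock(0, win, ierr)
    print *, 'rank', rank, 'saw counter =', old
    call MPI_Win_free(win, ierr)
    call MPI_Finalize(ierr)
  end program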

Re: [OMPI users] Configuration error with external hwloc

2015-03-18 Thread Nick Papior Andersen
As it says, check the config.log for any error messages. I have not had any problems using external hwloc on my Debian boxes. 2015-03-18 1:30 GMT+00:00 Peter Gottesman: > Hey all, > I am trying to compile Open MPI on a 32bit laptop running Debian wheezy 7.8.0. When I >> ../ompi-master/configu…

Re: [OMPI users] error building BLACS with openmpi 1.8.4 and intel 2015

2015-03-06 Thread Nick Papior Andersen
…if any of these required components is not available, then the user must build the needed component before proceeding with the ScaLAPACK installation." > Thank you, > On Fri, Mar 6, 2015 at 9:36 AM, Nick Papior Andersen wrote: >> Do you plan to use BLACS for anyth…

Re: [OMPI users] error building BLACS with openmpi 1.8.4 and intel 2015

2015-03-06 Thread Nick Papior Andersen
Do you plan to use BLACS for anything other than ScaLAPACK? Otherwise I would highly recommend just compiling ScaLAPACK 2.0.2, which ships with BLACS :) 2015-03-06 15:31 GMT+01:00 Irena Johnson: > Hello, > I am trying to build BLACS for openmpi-1.8.4 and intel/2015.u1 > The compilati…

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
> To: us...@open-mpi.org > Subject: Re: [OMPI users] configuring a code with MPI/OPENMPI > I also concur with Jeff about asking software-specific questions at the software's site; abinit already has a pretty active forum: http://forum.abinit.org/ > So any questions can also be…

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
I also concur with Jeff about asking software-specific questions at the software's site; abinit already has a pretty active forum: http://forum.abinit.org/ So any questions can also be directed there. 2015-02-03 19:20 GMT+00:00 Nick Papior Andersen: > 2015-02-03 19:12 GMT+00:00 Eli…

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
2015-02-03 19:12 GMT+00:00 Elio Physics: > Hello, > thanks for your help. I have tried: > ./configure --with-mpi-prefix=/usr FC=ifort CC=icc > But I still get the same error. Mind you, if I compile it serially, that is ./configure FC=ifort CC=icc, it works perfectly fine. > We do…

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
First try to correct your compilation by using the Intel C compiler AND the Intel Fortran compiler; you should not mix compilers: CC=icc FC=ifort. Otherwise the config.log is going to be necessary to debug it further. PS: you could also try to convince your cluster administrator to provide a more recent com…

Re: [OMPI users] vector type

2015-02-01 Thread Nick Papior Andersen
Because the compiler does not know that you want to send the entire sub-matrix. Passing non-contiguous arrays to a function is, at best, dangerous; do not do that unless you know the function can handle it. Do AA(1,1,2) and then it works (in principle you then pass the starting memory location a…
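A sketch of the pattern being described (hypothetical 4x4x3 array; the slab AA(:,:,2) is contiguous in column-major storage, so passing its first element hands MPI a valid start address):

  ! inside a program unit with: use mpi
  real(8) :: AA(4,4,3)
  integer :: ierr
  ! sends the 16 contiguous elements of AA(:,:,2),
  ! starting at the address of AA(1,1,2)
  call MPI_Send(AA(1,1,2), 16, MPI_REAL8, 1, 0, MPI_COMM_WORLD, ierr)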

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-19 Thread Nick Papior Andersen
I have been following this with great interest; I will create a PR for my branch then. To be clear, I already did the OMPI change before this discussion came up, so this will be the one; however, the change to other naming schemes is easy. 2014-12-19 7:48 GMT+00:00 George Bosilca: > On Thu, D…

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-02 Thread Nick Papior Andersen
Just to drop in, I can and will provide whatever interface you want (if you want my contribution). However, just to help my ignorance: 1. Adam Moody's method still requires a way to create a distinguished string per processor, i.e. the split is entirely done via the string/color, which then needs…

Re: [OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-28 Thread Nick Papior Andersen
…need any more information about my setup to debug this, please let me know. Or am I completely missing something? I tried looking into opal/mca/hwloc/hwloc.h, but I have no idea whether they are correct or not. If you think it worthwhile, I can make a pull request at its current stage? 2014-11-27 13:22 GMT…

Re: [OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-27 Thread Nick Papior Andersen
No worries :) 2014-11-27 14:20 GMT+01:00 Jeff Squyres (jsquyres): > Many thanks! > Note that it's a holiday week here in the US -- I'm only on for a short time here this morning; I'll likely disappear again shortly until next week. :-)

Re: [OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-27 Thread Nick Papior Andersen
Sure, I will make the changes and commit them to make them OMPI-specific. I will forward my problems to the devel list. I will keep you posted. :) 2014-11-27 13:58 GMT+01:00 Jeff Squyres (jsquyres): > On Nov 26, 2014, at 2:08 PM, Nick Papior Andersen wrote: > Here i…

[OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-26 Thread Nick Papior Andersen
Dear Ralph (and all ;)), in regard to these posts, and since you added it to your todo list: I wanted to do something similar and implemented a "quick fix". I wanted to create a communicator per node, and then create a window to allocate an array in shared memory; however, I came up short in the c…
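A sketch of the per-node communicator plus shared-memory window being described (MPI 3 calls; the 100-element array is a placeholder size):

  program node_shared
    use mpi
    use, intrinsic :: iso_c_binding
    implicit none
    integer :: ierr, noderank, nodecomm, win, dispunit
    integer(MPI_ADDRESS_KIND) :: winsize
    type(c_ptr) :: baseptr
    real(8), pointer :: arr(:)
    call MPI_Init(ierr)
    ! one communicator per node
    call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                             MPI_INFO_NULL, nodecomm, ierr)
    call MPI_Comm_rank(nodecomm, noderank, ierr)
    ! rank 0 on each node allocates 100 doubles, the rest join with size 0
    winsize = 0
    if (noderank == 0) winsize = 100 * 8
    call MPI_Win_allocate_shared(winsize, 8, MPI_INFO_NULL, nodecomm, &
                                 baseptr, win, ierr)
    ! everyone maps node-rank 0's segment into a Fortran pointer
    call MPI_Win_shared_query(win, 0, winsize, dispunit, baseptr, ierr)
    call c_f_pointer(baseptr, arr, [100])
    if (noderank == 0) arr = 0.0_8
    call MPI_Win_fence(0, win, ierr)
    call MPI_Win_free(win, ierr)
    call MPI_Finalize(ierr)
  end program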

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-07 Thread Nick Papior Andersen
You should redo it in terms of George's suggestion; that way you also circumvent the "manual" alignment of data. George's method is the best generic way of doing it. As for the -r8 thing, just do not use it :) And check the interface of the routines used to see why MPIstatus is used.

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
It is attached to the previous mail. 2014-10-03 16:47 GMT+00:00 Diego Avesani: > Dear N., > thanks for the explanation. > Really, really sorry, but I am not able to see your example. Where is it? > Thanks again > Diego > On 3 October 2014…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
…right? > What do you suggest as the next step? ??? The example I sent you worked perfectly. Good luck! > I could create a type variable and try to send it from one processor to another with MPI_SEND and MPI_RECV? > Again thanks > Diego > On 3 October 2014 18:04,…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
Dear Diego, instead of instantly going about using Cartesian communicators, you should try to create a small test case, something like this: I have successfully run this small snippet on my machine. As I state in the source, the culprit was the integer address size; it is inherently of type lon…
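A hedged sketch of the address-size point (field names echo the thread's TYPES/nBLOCKS values but are otherwise hypothetical; the displacements must be integer(MPI_ADDRESS_KIND), not default integer):

  ! inside a program unit with: use mpi
  type :: particle
    sequence
    integer :: ik(2)
    real(8) :: rp(2), qq(4)
  end type
  type(particle) :: dummy
  integer :: ierr, newtype, blocks(3), types(3)
  integer(MPI_ADDRESS_KIND) :: disp(3)
  types  = [MPI_INTEGER, MPI_DOUBLE_PRECISION, MPI_DOUBLE_PRECISION]
  blocks = [2, 2, 4]
  call MPI_Get_address(dummy%ik, disp(1), ierr)
  call MPI_Get_address(dummy%rp, disp(2), ierr)
  call MPI_Get_address(dummy%qq, disp(3), ierr)
  disp = disp - disp(1)             ! displacements relative to the struct start
  call MPI_Type_create_struct(3, blocks, disp, types, newtype, ierr)
  call MPI_Type_commit(newtype, ierr)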

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
…as the last thing. Not second to last, really _the_ last thing. :) I hope I made my point clear; if not, I am at a loss... :) > On 3 October 2014 17:03, Nick Papior Andersen wrote: >> selected_real_kind > Diego…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
Might I chip in and ask "why in the name of Fortran are you using -r8"? It seems like you do not really need it; rather, it is a convenience flag for you (so that you have to type less?). Again, as I stated in my previous mail, I would never do that (and would discourage the use of it for almost a…
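The alternative being advocated, as a short sketch (declare the kind explicitly with selected_real_kind instead of flipping default reals with a compiler flag):

  integer, parameter :: dp = selected_real_kind(15, 307)   ! double precision
  real(dp) :: x
  x = 1.0_dp    ! the literal carries the kind too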

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
…and potentially your MPI job) > Do you know something about these errors? > Thanks again > Diego > On 3 October 2014 15:29, Nick Papior Andersen wrote: >> Yes, I guess this is correct. Testing is easy! Try testing! >> As I stated, I do…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> TYPES(1)=MPI_INTEGER > TYPES(2)=MPI_DOUBLE_PRECISION > TYPES(3)=MPI_DOUBLE_PRECISION > nBLOCKS(1)=2 > nBLOCKS(2)=2 > nBLOCKS(3)=4 > Am I wrong? Have I understood correctly? > Really, really thanks > Diego > On 3 October 2014 15:10, Nick Papio…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
…ke)+sizeof(dummy%RP(1))+sizeof(dummy%RP(2)) > CALL MPI_TYPE_CREATE_STRUCT(3,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr) > This is how I compile: > mpif90 -r8 *.f90 No, that was not what you said! You said you compiled it using: mpif90 -r8 -i8 *.f90 …

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
…MPI_ERR_OTHER: known error not in list > [diedroLap:12267] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > [diedroLap:12267] *** and potentially your MPI job) > What can I do? > Thanks a lot > On 3 October 2014 08:15, Nick Papior…

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
If misalignment is the case then adding "sequence" to the data type might help. So:

  type :: mytype   ! name is a placeholder
    sequence
    integer :: ...
    real :: ...
  end type

Note that you cannot use sequence on types with allocatables and pointers, for obvious reasons. 2014-10-03 0:39 GMT+00:00 Kawashima, Takahiro: > Hi Die…

Re: [OMPI users] About debugging and asynchronous communication

2014-09-19 Thread Nick Papior Andersen
…d of 445) and the process exits abnormally. Has anyone had a similar experience? >>> On Thu, Sep 18, 2014 at 10:07 PM, XingFENG wrote: >>> Thank you for your reply! I am still working on my code…

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
On Thu, Sep 18, 2014 at 10:07 PM, XingFENG wrote: >> Thank you for your reply! I am still working on my codes. I will update the post when I fix the bugs. >> On Thu, Sep 18, 2014 at 9:48 PM, Nick Papior Andersen <nickpap...@gmail.com> wrote: >>…

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
2014-09-18 13:39 GMT+02:00 Tobias Kloeffel: > ok, I have to wait until tomorrow, they have some problems with the network... > On 09/18/2014 01:27 PM, Nick Papior Andersen wrote: > I am not sure whether test will cover this... You should check it...

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
…test to see if it works... 2014-09-18 13:20 GMT+02:00 XingFENG: > Thanks very much for your reply! > To Sir Jeff Squyres: > I think it fails due to truncation errors. I am now logging information about each send and receive to find out the reason. > To…

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
To complement Jeff, I would add that using asynchronous messages REQUIRES that you wait (MPI_Wait) on all messages at some point. Even though this might not seem obvious, it is due to memory allocations "behind the scenes" which are only de-allocated upon completion through a wait call. …
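A minimal sketch of the rule (every MPI_Isend/MPI_Irecv request is eventually completed by a wait; the ring partner and counts are placeholders):

  program isend_wait
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, other, sbuf, rbuf
    integer :: reqs(2)
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    other = mod(rank + 1, nprocs)       ! ring partner
    sbuf = rank
    call MPI_Irecv(rbuf, 1, MPI_INTEGER, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, reqs(1), ierr)
    call MPI_Isend(sbuf, 1, MPI_INTEGER, other, 0, MPI_COMM_WORLD, reqs(2), ierr)
    ! ... overlap computation here ...
    call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierr)  ! completes and frees internal resources
    call MPI_Finalize(ierr)
  end program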

Re: [OMPI users] removed maffinity, paffinity in 1.7+

2014-09-15 Thread Nick Papior Andersen
…efault, level: 9 dev/all, type: string) > Comma-separated list of ranges specifying logical cpus allocated to this job [default: none] > MCA hwloc: parameter "hwloc_base_use_hwthreads…

[OMPI users] removed maffinity, paffinity in 1.7+

2014-09-15 Thread Nick Papior Andersen
Dear all, the maffinity and paffinity parameters have been removed since 1.7. For the uninitiated: is this because they have been absorbed by the code, so that it automatically decides on these values? For instance, I have always been using paffinity_alone=1 for single-node jobs with entire occupatio…