Re: [OMPI users] Error using rankfile to bind multiple cores on the same node for threaded OpenMPI application

2022-02-02 Thread Christoph Niethammer via users
The linked pastebin includes the following version information:

[1,0]:package:Open MPI spackapps@eu-c7-042-03 Distribution
[1,0]:ompi:version:full:4.0.2
[1,0]:ompi:version:repo:v4.0.2
[1,0]:ompi:version:release_date:Oct 07, 2019
[1,0]:orte:version:full:4.0.2
[1,0]:orte:version:repo:v4.0.2
[1,0]:orte:version:release_date:Oct 07, 2019
[1,0]:opal:version:full:4.0.2
[1,0]:opal:version:repo:v4.0.2
[1,0]:opal:version:release_date:Oct 07, 2019
[1,0]:mpi-api:version:full:3.1.0
[1,0]:ident:4.0.2

Best
Christoph

- Original Message -
From: "Open MPI Users" 
To: "Open MPI Users" 
Cc: "Ralph Castain" 
Sent: Thursday, 3 February, 2022 00:22:30
Subject: Re: [OMPI users] Error using rankfile to bind multiple cores on the 
same node for threaded OpenMPI application

Errr... what version of OMPI are you using?

> On Feb 2, 2022, at 3:03 PM, David Perozzi via users 
>  wrote:
> 
> Hello,
> 
> I'm trying to run a code implemented with Open MPI and OpenMP (for threading)
> on a large cluster that uses LSF for job scheduling and dispatch. The problem
> with LSF is that it is not very straightforward to allocate and bind the right
> number of threads to an MPI rank inside a single node. Therefore, I have to
> create a rankfile myself as soon as the (a priori unknown) resources are
> allocated.
> 
> So, after my job gets dispatched, I run:
> 
> mpirun -n "$nslots" -display-allocation -nooversubscribe --map-by core:PE=1 
> --bind-to core mpi_allocation/show_numactl.sh 
> >mpi_allocation/allocation_files/allocation.txt
> 
> where show_numactl.sh consists of just one line:
> 
> { hostname; numactl --show; } | sed ':a;N;s/\n/ /;ba'
> 
> If I ask for 16 slots, in blocks of 4 (i.e., bsub -n 16 -R "span[block=4]"), 
> I get something like:
> 
> ==   ALLOCATED NODES   ==
> eu-g1-006-1: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
> eu-g1-009-2: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
> eu-g1-002-3: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
> eu-g1-005-1: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
> =
> eu-g1-006-1 policy: default preferred node: current physcpubind: 16  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-006-1 policy: default preferred node: current physcpubind: 24  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-006-1 policy: default preferred node: current physcpubind: 32  cpubind: 2  nodebind: 2  membind: 0 1 2 3 4 5 6 7
> eu-g1-002-3 policy: default preferred node: current physcpubind: 21  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-002-3 policy: default preferred node: current physcpubind: 22  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-009-2 policy: default preferred node: current physcpubind: 0  cpubind: 0  nodebind: 0  membind: 0 1 2 3 4 5 6 7
> eu-g1-009-2 policy: default preferred node: current physcpubind: 1  cpubind: 0  nodebind: 0  membind: 0 1 2 3 4 5 6 7
> eu-g1-009-2 policy: default preferred node: current physcpubind: 2  cpubind: 0  nodebind: 0  membind: 0 1 2 3 4 5 6 7
> eu-g1-002-3 policy: default preferred node: current physcpubind: 19  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-002-3 policy: default preferred node: current physcpubind: 23  cpubind: 1  nodebind: 1  membind: 0 1 2 3 4 5 6 7
> eu-g1-006-1 policy: default preferred node: current physcpubind: 52  cpubind: 3  nodebind: 3  membind: 0 1 2 3 4 5 6 7
> eu-g1-009-2 policy: default preferred node: current physcpubind: 3  cpubind: 0  nodebind: 0  membind: 0 1 2 3 4 5 6 7
> eu-g1-005-1 policy: default preferred node: current physcpubind: 90  cpubind: 5  nodebind: 5  membind: 0 1 2 3 4 5 6 7
> eu-g1-005-1 policy: default preferred node: current physcpubind: 91  cpubind: 5  nodebind: 5  membind: 0 1 2 3 4 5 6 7
> eu-g1-005-1 policy: default preferred node: current physcpubind: 94  cpubind: 5  nodebind: 5  membind: 0 1 2 3 4 5 6 7
> eu-g1-005-1 policy: default preferred node: current physcpubind: 95  cpubind: 5  nodebind: 5  membind: 0 1 2 3 4 5 6 7
> 
> After that, I parse this allocation file in Python and create a hostfile and
> a rankfile.
> 
> The hostfile reads:
> 
> eu-g1-006-1
> eu-g1-009-2
> eu-g1-002-3
> eu-g1-005-1
> 
> The rankfile:
> 
> rank 0=eu-g1-006-1 slot=16,24,32,52
> rank 1=eu-g1-009-2 slot=0,1,2,3
> rank 2=eu-g1-002-3 slot=21,22,19,23
> rank 3=eu-g1-005-1 slot=90,91,94,95
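
As an illustration, a minimal Python sketch of this parsing step could look like the following (hypothetical: this is not the poster's script; the rankfile path is taken from the mpirun command below, the hostfile path is made up, and the field positions assume the numactl output format shown above):

# Group the physcpubind value of every numactl line by host and emit one
# hostfile line plus one rankfile entry per host (one rank per host, as above).
cpus_per_host = {}
host_order = []

with open("mpi_allocation/allocation_files/allocation.txt") as f:
    for line in f:
        fields = line.split()
        if "physcpubind:" not in fields:
            continue  # skip the "ALLOCATED NODES" summary block
        host = fields[0]
        cpu = fields[fields.index("physcpubind:") + 1]
        if host not in cpus_per_host:
            cpus_per_host[host] = []
            host_order.append(host)
        cpus_per_host[host].append(cpu)

with open("mpi_allocation/hostfiles/hostfile", "w") as hostfile, \
     open("mpi_allocation/hostfiles/rankfile", "w") as rankfile:
    for rank, host in enumerate(host_order):
        hostfile.write(host + "\n")
        rankfile.write("rank %d=%s slot=%s\n"
                       % (rank, host, ",".join(cpus_per_host[host])))

Since the rankfile produced this way lists physical CPU ids, it relies on the --mca rmaps_rank_file_physical 1 option used in the mpirun command below.
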
> 
> Following OpenMPI's manpages and FAQs, I then run my application using
> 
> mpirun -n "$nmpiproc" --rankfile mpi_allocation/hostfiles/rankfile --mca 
> rmaps_rank_file_physical 1 ./build/"$executable_name" true "$input_file"
> 
> where the bash variables are passed in directly on the bsub command line (I 
> basically run bsub -n 16 -R "span[block=4]" "my_script.sh num_slots 
> num_thread_per_rank executable_name input_file").
> 
> 
> Now, this procedure sometimes works just 

Re: [OMPI users] weird mpi error report: Type mismatch between arguments

2021-02-17 Thread Christoph Niethammer via users
Hello Diego,

The problem is the old (hopefully soon to be deprecated) MPI Fortran interface 
(mpif.h), which now hits you because GCC 10 introduced stricter argument checks.
With mpif.h there is no function overloading based on argument types as there is, 
e.g., in C++, so the compiler sees two calls with mismatching argument types here.

The best way to solve this is to update your application to the newer mpi_f08 module.
I know this may mean a lot of work and finding bugs along the way. ;)
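
For illustration, here is a minimal sketch (not your actual code) of the two broadcasts from the snippet quoted below, rewritten against the mpi_f08 module; it also uses the Fortran datatype constants MPI_DOUBLE_PRECISION and MPI_INTEGER in place of the C-side MPI_DOUBLE and MPI_INT:

program bcast_f08_sketch
   use mpi_f08        ! modern interface with per-call argument checking
   implicit none
   integer :: N, ierr
   double precision :: config(1000*100)

   call MPI_Init(ierr)
   N = 0
   config = 0.0d0
   ! With mpi_f08 each call is type checked on its own, so broadcasting a
   ! REAL(8) array here and an INTEGER scalar below is perfectly legal.
   call MPI_Bcast(config, 1000*100, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
   call MPI_Bcast(N, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
   call MPI_Finalize(ierr)
end program bcast_f08_sketch
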

Best regards
Christoph Niethammer


- Original Message -
From: "Open MPI Users" 
To: "Open MPI Users" 
Cc: "Luis Diego Pinto" 
Sent: Wednesday, 17 February, 2021 13:20:18
Subject: [OMPI users] weird mpi error report: Type mismatch between arguments

Dear OPENMPI users,

I'd like to report a strange issue that arose right after installing a new, 
up-to-date version of Linux (Kubuntu 20.10 with GCC 10).

I am developing software to be run on distributed-memory machines with 
Open MPI version 4.0.3.

The error seems as simple as it is weird. Each time I compile the program, 
the following problem is reported:


692 | call  MPI_BCAST (config,1000*100,MPI_DOUBLE,0,MPI_COMM_WORLD,ierr)
    |                  2
693 | call MPI_BCAST(N,1,MPI_INT,0,MPI_COMM_WORLD,ierr2)
    |                1
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/REAL(8)).



If I compile the same program on an older machine (with an older version of 
GCC, version 9), I don't get any error like this.

Moreover, the code doesn't look like a syntax error or a logical error, since it 
is just two consecutive, trivial broadcast operations, each independent of the 
other. The first operation broadcasts an array named "config" of 100,000 
(1000*100) double elements, while the second operation broadcasts a single 
integer variable named "N".

It seems that the compiler finds some strange link between lines 692 and 693 
and believes that I should make the two variables "config" and "N" consistent, 
while they are clearly independent and are not required to be of the same type.

Searching the web, I read that this issue could be ascribed to the new version 
of GCC (version 10), which contains some well-known bugs.

I have even tried to compile with the flag -fallow-argument-mismatch, and indeed 
it fixed some similar reported problems, but this specific one remains.

What do you think?

Thank you very much.

Best Regards


Diego


Re: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture

2020-08-20 Thread Christoph Niethammer via users
Hello Carlo,

If you execute multiple mpirun commands, they will not know about each other's 
resource bindings.
E.g., if you bind to cores, each mpirun will start assigning from the same 
first core again.
This then results in oversubscription of the cores, which slows down your 
programs, as you noticed.


You can use "--cpu-list" together with "--bind-to cpu-list:ordered".
So if you start all your simulations from a single script, this would look like:

mpirun -n 6 --cpu-list "$(seq -s, 0 5)" --bind-to cpu-list:ordered  $app
mpirun -n 6 --cpu-list "$(seq -s, 6 11)" --bind-to cpu-list:ordered  $app
...
mpirun -n 6 --cpu-list "$(seq -s, 42 47)" --bind-to cpu-list:ordered  $app


Best
Christoph


- Original Message -
From: "Open MPI Users" 
To: "Open MPI Users" 
Cc: "Carlo Nervi" 
Sent: Thursday, 20 August, 2020 12:17:21
Subject: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture

Dear OMPI community,
I'm a simple end-user with no particular experience.
I compile quantum chemical programs and use them in parallel.

My system is a 4-socket, 12-cores-per-socket Opteron 6168 system, for a total
of 48 cores and 64 GB of RAM. It has 8 NUMA nodes:

openmpi $ hwloc-info
depth 0:            1 Machine (type #0)
 depth 1:           4 Package (type #1)
  depth 2:          8 L3Cache (type #6)
   depth 3:         48 L2Cache (type #5)
    depth 4:        48 L1dCache (type #4)
     depth 5:       48 L1iCache (type #9)
      depth 6:      48 Core (type #2)
       depth 7:     48 PU (type #3)
Special depth -3:   8 NUMANode (type #13)
Special depth -4:   3 Bridge (type #14)
Special depth -5:   5 PCIDev (type #15)
Special depth -6:   5 OSDev (type #16)

lstopo:

openmpi $ lstopo
Machine (63GB total)
  Package L#0
    L3 L#0 (5118KB)
      NUMANode L#0 (P#0 7971MB)
      L2 L#0 (512KB) + L1d L#0 (64KB) + L1i L#0 (64KB) + Core L#0 + PU L#0 (P#0)
      L2 L#1 (512KB) + L1d L#1 (64KB) + L1i L#1 (64KB) + Core L#1 + PU L#1 (P#1)
      L2 L#2 (512KB) + L1d L#2 (64KB) + L1i L#2 (64KB) + Core L#2 + PU L#2 (P#2)
      L2 L#3 (512KB) + L1d L#3 (64KB) + L1i L#3 (64KB) + Core L#3 + PU L#3 (P#3)
      L2 L#4 (512KB) + L1d L#4 (64KB) + L1i L#4 (64KB) + Core L#4 + PU L#4 (P#4)
      L2 L#5 (512KB) + L1d L#5 (64KB) + L1i L#5 (64KB) + Core L#5 + PU L#5 (P#5)
    HostBridge
      PCIBridge
        PCI 02:00.0 (Ethernet)
          Net "enp2s0f0"
        PCI 02:00.1 (Ethernet)
          Net "enp2s0f1"
      PCI 00:11.0 (RAID)
        Block(Disk) "sdb"
        Block(Disk) "sdc"
        Block(Disk) "sda"
      PCI 00:14.1 (IDE)
      PCIBridge
        PCI 01:04.0 (VGA)
    L3 L#1 (5118KB)
      NUMANode L#1 (P#1 8063MB)
      L2 L#6 (512KB) + L1d L#6 (64KB) + L1i L#6 (64KB) + Core L#6 + PU L#6 (P#6)
      L2 L#7 (512KB) + L1d L#7 (64KB) + L1i L#7 (64KB) + Core L#7 + PU L#7 (P#7)
      L2 L#8 (512KB) + L1d L#8 (64KB) + L1i L#8 (64KB) + Core L#8 + PU L#8 (P#8)
      L2 L#9 (512KB) + L1d L#9 (64KB) + L1i L#9 (64KB) + Core L#9 + PU L#9 (P#9)
      L2 L#10 (512KB) + L1d L#10 (64KB) + L1i L#10 (64KB) + Core L#10 + PU L#10 (P#10)
      L2 L#11 (512KB) + L1d L#11 (64KB) + L1i L#11 (64KB) + Core L#11 + PU L#11 (P#11)
  Package L#1
    L3 L#2 (5118KB)
      NUMANode L#2 (P#2 8063MB)
      L2 L#12 (512KB) + L1d L#12 (64KB) + L1i L#12 (64KB) + Core L#12 + PU L#12 (P#12)
      L2 L#13 (512KB) + L1d L#13 (64KB) + L1i L#13 (64KB) + Core L#13 + PU L#13 (P#13)
      L2 L#14 (512KB) + L1d L#14 (64KB) + L1i L#14 (64KB) + Core L#14 + PU L#14 (P#14)
      L2 L#15 (512KB) + L1d L#15 (64KB) + L1i L#15 (64KB) + Core L#15 + PU L#15 (P#15)
      L2 L#16 (512KB) + L1d L#16 (64KB) + L1i L#16 (64KB) + Core L#16 + PU L#16 (P#16)
      L2 L#17 (512KB) + L1d L#17 (64KB) + L1i L#17 (64KB) + Core L#17 + PU L#17 (P#17)
    L3 L#3 (5118KB)
      NUMANode L#3 (P#3 8063MB)
      L2 L#18 (512KB) + L1d L#18 (64KB) + L1i L#18 (64KB) + Core L#18 + PU L#18 (P#18)
      L2 L#19 (512KB) + L1d L#19 (64KB) + L1i L#19 (64KB) + Core L#19 + PU L#19 (P#19)
      L2 L#20 (512KB) + L1d L#20 (64KB) + L1i L#20 (64KB) + Core L#20 + PU L#20 (P#20)
      L2 L#21 (512KB) + L1d L#21 (64KB) + L1i L#21 (64KB) + Core L#21 + PU L#21 (P#21)
      L2 L#22 (512KB) + L1d L#22 (64KB) + L1i L#22 (64KB) + Core L#22 + PU L#22 (P#22)
      L2 L#23 (512KB) + L1d L#23 (64KB) + L1i L#23 (64KB) + Core L#23 + PU L#23 (P#23)
  Package L#2
    L3 L#4 (5118KB)
      NUMANode L#4 (P#4 8063MB)
      L2 L#24 (512KB) + L1d L#24 (64KB) + L1i L#24 (64KB) + Core L#24 + PU L#24 (P#24)
      L2 L#25 (512KB) + L1d L#25 (64KB) + L1i L#25 (64KB) + Core L#25 + PU L#25 (P#25)
      L2 L#26 (512KB) + L1d L#26 (64KB) + L1i L#26 (64KB) + Core L#26 + PU L#26 (P#26)
      L2 L#27 (512KB) + L1d L#27 (64KB) + L1i L#27 (64KB) + Core L#27 + PU L#27 (P#27)
      L2 L#28 (512KB) + L1d L#28 (64KB) + L1i L#28 (64KB) + Core L#28 + PU L#28 (P#28)
      L2 L#29 (512KB) + L1d L#29 (64KB) + L1i L#29 (64KB) + Core L#29 + PU L#29 (P#29)
    L3 L#5 (5118KB)
      NUMANode L#5 (P#5 8063MB)
      L2 L#30 (512KB) + L1d L#30 

Re: [OMPI users] MPI test suite

2020-07-24 Thread Christoph Niethammer via users
Hi,

MTT is a testing infrastructure to automate building MPI libraries and tests, 
running tests, and collecting test results, but it does not come with MPI test 
suites itself.

Best
Christoph

- Original Message -
From: "Open MPI Users" 
To: "Open MPI Users" 
Cc: "Joseph Schuchart" 
Sent: Friday, 24 July, 2020 09:00:34
Subject: Re: [OMPI users] MPI test suite

You may want to look into MTT: https://github.com/open-mpi/mtt

Cheers
Joseph

On 7/23/20 8:28 PM, Zhang, Junchao via users wrote:
> Hello,
>    Does OMPI have a test suite that can let me validate MPI 
> implementations from other vendors?
> 
>    Thanks
> --Junchao Zhang
> 
> 
>


Re: [OMPI users] MPI test suite

2020-07-24 Thread Christoph Niethammer via users
Hello,

What do you want to test in detail?

If you are interested in testing combinations of datatypes and communicators, 
the mpi_test_suite [1] may be of interest to you.

Best
Christoph Niethammer

[1] https://projects.hlrs.de/projects/mpitestsuite/



- Original Message -
From: "Open MPI Users" 
To: "Open MPI Users" 
Cc: "Zhang, Junchao" 
Sent: Thursday, 23 July, 2020 22:25:18
Subject: Re: [OMPI users] MPI test suite

I know the OSU micro-benchmarks, but they are not an extensive test suite.

Thanks
--Junchao Zhang



> On Jul 23, 2020, at 2:00 PM, Marco Atzeri via users 
>  wrote:
> 
> On 23.07.2020 20:28, Zhang, Junchao via users wrote:
>> Hello,
>>   Does OMPI have a test suite that can let me validate MPI implementations 
>> from other vendors?
>>   Thanks
>> --Junchao Zhang
> 
> Have you considered the OSU Micro-Benchmarks?
> 
> http://mvapich.cse.ohio-state.edu/benchmarks/