Re: [OMPI users] Building OpenMPI on Raspberry Pi 2

2015-05-30 Thread Jeff Layton

Jeff,

The error happens during the configure step before compiling.
However, I ran the make command as you indicated and I'm
attaching the output to this email.

Thanks!

Jeff



Can you send the output of "make V=1"?

That will show the exact command line that is being used to build that file.



On May 29, 2015, at 2:17 PM, Jeff Layton  wrote:

George,

I changed my configure command to be:

./configure CCASFLAGS=-march=native

and I get an error while running configure:

...
*** Assembler
checking dependency style of gcc -std=gnu99... gcc3
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking for fgrep... /bin/grep -F
checking if need to remove -g from CCASFLAGS... no
checking whether to enable smp locks... yes
checking if .proc/endp is needed... no
checking directive for setting text section... .text
checking directive for exporting symbols... .globl
checking for objdump... objdump
checking if .note.GNU-stack is needed... yes
checking suffix for labels... :
checking prefix for global symbol labels... none
configure: error: Could not determine global symbol label prefix


Not sure why configure failed at this point.

Thanks!

Jeff



As you are not cross-compiling, I would expect gcc to use the right assembly by
default. What happens when you force native mode (-march=native)?

   George.


On Fri, May 29, 2015 at 10:09 AM, Jeff Layton  wrote:
On 05/29/2015 09:35 AM, Jeff Layton wrote:

Gilles,

oops - yes, CFLAGS. But I also saw this posting:

https://www.open-mpi.org/community/lists/users/2013/01/2.php

where CCASFLAGS is used (I assume because it applies to the assembly files). I'm trying
this flag when I configure Open MPI.

I tried using the CCASFLAGS flag from the above link and it didn't work. The error
now reads:

Making all in mca/memory/linux
make[2]: Entering directory '/work/pi/src/openmpi-1.8.5/opal/mca/memory/linux'
   CC   memory_linux_component.lo
   CC   memory_linux_ptmalloc2.lo
   CC   memory_linux_munmap.lo
   CC   malloc.lo
/tmp/cc7g4mWi.s: Assembler messages:
/tmp/cc7g4mWi.s:948: Error: selected processor does not support ARM mode `dmb'
Makefile:1694: recipe for target 'malloc.lo' failed
make[2]: *** [malloc.lo] Error 1
make[2]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal/mca/memory/linux'
Makefile:2149: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal'
Makefile:1698: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1


I used the configure flag CCASFLAGS=-march=armv7-a

Not sure if that is correct or not. I'm guessing I'm using the wrong
architecture for the Pi 2. Suggestions?

Thanks!

Jeff
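
For reference, the -march value has to match what the CPU actually supports. A minimal, hypothetical helper (my own sketch, not from this thread) that suggests a value from the kernel's machine string:

```shell
#!/bin/sh
# Hypothetical helper (not from the thread): suggest an -march value for
# CCASFLAGS based on `uname -m`. The `dmb` barrier is an ARMv7 instruction
# and `ldrexd`/`strexd` need ARMv6K or later, so the Pi 2 (Cortex-A7,
# which reports armv7l) wants at least -march=armv7-a.
case "$(uname -m)" in
  armv7l|armv8l) march=armv7-a ;;
  armv6l)        march=armv6k ;;   # original Pi; dmb is still unavailable here
  *)             march=native ;;   # let gcc decide on other hosts
esac
echo "./configure CCASFLAGS=-march=${march}"
```

On a Pi 2 this prints a configure line with -march=armv7-a, the flag tried above; if the assembler still rejects dmb with that flag, the assembler itself (not just gcc) may be defaulting to an older architecture.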




Thanks!

Jeff



Jeff,

shall I assume you made a typo and wrote CCFLAGS instead of CFLAGS ?

also, can you double check the flags are correctly passed to the assembler with
cd opal/asm
make -n atomic-asm.lo

Cheers,

Gilles

On Friday, May 29, 2015, Jeff Layton  wrote:
Good morning,

I'm building OpenMPI from source on a Raspberry Pi 2 and
I've hit an error. The error is:

make[2]: Entering directory '/work/pi/src/openmpi-1.8.5/opal/asm'
   CPPAS    atomic-asm.lo
atomic-asm.S: Assembler messages:
atomic-asm.S:7: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:15: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:23: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:55: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:70: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:86: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
atomic-asm.S:91: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
atomic-asm.S:107: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
atomic-asm.S:112: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
atomic-asm.S:115: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:130: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
atomic-asm.S:135: Error: selected processor does not support ARM mode `dmb'
atomic-asm.S:136: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
Makefile:1608: recipe for target 'atomic-asm.lo' failed
make[2]: *** [atomic-asm.lo] Error 1
make[2]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal/asm'
Makefile:2149: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal'
Makefile:1698: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1


I was doing some googling and I saw where I need to specify
CCFLAGS when I run configure but I want to make 100% sure
I have the right arguments. Can anyone help?

Thanks!

Jeff


Re: [OMPI users] mpirun

2015-05-30 Thread Ralph Castain
FWIW: the master might be much faster if the interfaces are the issue, as you
could then add "--mca oob ^tcp" to the cmd line and eliminate some of that
selection time.


> On May 29, 2015, at 11:07 PM, Marco Atzeri  wrote:
> 
> On 5/29/2015 9:53 PM, Walt Brainerd wrote:
>> It behaved this way with the Cygwin version (very recent update)
>> and with 1.8.5 that I built from source.
>> 
>> On Fri, May 29, 2015 at 12:35 PM, Ralph Castain wrote:
>>I assume you mean on cygwin? Or is this an older version that
>>supported native Windows?
>> 
>> > On May 29, 2015, at 12:34 PM, Walt Brainerd wrote:
>> >
>> > On Windows, mpirun appears to take about 5 seconds
>> > to start. I can't try it on Linux. Intel takes no time to
>> > start executing its version.
>> >
>> > Is this expected?
>> >
> 
> I would say yes
> 
> $ time mpirun -n 2 ./hello_c.exe
> Hello, world, I am 0 of 2, (Open MPI v1.8.5, package: Open MPI .., ident: 1.8.5, repo rev: v1.8.4-333-g039fb11, May 05, 2015, 127)
> Hello, world, I am 1 of 2, (Open MPI v1.8.5, package: Open MPI .., ident: 1.8.5, repo rev: v1.8.4-333-g039fb11, May 05, 2015, 127)
> 
> real    0m2.636s
> user    0m1.012s
> sys     0m2.119s
> 
> I presume it is wasting some time enumerating and rejecting the
> available interfaces.
> On Windows they have unusual names:
> 
> $ ./interface-64.exe
> Interfaces (count = 10):
>{EC2ABB5C-42A8-431D-A133-8F4BE0F309AF}
>{9213DBB8-80C6-4316-AA7A-EBF8AD7661E1}
>{8D78D8D9-CFF0-4C4A-AFC3-72CB0E275588}
>{2449A164-BE1A-4393-8168-2A3EDC9AA6F0}
>{97191531-3960-4C35-8D79-1851EF7EE9E0}
>{6F8DABED-A5FE-4E8D-8BA1-02763080D9DC}
>{2A3E9C71-E553-44D0-ABE3-327EB89C3863}
>{9F4F7FD2-5E44-4796-ABE0-0785CF76C11E}
>{C4069E93-6662-44BF-B363-5175A04681D5}
>{846EE342-7039-11DE-9D20-806E6F6E6963}
> 
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/05/26996.php



Re: [OMPI users] Building OpenMPI on Raspberry Pi 2

2015-05-30 Thread Jeff Squyres (jsquyres)
Can you send the output of "make V=1"?

That will show the exact command line that is being used to build that file.
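
For background, the terse "  CC   foo.lo" lines come from automake-style silent rules, and V=1 is the switch that disables them. A toy Makefile (my own illustration, not Open MPI's actual build rules) shows the mechanism:

```shell
# Toy automake-style silent rule: with V=0 the recipe prints a short
# "  CC" tag and hides the command; with V=1 the recipe is left
# unprefixed, so make echoes the full command line before running it.
printf 'V = 0\nQUIET_0 = @echo "  CC   hello.lo"; true\nQUIET_1 =\nhello:\n\t$(QUIET_$(V)) echo gcc -c hello.c -o hello.lo\n' > Makefile.demo
make -f Makefile.demo hello        # prints only:   CC   hello.lo
make -f Makefile.demo hello V=1    # echoes the full command line
rm -f Makefile.demo
```

Real automake wires this through AM_SILENT_RULES, but the effect of V=1 on the output is the same: the exact compiler/assembler invocation, including any CCASFLAGS, becomes visible.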


> On May 29, 2015, at 2:17 PM, Jeff Layton  wrote:
> 
> George,
> 
> I changed my configure command to be:
> 
> ./configure CCASFLAGS=-march=native
> 
> and I get an error while running configure:
> 
> ...
> *** Assembler
> checking dependency style of gcc -std=gnu99... gcc3
> checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
> checking the name lister (/usr/bin/nm -B) interface... BSD nm
> checking for fgrep... /bin/grep -F
> checking if need to remove -g from CCASFLAGS... no
> checking whether to enable smp locks... yes
> checking if .proc/endp is needed... no
> checking directive for setting text section... .text
> checking directive for exporting symbols... .globl
> checking for objdump... objdump
> checking if .note.GNU-stack is needed... yes
> checking suffix for labels... :
> checking prefix for global symbol labels... none
> configure: error: Could not determine global symbol label prefix
> 
> 
> Not sure why configure failed at this point.
> 
> Thanks!
> 
> Jeff
> 
> 
>> As you are not cross-compiling, I would expect gcc to use the right assembly
>> by default. What happens when you force native mode (-march=native)?
>> 
>>   George.
>> 
>> 
>> On Fri, May 29, 2015 at 10:09 AM, Jeff Layton  wrote:
>> On 05/29/2015 09:35 AM, Jeff Layton wrote:
>>> Gilles,
>>> 
>>> oops - yes, CFLAGS. But I also saw this posting:
>>> 
>>> https://www.open-mpi.org/community/lists/users/2013/01/2.php 
>>> 
>>> where CCASFLAGS is used (I assume because it applies to the assembly files). I'm trying
>>> this flag when I configure Open MPI.
>> 
>> I tried using the CCASFLAGS flag from the above link and it didn't work. The error
>> now reads:
>> 
>> Making all in mca/memory/linux
>> make[2]: Entering directory '/work/pi/src/openmpi-1.8.5/opal/mca/memory/linux'
>>   CC   memory_linux_component.lo
>>   CC   memory_linux_ptmalloc2.lo
>>   CC   memory_linux_munmap.lo
>>   CC   malloc.lo
>> /tmp/cc7g4mWi.s: Assembler messages:
>> /tmp/cc7g4mWi.s:948: Error: selected processor does not support ARM mode `dmb'
>> Makefile:1694: recipe for target 'malloc.lo' failed
>> make[2]: *** [malloc.lo] Error 1
>> make[2]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal/mca/memory/linux'
>> Makefile:2149: recipe for target 'all-recursive' failed
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal'
>> Makefile:1698: recipe for target 'all-recursive' failed
>> make: *** [all-recursive] Error 1
>> 
>> 
>> I used the configure flag CCASFLAGS=-march=armv7-a
>> 
>> Not sure if that is correct or not. I'm guessing I'm using the wrong
>> architecture for the Pi 2. Suggestions?
>> 
>> Thanks!
>> 
>> Jeff
>> 
>> 
>> 
>>> 
>>> Thanks!
>>> 
>>> Jeff
>>> 
>>> 
 Jeff,
 
 shall I assume you made a typo and wrote CCFLAGS instead of CFLAGS ?
 
 also, can you double check the flags are correctly passed to the assembler 
 with
 cd opal/asm
 make -n atomic-asm.lo
 
 Cheers,
 
 Gilles
 
 On Friday, May 29, 2015, Jeff Layton  wrote:
 Good morning,
 
 I'm building OpenMPI from source on a Raspberry Pi 2 and
 I've hit an error. The error is:
 
 make[2]: Entering directory '/work/pi/src/openmpi-1.8.5/opal/asm'
  CPPAS    atomic-asm.lo
 atomic-asm.S: Assembler messages:
 atomic-asm.S:7: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:15: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:23: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:55: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:70: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:86: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
 atomic-asm.S:91: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
 atomic-asm.S:107: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
 atomic-asm.S:112: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
 atomic-asm.S:115: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:130: Error: selected processor does not support ARM mode `ldrexd r4,r5,[r0]'
 atomic-asm.S:135: Error: selected processor does not support ARM mode `dmb'
 atomic-asm.S:136: Error: selected processor does not support ARM mode `strexd r1,r6,r7,[r0]'
 Makefile:1608: recipe for target 'atomic-asm.lo' failed
 make[2]: *** [atomic-asm.lo] Error 1
 make[2]: Leaving directory '/work/pi/src/openmpi-1.8.5/opal/asm'
 Makefile:2149: recipe for target 'all-recursive' failed
 make[1]: *** [all-recursive] Error 1
 make[1]: Leaving 

Re: [OMPI users] Openmpi compilation errors

2015-05-30 Thread Jeff Squyres (jsquyres)
On May 29, 2015, at 11:19 AM, Timothy Brown wrote:
> 
> I've built Openmpi 1.8.5 with the following configure line:
> 
> ./configure  \
>  --prefix=/curc/tools/x86_64/rh6/software/openmpi/1.8.5/pgi/15.3 \
>  --with-threads=posix \
>  --enable-mpi-thread-multiple \
>  --with-slurm \
>  --with-pmi=/curc/slurm/slurm/current/
> 
> Please note, I am using the following environment variables:
> CC=pgcc
> FC=pgfortran
> F90=pgf90
> F77=pgf77
> CXX=pgc++

Sweet -- thanks for the info, Tim.

One extremely minor tweak that I would recommend is to do this, instead:

./configure  \
 CC=pgcc \
 FC=pgfortran \
 F90=pgf90 \
 F77=pgf77 \
 CXX=pgc++ \
 --prefix=/curc/tools/x86_64/rh6/software/openmpi/1.8.5/pgi/15.3 \
 --with-threads=posix \
 --enable-mpi-thread-multiple \
 --with-slurm \
 --with-pmi=/curc/slurm/slurm/current/

I.e., set those environment variables on the configure command line instead of 
having them in your environment.

The end effect is exactly the same -- the only difference is that these 
environment variables will be explicitly listed right at the top in the 
config.log file that is generated when you run configure.  It's a very minor 
thing -- just for helping your future self when remembering exactly how your 
copy of Open MPI was built.
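
A stub script (a toy of mine, not autoconf itself) illustrates the point: configure records its exact command line, including any VAR=value arguments, near the top of config.log.

```shell
# Toy stand-in for an autoconf-generated configure: real ones write their
# invocation line (e.g. "$ ./configure CC=pgcc ...") into config.log, so
# command-line assignments are self-documenting. Variables merely exported
# in the environment would not appear in that invocation line.
printf '#!/bin/sh\necho "  $ $0 $@" > config.log\n' > configure.demo
chmod +x configure.demo
./configure.demo CC=pgcc FC=pgfortran CXX=pgc++
cat config.log   # the compiler choices are captured in the log
rm -f configure.demo config.log
```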

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] How can I discover valid values for MCA parameters?

2015-05-30 Thread Jeff Squyres (jsquyres)
On May 29, 2015, at 12:05 PM, Blosch, Edwin L  wrote:
> 
> I’ve tried "ompi_info --param <framework> all", but no matter what string I
> give for the framework, I get no output at all.

Keep in mind that starting sometime in the v1.7 series, ompi_info grew another 
command line option: --level.

Short version: if you just want to see all MCA params, use the "--all" or 
"--level 9" CLI options to ompi_info.  E.g., "ompi_info --all".  I wrote a blog 
entry about this a while ago: 
http://blogs.cisco.com/performance/open-mpi-and-the-mpi-3-mpi_t-interface

More detail:

All MCA parameters now have a "level" associated with them, ranging from 1 to 
9.  The levels correspond to the MPI_T system that was added in MPI-3.0.  The 
levels are:

1. End user, basic
2. End user, detailed
3. End user, advanced
4. Application tuner, basic
5. Application tuner, detailed
6. Application tuner, advanced
7. MPI developer, basic
8. MPI developer, detailed
9. MPI developer, advanced
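
The 1-9 numbering is just a 3-by-3 grid of audience and detail. A small sketch of the mapping (my own illustration of the list above, not an Open MPI API):

```shell
# Map an MPI_T/MCA verbosity level (1-9) to its audience and detail,
# mirroring the grid above: levels 1-3 are end user, 4-6 application
# tuner, 7-9 MPI developer; within each triple: basic/detailed/advanced.
level_audience() {
  case $(( ($1 + 2) / 3 )) in
    1) aud="End user" ;;
    2) aud="Application tuner" ;;
    3) aud="MPI developer" ;;
  esac
  case $(( ($1 - 1) % 3 )) in
    0) det="basic" ;;
    1) det="detailed" ;;
    2) det="advanced" ;;
  esac
  echo "$aud, $det"
}
level_audience 1   # End user, basic
level_audience 5   # Application tuner, detailed
level_audience 9   # MPI developer, advanced
```

Passing "--level N" to ompi_info shows every parameter whose level is N or lower, which is why "--level 9" and "--all" both show everything.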

ompi_info now shows only level 1 parameters by default.

We changed to this policy because of the (justified) complaint from Open MPI 
users that ompi_info provided way too much information for the common user: 
it really created a sense of information overload, and made it incredibly 
difficult to find what you were actually looking for.  Here's the wiki page 
where we outlined guidance for Open MPI developers as to what level they should 
assign to their MCA params:

https://github.com/open-mpi/ompi/wiki/MCAParamLevels

In short: level 1 params are what you need for *correctness* (e.g., selecting 
which network interface(s) to use).  That's what all users will need -- so show 
that by default.  Everything else beyond that is extra -- so it's ok to ask 
users to supply --all or --level X on the ompi_info command line.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] mpirun

2015-05-30 Thread Marco Atzeri

On 5/29/2015 9:53 PM, Walt Brainerd wrote:

It behaved this way with the Cygwin version (very recent update)
and with 1.8.5 that I built from source.

On Fri, May 29, 2015 at 12:35 PM, Ralph Castain wrote:

I assume you mean on cygwin? Or is this an older version that
supported native Windows?

 > On May 29, 2015, at 12:34 PM, Walt Brainerd wrote:
 >
 > On Windows, mpirun appears to take about 5 seconds
 > to start. I can't try it on Linux. Intel takes no time to
 > start executing its version.
 >
 > Is this expected?
 >


I would say yes

$ time mpirun -n 2 ./hello_c.exe
Hello, world, I am 0 of 2, (Open MPI v1.8.5, package: Open MPI .., ident: 1.8.5, repo rev: v1.8.4-333-g039fb11, May 05, 2015, 127)
Hello, world, I am 1 of 2, (Open MPI v1.8.5, package: Open MPI .., ident: 1.8.5, repo rev: v1.8.4-333-g039fb11, May 05, 2015, 127)


real    0m2.636s
user    0m1.012s
sys     0m2.119s

I presume it is wasting some time enumerating and rejecting the
available interfaces.
On Windows they have unusual names:

$ ./interface-64.exe
Interfaces (count = 10):
{EC2ABB5C-42A8-431D-A133-8F4BE0F309AF}
{9213DBB8-80C6-4316-AA7A-EBF8AD7661E1}
{8D78D8D9-CFF0-4C4A-AFC3-72CB0E275588}
{2449A164-BE1A-4393-8168-2A3EDC9AA6F0}
{97191531-3960-4C35-8D79-1851EF7EE9E0}
{6F8DABED-A5FE-4E8D-8BA1-02763080D9DC}
{2A3E9C71-E553-44D0-ABE3-327EB89C3863}
{9F4F7FD2-5E44-4796-ABE0-0785CF76C11E}
{C4069E93-6662-44BF-B363-5175A04681D5}
{846EE342-7039-11DE-9D20-806E6F6E6963}