Re: [OMPI users] Problem building OpenMPI with CUDA 8.0

2016-10-18 Thread Justin Luitjens
After looking into this a bit more, it appears the issue is that I am building
on a head node which does not have the driver installed.  Building on a back-end
(compute) node resolves the issue.  In CUDA 8.0 the NVML stubs can be found in the
toolkit at the following path:  ${CUDA_HOME}/lib64/stubs

For 8.0 I'd suggest updating the configure/make scripts to look for NVML there
and link against the stubs.  That way the build depends only on the toolkit and
not on the driver being installed.
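
In the meantime, one possible workaround on a driver-less node is to point the
linker at the stub library by hand.  A rough, untested sketch (the extra
LDFLAGS/LIBS are illustrative, not an official Open MPI fix):

  ./configure --prefix=$PREFIXPATH --with-cuda=$CUDA_HOME \
      LDFLAGS="-L${CUDA_HOME}/lib64/stubs" LIBS="-lnvidia-ml" && \
  make && sudo make install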

Thanks,
Justin

From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Justin 
Luitjens
Sent: Tuesday, October 18, 2016 9:53 AM
To: users@lists.open-mpi.org
Subject: [OMPI users] Problem building OpenMPI with CUDA 8.0

I have the release version of CUDA 8.0 installed and am trying to build OpenMPI.

Here is my configure and build line:

./configure --prefix=$PREFIXPATH --with-cuda=$CUDA_HOME --with-tm= 
--with-openib= && make && sudo make install

Where CUDA_HOME points to the CUDA install path.

When I run the above command it builds for quite a while but eventually errors
out with this:

make[2]: Entering directory 
`/home/jluitjens/Perforce/jluitjens_dtlogin_p4sw/sw/devrel/DevtechCompute/Internal/Tools/dtlogin/scripts/mpi/openmpi-1.10.1-gcc5.0_2014_11-cuda8.0/opal/tools/wrappers'
  CCLD opal_wrapper
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlInit_v2'
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlDeviceGetHandleByIndex_v2'
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlDeviceGetCount_v2'


Any idea what I might need to change to get around this error?

Thanks,
Justin



Re: [MTT users] MTT Server Downtime - Tues., Oct. 18, 2016

2016-10-18 Thread Josh Hursey
We are moving this downtime to *Friday, Oct. 21 from 2-5 pm US Eastern*.

We hit a snag with the AWS configuration that we are working through.

On Sun, Oct 16, 2016 at 9:53 AM, Josh Hursey  wrote:

> I will announce this on the Open MPI developer's teleconf on Tuesday,
> before the move.
>
> Geoff - Please add this item to the agenda.
>
>
> Short version:
> ---
> MTT server (mtt.open-mpi.org) will be going down for maintenance on
> Tuesday, Oct. 18, 2016 from 2-5 pm US Eastern. During this time the MTT
> Reporter and the MTT client submission interface will not be accessible. I
> will send an email out when the service is back online.
>
>
> Longer version:
> ---
> We need to move the MTT Server/Database from the IU server to the AWS
> server. This move will be completely transparent to users submitting to the
> database, except for a window of downtime to move the database.
>
> I estimate that moving the database will take about two hours. So I have
> blocked off three hours to give us time to test, and redirect the DNS
> record.
>
> Once the service comes back online, you should be able to access MTT using
> the mtt.open-mpi.org URL. No changes are needed in your MTT client setup,
> and all permalinks are expected to still work after the move.
>
>
> Let me know if you have any questions or concerns about the move.
>
>
> --
> Josh Hursey
> IBM Spectrum MPI Developer
>



-- 
Josh Hursey
IBM Spectrum MPI Developer

[OMPI users] Problem building OpenMPI with CUDA 8.0

2016-10-18 Thread Justin Luitjens
I have the release version of CUDA 8.0 installed and am trying to build OpenMPI.

Here is my configure and build line:

./configure --prefix=$PREFIXPATH --with-cuda=$CUDA_HOME --with-tm= 
--with-openib= && make && sudo make install

Where CUDA_HOME points to the CUDA install path.

When I run the above command it builds for quite a while but eventually errors
out with this:

make[2]: Entering directory 
`/home/jluitjens/Perforce/jluitjens_dtlogin_p4sw/sw/devrel/DevtechCompute/Internal/Tools/dtlogin/scripts/mpi/openmpi-1.10.1-gcc5.0_2014_11-cuda8.0/opal/tools/wrappers'
  CCLD opal_wrapper
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlInit_v2'
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlDeviceGetHandleByIndex_v2'
../../../opal/.libs/libopen-pal.so: undefined reference to `nvmlDeviceGetCount_v2'


Any idea what I might need to change to get around this error?

Thanks,
Justin


Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Kawashima, Takahiro
Hi,

You did *not* specify the following options to configure, right?
Specifying all these will cause a problem.

  --disable-mmap-shmem
  --disable-posix-shmem
  --disable-sysv-shmem

Please send the output of the following command.

  mpirun --allow-run-as-root -np 1 --mca shmem_base_verbose 100 helloworld

Also, please send the config.log file, which is written in the top directory
where configure was executed.
Put it on the web or send it compressed (xz or bzip2 is preferred).

Regards,
Takahiro Kawashima

> Hi Gilles,
> 
> Thank you for reply,
> 
> After doing below config options also
> 
> ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc
> CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi
> --enable-script-wrapper-compilers
> --disable-mpi-fortran
> --enable-shared
> --disable-dlopen
> 
> it's configured ,make & make install successfully.
> 
> i compiled  *helloworld.c *programm got executable for *arm* as below(by
> checking the readelf *armhelloworld*),
> 
> 
> *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
> /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld*
> 
> But ,while i run using mpirun on target board as below
> 
> root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
> --
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_shmem_base_select failed
>   --> Returned value -1 instead of OPAL_SUCCESS
> 
> Kindly help me.
> 
> On Tue, Oct 18, 2016 at 7:31 PM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> 
> > Hi Gilles,
> >
> > Thank you for reply,
> >
> > After doing below config options also
> >
> > ./configure --enable-orterun-prefix-by-default
> > --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
> > CC=arm-openwrt-linux-muslgnueabi-gcc
> > CXX=arm-openwrt-linux-muslgnueabi-g++
> > --host=arm-openwrt-linux-muslgnueabi
> > --enable-script-wrapper-compilers
> > --disable-mpi-fortran
> > --enable-shared
> > --disable-dlopen
> >
> > it's configured ,make & make install successfully.
> >
> > i compiled  *helloworld.c *programm got executable for *arm* as below(by
> > checking the readelf *armhelloworld*),
> >
> >
> > *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> > -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld*
> >
> > But ,while i run using mpirun on target board as below
> >
> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
> > --
> > It looks like opal_init failed for some reason; your parallel process is
> > likely to abort.  There are many reasons that a parallel process can
> > fail during opal_init; some of which are due to configuration or
> > environment problems.  This failure appears to be an internal failure;
> > here's some additional information (which may only be relevant to an
> > Open MPI developer):
> >
> >   opal_shmem_base_select failed
> >   --> Returned value -1 instead of OPAL_SUCCESS
> >
> > Kindly help me.
> >
> > On Tue, Oct 18, 2016 at 5:51 PM, Gilles Gouaillardet <
> > gilles.gouaillar...@gmail.com> wrote:
> >
> >> 3 shmem components are available in v1.10, and you explicitly
> >> blacklisted all of them with
> >> --disable-mmap-shmem \
> >> --disable-posix-shmem \
> >> --disable-sysv-shmem
> >>
> >> as a consequence, Open MPI will not start.
> >>
> >> unless you have a good reason, you should build all of them and let
> >> the runtime decide which is best
> >>
> >> Cheers,
> >>
> >> Gilles
> >>
> >> On Tue, Oct 18, 2016 at 9:13 PM, Mahesh Nanavalla
> >>  wrote:
> >> > Hi all,
> >> >
> >> > Thank you for responding me
> >> >
> >> > Below is my configure options...
> >> >
> >> > ./configure --enable-orterun-prefix-by-default
> >> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
> >> > CC=arm-openwrt-linux-muslgnueabi-gcc \
> >> > CXX=arm-openwrt-linux-muslgnueabi-g++ \
> >> > --host=arm-openwrt-linux-muslgnueabi \
> >> > --enable-script-wrapper-compilers
> >> > --disable-mpi-fortran \
> >> > --enable-shared \
> >> > --disable-mmap-shmem \
> >> > --disable-posix-shmem \
> >> > --disable-sysv-shmem \
> >> > --disable-dlopen \
> >> >
> >> > it's configured ,make & make install successfully.
> >> >
> >> > i compiled  helloworld.c programm got executable for arm as below(by
> >> 

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi Gilles,

Thank you for the reply.

Even after configuring with the options below:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc
CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi
--enable-script-wrapper-compilers
--disable-mpi-fortran
--enable-shared
--disable-dlopen

configure, make, and make install all complete successfully.

I compiled helloworld.c and got an ARM executable, as below (verified with
readelf):


nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld

But when I run it with mpirun on the target board as below:

root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
--
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS

Kindly help me.

On Tue, Oct 18, 2016 at 7:31 PM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Gilles,
>
> Thank you for reply,
>
> After doing below config options also
>
> ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh//home/nmahesh/Workspace/ARM_MPI/armmpi/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc
> CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi
> --enable-script-wrapper-compilers
> --disable-mpi-fortran
> --enable-shared
> --disable-dlopen
>
> it's configured ,make & make install successfully.
>
> i compiled  *helloworld.c *programm got executable for *arm* as below(by
> checking the readelf *armhelloworld*),
>
>
> *nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
> /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
> -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o helloworld*
>
> But ,while i run using mpirun on target board as below
>
> root@OpenWrt:/# mpirun --allow-run-as-root -np 1 helloworld
> --
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   opal_shmem_base_select failed
>   --> Returned value -1 instead of OPAL_SUCCESS
>
> Kindly help me.
>
> On Tue, Oct 18, 2016 at 5:51 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> 3 shmem components are available in v1.10, and you explicitly
>> blacklisted all of them with
>> --disable-mmap-shmem \
>> --disable-posix-shmem \
>> --disable-sysv-shmem
>>
>> as a consequence, Open MPI will not start.
>>
>> unless you have a good reason, you should build all of them and let
>> the runtime decide which is best
>>
>> Cheers,
>>
>> Gilles
>>
>> On Tue, Oct 18, 2016 at 9:13 PM, Mahesh Nanavalla
>>  wrote:
>> > Hi all,
>> >
>> > Thank you for responding me
>> >
>> > Below is my configure options...
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
>> > CC=arm-openwrt-linux-muslgnueabi-gcc \
>> > CXX=arm-openwrt-linux-muslgnueabi-g++ \
>> > --host=arm-openwrt-linux-muslgnueabi \
>> > --enable-script-wrapper-compilers
>> > --disable-mpi-fortran \
>> > --enable-shared \
>> > --disable-mmap-shmem \
>> > --disable-posix-shmem \
>> > --disable-sysv-shmem \
>> > --disable-dlopen \
>> >
>> > it's configured ,make & make install successfully.
>> >
>> > i compiled  helloworld.c programm got executable for arm as below(by
>> > checking the readelf armhelloworld),
>> >
>> > nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
>> > /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
>> > -L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o
>> armhelloworld
>> >
>> > nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$ ls
>> > a.out  armhelloworld  helloworld.c  openmpi-1.10.3
>> openmpi-1.10.3.tar.gz
>> >
>> > But ,while i run using mpirun on target board as below
>> >
>> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 armhelloworld
>> >
>> > 
>> --
>> > It looks like opal_init failed for some reason; your parallel process is
>> > likely to abort.  There are many reasons that a parallel process can
>> > fail 

Re: [OMPI users] Performing partial calculation on a single node in an MPI job

2016-10-18 Thread Vahid Askarpour
Hi George and Jeff,

Thank you for taking the time to respond to my query. You did inspire me in the
right direction. Some of the variables involved in the calculation of B were
not broadcast. In addition, a double do-loop combined with an IF statement was
overwriting the correct B values. Interestingly, none of the variables are
declared contiguous, and I did not have to convert B into a 1-D array. So in the
end it all worked out, and I get the correct B matrix out of the code.

Thank you again,

Vahid


On Oct 17, 2016, at 10:23 PM, George Bosilca 
> wrote:

I should have been more precise: you cannot use Fortran's vector subscript with 
Open MPI.

George.

On Mon, Oct 17, 2016 at 2:19 PM, Jeff Hammond 
> wrote:
George:

http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm

Jeff

On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca 
> wrote:
Vahid,

You cannot use Fortran's vector subscript with MPI.

--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi all,

Thank you for responding.

Below are my configure options:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
CC=arm-openwrt-linux-muslgnueabi-gcc \
CXX=arm-openwrt-linux-muslgnueabi-g++ \
--host=arm-openwrt-linux-muslgnueabi \
--enable-script-wrapper-compilers
--disable-mpi-fortran \
--enable-shared \
--disable-mmap-shmem \
--disable-posix-shmem \
--disable-sysv-shmem \
--disable-dlopen \

configure, make, and make install all complete successfully.

I compiled helloworld.c and got an ARM executable, as below (verified by
checking armhelloworld with readelf):


nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c -o armhelloworld

nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$ ls
a.out  armhelloworld  helloworld.c  openmpi-1.10.3  openmpi-1.10.3.tar.gz

But when I run it with mpirun on the target board as below:

root@OpenWrt:/# mpirun --allow-run-as-root -np 1 armhelloworld
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
root@OpenWrt:/#

Kindly help me.

Thanks and Regards,
Mahesh N

On Tue, Oct 18, 2016 at 5:09 PM, Kawashima, Takahiro <
t-kawash...@jp.fujitsu.com> wrote:

> Hi,
>
> > How to cross compile *openmpi *for* arm *on* x86_64 pc.*
> >
> > *Kindly provide configure options for above...*
>
> You should pass your arm architecture name to the --host option.
>
> Example of my configure options for Open MPI, run on sparc64,
> built on x86_64:
>
>   --prefix=...
>   --host=sparc64-unknown-linux-gnu
>   --build=x86_64-cross-linux-gnu
>   --disable-mpi-fortran
>   CC=your_c_cross_compiler_command
>   CXX=your_cxx_cross_compiler_command
>
> If you need Fortran support, it's a bit complex. You need to
> prepare a special file and pass it to the --with-cross option.
>
> A cross mpicc command is not built automatically with the
> options above. There are (at least) three options to compile
> your MPI programs.
>
> (A) Manually add -L, -I, and -l options to the cross gcc command
> (or another compiler) when you compile a MPI program.
> The options you should pass is written in
> $installdir/share/openmpi/mpicc-wrapper-data.txt.
> In most cases, -I$installdir/include -L$installdir/lib -lmpi
> will be sufficient.
>
> (B) Use the --enable-script-wrapper-compilers option on configure
> time, as you tried. This method may not be maintained well
> in the Open MPI team so you may encounter problems.
> But you can ask them on this mailing list.
>
> (C) Build Open MPI for x86_64 natively, copy the opal_wrapper
> command, and write wrapper-data.txt file.
> This is a bit complex task. I'll write the procedure on
> GitHub Wiki when I have a time.
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Kawashima, Takahiro
Hi,

> How to cross compile openmpi for arm on x86_64 pc?
> 
> Kindly provide configure options for the above.

You should pass your arm architecture name to the --host option.

Example of my configure options for Open MPI, run on sparc64,
built on x86_64:

  --prefix=...
  --host=sparc64-unknown-linux-gnu
  --build=x86_64-cross-linux-gnu
  --disable-mpi-fortran
  CC=your_c_cross_compiler_command
  CXX=your_cxx_cross_compiler_command

If you need Fortran support, it's a bit complex. You need to
prepare a special file and pass it to the --with-cross option.

A cross mpicc command is not built automatically with the
options above. There are (at least) three options to compile
your MPI programs.

(A) Manually add -L, -I, and -l options to the cross gcc command
(or another compiler) when you compile an MPI program.
The options you should pass are listed in
$installdir/share/openmpi/mpicc-wrapper-data.txt.
In most cases, -I$installdir/include -L$installdir/lib -lmpi
will be sufficient.

(B) Use the --enable-script-wrapper-compilers option at configure
time, as you tried. This method may not be well maintained
by the Open MPI team, so you may encounter problems.
But you can ask about them on this mailing list.

(C) Build Open MPI for x86_64 natively, copy the opal_wrapper
command, and write a wrapper-data.txt file.
This is a somewhat complex task. I'll write up the procedure on
the GitHub wiki when I have time.
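
As a rough sketch of option (A), using the cross toolchain name from this
thread and with $installdir standing in for the Open MPI install prefix
(illustrative and untested):

  arm-openwrt-linux-muslgnueabi-gcc helloworld.c \
      -I$installdir/include -L$installdir/lib -lmpi \
      -o armhelloworld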


Re: [OMPI users] communications groups

2016-10-18 Thread Gilles Gouaillardet
Rick,

If you look at the big picture, I think three communicators make more sense.
Keep in mind that a given task is only a member of two of those communicators;
depending on how you implement communicator creation, a task may end up holding
two communicator handles, or three, with one of them being MPI_COMM_NULL.

MPI_Barrier is invoked on communicators, not groups.
All members of a communicator must call MPI_Barrier().
Also keep in mind that creating a communicator might return MPI_COMM_NULL on
some tasks; this is a special case, and you are not allowed to call
MPI_Barrier(MPI_COMM_NULL).


Cheers,

Gilles

On Tue, Oct 18, 2016 at 12:43 AM, Marlborough, Rick
 wrote:
> Designation: Non-Export Controlled Content
>
> Gilles;
>
> Yes, your assumption is correct. No communication between
> proxies and no communications between sensors. I am using rank to determine
> role. Dispatcher being 0. Sensors start at 1. So I should have 3 groups? I
> am new to MPI and my knowledge of it is not the best. My understanding is
> that when I utilize an MPI_Barrier, all participants of a specified group
> must call MPI_Barrier in order to advance. This tells me Dispatch and sensor
> must be in the same group. Is my understanding incorrect?
>
>
>
> Thanks
>
> Rick
>
>
>
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles
> Gouaillardet
> Sent: Monday, October 17, 2016 10:38 AM
>
>
> To: Open MPI Users
> Subject: Re: [OMPI users] communications groups
>
>
>
> Rick,
>
>
>
> So you have three types of tasks
>
> - 1 dispatcher
>
> - several sensors
>
> - several proxies
>
>
>
> If proxies do not communicate with each other, and if sensors do not
> communicate with each other, then you could end up with 3 inter
> communicators
>
> sensorComm: dispatcher in the left group and sensors in the right group
>
> proxyComm: dispatcher in the left group and proxies in the right group
>
> controlComm: sensors in the left group and proxies in the right group
>
>
>
> Does that fit your needs ?
>
> If yes, then keep in mind sensorComm is MPI_COMM_NULL on the proxy tasks,
> proxyComm is MPI_COMM_NULL on the sensor tasks, and controlComm is
> MPI_COMM_NULL on the dispatcher.
>
>
>
> Cheers,
>
>
>
> Gilles
>
> On Monday, October 17, 2016, Marlborough, Rick 
> wrote:
>
> Designation: Non-Export Controlled Content
>
> Gilles;
>
> My scenario involves a Dispatcher of rank 0, and several
> sensors and proxy objects. The Dispatcher triggers activity and gathers
> results. The proxies get triggered first. They send data to the sensors, and
> the sensors indicate to the dispatcher that they are done. I am trying to
> create 2 comm groups. One for the sensors and one for the proxies. The
> dispatcher will use the 2 comm groups to coordinate activity. I tried adding
> the dispatcher to the sensorList comm group, but I get an error saying
> “invalid task”.
>
>
>
> Rick
>
>
>
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles
> Gouaillardet
> Sent: Monday, October 17, 2016 9:30 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] communications groups
>
>
>
> Rick,
>
>
>
> I re-read the MPI standard and was unable to figure out if sensorgroup is
> MPI_GROUP_EMPTY or a group with task 1 on tasks except task 1
>
> (A group that does not contain the current task makes little sense to me,
> but I do not see any reason why this group have to be MPI_GROUP_EMPTY)
>
>
>
> Regardless, sensorComm will be MPI_COMM_NULL except on task 1, so
> MPI_Barrier will fail.
>
>
>
> Cheers,
>
>
>
> Gilles
>
> On Monday, October 17, 2016, Marlborough, Rick 
> wrote:
>
> Designation: Non-Export Controlled Content
>
> George;
>
> Thanks for your response. Your second sentence is a little
> confusing. If my world group is P0,P1, visible on both processes, why
> wouldn’t the sensorList contain P1 on both processes?
>
>
>
> Rick
>
>
>
>
>
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of George
> Bosilca
> Sent: Friday, October 14, 2016 5:44 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] communications groups
>
>
>
> Rick,
>
>
>
> Let's assume that you have started 2 processes, and that your sensorList is
> {1}. The worldgroup will then be {P0, P1}, which trimmed via the sensorList
> will give the sensorgroup {MPI_GROUP_EMPTY} on P0 and the sensorgroup {P1}
> on P1. As a result on P0 you will create a MPI_COMM_NULL communicator, while
> on P1 you will have a valid communicator sensorComm (which will only contain
> P1). You cannot do a Barrier on an MPI_COMM_NULL communicator, which might
> explain the "invalid communicator" error you are getting.
>
>
>
> George.
>
>
>
>
>
> On Fri, Oct 14, 2016 at 5:33 PM, Marlborough, Rick
>  wrote:
>
> Designation: Non-Export Controlled Content
>
> Folks;
>
> I have the following code setup. The sensorList is an array
> of ints of size 1. 

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Gilles Gouaillardet
You can try
configure --host=arm... CC=gcc_cross_compiler CXX=g++_cross_compiler
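
Spelled out with the OpenWrt toolchain names used elsewhere in this thread
(the prefix path is only an example), that would look something like:

  ./configure --prefix=/home/nmahesh/Workspace/ARM_MPI/openmpi \
      --host=arm-openwrt-linux-muslgnueabi \
      --build=x86_64-linux-gnu \
      CC=arm-openwrt-linux-muslgnueabi-gcc \
      CXX=arm-openwrt-linux-muslgnueabi-g++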

On Tuesday, October 18, 2016, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi all,
>
> How to cross compile *openmpi *for* arm *on* x86_64 pc.*
>
> *Kindly provide configure options for above...*
>
> Thanks,
> Mahesh.N
>

Re: [OMPI users] OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Gilles Gouaillardet
mpicc is a symlink pointing to ompi_wrapper_script

I guess we do not correctly support the --target option (unless you changed
your configure options in the meantime), so you will have to manually update the
broken symlinks so they point to arm-openwrt-linux-muslgnueabi-ompi_wrapper_script.
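
For example, something along these lines should repoint the wrappers (a sketch
based on the listing below; adjust the list to whichever symlinks are broken):

  cd /home/nmahesh/Workspace/ARM_MPI/openmpi/bin
  for w in mpicc mpic++ mpiCC mpicxx mpif77 mpif90 mpifort; do
      ln -sf arm-openwrt-linux-muslgnueabi-ompi_wrapper_script "$w"
  done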

Cheers,

Gilles


Mahesh Nanavalla  wrote:
>Hi Gilles,
>
>
>Thanks for responding me.
>
>i did as mention previous mail ,but am getting below error as mpicc  not 
>found...
>
>
>nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$ 
>/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc 
>-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c 
>
>bash: /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc: No such file or 
>directory
>
>
>
>but i cheak in folder as follow
>
>nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/openmpi/bin$ ll
>
>total 35460
>
>drwxrwxr-x 2 nmahesh nmahesh    4096 Oct 18 15:34 ./
>
>drwxrwxr-x 7 nmahesh nmahesh    4096 Oct 18 15:34 ../
>
>-rwxr-xr-x 1 nmahesh nmahesh   22961 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-ompi_info*
>
>-rwxr-xr-x 1 nmahesh nmahesh    5661 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-ompi_wrapper_script*
>
>-rwxr-xr-x 1 nmahesh nmahesh   14699 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-clean*
>
>-rwxr-xr-x 1 nmahesh nmahesh    8770 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orted*
>
>-rwxr-xr-x 1 nmahesh nmahesh   56800 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-dvm*
>
>-rwxr-xr-x 1 nmahesh nmahesh   35191 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-info*
>
>-rwxr-xr-x 1 nmahesh nmahesh   23939 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-ps*
>
>-rwxr-xr-x 1 nmahesh nmahesh  124520 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orterun*
>
>-rwxr-xr-x 1 nmahesh nmahesh   15429 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-server*
>
>-rwxr-xr-x 1 nmahesh nmahesh   79807 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-submit*
>
>-rwxr-xr-x 1 nmahesh nmahesh   30514 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-orte-top*
>
>-rwxr-xr-x 1 nmahesh nmahesh   23032 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-oshmem_info*
>
>-rwxr-xr-x 1 nmahesh nmahesh  980414 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfaux*
>
>-rwxr-xr-x 1 nmahesh nmahesh   40607 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfcompress*
>
>-rwxr-xr-x 1 nmahesh nmahesh   20915 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfconfig*
>
>-rwxr-xr-x 1 nmahesh nmahesh   94717 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfinfo*
>
>-rwxr-xr-x 1 nmahesh nmahesh  115276 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfmerge*
>
>-rwxr-xr-x 1 nmahesh nmahesh  122985 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfmerge-mpi*
>
>-rwxr-xr-x 1 nmahesh nmahesh  231420 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfprint*
>
>-rwxr-xr-x 1 nmahesh nmahesh 6356996 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfprofile*
>
>-rwxr-xr-x 1 nmahesh nmahesh 7037080 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfprofile-mpi*
>
>-rwxr-xr-x 1 nmahesh nmahesh  417971 Oct 18 15:34 
>arm-openwrt-linux-muslgnueabi-otfshrink*
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpic++ -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpicc -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpiCC -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpicxx -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 mpiexec -> orterun
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpif77 -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpif90 -> ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpifort -> 
>ompi_wrapper_script
>
>lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 mpirun -> orterun
>
>lrwxrwxrwx 1 nmahesh nmahesh      10 Oct 18 15:34 ompi-clean -> orte-clean
>
>lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 ompi-ps -> orte-ps
>
>lrwxrwxrwx 1 nmahesh nmahesh      11 Oct 18 15:34 ompi-server -> orte-server
>
>lrwxrwxrwx 1 nmahesh nmahesh       8 Oct 18 15:34 ompi-top -> orte-top
>
>-rwxr-xr-x 1 nmahesh nmahesh 1766460 Oct 18 15:34 opari*
>
>lrwxrwxrwx 1 nmahesh nmahesh       5 Oct 18 15:34 oshcc -> mpicc
>
>lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 oshfort -> mpifort
>
>lrwxrwxrwx 1 nmahesh nmahesh       6 Oct 18 15:34 oshrun -> mpirun
>
>lrwxrwxrwx 1 nmahesh nmahesh      11 Oct 18 15:34 otfdecompress -> otfcompress
>
>lrwxrwxrwx 1 nmahesh nmahesh       5 Oct 18 15:34 shmemcc -> mpicc
>
>lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 shmemfort -> mpifort
>
>lrwxrwxrwx 1 nmahesh nmahesh       6 Oct 18 15:34 shmemrun -> mpirun
>
>lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtc++ -> vtwrapper*
>
>lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtcc -> vtwrapper*
>
>lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtCC -> vtwrapper*
>
>lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtcxx -> vtwrapper*
>

[OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi all,

How do I cross compile openmpi for arm on an x86_64 PC?

Kindly provide configure options for the above.

Thanks,
Mahesh.N

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Mahesh Nanavalla
Hi Gilles,

Thanks for responding.
I did as mentioned in the previous mail, but I am getting the error below;
mpicc is not found:

nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/mpi$
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c
bash: /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc: No such file or
directory


But when I check the folder, it contains the following:

nmahesh@nmahesh-H81MLV3:~/Workspace/ARM_MPI/openmpi/bin$ ll
total 35460
drwxrwxr-x 2 nmahesh nmahesh    4096 Oct 18 15:34 ./
drwxrwxr-x 7 nmahesh nmahesh    4096 Oct 18 15:34 ../
-rwxr-xr-x 1 nmahesh nmahesh   22961 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-ompi_info*
-rwxr-xr-x 1 nmahesh nmahesh    5661 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-ompi_wrapper_script*
-rwxr-xr-x 1 nmahesh nmahesh   14699 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-clean*
-rwxr-xr-x 1 nmahesh nmahesh    8770 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orted*
-rwxr-xr-x 1 nmahesh nmahesh   56800 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-dvm*
-rwxr-xr-x 1 nmahesh nmahesh   35191 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-info*
-rwxr-xr-x 1 nmahesh nmahesh   23939 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-ps*
-rwxr-xr-x 1 nmahesh nmahesh  124520 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orterun*
-rwxr-xr-x 1 nmahesh nmahesh   15429 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-server*
-rwxr-xr-x 1 nmahesh nmahesh   79807 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-submit*
-rwxr-xr-x 1 nmahesh nmahesh   30514 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-orte-top*
-rwxr-xr-x 1 nmahesh nmahesh   23032 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-oshmem_info*
-rwxr-xr-x 1 nmahesh nmahesh  980414 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfaux*
-rwxr-xr-x 1 nmahesh nmahesh   40607 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfcompress*
-rwxr-xr-x 1 nmahesh nmahesh   20915 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfconfig*
-rwxr-xr-x 1 nmahesh nmahesh   94717 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfinfo*
-rwxr-xr-x 1 nmahesh nmahesh  115276 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfmerge*
-rwxr-xr-x 1 nmahesh nmahesh  122985 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfmerge-mpi*
-rwxr-xr-x 1 nmahesh nmahesh  231420 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfprint*
-rwxr-xr-x 1 nmahesh nmahesh 6356996 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfprofile*
-rwxr-xr-x 1 nmahesh nmahesh 7037080 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfprofile-mpi*
-rwxr-xr-x 1 nmahesh nmahesh  417971 Oct 18 15:34 arm-openwrt-linux-muslgnueabi-otfshrink*
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpic++ -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpicc -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpiCC -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpicxx -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 mpiexec -> orterun
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpif77 -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpif90 -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh      19 Oct 18 15:34 mpifort -> ompi_wrapper_script
lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 mpirun -> orterun
lrwxrwxrwx 1 nmahesh nmahesh      10 Oct 18 15:34 ompi-clean -> orte-clean
lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 ompi-ps -> orte-ps
lrwxrwxrwx 1 nmahesh nmahesh      11 Oct 18 15:34 ompi-server -> orte-server
lrwxrwxrwx 1 nmahesh nmahesh       8 Oct 18 15:34 ompi-top -> orte-top
-rwxr-xr-x 1 nmahesh nmahesh 1766460 Oct 18 15:34 opari*
lrwxrwxrwx 1 nmahesh nmahesh       5 Oct 18 15:34 oshcc -> mpicc
lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 oshfort -> mpifort
lrwxrwxrwx 1 nmahesh nmahesh       6 Oct 18 15:34 oshrun -> mpirun
lrwxrwxrwx 1 nmahesh nmahesh      11 Oct 18 15:34 otfdecompress -> otfcompress
lrwxrwxrwx 1 nmahesh nmahesh       5 Oct 18 15:34 shmemcc -> mpicc
lrwxrwxrwx 1 nmahesh nmahesh       7 Oct 18 15:34 shmemfort -> mpifort
lrwxrwxrwx 1 nmahesh nmahesh       6 Oct 18 15:34 shmemrun -> mpirun
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtc++ -> vtwrapper*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtcc -> vtwrapper*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtCC -> vtwrapper*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtcxx -> vtwrapper*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtf77 -> vtwrapper*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtf90 -> vtwrapper*
-rwxr-xr-x 1 nmahesh nmahesh 2928026 Oct 18 15:34 vtfilter*
lrwxrwxrwx 1 nmahesh nmahesh       8 Oct 18 15:34 vtfiltergen -> vtfilter*
lrwxrwxrwx 1 nmahesh nmahesh      12 Oct 18 15:34 vtfiltergen-mpi -> vtfilter-mpi*
-rwxr-xr-x 1 nmahesh nmahesh 3100359 Oct 18 15:34 vtfilter-mpi*
lrwxrwxrwx 1 nmahesh nmahesh       9 Oct 18 15:34 vtfort -> vtwrapper*
-rwxr-xr-x 1 nmahesh nmahesh    9031 Oct 18 15:34 vtrun*
-rwxr-xr-x 1 nmahesh nmahesh 5623609 Oct 18 15:34 vtunify*
-rwxr-xr-x 1 nmahesh nmahesh 6177733 Oct 18

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Gilles Gouaillardet

Hi,


can you please give the patch below a try ?

Cheers,

Gilles

diff --git a/ompi/tools/wrappers/ompi_wrapper_script.in 
b/ompi/tools/wrappers/ompi_wrapper_script.in

index d87649f..b66fec3 100644
--- a/ompi/tools/wrappers/ompi_wrapper_script.in
+++ b/ompi/tools/wrappers/ompi_wrapper_script.in
@@ -35,12 +35,12 @@ my $FC = "@FC@";
 my $extra_includes = "@OMPI_WRAPPER_EXTRA_INCLUDES@";
 my $extra_cppflags = "@OMPI_WRAPPER_EXTRA_CPPFLAGS@";
 my $extra_cflags = "@OMPI_WRAPPER_EXTRA_CFLAGS@";
-my $extra_cflags_prefix = "@ORTE_WRAPPER_EXTRA_CFLAGS_PREFIX@";
+my $extra_cflags_prefix = "@OMPI_WRAPPER_EXTRA_CFLAGS_PREFIX@";
 my $extra_cxxflags = "@OMPI_WRAPPER_EXTRA_CXXFLAGS@";
-my $extra_cxxflags_prefix = "@ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX@";
+my $extra_cxxflags_prefix = "@OMPI_WRAPPER_EXTRA_CXXFLAGS_PREFIX@";
 my $extra_fcflags = "@OMPI_WRAPPER_EXTRA_FCFLAGS@";
 my $extra_fcflags_prefix = "@OMPI_WRAPPER_EXTRA_FCFLAGS_PREFIX@";
-my $extra_ldflags = "@OMPI_WRAPPER_EXTRA_LDFLAGS@";
+my $extra_ldflags = "@OMPI_PKG_CONFIG_LDFLAGS@";
 my $extra_libs = "@OMPI_WRAPPER_EXTRA_LIBS@";
 my $cxx_lib = "@OMPI_WRAPPER_CXX_LIB@";
 my $fc_module_flag = "@OMPI_FC_MODULE_FLAG@";
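
If it helps, one way to try it (assuming the patch is saved as
ompi_wrapper_script.patch, an example name, and applied from the top of the
openmpi-1.10.3 source tree, then rebuilt with your existing configure options):

  cd openmpi-1.10.3
  patch -p1 < ompi_wrapper_script.patch
  ./configure <your previous options> && make && make install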

On 10/18/2016 1:48 PM, Mahesh Nanavalla wrote:

Hi everyone,

I'm trying to cross compile openmpi-1.10.3 
for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu with below 
configure options...



./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
 --build=x86_64-linux-gnu
--host=x86_64-linux-gnu
--target=arm-openwrt-linux-muslgnueabi
--enable-script-wrapper-compilers
--disable-mpi-fortran
--enable-shared
--disable-mmap-shmem
--disable-posix-shmem
--disable-sysv-shmem
--disable-dlopen
configure and make install complete successfully.
I added
$export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/"
$export 
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib/"


$export PATH="$PATH:/home/$USER/Workspace/ARM_MPI/openmpi/bin/" >> 
/home/$USER/.bashrc
$export 
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/Workspace/ARM_MPI/openmpi/lib/" >> 
/home/$USER/.bashrc


But while compiling as below, I am getting an error:

$ /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc
-L/home/nmahesh/Workspace/ARM_MPI/openmpi/lib helloworld.c
Possible unintended interpolation of 
@ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX in string at 
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 40.
Possible unintended interpolation of @libdir in string at 
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
Name "main::ORTE_WRAPPER_EXTRA_CXXFLAGS_PREFIX" used only once: 
possible typo at /home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc 
line 40.
Name "main::libdir" used only once: possible typo at 
/home/nmahesh/Workspace/ARM_MPI/openmpi/bin/mpicc line 43.
/home/nmahesh/Workspace/ARM_MPI/openmpi/lib/libmpi.so: file not 
recognized: File format not recognized

collect2: error: ld returned 1 exit status

Can anybody help?

