[OMPI users] (no subject)

2013-10-07 Thread San B
Hi,

I'm facing a performance issue with a scientific application (Fortran). It
runs quickly on a single node but very slowly on multiple nodes. For example, a
16-core job on a single node finishes in 1 hr 2 min, but the same job on two
nodes (i.e. 8 cores per node, with the remaining 8 cores kept free) takes
3 hr 20 min. The code is compiled with ifort 13.1.1, OpenMPI 1.4.5 and the
Intel MKL libraries (LAPACK, BLAS, ScaLAPACK, BLACS and FFTW). What could the
problem be here?
Is it possible to do any tuning in OpenMPI? FYI, more info: the cluster has
Intel Sandy Bridge processors (E5-2670), InfiniBand, and Hyper-Threading is
enabled. Jobs are submitted through the LSF scheduler.
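For illustration, the two-node case is submitted roughly like this (the
resource string, output file and binary name shown here are just placeholders,
not the actual job script):

    # 16 slots, at most 8 per host, i.e. spread over two nodes
    bsub -n 16 -R "span[ptile=8]" -o job.%J.out mpirun -np 16 ./app.exe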

Could Hyper-Threading be causing any problem here?


Thanks


Re: [OMPI users] (no subject)

2013-10-07 Thread Reuti
Hi,

On 07.10.2013, at 08:45, San B wrote:

> I'm facing a performance issue with a scientific application (Fortran). It 
> runs quickly on a single node but very slowly on multiple nodes. For example, 
> a 16-core job on a single node finishes in 1 hr 2 min, but the same job on two 
> nodes (i.e. 8 cores per node, with the remaining 8 cores kept free) takes 
> 3 hr 20 min. The code is compiled with ifort 13.1.1, OpenMPI 1.4.5 and the 
> Intel MKL libraries (LAPACK, BLAS, ScaLAPACK, BLACS and FFTW). What could the 
> problem be here?

How do you provide the application with the list of hosts it should use? Maybe it's 
now just running on only one machine - and/or it can only make use of the local OpenMP 
threading inside MKL (yes, OpenMP here, which is bound to run on a single machine only).
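For example, with a plain hostfile it would look roughly like this (just a
sketch with placeholder host and binary names; --display-map only prints where
the ranks were actually placed):

    # hostfile "hosts" containing e.g.:
    #   node01 slots=8
    #   node02 slots=8
    export OMP_NUM_THREADS=1   # keep MKL's internal OpenMP threads from oversubscribing the cores
    mpirun -np 16 --hostfile hosts --display-map -x OMP_NUM_THREADS ./app.exe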

-- Reuti

PS: Do you have 16 real cores per node, or 8 plus Hyper-Threading?


> Is it possible to do any tuning in OpenMPI? FYI, more info: the cluster has 
> Intel Sandy Bridge processors (E5-2670), InfiniBand, and Hyper-Threading is 
> enabled. Jobs are submitted through the LSF scheduler.
> 
> Could Hyper-Threading be causing any problem here?
> 
> 
> Thanks



[OMPI users] Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Hammond, Simon David (-EXP)
Hey everyone,

I am trying to build OpenMPI 1.7.2 with CUDA enabled. OpenMPI configures
successfully, but I am seeing a build error that seems to be related to the
inclusion of the CUDA options (at least I think so). Do you know whether this
is a bug or whether something is wrong with how we are configuring OpenMPI for
our cluster?

Configure Line: ./configure
--prefix=/home/projects/openmpi/1.7.2/gnu/4.7.2 --enable-shared
--enable-static --disable-vt --with-cuda=/home/projects/cuda/5.5.11
CC=`which gcc` CXX=`which g++` FC=`which gfortran`

Running make V=1 gives:

make[2]: Entering directory `/tmp/openmpi-1.7.2/ompi/tools/ompi_info'
/bin/sh ../../../libtool  --tag=CC   --mode=link
/home/projects/gcc/4.7.2/bin/gcc -std=gnu99
-DOPAL_CONFIGURE_USER="\"\"" -DOPAL_CONFIGURE_HOST="\"k20-0007\""
-DOPAL_CONFIGURE_DATE="\"Mon Oct  7 13:16:12 MDT 2013\""
-DOMPI_BUILD_USER="\"$USER\"" -DOMPI_BUILD_HOST="\"`hostname`\""
-DOMPI_BUILD_DATE="\"`date`\"" -DOMPI_BUILD_CFLAGS="\"-O3 -DNDEBUG
-finline-functions -fno-strict-aliasing -pthread\""
-DOMPI_BUILD_CPPFLAGS="\"-I../../..
-I/tmp/openmpi-1.7.2/opal/mca/hwloc/hwloc152/hwloc/include
-I/tmp/openmpi-1.7.2/opal/mca/event/libevent2019/libevent
-I/tmp/openmpi-1.7.2/opal/mca/event/libevent2019/libevent/include
-I/usr/include/infiniband -I/usr/include/infiniband
-I/usr/include/infiniband -I/usr/include/infiniband
-I/usr/include/infiniband\"" -DOMPI_BUILD_CXXFLAGS="\"-O3 -DNDEBUG
-finline-functions -pthread\"" -DOMPI_BUILD_CXXCPPFLAGS="\"-I../../..  \""
-DOMPI_BUILD_FFLAGS="\"\"" -DOMPI_BUILD_FCFLAGS="\"\""
-DOMPI_BUILD_LDFLAGS="\"-export-dynamic  \"" -DOMPI_BUILD_LIBS="\"-lrt
-lnsl  -lutil -lm \"" -DOPAL_CC_ABSOLUTE="\"\""
-DOMPI_CXX_ABSOLUTE="\"none\"" -O3 -DNDEBUG -finline-functions
-fno-strict-aliasing -pthread  -export-dynamic   -o ompi_info ompi_info.o
param.o components.o version.o ../../../ompi/libmpi.la -lrt -lnsl  -lutil
-lm
libtool: link: /home/projects/gcc/4.7.2/bin/gcc -std=gnu99
-DOPAL_CONFIGURE_USER=\"\" -DOPAL_CONFIGURE_HOST=\"k20-0007\"
"-DOPAL_CONFIGURE_DATE=\"Mon Oct  7 13:16:12 MDT 2013\""
-DOMPI_BUILD_USER=\"\" -DOMPI_BUILD_HOST=\"k20-0007\"
"-DOMPI_BUILD_DATE=\"Mon Oct  7 13:26:23 MDT 2013\""
"-DOMPI_BUILD_CFLAGS=\"-O3 -DNDEBUG -finline-functions
-fno-strict-aliasing -pthread\"" "-DOMPI_BUILD_CPPFLAGS=\"-I../../..
-I/tmp/openmpi-1.7.2/opal/mca/hwloc/hwloc152/hwloc/include
-I/tmp/openmpi-1.7.2/opal/mca/event/libevent2019/libevent
-I/tmp/openmpi-1.7.2/opal/mca/event/libevent2019/libevent/include
-I/usr/include/infiniband -I/usr/include/infiniband
-I/usr/include/infiniband -I/usr/include/infiniband
-I/usr/include/infiniband\"" "-DOMPI_BUILD_CXXFLAGS=\"-O3 -DNDEBUG
-finline-functions -pthread\"" "-DOMPI_BUILD_CXXCPPFLAGS=\"-I../../..  \""
-DOMPI_BUILD_FFLAGS=\"\" -DOMPI_BUILD_FCFLAGS=\"\"
"-DOMPI_BUILD_LDFLAGS=\"-export-dynamic  \"" "-DOMPI_BUILD_LIBS=\"-lrt
-lnsl  -lutil -lm \"" -DOPAL_CC_ABSOLUTE=\"\" -DOMPI_CXX_ABSOLUTE=\"none\"
-O3 -DNDEBUG -finline-functions -fno-strict-aliasing -pthread -o
.libs/ompi_info ompi_info.o param.o components.o version.o
-Wl,--export-dynamic  ../../../ompi/.libs/libmpi.so -L/usr/lib64 -lrdmacm
-losmcomp -libverbs /tmp/openmpi-1.7.2/orte/.libs/libopen-rte.so
/tmp/openmpi-1.7.2/opal/.libs/libopen-pal.so -lcuda -lnuma -ldl -lrt -lnsl
-lutil -lm -pthread -Wl,-rpath
-Wl,/home/projects/openmpi/1.7.2/gnu/4.7.2/lib
../../../ompi/.libs/libmpi.so: undefined reference to
`mca_pml_bfo_send_request_start_cuda'
../../../ompi/.libs/libmpi.so: undefined reference to
`mca_pml_bfo_cuda_need_buffers'
collect2: error: ld returned 1 exit status



Thanks.

S.

-- 
Simon Hammond
Scalable Computer Architectures (CSRI/146, 01422)
Sandia National Laboratories, NM, USA






Re: [OMPI users] Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Rolf vandeVaart
That might be a bug.  While I am checking, you could try configuring with this 
additional flag:

--enable-mca-no-build=pml-bfo
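
i.e. with the configure line from your mail it would become something like
(untested sketch, just the extra flag added):

    ./configure --prefix=/home/projects/openmpi/1.7.2/gnu/4.7.2 \
        --enable-shared --enable-static --disable-vt \
        --with-cuda=/home/projects/cuda/5.5.11 \
        --enable-mca-no-build=pml-bfo \
        CC=`which gcc` CXX=`which g++` FC=`which gfortran`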

Rolf



Re: [OMPI users] [EXTERNAL] Re: Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Hammond, Simon David (-EXP)
Thanks Rolf, that seems to have made the code compile and build
successfully. 

S.

-- 
Simon Hammond
Scalable Computer Architectures (CSRI/146, 01422)
Sandia National Laboratories, NM, USA






On 10/7/13 1:47 PM, "Rolf vandeVaart"  wrote:

>That might be a bug.  While I am checking, you could try configuring with
>this additional flag:
>
>--enable-mca-no-build=pml-bfo
>
>Rolf

Re: [OMPI users] [EXTERNAL] Re: Build Failing for OpenMPI 1.7.2 and CUDA 5.5.11

2013-10-07 Thread Rolf vandeVaart
Good.  This is fixed in Open MPI 1.7.3, by the way.  I will add a note to the FAQ on 
building Open MPI 1.7.2.
