[OMPI users] MPI_INIT gets stuck

2016-03-06 Thread Marco Lubosch

Hello guys,

I am trying to take my first steps with Open MPI and finally got it working on 
Cygwin64 (Windows 7, 64-bit).
I can compile plain C code without any issues via "mpicc ...", but 
when I try to initialize MPI the program gets stuck inside 
MPI_Init without creating any CPU load. Example from 
https://svn.open-mpi.org/source/xref/ompi_1.8/examples/:


   #include <stdio.h>
   #include "mpi.h"

   int main(int argc, char* argv[])
   {
       int rank, size, len;
       char version[MPI_MAX_LIBRARY_VERSION_STRING];

       printf("1\n");
       MPI_Init(&argc, &argv);
       printf("2\n");
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       printf("3\n");
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       printf("4\n");
       MPI_Get_library_version(version, &len);
       printf("5\n");
       printf("Hello, world, I am %d of %d, (%s, %d)\n", rank, size, version, len);
       MPI_Finalize();
       printf("6\n");
       return 0;
   }

Compiling works perfectly fine with "mpicc -o hello_c.exe hello_c.c". 
But when I run it with "mpirun -np 4 ./hello_c" it launches 4 processes, 
each printing "1", but then they keep running without doing anything. I then 
have to kill the processes manually to be able to keep working in Cygwin.


Can you tell me what I am doing wrong?

Thanks
Marco

PS: Installed packages on Cygwin are libopenmpi, libopenmpi-devel, 
openmpi, gcc-core


Re: [OMPI users] openmpi bug on mac os 10.11.3 ?

2016-03-06 Thread Gilles Gouaillardet

Hans,

Attached is a simplified version of your second.c test program that works.

I noticed that the anz variable is not initialized; also, your program is 
incorrect from an MPI point of view. Your original program fails on my 
RHEL7-like box with both Open MPI and MPICH.
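
(second.c itself is not attached here; judging from the valgrind trace below, 
it uses MPI_Win_post. For reference, a minimal self-contained sketch of the 
post/start/complete/wait pattern with every variable initialized could look 
like the following; the buffer, group and value names are illustrative only, 
not taken from the original program.)

    /* pscw_sketch.cpp -- mpicxx pscw_sketch.cpp && mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) std::printf("run with exactly 2 tasks\n");
            MPI_Finalize();
            return 0;
        }

        int winbuf = -1;                     /* window memory, initialized before exposure */
        MPI_Win win;
        MPI_Win_create(&winbuf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        int peer = 1 - rank;                 /* group containing only the peer rank */
        MPI_Group world_group, peer_group;
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_incl(world_group, 1, &peer, &peer_group);

        MPI_Win_post(peer_group, 0, win);    /* expose my window to the peer     */
        MPI_Win_start(peer_group, 0, win);   /* open an access epoch to the peer */

        int value = 100 + rank;
        MPI_Put(&value, 1, MPI_INT, peer, 0, 1, MPI_INT, win);

        MPI_Win_complete(win);               /* end my access epoch   */
        MPI_Win_wait(win);                   /* end my exposure epoch */

        std::printf("rank %d: winbuf = %d\n", rank, winbuf);

        MPI_Group_free(&peer_group);
        MPI_Group_free(&world_group);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }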

Cheers,

Gilles

On 3/5/2016 7:35 PM, Hans-Jürgen Greif wrote:




Hello,

On Mac OS 10.11.3 I have found an error:

mpirun -np 2 valgrind ./second
==612== Memcheck, a memory error detector
==612== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==612== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==612== Command: ./second
==612==
==611== Memcheck, a memory error detector
==611== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==611== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==611== Command: ./second
==611==
--612-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option
--611-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option
--612-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
2 times)
--611-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
2 times)
--611-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
4 times)
--612-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
4 times)
--611-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
8 times)
--612-- UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option (repeated 
8 times)

==612== Conditional jump or move depends on uninitialised value(s)
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x10BED: main (second.c:39)
==611==
==612==at 0x10D1C: main (second.c:60)
==612==
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x100060781: MPI_Win_post (in 
/usr/local/openmpi/lib/libmpi.12.dylib)

==611==by 0x10C69: main (second.c:43)
==611==
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x100413E98: __ultoa (in /usr/lib/system/libsystem_c.dylib)
==611==by 0x10041136C: __vfprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1004396C8: __v2printf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x10040EF51: _vasprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1001C379E: opal_show_help_vstring (in 
/usr/local/openmpi/lib/libopen-pal.13.dylib)
==611==by 0x100128231: orte_show_help (in 
/usr/local/openmpi/lib/libopen-rte.12.dylib)
==611==by 0x10002069E: backend_fatal (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000203EC: ompi_mpi_errors_are_fatal_comm_handler (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000201BD: ompi_errhandler_invoke (in 
/usr/local/openmpi/lib/libmpi.12.dylib)

==611==by 0x10C69: main (second.c:43)
==611==
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x100413F06: __ultoa (in /usr/lib/system/libsystem_c.dylib)
==611==by 0x10041136C: __vfprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1004396C8: __v2printf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x10040EF51: _vasprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1001C379E: opal_show_help_vstring (in 
/usr/local/openmpi/lib/libopen-pal.13.dylib)
==611==by 0x100128231: orte_show_help (in 
/usr/local/openmpi/lib/libopen-rte.12.dylib)
==611==by 0x10002069E: backend_fatal (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000203EC: ompi_mpi_errors_are_fatal_comm_handler (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000201BD: ompi_errhandler_invoke (in 
/usr/local/openmpi/lib/libmpi.12.dylib)

==611==by 0x10C69: main (second.c:43)
==611==
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x100413F71: __ultoa (in /usr/lib/system/libsystem_c.dylib)
==611==by 0x10041136C: __vfprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1004396C8: __v2printf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x10040EF51: _vasprintf (in 
/usr/lib/system/libsystem_c.dylib)
==611==by 0x1001C379E: opal_show_help_vstring (in 
/usr/local/openmpi/lib/libopen-pal.13.dylib)
==611==by 0x100128231: orte_show_help (in 
/usr/local/openmpi/lib/libopen-rte.12.dylib)
==611==by 0x10002069E: backend_fatal (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000203EC: ompi_mpi_errors_are_fatal_comm_handler (in 
/usr/local/openmpi/lib/libmpi.12.dylib)
==611==by 0x1000201BD: ompi_errhandler_invoke (in 
/usr/local/openmpi/lib/libmpi.12.dylib)

==611==by 0x10C69: main (second.c:43)
==611==
==611== Conditional jump or move depends on uninitialised value(s)
==611==at 0x1B359: strlen (vg_replace_strmem.c:470)
==611==by 0x10019922D: opal_dss_pack_string (in 
/usr/local/openmpi/lib/libopen-pal.13.dylib)
==611==by 0x100198CFD: opal_dss_pack (in 
/usr/local/openmpi/lib/libopen-pal.13.dylib)
==611==by 0x100128633: or

Re: [OMPI users] Sending string causes memory errors

2016-03-06 Thread Gilles Gouaillardet

Folks,

Attached is a simplified C-only version of the test program; it can be run 
with one or two tasks.

On RHEL7, valgrind complains about an invalid read when accessing the 
recv buffer after MPI_Recv.

This is pretty odd since:
- the buffer is initialized *before* MPI_Recv is invoked
- MPI_Recv *does* write to the buffer

I added some tracing, and Open MPI told valgrind to mark the buffer as 
non-accessible (i.e. VALGRIND_MAKE_MEM_NOACCESS) *after* it had marked it as 
defined (i.e. VALGRIND_MAKE_MEM_DEFINED).
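
As a standalone illustration (not Open MPI code, and assuming the valgrind 
development headers are installed), the effect of that ordering can be 
reproduced with the two client requests directly; the buffer name and size 
below are made up:

    /* memcheck_sketch.cpp -- compile with g++, run under "valgrind ./a.out" */
    #include <valgrind/memcheck.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        char buf[64];
        std::memset(buf, 'x', sizeof(buf));            /* the memory really is initialized */

        VALGRIND_MAKE_MEM_DEFINED(buf, sizeof(buf));   /* what the unpack path does first  */
        VALGRIND_MAKE_MEM_NOACCESS(buf, sizeof(buf));  /* ...and then marks it no-access   */

        /* under memcheck this read is now reported as an invalid read,
           even though buf contains valid data */
        std::printf("first byte: %c\n", buf[0]);
        return 0;
    }

Under memcheck the printf is flagged as an invalid read, which matches what 
the test program sees when it reads the recv buffer after MPI_Recv.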

The issue can be seen on both master and v1.10 when Open MPI is 
configured with

--enable-memchecker --with-valgrind

In mca_pml_ob1_recv_request_progress_match() from 
ompi/mca/pml/ob1/pml_ob1_recvreq.c,
what is the rationale for marking the buffer inaccessible after the 
unpack?


/*
 *  Unpacking finished, make the user buffer unaccessable again.
 */
MEMCHECKER(
    memchecker_call(&opal_memchecker_base_mem_noaccess,
                    recvreq->req_recv.req_base.req_addr,
                    recvreq->req_recv.req_base.req_count,
                    recvreq->req_recv.req_base.req_datatype);
);

Also, in MPI_Isend (ompi/mpi/c/isend.c), what is the rationale for marking 
the buffer as non-accessible before calling the PML isend? If this is an 
attempt to catch users modifying the buffer after MPI_Isend(), should 
valgrind be invoked *after* the PML is invoked?


If I #if out these two calls, the test program runs just fine.

FWIW:
- MPI_Sendrecv does not issue any warning
- MPI_Send/MPI_Recv issues one warning in the test code
- MPI_Isend/MPI_Recv issues three warnings, one in the test and two in Open MPI

I previously reported a very weird behaviour ... the root cause was that one 
subroutine in the test program was called "send", which conflicts with the 
libc send function ...



Cheers,

Gilles

On 3/3/2016 9:43 PM, Jeff Squyres (jsquyres) wrote:

All of those valgrind reports below are from within your code -- not from 
within Open MPI.

All Open MPI can do is pass the contents of your message properly; you can 
verify that it is being sent and received properly by checking the byte 
contents of your received array (e.g., assert that the string is there 
correctly and is \0-terminated).
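
(A self-contained sketch of that check, independent of Florian's actual code; 
the buffer size, message text and rank layout here are made up. Run with two 
tasks.)

    /* recv_check.cpp -- mpicxx recv_check.cpp && mpirun -n 2 ./a.out */
    #include <mpi.h>
    #include <cstdio>
    #include <cstring>
    #include <cassert>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int N = 64;
        char buf[N];

        if (rank == 0) {
            std::strcpy(buf, "hello from rank 0");
            MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            std::memset(buf, 0xff, N);       /* poison the buffer before the receive */
            MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* verify the terminator is present and the bytes arrived intact */
            assert(std::memchr(buf, '\0', N) != NULL);
            assert(std::strcmp(buf, "hello from rank 0") == 0);
            std::printf("received ok: %s\n", buf);
        }

        MPI_Finalize();
        return 0;
    }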

If cout or some other std:: thing is going beyond the end of your allocated 
buffer, that's a different problem -- perhaps you have a busted std:: 
implementation...?



On Mar 3, 2016, at 2:47 AM, Florian Lindner  wrote:

I am still getting errors, even with your script.

I will also try the modified build of Open MPI that Jeff suggested.

Best,
Florian

% mpicxx -std=c++11 -g -O0 -Wall -Wextra -fno-builtin-strlen mpi_gilles.cpp && 
mpirun -n 2 ./a.out
Stringlength = 64
123456789012345678901234567890123456789012345678901234567890123

% LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 2 valgrind 
./a.out
==5324== Memcheck, a memory error detector
==5324== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==5324== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==5324== Command: ./a.out
==5324==
==5325== Memcheck, a memory error detector
==5325== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==5325== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==5325== Command: ./a.out
==5325==
valgrind MPI wrappers  5324: Active for pid 5324
valgrind MPI wrappers  5324: Try MPIWRAP_DEBUG=help for possible options
valgrind MPI wrappers  5325: Active for pid 5325
valgrind MPI wrappers  5325: Try MPIWRAP_DEBUG=help for possible options
Stringlength = 64
==5325== Invalid read of size 1
==5325==at 0x4C2D992: strlen (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5325==by 0x56852D8: length (char_traits.h:267)
==5325==by 0x56852D8: std::basic_ostream >& std::operator<< 
 >(std::basic_ostream >&, char const*) 
(ostream:562)
==5325==by 0x408A45: receive() (mpi_gilles.cpp:22)
==5325==by 0x408B88: main (mpi_gilles.cpp:44)
==5325==  Address 0xffefff800 is on thread 1's stack
==5325==  in frame #2, created by receive() (mpi_gilles.cpp:8)
==5325==
==5325== Invalid read of size 1
==5325==at 0x4C2D9A4: strlen (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5325==by 0x56852D8: length (char_traits.h:267)
==5325==by 0x56852D8: std::basic_ostream >& std::operator<< 
 >(std::basic_ostream >&, char const*) 
(ostream:562)
==5325==by 0x408A45: receive() (mpi_gilles.cpp:22)
==5325==by 0x408B88: main (mpi_gilles.cpp:44)
==5325==  Address 0xffefff801 is on thread 1's stack
==5325==  in frame #2, created by receive() (mpi_gilles.cpp:8)
==5325==
==5325== Invalid read of size 1
==5325==at 0x60A0FF1: _IO_file_xsputn@@GLIBC_2.2.5 (in 
/usr/lib/libc-2.23.so)
==5325==by 0x6096D1A: fwrite (in /usr/lib/libc-2.23.so)
==5325==by 0x5684F75: sputn (streambuf:451)
==5325==by 0x5684F75: __ostream_write > 
(ostream_insert.h:50)
==5325==by 0x5684F75: std::basic_ostream >& std::__ostream_insert >(std::basic_ostream >&, char 

Re: [OMPI users] Sending string causes memory errors

2016-03-06 Thread George Bosilca
Gilles,

memchecker is intended to be used together with some suppression rules.

For the receive, the rationale for making the buffer inaccessible after the 
unpack was to ensure that nobody touches the memory until we return from the 
receive. The buffer was supposed to be made accessible again in the request 
completion function.

For the send, the rationale is now obsolete, as the MPI Forum removed the 
access restriction on the send buffer. We should instead mark it read-only 
to make sure it is never modified.

  George.


On Sun, Mar 6, 2016 at 9:28 PM, Gilles Gouaillardet 
wrote:

> Folks,
>
> Here is attached a simplified C only version of the test program.
> it can be ran with two or one task.
>
> on rhel7, valgrind complains about an invalid read when accessing the recv
> buffer after MPI_Recv.
> this is pretty odd since :
> - the buffer is initialized *before* MPI_Recv is invoked
> - MPI_Recv *do* write the buffer
>
> i added some trace, and OpenMPI told valgrind to mark the buffer as non
> accessible
> (e.g. VALGRIND_MAKE_MEM_NOACCESS) *after* it marked it as defined
> (e.g. VALGRIND_MAKE_MEM_DEFINED)
>
> the issue can be seen on both master and v1.10 when OpenMPI is configure'd
> with
> --enable-memchecker --with-valgrind
>
> in mca_pml_ob1_recv_request_progress_match() from
> ompi/mca/pml/ob1/pml_ob1_recvreq.c,
> what is the rationale for marking the buffer an unaccessable after the
> unpack ?
>
> /*
>  *  Unpacking finished, make the user buffer unaccessable again.
>  */
> MEMCHECKER(
> memchecker_call(&opal_memchecker_base_mem_noaccess,
> recvreq->req_recv.req_base.req_addr,
> recvreq->req_recv.req_base.req_count,
> recvreq->req_recv.req_base.req_datatype);
>);
>
> also, in MPI_Send (ompi/mpi/c/isend.c) what is the rationale for marking
> the buffer as non accessible before calling the PML isend ?
> if this is an attempt to track users modifying the buffer after
> MPI_Isend(), should valgrind be invoked *after* the PML is invoked ?
>
> if i #if out these two calls, then the test program runs just fine
>
> fwiw :
> - MPI_Sendrecv do not issue any warning
> - MPI_Send/MPI_Recv issues one warning in the test code
> - MPI_Isend/MPI_Recv issues three warning, one in the test, and two in
> OpenMPI
> i previously reported a very weird behaviour ... and the root cause is one
> subroutine in
> the test program was called "send", which conflicts with the send libc
> function ...
>
>
> Cheers,
>
> Gilles
>
>
> On 3/3/2016 9:43 PM, Jeff Squyres (jsquyres) wrote:
>
>> All of those valgrind reports below are from within your code -- not from
>> within Open MPI.
>>
>> All Open MPI can do is pass the contents of your message properly; you
>> can verify that it is being sent and received properly by checking the byte
>> contents of your received array (e.g., assert that the string is there
>> correctly and is \0-terminated).
>>
>> If cout or some other std:: thing is going beyond the end of your
>> allocated buffer, that's a different problem -- perhaps you have a busted
>> std:: implementation...?
>>
>>
>> On Mar 3, 2016, at 2:47 AM, Florian Lindner  wrote:
>>>
>>> I am still getting errors, even with your script.
>>>
>>> I will also try to modified build of openmpi that Jeff suggested.
>>>
>>> Best,
>>> Florian
>>>
>>> % mpicxx -std=c++11 -g -O0 -Wall -Wextra -fno-builtin-strlen
>>> mpi_gilles.cpp && mpirun -n 2 ./a.out
>>> Stringlength = 64
>>> 123456789012345678901234567890123456789012345678901234567890123
>>>
>>> % LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 2
>>> valgrind ./a.out
>>> ==5324== Memcheck, a memory error detector
>>> ==5324== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
>>> ==5324== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for
>>> copyright info
>>> ==5324== Command: ./a.out
>>> ==5324==
>>> ==5325== Memcheck, a memory error detector
>>> ==5325== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
>>> ==5325== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for
>>> copyright info
>>> ==5325== Command: ./a.out
>>> ==5325==
>>> valgrind MPI wrappers  5324: Active for pid 5324
>>> valgrind MPI wrappers  5324: Try MPIWRAP_DEBUG=help for possible options
>>> valgrind MPI wrappers  5325: Active for pid 5325
>>> valgrind MPI wrappers  5325: Try MPIWRAP_DEBUG=help for possible options
>>> Stringlength = 64
>>> ==5325== Invalid read of size 1
>>> ==5325==at 0x4C2D992: strlen (in
>>> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
>>> ==5325==by 0x56852D8: length (char_traits.h:267)
>>> ==5325==by 0x56852D8: std::basic_ostream>> std::char_traits >& std::operator<< 
>>> >(std::basic_ostream >&, char const*)
>>> (ostream:562)
>>> ==5325==by 0x408A45: receive() (mpi_gilles.cpp:22)
>>> ==5325==by 0x408B88: main (mpi_gilles.cpp:44)
>>> ==5325==  Address 0xffefff800 is on thread 1's stack
>>> ==5325==  in frame #2, created by receive() (mpi_gilles.cpp:8)
>>> ==5325==
>>> ==5325== Inva

Re: [OMPI users] Sending string causes memory errors

2016-03-06 Thread Gilles Gouaillardet

Thanks George,

Is valgrind able to mark memory as read-only?

I checked quickly but could not find such a feature.

Cheers,

Gilles

On 3/7/2016 11:40 AM, George Bosilca wrote:

Gilles,

memchecker is intended to be used together with some suppression rules.

For the receive the rationale of making the buffer unaccessible after 
the unpack was to ensure that nobody is touching the memory until we 
return from the receive. The buffer was supposed to be made available 
during the request completion function.


For the send, the rationale is now obsolete as the MPI Forum removed 
the access restriction on send buffer. We should instead mark it in 
read-only mode to make sure it is never modified.


  George.


On Sun, Mar 6, 2016 at 9:28 PM, Gilles Gouaillardet > wrote:


Folks,

Here is attached a simplified C only version of the test program.
it can be ran with two or one task.

on rhel7, valgrind complains about an invalid read when accessing
the recv buffer after MPI_Recv.
this is pretty odd since :
- the buffer is initialized *before* MPI_Recv is invoked
- MPI_Recv *do* write the buffer

i added some trace, and OpenMPI told valgrind to mark the buffer
as non accessible
(e.g. VALGRIND_MAKE_MEM_NOACCESS) *after* it marked it as defined
(e.g. VALGRIND_MAKE_MEM_DEFINED)

the issue can be seen on both master and v1.10 when OpenMPI is
configure'd with
--enable-memchecker --with-valgrind

in mca_pml_ob1_recv_request_progress_match() from
ompi/mca/pml/ob1/pml_ob1_recvreq.c,
what is the rationale for marking the buffer an unaccessable after
the unpack ?

/*
 *  Unpacking finished, make the user buffer unaccessable again.
 */
MEMCHECKER(
memchecker_call(&opal_memchecker_base_mem_noaccess,
recvreq->req_recv.req_base.req_addr,
recvreq->req_recv.req_base.req_count,
recvreq->req_recv.req_base.req_datatype);
   );

also, in MPI_Send (ompi/mpi/c/isend.c) what is the rationale for
marking the buffer as non accessible before calling the PML isend ?
if this is an attempt to track users modifying the buffer after
MPI_Isend(), should valgrind be invoked *after* the PML is invoked ?

if i #if out these two calls, then the test program runs just fine

fwiw :
- MPI_Sendrecv do not issue any warning
- MPI_Send/MPI_Recv issues one warning in the test code
- MPI_Isend/MPI_Recv issues three warning, one in the test, and
two in OpenMPI
i previously reported a very weird behaviour ... and the root
cause is one subroutine in
the test program was called "send", which conflicts with the send
libc function ...


Cheers,

Gilles


On 3/3/2016 9:43 PM, Jeff Squyres (jsquyres) wrote:

All of those valgrind reports below are from within your code
-- not from within Open MPI.

All Open MPI can do is pass the contents of your message
properly; you can verify that it is being sent and received
properly by checking the byte contents of your received array
(e.g., assert that the string is there correctly and is
\0-terminated).

If cout or some other std:: thing is going beyond the end of
your allocated buffer, that's a different problem -- perhaps
you have a busted std:: implementation...?


On Mar 3, 2016, at 2:47 AM, Florian Lindner
mailto:mailingli...@xgm.de>> wrote:

I am still getting errors, even with your script.

I will also try to modified build of openmpi that Jeff
suggested.

Best,
Florian

% mpicxx -std=c++11 -g -O0 -Wall -Wextra
-fno-builtin-strlen mpi_gilles.cpp && mpirun -n 2 ./a.out
Stringlength = 64
123456789012345678901234567890123456789012345678901234567890123

% LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so
mpirun -n 2 valgrind ./a.out
==5324== Memcheck, a memory error detector
==5324== Copyright (C) 2002-2015, and GNU GPL'd, by Julian
Seward et al.
==5324== Using Valgrind-3.12.0.SVN and LibVEX; rerun with
-h for copyright info
==5324== Command: ./a.out
==5324==
==5325== Memcheck, a memory error detector
==5325== Copyright (C) 2002-2015, and GNU GPL'd, by Julian
Seward et al.
==5325== Using Valgrind-3.12.0.SVN and LibVEX; rerun with
-h for copyright info
==5325== Command: ./a.out
==5325==
valgrind MPI wrappers  5324: Active for pid 5324
valgrind MPI wrappers  5324: Try MPIWRAP_DEBUG=help for
possible options
valgrind MPI wrappers  5325: Active for pid 5325
valgrind MPI wrappers  5325: Try MPIWRAP_DEBUG=help for
possible

[OMPI users] Troubles with linking C++ standard library with openmpi 1.10

2016-03-06 Thread Jordan Willis

Hi everyone,

I have tried everything to compile Open MPI. It used to compile on my system, 
and I'm not sure what has changed in my C++ libraries to cause this error. I get 
the following when trying to compile contrib/vt/vt/extlib/otf/tools/otfprofile:

make[8]: Entering directory 
`/dnas/apps/openmpi/openmpi-1.10.2/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile'
  CXXLD    otfprofile
otfprofile-collect_data.o: In function `std::string::_M_check(unsigned long, 
char const*) const':
/usr/include/c++/4.9/bits/basic_string.h:324: undefined reference to 
`std::__throw_out_of_range_fmt(char const*, ...)'
otfprofile-create_latex.o: In function `std::string::_M_check(unsigned long, 
char const*) const':
/usr/include/c++/4.9/bits/basic_string.h:324: undefined reference to 
`std::__throw_out_of_range_fmt(char const*, ...)'
/usr/include/c++/4.9/bits/basic_string.h:324: undefined reference to 
`std::__throw_out_of_range_fmt(char const*, ...)'
otfprofile-create_filter.o: In function `std::string::_M_check(unsigned long, 
char const*) const':
/usr/include/c++/4.9/bits/basic_string.h:324: undefined reference to 
`std::__throw_out_of_range_fmt(char const*, ...)'
otfprofile-create_filter.o: In function `std::vector*, 
std::allocator*> >::_M_range_check(unsigned long) const':
/usr/include/c++/4.9/bits/stl_vector.h:803: undefined reference to 
`std::__throw_out_of_range_fmt(char const*, ...)'
otfprofile-create_filter.o:/usr/include/c++/4.9/bits/stl_vector.h:803: more 
undefined references to `std::__throw_out_of_range_fmt(char const*, ...)' follow
collect2: error: ld returned 1 exit status
make[8]: *** [otfprofile] Error 1

From what I have found online, it may be due to mixing GCC 4.8 and GCC 4.9 
components in the toolchain, so I have tried switching to 4.8 just to check. 
They also say you may have to update your toolchain to force GCC 4.9, although 
I'm not sure how to do this. I have also tried compiling Open MPI 1.8 (the last 
stable) and get the same error. I have also reinstalled all my packages using 
aptitude.
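
(A minimal toolchain check, independent of Open MPI and assuming the GCC 4.9 
headers: std::vector<T>::at() references std::__throw_out_of_range_fmt, so if 
this small program fails to link with the same undefined reference, the problem 
is in the compiler/libstdc++ mix rather than in the Open MPI build. The file 
name is arbitrary.)

    /* linkcheck.cpp -- g++ -O0 -std=c++11 linkcheck.cpp && ./a.out */
    #include <vector>
    #include <cstdio>

    int main()
    {
        std::vector<int> v(1, 42);
        /* at() performs the range check that, with the GCC 4.9 headers,
           calls std::__throw_out_of_range_fmt in libstdc++ */
        std::printf("%d\n", v.at(0));
        return 0;
    }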

The reason I'm doing a custom compile is that I'm trying to build against the 
PMI libraries that come with SLURM, although I get the same error with a basic 
configuration.

I'm on Ubuntu Server 14.04. I think I have exhausted my troubleshooting ideas, 
so I'm reaching out to you. My configuration log can be sent on request, but 
the attachment causes my message to get bounced from the list.

Thanks so much,
Jordan