[OMPI devel] Still having issues w/ opal_path_nfs and EPERM

2014-02-08 Thread Paul Hargrove
I tested the 1.7 tarball tonight.
Jeff had indicated (
http://www.open-mpi.org/community/lists/devel/2014/01/13785.php) that the
problem I had reported w/ opal_path_nfs() and EPERM had been fixed in the
trunk.
Trac ticket #4125 indicated the fix was CMRed to v1.7.

However, I still see the problem:
 Failure : Mismatch: input "/users/course13/.gvfs", expected:0 got:1

 Failure : Mismatch: input "/users/steineju/.gvfs", expected:0 got:1

SUPPORT: OMPI Test failed: opal_path_nfs() (2 of 20 failed)
FAIL: opal_path_nfs
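
The failing inputs are gvfs FUSE mounts owned by other users. On Linux,
statfs() on such a mount fails with EPERM for anyone but the owner, so
opal_path_nfs() gets no filesystem-type information for the path. A
minimal standalone sketch of that condition (the path is illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/vfs.h>   /* statfs(2) on Linux */

    int main(void)
    {
        struct statfs buf;
        /* A gvfs FUSE mount belonging to another user (illustrative). */
        const char *path = "/users/course13/.gvfs";

        if (statfs(path, &buf) < 0) {
            /* For a non-owner this fails with errno == EPERM; this is
               the case opal_path_nfs() has to handle gracefully. */
            printf("statfs(%s): %s\n", path, strerror(errno));
            return 1;
        }
        printf("f_type = 0x%lx\n", (unsigned long)buf.f_type);
        return 0;
    }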


I don't currently know if the problem was ever fixed on the trunk, but
should know by morning.

-Paul

-- 
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
Computer and Data Sciences Department Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900


Re: [OMPI devel] Still having issues w/ opal_path_nfs and EPERM

2014-02-08 Thread Paul Hargrove
A test of Friday night's trunk tarball is failing in the same manner.
So, the CMR isn't the issue - the problem was never (fully?) fixed in trunk.

-Paul


On Fri, Feb 7, 2014 at 9:06 PM, Paul Hargrove  wrote:

> [snip]



-- 
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
Computer and Data Sciences Department Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900


Re: [OMPI devel] Still having issues w/ opal_path_nfs and EPERM

2014-02-08 Thread Ralph Castain
Sounds like it - I'll take a peek and see if I can spot it; otherwise this 
will have to wait for Jeff next week.

On Feb 8, 2014, at 9:56 AM, Paul Hargrove  wrote:

> A test of Friday night's trunk tarball is failing in the same manner.
> So, the CMR isn't the issue - the problem was never (fully?) fixed in trunk.
> 
> -Paul
> [snip]



Re: [OMPI devel] Update on 1.7.5

2014-02-08 Thread Ralph Castain
The OSHMEM update is now in the 1.7.5 tarball - I would appreciate it if people 
could exercise the tarball to ensure nothing broke. Note that the shmem examples 
are executing, but shmemrun is hanging instead of exiting. Mellanox is looking 
into the problem.

For now, I just want to verify that MPI operations remain stable.

Thanks
Ralph

On Feb 7, 2014, at 2:09 PM, Paul Hargrove  wrote:

> Ralph,
> 
> I'll try to test tonight's v1.7 tarball for:
> + ia64 atomics (#4174)
> + bad getpwuid (#4164)
> + opal_path_nfs/EPERM (#4125)
> + torque smp (#4227)
> 
> All but torque are fully automated tests and I need only check my email for 
> the results.
> The torque one will require manual job submission.
> 
> -Paul
> 
> 
> On Fri, Feb 7, 2014 at 1:55 PM, Ralph Castain  wrote:
> Hi folks
> 
> As you may have noticed, I've been working my way thru the CMR backlog on 
> 1.7.5. A large percentage of them were minor fixes (valgrind warning 
> suppressions, error message typos, etc.), so those went in the first round. 
> Today's round contains more "meaty" things, but I still consider them fairly 
> low risk, as the impacted code is well contained.
> 
> I'm going to let this run thru tonight's MTT - if things look okay tomorrow, 
> I will roll the OSHMEM cmr into 1.7.5 over the weekend. This is quite likely 
> to destabilize the branch, so I expect to see breakage in the resulting MTT 
> reports. We'll deal with it as we go.
> 
> Beyond that, there are still about a dozen CMRs in the system awaiting 
> review. Jeff has the majority, followed by Nathan. If folks could please 
> review them early next week, I would appreciate it.
> 
> Thanks
> Ralph
> 
> 
> 
> 
> -- 
> Paul H. Hargrove  phhargr...@lbl.gov
> Future Technologies Group
> Computer and Data Sciences Department Tel: +1-510-495-2352
> Lawrence Berkeley National Laboratory Fax: +1-510-486-6900



[OMPI devel] Trunk is broken

2014-02-08 Thread Ralph Castain
Sorry to say, some recent commit has broken the trunk:

[rhc@bend002 examples]$ mpirun -n 3 ./hello_c
[bend001:22289] *** Process received signal ***
[bend001:22289] Signal: Segmentation fault (11)
[bend001:22289] Signal code: Invalid permissions (2)
[bend001:22289] Failing at address: 0x7f354daaa000
[bend001:22290] *** Process received signal ***
[bend001:22290] Signal: Segmentation fault (11)
[bend001:22290] Signal code: Invalid permissions (2)
[bend001:22290] Failing at address: 0x7fa819d81000
[bend001:22289] [ 0] /lib64/libpthread.so.0[0x38e320f710]
[bend001:22289] [ 1] /lib64/libc.so.6[0x38e26845ad]
[bend001:22289] [ 2] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_btl_vader.so(+0x3b0b)[0x7f3549924b0b]
[bend001:22289] [ 3] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_btl_base_select+0x1cc)[0x7f354db62a21]
[bend001:22289] [ 4] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x27)[0x7f354a1cfc2c]
[bend001:22289] [ 5] [bend001:22290] [ 0] /lib64/libpthread.so.0[0x38e320f710]
[bend001:22290] [ 1] /lib64/libc.so.6[0x38e26845ad]
[bend001:22290] [ 2] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_btl_vader.so(+0x3b0b)[0x7fa815bfbb0b]
[bend001:22290] [ 3] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_bml_base_init+0xe2)[0x7f354db6189e]
[bend001:22289] [ 6] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_pml_ob1.so(+0x7cc3)[0x7f35492c3cc3]
[bend001:22289] [ 7] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_pml_base_select+0x29c)[0x7f354db88261]
[bend001:22289] [ 8] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(ompi_mpi_init+0x685)[0x7f354dafbc7b]
[bend001:22289] [ 9] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_btl_base_select+0x1cc)[0x7fa819e39a21]
[bend001:22290] [ 4] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x27)[0x7fa8164a6c2c]
[bend001:22290] [ 5] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_bml_base_init+0xe2)[0x7fa819e3889e]
[bend001:22290] [ 6] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_pml_ob1.so(+0x7cc3)[0x7fa81559acc3]
[bend001:22290] [ 7] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_pml_base_select+0x29c)[0x7fa819e5f261]
[bend001:22290] [ 8] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(MPI_Init+0x185)[0x7f354db2f156]
[bend001:22289] [10] ./hello_c[0x400806]
[bend001:22289] [11] /lib64/libc.so.6(__libc_start_main+0xfd)[0x38e261ed1d]
[bend001:22289] [12] ./hello_c[0x400719]
[bend001:22289] *** End of error message ***
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(ompi_mpi_init+0x685)[0x7fa819dd2c7b]
[bend001:22290] [ 9] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(MPI_Init+0x185)[0x7fa819e06156]
[bend001:22290] [10] ./hello_c[0x400806]
[bend001:22290] [11] /lib64/libc.so.6(__libc_start_main+0xfd)[0x38e261ed1d]
[bend001:22290] [12] ./hello_c[0x400719]
[bend001:22290] *** End of error message ***
[bend001:22291] *** Process received signal ***
[bend001:22291] Signal: Segmentation fault (11)
[bend001:22291] Signal code: Invalid permissions (2)
[bend001:22291] Failing at address: 0x7f498fc96000
[bend001:22291] [ 0] /lib64/libpthread.so.0[0x38e320f710]
[bend001:22291] [ 1] /lib64/libc.so.6[0x38e26845ad]
[bend001:22291] [ 2] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_btl_vader.so(+0x3b0b)[0x7f498795db0b]
[bend001:22291] [ 3] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_btl_base_select+0x1cc)[0x7f498fd4ea21]
[bend001:22291] [ 4] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x27)[0x7f498c3bbc2c]
[bend001:22291] [ 5] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_bml_base_init+0xe2)[0x7f498fd4d89e]
[bend001:22291] [ 6] 
/home/common/openmpi/build/svn-trunk/lib/openmpi/mca_pml_ob1.so(+0x7cc3)[0x7f49872fccc3]
[bend001:22291] [ 7] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(mca_pml_base_select+0x29c)[0x7f498fd74261]
[bend001:22291] [ 8] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(ompi_mpi_init+0x685)[0x7f498fce7c7b]
[bend001:22291] [ 9] 
/home/common/openmpi/build/svn-trunk/lib/libmpi.so.0(MPI_Init+0x185)[0x7f498fd1b156]
[bend001:22291] [10] ./hello_c[0x400806]
[bend001:22291] [11] /lib64/libc.so.6(__libc_start_main+0xfd)[0x38e261ed1d]
[bend001:22291] [12] ./hello_c[0x400719]
[bend001:22291] *** End of error message ***
--
mpirun noticed that process rank 0 with PID 22289 on node bend001 exited on 
signal 11 (Segmentation fault).
--
3 total processes killed (some possibly by mpirun during cleanup)
[rhc@bend002 examples]$ 

Nathan: can you please take a look?

Ralph



Re: [OMPI devel] new CRS component added (criu)

2014-02-08 Thread Adrian Reber
On Fri, Feb 07, 2014 at 10:08:48PM +, Jeff Squyres (jsquyres) wrote:
> Sweet -- +1 for CRIU support!
> 
> FWIW, I see you modeled your configure.m4 off the blcr configure.m4, but I'd 
> actually go with making it a bit simpler.  For example, I typically structure 
> my configure.m4's like this (typed in mail client -- forgive mistakes...):
> 
> -
>AS_IF([...some test], [crs_criu_happy=1], [crs_criu_happy=0])
># Only bother doing the next test if the previous one passed
>AS_IF([test $crs_criu_happy -eq 1 && ...next test], 
>  [crs_criu_happy=1], [crs_criu_happy=0])
># Only bother doing the next test if the previous one passed
>AS_IF([test $crs_criu_happy -eq 1 && ...next test], 
>  [crs_criu_happy=1], [crs_criu_happy=0])
> 
>...etc...
> 
># Put a single execution of $2 and $3 at the end, depending on how the 
># above tests go.  If a human asked for criu (e.g., --with-criu) and
># we can't find criu support, that's a fatal error.
>AS_IF([test $crs_criu_happy -eq 1],
>  [$2],
>  [AS_IF([test "x$with_criu" != "x" && test "x$with_criu" != "xno"],
> [AC_MSG_WARN([You asked for CRIU support, but I can't find 
> it.])
>  AC_MSG_ERROR([Cannot continue])],
> [$1])
>   ])
> -
> 
> I note you have a stray $3 at the end of your configure.m4, too (it might 
> be supposed to be $2?).

I think I do not really understand configure.m4 and was happy to just
copy it from blcr; in particular, I don't know what $2 and $3 mean and
how they are supposed to be used. I will try to simplify my
configure.m4. Is there an example I can have a look at?

> Finally, I note you're looking for libcriu.  Last time I checked with the 
> CRIU guys -- which was quite a while ago -- that didn't exist (but I put in 
> my $0.02 that OMPI would like to see such a userspace library).  I take it 
> that libcriu now exists?

Yes, criu introduced libcriu with the 1.1 release. It is used to make
RPC calls to the criu process running as a service. I submitted a few
patches to criu to actually install the headers and libraries, and
included them in the Fedora package:

https://admin.fedoraproject.org/updates/criu-1.1-4.fc20

This is what I am currently using to build against criu.
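
For reference, a rough sketch of the RPC flow through libcriu, as I
understand the 1.1 API as packaged in Fedora (the exact header path and
available setters may differ between criu versions):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <criu/criu.h>   /* installed by the criu-devel package */

    int main(void)
    {
        /* Allocate the RPC option set. */
        if (criu_init_opts() < 0)
            return 1;

        /* Talk to the criu daemon over its service socket. */
        criu_set_service_address("/var/run/criu_service.socket");

        /* Directory where dump images would land. */
        int fd = open("/tmp/criu-images", O_DIRECTORY);
        if (fd < 0)
            return 1;
        criu_set_images_dir_fd(fd);
        criu_set_pid(getpid());         /* process tree to checkpoint */
        criu_set_log_file("dump.log");

        /* criu_check() asks the service whether criu can run here;
           criu_dump()/criu_restore() would do the real work. */
        printf("criu_check() = %d\n", criu_check());
        return 0;
    }

(Link with -lcriu.)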

Adrian


Re: [OMPI devel] Trunk is broken

2014-02-08 Thread Ralph Castain
Temporary workaround: disable the vader BTL with "-mca btl ^vader", e.g. 
"mpirun -mca btl ^vader -n 3 ./hello_c".

On Feb 8, 2014, at 10:11 AM, Ralph Castain  wrote:

> Sorry to say, some recent commit has broken the trunk:
> [snip]

[OMPI devel] v1.7.4 REGRESSION: build failure w/ old OFED

2014-02-08 Thread Paul Hargrove
With Ralph's announcement that oshmem had been merged to v1.7, I started
tests on lots of systems.
When I found the problem described below, I tried the 1.7.4 release and
found that the problem exists there too!

One system I tried is a fairly ancient x86-64/linux system w/ QLogic HCAs,
and thus builds and tests mtl:psm.
As a guest on this system I had NOT been testing it with all the 1.7.4rc's,
but had tested at least once w/o problems (
http://www.open-mpi.org/community/lists/devel/2014/01/13661.php).

However, with both the 1.7.4 release and the current tarball
(1.7.5a1r30634) I get a link error on an ibv symbol, probably due to the
age of the OFED on this system:

  CCLD otfmerge-mpi
/home/phhargrove/OMPI/openmpi-1.7-latest-linux-x86_64-psm/BLD/ompi/contrib/vt/vt/../../../.libs/libmpi.so:
undefined reference to `ibv_event_type_str'
collect2: ld returned 1 exit status

The problem seems to originate in the usnic btl:
$ grep -rl ibv_event_type_str .
./ompi/mca/btl/usnic/btl_usnic_module.c
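
The usual cure is a configure-time probe plus a fallback at the call
site. A sketch of what that could look like, assuming a
HAVE_IBV_EVENT_TYPE_STR define produced by something like
AC_CHECK_FUNCS([ibv_event_type_str]) (the guard name here is an
assumption, not necessarily what the usnic btl actually uses):

    #include <stdio.h>
    #include <infiniband/verbs.h>

    static const char *event_str(struct ibv_async_event *event,
                                 char *buf, size_t len)
    {
    #if defined(HAVE_IBV_EVENT_TYPE_STR)
        /* Newer libibverbs provides the pretty-printer. */
        return ibv_event_type_str(event->event_type);
    #else
        /* Old OFED lacks ibv_event_type_str(); fall back to the raw
           enum value. */
        snprintf(buf, len, "async event %d", (int)event->event_type);
        return buf;
    #endif
    }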

-Paul


-- 
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
Computer and Data Sciences Department Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900


[OMPI devel] v1.7 and trunk: hello_oshmemfh link failure with xlc/ppc32/linux

2014-02-08 Thread Paul Hargrove
Testing the current v1.7 tarball (1.7.5a1r30634), I get a failure when
building the oshmem examples.
I've confirmed that the same problem exists on trunk (so not a problem with
the CMR).

[...]
mpifort -g ring_usempi.f90 -o ring_usempi
** ring   === End of Compilation 1 ===
1501-510  Compilation successful for file ring_usempi.f90.
make[2]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[1]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[1]: Entering directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[2]: Entering directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
shmemcc -g hello_oshmem_c.c -o hello_oshmem
make[2]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[2]: Entering directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
shmemcc -g ring_oshmem_c.c -o ring_oshmem
make[2]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[2]: Entering directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
shmemfort -g hello_oshmemfh.f90 -o hello_oshmemfh
** hello_oshmem   === End of Compilation 1 ===
1501-510  Compilation successful for file hello_oshmemfh.f90.
make[2]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[2]: Entering directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
shmemfort -g ring_oshmemfh.f90 -o ring_oshmemfh
** ring_oshmem   === End of Compilation 1 ===
1501-510  Compilation successful for file ring_oshmemfh.f90.
ring_oshmemfh.o: In function `ring_oshmem':
/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples/ring_oshmemfh.f90:33:
undefined reference to `shmem_put8'
/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples/ring_oshmemfh.f90:46:
undefined reference to `shmem_int8_wait_until'
/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples/ring_oshmemfh.f90:55:
undefined reference to `shmem_put8'
make[2]: *** [ring_oshmemfh] Error 1
make[2]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make[1]: *** [oshmem] Error 2
make[1]: Leaving directory
`/gpfs-biou/phh1/OMPI/openmpi-1.7-latest-linux-ppc32-xlc-11.1/BLD/examples'
make: *** [all] Error 2

The link of ring_oshmemfh is failing with undefined references to
shmem_put8 and shmem_int8_wait_until.
The relevant portion of "make" output in the example dir is shown above.
Note that ring_usempi linked fine, indicating that the F90 MPI bindings
are fine.
Additionally, ring_oshmem linked fine, indicating that the C-language
OSHMEM bindings are fine, too.
The nm output below shows liboshmem exporting only underscored variants
(shmem_put8_ and shmem_put8__) while the xlf-built object references
plain shmem_put8; xlf does not append a trailing underscore by default
(-qextname would add one), which would explain the mismatch.

In case it is relevant: this build is configured with
  --enable-static --enable-shared --enable-mpi-fortran=usempi --disable-vt

The "--enable-static --enable-shared" flags are just to make for a more
thorough test.
However, retesting without --enable-static did not resolve the problem.

The --enable-mpi-fortran flag is necessary because the F08 bindings don't
build with this compiler (
http://www.open-mpi.org/community/lists/devel/2014/01/13802.php).

The --disable-vt flag is necessary because the compiler crashes building VT.

Some misc bits of info:

$ shmemfort -g ring_oshmemfh.f90 -o ring_oshmemfh --show
xlf -g ring_oshmemfh.f90 -o ring_oshmemfh
-I/home/phh1/SCRATCH/OMPI/openmpi-trunk-linux-ppc32-xlc-11.1/INST/include
-I/home/phh1/SCRATCH/OMPI/openmpi-trunk-linux-ppc32-xlc-11.1/INST/lib
-Wl,-rpath
-Wl,/home/phh1/SCRATCH/OMPI/openmpi-trunk-linux-ppc32-xlc-11.1/INST/lib
-Wl,--enable-new-dtags
-L/home/phh1/SCRATCH/OMPI/openmpi-trunk-linux-ppc32-xlc-11.1/INST/lib
-loshmem -lmpi_mpifh -lmpi -lm -lnuma -ldl -lrt -lnsl -lutil -lpthread

$ nm INST/lib/liboshmem.so | grep shmem_put8
0009eab0 t .plt_pic32.shmem_put8_f
00063f20 T shmem_put8_
00063fa0 T shmem_put8__
00063e00 T shmem_put8_f
$ nm INST/lib/liboshmem.a | grep shmem_put8
shmem_put8_f.o:
0120 T shmem_put8_
01a0 T shmem_put8__
 T shmem_put8_f

-Paul

-- 
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
Computer and Data Sciences Department Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900