[hwloc-devel] Create success (hwloc git 1.11.0-34-g0f3a657)

2015-08-31 Thread MPI Team
Creating nightly hwloc snapshot git tarball was a success.

Snapshot:   hwloc 1.11.0-34-g0f3a657
Start time: Mon Aug 31 21:03:06 EDT 2015
End time:   Mon Aug 31 21:04:36 EDT 2015

Your friendly daemon,
Cyrador


Re: [hwloc-devel] Add support for PCIe drives

2015-08-31 Thread Brice Goglin
I applied a slightly different patch to v1.11 (nothing is needed in
master since the discovery logic is different and more generic).
thanks
Brice



On 28/08/2015 21:53, Tannenbaum, Barry M wrote:
>
> PCIe drives (like the Intel DC P3500/P3600/P3700) do not have a
> controller – they appear directly on the PCIe bus.
>
>
> support-pcie-disk.patch
>
>
> diff --git a/src/topology-linux.c b/src/topology-linux.c
> --- a/src/topology-linux.c
> +++ b/src/topology-linux.c
> @@ -4656,6 +4656,11 @@
>/* restore parent path */
>pathlen -= devicedlen;
>path[pathlen] = '\0';
> +} else if (strcmp(devicedirent->d_name, "block") == 0) {
> +  /* found a block device - lookup block class for real */
> +  res += hwloc_linux_class_readdir(backend, pcidev, path,
> +   HWLOC_OBJ_OSDEV_BLOCK, "block",
> +   hwloc_linux_block_class_fillinfos);
>  }
>}
>closedir(devicedir);
>
>
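
For context, the check the patch adds boils down to looking for a "block"
subdirectory directly under the PCI device's sysfs directory. A standalone
sketch of that idea (the PCI address below is an arbitrary example, not
taken from the patch):

/* Sketch: detect a block device exposed directly under a PCI device in
 * sysfs, i.e. the situation the patch above handles inside hwloc.
 * The PCI address is an arbitrary example. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *pcidir = "/sys/bus/pci/devices/0000:02:00.0";
    DIR *dir = opendir(pcidir);
    struct dirent *ent;

    if (!dir) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(dir)) != NULL) {
        /* A PCIe drive with no separate controller exposes its block
         * device(s) in a "block" subdirectory right under the PCI device. */
        if (strcmp(ent->d_name, "block") == 0)
            printf("%s exposes a block device directly\n", pcidir);
    }
    closedir(dir);
    return 0;
}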



Re: [OMPI devel] Dual rail IB card problem

2015-08-31 Thread Brice Goglin
The locality of mlx4_0 as reported by lstopo is "near the entire
machine" (while mlx4_1 is reported near NUMA node #3). I would vote for
buggy PCI-NUMA affinity being reported by the BIOS. But I am not very
familiar with 4x E5-4600 machines, so please make sure this PCI slot is
really attached to a single NUMA node (some older 4-socket machines have
an I/O hub attached to 2 sockets).

Given the lspci output, mlx4_0 is likely on the PCI bus attached to NUMA
node #0, so you should be able to work around the issue by setting
HWLOC_PCI__00_LOCALCPUS=0xfff in the environment.

There are 8 host bridges in this machine, 2 attached to each processor,
so there are likely similar issues for the others.
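
A minimal sketch, against the hwloc 1.x API, of how to check where the
topology attaches an OS device such as mlx4_0 (illustrative only, not part
of any patch in this thread):

/* Minimal hwloc 1.x sketch: print the non-I/O ancestor (and its cpuset)
 * of the "mlx4_0" OS device.  If the BIOS misreports PCI-NUMA affinity,
 * that ancestor ends up being the whole machine instead of a NUMA node. */
#include <hwloc.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_obj_t osdev = NULL, ancestor;
    char cpuset[256];

    hwloc_topology_init(&topology);
    hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
    hwloc_topology_load(topology);

    while ((osdev = hwloc_get_next_osdev(topology, osdev)) != NULL) {
        if (osdev->name && strcmp(osdev->name, "mlx4_0") == 0) {
            ancestor = hwloc_get_non_io_ancestor_obj(topology, osdev);
            if (!ancestor)
                continue;
            hwloc_bitmap_snprintf(cpuset, sizeof(cpuset), ancestor->cpuset);
            printf("mlx4_0 is attached under %s (cpuset %s)\n",
                   hwloc_obj_type_string(ancestor->type), cpuset);
        }
    }
    hwloc_topology_destroy(topology);
    return 0;
}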

Brice



On 31/08/2015 22:06, Rolf vandeVaart wrote:
>
> There was a problem reported on the users list about Open MPI always
> picking one Mellanox card when there were two in the machine.
>
>
> http://www.open-mpi.org/community/lists/users/2015/08/27507.php
>
>
> We dug a little deeper and I think this has to do with how hwloc is
> figuring out where one of the cards is located.  This verbose output
> (with some extra printfs) shows that it cannot figure out which NUMA
> node mlx4_0 is closest to. It can only determine it is located on
> HWLOC_OBJ_SYSTEM and therefore Open MPI assumes a distance of 0.0.
> Because of this (smaller is better), the Open MPI library always picks
> mlx4_0 for all sockets.  I am trying to figure out if this is a hwloc
> or Open MPI bug. Any thoughts on this?
>
> [...]

[OMPI devel] Dual rail IB card problem

2015-08-31 Thread Rolf vandeVaart
There was a problem reported on the users list about Open MPI always picking
one Mellanox card when there were two in the machine.


http://www.open-mpi.org/community/lists/users/2015/08/27507.php


We dug a little deeper and I think this has to do with how hwloc is figuring
out where one of the cards is located.  This verbose output (with some extra
printfs) shows that it cannot figure out which NUMA node mlx4_0 is closest to.
It can only determine it is located on HWLOC_OBJ_SYSTEM and therefore Open MPI
assumes a distance of 0.0.  Because of this (smaller is better), the Open MPI
library always picks mlx4_0 for all sockets.  I am trying to figure out if this
is a hwloc or Open MPI bug. Any thoughts on this?


[node1.local:05821] Checking distance for device=mlx4_1
[node1.local:05821] hwloc_distances->nbobjs=4
[node1.local:05821] hwloc_distances->latency[0]=1.00
[node1.local:05821] hwloc_distances->latency[1]=2.10
[node1.local:05821] hwloc_distances->latency[2]=2.10
[node1.local:05821] hwloc_distances->latency[3]=2.10
[node1.local:05821] hwloc_distances->latency[4]=2.10
[node1.local:05821] hwloc_distances->latency[5]=1.00
[node1.local:05821] hwloc_distances->latency[6]=2.10
[node1.local:05821] hwloc_distances->latency[7]=2.10
[node1.local:05821] ibv_obj->type = 4
[node1.local:05821] ibv_obj->logical_index=1
[node1.local:05821] my_obj->logical_index=0
[node1.local:05821] Proc is bound: distance=2.10

[node1.local:05821] Checking distance for device=mlx4_0
[node1.local:05821] hwloc_distances->nbobjs=4
[node1.local:05821] hwloc_distances->latency[0]=1.00
[node1.local:05821] hwloc_distances->latency[1]=2.10
[node1.local:05821] hwloc_distances->latency[2]=2.10
[node1.local:05821] hwloc_distances->latency[3]=2.10
[node1.local:05821] hwloc_distances->latency[4]=2.10
[node1.local:05821] hwloc_distances->latency[5]=1.00
[node1.local:05821] hwloc_distances->latency[6]=2.10
[node1.local:05821] hwloc_distances->latency[7]=2.10
[node1.local:05821] ibv_obj->type = 1 <-HWLOC_OBJ_MACHINE
[node1.local:05821] ibv_obj->type set to NULL
[node1.local:05821] Proc is bound: distance=0.00

[node1.local:05821] [rank=0] openib: skipping device mlx4_1; it is too far away
[node1.local:05821] [rank=0] openib: using port mlx4_0:1
[node1.local:05821] [rank=0] openib: using port mlx4_0:2
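
(For reference, the hwloc_distances output above comes from the hwloc 1.x
distance API; a minimal sketch of how that matrix is retrieved and indexed,
not the actual Open MPI code:)

/* Minimal hwloc 1.x sketch of the NUMA distance lookup discussed above:
 * fetch the whole NUMA-node latency matrix and index it by the logical
 * indexes of two nodes.  When a device's locality resolves to the whole
 * machine rather than a NUMA node, there is no valid index to look up. */
#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topology;
    const struct hwloc_distances_s *dist;
    unsigned i, j;

    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    dist = hwloc_get_whole_distance_matrix_by_type(topology, HWLOC_OBJ_NODE);
    if (!dist) {
        printf("no NUMA distance matrix reported\n");
    } else {
        for (i = 0; i < dist->nbobjs; i++)
            for (j = 0; j < dist->nbobjs; j++)
                /* latency is a flattened nbobjs x nbobjs matrix */
                printf("node %u -> node %u: %.2f\n",
                       i, j, dist->latency[i * dist->nbobjs + j]);
    }
    hwloc_topology_destroy(topology);
    return 0;
}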


Machine (1024GB)
  NUMANode L#0 (P#0 256GB) + Socket L#0 + L3 L#0 (30MB)
L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
  NUMANode L#1 (P#1 256GB)
Socket L#1 + L3 L#1 (30MB)
  L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#12)
  L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#13)
  L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#14)
  L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
  L2 L#16 (256KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 (P#16)
  L2 L#17 (256KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 (P#17)
  L2 L#18 (256KB) + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 (P#18)
  L2 L#19 (256KB) + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 (P#19)
  L2 L#20 (256KB) + L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20 + PU L#20 (P#20)
  L2 L#21 (256KB) + L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21 + PU L#21 (P#21)
  L2 L#22 (256KB) + L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22 + PU L#22 (P#22)
  L2 L#23 (256KB) + L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23 + PU L#23 (P#23)
HostBridge L#5
  PCIBridge
PCI 15b3:1003
  Net L#7 "ib2"
  Net L#8 "ib3"
  OpenFabrics L#9 "mlx4_1"

  NUMANode L#2 (P#2 256GB) + Socket L#2 + L3 L#2 (30MB)
L2 L#24 (256KB) + L1d L#24 (32KB) + L1i L#24 (32KB) + Core L#24 + PU L#24 (P#24)
L2 L#25 (256KB) + L1d L#25 (32KB) + L1i L#25 (32KB) + Core L#25 + PU L#25 (P#25)
L2 L#26 (256KB) + L1d L#26 (32KB) + L1i L#26 (32KB) + Core L#26 + PU L#26 (P#26)

Re: [OMPI devel] Status update: PMIx on master

2015-08-31 Thread Howard Pritchard
Hi Ralph,

Thanks for getting this in!

I verified for master/HEAD today that, modulo the caveats
about spawn/pub/sub etc., job launches on Cray using aprun or
srun work as expected, so some of the MTT failures over the
weekend should go away with this week's runs.

Thanks,

Howard




2015-08-31 9:59 GMT-06:00 Ralph Castain :

> Hi folks
>
> Per last week’s telecon, I committed the PR to bring PMIx into the master.
> As discussed, things are generally working okay - we had a little cleanup
> to do once the code was exposed to different environments, but not too
> horrible (thanks Gilles!).
>
> [...]


[OMPI devel] Status update: PMIx on master

2015-08-31 Thread Ralph Castain
Hi folks

Per last week’s telecon, I committed the PR to bring PMIx into the master. As 
discussed, things are generally working okay - we had a little cleanup to do 
once the code was exposed to different environments, but not too horrible 
(thanks Gilles!).

First, a quick status update. We know that the MPI-2 dynamics are broken - this
includes comm_spawn (will launch but not connect), connect/accept, and
publish/lookup/unpublish. I am working on those now and hope to have them fully
operational in the next day or two. Everything else should be functional - if
not, please report the bug.

There are a few warnings still being emitted for unused functions. Please 
ignore these for the moment as those functions will be used once we complete 
the integration.

Direct modex is working, but we are not yet making use of it. We still default 
to doing a full data exchange at startup. I’m not sure where we are relative to 
the async add_procs, but once that is ready we have the necessary support 
in-place.
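
For reference, the difference between the full exchange and direct modex at
the PMIx client level looks roughly like the sketch below. It is written
against the current PMIx standard client API rather than the 2015-era
embedded copy, and the "my.endpoint" key is invented for illustration; it
only runs when launched by a PMIx-enabled runtime.

/* Rough sketch of full-exchange vs. direct modex at the PMIx client level.
 * API per the PMIx standard (not the embedded copy); "my.endpoint" is an
 * invented key.  Must be started by a PMIx-enabled launcher. */
#include <pmix.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pmix_proc_t myproc, peer;
    pmix_value_t val, *pval;
    pmix_info_t info;
    bool collect = true;

    PMIx_Init(&myproc, NULL, 0);

    /* Publish this process' endpoint information locally. */
    PMIX_VALUE_CONSTRUCT(&val);
    val.type = PMIX_STRING;
    val.data.string = strdup("fake-endpoint-data");
    PMIx_Put(PMIX_GLOBAL, "my.endpoint", &val);
    PMIx_Commit();

    /* Full data exchange at startup: fence with data collection enabled.
     * Direct modex would instead fence with collect = false and rely on
     * PMIx_Get fetching values on demand. */
    PMIX_INFO_CONSTRUCT(&info);
    PMIX_INFO_LOAD(&info, PMIX_COLLECT_DATA, &collect, PMIX_BOOL);
    PMIx_Fence(NULL, 0, &info, 1);

    /* Retrieve rank 0's endpoint (served from the collected data, or
     * fetched from its host daemon under direct modex). */
    memcpy(&peer, &myproc, sizeof(peer));
    peer.rank = 0;
    if (PMIx_Get(&peer, "my.endpoint", NULL, 0, &pval) == PMIX_SUCCESS)
        printf("rank 0 endpoint: %s\n", pval->data.string);

    PMIx_Finalize(NULL, 0);
    return 0;
}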

You are certainly welcome to help fix issues with the PMIx code! We ask that 
any changes to the embedded PMIx code itself please be posted as PRs against 
the PMIx master - I will update the OMPI master from the PMIx tarball. This 
will help avoid losing your changes as we move forward.

https://github.com/open-mpi/pmix

So - what changed, you ask? Most of the change is transparent, but two things 
are not:

* the OMPI DPM framework has been eliminated and replaced with a core ompi/dpm 
directory. There is now only one way of doing dynamic process management, and 
that is thru the opal/mca/pmix framework, thus letting prior PMI 
implementations also support these functions (as much as they do)

* the OMPI PUB framework has been eliminated. The respective MPI bindings now 
directly call the opal/mca/pmix functions to implement publish, lookup, and 
unpublish


As a result of the changes, there isn’t much (if any) interaction between the 
MPI and ORTE layers any more - everything pretty much flows thru the OPAL/PMIx 
interface. Once the STCI folks have a chance to scratch their heads a bit, we 
may find that the OMPI/RTE framework can likewise disappear or be significantly 
reduced.


The transparent changes do not currently take advantage of the 
enhanced/extended PMIx functionality - we basically just did a direct 
replacement, with the addition of direct modex support. The “hooks” are exposed 
for OMPI to take advantage of things like notification - we just need to decide 
which ones we want and how/where to wire them into the code.

I’ll be updating the PMIx wiki over the next week or so to better explain the 
overall design. It is somewhat out-of-date in the details, though the broad 
design remains accurate.

HTH
Ralph



Re: [OMPI devel] fortran calling MPI_* instead of PMPI_*

2015-08-31 Thread Jeff Squyres (jsquyres)
Sweet.  Let's followup on that PR.  Thanks!

> On Aug 31, 2015, at 3:10 AM, Gilles Gouaillardet  wrote:
> 
> Jeff,
> 
> i filed PR #845 https://github.com/open-mpi/ompi/pull/845
> 
> could you please have a look ?
> 
> Cheers,
> 
> Gilles
> 
> On 8/30/2015 9:20 PM, Gilles Gouaillardet wrote:
>> ok, will do
>> 
>> basically, I simply have to
>> #include "ompi/mpi/c/profile/defines.h"
>> if configure set the WANT_MPI_PROFILING macro
>> (since this is an AM_CONDITIONAL, I will have the Makefile.am set the CPP
>> flags for the compiler)
>> 
>> makes sense ?
>> 
>> /* the patch will be pretty large since all *_f files are impacted, and for
>> mpif-h only,
>> so I'd rather ask before I file the PR, even though a sed command will do
>> most of the job */
>> 
>> Cheers,
>> 
>> Gilles
>> 
>> On Saturday, August 29, 2015, Jeff Squyres (jsquyres)  
>> wrote:
>> On Aug 27, 2015, at 3:25 AM, Gilles Gouaillardet  wrote:
>> >
>> > I am lost ...
>> 
>> Fortran does that to ya.  ;-)
>> 
>> > from ompi/mpi/fortran/mpif-h/profile/palltoall_f.c
>> >
>> > void ompi_alltoall_f(char *sendbuf, MPI_Fint *sendcount, MPI_Fint 
>> > *sendtype,
>> >char *recvbuf, MPI_Fint *recvcount, MPI_Fint *recvtype,
>> >MPI_Fint *comm, MPI_Fint *ierr)
>> > {
>> >[...]
>> >c_ierr = MPI_Alltoall(sendbuf,
>> >  OMPI_FINT_2_INT(*sendcount),
>> >  c_sendtype,
>> >  recvbuf,
>> >  OMPI_FINT_2_INT(*recvcount),
>> >  c_recvtype, c_comm);
>> >[...]
>> > }
>> >
>> > $ nm ompi/mpi/fortran/mpif-h/profile/.libs/palltoall_f.o | grep 
>> > MPI_Alltoall
>> > U MPI_Alltoall
>> >  W MPI_Alltoall_f
>> >  W MPI_Alltoall_f08
>> >  W PMPI_Alltoall_f
>> >  W PMPI_Alltoall_f08
>> >
>> > ompi_alltoall_f() calls MPI_Alltoall()
>> >
>> >
>> > the "natural" way of writing a tool is to write mpi_alltoall_ (that calls 
>> > pmpi_alltoall_)
>> > *and* MPI_Alltoall (that calls PMPI_Alltoall)
>> 
>> Sidenote: the only correct way to write a tool that intercepts Fortran MPI 
>> API calls is to write those interceptions *in Fortran*.  I.e., the tool 
>> should provide MPI_ALLTOALL as a Fortran subroutine.  I realize that this is 
>> not the point of what you are saying :-), but everyone always gets this 
>> point wrong, so I feel the need to keep pointing this out.
>> 
>> > since ompi_alltoall_f invokes MPI_Alltoall (and not PMPI_Alltoall), the 
>> > tool is invoked twice, by both the Fortran and C wrapper.
>> 
>> I didn't think that this was true, but I just confirmed it by looking at 
>> "gcc -E" output in the mpif-h/profile directory.
>> 
>> I don't think that it was the intent.  See below.
>> 
>> > my initial question was
>> > "why does ompi_alltoall_f invokes MPI_Alltoall instead of PMPI_Alltoall ?"
>> >
>> > /* since we share the same source code when building with or without mpi 
>> > profiling,
>> > that means we would need to
>> > #define MPI_Alltoall PMPI_Alltoall
>> > when ompi is configure'd with --enable-mpi-profile
>> > */
>> 
>> I'm guessing that the complexity in the build system to support environments
>> without and with weak symbols (i.e., OS X vs. just about everyone else) has
>> made this get lost over time.
>> 
>> Can you supply a patch?
>> 
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to: 
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
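
As a footnote to the thread above, the redirection Gilles is asking about is
essentially the following pattern; the WANT_MPI_PROFILING macro and the
wrapper name here are illustrative, not the actual Open MPI sources:

/* Illustrative only: how an internal Fortran wrapper can avoid the MPI_
 * symbols so that a tool's MPI_Alltoall interception is hit once, not twice.
 * WANT_MPI_PROFILING and the wrapper name are assumptions for this sketch. */
#include <mpi.h>

#ifdef WANT_MPI_PROFILING
/* Route internal calls to the profiling entry points. */
#define MPI_Alltoall PMPI_Alltoall
#endif

void example_alltoall_f(char *sendbuf, MPI_Fint *sendcount, MPI_Fint *sendtype,
                        char *recvbuf, MPI_Fint *recvcount, MPI_Fint *recvtype,
                        MPI_Fint *comm, MPI_Fint *ierr)
{
    MPI_Datatype c_sendtype = MPI_Type_f2c(*sendtype);
    MPI_Datatype c_recvtype = MPI_Type_f2c(*recvtype);
    MPI_Comm c_comm = MPI_Comm_f2c(*comm);

    /* With WANT_MPI_PROFILING defined, this compiles to PMPI_Alltoall(),
     * so only a Fortran-level interception sees this call. */
    *ierr = (MPI_Fint) MPI_Alltoall(sendbuf, (int) *sendcount, c_sendtype,
                                    recvbuf, (int) *recvcount, c_recvtype,
                                    c_comm);
}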