After lots of make cleans it works again. Thanks.
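
For the archives, a minimal sketch of the cleanup that resolved it, assuming
the build tree layout from the log below (paths and directory depth are
illustrative):

    # Clean just the affected component; a top-level 'make clean' also works.
    cd /home/adrian/ompi/build/ompi/mca/coll/ml
    make clean      # removes the stale *.lo / *.o files and .libs/
    cd ../../../..  # back to the build root
    make all        # regenerates the objects and relinks mca_coll_ml.la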

On Wed, Sep 09, 2015 at 10:00:10AM +0000, Jeff Squyres (jsquyres) wrote:
> Try running make clean (perhaps just in ompi/mca/coll/ml) and building 
> again -- this looks like it could just be a stale file in your tree.
> 
> > On Sep 9, 2015, at 5:41 AM, Adrian Reber <adr...@lisas.de> wrote:
> > 
> > I was about to try Gilles' patch but the current master checkout does
> > not build on my ppc64 system: (b79cffc73b88c2e5e2f2161e096c49aed5b9d2ed)
> > 
> > Making all in mca/coll/ml
> > make[2]: Entering directory '/home/adrian/ompi/build/ompi/mca/coll/ml'
> > /bin/sh ../../../../libtool  --tag=CC   --mode=link gcc -std=gnu99  -g 
> > -Wall -Wundef -Wno-long-long -Wsign-compare -Wmissing-prototypes 
> > -Wstrict-prototypes -Wcomment -pedantic 
> > -Werror-implicit-function-declaration -finline-functions 
> > -fno-strict-aliasing -pthread -module -avoid-version  -o mca_coll_ml.la 
> > -rpath /tmp/ompi/lib/openmpi coll_ml_module.lo coll_ml_allocation.lo 
> > coll_ml_barrier.lo coll_ml_bcast.lo coll_ml_component.lo 
> > coll_ml_copy_fns.lo coll_ml_descriptors.lo coll_ml_hier_algorithms.lo 
> > coll_ml_hier_algorithms_setup.lo coll_ml_hier_algorithms_bcast_setup.lo 
> > coll_ml_hier_algorithms_allreduce_setup.lo 
> > coll_ml_hier_algorithms_reduce_setup.lo 
> > coll_ml_hier_algorithms_common_setup.lo 
> > coll_ml_hier_algorithms_allgather_setup.lo 
> > coll_ml_hier_algorithm_memsync_setup.lo coll_ml_custom_utils.lo 
> > coll_ml_progress.lo coll_ml_reduce.lo coll_ml_allreduce.lo 
> > coll_ml_allgather.lo coll_ml_mca.lo coll_ml_lmngr.lo 
> > coll_ml_hier_algorithms_barrier_setup.lo coll_ml_select.lo 
> > coll_ml_memsync.lo coll_ml_lex.lo coll_ml_config.lo  -lrt  -lm -lutil   -lm -lutil  
> > libtool: link: `coll_ml_bcast.lo' is not a valid libtool object
> > Makefile:1860: recipe for target 'mca_coll_ml.la' failed
> > make[2]: *** [mca_coll_ml.la] Error 1
> > make[2]: Leaving directory '/home/adrian/ompi/build/ompi/mca/coll/ml'
> > Makefile:3366: recipe for target 'all-recursive' failed
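
A valid .lo file is a short text stub generated by libtool, so one way to
confirm the stale object is to look at it directly; the commands below are a
sketch, with paths assumed from the log above:

    cd /home/adrian/ompi/build/ompi/mca/coll/ml
    # A healthy stub starts with "# coll_ml_bcast.lo - a libtool object file";
    # an empty or truncated file triggers "is not a valid libtool object".
    head -2 coll_ml_bcast.lo
    rm -f coll_ml_bcast.lo   # force it to be regenerated on the next make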
> > 
> > 
> > 
> > 
> > On Tue, Sep 08, 2015 at 05:19:56PM +0000, Jeff Squyres (jsquyres) wrote:
> >> Thanks Adrian; I turned this into 
> >> https://github.com/open-mpi/ompi/issues/874.
> >> 
> >>> On Sep 8, 2015, at 9:56 AM, Adrian Reber <adr...@lisas.de> wrote:
> >>> 
> >>> For the past few days the MTT runs on my ppc64 systems have been failing with:
> >>> 
> >>> [bimini:11716] *** Process received signal ***
> >>> [bimini:11716] Signal: Segmentation fault (11)
> >>> [bimini:11716] Signal code: Address not mapped (1)
> >>> [bimini:11716] Failing at address: (nil)[bimini:11716] [ 0] 
> >>> [0x3fffa2bb0448]
> >>> [bimini:11716] [ 1] /lib64/libc.so.6(+0xcb074)[0x3fffa27eb074] 
> >>> [bimini:11716] [ 2] /home/adrian/mtt-scratch/installs/GubX/install/lib/libpmix.so.0(opal_pmix_pmix1xx_pmix_value_xfer-0x68758)[0x3fffa2158a10]
> >>> [bimini:11716] [ 3] /home/adrian/mtt-scratch/installs/GubX/install/lib/libpmix.so.0(OPAL_PMIX_PMIX1XX_PMIx_Put-0x48338)[0x3fffa2179f70]
> >>> [bimini:11716] [ 4] /home/adrian/mtt-scratch/installs/GubX/install/lib/openmpi/mca_pmix_pmix1xx.so(pmix1_put-0x27efc)[0x3fffa21d858c]
> >>> 
> >>> I do not think I see this kind of error on any of the other MTT setups,
> >>> so it might be ppc64-related. Just wanted to point it out.
> >>> 
> >>>           Adrian
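
Since the backtrace only shows raw offsets for the pmix frames, here is a
hedged sketch for getting file/line information out of the crash in
opal_pmix_pmix1xx_pmix_value_xfer; --enable-debug is a real Open MPI
configure flag, but the prefix and test program name are placeholders:

    # Rebuild with debugging symbols (prefix is illustrative).
    ./configure --prefix=/tmp/ompi --enable-debug
    make install
    # Attach gdb to each rank via xterm, a common MPI debugging pattern;
    # ring_c stands in for whichever MTT test segfaults.
    mpirun -np 2 xterm -e gdb ./ring_c
    # Inside gdb: "run", then "bt full" when the SIGSEGV fires.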
