Thanks Gilles!

I added the "static" keyword as you suggested, and the build succeeded.
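
For anyone else who hits this: as far as I understand, with --disable-dlopen both the s1 and s2 pmix components are linked straight into libopen-pal, so the pmi_opcaddy_t_class symbol that OBJ_CLASS_INSTANCE generates in each file ends up defined twice at link time. The change I made was roughly the following, in opal/mca/pmix/s1/pmix_s1.c and s2/pmix_s2.c (just a sketch; the constructor/destructor names below are placeholders, and the actual arguments in those files may differ):

    /* pmix_s1.c / pmix_s2.c already include "opal/class/opal_object.h",
     * which provides OBJ_CLASS_INSTANCE.  The macro expands to a
     * file-scope opal_class_t variable named pmi_opcaddy_t_class with
     * external linkage, so the two components collide when both are
     * linked into libopen-pal.  Adding "static" gives each file its own
     * private copy of the class instance. */
    static OBJ_CLASS_INSTANCE(pmi_opcaddy_t,   /* class type                */
                              opal_object_t,   /* parent class              */
                              opcon,           /* constructor (placeholder) */
                              opdes);          /* destructor (placeholder)  */

Gilles's other suggestion (giving s1 and s2 distinct class names) avoids the duplicate symbol in the same way and, as he notes, is probably the safer long-term fix.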

Will this be fixed in a later release?

Thanks again!


*Limin Gu*  | Software Engineer

____________________________

*Penguin Computing*
45800 Northport Loop West
Fremont, CA 94538

*p.*   415.954.2800

*e.*   l...@penguincomputing.com


*Changing the world through technical innovation*


www.penguincomputing.com

www.penguincomputing.iapplicants.com

*Follow us on Twitter: @PenguinHPC*


On Tue, Sep 27, 2016 at 11:51 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Hi,
>
> I can see this error happening if you configure with --disable-dlopen
> --with-pmi
>
> In opal/mca/pmix/s?/pmix_s?.c, you can try to add the static keyword
> before
> OBJ_CLASS_INSTANCE(pmi_opcaddy_t, ...)
> Or you can update the files to use unique class names (probably safer...)
>
> Cheers,
>
> Gilles
>
> On Wednesday, September 28, 2016, Limin Gu <l...@penguincomputing.com>
> wrote:
>
>> Hi,
>>
>> I get the same openmpi-2.0.1 build error on both CentOS 6.8 and CentOS 7.2.
>> Any idea what might have caused this problem?
>>
>> Thank you!
>>
>> make[2]: Entering directory `/usr/src/redhat/BUILD/openmpi-2.0.1/opal'
>>   CC       class/opal_bitmap.lo
>>   CC       class/opal_free_list.lo
>>   CC       class/opal_hash_table.lo
>>   CC       class/opal_hotel.lo
>>   CC       class/opal_tree.lo
>>   CC       class/opal_list.lo
>>   CC       class/opal_graph.lo
>>   CC       class/opal_object.lo
>>   CC       class/opal_lifo.lo
>>   CC       class/opal_fifo.lo
>>   CC       class/opal_pointer_array.lo
>>   CC       class/opal_value_array.lo
>>   CC       class/opal_ring_buffer.lo
>>   CC       class/opal_rb_tree.lo
>>   CC       errhandler/opal_errhandler.lo
>>   CC       memoryhooks/memory.lo
>>   CC       runtime/opal_progress.lo
>>   CC       runtime/opal_finalize.lo
>>   CC       runtime/opal_init.lo
>>   CC       runtime/opal_params.lo
>>   CC       runtime/opal_info_support.lo
>>   CC       runtime/opal_progress_threads.lo
>>   CC       threads/condition.lo
>>   CC       threads/mutex.lo
>>   CC       threads/thread.lo
>>   CC       threads/wait_sync.lo
>>   CC       dss/dss_internal_functions.lo
>>   CC       dss/dss_compare.lo
>>   CC       dss/dss_copy.lo
>>   CC       dss/dss_dump.lo
>>   CC       dss/dss_load_unload.lo
>>   CC       dss/dss_lookup.lo
>>   CC       dss/dss_pack.lo
>>   CC       dss/dss_peek.lo
>>   CC       dss/dss_print.lo
>>   CC       dss/dss_register.lo
>>   CC       dss/dss_unpack.lo
>>   CC       dss/dss_open_close.lo
>>   CCLD     libopen-pal.la
>> mca/pmix/s1/.libs/libmca_pmix_s1.a(libmca_pmix_s1_la-pmix_s1.o):(.data.rel+0x0): multiple definition of `pmi_opcaddy_t_class'
>> mca/pmix/s2/.libs/libmca_pmix_s2.a(libmca_pmix_s2_la-pmix_s2.o):(.data.rel+0x0): first defined here
>> collect2: ld returned 1 exit status
>> make[2]: *** [libopen-pal.la] Error 1
>> make[2]: Leaving directory `/usr/src/redhat/BUILD/openmpi-2.0.1/opal'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/usr/src/redhat/BUILD/openmpi-2.0.1/opal'
>> make: *** [all-recursive] Error 1
>> error: Bad exit status from /var/tmp/rpm-tmp.EtuXQa (%build)
>>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
