Hi, George:

Attached is the ompi_info output. I built it on a Power8 architecture. The configure line is also simple:

../configure --prefix=${installdir} \
--enable-orterun-prefix-by-default

Dahai

On Thu, May 4, 2017 at 4:45 PM, George Bosilca <bosi...@icl.utk.edu> wrote:

> Dahai,
>
> You are right, the segfault is unexpected. I can't replicate this on my
> Mac. On what architecture are you seeing this issue? How was your OMPI
> compiled?
>
> Please post the output of ompi_info.
>
> Thanks,
> George.
>
>
>
> On Thu, May 4, 2017 at 5:42 PM, Dahai Guo <dahai....@gmail.com> wrote:
>
>> Those messages are what I expected to see. But there are some other error
>> messages and a core dump I did not expect, as attached in my previous email.  I
>> think something might be wrong with the errhandler in Open MPI.  A similar thing
>> happened for Bcast, etc.
>>
>> Dahai
>>
>> On Thu, May 4, 2017 at 4:32 PM, Nathan Hjelm <hje...@me.com> wrote:
>>
>>> By default MPI errors are fatal and abort. The error message says it all:
>>>
>>> *** An error occurred in MPI_Reduce
*** reported by process [3645440001,0]
>>> *** on communicator MPI_COMM_WORLD
>>> *** MPI_ERR_COUNT: invalid count argument
>>> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
>>> *** and potentially your MPI job)
>>>
>>> If you want different behavior you have to change the default error
>>> handler on the communicator using MPI_Comm_set_errhandler. You can set it
>>> to MPI_ERRORS_RETURN and check the error code or you can create your own
>>> function. See MPI 3.1 Chapter 8.
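
Nathan's suggestion can be sketched as follows. This is a minimal illustration, not part of the original thread: it switches MPI_COMM_WORLD from the default MPI_ERRORS_ARE_FATAL handler to MPI_ERRORS_RETURN, then checks the return code of the same invalid MPI_Reduce call from the test program below.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int s = 1, r = -1;
    char msg[MPI_MAX_ERROR_STRING];
    int msglen, rc;

    MPI_Init(&argc, &argv);

    /* Replace the default MPI_ERRORS_ARE_FATAL handler so that errors
     * on this communicator come back as return codes instead of aborting. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* The invalid count (-1) now yields an error code we can inspect. */
    rc = MPI_Reduce(&s, &r, -1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &msglen);
        fprintf(stderr, "MPI_Reduce failed: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```

With MPI_ERRORS_RETURN, the job prints the error string and runs to MPI_Finalize instead of aborting; an application-defined handler created with MPI_Comm_create_errhandler would work the same way.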
>>>
>>> -Nathan
>>>
>>> On May 04, 2017, at 02:58 PM, Dahai Guo <dahai....@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> Using Open MPI 2.1, the following code resulted in a core dump, although
>>> only a simple error message was expected.  Any idea what is wrong?  It seems
>>> related to the errhandler somewhere.
>>>
>>>
>>> D.G.
>>>
>>>
>>>  *** An error occurred in MPI_Reduce
>>>  *** reported by process [3645440001,0]
>>>  *** on communicator MPI_COMM_WORLD
>>>  *** MPI_ERR_COUNT: invalid count argument
>>>  *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
>>>  ***    and potentially your MPI job)
>>> ......
>>>
>>> [1,1]<stderr>:1000151c0000-1000151e0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:1000151e0000-100015250000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015250000-100015270000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015270000-1000152e0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:1000152e0000-100015300000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015300000-100015510000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015510000-100015530000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015530000-100015740000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015740000-100015760000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015760000-100015970000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015970000-100015990000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015990000-100015ba0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015ba0000-100015bc0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015bc0000-100015dd0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015dd0000-100015df0000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100015df0000-100016000000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100016000000-100016020000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100016020000-100016230000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100016230000-100016250000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100016250000-100016460000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:100016460000-100016470000 rw-p 00000000 00:00 0
>>> [1,1]<stderr>:3fffd4630000-3fffd46c0000 rw-p 00000000 00:00 0          [stack]
>>> --------------------------------------------------------------------------
>>>
>>> #include <stdlib.h>
>>> #include <stdio.h>
>>> #include <mpi.h>
>>>
>>> int main(int argc, char** argv)
>>> {
>>>     int r[1], s[1];
>>>     MPI_Init(&argc, &argv);
>>>
>>>     s[0] = 1;
>>>     r[0] = -1;
>>>     /* count of -1 is invalid and should trigger MPI_ERR_COUNT */
>>>     MPI_Reduce(s, r, -1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
>>>     printf("%d\n", r[0]);
>>>     MPI_Finalize();
>>>     return 0;
>>> }
>>>
>>> _______________________________________________
>>> devel mailing list
>>> devel@lists.open-mpi.org
>>> https://rfd.newmexicoconsortium.org/mailman/listinfo/devel
>>>
>>>
>>
>>
>
>
              C bindings: yes
            C++ bindings: no
             Fort mpif.h: yes (all)
            Fort use mpi: yes (limited: overloading)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: no
 Fort mpi_f08 compliance: The mpi_f08 module was not built
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: runpath
              C compiler: gcc
     C compiler absolute: /bin/gcc
  C compiler family name: GNU
      C compiler version: 4.8.5
            C++ compiler: g++
   C++ compiler absolute: /bin/g++
           Fort compiler: gfortran
       Fort compiler abs: /bin/gfortran
         Fort ignore TKR: no
   Fort 08 assumed shape: no
      Fort optional args: no
          Fort INTERFACE: yes
    Fort ISO_FORTRAN_ENV: yes
       Fort STORAGE_SIZE: no
      Fort BIND(C) (all): no
      Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): no
       Fort TYPE,BIND(C): no
 Fort T,BIND(C,name="a"): no
            Fort PRIVATE: no
          Fort PROTECTED: no
           Fort ABSTRACT: no
       Fort ASYNCHRONOUS: no
          Fort PROCEDURE: no
         Fort USE...ONLY: no
           Fort C_FUNLOC: no
 Fort f08 using wrappers: no
         Fort MPI_SIZEOF: no
             C profiling: yes
           C++ profiling: no
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: no
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
                          OMPI progress: no, ORTE progress: yes, Event lib:
                          yes)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
              dl support: yes
   Heterogeneous support: no
 mpirun default --prefix: yes
         MPI I/O support: yes
       MPI_WTIME support: native
     Symbol vis. support: yes
   Host topology support: yes
          MPI extensions: affinity, cuda
  MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
     MPI_MAX_OBJECT_NAME: 64
        MPI_MAX_INFO_KEY: 36
        MPI_MAX_INFO_VAL: 256
       MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
           MCA allocator: basic (MCA v2.1.0, API v2.0.0, Component v2.1.0)
           MCA allocator: bucket (MCA v2.1.0, API v2.0.0, Component v2.1.0)
           MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA btl: vader (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA btl: tcp (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA btl: sm (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA btl: self (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA btl: openib (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                  MCA dl: dlopen (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA event: libevent2022 (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA hwloc: hwloc1112 (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
                  MCA if: posix_ipv4 (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
                  MCA if: linux_ipv6 (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
         MCA installdirs: env (MCA v2.1.0, API v2.0.0, Component v2.1.0)
         MCA installdirs: config (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA memory: patcher (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA mpool: hugepage (MCA v2.1.0, API v3.0.0, Component v2.1.0)
             MCA patcher: overwrite (MCA v2.1.0, API v1.0.0, Component
                          v2.1.0)
                MCA pmix: pmix112 (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA pstat: linux (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA rcache: grdma (MCA v2.1.0, API v3.3.0, Component v2.1.0)
                 MCA sec: basic (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA shmem: posix (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA shmem: sysv (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA shmem: mmap (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA timer: linux (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA dfs: app (MCA v2.1.0, API v1.0.0, Component v2.1.0)
                 MCA dfs: test (MCA v2.1.0, API v1.0.0, Component v2.1.0)
                 MCA dfs: orted (MCA v2.1.0, API v1.0.0, Component v2.1.0)
              MCA errmgr: default_app (MCA v2.1.0, API v3.0.0, Component
                          v2.1.0)
              MCA errmgr: default_orted (MCA v2.1.0, API v3.0.0, Component
                          v2.1.0)
              MCA errmgr: default_hnp (MCA v2.1.0, API v3.0.0, Component
                          v2.1.0)
              MCA errmgr: default_tool (MCA v2.1.0, API v3.0.0, Component
                          v2.1.0)
                 MCA ess: env (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA ess: singleton (MCA v2.1.0, API v3.0.0, Component
                          v2.1.0)
                 MCA ess: hnp (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA ess: pmi (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA ess: slurm (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA ess: tool (MCA v2.1.0, API v3.0.0, Component v2.1.0)
               MCA filem: raw (MCA v2.1.0, API v2.0.0, Component v2.1.0)
             MCA grpcomm: direct (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA iof: mr_orted (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA iof: tool (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA iof: mr_hnp (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA iof: orted (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA iof: hnp (MCA v2.1.0, API v2.0.0, Component v2.1.0)
            MCA notifier: syslog (MCA v2.1.0, API v1.0.0, Component v2.1.0)
                MCA odls: default (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA oob: tcp (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA oob: ud (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA oob: usock (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA plm: slurm (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA plm: isolated (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA plm: rsh (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA ras: slurm (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA ras: simulator (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
                 MCA ras: loadleveler (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA rmaps: seq (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA rmaps: rank_file (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA rmaps: resilient (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA rmaps: staged (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA rmaps: mindist (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA rmaps: round_robin (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA rmaps: ppr (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA rml: oob (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA routed: radix (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA routed: direct (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA routed: debruijn (MCA v2.1.0, API v2.0.0, Component v2.1.0)
              MCA routed: binomial (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA rtc: freq (MCA v2.1.0, API v1.0.0, Component v2.1.0)
                 MCA rtc: hwloc (MCA v2.1.0, API v1.0.0, Component v2.1.0)
              MCA schizo: ompi (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: tool (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: staged_orted (MCA v2.1.0, API v1.0.0, Component
                          v2.1.0)
               MCA state: dvm (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: orted (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: staged_hnp (MCA v2.1.0, API v1.0.0, Component
                          v2.1.0)
               MCA state: novm (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: app (MCA v2.1.0, API v1.0.0, Component v2.1.0)
               MCA state: hnp (MCA v2.1.0, API v1.0.0, Component v2.1.0)
                 MCA bml: r2 (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: self (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: libnbc (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: inter (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: sync (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: tuned (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: basic (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA coll: sm (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                MCA fbtl: posix (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA fcoll: static (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA fcoll: two_phase (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA fcoll: dynamic (MCA v2.1.0, API v2.0.0, Component v2.1.0)
               MCA fcoll: dynamic_gen2 (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
               MCA fcoll: individual (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
                  MCA fs: ufs (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                  MCA io: romio314 (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                  MCA io: ompio (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA osc: pt2pt (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA osc: rdma (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA osc: sm (MCA v2.1.0, API v3.0.0, Component v2.1.0)
                 MCA pml: v (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA pml: ob1 (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA pml: cm (MCA v2.1.0, API v2.0.0, Component v2.1.0)
                 MCA rte: orte (MCA v2.1.0, API v2.0.0, Component v2.1.0)
            MCA sharedfp: sm (MCA v2.1.0, API v2.0.0, Component v2.1.0)
            MCA sharedfp: lockedfile (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
            MCA sharedfp: individual (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)
                MCA topo: basic (MCA v2.1.0, API v2.2.0, Component v2.1.0)
           MCA vprotocol: pessimist (MCA v2.1.0, API v2.0.0, Component
                          v2.1.0)