Hi Mohamad,
Please find attached the output of "ompi_info" and the log of the
faulty HDF5 test.
Thanks!
Regards,
Joseph
Quoting Mohamad Chaarawi <[email protected]>:
Hi Joseph,
What is the test that fails in make check? When you run make check,
it should tell you which test is running. Please provide that.
I would also like to know the Open MPI version you are using.
Thanks,
Mohamad
-----Original Message-----
From: Hdf-forum [mailto:[email protected]] On Behalf
Of Kóbori József
Sent: Tuesday, November 24, 2015 8:04 AM
To: [email protected]
Subject: Re: [Hdf-forum] unsupported data type
Hi Richard,
I used "configure" with the following options:
--prefix=/some/dir --enable-parallel
Platform:
Linux atlasz 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux
The libhdf5.settings file is attached.
Thanks!
Regards,
Joseph
Quoting Richard van Hees <[email protected]>:
> Hi Joseph,
>
> It would be helpful if you indicated how you configured
> HDF5. Using CMake? Using configure? Which options did you use? On
> which platform?
>
> Also helpful would be the content of the file
> <HDF5_DIR>/lib/libhdf5.settings, where <HDF5_DIR> is the installation
> directory for HDF5, by default "/usr/local" or whatever you specified
> with PREFIX or --prefix.
>
> Greetings, Richard
>
> On 11/24/2015 12:40 PM, Kóbori József wrote:
>> Hi Mohamad,
>>
>> I compiled the latest version of HDF5 (hdf5-1.8.16). After compiling
>> (make) I simply ran the "make check" command, and I got the error
>> below, so I don't have a particular test case.
>> If you need more info, please let me know!
>> Thank you!
>>
>> Regards,
>> Joseph
>>
>> Quoting Mohamad Chaarawi <[email protected]>:
>>
>>> Hi Joseph,
>>>
>>> Could you please indicate the versions of the HDF5 library and
>>> Open MPI that you are using?
>>> Furthermore, could you provide a test case that replicates the
>>> failure below?
>>>
>>> You are correct: it seems that we are constructing a derived
>>> datatype that the ADIO driver in ROMIO is unable to parse
>>> properly. However, we don't test much with Open MPI, at least not
>>> to the extent that we do with MPICH. I'd be curious whether the
>>> same program gives the same error with MPICH.
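For illustration only (this is not one of the HDF5 tests, and the program and
file names below are made up), here is a minimal MPI-IO sketch of the kind of
derived datatype at issue: its file view is built with MPI_Type_create_resized,
and in Open MPI's mpi.h the combiner code 17 appears to correspond to
MPI_COMBINER_RESIZED, so a ROMIO build that cannot decode resized types fails
in ADIOI_Count_contiguous_blocks in the same way. Compiling it with both Open
MPI's and MPICH's mpicc is one way to make the comparison suggested above.

/* resized_view.c -- illustrative sketch only, not an HDF5 test.
 * Each rank writes two blocks of 4 ints through a file view whose filetype
 * was built with MPI_Type_create_resized, so the view is non-contiguous and
 * ROMIO has to flatten the derived datatype.
 * Build/run (hypothetical): mpicc resized_view.c -o resized_view
 *                           mpiexec -n 6 ./resized_view                   */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs, i, buf[8];
    MPI_Datatype block, filetype;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    for (i = 0; i < 8; i++)
        buf[i] = rank * 100 + i;

    /* 4 ints per block; the resized extent leaves room for the other ranks */
    MPI_Type_contiguous(4, MPI_INT, &block);
    MPI_Type_create_resized(block, 0, (MPI_Aint)(nprocs * 4 * sizeof(int)),
                            &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "resized_view.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, (MPI_Offset)(rank * 4 * sizeof(int)), MPI_INT,
                      filetype, "native", MPI_INFO_NULL);
    /* a ROMIO that cannot decode MPI_COMBINER_RESIZED may abort here with
       "Unsupported datatype passed to ADIOI_Count_contiguous_blocks"      */
    MPI_File_write_all(fh, buf, 8, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Type_free(&filetype);
    MPI_Type_free(&block);
    MPI_Finalize();
    return 0;
}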
>>>
>>> Thanks,
>>> Mohamad
>>>
>>>> -----Original Message-----
>>>> From: Hdf-forum [mailto:[email protected]] On
>>>> Behalf Of Kóbori József
>>>> Sent: Monday, November 16, 2015 5:30 AM
>>>> To: [email protected]
>>>> Subject: [Hdf-forum] unsupported data type
>>>>
>>>> Dear All,
>>>>
>>>> When testing the HDF5 source that I had previously compiled with
>>>> mpicc, I encountered the following error message:
>>>>
>>>> "MPI_ABORT was invoked on rank 3 in communicator
MPI_COMM_WORLD
>>>> with errorcode 1.
>>>>
>>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>>> You may or may not see output from other processes, depending on
>>>> exactly when Open MPI kills them.
>>>> --------------------------------------------------------------------------
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
>>>> --------------------------------------------------------------------------
>>>> mpiexec has exited due to process rank 1 with PID 17756 on
>>>> node XXXX exiting without calling "finalize". This may have caused
>>>> other processes in the application to be terminated by signals sent
>>>> by mpiexec (as reported here)."
>>>>
>>>> As I learnt, it has something to do with a particular data type (as
>>>> the error message indicates), but I'm stuck and don't know how to
>>>> proceed.
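For reference (purely illustrative, not part of HDF5, and the file name below
is made up): the combiner code in the message can be matched to a name by
printing the MPI combiner constants of the installed MPI library. In Open MPI
the value 17 is likely MPI_COMBINER_RESIZED, but the installed mpi.h is the
authority.

/* combiner_names.c -- hypothetical helper, not part of HDF5: prints the
 * numeric values of some MPI datatype combiner constants so that
 * "combiner = 17" can be matched to a name for the MPI library in use.
 * Build/run: mpicc combiner_names.c -o combiner_names
 *            mpiexec -n 1 ./combiner_names                               */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    printf("MPI_COMBINER_STRUCT   = %d\n", (int)MPI_COMBINER_STRUCT);
    printf("MPI_COMBINER_SUBARRAY = %d\n", (int)MPI_COMBINER_SUBARRAY);
    printf("MPI_COMBINER_DARRAY   = %d\n", (int)MPI_COMBINER_DARRAY);
    printf("MPI_COMBINER_RESIZED  = %d\n", (int)MPI_COMBINER_RESIZED);
    MPI_Finalize();
    return 0;
}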
>>>>
>>>> Could you give me some help?
>>>>
>>>> Best regards,
>>>> Joseph
>>>>
>>>>
>>
-------------------------------------------
József Kóbori
PhD Student - Particle Physics and Astronomy Program
Room: Building North, Number 5.91, Phone: +36 30-266-5673
Eötvös Loránd University
1117 Budapest, Pázmány P. stny. 1/A
============================
Testing testphdf5
============================
testphdf5 Test Log
============================
===================================
PHDF5 TESTS START
===================================
MPI-process 1.MPI-process 2. hostname=x
For help use: /users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
MPI-process 3. hostname=x
For help use: /users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
MPI-process 5. hostname=x
For help use: /users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
MPI-process 0. hostname=x
For help use: 3/users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
hostname=x
For help use: /users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
MPI-process 4. hostname=x
For help use: /users/jkobori/hdf5/hdf5-1.8.16_test/testpar/.libs/lt-testphdf5 -help
Linked with hdf5 version 1.8 release 16
Test filenames are:
ParaTest.h5
Testing -- fapl_mpio duplicate (mpiodup)
*** Hint ***
You can use environment variable HDF5_PARAPREFIX to run parallel test files in a
different directory or to add file type prefix. E.g.,
HDF5_PARAPREFIX=pfs:/PFS/user/me
export HDF5_PARAPREFIX
*** End of Hint ***
Test filenames are:
Test filenames are:Test filenames are:
ParaTest.h5
Testing -- fapl_mpio duplicate (mpiodup)
Test filenames are:
ParaTest.h5
Testing -- fapl_mpio duplicate (mpiodup)
Test filenames are:
ParaTest.h5
Testing -- fapl_mpio duplicate (mpiodup)
ParaTest.h5
ParaTest.h5
Testing -- fapl_mpio duplicate (mpiodup)
Testing -- fapl_mpio duplicate (mpiodup)
Testing -- dataset using split communicators (split)
Testing -- dataset using split communicators (split)
Testing -- dataset using split communicators (split)
Testing -- dataset using split communicators (split)
Testing -- dataset using split communicators (split)
Testing -- dataset using split communicators (split)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent write (idsetw)
Testing -- dataset independent read (idsetr)
Testing -- dataset independent read (idsetr)
Testing -- dataset independent read (idsetr)
Testing -- dataset independent read (idsetr)
Testing -- dataset independent read (idsetr)
Testing -- dataset independent read (idsetr)
Testing -- dataset collective write (cdsetw)
Testing -- dataset collective write (cdsetw)
Testing -- dataset collective write (cdsetw)
Testing -- dataset collective write (cdsetw)
Testing -- dataset collective write (cdsetw)
Testing -- dataset collective write (cdsetw)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks, combiner = 17
--------------------------------------------------------------------------
mpiexec has exited due to process rank 3 with PID 12232 on
node x exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
[x:12228] 5 more processes have sent help message help-mpi-api.txt / mpi-abort
[x:12228] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Command exited with non-zero status 1
0.74user 0.15system 0:01.41elapsed 63%CPU (0avgtext+0avgdata 8936maxresident)k
9616inputs+320outputs (20major+43631minor)pagefaults 0swaps
make[4]: *** [testphdf5.chkexe_] Error 1
make[4]: Leaving directory `/users/jkobori/hdf5/hdf5-1.8.16_test/testpar'
make[3]: *** [build-check-p] Error 1
make[3]: Leaving directory `/users/jkobori/hdf5/hdf5-1.8.16_test/testpar'
make[2]: *** [test] Error 2
make[2]: Leaving directory `/users/jkobori/hdf5/hdf5-1.8.16_test/testpar'
make[1]: *** [check-am] Error 2
make[1]: Leaving directory `/users/jkobori/hdf5/hdf5-1.8.16_test/testpar'
make: *** [check-recursive] Error 1
Package: Open MPI buildd@brahms Distribution
Open MPI: 1.4.5
Open MPI SVN revision: r25905
Open MPI release date: Feb 10, 2012
Open RTE: 1.4.5
Open RTE SVN revision: r25905
Open RTE release date: Feb 10, 2012
OPAL: 1.4.5
OPAL SVN revision: r25905
OPAL release date: Feb 10, 2012
Ident string: 1.4.5
Prefix: /usr
Configured architecture: x86_64-pc-linux-gnu
Configure host: brahms
Configured by: buildd
Configured on: Fri May 18 12:39:47 UTC 2012
Configure host: brahms
Built by: buildd
Built on: Fri May 18 13:00:39 UTC 2012
Built host: brahms
C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: yes
Fortran90 bindings size: small
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: gfortran
Fortran77 compiler abs: /usr/bin/gfortran
Fortran90 compiler: gfortran
Fortran90 compiler abs: /usr/bin/gfortran
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
C++ exceptions: no
Thread support: posix (mpi: no, progress: no)
Sparse Groups: no
Internal debug support: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
Heterogeneous support: yes
mpirun default --prefix: no
MPI I/O support: yes
MPI_WTIME support: gettimeofday
Symbol visibility support: yes
FT Checkpoint support: yes (checkpoint thread: no)
MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.5)
MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.5)
MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.5)
MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.5)
MCA carto: file (MCA v2.0, API v2.0, Component v1.4.5)
MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.5)
MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.5)
MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.5)
MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.5)
MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.5)
MCA crs: none (MCA v2.0, API v2.0, Component v1.4.5)
MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.5)
MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.5)
MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.5)
MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: self (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.5)
MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.5)
MCA io: romio (MCA v2.0, API v2.0, Component v1.4.5)
MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.5)
MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.5)
MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.5)
MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.5)
MCA pml: crcpw (MCA v2.0, API v2.0, Component v1.4.5)
MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.5)
MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.5)
MCA pml: v (MCA v2.0, API v2.0, Component v1.4.5)
MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.5)
MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.5)
MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.5)
MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.5)
MCA btl: self (MCA v2.0, API v2.0, Component v1.4.5)
MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.5)
MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.5)
MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.5)
MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.5)
MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.5)
MCA crcp: bkmrk (MCA v2.0, API v2.0, Component v1.4.5)
MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.5)
MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.5)
MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.5)
MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.5)
MCA odls: default (MCA v2.0, API v2.0, Component v1.4.5)
MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4.5)
MCA ras: tm (MCA v2.0, API v2.0, Component v1.4.5)
MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4.5)
MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4.5)
MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.5)
MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.5)
MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.4.5)
MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.5)
MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.5)
MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.5)
MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.5)
MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.5)
MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.5)
MCA plm: tm (MCA v2.0, API v2.0, Component v1.4.5)
MCA snapc: full (MCA v2.0, API v2.0, Component v1.4.5)
MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.5)
MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.5)
MCA ess: env (MCA v2.0, API v2.0, Component v1.4.5)
MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.5)
MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.5)
MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.5)
MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.5)
MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.5)
MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.5)
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5