I would suggest you use the Fortran 2003 interface for h5dwrite_f instead:
SUBROUTINE h5dwrite_f(dset_id, mem_type_id, buf, hdferr, &
                      mem_space_id, file_space_id, xfer_prp)
  INTEGER(HID_T), INTENT(IN)  :: dset_id
  INTEGER(HID_T), INTENT(IN)  :: mem_type_id
  TYPE(C_PTR)   , INTENT(IN)  :: buf
  INTEGER       , INTENT(OUT) :: hdferr
  INTEGER(HID_T), INTENT(IN) , OPTIONAL :: mem_space_id
  INTEGER(HID_T), INTENT(IN) , OPTIONAL :: file_space_id
  INTEGER(HID_T), INTENT(IN) , OPTIONAL :: xfer_prp
With this interface you don't have to worry about passing the dimensions of the
array to the API.
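
For example, a minimal sketch of such a call could look like the following (the
file name, dataset name and array shape are made up for illustration):

PROGRAM write_f03
  USE hdf5
  USE iso_c_binding
  IMPLICIT NONE

  INTEGER, PARAMETER :: nx = 4, ny = 3
  INTEGER, DIMENSION(nx,ny), TARGET :: buf
  INTEGER(HSIZE_T), DIMENSION(2)    :: dims = (/ nx, ny /)
  INTEGER(HID_T) :: file_id, space_id, dset_id
  INTEGER        :: hdferr
  TYPE(C_PTR)    :: f_ptr

  buf = 1

  CALL h5open_f(hdferr)
  CALL h5fcreate_f("example.h5", H5F_ACC_TRUNC_F, file_id, hdferr)
  CALL h5screate_simple_f(2, dims, space_id, hdferr)
  CALL h5dcreate_f(file_id, "dset", H5T_NATIVE_INTEGER, space_id, dset_id, hdferr)

  ! Fortran 2003 form: the buffer goes in as a C pointer, no dims argument
  f_ptr = C_LOC(buf(1,1))
  CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, f_ptr, hdferr)

  CALL h5dclose_f(dset_id, hdferr)
  CALL h5sclose_f(space_id, hdferr)
  CALL h5fclose_f(file_id, hdferr)
  CALL h5close_f(hdferr)
END PROGRAM write_f03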
You can also use h5kind_to_type to obtain the HDF5 memory datatype
corresponding to a given Fortran KIND, for example:
!
! Find the HDF type corresponding to the specified KIND
!
h5_kind_type_i = h5kind_to_type(ip,H5_INTEGER_KIND)
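
For instance, to write a buffer of 64-bit integers you could do something along
these lines (dset_id and hdferr are assumed to be set up as in the sketch
above; ip is just an example kind parameter):

  INTEGER, PARAMETER :: ip = SELECTED_INT_KIND(18)   ! 64-bit integer kind
  INTEGER(KIND=ip), DIMENSION(4,3), TARGET :: ibuf
  INTEGER(HID_T) :: h5_kind_type_i
  TYPE(C_PTR)    :: f_ptr

  ! Find the HDF type corresponding to the specified KIND
  h5_kind_type_i = h5kind_to_type(ip, H5_INTEGER_KIND)

  ibuf = 42_ip
  f_ptr = C_LOC(ibuf(1,1))
  CALL h5dwrite_f(dset_id, h5_kind_type_i, f_ptr, hdferr)

Note that the dataset itself would also need to be created with a matching
64-bit file datatype, e.g. H5T_STD_I64LE.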
Take a look at h5ex_d_rdwr_kind_F03.f90 for an example of how to handle
different kinds of integers and reals:
https://www.hdfgroup.org/HDF5/examples/api18-fortran.html
Scot
> On Nov 10, 2015, at 2:17 AM, victor sv <[email protected]> wrote:
>
> Hi all,
>
> I'm trying to implement a layer to export the data of our application to XDMF
> format on top of HDF5. The data
> types that we want to handle are integers and reals of both single and double
> precision.
>
> I would like to know the current state of the HDF5 Fortran interface for
> handling double precision integers.
>
> In particular, if we look at the interface of the H5DWrite_f subroutine
> below (and its comments) we can see that the
> data type of the buffer (buf) could be INTEGER, and the data type of the
> dimensions is HSIZE_T.
>
> SUBROUTINE h5dwrite_vl_f(dset_id, mem_type_id, buf, dims, len, hdferr, &
>                          mem_space_id, file_space_id, xfer_prp)
>   IMPLICIT NONE
>   ...
>
>   INTEGER(HID_T), INTENT(IN) :: mem_type_id   ! Memory datatype identifier
>   TYPE, INTENT(IN), DIMENSION(dims(1),dims(2)) :: buf
>                                                ! Data buffer; may be a scalar
>                                                ! or an array
>                                                ! TYPE must be one of the following:
>                                                !   INTEGER
>                                                !   REAL
>                                                !   CHARACTER
>   INTEGER(HSIZE_T), INTENT(IN), DIMENSION(2) :: dims
>                                                ! Array to hold corresponding
>                                                ! dimension sizes of data buffer buf
>                                                ! dim(k) has value of the k-th
>                                                ! dimension of buffer buf
>                                                ! Values are ignored if buf is a scalar
>
> ...
>
> I've read that the HSIZE_T data type depends on the architecture and is
> defined at HDF5 compilation time, and I can see that in my own build it is a
> double precision (64-bit) integer. Is there any case where this type is a
> single precision (32-bit) integer? In any case, how can I handle writing
> datasets bigger than max(HSIZE_T)?
>
> A different question is about the data type of the raw data. With my HDF5
> build it seems that a call to H5DWrite_f doesn't compile if the buf actual
> argument is a double precision integer. Is there a native Fortran HDF5
> mem_type_id for double precision integers?
>
> In some forum I've also read that H5T_NATIVE_INTEGER could be a double
> precision integer. If this is true, is it possible to handle single precision
> and double precision integers in the same application/software?
>
> Thanks in advance,
> Víctor.
>
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5