Hi Neil,

Is the code using a 0-sized array because a process does not have data to write?
We have an FAQ with examples of how to write data collectively and 
independently when one process does not have data
or does not need to write data. See:

                https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata
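
In case it helps while you are looking at that page: the gist of the FAQ is that a rank with nothing to write still participates in the collective H5Dwrite, but with an empty selection (H5Sselect_none) on both the memory and file dataspaces. A rough sketch along those lines (function and variable names here are placeholders, not taken from the FAQ):

    #include <hdf5.h>

    /* Sketch: collective write of a 2-D (total_rows x ncols) dataset where
     * some ranks may own zero rows.  Assumes file_id was opened with an
     * MPI-IO file access property list; buf should point at valid storage
     * even when local_rows is 0 (a dummy variable is fine). */
    void write_rows(hid_t file_id, hsize_t total_rows, hsize_t ncols,
                    hsize_t row_offset, hsize_t local_rows, const double *buf)
    {
        hsize_t dims[2]   = {total_rows, ncols};
        hid_t   filespace = H5Screate_simple(2, dims, NULL);
        hid_t   dset      = H5Dcreate2(file_id, "dset", H5T_NATIVE_DOUBLE,
                                       filespace, H5P_DEFAULT, H5P_DEFAULT,
                                       H5P_DEFAULT);
        hid_t   memspace;

        if (local_rows > 0) {
            hsize_t start[2] = {row_offset, 0};
            hsize_t count[2] = {local_rows, ncols};
            memspace = H5Screate_simple(2, count, NULL);
            H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                                count, NULL);
        } else {
            /* No data on this rank: still join the collective call,
             * but select nothing in memory and in the file. */
            memspace = H5Screate_simple(2, dims, NULL);
            H5Sselect_none(memspace);
            H5Sselect_none(filespace);
        }

        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* Every rank calls H5Dwrite; the empty-selection ranks contribute
         * no data but keep the collective operation consistent. */
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

        H5Pclose(dxpl);
        H5Sclose(memspace);
        H5Sclose(filespace);
        H5Dclose(dset);
    }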

-Barbara
[email protected]



From: Hdf-forum [mailto:[email protected]] On Behalf Of 
Carlson, Neil
Sent: Monday, February 20, 2017 11:46 AM
To: [email protected]
Subject: [Hdf-forum] Parallel HDF5 and 0-sized arrays


I'm updating an MPI-parallel code that does serial HDF5 output (data gathered
to rank 0 before being written) to use parallel HDF5.  In certain cases the
code will write a 0-sized array; i.e., an array where one of the dimensions
is 0.  This has worked fine in serial -- the H5Dwrite call has no issues.
But in parallel, H5Dwrite throws an error:

    H5Dio.c line 234 in H5Dwrite(): can't prepare for writing data

In the parallel case, the xfer_plist_id argument is set with:
    H5Pset_dxpl_mpio(xfer_plist_id, H5FD_MPIO_COLLECTIVE)
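
To make that concrete, the write path reduces to something like the following
(heavily simplified; the real calls are buried inside the wrapper library, so
all of these names are made up):

    #include <hdf5.h>

    /* Simplified sketch of the failing case: a dataset whose first
     * dimension is 0, written collectively.  file_id is assumed to come
     * from a file opened with an MPI-IO file access property list. */
    void write_empty(hid_t file_id, hsize_t ncols)
    {
        hsize_t dims[2] = {0, ncols};
        double  dummy   = 0.0;      /* nothing is actually transferred */
        hid_t   space   = H5Screate_simple(2, dims, NULL);
        hid_t   dset    = H5Dcreate2(file_id, "empty", H5T_NATIVE_DOUBLE,
                                     space, H5P_DEFAULT, H5P_DEFAULT,
                                     H5P_DEFAULT);
        hid_t   dxpl    = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* Works with the serial driver; errors here with the MPI-IO driver. */
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, space, space, dxpl, &dummy);

        H5Pclose(dxpl);
        H5Dclose(dset);
        H5Sclose(space);
    }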

Is this a known issue with parallel HDF5?

I've experimented, and it seems that skipping the H5Dwrite call in the case
of a 0-sized array works.  Is that a legitimate thing to do?  From a naive
user's perspective (mine), that call is a no-op, though I don't know how else
it might be altering the file (metadata?).
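
For what it's worth, the workaround I'm testing amounts to a guard like this
(a wrapper-level sketch with invented names; the global dims are the same on
every rank, so all ranks skip together and none is left waiting in a
collective call):

    #include <hdf5.h>

    /* Hypothetical wrapper: skip the write entirely when the dataset has
     * zero elements.  Because dims describes the global dataset size,
     * every rank takes the same branch. */
    static herr_t write_if_nonempty(hid_t dset, hid_t memspace,
                                    hid_t filespace, hid_t dxpl,
                                    const void *buf, const hsize_t dims[2])
    {
        if (dims[0] == 0 || dims[1] == 0)
            return 0;   /* treat the 0-sized write as a no-op */
        return H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace,
                        dxpl, buf);
    }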

For background, the code does not use HDF5 directly, but indirectly
through third party libraries (one serial, and a different one parallel).
So I'm debugging code I have little understanding of.

Thanks for any advice.

-Neil