I'm updating an MPI-parallel code that does serial HDF5 output (data gathered
to rank 0 before being written) to use parallel HDF5. In certain
cases the code will write a 0-sized array; i.e., an array where one of the
dimensions is 0. This has worked fine in serial -- the H5Dwrite call has
no issues. But in parallel, H5Dwrite fails with an error:
H5Dio.c line 234 in H5Dwrite(): can't prepare for writing data
In that H5Dwrite call, the xfer_plist_id argument is a transfer property
list with collective MPI-IO enabled via
H5Pset_dxpl_mpio(xfer_plist_id, H5FD_MPIO_COLLECTIVE).
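
For reference, here is roughly what I think the failing path boils down to
(the actual calls are buried inside the parallel library, so the function
and variable names below are my own placeholders, not the real code):

#include <hdf5.h>

/* Sketch of the parallel write path as I understand it. local_n can be
   0 on every rank when the global array is 0-sized. */
static herr_t write_block(hid_t dset_id, hsize_t offset, hsize_t local_n,
                          const double *local_buf)
{
    /* Memory dataspace for this rank's piece (possibly 0 elements) */
    hsize_t count[1] = { local_n };
    hid_t memspace = H5Screate_simple(1, count, NULL);

    /* Select this rank's piece of the dataset in the file */
    hid_t filespace = H5Dget_space(dset_id);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL, count, NULL);

    /* Collective transfer, as in the real code */
    hid_t xfer_plist_id = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(xfer_plist_id, H5FD_MPIO_COLLECTIVE);

    /* This is the call that fails when the array is 0-sized */
    herr_t status = H5Dwrite(dset_id, H5T_NATIVE_DOUBLE, memspace, filespace,
                             xfer_plist_id, local_buf);

    H5Pclose(xfer_plist_id);
    H5Sclose(filespace);
    H5Sclose(memspace);
    return status;
}
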
Is this a known issue with parallel HDF5?
I've experimented, and it seems that skipping the H5Dwrite call in
the case of a 0-sized array works. Is that a legitimate thing to do?
From a naive user's perspective (mine), that call is a no-op, though
I don't know how else it might be altering the file (metadata?).
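
Concretely, the workaround I'm testing is just a guard around the write
(same placeholder names as in the sketch above). It seems to work only
because, in my case, a 0-sized array means every rank has 0 elements, so
all ranks skip the collective call together; otherwise I'd expect the
remaining ranks to hang:

herr_t status = 0;
if (local_n > 0) {
    /* Write only when there is something to write; every rank takes the
       same branch here, so the collective calls stay matched. */
    status = H5Dwrite(dset_id, H5T_NATIVE_DOUBLE, memspace, filespace,
                      xfer_plist_id, local_buf);
}
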
For background, the code does not use HDF5 directly, but indirectly
through third-party libraries (one serial, and a different one parallel).
So I'm debugging code I have little understanding of.
Thanks for any advice.
-Neil