I'm trying to write an HDF5 file with dataset compression from an MPI job. (Using PETSc 3.8 compiled against MVAPICH2, if that matters.) After running into the "Parallel I/O does not support filters yet" error message in release versions of HDF5, I have turned to the develop branch. Clearly there has been much work toward collective filtered I/O in the run-up to a 1.11 (1.12?) release; equally clearly it is not quite ready for prime time yet. So far I've encountered a livelock scenario with ZFP, reproduced it with SZIP, and, with no filters at all, obtained this nifty error message:
ex12: H5Dchunk.c:1849: H5D__create_chunk_mem_map_hyper: Assertion `fm->m_ndims==fm->f_ndims' failed.

Has anyone on this list been able to write parallel HDF5 using a recent state of the develop branch, with or without filters configured?

Thanks,
- Michael
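For reference, here is a minimal sketch of the kind of write I'm attempting: a chunked, filtered dataset written collectively through the MPI-IO driver. File name, dataset name, sizes, and the deflate filter are illustrative stand-ins (my actual runs use ZFP and SZIP); nothing here is from PETSc itself. Note that the assertion above complains that the memory and file dataspace ranks differ, so the sketch deliberately uses a memory dataspace of the same rank as the file selection.

```c
/* Sketch: collective write of a chunked, compressed dataset with
 * parallel HDF5. Compile with h5pcc, run under mpiexec. Names and
 * sizes are illustrative. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file through the MPI-IO virtual file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One row of 1024 doubles per rank; chunking is required for filters. */
    hsize_t dims[2]  = {(hsize_t)nprocs, 1024};
    hsize_t chunk[2] = {1, 1024};
    hid_t fspace = H5Screate_simple(2, dims, NULL);
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    H5Pset_deflate(dcpl, 6);   /* placeholder; SZIP/ZFP would be set here */
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, fspace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Each rank selects its own row of the file dataspace; the memory
     * dataspace has the same rank (2) as the file selection. */
    hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, 1024};
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(2, count, NULL);

    double buf[1024];
    for (int i = 0; i < 1024; i++) buf[i] = rank + i * 1e-3;

    /* Filtered parallel writes must use collective transfer mode. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    H5Pclose(dcpl); H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```

With H5Pset_deflate commented out (i.e. chunked but unfiltered), this is essentially the shape of the run that hit the fm->m_ndims==fm->f_ndims assertion for me.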
