Hi, I think the current default chunk cache size behaviour of HDF5 is inadequate for the kind of data it is typically used with. This is especially a problem when compression is used: most users/readers never adjust the chunk cache, and with the default 1 MiB cache a chunk larger than the cache is never retained, so every partial read decompresses the whole chunk again. I think this hinders the adoption of compression and other kinds of filters.
I would like to propose that, when a dataset is opened, its chunk cache be set to the larger of the file-level chunk cache size (the one set with H5Pset_cache) and the dataset's chunk size, so that the cache can always hold at least one chunk. I think this would benefit the vast majority of workloads. Cheers, Filipe
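P.S. To make the proposal concrete, here is a rough sketch of what an application can already do per dataset today with H5Pset_chunk_cache, i.e. the behaviour I'd like the library to apply by default. The file name "data.h5", the dataset name "/dset", and the 1 MiB figure (the documented default rdcc_nbytes) are just placeholders for the example, not part of the proposal itself:

    #include <hdf5.h>
    #include <stdio.h>

    int main(void)
    {
        const size_t default_cache_bytes = 1024 * 1024;  /* HDF5's default rdcc_nbytes */

        hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);

        /* First open with defaults only to discover the chunk size. */
        hid_t dset = H5Dopen2(file, "/dset", H5P_DEFAULT);
        hid_t dcpl = H5Dget_create_plist(dset);

        hsize_t cdims[H5S_MAX_RANK];
        int ndims = H5Pget_chunk(dcpl, H5S_MAX_RANK, cdims);

        hid_t dtype = H5Dget_type(dset);
        size_t chunk_bytes = H5Tget_size(dtype);
        for (int i = 0; i < ndims; i++)
            chunk_bytes *= (size_t)cdims[i];

        H5Tclose(dtype);
        H5Pclose(dcpl);
        H5Dclose(dset);

        /* Re-open with a per-dataset cache large enough for at least one chunk:
           the larger of the file-level cache size and the chunk size. */
        size_t cache_bytes = chunk_bytes > default_cache_bytes ? chunk_bytes
                                                               : default_cache_bytes;
        hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
        H5Pset_chunk_cache(dapl, H5D_CHUNK_CACHE_NSLOTS_DEFAULT,
                           cache_bytes, H5D_CHUNK_CACHE_W0_DEFAULT);

        dset = H5Dopen2(file, "/dset", dapl);

        printf("chunk = %zu bytes, cache = %zu bytes\n", chunk_bytes, cache_bytes);

        H5Dclose(dset);
        H5Pclose(dapl);
        H5Fclose(file);
        return 0;
    }

Having the library do this automatically on H5Dopen would spare every reader from writing this boilerplate (and from knowing it is needed in the first place).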
