Hello HDF5 developers!

Currently, the HDF5 library provides two ways to control the chunk cache size when accessing datasets:

* H5Pset_cache / H5Pget_cache
* H5Pset_chunk_cache / H5Pget_chunk_cache

The former controls the default cache buffer size for all datasets in a file, while the latter allows fine-tuning the cache buffer size on a per-dataset basis.
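
For reference, here is a minimal sketch of how I use the two mechanisms today (the file name, dataset path, and the concrete slot/byte values are just placeholders):

    #include "hdf5.h"

    int main(void)
    {
        /* Per-file default: a 50 MB raw chunk cache for every dataset
         * opened through this file access property list. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_cache(fapl, 0 /* ignored */, 521, 50 * 1024 * 1024, 0.75);
        hid_t file = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl);

        /* Per-dataset override: give this particular dataset 200 MB. */
        hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
        H5Pset_chunk_cache(dapl, 521, 200 * 1024 * 1024, 0.75);
        hid_t dset = H5Dopen2(file, "/some/dataset", dapl);

        /* ... read/write ... */

        H5Dclose(dset);
        H5Pclose(dapl);
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }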

This works nicely in many cases. However, working with bigger, multi-dataset HDF5 files reveals a considerable flaw. Caching is a way to trade memory for speed. How much memory one is willing to trade naturally depends on the total memory available, i.e. memory is a (scarce) global resource. Thus, more often than not it is desirable to set a *global* cache size for *all* HDF5 datasets, regardless of the number of datasets (and even files) open.

E.g., I'd like to be able to say: "Use no more than 1 GB of memory for caching" instead of "Use no more than 50 MB of memory for caching each dataset". The latter is not as useful as the former, as the number of datasets may vary greatly.
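
The closest workaround I see today is to split a fixed global budget across datasets by hand, roughly as in the hypothetical helper below (open_with_budget is my own illustration, not an HDF5 call). It only works if the number of datasets is known up front and never changes, which is exactly the assumption that does not hold in practice:

    #include "hdf5.h"

    /* Hypothetical application-level workaround: give each dataset an
     * equal slice of a fixed global cache budget. */
    static hid_t open_with_budget(hid_t file, const char *path,
                                  size_t global_budget, size_t n_datasets)
    {
        hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
        H5Pset_chunk_cache(dapl, 521, global_budget / n_datasets, 0.75);
        hid_t dset = H5Dopen2(file, path, dapl);
        H5Pclose(dapl); /* the dataset keeps its own copy of the dapl */
        return dset;
    }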

Currently there seems to be no way to impose a global cache size limit. Would it be hard to implement such a feature in one of the future versions?

Thank you for your work,
Andrey Paramonov
