Hi,

You can make a chunk as large as 4 GB, which may cover the entire dataset at once if you prefer. For datasets larger than 4 GB, splitting them into chunks that can be addressed with 32-bit indices becomes important.
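For illustration, here is a minimal sketch using the C API of creating a chunked, deflate-compressed dataset (the file name, dataset name, and sizes are placeholder assumptions, not anything from your setup):

#include "hdf5.h"

int main(void)
{
    /* 1-D dataset of 1,000,000 doubles, split into chunks of 100,000
       elements. A single chunk covering the full extent would also
       satisfy the filter requirement, as long as it stays below the
       4 GB chunk-size limit. */
    hsize_t dims[1]  = {1000000};
    hsize_t chunk[1] = {100000};

    hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);

    /* Compression filters require a chunked layout. */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_deflate(dcpl, 6);   /* gzip, compression level 6 */

    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}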

If you always read an entire dataset at once, then using a single chunk may be just fine. However, if you ever use hyperslabs to read only part of a bigger dataset, then the ability to read, and more importantly to decompress, only the chunks of interest instead of the entire dataset is beneficial. Also, with multiple chunks, each chunk can be decompressed independently and thus in parallel (though I don't know whether any filters are implemented that way for serial HDF5).
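Continuing the hypothetical file above, a sketch of such a hyperslab read: the library only reads and decompresses the chunks that intersect the selection.

#include <stdlib.h>
#include "hdf5.h"

int main(void)
{
    hid_t file = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "data", H5P_DEFAULT);

    /* Select 100,000 elements starting at offset 250,000; with
       100,000-element chunks, only the two chunks intersecting this
       range are read and decompressed, not the whole dataset. */
    hsize_t start[1] = {250000};
    hsize_t count[1] = {100000};

    hid_t fspace = H5Dget_space(dset);
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);

    hid_t mspace = H5Screate_simple(1, count, NULL);

    double *buf = malloc(count[0] * sizeof *buf);
    H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);

    free(buf);
    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}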

      Werner


On 06.09.2016 18:31, MEYNARD Rolih wrote:
Hi,

I would like to know why it is required to create chunks for compression in HDF5.

For example, why is it not possible to compress a dataset without creating chunks?

Thank you,

Rolih


--
___________________________________________________________________________
Dr. Werner Benger                Visualization Research
Center for Computation & Technology at Louisiana State University (CCT/LSU)
2019  Digital Media Center, Baton Rouge, Louisiana 70803
Tel.: +1 225 578-4809                       Fax.: +1 225 578-5362

