http://hdf-forum.184993.n3.nabble.com/Deflate-and-partial-chunk-writes-td4028713.html
Basically, my solution was to locally buffer data until I'd filled up an entire chunk before writing to disk. Otherwise there are some inefficiencies in the compression that will cause your files to be oversized.
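A minimal sketch of that buffering approach, assuming h5py and a gzip-compressed, chunked 1-D dataset (the dataset name, chunk size, and the append_samples/flush_chunk helpers are illustrative, not from the thread):

```python
import numpy as np
import h5py

CHUNK = 4096                 # chunk length; also the size of the local buffer
_buffer = []                 # samples held in memory until a full chunk is ready

def flush_chunk(path, dset_name="samples"):
    """Open the file, append exactly one full chunk, and close it again."""
    block = np.asarray(_buffer[:CHUNK])
    with h5py.File(path, "a") as f:
        if dset_name not in f:
            f.create_dataset(dset_name, shape=(0,), maxshape=(None,),
                             dtype="f8", chunks=(CHUNK,), compression="gzip")
        dset = f[dset_name]
        n = dset.shape[0]
        dset.resize((n + CHUNK,))
        dset[n:n + CHUNK] = block   # chunk-aligned write: each chunk is compressed once
    del _buffer[:CHUNK]

def append_samples(path, samples):
    """Buffer incoming samples; only touch the file once a whole chunk is full."""
    _buffer.extend(samples)
    while len(_buffer) >= CHUNK:
        flush_chunk(path)
```

Since every write covers a whole chunk and the file is closed after each flush, a crash can only lose the samples still sitting in the in-memory buffer, and the deflate filter never has to rewrite a partially filled chunk.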
--Patrick

On 10/5/2016 1:06 AM, [email protected] wrote:
From: Carlos Penedo Rocha <[email protected]>
To: "[email protected]" <[email protected]>
Subject: [Hdf-forum] File size

Hi,

I have a scenario in which my compressed h5 file needs to be updated with new data that comes in every, say, 5 seconds.

Approach #1: keep the file open and write data as it arrives, or write a buffer at once.
Approach #2: open the file (RDWR), write the data (or a buffer), and then close the file.

Approach #1 is not desirable for my case because if there's any problem (outage, etc.), the h5 file will likely get corrupted. Also, if I want to have a look at the file, I can't, because it's still open for writing.

Approach #2 addresses the issue above, BUT I noticed that if I open/write/close the file every 5 seconds, the compression gets really bad and the file size goes up big time. Approach #1 doesn't suffer from this problem.

So, my question is: is there an "Approach #3" that gives me the best of both worlds? Less likely to leave me with a corrupted h5 file and, at the same time, a good compression ratio?

Thanks,
Carlos R.
