This post is a follow-up to my previous post: "memory leak with reading chunked data".
Test dataset = (250, 400, 300), chunk size = (1, 50, 75).
The test loops over the first two dimensions, reading a (1, 1, 300) hyperslab at each step; a rough sketch of the loop is below.
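
For reference, the read loop looks roughly like this (a sketch only; the file name "test.h5", dataset name "data" and the int datatype are placeholders, not necessarily what my real program uses):

#include "hdf5.h"

int main(void)
{
    /* Placeholder file/dataset names - see note above. */
    hid_t file   = H5Fopen("test.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset   = H5Dopen2(file, "data", H5P_DEFAULT);
    hid_t fspace = H5Dget_space(dset);

    /* One (1, 1, 300) hyperslab is read per iteration. */
    hsize_t count[3] = {1, 1, 300};
    hid_t   mspace   = H5Screate_simple(3, count, NULL);
    static int buf[300];

    for (hsize_t i = 0; i < 250; i++) {
        for (hsize_t j = 0; j < 400; j++) {
            hsize_t start[3] = {i, j, 0};
            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
            H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
        }
    }

    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}
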
After further investigation I believe it is incorrect to call the situation I
am encountering a "memory leak": as far as I can tell, all of the allocated
memory is eventually freed. However, memory use does grow throughout the run,
and in my actual use case this causes the program to be killed for excessive
memory use.

I have profiled with valgrind --tool=massif and run the test program with the
chunk cache set to zero (see the snippet below). Memory is allocated steadily
throughout the run and is not freed until the end. The allocations can be
traced back to calls to H5D__btree_idx_get_addr, which eventually requests
18,208 bytes (in my case) from H5FL_blk_malloc.

Is this the intended behaviour, or a bug?
Is there any way to limit this growth in memory?
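
For completeness, this is roughly how the chunk cache was disabled for the test (again a sketch; "data" and "read_test" are placeholders for my real dataset name and test binary):

/* Open the dataset with the raw data chunk cache disabled:
 * rdcc_nbytes = 0 turns the chunk cache off for this dataset. */
hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
H5Pset_chunk_cache(dapl, H5D_CHUNK_CACHE_NSLOTS_DEFAULT,
                   0 /* rdcc_nbytes */, H5D_CHUNK_CACHE_W0_DEFAULT);
hid_t dset = H5Dopen2(file, "data", dapl);
H5Pclose(dapl);

and the profiling run was along the lines of:

valgrind --tool=massif ./read_test
ms_print massif.out.<pid>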

Cheers
Stuart
