Hi Quincey, I have run several tests based on the solutions you proposed, but I couldn't free the memory. Anyway, let me share some interesting results. I am using 1.10.1-pre2.
1- Calling H5garbage_collect() after H5Fclose has no effect on the memory usage of the process.
2- Building the library with "--enable-using-memchecker" has no effect on memory consumption in my example.
3- H5Pset_evict_on_close has no effect.
4- If every dataset is closed right after it is created, memory usage is much lower (27 MB) than if all datasets are created first and then all of them are closed (600 MB).
5- After writing data to all datasets, H5Fflush(fid, H5F_SCOPE_GLOBAL) takes very different times depending on the dataset open/close strategy. For 40K datasets and 300 records to write per dataset:
   a) Open and close each dataset every time data is written: 7 seconds (including flush time), a latency very similar to what I got with the previous release.
   b) Keep all datasets open, write to all of them, and then flush with H5Fflush(fid, H5F_SCOPE_GLOBAL): 95 seconds.

I hope this information helps. Thank you very much.

Regards,
Rodrigo

On 24 Apr 2017, at 20:11, Quincey Koziol <[email protected]> wrote:

Hi Rodrigo,

It sounds like the automatic memory manager in HDF5 is not what you want / need. You can force the memory to be freed early by calling H5garbage_collect() (https://support.hdfgroup.org/HDF5/doc/RM/RM_H5.html#Library-GarbageCollect), or you can disable the feature in the library by passing the --enable-using-memchecker flag to configure when you build the package.

Quincey

On Apr 21, 2017, at 1:29 PM, Castro Rojo, Rodrigo <[email protected]> wrote:

Hi,

I am using HDF5 storage in SWMR mode. I open files with a huge number of datasets and then close them, but after H5Fclose the amount of memory reserved by the process is the same. The same happens when data is written to a dataset. All memory reserved by the process (for chunks, caches, etc.) is not freed after the file is closed (H5Fclose). I have checked with valgrind, but no memory leak is detected.
It seems the memory is freed before the process finishes, but I need it freed when the file is closed. Is it possible to free the memory after H5Fclose without ending the process? A simplified example of my code follows the typical sequence:

---------------------------------------------------
#include "hdf5.h"
#include "hdf5_hl.h"
#include <stdlib.h>

// Number of tables to create
#define NUM_DATASETS 10000
#define CHUNK_SIZE 100

typedef struct {
    double data;
    long long timestamp;
} data_t;

int main(void)
{
    hid_t fid;
    hid_t sid;
    hid_t dcpl;
    hid_t pdsets[NUM_DATASETS];
    char dname[300];
    hsize_t dims[2] = {1, 0};                  /* Dataset starting dimensions */
    hsize_t max_dims[2] = {1, H5S_UNLIMITED};  /* Dataset maximum dimensions */
    hsize_t chunk_dims[2] = {1, CHUNK_SIZE};   /* Chunk dimensions */
    int i;

    printf("Creating file\n");

    // Open file
    fid = H5Fcreate("packet.h5", H5F_ACC_TRUNC | H5F_ACC_SWMR_WRITE,
                    H5P_DEFAULT, H5P_DEFAULT);

    // Create compound data type
    hid_t datatype = H5Tcreate(H5T_COMPOUND, sizeof(data_t));
    H5Tinsert(datatype, "Data", HOFFSET(data_t, data), H5T_NATIVE_DOUBLE);
    H5Tinsert(datatype, "Timestamp", HOFFSET(data_t, timestamp), H5T_NATIVE_LLONG);

    /* Create dataspace for creating datasets */
    if((sid = H5Screate_simple(2, dims, max_dims)) < 0)
        return 1;

    /* Create dataset creation property list */
    if((dcpl = H5Pcreate(H5P_DATASET_CREATE)) < 0)
        return -1;
    if(H5Pset_chunk(dcpl, 2, chunk_dims) < 0)
        return -1;

    printf("Creating %d datasets\n", NUM_DATASETS);

    // Create datasets
    for (i = 0; i < NUM_DATASETS; i++) {
        sprintf(dname, "dset_%d", i);
        if((pdsets[i] = H5Dcreate2(fid, dname, datatype, sid,
                                   H5P_DEFAULT, dcpl, H5P_DEFAULT)) < 0)
            return 1;
        if(H5Dclose(pdsets[i]) < 0)
            return -1;
    }

    printf("Closing everything\n");
    if(H5Pclose(dcpl) < 0)
        return -1;
    if(H5Sclose(sid) < 0)
        return -1;
    if(H5Tclose(datatype) < 0)
        return -1;
    if(H5Fclose(fid) < 0)
        return -1;

    printf("After closing...\n");
    return 0;
}
---------------------------------------------------

Thank you.
Rodrigo

----------------------------
Disclaimer: This message and its attached files are intended exclusively for their recipients and may contain confidential information. If you received this e-mail in error, you are hereby notified that any dissemination, copying, or disclosure of this communication is strictly prohibited and may be unlawful. In this case, please notify us by reply and delete this email and its contents immediately.
----------------------------

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5
