Hi Dana,

Thank you very much for your proposal. The kind of memory consumption you describe is an effect we observed in some performance tests we designed for our archiving system. In these tests we were able to reproduce this memory behaviour of the Linux OS with a simple “dd if=/dev/zero of=file bs=4k count=1000000”: as you describe, the OS caches as much memory as it can up to a limit, but when a process requires memory, the OS hands it back to that process.
Regarding the case you mention: I have run some tests, and although the Linux OS on the machine takes almost all the memory, that memory is not associated with the process that performs the file operations. Even if the caches are dropped, the memory consumption of the process does not change. It is also important to note that the memory I need to free eventually causes a critical failure in the process.

Regards,
Rodrigo

> On 24 Apr 2017, at 19:13, Dana Robinson <[email protected]> wrote:
>
> Hi Rodrigo,
>
> From what I understand, glibc (I'm using Linux as an example) can either
> service memory requests using sbrk(2) to increase the process break (smaller
> allocations) or mmap(2) (larger allocations, over ~128k on most systems but I
> think this is dynamic now). Anything served from mmap() should, in theory, be
> immediately reclaimable by the kernel after a call to free() but I believe
> this is done lazily. You can probably check this by writing to drop_caches
> (as described here:
> https://unix.stackexchange.com/questions/17936/setting-proc-sys-vm-drop-caches-to-clear-cache
> ) after the H5Fclose() call and seeing what happens to your program's memory
> footprint. I would imagine that most of what is causing your problem (chunk
> caches, etc.) will be larger than the memory allocator inflection point and
> is simply being cached. I've always been under the impression that the OS
> typically discards those freed pages easily when other processes need the
> memory so you shouldn't be forced to go to the disk for swap space.
>
> Dana Robinson
> Software Engineer
> The HDF Group
>
> -----Original Message-----
> From: Hdf-forum [mailto:[email protected]] On Behalf Of
> Castro Rojo, Rodrigo
> Sent: Friday, April 21, 2017 1:30 PM
> To: [email protected]
> Subject: [Hdf-forum] How to free memory after H5Fclose
>
> Hi,
>
> I am using HDF5 storage in SWMR mode.
> I open files with a huge number of datasets and then close them, but after
> H5Fclose the amount of memory reserved by the process stays the same.
>
> The same effect happens when data is written to a dataset: all the memory
> reserved by the process (for chunks, caches, etc.) is not freed after the
> file is closed (H5Fclose).
>
> I have checked with valgrind and no memory leak is detected. It seems the
> memory is freed before the process finishes, but I need it freed when the
> file is closed.
>
> Is it possible to free this memory after H5Fclose without finishing the
> process?
>
> A simplified example of my code follows the typical sequence:
>
> ———————————————————————————————
> #include "hdf5.h"
> #include "hdf5_hl.h"
> #include <stdio.h>
> #include <stdlib.h>
>
> // Number of datasets to create
> #define NUM_DATASETS 10000
> #define CHUNK_SIZE 100
>
> typedef struct
> {
>     double data;
>     long long timestamp;
> } data_t;
>
> int main(void)
> {
>     hid_t fid;
>     hid_t sid;
>     hid_t dcpl;
>     hid_t pdsets[NUM_DATASETS];
>     char dname[300];
>     hsize_t dims[2] = {1, 0};                 /* Dataset starting dimensions */
>     hsize_t max_dims[2] = {1, H5S_UNLIMITED}; /* Dataset maximum dimensions */
>     hsize_t chunk_dims[2] = {1, CHUNK_SIZE};  /* Chunk dimensions */
>     int i;
>
>     printf("Creating file\n");
>
>     // Create file
>     fid = H5Fcreate("packet.h5", H5F_ACC_TRUNC | H5F_ACC_SWMR_WRITE,
>                     H5P_DEFAULT, H5P_DEFAULT);
>
>     // Create compound datatype
>     hid_t datatype = H5Tcreate(H5T_COMPOUND, sizeof(data_t));
>     H5Tinsert(datatype, "Data", HOFFSET(data_t, data), H5T_NATIVE_DOUBLE);
>     H5Tinsert(datatype, "Timestamp", HOFFSET(data_t, timestamp),
>               H5T_NATIVE_LLONG);
>
>     /* Create dataspace for creating datasets */
>     if ((sid = H5Screate_simple(2, dims, max_dims)) < 0)
>         return 1;
>
>     /* Create dataset creation property list */
>     if ((dcpl = H5Pcreate(H5P_DATASET_CREATE)) < 0)
>         return -1;
>     if (H5Pset_chunk(dcpl, 2, chunk_dims) < 0)
>         return -1;
>
>     printf("Creating %d datasets\n", NUM_DATASETS);
>     // Create datasets
>     for (i = 0; i < NUM_DATASETS; i++) {
>         sprintf(dname, "dset_%d", i);
>         if ((pdsets[i] = H5Dcreate2(fid, dname, datatype, sid,
>                                     H5P_DEFAULT, dcpl, H5P_DEFAULT)) < 0)
>             return 1;
>         if (H5Dclose(pdsets[i]) < 0)
>             return -1;
>     }
>
>     printf("Closing everything\n");
>
>     if (H5Pclose(dcpl) < 0)
>         return -1;
>     if (H5Sclose(sid) < 0)
>         return -1;
>     if (H5Tclose(datatype) < 0)
>         return -1;
>     if (H5Fclose(fid) < 0)
>         return -1;
>
>     printf("After closing...\n");
>
>     return 0;
> }
> ---------------------------------------------------
>
> Thank you.
>
> Rodrigo
> ----------------------------
> Disclaimer:
> This message and its attached files are intended exclusively for their
> recipients and may contain confidential information. If you received this
> e-mail in error, you are hereby notified that any dissemination, copy or
> disclosure of this communication is strictly prohibited and may be unlawful.
> In this case, please notify us by a reply and delete this email and its
> contents immediately.
> ----------------------------
>
>
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> [email protected]
> http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
> Twitter: https://twitter.com/hdf5
