Hi,

I am trying to use the Java HDF5 interface (JHI5) in an application server environment, where I am writing to many different HDF5 files within a single JVM instance.

However, while using HDF5 I am running into memory issues: writing to an HDF5 file allocates memory that is not deallocated even after the file has been written and closed successfully.

So I would like to know if there is a way to release all allocated memory after writing to a file with the JHI5 library.

I have already made sure that every handle is closed, and I have also tried to limit the cache sizes using H5Pset_cache and H5Pset_chunk_cache. However, changing the cache sizes does not eliminate the problem: the memory is still not deallocated after the file is closed. Calling H5garbage_collect does not seem to change this behaviour either.
I saw in the docs that the native library provides H5Pset_evict_on_close 
(https://support.hdfgroup.org/HDF5/doc/RM/RM_H5P.html#Property-SetEvictOnClose), but this call does not appear to be available in the JHI5 version I am using.
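
For reference, if that binding were exposed on the Java side, I would expect it to be set on the file access property list roughly like this (a sketch only; the C function was added in HDF5 1.10.1, and I am assuming a newer JHI5 build mirrors its signature and uses long ids):

// Sketch, not available in my current JHI5 version: assumes a build that
// exposes the C function H5Pset_evict_on_close (added in HDF5 1.10.1).
long fapl_id = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
H5.H5Pset_evict_on_close(fapl_id, true); // evict the file's cached objects when it is closed
long file_id = H5.H5Fcreate(filename, HDF5Constants.H5F_ACC_TRUNC,
    HDF5Constants.H5P_DEFAULT, fapl_id);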

Is there any other way to make sure that all memory is deallocated, or am I doing something wrong? For reference, here is the example code I am using:

final long[] dims = { 0, 0 };
final long[] maxdims = { HDF5Constants.H5S_UNLIMITED, HDF5Constants.H5S_UNLIMITED };
final int RANK = 2;
long cache_size = 1024L*1024; // cache size in bytes

try {
  dims[0] = data.length; // num rows
  dims[1] = data[0].length; // num cols
  // Keep the file access property list separate from the file handle so it
  // can be closed explicitly later.
  int fapl_id = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
  H5.H5Pset_cache(fapl_id, 0, 521L, cache_size, 1);
  int file_id = H5.H5Fcreate(filename, HDF5Constants.H5F_ACC_TRUNC,
      HDF5Constants.H5P_DEFAULT, fapl_id);
  int dataspace_id = H5.H5Screate_simple(RANK, dims, maxdims);
  // Dataset access property list: limit the per-dataset chunk cache.
  int dataset_access_property_list_id = H5.H5Pcreate(HDF5Constants.H5P_DATASET_ACCESS);
  H5.H5Pset_chunk_cache(dataset_access_property_list_id, 521L, cache_size, 1);
  // Dataset creation property list: chunked layout.
  int dataset_creation_property_list_id = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
  long[] dim_chunk = { dims[1], 1 };
  H5.H5Pset_chunk(dataset_creation_property_list_id, RANK, dim_chunk);
  int dataset_id = H5.H5Dcreate(file_id, path, HDF5Constants.H5T_NATIVE_DOUBLE, dataspace_id,
      HDF5Constants.H5P_DEFAULT, dataset_creation_property_list_id,
      dataset_access_property_list_id);
  H5.H5Dwrite(dataset_id, HDF5Constants.H5T_NATIVE_DOUBLE, HDF5Constants.H5S_ALL,
      HDF5Constants.H5S_ALL, HDF5Constants.H5P_DEFAULT, data);

  H5.H5Fflush(dataset_id, HDF5Constants.H5F_SCOPE_GLOBAL);
  H5.H5Dclose(dataset_id);
  H5.H5Sclose(dataspace_id);
  H5.H5Pclose(dataset_creation_property_list_id);
  H5.H5Pclose(dataset_access_property_list_id);
  H5.H5Fclear_elink_file_cache(file_id);
  H5.H5Pclose(fapl_id); // close the file access property list created above
  H5.H5Fclose(file_id);
  H5.H5garbage_collect();
} catch (final Exception e) {
  e.printStackTrace();
}
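
For context, the snippet above runs once per file; the server calls it repeatedly, along these lines (illustrative only; writeMatrix and loadDataForRequest are hypothetical stand-ins for the code above and for our own data loading):

for (int i = 0; i < numRequests; i++) {
  final double[][] data = loadDataForRequest(i); // hypothetical helper
  writeMatrix("/tmp/out_" + i + ".h5", "/data", data); // wraps the snippet above
  // Native memory keeps growing across iterations even though every HDF5
  // handle is closed inside writeMatrix and the file is no longer open.
}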
