Dear Forum Members,
I am trying to store application data in a compound dataset (via the
MATLAB low-level HDF5 API). The dataset looks like this:
Column 1:
type1 = H5T.vlen_create('H5T_NATIVE_DOUBLE')
Column 2:
type2 = H5T.copy('H5T_C_S1')
H5T.set_size(type2, 'H5T_VARIABLE')
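For completeness, here is a minimal sketch of how I assemble the compound
type and create the chunked dataset. The file name, dataset name, field
names, and the chunk size are just placeholders for this example:

fid = H5F.create('test.h5', 'H5F_ACC_TRUNC', 'H5P_DEFAULT', 'H5P_DEFAULT');

% Column 1: variable-length array of doubles
type1 = H5T.vlen_create('H5T_NATIVE_DOUBLE');

% Column 2: variable-length C string
type2 = H5T.copy('H5T_C_S1');
H5T.set_size(type2, 'H5T_VARIABLE');

% Compound type holding both columns
sz1 = H5T.get_size(type1);
sz2 = H5T.get_size(type2);
ctype = H5T.create('H5T_COMPOUND', sz1 + sz2);
H5T.insert(ctype, 'values', 0, type1);
H5T.insert(ctype, 'name', sz1, type2);

% 1-D dataspace with 10000 rows and a chunked layout
nrows = 10000;
space = H5S.create_simple(1, nrows, nrows);
dcpl = H5P.create('H5P_DATASET_CREATE');
H5P.set_chunk(dcpl, 100);   % one of the chunk sizes I tried

dset = H5D.create(fid, '/data', ctype, space, ...
    'H5P_DEFAULT', dcpl, 'H5P_DEFAULT');

H5D.close(dset); H5P.close(dcpl); H5S.close(space);
H5T.close(ctype); H5T.close(type2); H5T.close(type1);
H5F.close(fid);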
To test performance, I varied the length of type1, storing
10/100/1000/10000 doubles per row across 10000 rows. Writing is fairly
fast, but when I open the dataset in HDFView I hit significant
performance problems at 10000 values per row. I have already tried
varying the chunk size (1/10/100/1000), but without success.
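To see whether the slowdown is specific to HDFView, I could also time a
plain read back in MATLAB with an enlarged chunk cache. This is only a
sketch; the cache numbers (521 slots, 64 MB, w0 = 0.75) are untuned
guesses, and the file/dataset names match the placeholder example above:

fid = H5F.open('test.h5', 'H5F_ACC_RDONLY', 'H5P_DEFAULT');
dapl = H5P.create('H5P_DATASET_ACCESS');
H5P.set_chunk_cache(dapl, 521, 64*1024*1024, 0.75);
dset = H5D.open(fid, '/data', dapl);
tic;
data = H5D.read(dset);   % reads the whole compound dataset
toc;
H5D.close(dset); H5P.close(dapl); H5F.close(fid);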
Is this an issue with HDFView, or a problem with my data model? Is there
a trick to improve dataset loading performance in the viewer?
Any hint is appreciated!
Thanks,
Daniel