I was asked privately about patches more than a few times - most recently by Brock here. I pushed my git repo to https://github.com/nevion/hdf5 so you can have the patches as I use them, and in case I get lazy and forget to remind the HDF Group to review/accept them again - I can lose more than a few weeks before starting that process. Then I lose a few hours on the next release trying to [re]integrate them, which is work I'd like to stop doing.
I encourage someone from the HDF Group to try to incorporate the patches from git rather than having me submit them through the help desk, where they get half ignored and lose their commit messages... almost all of these are nearing 2 years old now, although we have whittled them down a little over the last 2-3 releases. There shouldn't be anything controversial in these patches... I can't wait, and hope we have better luck, if we ever see hdf5 under https://github.com/HDFGroup with pull requests. In the meantime, if there's anything I can do to help get them in, please let me know.

-Jason

On Tue, Aug 11, 2015 at 2:01 PM, Jason Newton <[email protected]> wrote:

> I use compression and they do indeed support filters and type conversion -
> be careful though, because filters or conversions may add significant
> overhead in a data acquisition loop. H5PTcreate_fl has a compression
> parameter (for the zip/deflate filter) which will clue you in on its support:
> https://www.hdfgroup.org/HDF5/doc/HL/RM_H5PT.html#H5PTcreate_fl - you can
> also take a look at H5PT.c to answer some of your questions, maybe.
>
> However, there is no version of this function upstream that allows you to
> specify your own compression. I have patched one in and intend to submit
> these patches upstream (again? I believe I submitted this one in the past,
> but could be wrong). This variant takes the dataset property list, so you
> can specify whatever you want for compression/filters:
>
>     H5PTcreate_fl2(hid_t loc_id, const char *dset_name, hid_t dtype_id,
>                    hsize_t chunk_size, hid_t plist_id)
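> Roughly how that gets used - a sketch rather than something to copy blindly:
> the dcpl calls below are standard HDF5, but H5PTcreate_fl2 itself only
> exists with my patches applied, the file/dataset names are placeholders,
> and error checking is omitted.
>
>     #include "hdf5.h"
>     #include "hdf5_hl.h"
>
>     int main(void)
>     {
>         hid_t file = H5Fcreate("acq.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
>
>         /* dcpl carrying whatever filter you want - not just deflate */
>         hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
>         H5Pset_deflate(dcpl, 6);
>
>         /* patched variant: chunk size as before, plus the dcpl */
>         hid_t pt = H5PTcreate_fl2(file, "packets", H5T_NATIVE_FLOAT, 512, dcpl);
>
>         float sample = 3.14f;
>         H5PTappend(pt, 1, &sample);   /* append one packet, as usual */
>
>         H5PTclose(pt);
>         H5Pclose(dcpl);
>         H5Fclose(file);
>         return 0;
>     }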
> I'll try and remember to submit my latest patches in the next few days.
> But your answer is: technically yes, but the exposure of filters is very
> restricted - it's not clear why this was ever the case.
>
> -Jason
>
> On Tue, Aug 11, 2015 at 1:27 PM, Brock Hargreaves <[email protected]> wrote:
>
>> Hi forum,
>>
>> Could anyone confirm my claim about packet tables not supporting filters?
>> It would seem a natural thing to support, since filters require data
>> chunking, which is a requirement of packet tables.
>>
>> Cheers,
>> Brock
>>
>> On Thu, Jul 23, 2015 at 10:32 AM, Brock Hargreaves <[email protected]> wrote:
>>
>>> Hi Daniel,
>>>
>>> Thanks for the response. Perhaps I misunderstood HDF5 Packet Tables when
>>> I was reading about them a week ago. For example, examining their
>>> signature for creation:
>>>
>>>     hid_t H5PTcreate_fl( hid_t loc_id, const char *dset_name,
>>>                          hid_t dtype_id, hsize_t chunk_size, int compression )
>>>
>>> versus a traditional HDF5 dataset, which can take various property
>>> lists, in particular dcpl_id:
>>>
>>>     hid_t H5Dcreate( hid_t loc_id, const char *name, hid_t dtype_id,
>>>                      hid_t space_id, hid_t lcpl_id, hid_t dcpl_id, hid_t dapl_id )
>>>
>>> This gives me the impression that Packet Tables do not support filters,
>>> such as ones used for lossless compression. One of the main reasons I'm
>>> looking into HDF5 is its ability to incorporate such filters.
>>>
>>> Cheers,
>>> Brock
>>>
>>> On Thu, Jul 23, 2015 at 8:51 AM, Daniel Kahn <[email protected]> wrote:
>>>
>>>> Hi Brock,
>>>>
>>>> Have you investigated the HDF5 Packet Table API? It was created
>>>> precisely for data acquisition problems. I've never used it and thus
>>>> can't provide any personal experience, but that would be my starting
>>>> point.
>>>>
>>>> Cheers,
>>>> --dan
>>>>
>>>> On 07/23/15 10:16, Brock Hargreaves wrote:
>>>>
>>>> Hi forum,
>>>>
>>>> I apologize for the verbosity of this message ahead of time, but the
>>>> devil is in the details. I've scoured the archives and have had trouble
>>>> finding something similar to my problem in terms of scale.
>>>>
>>>> --
>>>> Daniel Kahn
>>>> Science Systems and Applications Inc.
>>>> 301-867-2162
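For contrast with the packet table limitation above: the plain-dataset route Brock's H5Dcreate signature points at already takes a dcpl, so any registered filter works there today without patches. A minimal sketch of that route, with made-up names, an unlimited first dimension, and error checking omitted:

    #include "hdf5.h"

    int main(void)
    {
        /* chunking must be enabled before a filter like deflate applies */
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        hsize_t chunk[1] = {512};
        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_deflate(dcpl, 6);

        /* extendible 1-D dataspace, starting empty */
        hsize_t dims[1]    = {0};
        hsize_t maxdims[1] = {H5S_UNLIMITED};
        hid_t space = H5Screate_simple(1, dims, maxdims);

        hid_t file = H5Fcreate("plain.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_FLOAT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* appending would go through H5Dset_extent + hyperslab writes,
           which is the bookkeeping the packet table API hides from you */
        H5Dclose(dset);
        H5Sclose(space);
        H5Pclose(dcpl);
        H5Fclose(file);
        return 0;
    }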
