Hi Jason,

Thank you again!

2015-10-14 4:19 GMT+01:00 Jason Newton <[email protected]>:

> Right so, from your desire to avoid a copy, I must inform you that you are
> making a copy - maybe I didn't understand your case though - seemed you
> didn't want to have any overhead of memory and maybe cpu time.
>
Yes, memory and CPU overhead is what I am trying to avoid. There is a
maintenance consideration as well: I want to avoid duplicating the struct
definition manually every time it is updated.
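
For concreteness, the pattern I have in mind is roughly the following (the
struct and field names are made up for illustration; the point is that
offsets and sizes come from the struct itself via HOFFSET, so only the
field list needs touching when the struct changes):

#include <hdf5.h>

typedef struct {
    int    id;
    double value;
} record_t;

/* Build the HDF5 compound type from the struct itself: sizeof and
 * HOFFSET track the definition, so only this field list has to be
 * kept in sync when the struct is updated. */
static hid_t make_record_type(void)
{
    hid_t t = H5Tcreate(H5T_COMPOUND, sizeof(record_t));
    H5Tinsert(t, "id",    HOFFSET(record_t, id),    H5T_NATIVE_INT);
    H5Tinsert(t, "value", HOFFSET(record_t, value), H5T_NATIVE_DOUBLE);
    return t;
}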

> What happens is the type conversion system is run inside the dataset
> write/read, which, unless provided with a background buffer via transfer
> properties, allocates a buffer, then proceeds to run a batch conversion
> from the source memory with source type to destination memory with
> destination type.
>
Good to know this! How is the buffer size determined by default? Is it
sized to hold the whole dataset in the destination type, or is the
conversion done element by element so that only one element needs to be
allocated? How can I optimize it if it becomes a concern?
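
For reference, my current understanding from the H5Pset_buffer
documentation is that the default conversion buffer is 1 MiB and elements
are converted in strips that fit it; if that is right, the knob would be
set on a transfer property list, something like the sketch below (the
4 MiB size is arbitrary, and dset/memtype/data stand for a dataset
handle, the in-memory compound type, and the source array opened
elsewhere):

/* Passing NULL for tbuf/bkg lets the library allocate the buffers
 * itself; the size just bounds how many elements are converted per
 * pass through the dataset. */
static herr_t write_with_bigger_buffer(hid_t dset, hid_t memtype,
                                       const void *data)
{
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_buffer(dxpl, (size_t)4 * 1024 * 1024, NULL, NULL);
    herr_t status = H5Dwrite(dset, memtype, H5S_ALL, H5S_ALL,
                             dxpl, data);
    H5Pclose(dxpl);
    return status;
}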

> It's not very heavy at all if you do it in batches of the right size - and
> you must weigh this against the fairly high overhead of the HDF api
> relative to C++ with smart memory management techniques and user conversion
> code.  But it's usually not necessary to do that and the conversion api is
> there for free + robust.
>
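
Understood about batching. For my own notes, I read "batches of the right
size" as something like the hyperslab loop below (reusing record_t from
the earlier sketch; the batch size is illustrative, not a tuned value):

#define BATCH 4096  /* illustrative batch size */

/* Write a 1-D dataset in bounded pieces: each pass selects the next
 * hyperslab of at most BATCH elements in the file space plus a matching
 * memory space, so conversion only ever touches BATCH elements. */
static void write_batched(hid_t dset, hid_t memtype,
                          const record_t *data, hsize_t total)
{
    hid_t fspace = H5Dget_space(dset);
    for (hsize_t off = 0; off < total; off += BATCH) {
        hsize_t n = (total - off < BATCH) ? (total - off) : BATCH;
        hid_t mspace = H5Screate_simple(1, &n, NULL);
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &off, NULL, &n, NULL);
        H5Dwrite(dset, memtype, mspace, fspace, H5P_DEFAULT, data + off);
        H5Sclose(mspace);
    }
    H5Sclose(fspace);
}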

Best,

Jiaxin