Hi,

I am relatively new to HDF5 and Parallel HDF5, and although I have experience with MPI it is not extensive. We are exploring ways of saving data in parallel using HDF5 in a field where it has been practically unknown until now.

Our paradigm is "parallel modular event processing" (a rough code sketch follows the list):

 * A typical job processes many "events."
 * An event contains all of the interesting data (raw and processed)
   associated with some time interval.
 * Each event can be processed independently of all other events.
 * Each event's data can be subdivided into internal components, "data
   products."
 * "Modules" are processing subunits which read or generate one or more
   data products for each event.
 * One can calculate a data dependency graph specifying the allowed
   ordering and/or parallelism of modules processing one or more events
   simultaneously for a given job configuration and event structure.
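
For concreteness, here is a rough sketch of that data model in Python; every name and type below is illustrative only, not taken from any existing code:

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    label: str       # unique within an event, e.g. "raw_waveforms"
    payload: object  # the raw or processed data itself

@dataclass
class Event:
    event_id: int  # identifies the time interval
    products: dict = field(default_factory=dict)  # label -> DataProduct

class Module:
    """Processing subunit: reads and/or generates data products."""
    inputs = ()   # labels of the products this module consumes
    outputs = ()  # labels of the products this module generates

    def process(self, event):
        raise NotImplementedError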

We have been using h5py with HDF5 and OpenMPI to explore different strategies for parallel I/O in a future parallel event-processing framework. One approach we have come up with so far is to have one HDF5 dataset per unique data product / writer module combination, keeping track of the relevant sections of each dataset via (for now) an external database.

This works well in serial tests, but in parallel tests we are running up against the constraint that resizing a dataset is a collective operation: every rank, including the non-writers, has to become aware of and participate in the resize operations required by other writers. The problem seems to get even worse when two or more instances of a module need to extend and write to the same dataset at the same time (while processing different events, say), since they must coordinate and agree on the new size of the dataset and on their respective sections of it.
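
To illustrate, here is a minimal sketch of the pattern we seem to be forced into, assuming h5py built against a parallel HDF5 with mpi4py available (the file and dataset names are made up):

from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMm_WORLD if False else MPI.COMM_WORLD  # MPI communicator
rank = comm.Get_rank()

with h5py.File("events.h5", "w", driver="mpio", comm=comm) as f:
    # One extendable dataset per data product / writer module combination.
    dset = f.create_dataset("product_a", shape=(0,), maxshape=(None,),
                            dtype="f8", chunks=(1024,))

    n_new = 100  # rows this rank wants to append for its current event
                 # (a non-writer rank would contribute 0, but must still
                 # take part in the resize below)

    # resize() is collective: every rank must call it with the same final
    # size, so everyone first has to learn everyone else's growth.
    old_size = dset.shape[0]
    counts = comm.allgather(n_new)
    offset = old_size + sum(counts[:rank])
    dset.resize((old_size + sum(counts),))

    # The write itself can be independent, each rank to its own slice.
    dset[offset:offset + n_new] = np.full(n_new, rank, dtype="f8")

The allgather is what bothers us: it drags every rank into every extension of every dataset, even datasets that rank never touches.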

Are we misunderstanding the problem, or is it really this hard? Has anyone else hit upon a reasonable strategy for handling this or something like it?

Any pointers appreciated.

Thanks,

Chris Green.

--
Chris Green <[email protected]>, FNAL CS/SCD/ADSS/SSI/TAC;
'phone (630) 840-2167; Skype: chris.h.green;
IM: [email protected], chissgreen (AIM),
chris.h.green (Google Talk).
