On Fri, Oct 25, 2013 at 12:22:40PM +0200, Matthieu Brucher wrote:
> Hi,
>
> I'm trying to maximize the way data is read from our filesystem, and
> mainly this means reading chunks of 1MiB from several readers. My
> problem is that sometimes the data is not just one element x nb cells,
> but may be 2, 3, or 10 elements. This means that I may not be able to
> properly read/write 1MiB if the data is stored as a (say) 2D array.
> Is there a way to read from this array as if it were a 1D array?
> Recomposing the data is not a problem for me as I have a mapping from
> the data id to its location on each process.
If you turn on collective I/O, it's more than likely that the underlying MPI-IO implementation will sort out the uneven distribution among processes, coalesce small accesses into fewer, bigger accesses, and maybe even designate "aggregator" nodes to assist with scalability. You'll get all this without needing to recompose the data.

==rob

--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
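In parallel HDF5, collective I/O is enabled through the dataset transfer property list. A minimal sketch of what that looks like follows; the file name, dataset name, and buffer size are made up for illustration, and a real program would select per-rank hyperslabs instead of reading the whole dataset:

```c
/* Sketch: enabling collective MPI-IO transfers in parallel HDF5.
 * File name ("data.h5"), dataset name ("/mydata"), and the buffer
 * size are illustrative assumptions, not from the original thread. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Open the file with the MPI-IO virtual file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);

    hid_t dset = H5Dopen2(file, "/mydata", H5P_DEFAULT);

    /* The key call: request collective (rather than independent)
     * transfers, so the MPI-IO layer can coalesce small, uneven
     * accesses and route them through aggregator processes. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* For brevity this reads the whole (assumed small) dataset;
     * real code would give each rank its own hyperslab selection. */
    double buf[1024];
    H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, dxpl, buf);

    H5Pclose(dxpl);
    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```

All ranks must make the H5Dread call when the transfer is collective, even ranks with an empty selection.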
