On 04/08/15 17:15, Mohamad Chaarawi wrote:
> Hi Wolf,
>
> It is OK to read/write different amounts of elements/data from
> each processor. That is not the problem. The problem is that you
> cannot have each processor specify a different layout of the dataset
> on disk. This is the same problem as, for example, having one process
> say the layout of the dataset is contiguous while another says it is
> chunked.
>
> The solution is very simple: just don't adjust the chunk size for
> the dataset on the last process.
>
> I modified the replicator that you provided and attached it to
> demonstrate how this would work (I didn't do a lot of testing on it,
> just on my local machine, but it should work fine).
>
> Thanks,
> Mohamad
Okay, so the only thing you did is to move the H5Pxxxx calls up, /before/ the H5Sxxxx calls, and give them the same arguments on every process?

By the way, shouldn't the line read something like

    chunk_dims[0] = (nx % iNumOfProc) ? nx/iNumOfProc + 1 : nx/iNumOfProc;

Why does your version still work for np != 2^X even though the chunks will be too small? (On the other hand, with the line above, the combined size of the chunks will be too large, and a chunk size of 1 also seems to work...)

I guess I don't quite understand what this does in general. Now each processor has the same chunk size, but the memspaces and hyperslabs are still different. Why aren't those calls collective?

Does the chunk size only mean that each process writes the data it owns in chunks of the given size? If one chunk is not enough, it simply writes a second/third/fourth chunk? And if the data is smaller than the chunk, it writes whatever it has? Is that how it works?

Thank you very much for your help, Mohamad! Thanks to Mark and Timothy for their input, too! Much appreciated!

Cheers,
Wolf

--
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5
