wkcn commented on a change in pull request #16175: [Dataset] add shard API
URL: https://github.com/apache/incubator-mxnet/pull/16175#discussion_r324486007
##########
File path: python/mxnet/gluon/data/dataset.py
##########
@@ -64,6 +64,37 @@ def filter(self, fn):
         from . import FilterSampler
         return _SampledDataset(self, FilterSampler(fn, self))
 
+    def shard(self, num_shards, index):
+        """Returns a new dataset that includes only 1/num_shards of this dataset.
+
+        For distributed training, be sure to shard before you randomize the dataset
+        (such as shuffle), if you want each worker to reach a unique subset.
+
+        Parameters
+        ----------
+        num_shards : int
+            An integer representing the number of data shards.
+        index : int
+            An integer representing the index of the current shard.
+
+        Returns
+        -------
+        Dataset
+            The result dataset.
+        """
+        assert index < num_shards, 'Shard index out of bound: %d out of %d' % (index, num_shards)
+        assert num_shards > 0, 'Number of shards must be greater than 0'
+        length = len(self)
+        shard_len = length // num_shards
+        # Compute the start index for this partition
+        start = shard_len * index
+        # Compute the end index for this partition
+        end = start + shard_len
+        if index == num_shards - 1:
+            end = length

Review comment:
   The sharding is not uniform. If there are 199 samples and 100 partitions, the last partition will contain `1+99` samples, while every other partition contains only `1` sample. The following implementation is more uniform:

```python
shard_len = length // num_shards
rest = length % num_shards
start = shard_len * index + min(index, rest)
end = start + shard_len + (index < rest)
```
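   To make the difference concrete, here is a minimal standalone sketch (not part of the PR; the `shard_bounds_*` helper names are invented for illustration) that reproduces the 199-sample / 100-shard example under both schemes:

```python
def shard_bounds_naive(length, num_shards, index):
    # Logic currently in the PR: every shard gets length // num_shards
    # samples, and the last shard absorbs the whole remainder.
    shard_len = length // num_shards
    start = shard_len * index
    end = length if index == num_shards - 1 else start + shard_len
    return start, end

def shard_bounds_uniform(length, num_shards, index):
    # Suggested logic: the first (length % num_shards) shards each take
    # one extra sample, so shard sizes differ by at most one.
    shard_len, rest = divmod(length, num_shards)
    start = shard_len * index + min(index, rest)
    end = start + shard_len + (index < rest)
    return start, end

length, num_shards = 199, 100
naive = [e - s for s, e in
         (shard_bounds_naive(length, num_shards, i) for i in range(num_shards))]
uniform = [e - s for s, e in
           (shard_bounds_uniform(length, num_shards, i) for i in range(num_shards))]
print(max(naive), min(naive))      # 100 1 -> the last shard is 100x larger
print(max(uniform), min(uniform))  # 2 1   -> sizes differ by at most one
```

   Note that the uniform variant keeps the shards contiguous: each shard's `end` equals the next shard's `start`, so the union of all shards still covers the dataset exactly once.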