Instead of doing it in compute, you could override the getPartitions
method of your RDD and return only the target partitions. That way, tasks
are created only for the target partitions, whereas right now tasks are
being created for every partition.
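
A rough sketch of that idea, assuming a parent RDD and a hypothetical
targetPartition index (the class names below are made up for illustration,
not an existing API):

    import org.apache.spark.{NarrowDependency, Partition, TaskContext}
    import org.apache.spark.rdd.RDD
    import scala.reflect.ClassTag

    // Spark requires partitions(i).index == i, so the single surviving
    // partition reports index 0 while remembering the parent partition
    // it stands for.
    class KeptPartition(val parentSplit: Partition) extends Partition {
      override def index: Int = 0
    }

    // Narrow dependency: child partition 0 depends only on the kept
    // parent partition.
    class KeepOneDependency[T](rdd: RDD[T], target: Int)
        extends NarrowDependency[T](rdd) {
      override def getParents(partitionId: Int): Seq[Int] = Seq(target)
    }

    // An RDD that exposes exactly one of its parent's partitions, so
    // Spark creates exactly one task for it.
    class SinglePartitionRDD[T: ClassTag](parent: RDD[T], targetPartition: Int)
        extends RDD[T](parent.context,
                       Seq(new KeepOneDependency(parent, targetPartition))) {

      override protected def getPartitions: Array[Partition] =
        Array(new KeptPartition(parent.partitions(targetPartition)))

      override def compute(split: Partition, context: TaskContext): Iterator[T] =
        parent.iterator(split.asInstanceOf[KeptPartition].parentSplit, context)
    }

If I remember correctly, Spark also ships org.apache.spark.rdd.PartitionPruningRDD
(a developer API) that does essentially this given a partition filter function,
so it may already cover your case.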

I hope this helps. I'd be interested to hear about it if you end up taking a
different approach.


Hemant Bhanawat <https://www.linkedin.com/in/hemant-bhanawat-92a3811>
www.snappydata.io

On Wed, Apr 6, 2016 at 3:49 PM, Andrei <faithlessfri...@gmail.com> wrote:

> I'm writing a kind of sampler which in most cases will require only 1
> partition, sometimes 2 and very rarely more. So it doesn't make sense to
> process all partitions in parallel. What is the easiest way to limit
> computations to one partition only?
>
> So far the best idea I came up with is to create a custom RDD whose
> `compute` method looks something like:
>
> def compute(split: Partition, context: TaskContext): Iterator[T] = {
>     if (split.index == targetPartition) {
>         // do computation
>     } else {
>         // return empty iterator
>         Iterator.empty
>     }
> }
>
>
>
> But it's quite ugly, and I'm unlikely to be the first person with such a
> need. Is there an easier way to do it?
>
>
>
