I think it is common in batch (and micro-batch for streaming) because at
any given time you're computing a "chunk" (pick your name, we have lots
of them ;-) ), and slicing up this chunk to distribute across more CPUs,
when available, is clearly better. But I was wondering about
"event-at-a-time" processors and everything in between, such as bundles
that may be of size 1 but might contain more elements.
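To make the fan-out idea concrete, here is a minimal sketch (not any
engine's actual API; the `fan_out` and `key_fn` names are mine) of how a
runner might shuffle a single stream of events into N partitions by
hashing a key, so downstream work can be spread across more workers:

```python
# Hypothetical sketch of fanning out one stream into N sub-streams
# by key hash, as a shuffle step might do.
from collections import defaultdict

def fan_out(events, num_partitions, key_fn):
    """Assign each event to one of num_partitions sub-streams by key hash."""
    partitions = defaultdict(list)
    for event in events:
        # The same key always lands in the same partition, so per-key
        # ordering survives the fan-out.
        p = hash(key_fn(event)) % num_partitions
        partitions[p].append(event)
    return partitions

events = [("user1", 5), ("user2", 3), ("user1", 7)]
parts = fan_out(events, num_partitions=4, key_fn=lambda e: e[0])
```

The same scheme works whether a "bundle" holds one element or many: the
per-event routing decision doesn't depend on bundle size.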

On Tue, Dec 6, 2016 at 10:18 PM Raghu Angadi <rang...@google.com.invalid>
wrote:

> On Sun, Dec 4, 2016 at 11:48 PM, Amit Sela <amitsel...@gmail.com> wrote:
>
> > For any downstream computation, is it common for stream processors to
> > "fan-out/parallelise" the stream by shuffling the data into more
> > streams/partitions/bundles?
> >
>
> I think so. It is pretty common in batch processing too.
>
