Yes, I checked it. The issue is that RoundRobinPartitioner is bound to a
single producer instance. In a scenario with multiple producers it doesn't
guarantee equal distribution - from what I understood and from my tests,
the following situation happens with it:

[image: image.png]

Of course, the first partition is not always 1, and each producer may start
at a different point in time; either way, my point is that it does not
guarantee equal distribution across the partitions.
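To make the issue concrete, here is a minimal Python sketch (not real Kafka code - the RoundRobinProducer class below is just a stand-in for a producer using RoundRobinPartitioner): because each producer keeps its own private counter, two producers that happen to start at the same partition will overload it.

```python
from collections import Counter

class RoundRobinProducer:
    """Stand-in for a producer using a per-producer round-robin
    partitioner: each instance keeps its own counter."""
    def __init__(self, num_partitions, start=0):
        self.num_partitions = num_partitions
        self.counter = start

    def next_partition(self):
        p = self.counter % self.num_partitions
        self.counter += 1
        return p

# Two independent producers, both happening to start at partition 0
a = RoundRobinProducer(3)
b = RoundRobinProducer(3)

counts = Counter()
for _ in range(4):          # each producer sends 4 messages
    counts[a.next_partition()] += 1
    counts[b.next_partition()] += 1

print(dict(counts))  # partition 0 receives 4 messages, partitions 1 and 2 only 2 each
```

Each producer is perfectly round robin on its own, yet the combined load on the topic is skewed - and nothing coordinates the starting offsets of independent producers.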

The other option pointed out is to select the partition myself - either via
shared memory across the producers (assuming that is possible - I mean I
would need to guarantee that the producers CAN share synchronized state) or
via an intermediate topic with a single partition and a dispatcher/producer
using RoundRobinPartitioner (but the latter would introduce a single point
of failure).

[image: image.png]
[image: image.png]
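A sketch of the shared-counter idea (the SharedCounter class here is a hypothetical stand-in for an external atomic counter, e.g. a Redis INCR - an in-process lock alone would of course not work across independent dockers):

```python
import threading
from collections import Counter

class SharedCounter:
    """Hypothetical stand-in for an external atomic counter (e.g. Redis
    INCR) reachable by every producer. A lock simulates the atomicity."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def next(self):
        with self._lock:
            v = self._value
            self._value += 1
            return v

NUM_PARTITIONS = 3
counter = SharedCounter()

def pick_partition():
    # Every producer, regardless of host, derives the partition from the
    # same global sequence, so the distribution stays exactly even.
    return counter.next() % NUM_PARTITIONS

counts = Counter(pick_partition() for _ in range(9))
print(dict(counts))  # {0: 3, 1: 3, 2: 3}
```

With the partition chosen this way, each producer would then pass it explicitly when sending (in the Java client, via the ProducerRecord constructor that takes a partition number), bypassing the partitioner entirely.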

Neither of these seems as good as a broker-side round-robin solution would
be. Am I missing something? Any other ideas?

Thanks

On Tue, May 26, 2020 at 11:34 AM M. Manna <manme...@gmail.com> wrote:

> Hey Vinicius,
>
>
> On Tue, 26 May 2020 at 10:27, Vinicius Scheidegger <
> vinicius.scheideg...@gmail.com> wrote:
>
> > In a scenario with multiple independent producers (imagine ephemeral
> > dockers, that do not know the state of each other), what should be the
> > approach for the messages being sent to be equally distributed over a
> topic
> > partition?
> >
> > From what I understood the partition election is always on the Producer.
> Is
> > this understanding correct?
> >
> > If that's the case, how should one achieve an equally distributed load
> > balancing (round robin) over the partitions in a scenario with multiple
> > producers?
> >
> > Thank you,
> >
> > Vinicius Scheidegger
>
>
>  Have you checked RoundRobinPartitioner? Also, you can always specify
> which partition you are writing to, so you can control the partitioning in
> your way.
>
> Regards,
>
>
>
