Hi,

In order to keep the data from different customers physically separated
in our application, we are using a custom partitioner to route each
customer's messages to a specific partition of a topic. We know that we
lose per-topic parallelism this way, but our multitenancy requirements
outweigh our throughput requirements.
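To make the scheme concrete, here is a minimal sketch of the routing logic such a custom partitioner could sit on top of: each customer (tenant) is pinned to one fixed partition, assigned in arrival order. The class and method names are illustrative only, not from our actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: pin each customer to one concrete partition,
// assigning partition indexes in the order customers first appear.
public class TenantPartitionRegistry {
    private final Map<String, Integer> customerToPartition = new LinkedHashMap<>();

    // Returns the fixed partition for a customer, registering the next
    // free partition index the first time the customer is seen.
    public synchronized int partitionFor(String customerId) {
        return customerToPartition.computeIfAbsent(
                customerId, id -> customerToPartition.size());
    }

    public static void main(String[] args) {
        TenantPartitionRegistry registry = new TenantPartitionRegistry();
        System.out.println("acme -> " + registry.partitionFor("acme"));
        System.out.println("globex -> " + registry.partitionFor("globex"));
        // A repeated customer always maps to the same partition.
        System.out.println("acme -> " + registry.partitionFor("acme"));
    }
}
```

A real Partitioner implementation would delegate to a lookup like this inside its partition() method.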

So, in order to fit more customers on a cluster, we dynamically increase
the number of partitions of the topic whenever a new customer arrives,
using Kafka's AdminUtils.
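For reference, the same partition increase can be done from the command line with the kafka-topics tool (topic name, partition count, and ZooKeeper address below are placeholders, not our real values):

```
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
    --topic customers --partitions 8
```

We do the equivalent programmatically when a customer is onboarded.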
Our problem arises with the new Kafka consumer: when a new partition is
added to the topic, the consumer does not pick up the new partition, so
messages routed to it never reach the consumer unless we restart the
consumer itself. What was surprising was to find that with the old
consumer (configured against ZooKeeper), a consumer does receive
messages from a newly added partition.
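To be explicit about the two setups being compared, these are the kind of configuration fragments involved (host names and ports are placeholders): the old high-level consumer is pointed at ZooKeeper, while the new consumer is pointed directly at the brokers.

```
# Old (high-level) consumer: coordinates via ZooKeeper
zookeeper.connect=localhost:2181
group.id=our-group

# New consumer: talks to brokers directly
bootstrap.servers=localhost:9092
group.id=our-group
```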

Is there a way to emulate the old consumer's behaviour in the new
consumer when partitions are added?

Thanks in advance,
David
