OK, so repeating the test with the Java Kafka producer there is no problem – it’s specific to
the Kafka CLI producer!
Paul
From: Brebner, Paul
Date: Friday, 5 July 2024 at 1:21 PM
To: users@kafka.apache.org
Subject: Re: Kafka 20k topics metadata update taking long time
Repeating my tests today with a bit more caution, I can get up to around 47,000
partitions for a single topic before the producer fails with a "bootstrap broker
disconnected" warning (in practice the producer cannot send). Here is a graph of
the producer time (to send 1k messages) using the producer CLI.
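For anyone wanting to reproduce this, the test can be driven with the standard Kafka CLI tools against a running broker; the topic name, partition count, and bootstrap address below are assumptions, not taken from the original message:

```shell
# Create a single topic with a very large partition count (requires a running broker).
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic many-partitions-test \
  --partitions 47000 --replication-factor 1

# Send messages via the CLI producer; this is the path reported to fail
# with a "bootstrap broker disconnected" warning at high partition counts.
kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic many-partitions-test
```

Timing 1k messages through this producer, versus the same send loop in a Java client, is what distinguishes the CLI-specific failure described above.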
I have 100+ sink connectors running with 100+ topics, each with roughly 3
partitions per topic. How would you configure resources (memory and CPU) to
optimally handle this level of load? What would be your considerations?
Also, when considering this load, should I be thinking about it as an
aggregated
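For reference, sizing a Kafka Connect deployment like this usually starts from the worker JVM heap and the per-connector task count. The values below are purely illustrative assumptions, not a recommendation for this specific workload:

```properties
# Per-connector config (sketch): cap parallelism per connector.
# With ~3 partitions per topic, more tasks than partitions adds no throughput.
tasks.max=3

# Worker-side consumer override (sketch): bound memory per poll.
consumer.max.poll.records=500
```

The worker JVM heap is typically set via the environment, e.g. `KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"`; actual values should be driven by observed consumer lag and GC behavior rather than topic counts alone.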
Thank you for having a look at this. I agree that the only way to really
gauge load is to look at lag. But the connector tasks should not crash and
die because of load. I will raise this with SF.
On Wed, Jul 3, 2024 at 7:14 PM Greg Harris wrote:
> Hey Burton,
>
> Thanks for your question and bug
Hello,
I have a source Kafka cluster and 2 destination Kafka clusters to which
I want to mirror messages _alternately_. By this I mean that if
topic A was mirrored up to offset=3 to destination cluster 1, I want
MirrorMaker to mirror offset=4 onwards to destination cluster 2.
I don't care for co
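For context, MirrorMaker 2's standard configuration defines continuous per-destination replication flows rather than an offset-based handoff between destinations; alternating clusters at a given offset would need external orchestration (e.g. toggling flows). A minimal sketch of the two flows, with cluster aliases and addresses as assumptions:

```properties
# mm2.properties (sketch) - two independent source->destination flows
clusters = source, dest1, dest2
source.bootstrap.servers = source-kafka:9092
dest1.bootstrap.servers = dest1-kafka:9092
dest2.bootstrap.servers = dest2-kafka:9092

# Enable only one flow at a time; switching which flow is enabled is
# the external "alternation" step - MM2 itself does not hand off mid-topic.
source->dest1.enabled = true
source->dest1.topics = topicA

source->dest2.enabled = false
source->dest2.topics = topicA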