This might help.
Try to replicate the configuration this guy uses for benchmarking Kafka:
https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
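If it's useful, that post drives the load with the producer performance tool that ships with Kafka (bin/kafka-producer-perf-test.sh in current distributions). Below is a minimal Java sketch along the same lines, assuming 100-byte records like the post; the broker address, topic name, and tuning values are placeholders, not the exact benchmark settings:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ProducerPerfSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "1");           // the post also benchmarks acks from all replicas
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536"); // illustrative; larger batches help small records
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");      // give the producer a chance to batch
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        byte[] payload = new byte[100]; // the post uses 100-byte records
        long count = 1_000_000L;
        try (Producer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            long start = System.currentTimeMillis();
            for (long i = 0; i < count; i++) {
                producer.send(new ProducerRecord<>("perf-test", payload)); // topic is a placeholder
            }
            producer.flush(); // wait for buffered records before measuring
            long elapsed = System.currentTimeMillis() - start;
            System.out.printf("%d records in %d ms (%.0f records/sec)%n",
                    count, elapsed, count * 1000.0 / elapsed);
        }
    }
}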
On Thu, 3 Oct 2019 at 22:45, Eric Owhadi wrote:
> There is a key piece of information ...
Hi Greg!!
Are you using offset auto commit or do you commit manually?
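With the newer Java consumer the manual flavor looks roughly like the sketch below; broker, group id, and topic are placeholders, and note that on an 0.8.2 cluster the old high-level consumer uses different property names (auto.commit.enable, if I remember correctly). For auto commit you'd leave enable.auto.commit at its default of true and drop the commitSync() call.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "my-group");              // placeholder
        props.put("enable.auto.commit", "false");       // turn off auto commit; commit explicitly below
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // offsets are committed only after the batch has been handled
            }
        }
    }
}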
2017-03-22 22:21 GMT+01:00 Greg Lloyd :
> I have a 0.8.2.2 cluster which has been configured
> with offsets.storage=kafka. We are experiencing some issues after a few
> nodes went down and the wrong nodes were brought up in their place. ...
> ...verify?
>
> Thanks,
> Ismael
>
> On Wed, Oct 12, 2016 at 11:00 AM, Alexandru Ionita <
> alexandru.ion...@gmail.com> wrote:
>
> > OK, then my question is: why doesn't the producer try to recover from
> > this error by updating its topic metadata right away instead of waiting?
> There is a producer config "metadata.max.age.ms" in the new producer API.
> Its default value is 300sec.
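For anyone finding this later: the refresh interval itself is tunable, and the producer can be made to block until it has metadata for a topic. A small sketch against the Java client, with placeholder broker and topic; whether partitionsFor() refreshes metadata that is already cached varies by client version, so treat that part as an assumption:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataRefreshSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Poll the cluster for fresh metadata every 30s instead of the 300s default.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "30000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocks until this producer has metadata for the topic, then prints the partitions.
            System.out.println(producer.partitionsFor("my-topic")); // topic is a placeholder
        }
    }
}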
>
> On Wed, Oct 12, 2016 at 3:04 PM, Alexandru Ionita <
> alexandru.ion...@gmail.com> wrote:
>
> > Hello kafka users!!
> >
> > I'm trying to implement/use a mechanism to make a Kafka producer
> > imperatively ...
Hello kafka users!!
I'm trying to implement/use a mechanism to make a Kafka producer imperatively
update its topic metadata for a particular topic.
Here is the use case:
we are adding partitions on topics programmatically because we want to very
strictly control how messages are published to particular partitions.
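In case a sketch of the moving parts helps: with the newer Java AdminClient you can grow a topic and then publish to an explicit partition (in the 0.10-era API I believe kafka.admin.AdminUtils.addPartitions played this role). Broker address, topic name, and the partition count below are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class GrowTopicSketch {
    public static void main(String[] args) throws Exception {
        String topic = "events"; // placeholder

        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(adminProps)) {
            // Grow the topic to 12 partitions (the count is illustrative).
            admin.createPartitions(
                    Collections.singletonMap(topic, NewPartitions.increaseTo(12))
            ).all().get();
        }

        Properties prodProps = new Properties();
        prodProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        prodProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prodProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            // A freshly created producer has no cached metadata, so partitionsFor
            // fetches the current partition count before we pick a target.
            int partitions = producer.partitionsFor(topic).size();
            int target = partitions - 1; // publish to the newest partition, for example
            producer.send(new ProducerRecord<>(topic, target, "key", "value")).get();
        }
    }
}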