Hi Florian,

The broker will accept a batch of records as a whole or reject it as a
whole, unless it encounters an IOException while trying to append the
messages, which is treated as a fatal error anyway.
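To make that concrete, here is a rough sketch of how this looks from the
client side; the topic name and broker address are made up, and whether
two records actually share a batch depends on partition and timing:

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BatchFateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // made-up address
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", 16384); // bytes per partition batch (0.10.x default)
        props.put("linger.ms", 5);      // give the sender a moment to batch records

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Same key -> same partition, so these two records can land in one batch.
        Future<RecordMetadata> f1 =
            producer.send(new ProducerRecord<>("my-topic", "k", "v1"));
        Future<RecordMetadata> f2 =
            producer.send(new ProducerRecord<>("my-topic", "k", "v2"));
        // If they were batched together, f1.get() and f2.get() either both
        // succeed or both throw, wrapping the same batch-level error.
        f1.get();
        f2.get();
        producer.close();
    }
}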

Duplicates usually happen when the whole batch is accepted but the ack is
not delivered in time, so the whole batch gets retried.
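The settings that govern that retry behavior look like this (these lines
extend the Properties in the sketch above; the defaults are from the
0.10.x producer configs):

props.put("retries", 3);                 // default 0; > 0 lets the producer resend a failed batch
props.put("acks", "all");                // wait for the broker's ack before treating a send as done
props.put("request.timeout.ms", 30000);  // no ack within this window -> the request is retried
props.put("max.in.flight.requests.per.connection", 1); // preserves ordering across retries

So yes: since the unit of retry is the batch, a larger batch.size means
more records duplicated each time a retried batch had in fact already
been appended.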


Guozhang


On Tue, Aug 30, 2016 at 2:45 AM, Florian Hussonnois <fhussonn...@gmail.com>
wrote:

> Hi all,
>
> I am using kafka_2.11-0.10.0.1, my understanding is that the producer API
> batches records per partition to send efficient requests. We can configure
> batch.size to increase the throughput.
>
> However, in case of failure, do all records within the batch fail? If that
> is true, does that mean that increasing batch.size can also increase the
> number of duplicates in case of retries?
>
> Thanks,
>
> Florian.
>



-- 
-- Guozhang
