Dear Krishna,
What kind of network problems? And are you talking about
zookeeper.connection.timeout.ms? By default it's 6000 ms.
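(For reference, a hedged sketch of where that setting lives: it is a broker-side setting in server.properties. The 6000 ms value below is the default mentioned above, not a tuning recommendation.)

```
# server.properties (broker side): ZooKeeper timeouts.
# 6000 ms is the default; raise it if the network to ZK is flaky.
zookeeper.connection.timeout.ms=6000
zookeeper.session.timeout.ms=6000
```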
On Dec 9, 2017 10:49, "R Krishna" wrote:
This is a known issue for us in 0.10, due to network-related problems with
ZK causing a no-leader exception and
Hi,
We are using Kafka version "kafka_2.11-1.0.0" with the default offset-related
configurations.
Issue:
Consumer offsets are being deleted, even though we are not using auto commits
on the consumer side.
Is there any configuration we need to add for consumer offset retention?
Please help us.
Thanks,
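(For what it's worth, a hedged sketch of the broker config that governs this: in Kafka 1.0, committed offsets are retained for offsets.retention.minutes, which defaults to 1440, i.e. 24 hours. The value below is illustrative, not a recommendation.)

```
# server.properties (broker side): keep committed consumer offsets for
# 7 days instead of the 24-hour default (1440 minutes in Kafka 1.0).
offsets.retention.minutes=10080
```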
About timestamps: embedding timestamps in the payload itself is not
really necessary IMHO. Each record has a metadata timestamp that provides
exactly the same semantics. If you just copy data from one topic to
another, the timestamp can be preserved (using a plain consumer/producer
and setting the
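(Since the sentence above is cut off: for what it's worth, here is a hedged sketch in plain Java of that copy-with-preserved-timestamp idea. Topic names, group id, bootstrap servers, and serializers are placeholders, not taken from this thread; the key point is ProducerRecord's five-argument constructor, which accepts an explicit timestamp.)

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampPreservingCopier {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "copier");                     // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            consumer.subscribe(Collections.singletonList("source-topic"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Passing record.timestamp() into the timestamp-taking
                    // ProducerRecord constructor carries the original
                    // metadata timestamp over to the target topic.
                    producer.send(new ProducerRecord<>(
                            "target-topic", null, record.timestamp(),
                            record.key(), record.value()));
                }
            }
        }
    }
}
```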
> How large is the record buffer? Is it configurable?
I seem to have just discovered the answer to this:
buffered.records.per.partition
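(A hedged note on where that setting lives: it is a Kafka Streams config, settable via the streams properties. The value below is the documented default, not a recommendation.)

```
# Kafka Streams config: max records buffered per partition before
# consumption is paused; 1000 is the documented default.
buffered.records.per.partition=1000
```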
On Sat, Dec 9, 2017 at 2:48 PM, Dmitry Minkovsky wrote:
Hi Matthias, yes that definitely helps. A few thoughts inline below.
Thank you!
On Fri, Dec 8, 2017 at 4:21 PM, Matthias J. Sax wrote:
> Hard to give a generic answer.
>
> 1. We recommend over-partitioning your input topics to start with (to
> avoid the need to add
And this is the first message in server.log after which the trouble started:
[2017-12-09 03:10:49,947] ERROR [KafkaApi-1] Error when handling request