Guozhang,
Thanks for the reply. I figured it out after a while. Indeed, the global
default time-based retention was tripping me up. I was using older data for
testing and publishing messages with explicit timestamps. It took me a
while to figure out what was happening because kafka-topics.sh does
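A minimal sketch of that pattern, assuming the 0.10.x Java producer (topic
name, broker address, and timestamp value are made-up placeholders): records
published with explicit timestamps older than the topic's retention.ms become
deletion candidates almost immediately under time-based retention.

// Sketch only, not from the original messages: produce records that carry
// explicit, old timestamps. With time-based retention the broker compares
// record timestamps against retention.ms, so replayed old data can be
// deleted shortly after its segment rolls.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class OldTimestampProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Timestamp from ~30 days ago: already past a default retention.ms
            // of 7 days, so the broker may delete the segment soon after it rolls.
            long oldTimestamp = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;
            producer.send(new ProducerRecord<>("test-topic", null, oldTimestamp,
                    "key", "replayed-event"));
        }
    }
}

Raising retention.ms on the test topics (for example with kafka-configs.sh
--alter --entity-type topics --add-config retention.ms=...) or producing with
current timestamps avoids the immediate truncation.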
Hello Elias,
From the error messages it does seem that the brokers are truncating log
segments so fast that both the stream's fetcher (or, more generally, any
consumer fetcher) and the replica fetchers cannot catch up, so that their
fetch offsets end up smaller than the leader's smallest offset.
Suggestions?
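As a rough illustration of that situation (not from the original messages;
topic, group id, and broker address are placeholders, and the 0.10.1+ Java
consumer is assumed), one can compare a group's fetch position against the
broker's earliest retained offset:

// Sketch only: if the consumer's position is smaller than the broker's
// earliest retained offset, the log has already been truncated past it and
// the next fetch will come back out of range (the consumer then resets
// according to auto.offset.reset).
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class CheckFetchOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");       // placeholder
        props.put("group.id", "streams-test-app");            // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("test-topic", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            // position() resolves the committed offset (or resets per auto.offset.reset)
            long position = consumer.position(tp);
            Map<TopicPartition, Long> earliest =
                    consumer.beginningOffsets(Collections.singletonList(tp)); // 0.10.1+
            if (position < earliest.get(tp)) {
                System.out.println("Fetch offset " + position + " is below log start "
                        + earliest.get(tp) + "; those segments were already deleted.");
            }
        }
    }
}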
On Thu, Jan 19, 2017 at 6:23 PM, Elias Levy wrote:
> In the process of testing a Kafka Streams application I've come across a
> few issues that are baffling me.
>
> For testing I am executing a job on 20 nodes with four cores per node,
> each instance configured to use 4 threads, against a 5 node broker cluster
> running 0.10.1.1.
In the process of testing a Kafka Streams application I've come across a
few issues that are baffling me.
For testing I am executing a job on 20 nodes with four cores per node, each
instance configured to use 4 threads, against a 5 node broker cluster
running 0.10.1.1.
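For reference, a minimal sketch of one such instance, assuming the 0.10.1
Streams API (application id, broker address, and the trivial topology are
placeholders; only num.stream.threads=4 reflects the setup described above):

// Sketch only: a single Streams instance configured to run 4 processing
// threads via num.stream.threads.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

import java.util.Properties;

public class StreamsInstance {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-test-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // placeholder
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);              // 4 threads per instance
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();       // 0.10.x DSL entry point
        builder.stream("input-topic").to("output-topic");    // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}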
Before execution kafka-stre