Hi Mike,
FYI, support for "max.poll.records" was added in
https://github.com/apache/kafka/pull/931 (KAFKA-3007) which was not present
in the streams tech preview release. It will, however, be in 0.10.
Cheers,
Geoff
On Thu, Apr 7, 2016 at 4:58 AM, Michael D. Coon
wrote:
One more thing I'm noticing in the logs.
I see periodic node-disconnection messages due to "timeout". I set my
metadata.fetch.timeout.ms to 6, request.timeout.ms to 3, and timeout.ms
to 3, and those should be more than enough time to wait for metadata
responses. I also set my offset co
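(For anyone following along in the archive: the numeric values above appear garbled, so the sketch below uses placeholder values, not the ones from the original message. A minimal sketch of wiring those timeout settings into the consumer Properties:)

```java
import java.util.Properties;

// Sketch of the timeout settings mentioned above. The values here are
// illustrative placeholders, not the poster's actual settings.
class TimeoutConfigSketch {
    static Properties build() {
        Properties props = new Properties();
        props.put("metadata.fetch.timeout.ms", "60000"); // placeholder
        props.put("request.timeout.ms", "30000");        // placeholder
        props.put("timeout.ms", "30000");                // placeholder
        return props;
    }
}
```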
Guozhang,
Thanks for the advice; however, "max.poll.records" doesn't seem to be
supported, since it isn't affecting how many records come back from the
consumer.poll calls. That said, I agree that the likely culprit in the rebalancing
is the delay in processing new records. I'm going to try
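(Editorial aside: until "max.poll.records" is honored, one workaround is to cap the batch size in application code after poll() returns, so the time between polls stays short. A hedged sketch, using a plain List as a stand-in for Kafka's ConsumerRecords:)

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split one poll()'s worth of records into bounded batches so
// downstream processing between polls stays short enough to avoid
// session timeouts. List<T> stands in for ConsumerRecords here.
class BatchCap {
    static <T> List<List<T>> chunk(List<T> records, int maxPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += maxPerBatch) {
            batches.add(new ArrayList<>(
                records.subList(i, Math.min(i + maxPerBatch, records.size()))));
        }
        return batches;
    }
}
```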
Hi Michael,
Your issue seems like a more general one with the new Kafka Consumer
regarding unexpected rebalances: as for Kafka Streams, its commit
behavior is synchronous, i.e. it triggers "consumer.commitSync" on the
underlying new Kafka Consumer, which will fail if there is an ongoing
rebalance.
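(Editorial aside: the usual pattern when a commit hits an in-flight rebalance is to let the consumer rejoin and retry. A self-contained sketch of that retry shape, where CommitFailedSim stands in for Kafka's CommitFailedException; in a real consumer you would poll() again so the rebalance completes, then commit the new assignment:)

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of retrying a commit that can fail while a rebalance is in
// flight. CommitFailedSim is a stand-in for CommitFailedException.
class CommitRetry {
    static class CommitFailedSim extends RuntimeException {}

    // attemptsBeforeSuccess simulates how many commits hit a rebalance.
    // Returns the attempt number that succeeded, or -1 if we gave up.
    static int commitWithRetry(AtomicInteger attemptsBeforeSuccess, int maxRetries) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                if (attemptsBeforeSuccess.getAndDecrement() > 0) {
                    throw new CommitFailedSim();
                }
                return attempt; // commit succeeded on this attempt
            } catch (CommitFailedSim e) {
                // In a real consumer: re-poll so the rebalance completes,
                // then retry the commit for the new assignment.
            }
        }
        return -1;
    }
}
```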
All,
I'm getting CommitFailedExceptions on a small prototype I built using
KafkaStreams. I'm not using the DSL, but the TopologyBuilder, with several
processors chained together and a sink in between a few of them. When I try
committing through the ProcessorContext, I see exceptions being thrown.
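(Editorial aside, an assumption rather than a confirmed fix: Kafka Streams already commits periodically on its own via "commit.interval.ms", so frequent manual ProcessorContext.commit() calls may be unnecessary and may collide with rebalances. A minimal sketch of setting the interval instead; the 30-second value is illustrative:)

```java
import java.util.Properties;

// Sketch: rely on Kafka Streams' periodic commits rather than calling
// ProcessorContext.commit() per record. 30000 ms is an illustrative value.
class CommitIntervalSketch {
    static Properties build() {
        Properties props = new Properties();
        props.put("commit.interval.ms", "30000"); // illustrative value
        return props;
    }
}
```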