Hi,
We also did a version upgrade and then downgraded to the old version due to an issue.
But we did that about a year ago.
Do I need to follow this to solve the issue?
http://mail-archives.apache.org/mod_mbox/kafka-dev/201907.mbox/%3cjira.13246826.1563975667000.40318.1563975720...@atlassian.jira%3E
In this
On Wed, Nov 20, 2019 at 6:35 PM Edward Capriolo
wrote:
>
>
> On Wednesday, November 20, 2019, Matthias J. Sax
> wrote:
>
>> I am not sure what Spring does, but with Kafka Streams, writing the
>> output and committing the offset are part of the same transaction.
>>
>> It seems Spring is doing
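For reference, the exactly-once behavior described above (output writes and offset commits in one transaction) is enabled in Kafka Streams through a single setting; a minimal config sketch, assuming a 2.x client where `processing.guarantee` is the relevant property:

```
# Kafka Streams application config (sketch)
# exactly_once makes output production and offset commits atomic
processing.guarantee=exactly_once
```

With the default `at_least_once`, a crash between producing output and committing offsets can cause duplicates on restart.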
Bumping this up with a new update:
I've investigated another occurrence of this exception.
For the analysis, I used:
1) a memory dump taken from the broker
2) the Kafka log file
3) the Kafka state-change log
4) the log, index, and time-index files of a failed segment
5) the Kafka source code, version 2.3.1
Hi!
I think we need to step back a little bit and understand what it is you
> are trying to achieve; that will help us give you an accurate
> answer.
>
Sure, I'm working on a pet project: a simple key-value database
replicated over Kafka.
I already implemented simple atomic
I have added this to my consumer config, and now it works fine.
receive.buffer.bytes=1048576
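For context, `receive.buffer.bytes` sets the TCP receive buffer the consumer's sockets use (the consumer default is 64 KB), and a larger buffer helps on high-latency cross-DC links. A minimal sketch of setting it alongside other consumer properties; the broker address and group id here are hypothetical placeholders:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address and group id for illustration only
        props.setProperty("bootstrap.servers", "broker.example.com:9092");
        props.setProperty("group.id", "cross-dc-reader");
        // Raise the TCP receive buffer from the 64 KB consumer default to 1 MB
        // to better fill the bandwidth-delay product of a cross-DC link.
        props.setProperty("receive.buffer.bytes", "1048576");
        System.out.println(props.getProperty("receive.buffer.bytes"));
    }
}
```

These `Properties` would then be passed to the `KafkaConsumer` constructor as usual.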
On Wed, Nov 13, 2019 at 10:41 AM Upendra Yadav
wrote:
> Hi,
>
> I'm using the consumer assign() method with a 15000 ms poll timeout to
> consume single-partition data from another DC.
>
> Below are
Hi Team,
In my production setup, I'm getting the below exception (for 4-5
__consumer_offsets partitions) while restarting Kafka broker(s).
Before the restart, my (sync) producers are very slow; for some messages it
takes 5 seconds to get an acknowledgment.
But after the restart, all messages are acknowledged within 2 milliseconds.