Hi.
Yeah, Spring-Kafka processes messages sequentially per partition, so consumer
throughput is capped by the database latency of each individual message.
One possible solution is creating an intermediate topic (or altering the source
topic) with many more partitions, as Marina suggested.
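To see why the partition count matters, here is a rough sizing sketch. The 12 million messages/hour peak comes from the thread below; the 10 ms per-message DB latency is an illustrative assumption, not a measured number. Since each partition is consumed by at most one thread within a consumer group, the partition count bounds total parallelism:

```java
// Back-of-the-envelope sizing: how many partitions are needed to keep up
// at peak, assuming (illustratively) 10 ms of DB latency per message.
public class PartitionSizing {
    public static void main(String[] args) {
        double peakPerSec = 12_000_000.0 / 3600.0;     // ~3333 msgs/sec at peak
        double dbLatencyMs = 10.0;                      // assumed per-message DB latency
        double perThreadPerSec = 1000.0 / dbLatencyMs;  // 100 msgs/sec per consumer thread

        // One thread per partition at most, so partitions bound parallelism.
        int partitionsNeeded = (int) Math.ceil(peakPerSec / perThreadPerSec);
        System.out.println("Partitions needed: " + partitionsNeeded); // 34
    }
}
```

With these assumed numbers you would need on the order of 34 partitions (and as many consumer threads) just to break even at peak, which is why repartitioning into an intermediate topic is the usual first move.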
I'd like to suggest the following. The way I see it, you can only do a few
things, assuming you are sure there is no room for perf optimization of the
processing itself:
1. Speed up your processing per consumer thread, which you already tried by
splitting your logic into a 2-step pipeline instead of a 1-step one, and
delegating the work
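The 2-step idea above can be sketched as follows. This is an illustrative, self-contained simulation, not the poster's actual code: the consumer thread does only the cheap transform, and the slow step (e.g. the DB write) is delegated to a worker pool so the polling thread is never blocked on I/O.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a 2-step pipeline: step 1 (cheap parse) stays on the consumer
// thread, step 2 (slow write) is handed to a fixed worker pool.
public class TwoStepPipeline {
    public static void main(String[] args) throws Exception {
        ExecutorService dbWorkers = Executors.newFixedThreadPool(4);
        AtomicInteger written = new AtomicInteger();

        for (int i = 0; i < 100; i++) {        // stand-in for records from poll()
            String parsed = "record-" + i;     // step 1: cheap transform
            dbWorkers.submit(() -> {           // step 2: slow write, delegated
                // a real handler would do the DB insert here
                written.incrementAndGet();
            });
        }

        dbWorkers.shutdown();
        dbWorkers.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("written=" + written.get()); // written=100
    }
}
```

One caveat worth noting: once work is delegated off the consumer thread, the default commit behavior no longer guarantees at-least-once delivery; offsets should only be committed after the worker has actually finished the write.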
Hi,
I am new to the Kafka world and am running into a scaling problem. I thought
I'd reach out to the community to see if someone can help.
The problem is that I am trying to consume from a Kafka topic that can peak
at 12 million messages/hour. That topic is not under my control - it
has 7
This vote passes with 5 +1 votes (3 binding) and no 0 or -1 votes.
+1 votes
PMC Members:
* Matthias J. Sax
* Gwen Shapira
* John Roesler
Committers:
* Sophie Blee-Goldman
Community:
* Jakub Scholz
0 votes
* None
-1 votes
* None
Vote thread: