able to deliver. There are many configs, so maybe there is a
way to tune your app to make it work with EOS?
-Matthias
On 9/3/25 10:50 AM, Victor Osorio wrote:
Hello everyone,
We're currently using Kafka Streams to process transactional data with
exactly-once semantics (EOS). However, for some of our workloads, we require
higher throughput, which makes EOS impractical.
To ensure data integrity, we rely on an UncaughtExceptionHandler.
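A minimal sketch of registering such a handler is shown below (the topology, configs, and the REPLACE_THREAD choice are placeholders, not Victor's actual setup):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("input").to("output"); // placeholder topology

Properties props = new Properties(); // application.id, bootstrap.servers, ...
KafkaStreams streams = new KafkaStreams(builder.build(), props);

// Replace only the failed stream thread instead of crashing the client.
streams.setUncaughtExceptionHandler(exception ->
        StreamThreadExceptionResponse.REPLACE_THREAD);
streams.start();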
topics source and sink (no
>>> additional repartition and changelog), and as I mentioned before we don't
>>> want to introduce manual validation (like if-null statements) in the
>>> flatMap method;
>>> we would like to use the automatic JSON schema validation feature f
>>
>>> Ideal solution has two topics, can we somehow do flatMap and change a key
>>> without creating the internal repartition topic? This will allow
>>> us to skip using the internal repartition topic as source for the KTable.
>>>
>>>
topic and from the repartition topic also do the validation of the JSON?
Regarding points 2 and 3:
I agree that the dependency on the name of the repartition topic is not a good idea.
The naming of the repartition topic is rather an implementation detail that
should not be leaked. You could improve o
Then, you can read the elements again from the output topic and write
them into the table.
Regarding point 1:
Could you do the validation when you write the elements to the output
topic?
Best,
Bruno
On 25.02.25 14:20, Paweł Szymczyk wrote:
Dear Kafka users,
I spent the last few days working with Kafka Streams on some tasks which
looked very easy at first glance, but I ended up struggling with the
StreamsBuilder API and did something which I am not proud of. Please help
me, I am open to any suggestions.
On the input topic we have a
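A minimal sketch of the pattern Bruno describes above, with made-up topic names and re-keying logic:

import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();

// Validate and re-key in flatMap, then write to an explicit output topic so
// the table is not built from an internal repartition topic whose name is an
// implementation detail.
builder.<String, String>stream("input-topic")
        .flatMap((key, value) -> List.of(KeyValue.pair(value, key))) // illustrative re-keying
        .to("output-topic");

// Read the validated records back and materialize them as a table.
KTable<String, String> table = builder.table("output-topic");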
Both together should give you a starting point to understand what the
issue could be, and which change (more KS instances, fewer threads,
more memory, config change) should help.
-Matthias
On 1/9/25 7:30 PM, Martinus Elvin wrote:
Hello,
We have a kafka streams service that performs a left join (KStreams)
operation by their message key. The message size is 1 KB more or less.
The left topic has around two hundred thousand (200,000) messages per
second, while the right topic has around two thousand (2,000) messages
per second.
Each t
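For reference, such a stream-stream left join has roughly this shape (topic names, serde setup, and the joiner are illustrative, not Martinus' actual topology):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> left = builder.stream("left-topic");
KStream<String, String> right = builder.stream("right-topic");

// Left join on the message key; the right value may be null when no match
// arrives within the window.
KStream<String, String> joined = left.leftJoin(
        right,
        (leftValue, rightValue) -> leftValue + "|" + rightValue,
        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)));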
error when
building/starting the topology.
I've put a minimal reproduction below (against kafka-streams 3.7.0, please
excuse the Scala). This fails with:
org.apache.kafka.streams.errors.StreamsException: failed to initialize
processor KSTREAM-LEFTJOIN-03
at
org.apache.kafka.streams.processor.internals.Pro
Updating Kafka Streams would be enough.
-Bill
On Wed, Dec 18, 2024 at 6:50 AM TheKaztek wrote:
> Would updating the kafka streams client library be enough? Or should the
> cluster be updated too?
>
> On Tue, 17 Dec 2024 at 19:28, Bill Bejeck wrote:
Hi,
I think you could be hitting KAFKA-17635
<https://issues.apache.org/jira/browse/KAFKA-17635> which has been fixed in
Kafka Streams v3.7.2.
It's been released this week; is it possible to upgrade and try it out?
-Bill
On Tue, Dec 17, 2024 at 4:10 AM TheKaztek wrote:
Hi, we have a worrying problem in the project where we use kafka streams.
We had an incident where, during heavy load on our app (dozens of millions
of records in a 15-minute span on an input topic of the stream), we decided
to add additional instances of the app (5 -> 10) and some of the
alre
To: users@kafka.apache.org
Subject: Re: Unexpected Tombstone records after kafka streams update 3.7.1
Sounds like https://issues.apache.org/jira/browse/KAFKA-16394
-Matthias
On 8/29/24 02:35, Vogel, Kevin, DP EPS, BN, extern, external wrote:
> Hello there,
>
> I searched the Apache Jira for a
Hello there,
I searched the Apache Jira for a bug report on this topic but couldn't find
one. Maybe anyone else has noticed something similar or knows more about this.
After we updated our Spring Boot Kafka Streams application kafka-streams
dependency from 3.6.2 to 3.7.1, we noticed some failing tests. The expected
behavior of the streams-processor is to join two input topics together and
produce a single output record. Then the test inje
From: Matthias J. Sax
Sent: Thursday, June 20, 2024 9:25:10 PM
To: users@kafka.apache.org
Subject: Re: Kafka Streams branching return type affects bindings when using
multiple output bindings: intentional behaviour?
Jenny,
thanks for reach
Hello,
KIP-418 (>=2.8) improved the branching return type from KStream<>[] to HashMap.
This changes the output binding when using multiple branches in the Kafka
Streams binder of Spring Cloud Stream - the bindings are possibly wrong
because the map storing the KStream branches is unordered.
Before KIP-418, i
on count is the same for all input topics).
To work around this, you would need to rewrite the program to use either
`groupBy((k,v) -> k)` instead of `groupByKey()`, or do a
`.repartition().groupByKey()`.
Does this make sense?
-Matthias
On 5/16/24 2:11 AM, Kay Hannay wrote:
Hi,
we have a Kafka streams application which merges (merge, groupByKey,
aggregate) a few topics into one topic. The application is stateful, of
course. There are currently six instances of the application running in
parallel.
We had an issue where one new topic for aggregation did have
Ah. Well this isn't anything new then since it's been the case since 2.6,
but the default task assignor in Kafka Streams will sometimes assign
partitions unevenly for a time if it's trying to move around stateful tasks
and there's no copy of that task's state on the l
What version did you upgrade from?
On Wed, May 8, 2024 at 10:32 PM Penumarthi Durga Prasad Chowdary <
prasad.penumar...@gmail.com> wrote:
Hi Team,
I'm utilizing Kafka Streams to handle data from Kafka topics, running
multiple instances with the same application ID. This enables distributed
processing of Kafka data across these instances. Furthermore, I've
implemented state stores with time windows and session windows. To retrieve
windows efficiently, I've established a remote query mechanism between
Kafka Streams instances. By leveraging the queryMetadataForKey method on
streams, I can retrieve the hostname where a specific key
issue (which has most
likely been resolved by correcting the METADATA_MAX_AGE_CONFIG setting) from
surfacing.
Thank you.
Kind regards,
Venkatesh
From: Matthias J. Sax
Date: Friday, 5 April 2024 at 3:59 AM
To: users@kafka.apache.org
Subject: Re: [EXTERNAL] Re: Kafka Streams 3.5.1 based app seems
Hi Pushkar,
unfortunately, cross cluster processing is currently not possible with
Kafka Streams.
Best,
Bruno
On 4/11/24 4:13 PM, Pushkar Deole wrote:
Hi All,
We are using a streams application and currently the application uses a
common kafka cluster that is shared along with many other
common cluster but produce the
processed data onto a new cluster. Is it possible through kafka streams,
wherein the consumer in the streams is consuming from 1 cluster while the
sink happens onto another cluster?
Hi Mangat,
back to work now. I've configured our Streams applications to use
exactly-once semantics, but to no avail. Actually, after some more
investigation I've come to suspect that the issue is somehow related
to rebalancing.
The initially shown topology lives inside a Quarkus Kaf
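For reference, enabling EOS is a single config (constants per Kafka 3.x; the application id and bootstrap servers are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// EXACTLY_ONCE_V2 supersedes the deprecated EXACTLY_ONCE constant.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);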
Perf tuning is always tricky... 350 rec/sec sounds pretty low though.
You would first need to figure out where the bottleneck is. Kafka
Streams exposes all kinds of metrics:
https://kafka.apache.org/documentation/#kafka_streams_monitoring
Might be good to inspect them as a first step -- maybe
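As a starting point, those metrics can also be dumped programmatically (assuming `streams` is the running KafkaStreams instance):

import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

// Print every client-, thread-, task- and processor-level metric;
// process-rate and record-lag values are a good first look.
for (Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
    System.out.printf("%s = %s%n", entry.getKey().name(), entry.getValue().metricValue());
}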
Hi All,
My streams application is not processing more than 350 records/sec on a
high load of 3 million records produced every 2-3 minutes.
My scenarios are as below -
I am on Kafka and streams version 3.5.1.
My key-value pair is in protobuf format.
I do a groupByKey followed by a TimeWindow of
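Roughly that shape in code (window size, topic name, and the count() stand-in for the actual aggregation are assumptions):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic"); // illustrative

// groupByKey, then a time window over the grouped stream.
KTable<Windowed<String>, Long> counts = input
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(2)))
        .count();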
Apologies for the delay, Bruno. Thank you so much for the excellent link and
for your inputs! Also, I would like to thank Matthias and yourself for the
guidance on the stalling issue in the Kafka Streams client. After restoring the
default value for the METADATA_MAX_AGE_CONFIG, I haven't seen the issue
happening. Heavy rebalancing (as mentioned before) continues to happen. I will
refer to the link which mentions about certa
exactly-once semantics already, was unsure whether it would help, and left
it aside for once then. I'll try that immediately when I get back to work.
About snapshots and deserialization - I doubt that the issue is caused by
deserialization failures because: when taking another (i.e. at a later
point of time) snapshot of the exact same data, all messages fed into the
input topic pass the pipeline as expected.
Logs of both Kafka and Kafka Streams show no signs of notable issues as far
as I can tell, apart from these
Hey Karsten,
There could be several reasons this could happen.
1. Did you check the error logs? There are several reasons why the Kafka
Streams app may drop incoming messages. Use exactly-once semantics to limit
such cases.
2. Are you sure there was no error when deserializing the records from
`fol
Hi,
thanks for getting back. I'll try and illustrate the issue.
I've got an input topic 'folderTopicName' fed by a database CDC system.
Messages then pass a series of FK left joins and are eventually sent to an
output topic like this ('agencies' and 'documents' being KTables):
strea
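The FK left join Karsten describes has roughly this shape (value types and the key extractor are simplified to strings here; the real code would use proper classes and serdes):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
KTable<String, String> folders = builder.table("folderTopicName");
KTable<String, String> agencies = builder.table("agencies");

// The extractor derives the foreign key (agency id) from the folder value;
// with a left join the agency side may be null.
KTable<String, String> enriched = folders.leftJoin(
        agencies,
        folderValue -> folderValue.split(",")[0],               // illustrative FK extractor
        (folderValue, agencyValue) -> folderValue + "|" + agencyValue);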
Hi,
That sounds worrisome!
Could you please provide us with a minimal example that shows the issue
you describe?
That would help a lot!
Best,
Bruno
On 3/25/24 4:07 PM, Karsten Stöckmann wrote:
Hi,
are there circumstances that might lead to messages silently (i.e. without
any logged warnings or errors) disappearing from a topology?
Specifically, I've got a rather simple topology doing a series of FK left
joins and notice severe message loss in case the application is fired up
for the fi
/19/24 4:14 AM, Venkatesh Nagarajan wrote:
Thanks very much for sharing the links and for your important inputs, Bruno!
> We recommend to use as many stream threads as cores on the compute node
> where the Kafka Streams client is run. How many Kafka Streams tasks do you
> have to distribute over the clients?
We use 1vCPU (p
Hi Venkatesh,
As you discovered, in Kafka Streams 3.5.1 there is no stop-the-world
rebalancing.
Static group membership is helpful when Kafka Streams clients are restarted,
as you pointed out.
> ERROR org.apache.kafka.streams.processor.internals.StandbyTask -
stream-thread [-StreamThrea
Just want to make a correction, Bruno - My understanding is that Kafka Streams
3.5.1 uses Incremental Cooperative Rebalancing which seems to help reduce the
impact of rebalancing caused by autoscaling etc.:
https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
Static
?
Thank you very much.
Kind regards,
Venkatesh
From: Bruno Cadonna
Date: Wednesday, 13 March 2024 at 8:29 PM
To: users@kafka.apache.org
Subject: Re: [EXTERNAL] Re: Kafka Streams 3.5.1 based app seems to get stalled
Hi Venkatesh,
Extending on what Matthias replied, a metadata refresh might
u have any suggestions on how to
reduce such rebalancing, that will be very helpful.
Thank you very much.
Kind regards,
Venkatesh
From: Matthias J. Sax
Date: Tuesday, 12 March 2024 at 1:31 pm
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Kafka Streams 3.5.1 based app seems to get stalled
: Kafka Streams 3.5.1 based app seems to get stalled
Thanks very much for your important inputs, Matthias.
I will use the default METADATA_MAX_AGE_CONFIG. I set it to 5 hours when I saw
a lot of such rebalancing related messages in the MSK broker logs:
INFO [GroupCoordinator 2]: Preparing to
Without detailed logs (maybe even DEBUG) it is hard to say.
But
metadata
refresh does not trigger a rebalance.
-Matthias
On 3/10/24 5:56 PM, Venkatesh Nagarajan wrote:
Hi all,
A Kafka Streams application sometimes stops consuming events during load
testing. Please find below the details:
Details of the app:
* Kafka Streams Version: 3.5.1
* Kafka: AWS MSK v3.6.0
* Consumes events from 6 topics
* Calls APIs to enrich events
* Sometimes
Case closed, behaviour is actually as expected. - The source topic contains
multiplied data that gets propagated into the join just as it should. I'm
leveraging a stream processor for deduplication now.
Best wishes
Karsten
Vikram Singh wrote on Fri,
23 Feb 2024, 12:13:
+Ajit Kharpude
On Fri, Feb 23, 2024 at 1:14 PM Karsten Stöckmann <
karsten.stoeckm...@gmail.com> wrote:
Hi,
I am observing somewhat unexpected (from my point of view) behaviour
with re-key / re-partitioning operations in order to prepare a
KTable-KTable join.
Assume two (simplified) source data structures from two respective topics:
class User {
    Long id; // PK
    String name;
}
class Attribute
Hi,
is anyone here familiar with Quarkus Kafka Streams applications? If so
- is there a way to control output topic configuration when streaming
aggregate data into a sink like so:
KTable aggregate = ...;
aggregate.toStream().to("topic", );
-> Can I programmatically (or by appli
I want to understand whether kafka streams can solve this efficiently.
Since the number of files is unbounded, how would kafka manage intermediate
topics for the groupBy operation? How many partitions will it use, etc.? Can't
find these details in the docs. Also let's say chunk has a flag th
discussion with Partner Manager.
> On 01/24/2024 11:50 PM +08 Dharin Shah wrote:
>
>
> Hi Karsten,
>
> Before delving deeper into Kafka Streams, it's worth considering if direct
> aggregation in the database might be a more straightforward solution,
> unless there'
not yet familiar with size
> and performance requirements in Kafka as we are still somewhere at the
> beginning of implementing our indexing solution. Initial Debezium snapshots
> were quite fast from my point of view, resulting in overall broker disk
> usage of 35Gi on 3 replicas ea
n overall broker disk
usage of 35Gi on 3 replicas each. The intended Kafka Streams application is
based on Quarkus and its Streams extension. To my knowledge, it uses
RocksDB internally. Concerning State Store management - I haven't applied
any special configuration yet so I guess there are
Hi Karsten,
Before delving deeper into Kafka Streams, it's worth considering if direct
aggregation in the database might be a more straightforward solution,
unless there's a compelling reason to avoid it. Aggregating data at the
database level often leads to more efficient and ma
ation and
label values (i.e. aggregated as lists), what would be the most elegant
solution leveraging Kafka Streams? Note that customers do not necessarily
have any communication or label at all, thus non-key joins are out of the
game as far as I understand.
Our initial (naive) solution was to r
m DSL that might look like:
topic('chunks')
    .groupByKey((fileId, chunk) -> fileId)
    .sortBy((fileId, chunk) -> chunk.offset)
    .aggregate((fileId, chunk) -> store.append(fileId, chunk));
no `transform()`
because keying must be preserved -- if you want to change the keying
you need to use `KTable#groupBy()` (data needs to be repartitioned if
you change the key).
HTH.
-Matthias
On 1/12/24 11:47 AM, Igor Maznitsa wrote:
Hello
Is there any way in Kafka Streams API to define processors for KTable
and KGroupedStream like KStream#transform? How to provide a custom
processor for KTable or KGroupedStream which could for instance provide a
way to not downstream selected events?
--
Igor Maznitsa
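A small sketch of the KTable#groupBy() route Matthias mentions (the key mapper and reduce logic are invented for illustration):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
KTable<String, String> table = builder.table("input-table-topic");

// groupBy() emits a new KeyValue per record (triggering a repartition);
// reduce() needs an adder and a subtractor to maintain the re-keyed table.
KTable<String, Long> counts = table
        .groupBy((key, value) -> KeyValue.pair(value, 1L))
        .reduce(Long::sum, (aggregate, oldValue) -> aggregate - oldValue);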
ion-handler via
>
> KafkaStreams#setUncaughtExceptionHandler(...)
>
>
> -Matthias
>
> On 10/2/23 12:11 PM, Debraj Manna wrote:
> > Are you suggesting to check the Kafka broker logs? I do not see any other
> > errors logs on the client / application side.
> >
> > On Fri, 29 Sep, 2023, 22:01 Matthi
Hello,
We have a Kafka Streams consumer application running Kafka Streams 3.4,
with our Kafka brokers running 2.6. This consumer application consumes from
two topics with 250 partitions each, and we co-partition them to ensure
each task is consuming from the same partitions in each topic. Some
Are you suggesting to check the Kafka broker logs? I do not see any other
error logs on the client / application side.
On Fri, 29 Sep 2023, 22:01 Matthias J. Sax wrote:
In general, Kafka Streams should keep running.
Can you inspect the logs to figure out why it's going into ERROR state
to begin with? Maybe you need to increase/change some timeouts/retries
configs.
The stack trace you shared is a symptom, but not the root cause.
-Matthias
On 9/21/23
I am using Kafka broker 2.8.1 (from AWS MSK) with Kafka clients and Kafka
Streams 3.5.1.
I am observing that whenever some rolling upgrade is done on AWS MSK, our
stream application reaches an error state. I get the below exception on
trying to query the state store:
caused by: java.lang.IllegalStat
Hello,
This is my first time asking a question on a mailing list, so please
forgive me any inaccuracies.
I am having a Kafka Streams application with a Punctuator.
Inside the punctuate() I have this code:
// fooStore is a state store
for (FooKey fooKey : fooKeysRepository.get(Type.FOO)) {
    Foo foo = fooStore.get(fooKey); // foo != null
}
This returns a Foo which is not null.
FooKey has fields: [string(nullable
ameAndType(TABLE_NAME,
queryableStoreType).enableStaleStores()
as described in the blog post?
2. Since you are querying a stale store, could it be that the standby
hasn't caught up yet?
Best,
Bruno
On 9/6/23 8:32 AM, Igor Maznitsa wrote:
Hello
1. I am starting two Kafka Streams applications
Hello
1. I am starting two Kafka Streams applications working in the same group
with num.standby.replicas=1
2. Application A has an active TimeWindow data store and application B has
the standby version of the data store
Is there any way to read the standby store time window data in bounds of
B
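Reading from the standby requires opting in to stale stores, along the lines Bruno quotes above (store name and types are illustrative; standby data may lag behind the active copy):

import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;

// streams is the running KafkaStreams instance.
StoreQueryParameters<ReadOnlyWindowStore<String, Long>> params =
        StoreQueryParameters
                .fromNameAndType("time-window-store",
                        QueryableStoreTypes.<String, Long>windowStore())
                .enableStaleStores();
ReadOnlyWindowStore<String, Long> store = streams.store(params);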
ed. That
is the reason why you get those incorrect alerts -- Kafka cannot know
that you stopped consuming from those topics. (That is what I tried to
explain -- seems I did a bad job...)
Changing the group.id is tricky because Kafka Streams uses it to
identify internal topic names (for repartition and changelog topics), and
thus your app would start with newly created (and thus empty) topics. --
You might want to restart the app with `auto.offset.reset = "earliest"`
and reprocess all available input to re-cr
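If you do switch to a new application.id, the reset config is a one-liner (the id is a made-up example following the naming in the thread below):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "applicationV2"); // new id => empty group
// Only takes effect while the (new) group has no committed offsets yet.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");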
@matthias
what are the alternatives to get rid of this issue? When the lag starts
increasing, we have alerts configured on our monitoring system in Datadog
which starts sending alerts and alarms to reliability teams. I know in
kafka the inactive consumer group is cleared up after 7 days however no
Well, it's kinda expected behavior. It's a split brain problem.
In the end, you use the same `application.id / group.id` and thus the
committed offsets for the removed topics are still in the
`__consumer_offsets` topic and associated with the consumer group.
If a tool inspects lags and compares
Hi streams Dev community @matthias, @bruno
Any inputs on the above issue? Is this a bug in the streams library wherein,
when an input topic is removed from the streams processor topology, the
underlying consumer group still reports lag against those topics?
On Wed, Aug 9, 2023 at 4:38 PM Pushkar Deole wrote:
> Hi
Hi All,
I have a streams application with 3 instances with application-id set to
applicationV1. The application uses processor API with reading from source
topics, processing the data and writing to destination topic.
Currently it consumes from 6 source topics; however, we don't need to process
data
process them in that order"?
> >>
> >> The order of the events from the same source partition (partition before
> >> repartitioning) that have the same call ID (or more generally that end
> >> up in the same partition after repartitioning) will be prese
served but
Kafka does not guarantee the order of events from different source
partitions.
Best,
Bruno
On 7/9/23 2:45 PM, Pushkar Deole wrote:
Hi,
We have a kafka streams application that consumes from multiple topic
with
different keys. Before processing these events in the application, we
w
source
> partitions.
>
> Best,
> Bruno
>
> On 7/9/23 2:45 PM, Pushkar Deole wrote:
> > Hi,
> >
> > We have a kafka streams application that consumes from multiple topic
> with
> > different keys. Before processing these events in the application, we
>
Bruno
On 7/9/23 2:45 PM, Pushkar Deole wrote:
Hi,
We have a kafka streams application that consumes from multiple topic with
different keys. Before processing these events in the application, we want
to repartition those events on a single key that will ensure related events
are process
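A sketch of repartitioning on such a correlation key (the call-id extraction and topic name are invented):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic"); // illustrative

// Re-key on the correlating attribute, then force an explicit repartition so
// related events land in the same partition before processing.
KStream<String, String> byCallId = input
        .selectKey((key, value) -> value.split(":")[0])    // illustrative call-id extraction
        .repartition(Repartitioned.<String, String>as("by-call-id")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));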
Hello, Kafka dev community, @matthiasJsax
Can you comment on the question below? It is very important for us since we
are getting inconsistencies due to the current design.
On Sun, Jul 9, 2023 at 6:15 PM Pushkar Deole wrote: