Yep that's the one. You can see here for a great example on the typical
flow https://www.confluent.io/blog/transactions-apache-kafka/.
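For reference, the transactional flow described in that post rests on a few client configs (a sketch; the `transactional.id` value below is an arbitrary example, and it must be stable per application instance):

```properties
# Producer side: enable transactions (this implies idempotence)
transactional.id=my-app-instance-1
enable.idempotence=true

# Consumer side: only read records from committed transactions
isolation.level=read_committed
```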
On Sat, Jul 17, 2021 at 3:24 AM Pushkar Deole wrote:
> Hi Lerh Chuan Low,
>
> Many thanks for your response. I get it now that it provides exactly-once
>
Hi Lerh Chuan Low,
Many thanks for your response. I get it now that it provides exactly-once
semantics, i.e. it looks to the user as if each record is processed exactly once.
Also, I am clear on the aspect about the read_committed isolation level, so an
uncommitted transaction, and hence an uncommitted send, won't be visible to
Pushkar,
My understanding is you can easily turn it on by using Kafka Streams as
Chris mentioned. Otherwise you'd have to do it yourself - I don't think you
can get exactly once processing, but what you can do (which is also what
Kafka Streams does) is exactly once semantics (You won't be able
Another acceptable solution is making your actions idempotent: if you re-read
a message, you check "did I process it already?", or you do an
upsert... and keep it at at-least-once semantics
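Ran's idempotency suggestion can be sketched like this (a minimal sketch: an in-memory processed-ID set stands in for a durable store, and all names are hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

/** Sketch: at-least-once delivery made effectively exactly-once via idempotent processing. */
public class IdempotentProcessor {
    // Stand-in for a durable store of already-processed record IDs.
    private final Set<String> processedIds = new HashSet<>();

    /** Returns true if the record was processed now, false if it was a duplicate redelivery. */
    public boolean process(String recordId, Runnable action) {
        if (processedIds.contains(recordId)) {
            return false; // "did I process it already?" -> yes, so skip it
        }
        action.run();
        processedIds.add(recordId);
        return true;
    }
}
```

A redelivered record is then a no-op, which is what makes plain at-least-once delivery safe here.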
On Fri, Jul 16, 2021, 19:10 Ran Lupovich <
ranlupov...@gmail.com> wrote:
You need to make the processing and the saving of the partition/offsets
atomic; on rebalance, assign, or initial start events
you read the offsets back from the outside store. There are documentation and
examples on the internet. What type of processing are you doing?
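Ran's point can be sketched without a broker: the processed result and the consumer offset are written together in one atomic step, so a restart resumes from a consistent point (a synchronized in-memory map stands in for a transactional external store; all names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: persist the output and the offset atomically, and read the offset back on start. */
public class OffsetStore {
    private final Map<String, String> results = new HashMap<>(); // processed output
    private final Map<Integer, Long> offsets = new HashMap<>();  // partition -> next offset to read

    /** One atomic "transaction": save the result and advance the stored offset together. */
    public synchronized void commit(int partition, long offset, String key, String value) {
        results.put(key, value);
        offsets.put(partition, offset + 1); // next offset to consume after this record
    }

    /** On initial start / rebalance / assign: read the offset back and seek the consumer there. */
    public synchronized long offsetFor(int partition) {
        return offsets.getOrDefault(partition, 0L);
    }
}
```

With a real consumer you would call `offsetFor` from a rebalance listener and seek each assigned partition to the returned offset.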
Chris,
I am not sure how this solves the problem scenario that we are experiencing
in a customer environment. The scenario is:
1. the application consumed a record and processed it
2. the processed record was produced on the destination topic and an ack was received
3. before committing the offset back to the consumed
It is not possible out of the box, it is something you’ll have to write
yourself. Would the following work?
Consume -> Produce to primary topic -> get success ack back -> commit the
consume
Else if ack fails, produce to dead letter, then commit upon success
Else if dead letter ack fails, exit
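Chris's proposed flow above can be sketched as a small decision function (a sketch; the enum and method names are hypothetical, and the two acks stand for the primary-topic and dead-letter produce outcomes):

```java
/** Sketch of the flow: on ack, commit; on failure, try the dead letter topic; else exit. */
public class AckFlow {
    public enum Ack { OK, FAIL }
    public enum Action { COMMIT, COMMIT_AFTER_DEAD_LETTER, EXIT }

    public static Action decide(Ack primaryAck, Ack deadLetterAck) {
        if (primaryAck == Ack.OK) {
            return Action.COMMIT;                  // normal path: commit the consumed offset
        }
        if (deadLetterAck == Ack.OK) {
            return Action.COMMIT_AFTER_DEAD_LETTER; // record is parked; safe to commit
        }
        return Action.EXIT;                         // nothing durable happened; do not commit
    }
}
```

The key invariant is that the consumer offset is committed only after the record has landed durably somewhere (primary or dead letter topic).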
Thanks Chris for the response!
The current application is quite evolved, is currently using the
consumer-producer model described above, and we need to fix some bugs soon
for a customer. So moving to Kafka Streams seems like bigger work. That's why
I'm looking at a workaround, to see if the same thing can be achieved with
Pushkar, in Kafka development, for a custom consumer/producer you handle it yourself.
However, you can ensure the process stops (or sends the message to a dead letter topic)
before manually committing the consumer offset. On the produce side you can
turn on idempotence or transactions. But unless you are using Streams,
Hi All,
I am using a normal Kafka consumer-producer in my microservice, with a
simple model of consume from source topic -> process the record -> produce
on destination topic.
I am mainly looking for an exactly-once guarantee wherein the offset commit
to the consumed topic and the produce on the destination