Another acceptable solution is to make the actions idempotent: when you
re-read a message, check "did I process this already?", or do an upsert,
and keep at-least-once semantics.

On Fri, Jul 16, 2021, 19:10, Ran Lupovich <ranlupov...@gmail.com> wrote:

> You need to make the processing and the saving of the partition/offsets an
> atomic action, and on rebalance, on assignment, or on initial start you read
> the offsets back from the external store. There are documentation and
> examples for this on the internet. What type of processing are you doing?
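One way to sketch that pattern: persist the processing result and the offset together in one atomic write to the external store, and on start or rebalance resume from the stored offset rather than the Kafka-committed one. The "store" below is an in-memory dict and the atomic write is simulated; with a real consumer the seek would happen in a rebalance callback (e.g. a `ConsumerRebalanceListener` in the Java client):

```python
# External store holding, per partition, both the processing result and
# the offset it came from -- written together, atomically. In a real
# system this would be a single database transaction (e.g. one UPSERT).
store = {}   # partition -> {"result": ..., "next_offset": ...}

def process_and_save(partition, offset, value):
    """Atomically persist the result together with the next offset."""
    store[partition] = {"result": value.upper(),   # illustrative "processing"
                        "next_offset": offset + 1}

def position_on_assign(partition):
    """On initial start or rebalance, resume from the stored offset
    instead of the offset committed to Kafka."""
    entry = store.get(partition)
    return entry["next_offset"] if entry else 0

process_and_save(0, 41, "hello")
```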
>
> On Fri, Jul 16, 2021, 19:01, Pushkar Deole <pdeole2...@gmail.com> wrote:
>
>> Chris,
>>
>> I am not sure how this solves the problem scenario that we are experiencing
>> in a customer environment. The scenario is:
>> 1. The application consumed a record and processed it.
>> 2. The processed record was produced on the destination topic and the ack
>> was received.
>> 3. Before the offset was committed back to the consumed topic, the
>> application pod crashed, or was shut down by Kubernetes or due to some
>> other issue.
>>
>> On Fri, Jul 16, 2021 at 8:57 PM Chris Larsen <clar...@confluent.io.invalid>
>> wrote:
>>
>> > It is not possible out of the box; it is something you'll have to write
>> > yourself. Would the following work?
>> >
>> > Consume -> produce to primary topic -> get success ack back -> commit the
>> > consumed offset.
>> >
>> > Else, if the ack fails, produce to the dead letter topic, then commit
>> > upon success.
>> >
>> > Else, if the dead-letter ack fails, exit (and thus don't commit).
>> >
>> > Does that help? Someone please feel free to slap my hand, but it seems
>> > legit to me ;)
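The flow above can be sketched as follows; the produce and commit callables are stubs standing in for a real Kafka client, and all names are illustrative:

```python
def handle_record(record, produce_primary, produce_dead_letter, commit):
    """Consume -> produce -> commit, with a dead-letter fallback.
    Only a successful ack (from either topic) leads to a commit; if even
    the dead-letter produce fails, exit without committing so the record
    is redelivered on restart."""
    if produce_primary(record):
        commit(record)
        return "committed"
    if produce_dead_letter(record):
        commit(record)
        return "dead-lettered"
    raise SystemExit("both produces failed; not committing")

committed = []
ok = lambda r: True     # stub: produce succeeded, ack received
fail = lambda r: False  # stub: produce failed

handle_record("r1", ok, fail, committed.append)   # happy path
handle_record("r2", fail, ok, committed.append)   # dead-letter path
```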
>> >
>> > Chris
>> >
>> >
>> >
>> > On Fri, Jul 16, 2021 at 10:48 Pushkar Deole <pdeole2...@gmail.com>
>> > wrote:
>> >
>> > > Thanks Chris for the response!
>> > >
>> > > The current application is quite evolved and is currently using the
>> > > consumer-producer model described above, and we need to fix some bugs
>> > > soon for a customer. So moving to Kafka Streams seems like bigger work.
>> > > That's why I am looking at a workaround, to see if the same thing can be
>> > > achieved with the current model using transactions that span consumer
>> > > offset commits and producer sends.
>> > >
>> > > We have made the producer idempotent and turned on transactions.
>> > > However, we want the offset commit on the consumer and the send from the
>> > > producer to be atomic. Is that possible?
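It is: the Java client's transactional producer can commit the consumed offsets inside the same transaction as the produced records, via `sendOffsetsToTransaction` called between `beginTransaction()` and `commitTransaction()`. Below is a toy in-memory model of that atomicity (the `TxnProducer` class is a stand-in for Kafka, not the real client API):

```python
# Toy model of Kafka's transactional produce-plus-offset-commit: the
# produced records and the consumer offsets become visible together on
# commit, or not at all on abort.
class TxnProducer:
    def __init__(self):
        self.topic = []     # committed records on the destination topic
        self.offsets = {}   # committed consumer offsets per partition
        self._pending = None

    def begin_transaction(self):
        self._pending = {"records": [], "offsets": {}}

    def send(self, value):
        self._pending["records"].append(value)

    def send_offsets_to_transaction(self, offsets):
        self._pending["offsets"].update(offsets)

    def commit_transaction(self):
        self.topic.extend(self._pending["records"])
        self.offsets.update(self._pending["offsets"])
        self._pending = None

    def abort_transaction(self):
        self._pending = None    # nothing becomes visible

p = TxnProducer()
p.begin_transaction()
p.send("processed-1")
p.send_offsets_to_transaction({("source", 0): 11})
p.commit_transaction()

p.begin_transaction()
p.send("processed-2")
p.send_offsets_to_transaction({("source", 0): 12})
p.abort_transaction()   # crash before commit: record and offset both dropped
```

Note that with the real client, downstream consumers must also set `isolation.level=read_committed` so they do not see records from aborted transactions.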
>> > > On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen
>> > > <clar...@confluent.io.invalid> wrote:
>> > >
>> > > > Pushkar, in Kafka development with a custom consumer/producer you
>> > > > handle it yourself. However, you can ensure the process stops (or
>> > > > sends the message to a dead letter topic) before manually committing
>> > > > the consumer offset. On the produce side you can turn on idempotence
>> > > > or transactions. But unless you are using Streams, you chain those
>> > > > together yourself. Would Kafka Streams work for the operation you're
>> > > > looking to do?
>> > > >
>> > > > Best,
>> > > > Chris
>> > > >
>> > > > On Fri, Jul 16, 2021 at 08:30 Pushkar Deole <pdeole2...@gmail.com>
>> > > > wrote:
>> > >
>> > >
>> > > > > Hi All,
>> > > > >
>> > > > > I am using a normal Kafka consumer-producer in my microservice, with
>> > > > > a simple model of consume from source topic -> process the record ->
>> > > > > produce on destination topic.
>> > > > >
>> > > > > I am mainly looking for an exactly-once guarantee, wherein the
>> > > > > offset commit to the consumed topic and the produce on the
>> > > > > destination topic would both happen atomically, or neither would
>> > > > > happen.
>> > > > >
>> > > > > In case of failure of a service instance, if the consumer has
>> > > > > consumed and processed a record and produced it on the destination
>> > > > > topic, but the offset has not yet been committed back to the source
>> > > > > topic, then the produce should also not happen on the destination
>> > > > > topic.
>> > > > >
>> > > > > Is this behavior, i.e. exactly-once across consumers and producers,
>> > > > > possible with transactional support in Kafka?
>> > > > > --
>> > >
>> > > > Chris Larsen
>> > > > Sr Solutions Engineer, Confluent <https://www.confluent.io>
>> > > > +1 847 274 3735
>>
>
