Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-18 Thread Lerh Chuan Low
Yep, that's the one. See here for a great example of the typical flow:
https://www.confluent.io/blog/transactions-apache-kafka/.
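
For reference, here's a minimal sketch of that consume-transform-produce loop with the consumed offsets committed inside the same transaction as the produced records. The broker address, topic names, group ids, and the `toUpperCase()` "processing" step are placeholders, and it needs a running broker plus kafka-clients 2.5+ (for the `groupMetadata()` overload of `sendOffsetsToTransaction`), so treat it as illustrative only:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class ExactlyOnceRelay {
    public static void main(String[] args) {
        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        // Setting transactional.id also turns on enable.idempotence
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "relay-tx-1");

        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "relay-group");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        // Offsets are committed through the producer transaction, never auto-committed
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(pp);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp)) {
            producer.initTransactions();
            consumer.subscribe(List.of("source-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> r : records) {
                        // "processing" stands in for the real transformation
                        producer.send(new ProducerRecord<>("destination-topic",
                                r.key(), r.value().toUpperCase()));
                        offsets.put(new TopicPartition(r.topic(), r.partition()),
                                new OffsetAndMetadata(r.offset() + 1));
                    }
                    // The offset commit rides in the same transaction as the sends
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (KafkaException e) {
                    // Neither the sends nor the offset commit become visible; a real
                    // implementation would also rewind the consumer to the last
                    // committed offsets before retrying
                    producer.abortTransaction();
                }
            }
        }
    }
}
```

If the transaction aborts or the pod crashes mid-transaction, downstream `read_committed` consumers never see the sends, and the group's offsets stay where they were, so the batch is reprocessed.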


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Hi Lerh Chuan Low,

Many thanks for your response. I get it now that it provides exactly-once
semantics, i.e., to the user it looks as if each record is processed exactly
once. I am also clear on the read_committed isolation level: an uncommitted
transaction, and hence an uncommitted send, won't be visible to consumers.

One last query: how do I make sure that, as part of the same transaction, I
am both sending and committing offsets? Which API should I look at — is
KafkaProducer.sendOffsetsToTransaction the correct one?


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Lerh Chuan Low
Pushkar,

My understanding is you can easily turn it on by using Kafka Streams, as
Chris mentioned. Otherwise you'd have to do it yourself. I don't think you
can get exactly-once processing, but what you can do (which is also what
Kafka Streams does) is exactly-once semantics: not every message is
physically processed exactly once in the system, but they look as if they
had been processed exactly once. The final piece of the puzzle, besides
using idempotent producers and transactions, is to set the consumers of the
downstream topic to *isolation.level=read_committed*. So in your example
the messages would still have made it to the destination topic, but because
the transaction has not yet been committed, the downstream consumer would
ignore them.

That said, you can still only get exactly-once processing up to the
boundaries of Kafka. Wherever Kafka terminates, you'd have to code it
yourself.
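
A sketch of the downstream consumer configuration (the property is `isolation.level` and its default is `read_uncommitted`; broker address and group id are placeholders):

```java
import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
props.put("group.id", "downstream-consumer");     // placeholder
// Only deliver records from committed transactions; records from open or
// aborted transactions are filtered out by the consumer.
props.put("isolation.level", "read_committed");
```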




Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
Another acceptable solution is to keep at-least-once semantics and make the
processing idempotent: when you re-read a message, check "did I process this
already?", or do an upsert.
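
A toy illustration of that idea (the message id, payload, and in-memory store are stand-ins; in a real service the seen-set or upsert would live in the same external store as the results, updated together):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// At-least-once delivery plus a "did I process it already?" check turns
// duplicate redeliveries into no-ops.
public class IdempotentProcessor {
    private final Set<String> seen = new HashSet<>();
    private final Map<String, String> results = new HashMap<>();
    public int applied = 0;

    // messageId is a stable, unique id carried with each record
    public void process(String messageId, String payload) {
        if (!seen.add(messageId)) {
            return; // already processed: duplicate delivery, skip it
        }
        results.put(messageId, payload.toUpperCase()); // upsert-style write
        applied++;
    }

    public static void main(String[] args) {
        IdempotentProcessor p = new IdempotentProcessor();
        p.process("m-1", "hello");
        p.process("m-2", "world");
        p.process("m-1", "hello"); // redelivered after a crash: ignored
        System.out.println(p.applied); // prints 2
    }
}
```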


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
You need to make processing and saving the partition/offset pair a single
atomic action, and on rebalance, assignment, or initial start you read the
offsets back from the outside store. There are documentation and examples
of this on the internet. What type of processing are you doing?
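
A sketch of that pattern, assuming a hypothetical `OffsetStore` backed by the same database the processing results are written to (it is not part of the Kafka API), so results and offsets can be saved in one DB transaction:

```java
import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

class ExternalOffsetRelay {
    // Hypothetical external store for offsets, e.g. one DB row per partition
    interface OffsetStore {
        long nextOffset(TopicPartition tp);            // read on (re)assignment
        void save(TopicPartition tp, long nextOffset); // write with the results
    }

    static void run(KafkaConsumer<String, String> consumer, OffsetStore offsetStore) {
        consumer.subscribe(List.of("source-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                // Nothing to flush: each record's results and offset were
                // already saved atomically as it was processed.
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                // Ignore Kafka's committed offsets; resume from the store.
                for (TopicPartition tp : parts) {
                    consumer.seek(tp, offsetStore.nextOffset(tp));
                }
            }
        });
        // Poll loop (elided): for each record, apply the side effects and call
        // offsetStore.save(partition, record.offset() + 1) in the same DB
        // transaction, so processing and offset bookkeeping succeed or fail together.
    }
}
```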


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Chris,

I am not sure how this solves the problem scenario that we are experiencing
in a customer environment. The scenario is:
1. the application consumed a record and processed it
2. the processed record was produced on the destination topic and the ack
was received
3. before the offset was committed back to the consumed topic, the
application pod crashed, or was shut down by Kubernetes or for some other
reason


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Chris Larsen
It is not possible out of the box; it is something you'll have to write
yourself. Would the following work?

Consume -> produce to primary topic -> get success ack back -> commit the
consumed offset.

Else, if the ack fails, produce to a dead-letter topic, then commit upon
success.

Else, if the dead-letter ack also fails, exit (and thus don't commit).

Does that help? Someone please feel free to slap my hand, but it seems
legit to me ;)

Chris
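
That flow could be sketched roughly like this (topic names and the `processed` value are placeholders; error handling is simplified). Note it gives at-least-once, not exactly-once: a crash between the acknowledged send and `commitSync()` still redelivers the record on restart.

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

class ProduceThenCommit {
    static void handle(KafkaConsumer<String, String> consumer,
                       KafkaProducer<String, String> producer,
                       ConsumerRecord<String, String> r,
                       String processed) {
        TopicPartition tp = new TopicPartition(r.topic(), r.partition());
        OffsetAndMetadata next = new OffsetAndMetadata(r.offset() + 1);
        try {
            // get() blocks until the broker acks, so we only commit after success
            producer.send(new ProducerRecord<>("destination-topic", r.key(), processed)).get();
            consumer.commitSync(Collections.singletonMap(tp, next));
        } catch (InterruptedException | ExecutionException primaryFailure) {
            try {
                // Primary send failed: park the raw record on the dead-letter
                // topic, and only commit if that send is acked
                producer.send(new ProducerRecord<>("dead-letter-topic", r.key(), r.value())).get();
                consumer.commitSync(Collections.singletonMap(tp, next));
            } catch (Exception dlqFailure) {
                System.exit(1); // don't commit; the record will be redelivered
            }
        }
    }
}
```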



On Fri, Jul 16, 2021 at 10:48 Pushkar Deole  wrote:

> Thanks Chris for the response!
>
> The current application is quite evolved and currently using
>
> consumer-producer model described above and we need to fix some bugs soon
>
> for a customer. So, moving to kafka streams seems bigger work. That's why
>
> looking at work around if same thing can be achieved with current model
>
> using transactions that span across consumer offset commits and producer
>
> send.
>
>
>
> We have made the producer idempotent and turned on transactions.
>
> However want to make offset commit to consumer and send from producer to be
>
> atomic? Is that possible?
>
>
>
> On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen  >
>
> wrote:
>
>
>
> > Pushkar, in kafka development for customer consumer/producer you handle
> it.
>
> > However you can ensure the process stops (or sends message to dead
> letter)
>
> > before manually committing the consumer offset. On the produce side you
> can
>
> > turn on idempotence or transactions. But unless you are using Streams,
> you
>
> > chain those together yoursef. Would kafka streams work for the operation
>
> > you’re looking to do?
>
> >
>
> > Best,
>
> > Chris
>
> >
>
> > On Fri, Jul 16, 2021 at 08:30 Pushkar Deole  wrote:
> >
> > > Hi All,
> > >
> > > I am using a normal Kafka consumer-producer in my microservice, with a
> > > simple model of consume from source topic -> process the record ->
> > > produce on destination topic.
> > >
> > > I am mainly looking for an exactly-once guarantee, wherein the offset
> > > commit to the consumed topic and the produce on the destination topic
> > > would both happen atomically, or neither would happen.
> > >
> > > In case of failure of a service instance, if the consumer has consumed
> > > and processed a record and produced on the destination topic, but the
> > > offset is not yet committed back to the source topic, then the produce
> > > should also not happen on the destination topic.
> > >
> > > Is this behavior, i.e. exactly-once across consumers and producers,
> > > possible with transactional support in Kafka?
> > >
> > > --
> > Chris Larsen
> > Sr Solutions Engineer, Confluent
> > +1 847 274 3735
>


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Thanks Chris for the response!

The current application is quite evolved and currently uses the
consumer-producer model described above, and we need to fix some bugs soon
for a customer, so moving to Kafka Streams seems like bigger work. That's
why I am looking at a workaround, to see if the same thing can be achieved
with the current model using transactions that span consumer offset
commits and producer sends.

We have made the producer idempotent and turned on transactions. However,
we want the consumer offset commit and the producer send to be atomic. Is
that possible?
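For reference, a minimal sketch of what this atomic commit-and-send loop looks like with the plain clients and the transactional producer API (`initTransactions`, `sendOffsetsToTransaction`, `commitTransaction`); the broker address, topic names, group id, and `transactional.id` below are illustrative assumptions, and a real service would also need rebalance and fencing handling:

```java
// Sketch only: needs a running Kafka cluster and the kafka-clients
// library on the classpath. Names and ids are illustrative.
import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

public class TransactionalRelay {
    public static void main(String[] args) {
        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Setting a transactional.id also enables idempotence on the producer.
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "relay-tx-1");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(pp);

        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "relay-group");
        // Offsets must go through the transaction, not auto-commit.
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
        consumer.subscribe(Collections.singletonList("source-topic"));

        producer.initTransactions();
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (records.isEmpty()) continue;
            producer.beginTransaction();
            try {
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> rec : records) {
                    producer.send(new ProducerRecord<>("destination-topic", rec.key(), rec.value()));
                    offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                                new OffsetAndMetadata(rec.offset() + 1));
                }
                // The offset commit rides in the same transaction as the sends,
                // so downstream read_committed consumers see both or neither.
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction(); // sends and offset commit both roll back
            }
        }
    }
}
```

Note this only makes the source-offset commit and the destination send atomic; any side effects outside Kafka performed during processing are not covered by the transaction.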

On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen 
wrote:

> Pushkar, in Kafka development, for a custom consumer/producer you handle
> it yourself. However, you can ensure the process stops (or sends the
> message to a dead letter) before manually committing the consumer offset.
> On the produce side you can turn on idempotence or transactions. But
> unless you are using Streams, you chain those together yourself. Would
> Kafka Streams work for the operation you're looking to do?
>
> Best,
> Chris
>
> On Fri, Jul 16, 2021 at 08:30 Pushkar Deole  wrote:
>
> > Hi All,
> >
> > I am using a normal Kafka consumer-producer in my microservice, with a
> > simple model of consume from source topic -> process the record ->
> > produce on destination topic.
> >
> > I am mainly looking for an exactly-once guarantee, wherein the offset
> > commit to the consumed topic and the produce on the destination topic
> > would both happen atomically, or neither would happen.
> >
> > In case of failure of a service instance, if the consumer has consumed
> > and processed a record and produced on the destination topic, but the
> > offset is not yet committed back to the source topic, then the produce
> > should also not happen on the destination topic.
> >
> > Is this behavior, i.e. exactly-once across consumers and producers,
> > possible with transactional support in Kafka?
> >
> > --


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Chris Larsen
Pushkar, in Kafka development, for a custom consumer/producer you handle it
yourself. However, you can ensure the process stops (or sends the message
to a dead letter) before manually committing the consumer offset. On the
produce side you can turn on idempotence or transactions. But unless you
are using Streams, you chain those together yourself. Would Kafka Streams
work for the operation you're looking to do?
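The settings mentioned here map to a handful of standard client configs; a sketch of a properties file, with illustrative values:

```properties
# Producer side: idempotence + transactions
enable.idempotence=true            # implied once transactional.id is set
transactional.id=my-app-tx-1       # illustrative; must be stable per producer instance

# Consumer side: only see data from committed transactions
isolation.level=read_committed
enable.auto.commit=false           # commit offsets through the transaction instead
```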

Best,
Chris

On Fri, Jul 16, 2021 at 08:30 Pushkar Deole  wrote:

> Hi All,
>
> I am using a normal Kafka consumer-producer in my microservice, with a
> simple model of consume from source topic -> process the record ->
> produce on destination topic.
>
> I am mainly looking for an exactly-once guarantee, wherein the offset
> commit to the consumed topic and the produce on the destination topic
> would both happen atomically, or neither would happen.
>
> In case of failure of a service instance, if the consumer has consumed
> and processed a record and produced on the destination topic, but the
> offset is not yet committed back to the source topic, then the produce
> should also not happen on the destination topic.
>
> Is this behavior, i.e. exactly-once across consumers and producers,
> possible with transactional support in Kafka?
>
> --


--
Chris Larsen
Sr Solutions Engineer, Confluent
+1 847 274 3735