Hi John,

Please find my inline responses below.

Regards and Thanks
Deepak Raghav



On Tue, Sep 1, 2020 at 8:22 PM John Roesler <vvcep...@apache.org> wrote:

> Hi Deepak,
>
> It sounds like you're saying that the exception handler is
> correctly indicating that Streams should "Continue", and
> that if you stop the app after handling an exceptional
> record but before the next commit, Streams re-processes the
> record?
> *Deepak Raghav: Your understanding is correct, but what I have seen is that
> if a valid record arrives after some exceptional records (bad messages) and
> the app is then stopped, after a restart it does not pass those bad messages
> to the handle method again.*



> If that's what you're seeing, then it's how the system is
> designed to operate. We don't commit after every record
> because the overhead would be enormous. If you can't
> tolerate seeing duplicates in your error file, then it
> sounds like the simplest thing for you would be to maintain
> an index of the records you have already saved off so that
> you can gracefully handle re-processing. E.g., you might
> have a separate file per topic-partition that you update
> after appending to your error log to indicate the highest
> offset you've handled. Then, you can read it from the
> exception handler to see if the record you're handling is
> already logged. Just an idea.
>


> *Deepak: Thanks for suggesting this approach.*
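
For reference, here is a rough, untested sketch of how I plan to try it
(Java 11+; the /tmp/error-log directory, the class name, and the file layout
are all just placeholders, nothing from the Streams API beyond the handler
interface itself):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

public class DedupingDeserializationExceptionHandler
        implements DeserializationExceptionHandler {

    // Placeholder location; in a real app this would come from configure().
    private final Path baseDir = Paths.get("/tmp/error-log");

    // Highest offset already appended to the error log, per topic-partition.
    private final Map<String, Long> highestLogged = new ConcurrentHashMap<>();

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        final String tp = record.topic() + "-" + record.partition();
        final long lastLogged = highestLogged.computeIfAbsent(tp, this::readIndex);

        // Skip records that were already logged before a restart.
        if (record.offset() > lastLogged) {
            appendToErrorLog(tp, record);
            writeIndex(tp, record.offset());
            highestLogged.put(tp, record.offset());
        }
        return DeserializationHandlerResponse.CONTINUE;
    }

    // Read the last logged offset for this partition; -1 means nothing logged yet.
    private long readIndex(final String tp) {
        try {
            return Long.parseLong(Files.readString(baseDir.resolve(tp + ".offset")).trim());
        } catch (IOException | NumberFormatException e) {
            return -1L;
        }
    }

    // Overwrite the index file with the latest logged offset.
    private void writeIndex(final String tp, final long offset) {
        try {
            Files.createDirectories(baseDir);
            Files.write(baseDir.resolve(tp + ".offset"),
                        Long.toString(offset).getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Append the bad record (offset + raw value) to the per-partition error log.
    private void appendToErrorLog(final String tp,
                                  final ConsumerRecord<byte[], byte[]> record) {
        final String line = record.offset() + "\t"
                + (record.value() == null
                        ? "" : new String(record.value(), StandardCharsets.UTF_8))
                + System.lineSeparator();
        try {
            Files.createDirectories(baseDir);
            Files.write(baseDir.resolve(tp + ".log"),
                        line.getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // no-op in this sketch; the base directory could be made configurable here
    }
}

The handler would be registered via the default.deserialization.exception.handler
config (StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG).
One caveat: since the index file is only updated after the append, a crash
between the two writes can still leave one duplicate line in the error log,
but a restart no longer re-appends every bad message.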




> I hope this helps,
> -John
>
> On Tue, 2020-09-01 at 16:36 +0530, Deepak Raghav wrote:
> > Hi Team
> >
> > Just a reminder.
> > Can you please help me with this?
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav <deepakragha...@gmail.com>
> > wrote:
> >
> > > Hi Team
> > >
> > > I have created a CustomExceptionHandler class by
> > > implementing DeserializationExceptionHandler interface to handle the
> > > exception during deserialization time.
> > >
> > > But the problem with this approach is that if an exception is raised
> > > for some record, and the stream is then stopped and restarted, it
> > > reads those bad messages again.
> > >
> > > I am storing those bad messages in a file on the filesystem, and with
> > > this approach duplicate messages get appended to the file when the
> > > stream is restarted, since the offsets of those bad messages are not
> > > committed.
> > >
> > > Please let me know if I missed anything.
> > >
> > > Regards and Thanks
> > > Deepak Raghav
> > >
> > >
>
>
