Hi Pushkar,

Is the error you are talking about thrown by Kafka Streams or by your application? If it is thrown by Kafka Streams, could you please post the error message?

I do not completely understand what you are trying to achieve, but max.task.idle.ms [1] might be the configuration you are looking for.
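For illustration, a minimal sketch of how you could set it (the application id, broker address, and the 30-second value are just placeholders, not recommendations):

  import java.util.Properties;
  import org.apache.kafka.streams.StreamsConfig;

  public class IdleConfigSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          // Placeholder application id and bootstrap servers.
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          // Let a task idle for up to 30 seconds waiting for data on all
          // of its input partitions before it starts processing.
          props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 30000L);
      }
  }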

I can assure you that enable.auto.commit is false in Kafka Streams. What you probably mean is that Kafka Streams periodically commits the offsets. The commit interval can be controlled with commit.interval.ms [2].
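For example (again just a sketch, and 100 ms is an arbitrary value, not a recommendation), you would add to the same properties:

  // Commit processed offsets every 100 ms instead of the default 30 s.
  props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100L);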


Best,
Bruno


[1] https://kafka.apache.org/documentation/#max.task.idle.ms
[2] https://kafka.apache.org/documentation/#commit.interval.ms

On 21.09.20 12:38, Pushkar Deole wrote:
Hi,

I would like to know how to handle the following scenario while
processing events in a Kafka Streams application:

1. The streams application needs data from a GlobalKTable, which loads
it from a topic that is populated by some other service/application.
Suppose the streams application starts getting events from the input
source topic but does not find the required data in the GlobalKTable,
because that other service/application has not yet loaded it. The Kafka
Streams application then gets an error while processing the event,
handles the exception by logging an error, and moves on to processing
other events. Since auto.commit is true, polling will keep fetching the
next batch and will probably commit the offsets of the previous batch,
causing loss of the events that hit an exception during processing.

I want to halt the processing here if an error occurs while processing
an event, so that instead of moving on to the next event, the
application keeps retrying the previous event until the
application-level error is resolved. How can I achieve this?
