TimeoutExceptions during commitSync and committed

2020-09-22 Thread Sergio Soria
Hi, we have a Consumer that occasionally catches a TimeoutException when trying to commit an offset after polling. Since it's a RetriableException, the Consumer tries to roll back and read from the last committed offset. However, when trying to fetch the last committed offset with committed(),
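A minimal, self-contained sketch of the bounded-retry pattern this thread discusses (plain Java, no Kafka dependency). Here `runWithRetry` stands in for retrying `KafkaConsumer.commitSync()`, which can throw `org.apache.kafka.common.errors.TimeoutException` (a `RetriableException`); on exhaustion, a real consumer would fall back to `committed()` and `seek()`. The class and method names are illustrative, not Kafka's API.

```java
// Sketch: bounded retry for a retriable operation such as commitSync().
public class RetriableCommit {
    /** Runs op up to maxAttempts times; returns the attempt count on success. */
    public static int runWithRetry(Runnable op, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                op.run();                      // real code: consumer.commitSync()
                return attempt;
            } catch (RuntimeException e) {     // stand-in for TimeoutException
                last = e;                      // retriable: loop and try again
            }
        }
        // Retries exhausted: in a real consumer, fetch committed() offsets
        // and seek() back to them before rethrowing or alerting.
        throw last;
    }
}
```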

Consumer TimeoutException

2020-09-22 Thread Navneeth Krishnan
Hi all, I'm frequently getting the below error in one of the application consumers. From the error, what I can infer is that the offset commit failed due to a timeout after 30 seconds. One suggestion was to increase the timeout, but I think that will just extend the time period. What should be the good way
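For context, the commit timeout the poster mentions is typically bounded by consumer client settings like these. This is a hedged example of a consumer properties fragment; the values shown are the common defaults in recent clients and are version-dependent:

```properties
# Upper bound for blocking client calls such as commitSync() and committed().
default.api.timeout.ms=60000
# Per-request network timeout; 30s is the usual default, which matches the
# "timeout after 30 seconds" in the error described above.
request.timeout.ms=30000
```

Raising these only buys time, as the poster suspects; if commits time out persistently, the root cause (broker load, network, rebalances) needs addressing.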

How to create dynamic listeners by reading configuration details in plugin.conf file

2020-09-22 Thread srinivasa bs
The set of topics to listen to should be configured in a plugin.conf file; instead of a single ConsumerListener, there should be multiple such listeners, all dynamically started based on the configuration in the plugin.conf file
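A minimal sketch of the configuration-driven part, assuming a plugin.conf line of the form `topics=orders,payments` (the file name, key, and format are assumptions, since the thread doesn't specify them). Each returned topic would then get its own dynamically started listener:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: derive one listener per topic from a plugin.conf-style line.
// The "topics=" key is a hypothetical format; adapt to your actual file.
public class ListenerConfig {
    public static List<String> topicsFrom(String confLine) {
        List<String> topics = new ArrayList<>();
        if (confLine.startsWith("topics=")) {
            for (String t : confLine.substring("topics=".length()).split(",")) {
                if (!t.isBlank()) topics.add(t.trim());
            }
        }
        return topics;
    }
}
```

In a full implementation, the caller would iterate over the returned list and start one consumer thread (or Spring `@KafkaListener` container, if that framework is in use) per topic.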

Re: Trigger topic compaction before uploading to S3

2020-09-22 Thread Ricardo Ferreira
These properties can't be triggered programmatically. Kafka uses an internal thread pool, the "Log Cleaner" threads, that asynchronously deletes old segments ("delete") and removes superseded records ("compact"). Whatever the S3 connector picks up is already compacted and/or deleted.
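As a hedged illustration of Ricardo's point: the cleanup policy is set per topic, e.g. via the stock `kafka-configs.sh` tool (the topic name and bootstrap address below are placeholders), and there is no API call to force a compaction pass on demand:

```shell
# Set the cleanup policy on a topic (placeholder names).
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-table-topic \
  --alter --add-config cleanup.policy=compact,delete
```

Topic settings such as `min.cleanable.dirty.ratio` and `segment.ms` can make the cleaner run sooner and more aggressively, but compaction still happens asynchronously, not at a moment the connector can choose.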

Re: Not able to connect to bootstrap server when one broker down

2020-09-22 Thread Prateek Rajput
Hi Manoj, thanks, but we caught the issue: it was most probably happening because the wrong jar was being picked up from HDFS and set on the Oozie classpath at runtime. In our code, kafka-client is on 2.3, but while running the MR job the 0.8.2.0 jar was being picked up. We caught it after seeing the prod

Trigger topic compaction before uploading to S3

2020-09-22 Thread Daniel Kraus
Hi, I have a KStreams app that outputs a KTable to a topic with cleanup policy "compact,delete". I use the Confluent S3 Connector to store this table in S3, where I do further analysis with Hive. Now my question is whether there's a way to trigger log compaction right before the S3 Connector reads t

Re: Kafka streams - how to handle application level exception in event processing

2020-09-22 Thread Pushkar Deole
Thank you, Gilles... will take a look. Bruno, thanks for your elaborate explanation as well... however, it basically exposes my application to certain issues. E.g., the application deals with agent states of a call center, where the order of processing is important. So when an agent is logged in, th

Re: Kafka streams - how to handle application level exception in event processing

2020-09-22 Thread Gilles Philippart
Hi Pushkar, Uber has written about how they deal with failures and reprocessing here, it might help you achieve what you describe: https://eng.uber.com/reliable-reprocessing/. Unfortunately, there isn't much written documentation about those patterns. There's also a good talk from Confluent's Ant

Re: Two MirrorMakers 2 for two DCs

2020-09-22 Thread Oleg Osipov
Yes, I use connect-mirror-maker.sh. On 2020/09/21 22:12:13, Ryanne Dolan wrote: > Oleg, yes you can run multiple MM2s for multiple DCs, and generally that's > what you want to do. Are you using Connect to run MM2, or the > connect-mirror-maker.sh driver? > > Ryanne > > On Mon, Sep 21, 2020,

Re: Kafka streams - how to handle application level exception in event processing

2020-09-22 Thread Bruno Cadonna
Hi Pushkar, I think there is a misunderstanding. If a consumer polls from a partition, it will always poll the next event, independently of whether the offset was committed or not. Committed offsets are used for fault tolerance, i.e., when a consumer crashes, the consumer that takes over the work
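Bruno's distinction can be captured in a tiny toy model (plain Java, not Kafka's API): the consumer's position advances on every poll regardless of commits, while the committed offset only determines where a consumer resumes after a crash or takeover.

```java
// Toy model of poll position vs. committed offset. Not Kafka's API; it only
// illustrates the semantics described in the message above.
public class OffsetModel {
    long position = 0;   // next offset poll() will return
    long committed = 0;  // last committed offset

    long poll()    { return position++; }     // always returns the next event
    void commit()  { committed = position; }  // models commitSync()
    void restart() { position = committed; }  // takeover resumes from committed
}
```

Running poll, poll, commit, poll and then a restart shows the position falling back to the committed offset, not to where the last poll left off.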