Hi,
We have a consumer that occasionally catches a TimeoutException when
trying to commit an offset after polling. Since it's a
RetriableException, the consumer tries to roll back and read from the
last committed offset. However, when trying to fetch the last committed
offset with committed(),
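A minimal sketch of that recovery pattern, assuming a plain KafkaConsumer (the class and method names below are made up for illustration; this needs a running broker, so treat it as a sketch rather than a tested implementation):

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

// Sketch: on a commit timeout, rewind each assigned partition to its last
// committed offset so the next poll() re-reads anything that may not have
// been committed (at-least-once semantics).
public class CommitOrRewind {
    static void commitOrRewind(KafkaConsumer<String, String> consumer) {
        try {
            consumer.commitSync();
        } catch (TimeoutException e) {
            // TimeoutException is a RetriableException. Fetch the last
            // committed offsets for the current assignment and seek back.
            // Note: committed() can itself throw the same TimeoutException,
            // which appears to be where the original problem surfaces.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(consumer.assignment());
            for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : committed.entrySet()) {
                if (entry.getValue() != null) {
                    consumer.seek(entry.getKey(), entry.getValue().offset());
                }
            }
        }
    }
}
```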
Hi All,
I'm frequently getting the below error in one of the application consumers.
From the error, what I can infer is that the offset commit failed due to a
timeout after 30 seconds. One suggestion was to increase the timeout, but I
think that will just extend the time period. What should be the good way
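For reference, the consumer settings that bound a blocking commit look roughly like this (the values shown are, to the best of my knowledge, the defaults on recent client versions; the 30-second timeout in the error likely maps to one of these):

```properties
request.timeout.ms=30000       # per-request timeout to the broker
default.api.timeout.ms=60000   # overall bound for blocking calls like commitSync()
retry.backoff.ms=100           # wait between retries of retriable errors
```

Raising these only buys time; if commits regularly take tens of seconds, the underlying cause (broker load, network, frequent rebalances) is worth investigating first.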
The set of topics to listen to should be configured in a plugin.conf file;
instead of a single ConsumerListener, there should be multiple such listeners,
all dynamically started up based on the configuration in the plugin.conf file
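A minimal sketch of that idea, assuming a hypothetical plugin.conf format with one `topics=` line per listener (the file format and class name here are made up; each resulting list would seed one dynamically started listener):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: turn "topics=a,b,c" lines from plugin.conf into per-listener
// topic lists; consumer subscription/startup code is omitted.
public class PluginConf {
    public static List<List<String>> listenerTopics(List<String> confLines) {
        List<List<String>> listeners = new ArrayList<>();
        for (String line : confLines) {
            String trimmed = line.trim();
            if (trimmed.startsWith("topics=")) {
                List<String> topics = new ArrayList<>();
                for (String t : trimmed.substring("topics=".length()).split(",")) {
                    if (!t.trim().isEmpty()) topics.add(t.trim());
                }
                listeners.add(topics);
            }
        }
        return listeners;
    }
}
```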
These properties can't be triggered programmatically. Kafka uses an
internal thread pool called the "Log Cleaner Thread" that asynchronously
deletes old segments ("delete") and removes repeated records
("compact").
Whatever the S3 connector picks up is already compacted and/or deleted.
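As far as I know there is no API to force a compaction pass; you can only make the cleaner more aggressive via topic-level configs, for example (illustrative values):

```properties
cleanup.policy=compact,delete
segment.ms=600000              # roll active segments sooner; only rolled segments are cleanable
min.cleanable.dirty.ratio=0.1  # let the cleaner run with less accumulated "dirty" data
```

The active segment is never compacted, so lowering segment.ms is usually the lever that makes records eligible for cleaning sooner.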
Hi Manoj,
Thanks, but we caught the issue: it was most probably happening because the
wrong jar was being picked up from HDFS and set on the Oozie classpath at
runtime. In our code, kafka-clients is on 2.3, but while running the MR job
the 0.8.2.0 jar was being picked up. We caught it after seeing the
prod
Hi,
I have a KStreams app that outputs a KTable
to a topic with cleanup policy "compact,delete".
I have the Confluent S3 Connector to store this
table in S3 where I do further analysis with hive.
Now my question is whether there's a way
to trigger log compaction right before the S3 Connector
reads t
Thank you Gilles..will take a look..
Bruno, thanks for your elaborate explanation as well... however it
basically exposes my application to certain issues..
e.g. the application deals with agent states of a call center, and where
the order of processing is important. So when agent is logged in th
Hi Pushkar,
Uber has written about how they deal with failures and reprocessing here,
it might help you achieve what you describe:
https://eng.uber.com/reliable-reprocessing/.
Unfortunately, there isn't much written documentation about those patterns.
There's also a good talk from Confluent's Ant
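For context, the pattern in that Uber post is roughly: on a processing failure, publish the record to a retry (or dead-letter) topic instead of blocking the main consumer, and drain the retry topic separately. A minimal sketch, with made-up topic and class names (needs a running broker, so untested):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch: route records that fail processing to a retry topic so the
// main consumer keeps making progress.
public class RetryRouter {
    static void process(ConsumerRecord<String, String> rec,
                        KafkaProducer<String, String> producer) {
        try {
            handle(rec);
        } catch (Exception e) {
            // "orders.retry" is a hypothetical retry topic name.
            producer.send(new ProducerRecord<>("orders.retry", rec.key(), rec.value()));
        }
    }

    static void handle(ConsumerRecord<String, String> rec) throws Exception {
        // application-specific processing (placeholder)
    }
}
```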
Yes, I use connect-mirror-maker.sh.
On 2020/09/21 22:12:13, Ryanne Dolan wrote:
> Oleg, yes you can run multiple MM2s for multiple DCs, and generally that's
> what you want to do. Are you using Connect to run MM2, or the
> connect-mirror-maker.sh driver?
>
> Ryanne
>
> On Mon, Sep 21, 2020,
Hi Pushkar,
I think there is a misunderstanding. If a consumer polls from a
partition, it will always poll the next event independently whether the
offset was committed or not. Committed offsets are used for fault
tolerance, i.e., when a consumer crashes, the consumer that takes over
the work
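In other words, committed offsets only come into play when a consumer (re)starts or takes over a partition after a rebalance; a running consumer's poll() always continues from its in-memory position. The consumer settings most relevant to that fault-tolerance behavior are, roughly:

```properties
enable.auto.commit=false     # commit manually after processing for at-least-once
auto.offset.reset=earliest   # used only when no committed offset exists for the group
```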