Hi,
I'm working on a custom sink connector for the Kafka Connect framework.
I'm testing the connector for fault tolerance by killing the worker
process and restarting the connector through the REST API, and
occasionally I notice that some tasks no longer receive messages from
the internal consumers. I don't see any errors in the log and the tasks
seem to be initialised correctly, but some of them just stop processing
messages. Usually, when I restart the connector again, the tasks catch
up on all the messages they had skipped. I'm running Kafka Connect in
distributed mode.
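For reference, this is roughly how I'm restarting it via the worker's REST API (the connector name and host are just examples from my setup):

```shell
# Restart the connector (and optionally its tasks) on a distributed worker.
# "my-sink-connector" and localhost:8083 are placeholders for my deployment.
curl -X POST http://localhost:8083/connectors/my-sink-connector/restart

# Check the connector and task states afterwards
curl http://localhost:8083/connectors/my-sink-connector/status
```

After the restart, the status endpoint reports all tasks as RUNNING, which is what makes the silent stall so confusing.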

Could this be a problem with the cleanup function invoked when the
connector is closed, causing a leak of consumer connections to the
broker? Any ideas?

Also, from the documentation I read that the connector saves the task
offsets in a special Kafka topic (the one specified via
offset.storage.topic), but that topic is empty even though the connector
is processing messages. Is that normal?
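In case it matters, my worker config uses the standard distributed-mode storage settings (topic names here are the common defaults, not necessarily anything special in my setup):

```shell
# Excerpt from connect-distributed.properties
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
```

I checked the offsets topic with the console consumer (again, topic name as configured above) and it returns nothing:

```shell
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic connect-offsets --from-beginning
```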

Thanks,
Matteo
