Hi Team

Recently, I saw strange behavior in Kafka Connect. We have a source
connector with a single task only, which reads from an S3 bucket and
copies the data to a Kafka topic. We have two worker nodes in the
cluster, so at any point in time the task should be assigned to only
one worker node.
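
For reference, the connector is registered roughly like this (a minimal
sketch; the connector class, bucket, and topic properties below are
placeholders for our actual setup, but tasks.max=1 is the relevant
setting):

  # Hypothetical registration call; class and property names are placeholders.
  # The key point is tasks.max=1, so only one task should ever be created.
  curl -s -X PUT http://localhost:8083/connectors/s3-source/config \
    -H "Content-Type: application/json" \
    -d '{
          "connector.class": "com.example.connect.S3SourceConnector",
          "tasks.max": "1",
          "s3.bucket.name": "my-source-bucket",
          "kafka.topic": "my-target-topic"
        }'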

I saw in the logs that both worker nodes were reading/processing data
from the S3 bucket, which should be impossible since we have configured
the connector to create only a single task to read the data.

Is there any scenario, e.g. the worker process restarting multiple
times or the connector being registered/de-registered repeatedly, in
which the task can end up assigned to both worker nodes at the same
time?
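
If it happens again, one way I plan to check which worker the cluster
believes owns the task is the Connect REST status endpoint (the
connector name here is again a placeholder):

  # Ask either worker for the current assignment of the connector's task(s).
  curl -s http://localhost:8083/connectors/s3-source/status

In a healthy cluster, the single task in the response should report
exactly one worker_id.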

Note: I have seen this only once; it has never been reproduced since.

Thanks and regards,
Deepak Raghav
