In case anyone else runs into this issue:
Turning on TRACE level logs revealed that the config topic we were using
had been auto-created with 12 partitions. As stated in the Kafka Connect User
Guide (http://docs.confluent.io/3.1.2/connect/userguide.html), the internal
topic used to store configs (config.storage.topic) must be a single-partition,
compacted topic, so the 12-partition auto-created topic was the source of our
problem.
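To avoid the auto-creation problem entirely, we now create the config topic by
hand before starting any workers, roughly like this (topic name, ZooKeeper
address and replication factor are placeholders for our setup; the tool is
called kafka-topics.sh in the plain Apache distribution):

    kafka-topics --create --zookeeper localhost:2181 \
      --topic connect-configs \
      --partitions 1 \
      --replication-factor 3 \
      --config cleanup.policy=compact

and point config.storage.topic in the worker properties at that topic.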
Aside from the logs you already have, the best suggestion I have is to
enable trace level logging and try to reproduce -- there are some trace
level logs in the KafkaBasedLog class that this uses, which might reveal
something. But it could be an issue in the consumer as well -- it sounds
like it is
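If you do turn trace logging on, the relevant loggers would be something like
the following in the worker's log4j properties (the exact file name and
location depend on how you installed Connect):

    # KafkaBasedLog backs the internal config/offset/status topics
    log4j.logger.org.apache.kafka.connect.util.KafkaBasedLog=TRACE
    # the distributed herder / rebalance logic is also worth bumping up
    log4j.logger.org.apache.kafka.connect.runtime.distributed=DEBUG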
I'm also running into this issue whenever I try to scale up from 1 worker
to multiple. I found that I can sometimes hack around this by
(1) waiting for the second worker to come up and start spewing out these
log messages and then
(2) sending a request to the REST API to update one of my connectors
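For (2), I just re-submit the connector's existing config through the REST
API, something like this (the connector name, port and JSON file are from my
setup):

    curl -X PUT -H "Content-Type: application/json" \
      --data @mongo-sink-config.json \
      http://localhost:8083/connectors/mongo-sink/config

That PUT seems to be enough to nudge the group into completing the rebalance.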
The message
> Wasn't unable to resume work after last rebalance
means that your previous iterations of the rebalance were somehow behind/out
of sync with other members of the group, i.e. they had not read up to the
same point in the config topic so it wouldn't be safe for this worker (or
possibly
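The first thing I'd double-check is that every worker in the group uses
exactly the same group and internal-topic settings in its worker properties --
the names below are only examples, but the values have to match across all
workers:

    group.id=connect-cluster
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status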
Hi people,
I've just deployed my Kafka Streams / Connect application (I only use a
Connect sink to mongodb) on a cluster of four instances (4 containers on 2
machines), and now it seems to be stuck in a sort of rebalancing loop. I don't
get much in mongodb -- just a little bit of data at the beginning