Hi Williams,
Thank you for the quick reply.
I am new to Kafka and stream processing, and wanted to implement the solution through Docker. It is expected to contain Zookeeper, Kafka, Kafka Connect, and Cassandra.
I tried the example from "Getting started with the Kafka Connect Cassandra Source" but am getting the error below while starting Kafka, and am unable to proceed.
Thanks and regards
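For reference, a minimal Docker Compose sketch for such a stack might look like the following. The image names, versions, and settings here are assumptions based on the public Confluent Platform and Cassandra images, not taken from the thread; versions would need to match your internal Kafka version.

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:4.1.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  connect:
    image: confluentinc/cp-kafka-connect:4.1.0
    depends_on: [kafka]
    ports: ["8083:8083"]
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: connect-cluster
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
  cassandra:
    image: cassandra:3.11
    ports: ["9042:9042"]
```

The Cassandra source connector plugin itself is not bundled in the Connect image, so its jars would still need to be mounted onto the Connect container's plugin path.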
Hi,
Found the reason: the latest schema-registry module version isn't compatible with our internal Kafka version 0.11, hence the error message above.
Investigating whether the previous version, 4.1.0-SNAPSHOT, would work with our internal version, as per this change -
What's in Docker? Kafka? Kafka Connect? Did you try setting it up outside of a container first?
-B
On Thu, May 3, 2018 at 2:22 PM, Jagannath Bilgi
wrote:
> Hi Team,
> Trying to load data from Cassandra to Kafka using kafka-connect. Tried
> results from Google search.
Hi Team,
I am trying to load data from Cassandra to Kafka using Kafka Connect. I tried the results from a Google search, but was unable to complete the setup successfully.
Could you please help me resolve this?
Note: I am trying to deploy using Docker.
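For what it's worth, the Cassandra source connector from the Stream Reactor project is typically configured with JSON along these lines and submitted to the Connect REST API (POST to /connectors). The keyspace, table, topic, and KCQL statement below are illustrative placeholders, not values from this thread:

```json
{
  "name": "cassandra-source",
  "config": {
    "connector.class": "com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector",
    "connect.cassandra.contact.points": "cassandra",
    "connect.cassandra.port": "9042",
    "connect.cassandra.key.space": "demo",
    "connect.cassandra.kcql": "INSERT INTO demo-topic SELECT * FROM orders PK created INCREMENTALMODE=TIMESTAMP"
  }
}
```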
Thanks and regards
Jagannath S Bilgi
Hi,
I'm trying to build kafka-connect-hdfs separately by following this FAQ -
https://github.com/confluentinc/kafka-connect-hdfs/wiki/FAQ
While compiling schema-registry, I get the following error:
[INFO] -
[ERROR] COMPILATION
Kafka Streams itself is backward compatible with 0.10.2.1 brokers.
However, the embedded cluster you are using is not part of public API
and the 0.10.2.1 embedded cluster might have a different API than the
1.1 embedded cluster. Thus, you would need to rewrite your tests.
-Matthias
On 4/21/18
Hi Johnathan.
Yes, I decreased the retention on all topics simultaneously. I realized my mistake later when I saw the cluster overloaded :)
I wasn't 100% sure so I looked it up, but it looks to me like log.cleaner.threads and log.cleaner.io.max.bytes.per.second only apply when a topic is using log compaction (cleanup.policy=compact), not time-based retention.
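For anyone following along, those two settings are broker-side properties in server.properties. The values below are a sketch showing one thread with an explicit I/O throttle; by default the throttle is effectively unbounded:

```properties
# Number of background threads used by the log cleaner
# (log compaction only; does not affect time-based retention deletes)
log.cleaner.threads=1
# Cap the cleaner's combined read+write rate, in bytes/second
# (default is Double.MAX_VALUE, i.e. unthrottled)
log.cleaner.io.max.bytes.per.second=10485760
```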
Howdy Vincent.
Sounds like a painful situation! I have experienced similar drama with Kafka, so maybe I can offer some advice.
You said you decreased the retention time on 4 topics. I wonder, was this done on all 4 topics at the same time?
Depending on broker and partition config, that can trigger a large amount of log deletion at once.
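One way to soften that (a sketch assuming the stock CLI tools of that era; the ZooKeeper address and topic name are placeholders) is to lower retention one topic at a time via a topic-level override, letting the brokers finish deleting segments before touching the next topic:

```bash
# Lower retention to 1 day for a single topic, then wait for
# segment deletion to settle before repeating for the next topic.
kafka-configs.sh --zookeeper zk:2181 --alter \
  --entity-type topics --entity-name topic-1 \
  --add-config retention.ms=86400000
```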