Hi,

I'm new to Kafka (a real newbie) and I'm trying to set up this connector on my
local machine, which runs macOS Mojave 10.14.6.

I've downloaded the connector and put its contents in the folder
/usr/local/share/kafka/plugins
and updated plugin.path in the file
/usr/local/etc/kafka/connect-standalone.properties
so that it points to /usr/local/share/kafka/plugins.
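
For reference, the plugin.path line in my connect-standalone.properties now reads (the rest of the file I left as it came with the Homebrew Kafka install):

plugin.path=/usr/local/share/kafka/plugins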

I'm launching Kafka Connect in standalone mode like this:
/usr/local/Cellar/kafka/2.3.1/bin/connect-standalone
/usr/local/etc/kafka/connect-standalone.properties
/Users/miguel.silvestre/meetups-to-s3.json
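
In case it helps, this is roughly what my meetups-to-s3.json contains. I've reconstructed it here from the config keys echoed in the error below, so it may not match the file byte for byte:

{
  "name": "meetups-to-s3",
  "tasks": [],
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "s3.region": "eu-west-1",
    "s3.bucket.name": "test-connector",
    "s3.part.size": "5242880",
    "flush.size": "100000",
    "rotate.interval.ms": "60000",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "partition.duration.ms": "3600000",
    "path.format": "'date'=YYYY-MM-dd/'hour'=HH",
    "locale": "en",
    "timezone": "UTC",
    "timestamp.extractor": "Record",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "schema.compatibility": "NONE"
  }
}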

However, I always get the error below.
Any idea what I'm doing wrong?

Thank you
Miguel Silvestre

PS: I need a sink connector that reads JSON from Kafka topics and writes
Parquet files to S3. I need to read several topics, with the files going to
the same bucket under different paths. Do you know of anything that can do
this? It seems that Secor has build issues right now.



[2019-11-12 16:24:19,322] INFO Kafka Connect started
(org.apache.kafka.connect.runtime.Connect:56)
[2019-11-12 16:24:19,325] ERROR Failed to create job for
/Users/miguel.silvestre/meetups-to-s3.json
(org.apache.kafka.connect.cli.ConnectStandalone:110)
[2019-11-12 16:24:19,326] ERROR Stopping after connector error
(org.apache.kafka.connect.cli.ConnectStandalone:121)
java.util.concurrent.ExecutionException:
org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector
config {"s3.part.size"="5242880",, "partition.duration.ms"="3600000",, "
s3.bucket.name"="test-connector",,
"value.converter.schemas.enable"="false",, "timezone"="UTC",, },=,
"partitioner.class"="io.confluent.connect.storage.partitioner.TimeBasedPartitioner",,
"path.format"="'date'=YYYY-MM-dd/'hour'=HH",, "rotate.interval.ms"="60000",,
"name"="meetups-to-s3",, "flush.size"="100000",,
"key.converter.schemas.enable"="false",,
"value.converter"="org.apache.kafka.connect.json.JsonConverter",,
"topics"="test",, "tasks"=[], "config"={,
"connector.class"="io.confluent.connect.s3.S3SinkConnector",,
"format.class"="io.confluent.connect.s3.format.json.JsonFormat",,
"tasks.max"="1",, "s3.region"="eu-west-1",,
"key.converter"="org.apache.kafka.connect.json.JsonConverter",,
"timestamp.extractor"="Record", "locale"="en",,
"schema.compatibility"="NONE",, {=,
"storage.class"="io.confluent.connect.s3.storage.S3Storage",, }=} contains
no connector type
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:118)
Caused by:
org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector
config {"s3.part.size"="5242880",, "partition.duration.ms"="3600000",, "
s3.bucket.name"="test-connector",,
"value.converter.schemas.enable"="false",, "timezone"="UTC",, },=,
"partitioner.class"="io.confluent.connect.storage.partitioner.TimeBasedPartitioner",,
"path.format"="'date'=YYYY-MM-dd/'hour'=HH",, "rotate.interval.ms"="60000",,
"name"="meetups-to-s3",, "flush.size"="100000",,
"key.converter.schemas.enable"="false",,
"value.converter"="org.apache.kafka.connect.json.JsonConverter",,
"topics"="test",, "tasks"=[], "config"={,
"connector.class"="io.confluent.connect.s3.S3SinkConnector",,
"format.class"="io.confluent.connect.s3.format.json.JsonFormat",,
"tasks.max"="1",, "s3.region"="eu-west-1",,
"key.converter"="org.apache.kafka.connect.json.JsonConverter",,
"timestamp.extractor"="Record", "locale"="en",,
"schema.compatibility"="NONE",, {=,
"storage.class"="io.confluent.connect.s3.storage.S3Storage",, }=} contains
no connector type
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:287)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:192)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:115)
[2019-11-12 16:24:19,329] INFO Kafka Connect stopping
(org.apache.kafka.connect.runtime.Connect:66)
--
Miguel Silvestre
