Re,
Thank you very much for your help! I was stuck on this for several days. It
worked after adding the following to the ClickHouse connector creation HTTP request:
* "consumer.override.ssl.truststore.location":
"/opt/kafka/kafka.ca.pem",
"consumer.override.ssl.truststore.type": "PEM",
"consumer.over
Hi Domantas,
For sink connectors, you'll need to add all SSL-related properties either
to your Connect worker file prefixed with "consumer.", or to your
individual connector files prefixed with "consumer.override.".
If you're using the DLQ feature, you'll also need to do the same but with
"admin.
Hi,
I have an issue connecting to Kafka with SSL. I have tried a lot of options
and am stuck; maybe someone can suggest what is wrong here. On the
ClickHouse side everything is fine:
[root@server.tld01 config]# cat connect-distributed.properties | grep -v ^# | grep -v '^$'
group.id=connect-cluster
key.convert
My employer uses Kafka for communication between some systems, and I need to
integrate my system too.
I'm new to this subject, so I'll cut to the chase: Are there any Kafka
connectors to use in Opentext Documentum xCP 2.2?
Kind regards,
Elber
Hey there,
I'm wondering if anyone has needed to use an embedded Kafka connector
module. The goal is to avoid making customers maintain
a separate component when they stream data from their Kafka cluster to our
service, so that they just need to provide the cl
. \
outputMode('append'). \
option("truncate", "false"). \
foreachBatch(SendToBigQuery). \
This will use the Spark-BigQuery API, which is pretty efficient.
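For context, a fuller sketch of the foreachBatch pattern the snippet above
comes from (SendToBigQuery's body, the topic, table, and bucket names are
illustrative assumptions; assumes the spark-bigquery connector is on the
classpath):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-to-bq").getOrCreate()

    def SendToBigQuery(df, batch_id):
        # Write each micro-batch to BigQuery via the Spark-BigQuery API.
        df.write.format("bigquery") \
            .option("table", "my_dataset.my_table") \
            .option("temporaryGcsBucket", "my-staging-bucket") \
            .mode("append") \
            .save()

    df = spark.readStream.format("kafka") \
        .option("kafka.bootstrap.servers", "broker:9092") \
        .option("subscribe", "my-topic") \
        .load()

    df.writeStream. \
        outputMode('append'). \
        option("truncate", "false"). \
        foreachBatch(SendToBigQuery). \
        start(). \
        awaitTermination()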
I was looking at the Kafka connector for BigQuery and it appears that some
d
ually get message the other task (the one that fails)
doesn't acknowledge..
-Dave
On 5/13/20, 10:42 PM, "wangl...@geekplus.com.cn"
wrote:
I want to know how a Kafka connector in distributed mode balances its tasks.
For example, I have two connector instanc
I want to know how a Kafka connector in distributed mode balances its tasks.
For example, I have two connector instances: 192.168.10.2:8083 and 192.168.10.3:8083.
If one is killed, can the tasks be transferred to the other automatically,
without any data loss?
When I use the REST API curl "192.168.10.x
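(The truncated command is presumably against the Connect REST interface;
for reference, the usual calls to see where tasks are running, with the
connector name as an illustrative assumption:)

    # list connectors registered with the cluster
    curl "http://192.168.10.2:8083/connectors"
    # show the worker each task of a connector is assigned to
    curl "http://192.168.10.2:8083/connectors/my-connector/status"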
Dear teams,
We use the Kafka 1.1.0 connector to load data, and our application starts
connectors through the REST API.
Before we start a connector, we check that it does not already exist and
then create it. This is encapsulated in a Quartz job, and each connector
has its own job.
We use Spring RestTemplate as below
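(The RestTemplate snippet was cut off; a minimal sketch of the
check-then-create flow described above, shown with Python's requests for
brevity, with the URL and connector name as illustrative assumptions:)

    import requests

    CONNECT_URL = "http://localhost:8083"
    NAME = "my-connector"

    def ensure_connector(config):
        # Check whether the connector already exists.
        resp = requests.get(f"{CONNECT_URL}/connectors/{NAME}")
        if resp.status_code == 404:
            # It does not: create it.
            requests.post(
                f"{CONNECT_URL}/connectors",
                json={"name": NAME, "config": config},
            ).raise_for_status()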
Thanks, Robin.
Usually, how many connectors can be loaded on one worker when running
independent Kafka Connect clusters?
Thanks
Lakshman
Sent from my iPhone
> On 6 Dec 2018, at 9:49 PM, Robin Moffatt wrote:
>
> If the properties are not available per-connector, then you will have to
> set them on the wor
Hello,
We have eight countries, and each country has three connectors, 24
connectors in total, but we have one worker cluster, and the Kafka Connect
worker needs to restart every time.
How can we manage this case? Please suggest.
Regards
Lakshman
> On 6 Dec 2018, at 9:49 PM, Robin Moffatt wrote:
>
> If t
If the properties are not available per-connector, then you will have to
set them on the worker and have independent Kafka Connect clusters
delineated by connector requirements. So long as you configure the ports
not to clash, there's no reason these can't exist on the same host.
--
Robin Moffatt
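(A sketch of what Robin describes: two worker files on the same host with
non-clashing ports and separate cluster identities; all values are
illustrative and the files are abridged:)

    # worker-a.properties
    group.id=connect-cluster-a
    rest.port=8083
    offset.storage.topic=connect-offsets-a
    config.storage.topic=connect-configs-a
    status.storage.topic=connect-status-a

    # worker-b.properties
    group.id=connect-cluster-b
    rest.port=8084
    offset.storage.topic=connect-offsets-b
    config.storage.topic=connect-configs-b
    status.storage.topic=connect-status-b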
Hello! I have a question. We have a cluster with several Connect workers,
and we have many different connectors. We need to give each connector its
own settings: max.in.flight.requests.per.connection, partitioner.class,
acks. But I am having difficulties. How can I do that? Thanks
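(For what it's worth, in releases newer than this thread, Kafka 2.3+ with
KIP-458, per-connector client settings are possible; a sketch, with the
partitioner class name as an illustrative assumption:)

    # on the worker
    connector.client.config.override.policy=All

    # in an individual source connector's config
    producer.override.acks=all
    producer.override.max.in.flight.requests.per.connection=1
    producer.override.partitioner.class=com.example.MyPartitioner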
Hi,
Is there a way to configure a Kafka connector (or a Kafka Connect
server/cluster) to:
1. Receive a maximum of 1MB of data per second from a source Kafka
cluster, or
2. Receive a maximum of 1000 records per second from a source Kafka
cluster, or
3. Receive a maximum of 1MB of
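(As far as I know there is no records-per-second limit in Kafka itself,
but byte-rate quotas on the source cluster's brokers can cap option 1; a
sketch with the standard tool, where the client id is an illustrative
assumption and 1048576 bytes = 1 MB; older brokers take --zookeeper
instead of --bootstrap-server:)

    bin/kafka-configs.sh --bootstrap-server source-broker:9092 --alter \
      --add-config 'consumer_byte_rate=1048576' \
      --entity-type clients --entity-name my-connect-client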
yments?
>
> Thanks,
> Dave
>
>
> On 5/26/17, 1:44 PM, "Dave Hamilton" wrote:
>
> We are currently using the Kafka S3 connector to ship Avro data to S3.
> We made a change to one of our Avro schemas and have noticed consumer
> throughput on the Kafka
throughput
on the Kafka connector drop considerably. I am wondering if there is anything
we can do to avoid such issues when we update schemas in the future?
This is what I believe is happening:
· The avro producer application is running on 12 instances. They
are re
change to one of our Avro schemas and have noticed consumer throughput
on the Kafka connector drop considerably. I am wondering if there is anything
we can do to avoid such issues when we update schemas in the future?
This is what I believe is happening:
· The avr
We are currently using the Kafka S3 connector to ship Avro data to S3. We made
a change to one of our Avro schemas and have noticed consumer throughput on the
Kafka connector drop considerably. I am wondering if there is anything we can
do to avoid such issues when we update schemas in the
where it is getting stuck? You could also try increasing the log
level for the framework to DEBUG or even TRACE.
-Ewen
On Mon, Mar 27, 2017 at 6:22 AM, VIVEK KUMAR MISHRA 13BIT0066 <
vivekkumar.mishra2...@vit.ac.in> wrote:
> Hi All,
>
> I am creating kafka connector for mongodb a
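(For Ewen's DEBUG/TRACE suggestion above, the usual place is the Log4j
file the worker is started with; a sketch assuming the stock
config/connect-log4j.properties layout:)

    log4j.rootLogger=INFO, stdout
    # raise the Connect framework's verbosity
    log4j.logger.org.apache.kafka.connect=DEBUG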
nd N3
have different names.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Thu, Apr 6, 2017 at 4:26 PM, Tushar Sudhakar Jee
wrote:
> Hello Sir/Ma'am,
> I was trying to write a simple case of using kafka connector. My s
Hello Sir/Ma'am,
I was trying to write a simple case of using the Kafka connector. My setup
involves three nodes: N1, N2, and N3.
N1 is the source and N2, N3 are the sink nodes in my case.
I am writing data to a text file (say input.txt) on node N1, and using the
standalone Kafka connector I wi
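(For a setup like this, the stock FileStream connectors are typically
configured along these lines; a sketch, with file names and the topic as
illustrative assumptions:)

    # on N1: file-source.properties
    name=local-file-source
    connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
    tasks.max=1
    file=input.txt
    topic=connect-test

    # on N2 and N3: file-sink.properties
    name=local-file-sink
    connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
    tasks.max=1
    file=output.txt
    topics=connect-test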
Hi All,
I am creating a Kafka connector for MongoDB as a source. My connector is
starting and connecting to Kafka, but it is not committing any offsets.
This is the output after starting the connector:
[root@localhost kafka_2.11-0.10.1.1]# bin/connect-standalone.sh
config/connect-standalone.properties
Chris,
I think you meant to link to https://github.com/wepay/kafka-connect-bigquery
:)
- Samuel
On Mon, Aug 22, 2016 at 4:24 PM, Chris Egerton wrote:
> Hi there,
>
> We've recently open-sourced a BigQuery sink connector and would like to
> request that it be added to th
Hi there,
We've recently open-sourced a BigQuery sink connector and would like to
request that it be added to the Kafka Connector Hub (
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Connector+Hub). The
project can be found at https://github.com/wepay/kafka-connect-biquery, an
14/06/2016 07:35
To: users@kafka.apache.org
Subject: Re: Running kafka connector application
Kanagha,
I'm not sure about that particular connector, but normally the build script
would provide support for collecting the necessary dependencies. Then all
you
There's no API for connectors to shut themselves down because that doesn't
really fit the streaming model that Kafka Connect works with -- it isn't a
batch processing system. If you want to shut down a connector, you'd
normally accomplish this via the REST API.
Technically you *could* accomplish t
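(Concretely, the REST calls that would be used here, with the worker URL
and connector name as illustrative assumptions:)

    # pause the connector (it can be resumed later)
    curl -X PUT http://localhost:8083/connectors/my-gzip-sink/pause
    # or remove it entirely
    curl -X DELETE http://localhost:8083/connectors/my-gzip-sink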
Hi everyone,
I would like to know if there is a way to shut down a connector
programmatically.
On my project we have developed a sink connector that writes messages into
GZIP files for testing purposes. We would like to stop the connector after
no message has been received for an elapsed time
Thanks,
to start Kafka Connector via JMX?
>
> Thanks,
> Abhinav
>
--
Thanks,
Ewen
Kanagha,
I'm not sure about that particular connector, but normally the build script
would provide support for collecting the necessary dependencies. Then all
you need to do is add something like /path/to/connector-and-deps/* to your
classpath and it shouldn't be affected by versions in the pom.xm
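(A sketch of what that looks like in practice, with the paths and file
names as illustrative assumptions:)

    export CLASSPATH=/path/to/connector-and-deps/*
    bin/connect-standalone.sh config/connect-standalone.properties \
      config/my-connector.properties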
Hi Everyone,
Is there a way to start Kafka Connector via JMX?
Thanks,
Abhinav
Hi,
I'm running the TwitterProducer task as per
https://github.com/Eneco/kafka-connect-twitter
connect-standalone /connect-source-standalone.properties
/twitter-source.properties
I see that I have to set the CLASSPATH to include all the dependent jars
that the target connector jar is dependent
key.converter and value.converter are namespace prefixes in this case.
These settings are used by the JsonConverter
https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L53
If schemas are enabled, all JSON messages are sent using an
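(For illustration, the same record as the JsonConverter would emit it in
each mode; the field name and value are illustrative assumptions:)

    With schemas enabled, the payload travels inside an envelope:
      {"schema": {"type": "struct", "fields": [{"field": "name", "type": "string", "optional": false}], "optional": false}, "payload": {"name": "kafka"}}

    With schemas disabled, only the bare payload is sent:
      {"name": "kafka"}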
In config/connect-standalone.properties and
config/connect-distributed.properties, there are the following
configuration entries:
> key.converter.schemas.enable=false
> value.converter.schemas.enable=false
But there is no Java source code which uses these two configuration
entries. I am talking a
Thanks, that looks like a good option. I'm a bit concerned about
running/monitoring an additional external app; an in-stream solution
(Connector or Streams plugin) would be preferable.
But mirroring may be good enough until we eventually upgrade to 0.10.
On Thu, May 5, 2016 at 10:57 AM, tao xiao w
You can use the built-in mirror maker to mirror data from one Kafka to the
other. http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
On Thu, 5 May 2016 at 10:47 Dean Arnold wrote:
> I'm developing a Streams plugin for Kafka 0.10, to be run in a dev sandbox,
> but pull data from a
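(For reference, a minimal legacy MirrorMaker invocation; the config file
names are illustrative assumptions, with consumer.config pointing at the
0.9 source cluster and producer.config at the 0.10 sandbox:)

    bin/kafka-mirror-maker.sh \
      --consumer.config source-cluster-consumer.properties \
      --producer.config target-cluster-producer.properties \
      --whitelist 'my-topic.*'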
I'm developing a Streams plugin for Kafka 0.10, to be run in a dev sandbox,
but pull data from a production 0.9 Kafka deployment. Is there a source
connector that can be used from the 0.10 sandbox to connect to the 0.9
cluster ? Given the number of changes/features in 0.10, such a connector
would
Thank you, Surendra.
I've added your connector to the Connector Hub page:
http://www.confluent.io/developers/connectors
On Fri, Apr 22, 2016 at 10:11 PM, Surendra , Manchikanti
wrote:
> Hi Jay,
>
> Thanks! Could you please share the contact person to include this in the
> Confluent Connector Hub pag
Hi Jay,
Thanks! Could you please share the contact person to include this in the
Confluent Connector Hub page?
Regards,
Surendra M
-- Surendra Manchikanti
On Fri, Apr 22, 2016 at 4:32 PM, Jay Kreps wrote:
> This is great!
>
> -Jay
>
> On Fri, Apr 22, 2016 at 2:28 PM, Surendra , Manchikanti <
> sur
This is great!
-Jay
On Fri, Apr 22, 2016 at 2:28 PM, Surendra , Manchikanti <
surendra.manchika...@gmail.com> wrote:
> Hi,
>
> I have implemented a Kafka connector for Solr. Please find the GitHub
> link below.
>
> https://github.com/msurendra/kafka-connect-solr
>
> The initial release has SolrS
Hi,
I have implemented a Kafka connector for Solr. Please find the GitHub link
below.
https://github.com/msurendra/kafka-connect-solr
The initial release has SolrSinkConnector only; a SolrSourceConnector is
under development and will be added soon.
Regards,
Surendra M