Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-08 Thread Chris Toomey
Write your own implementation of the JDBC sink connector and use the avro serializer to convert the kafka record into a connect record that your connector takes and writes to DB via JDBC. On Fri, May 8, 2020 at 7:38 PM wangl...@geekplus.com.cn < wangl...@geekplus.com.cn> wrote: > > Using debeziu

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Chris Toomey
What exactly is your understanding of what's happening when you say "the pipeline will be blocked by the group coordinator for up to "max.poll.interval.ms""? Please explain that. There's no universal recipe for "long-running jobs"; there are just particular issues you might be encountering and sugg

Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-08 Thread wangl...@geekplus.com.cn
Using debezium to parse binlog, using avro serialization and send to kafka. Need to consume the avro serialized binlog data and write it to the target database. I want to use self-written Java code instead of the Kafka JDBC sink connector. How can I reference the schema registry, convert a kafka mess
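A self-written consumer along these lines might look like the following minimal Java sketch. It assumes the Confluent `kafka-avro-serializer` artifact (and a JDBC driver) is on the classpath so that `KafkaAvroDeserializer` can fetch writer schemas from the registry; the broker address, registry URL, topic, and table/column names are all placeholders, not taken from the thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BinlogToJdbc {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        props.put("group.id", "binlog-writer");
        // KafkaAvroDeserializer looks the writer schema up in the registry
        // using the schema id embedded in each message.
        props.put("key.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://registry:8081");

        try (KafkaConsumer<GenericRecord, GenericRecord> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection("jdbc:mysql://db:3306/target", "user", "pass")) {
            consumer.subscribe(Collections.singletonList("binlog-topic"));
            while (true) {
                for (ConsumerRecord<GenericRecord, GenericRecord> rec : consumer.poll(Duration.ofSeconds(1))) {
                    GenericRecord value = rec.value();
                    // Field and table names here are hypothetical; map the
                    // Debezium envelope fields to your own schema.
                    try (PreparedStatement ps = db.prepareStatement(
                            "INSERT INTO target_table (id, payload) VALUES (?, ?)")) {
                        ps.setObject(1, value.get("id"));
                        ps.setString(2, value.toString());
                        ps.executeUpdate();
                    }
                }
            }
        }
    }
}
```

Error handling, batching, and offset-commit semantics are omitted for brevity; a production version would commit offsets only after the database write succeeds.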

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Ali Nazemian
Hi Chris, I am not sure where I mentioned "automatic partition reassignment", but what I do know is that the side effect of increasing "max.poll.interval.ms" is that if the consumer hangs for whatever reason, the pipeline will be blocked by the group coordinator for up to "max.poll.interval.ms". So I
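For reference, the consumer settings usually weighed against each other in this trade-off are the following (values are illustrative only, not recommendations from the thread):

```properties
# Upper bound on time between poll() calls before the consumer is
# considered failed and its partitions are reassigned.
max.poll.interval.ms=600000
# Fewer records per poll() shortens each processing cycle.
max.poll.records=50
# Heartbeats are sent from a background thread, so liveness detection
# via session.timeout.ms is independent of the poll() cadence.
session.timeout.ms=30000
heartbeat.interval.ms=10000
```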

Re: Re: Is it possible to send avro serialized data to kafka using kafka-console-producer.sh

2020-05-08 Thread wangl...@geekplus.com.cn
The kafka-avro-console-producer is only in Confluent Kafka, but I am using Apache Kafka. It seems the Apache Kafka kafka-console-producer is not able to send Avro serialized data to kafka, but kafka-console-consumer can read Avro serialized data. I have tried, confl

Re: JDBC SINK SCHEMA

2020-05-08 Thread Chris Toomey
You have to either 1) use one of the Confluent serializers when you publish to the topic, so that the schema (or reference to it) is included, or 2) write and use a custom converter
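Option 1 typically amounts to configuring the Connect worker (or the individual connector) with the Confluent AvroConverter; a sketch, with a placeholder registry URL:

```properties
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://registry:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://registry:8081
```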

RE: Change RF factor...

2020-05-08 Thread Rajib Deb
Thanks Manoj, but now I am getting an error Partitions reassignment failed due to Expected JSON OBJECT, RECEIVED null Thanks Rajib -Original Message- From: manoj.agraw...@cognizant.com Sent: Friday, May 8, 2020 12:26 PM To: users@kafka.apache.org Subject: Re: Change RF factor... [**EXT

Re: Change RF factor...

2020-05-08 Thread Manoj.Agrawal2
You can use below command To generate the json file ./bin/kafka-reassign-partitions.sh --zookeeper zookeeper_host:2181 --generate --topics-to-move-json-file test.json --broker-list 10,20,30 <-- list of broker id To execute the reassign partition ./bin/kafka-reassign-partitions.sh --zooke

RE: Change RF factor...

2020-05-08 Thread Rajib Deb
It has three brokers {"version":1,"partitions":[{"topic":"te_re","partitions":0,"replicas":[1546332950,1546332908]}]} Thanks -Original Message- From: manoj.agraw...@cognizant.com Sent: Friday, May 8, 2020 12:18 PM To: users@kafka.apache.org Subject: Re: Change RF factor... [**EXTERNAL
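For what it's worth, the reassignment JSON format uses a singular `partition` field per entry, which would explain the parse error above. A well-formed file for this single-partition topic (reusing the broker ids from the message) would look like:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "te_re", "partition": 0, "replicas": [1546332950, 1546332908]}
  ]
}
```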

Re: Error in kafka streams: The producer attempted to use a producer id which is not currently assigned to its transactional id

2020-05-08 Thread Matthias J. Sax
>> So does this issue relate to transactions which are used only when >> exactly_once guarantee is set? Correct. On 5/8/20 6:28 AM, Pushkar Deole wrote: > Hello Matthias, > > By the way, this error seems to be occurring in only one of the services. > There is another service which is also using
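Kafka Streams only uses a transactional producer when exactly-once processing is enabled via its processing-guarantee setting:

```properties
# Default is at_least_once, which uses no transactions; the error in
# this thread can therefore only occur with the setting below.
processing.guarantee=exactly_once
```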

Re: Change RF factor...

2020-05-08 Thread Manoj.Agrawal2
How many brokers do you have on this cluster, and what is the content of --increase-replication-factor.json? On 5/8/20, 12:16 PM, "Rajib Deb" wrote: [External] Hi I have by mistake created a topic with replication factor of 1. I am trying to increase the replication, but I get the below erro

Change RF factor...

2020-05-08 Thread Rajib Deb
Hi I have by mistake created a topic with replication factor of 1. I am trying to increase the replication, but I get the below error. Can anyone please let me know if I am doing anything wrong. The Topic is created with single partition(te_re-0). ./kafka-reassign-partitions.sh --zookeeper

Re: log.message.timestamp.difference.max.ms and future timestamps?

2020-05-08 Thread Andrew Otto
Thank you, good to know! On Fri, May 8, 2020 at 2:53 PM Matthias J. Sax wrote: > >> What happens if the message timestamp is in the future? > > If the difference is larger than > `log.message.timestamp.difference.max.ms` the write will be rejected. > > This timestamp difference works both ways.

Re: log.message.timestamp.difference.max.ms and future timestamps?

2020-05-08 Thread Matthias J. Sax
>> What happens if the message timestamp is in the future? If the difference is larger than `log.message.timestamp.difference.max.ms` the write will be rejected. This timestamp difference works both ways. -Matthias On 4/16/20 9:39 AM, Andrew Otto wrote: > log.message.timestamp.difference.max.
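The relevant broker settings (also available per topic as `message.timestamp.difference.max.ms`) look like this; the value shown is an example, not a default:

```properties
# Timestamps come from the producer unless LogAppendTime is configured.
log.message.timestamp.type=CreateTime
# Maximum allowed |broker time - message timestamp|; as noted above it
# applies to past and future timestamps alike.
log.message.timestamp.difference.max.ms=3600000
```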

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Chris Toomey
I interpreted your post as saying "when our consumer gets stuck, Kafka's automatic partition reassignment kicks in and that's problematic for us." Hence I suggested not using the automatic partition assignment, which per my interpretation would address your issue. Chris On Fri, May 8, 2020 at 2:1
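Opting out of automatic assignment means calling `assign()` instead of `subscribe()`, so the group coordinator never revokes partitions mid-job. A minimal sketch, with placeholder broker, topic, and partition values:

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() bypasses group management: no rebalances and no
            // max.poll.interval.ms eviction, but also no automatic
            // failover between instances -- you own partition placement.
            consumer.assign(Arrays.asList(new TopicPartition("jobs", 0)));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // process rec, however long it takes
                }
            }
        }
    }
}
```

Note that with `assign()` no `group.id` is required unless you still want to commit offsets to Kafka.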

I have a weird issue

2020-05-08 Thread ????
Hi there I have a cluster with 5 brokers. They have the same hardware (40 cores, 128 GB RAM, 36 TB disk), JVM version (1.8.0_144), and start parameters. One broker always has a long STW pause during young GC, two or three times per day. I am not sure whether this issue is caused by Kafka or the JVM itself.
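A first diagnostic step is usually to enable GC and pause logging on the affected broker and compare it with a healthy one. For JDK 8, flags along these lines work (the log path is a placeholder):

```properties
# JDK 8 GC logging flags, added to the broker's KAFKA_JVM_PERFORMANCE_OPTS
-Xloggc:/var/log/kafka/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
# Logs total stopped time, which catches non-GC safepoint pauses too
-XX:+PrintGCApplicationStoppedTime
```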

Re: Kafka Connect SMT to insert key into message

2020-05-08 Thread Andrew Schofield
Hi, I think you're right. There's a ValueToKey transformation, but not a KeyToValue. I think that would be a better fit than adding to InsertField because you always use the concrete transformations InsertField$Key and InsertField$Value, and they have the same configuration. Probably a fairly simp
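For comparison, the existing transformation in the opposite direction is configured like this (the field name is an example):

```properties
transforms=copyKey
transforms.copyKey.type=org.apache.kafka.connect.transforms.ValueToKey
# Copies the listed value fields into the record key.
transforms.copyKey.fields=id
```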

Re: Kafka: Messages disappearing from topics, largestTime=0

2020-05-08 Thread JP MB
Hi guys, I just wanted to inform you that we solved our issue; it was indeed related to the volume switching process and some permission mess. Thanks everyone for the efforts on finding the root cause. Now as a question for a possible improvement: should Kafka ever allow largestTime to be 0 in an

JDBC SINK SCHEMA

2020-05-08 Thread vishnu murali
Hey Guys, I am using Apache Kafka 2.5, not Confluent. I am trying to send data from a topic to a database using the JDBC sink connector. We need to send that data with the appropriate schema also. I am not using the Confluent version of Kafka, so can anyone explain how I can do this?
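Without a schema registry, the stock `JsonConverter` can carry the schema inline: with `value.converter=org.apache.kafka.connect.json.JsonConverter` and `value.converter.schemas.enable=true`, each message embeds a `schema`/`payload` envelope that the JDBC sink can use. A sketch of one such message (field names are examples):

```json
{
  "schema": {
    "type": "struct",
    "fields": [
      {"field": "id", "type": "int32"},
      {"field": "name", "type": "string"}
    ]
  },
  "payload": {"id": 1, "name": "example"}
}
```

The trade-off is that every message repeats the full schema, which is exactly the overhead the registry-based converters avoid.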

Re: Error in kafka streams: The producer attempted to use a producer id which is not currently assigned to its transactional id

2020-05-08 Thread Pushkar Deole
Hello Matthias, By the way, this error seems to be occurring in only one of the services. There is another service which is also using Kafka Streams to consume from a source, uses processors and then a sink to the output topic; however, that service is running fine. The difference is this other serv

Re: JDBC Sink Connector

2020-05-08 Thread vishnu murali
Thank you so much Robin. It helped me a lot to define the sink connector with upsert mode and it is very helpful. For the schema-related question I am still not getting a proper understanding. Because I am using plain Apache Kafka, I don't know whether the schema registry, KSQL, and Avro serializers are present
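For reference, an upsert-mode JDBC sink configuration looks roughly like this; connection details, topic, and key fields are placeholders:

```properties
name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=my-topic
connection.url=jdbc:mysql://db:3306/target
# upsert requires a primary key definition on the records
insert.mode=upsert
pk.mode=record_key
pk.fields=id
auto.create=true
```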

Re: Is it possible to send avro serialized data to kafka using kafka-console-producer.sh

2020-05-08 Thread Miguel Silvestre
Hi, check kafka-avro-console-producer ./bin/kafka-avro-console-producer \ --broker-list localhost:9092 --topic test \ --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' https://docs.confluent.io/3.0.0/quickstart.html -- Migu

Is it possible to send avro serialized data to kafka using kafka-console-producer.sh

2020-05-08 Thread wangl...@geekplus.com.cn
I can consume avro serialized data from kafka like this: bin/kafka-console-consumer.sh --bootstrap-server xxx:9092 --topic xxx --property print.key=true --formatter io.confluent.kafka.formatter.AvroMessageFormatter --property schema.registry.url=http://xxx:8088 It is possible to send avro se

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Ali Nazemian
Thanks, Chris. So what is causing the consumer to get stuck is a side effect of the built-in partition assignment in Kafka and by overriding that behaviour I should be able to address the long-running job issue, is that right? Can you please elaborate more on this? Regards, Ali On Fri, May 8, 202

Re: can kafka state stores be used as a application level cache by application to modify it from outside the stream topology?

2020-05-08 Thread Pushkar Deole
Hello John, Matthias Sorry for bothering you, however this is now getting crazier. Initially I was under the impression that the cache being held by the application is in the form of key/value where key is the instance of agentId (e.g. 10) and value will hold other attributes (and their respective val