RE: When Kafka stores group information in zookeeper?

2018-10-22 Thread 赖剑清
Yeah, it works. It all depends on the address given as the bootstrap server: a zookeeper address starts the old consumer, and the kafka brokers' address starts the new consumer. Thank you very much! >-Original Message- >From: Peter Bukowinski [mailto:pmb...@gmail.com] >Sent: Tuesday, October 23, 2018

Re: When Kafka stores group information in zookeeper?

2018-10-22 Thread Peter Bukowinski
It all depends on which type of consumer you are using. If you use an old (original) consumer, you must specify one or more zookeeper hosts, since group management info is stored in zookeeper. If you use a new consumer, group management is handled by the kafka cluster itself, so you must specify one
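The distinction above can be sketched with the stock Kafka CLI. A minimal example, assuming a local zookeeper on port 2181 and a broker on port 9092 (hosts, ports, and the group name `my-group` are placeholders):

```shell
# Old consumer: group metadata is stored in zookeeper, so the tool
# is pointed at the zookeeper ensemble.
./kafka-consumer-groups.sh --zookeeper localhost:2181 --list

# New consumer: group metadata is stored by the brokers themselves
# (in the internal __consumer_offsets topic), so the tool is pointed
# at one or more brokers instead.
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# Describing a specific group works the same way:
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group
```

Note that each invocation only sees groups of its own kind, which is why the two `--list` commands in the original question returned different results.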

When Kafka stores group information in zookeeper?

2018-10-22 Thread 赖剑清
Hi, Kafka users: I tried to get information about topic-consumer groups using kafka-consumer-groups.sh, and I found that the commands below return different information:
./kafka-consumer-groups.sh --list --zookeeper localhost:2181
./kafka-consumer-groups.sh --list --new-consumer --bootstrap-server

Re: AVRO Schema with optional field ("type": "record")

2018-10-22 Thread Jacob Sheck
The idl2schemata tool in avro-tools can help:
[user@host]$ cat test.idl
protocol EmployeeProtocol {
  record Employee {
    string name;
    long employeeId;
  }
  record EmployeeWithUnion {
    union {null, Employee} optionalEmployee = null;
  }
}
[user@host]$ java -jar avro-tools-1.8.2.jar
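For reference, a sketch of how the tool is typically invoked and what the optional field looks like once expanded to plain Avro schema JSON (output is abridged, and exact file names and field ordering may differ by avro-tools version):

```shell
# idl2schemata emits one .avsc file per record defined in the IDL;
# the second argument is the output directory (here, the current dir).
java -jar avro-tools-1.8.2.jar idl2schemata test.idl .

# The optional field comes out as a ["null", "Employee"] union with a
# null default, roughly:
cat EmployeeWithUnion.avsc
# {
#   "type": "record",
#   "name": "EmployeeWithUnion",
#   "fields": [
#     {"name": "optionalEmployee",
#      "type": ["null", {...Employee record...}],
#      "default": null}
#   ]
# }
```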

Connect offsets not persisting when worker is restarted

2018-10-22 Thread Daniel Wilson
Are connector offsets persisted across worker restarts? I had thought they would be, since they are written to the connect-offsets topic. I have code set up to save the offset as a field in my SourceTask, which works fine. I can restart the task and the offsets seem to be persisted, though I don't
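One way to check what Connect has actually persisted is to read the offsets topic directly. A debugging sketch, assuming the default internal topic name `connect-offsets` and a local broker (adjust `--topic` if `offset.storage.topic` was overridden in the worker config):

```shell
# Connect stores offsets as JSON, keyed by source partition, with the
# offset map as the value; printing keys makes the mapping visible.
./kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic connect-offsets --from-beginning \
  --property print.key=true
```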

Re: kafka client 1.1.0 broker compatibility

2018-10-22 Thread Matthias J. Sax
If Logstash's internal client is 1.1.0, it is compatible with Kafka 2.0.0 brokers. Note: brokers are always backward compatible with older clients (your case). Additionally, since the 0.10.0.0 release, brokers are also forward compatible with newer clients. -Matthias On 10/22/18 4:06 AM, Dayananda
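The compatibility can also be checked empirically with kafka-broker-api-versions.sh (shipped with Kafka since 0.10.2), which lists the protocol API versions a broker supports; clients negotiate down to versions both sides understand. A sketch, assuming a local broker:

```shell
# Prints each Kafka protocol API with the version range the broker
# supports; a client picks the highest mutually supported version.
./kafka-broker-api-versions.sh --bootstrap-server localhost:9092
```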

Re: Kafka Streams "processed" definition

2018-10-22 Thread Matthias J. Sax
Same reply as to your other email asking the same question:
> A message is considered processed if all state updates are done and
> all output messages are written.
>
> Note that this notion of "processed" is based on sub-topologies, not
> the full topology.
>
> Hope this helps.

Re: Kafka Deployment Using Kubernetes (on Cloud) - settings for log.dirs

2018-10-22 Thread M. Manna
Yes, I have. But it lacks a detailed explanation, as does the document I obtained through a webinar. The impact of log.dirs is, imho, the most important and relevant item for such publications to elaborate on, using the kind of diagrams and detail that Apache Kafka and Confluent have produced so far

Re: Kafka Deployment Using Kubernetes (on Cloud) - settings for log.dirs

2018-10-22 Thread Mark Anderson
Have you reviewed https://www.confluent.io/blog/getting-started-apache-kafka-kubernetes/ as a starting point? On Mon, 22 Oct 2018, 18:07 M. Manna wrote: > Thanks a lot for your prompt answer. This is what I was expecting. > > So, if we had three pods where volumes are mapped as the following >

Re: Kafka Deployment Using Kubernetes (on Cloud) - settings for log.dirs

2018-10-22 Thread M. Manna
Thanks a lot for your prompt answer. This is what I was expecting. So, if we had three pods where volumes are mapped as the following:
Pod 1 = (log.dirs=/some/directory1)
Pod 2 = (log.dirs=/some/directory2)
Pod 3 = (log.dirs=/some/directory3)
if something bad happens to Pod 3 and it goes down,

Re: Kafka Deployment Using Kubernetes (on Cloud) - settings for log.dirs

2018-10-22 Thread Svante Karlsson
Different directories; they cannot share a path. A broker will delete everything under the log directory that it does not know about. On Mon, 22 Oct 2018 at 17:47, M. Manna wrote: > Hello, > > We are thinking of rolling out Kafka on Kubernetes deployed on public cloud > (AWS or GCP, or other). We
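On Kubernetes, a common way to guarantee each broker its own directory is a StatefulSet with volumeClaimTemplates, so every pod gets a dedicated PersistentVolumeClaim mounted at the same in-container path, and log.dirs can be identical across brokers. A hedged sketch (image tag, names, and storage size are placeholders, not a recommendation):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.0.0   # placeholder image
          env:
            # Every pod mounts its *own* claim at this path, so the
            # setting below is safe to share across all brokers.
            - name: KAFKA_LOG_DIRS
              value: /var/lib/kafka/data
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

If a pod is rescheduled, the StatefulSet controller reattaches the same claim to the replacement pod, so the broker comes back with its previous log directory intact.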

Kafka Deployment Using Kubernetes (on Cloud) - settings for log.dirs

2018-10-22 Thread M. Manna
Hello, We are thinking of rolling out Kafka on Kubernetes deployed on a public cloud (AWS, GCP, or other). We were hoping someone could provide some suggestions or insight here. What we are trying to understand is how the log.dirs property is affected when we run pods on a specific worker node. if

Re: New increased partitions could not be rebalanced until stopping all consumers and starting them

2018-10-22 Thread ????????????
Thank you very much. It's very clear and useful. Ruiping Li ------------------ Original ------------------ From: "??"; Date: 2018-10-19 10:09; To: "users@kafka.apache.org"; Subject: RE: New increased partitions could not be rebalanced until stopping all consumers and

Re: New increased partitions could not be rebalanced until stopping all consumers and starting them

2018-10-22 Thread ????????????
Thanks a lot. This config works. Ruiping Li ------------------ Original ------------------ From: "hacker win7"; Date: 2018-10-18 6:29; To: "users"; Subject: Re: New increased partitions could not be rebalanced until stopping all consumers and starting them You can add the

Kafka Streams "processed" definition

2018-10-22 Thread Tobias Johansson
Hi, I wonder when, in Kafka Streams applications, a message is considered processed. I can't find any documentation on when the consumer ACK is done. Regards, Tobias Tobias Johansson Java Developer NetEnt | Better Gaming™ T: +46 73 987 28 63, M: +46 73 987 28 63

kafka client 1.1.0 broker compatibility

2018-10-22 Thread Dayananda S
Looking for an updated broker compatibility chart for Kafka client 1.1.0. Basically, the Kafka input plugin for Logstash uses Kafka client 1.1.0, and I wanted to check whether this plugin supports Kafka broker v2 (the latest version).