Re: Help Needed: Leadership Issue upon Kafka Upgrade (ZooKeeper 3.4.9)

2018-05-11 Thread Ted Yu
Was there any previous connectivity issue to 1.1.1.143:3888 before the upgrade? I assume you have verified that connectivity between the broker and 1.1.1.143 is okay. Which ZooKeeper release are you running? Cheers On Fri, May 11, 2018 at 3:16 PM, Raghav

Help Needed: Leadership Issue upon Kafka Upgrade (ZooKeeper 3.4.9)

2018-05-11 Thread Raghav
Hi, We have a 3-node ZooKeeper ensemble as well as a 3-node Kafka cluster, both hosted on the same 3 VMs. Before the restart we were on Kafka 0.10.2.1; after the restart we moved to Kafka 1.1. We observe that the Kafka brokers report leadership issues, and for a lot of partitions the leader is -1. I see some logs

Consumer/Producer different kerberos.service.name same jvm

2018-05-11 Thread Zieger, Antoine
Hi, I am trying to transfer data between two kerberized Kafka clusters. In the same JVM I instantiate a Consumer (service name s1, broker address host1:1234) and a Producer (service name s2, broker address host2:5678). So as you see, the consumer and
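
For reference, the per-client security settings live on each client's own Properties, so the two service names can coexist in one JVM. Below is a minimal sketch with the standard Java clients, assuming SASL_PLAINTEXT and clients new enough for per-client sasl.jaas.config (KIP-85, Kafka 0.10.2+); the keytab paths and principals are made up:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class TwoClusterClients {
        public static void main(String[] args) {
            // Consumer against cluster 1, Kerberos service name "s1".
            Properties c = new Properties();
            c.put("bootstrap.servers", "host1:1234");
            c.put("group.id", "bridge");                                      // hypothetical
            c.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("security.protocol", "SASL_PLAINTEXT");
            c.put("sasl.kerberos.service.name", "s1");
            c.put("sasl.jaas.config",                                         // per-client JAAS
                    "com.sun.security.auth.module.Krb5LoginModule required "
                    + "useKeyTab=true storeKey=true "
                    + "keyTab=\"/etc/security/keytabs/c1.keytab\" "           // hypothetical
                    + "principal=\"client1@EXAMPLE.COM\";");                  // hypothetical
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);

            // Producer against cluster 2, Kerberos service name "s2".
            Properties p = new Properties();
            p.put("bootstrap.servers", "host2:5678");
            p.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            p.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            p.put("security.protocol", "SASL_PLAINTEXT");
            p.put("sasl.kerberos.service.name", "s2");
            p.put("sasl.jaas.config",
                    "com.sun.security.auth.module.Krb5LoginModule required "
                    + "useKeyTab=true storeKey=true "
                    + "keyTab=\"/etc/security/keytabs/c2.keytab\" "           // hypothetical
                    + "principal=\"client2@EXAMPLE.COM\";");                  // hypothetical
            KafkaProducer<String, String> producer = new KafkaProducer<>(p);

            // ... poll from the consumer, send via the producer ...
            consumer.close();
            producer.close();
        }
    }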

Re: Producer#commitTransaction() Not Being Called if New Records Aren't Processed by StreamTask

2018-05-11 Thread Guozhang Wang
Hello David, Thanks for reporting your observations. I agree with you that we should improve the stream task commit mechanism. In fact, there is already a JIRA open, though with a different motivation: https://issues.apache.org/jira/browse/KAFKA-5510 Feel free to take on this JIRA and submit a PR.

LogCleaner thread failing after upgrading to 2.11-1.1.0

2018-05-11 Thread M. Manna
Hello, Our cluster nodes are going down one by one during log cleanup. This is after we have done a full upgrade from 2.10-0.10.2.1. This is the log we receive: [2018-05-11 17:12:21,652] WARN [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error in response for fetch request (type=FetchRequest,

Re: Consulting ReadOnlyKeyValueStore from Processor can lead to deadlock

2018-05-11 Thread Guozhang Wang
Hello Steven, thanks for pointing it out. I think both of the mentioned issues are worth improving: 1. The read-write lock documentation for caching-enabled stores. 2. When CACHE_MAX_BYTES_BUFFERING_CONFIG is set to 0, we should automatically disable the dummy caching layer in all stores as it is
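
For reference on point 2, the cache size in question is a per-application Kafka Streams config. A minimal sketch (the application id and bootstrap address are made up):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ZeroCacheConfig {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // hypothetical
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
            // 0 bytes turns off record-cache buffering, but per the discussion
            // above the caching layer itself still wraps the stores today.
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        }
    }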

Re: How many are supported by kafka cluster

2018-05-11 Thread Matthias J. Sax
This might be interesting: https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ Not sure what you mean by "streams" exactly, but for brokers the number of partitions is the dominating factor, not the number of topics. -Matthias On 5/11/18 2:01 AM,

Re: What is the performance impact of setting max.poll.records=1

2018-05-11 Thread Matthias J. Sax
`max.poll.records` only configures how many records are returned from poll(). Internally, the consumer buffers a batch of records, and only if this batch is empty will it do a new fetch request within poll(). -Matthias On 5/10/18 10:46 PM, Mads Tandrup wrote: > Hi > > I forgot to mention that
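
In other words, with max.poll.records=1 the network fetch still pulls whole batches; poll() just hands them back one at a time, so the cost is extra poll() calls rather than extra fetch requests. A minimal sketch against the 1.x Java consumer (the broker address, group id, and topic are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SingleRecordPoll {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Caps what poll() returns, not what a fetch request brings back;
            // the network fetch is governed by fetch.min.bytes and
            // max.partition.fetch.bytes.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic"));      // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(500); // at most 1 record
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                    }
                }
            }
        }
    }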

Re: Kafka consumer slowly consume the data

2018-05-11 Thread Ted Yu
bq. load on the node is increasing tremendously Can you do some profiling to see where the bottleneck is? You can pastebin some stack traces. Which Kafka release do you use? Thanks On Fri, May 11, 2018 at 6:41 AM, Karthick Kumar wrote: > Hi, > > I'm using tomcat

Kafka TotalTimeMs time interval

2018-05-11 Thread Shyam P R
Hi All, I have a question regarding Kafka's TotalTimeMs metric and similar metrics: what is the time interval over which the metric is calculated? For example, take the MBean kafka.network:name=TotalTimeMs,request=Produce,type=RequestMetrics. In this metric, there is a Mean and
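
The digest cuts off before any answer, but for what it's worth the attributes in question can be read over JMX. A sketch, assuming the broker was started with remote JMX enabled on port 9999 (e.g. JMX_PORT=9999); as I understand the underlying Yammer histograms, Count accumulates since broker start while the percentile attributes reflect a decaying sample rather than a fixed window:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ReadTotalTimeMs {
        public static void main(String[] args) throws Exception {
            // Assumes remote JMX is enabled on the broker (JMX_PORT=9999).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                ObjectName produceTotalTime = new ObjectName(
                        "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce");
                // Mean and Count are attributes of the histogram MBean.
                System.out.println("Mean  = " + mbsc.getAttribute(produceTotalTime, "Mean"));
                System.out.println("Count = " + mbsc.getAttribute(produceTotalTime, "Count"));
            }
        }
    }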

Re: Kafka consumer slowly consume the data

2018-05-11 Thread Karthick Kumar
Hi, I'm using Tomcat nodes for the Kafka producer and consumer, and recently I faced some issues with them. Normally the producer and consumer counts match on the Tomcat nodes, but after some time the produced data is consumed with a delay, and I'm not sure where to check. The data which was delayed is dumped

Re: No records on consumer poll when upgraded to 2.11-1.1.0

2018-05-11 Thread M. Manna
I just noticed that my max.poll.interval.ms was higher than request.timeout.ms in consumer.properties. So I removed both to fall back to the out-of-the-box defaults, but even then it's not working. It looks like an additional 2 s on the consumer.poll call is giving me data after the upgrade. I am not sure why this is
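
For context, the out-of-the-box relationship being restored here looks roughly like the following (values are the documented defaults for the 1.x Java consumer; worth verifying against your exact version, since before Kafka 2.0 the consumer expects request.timeout.ms to stay above max.poll.interval.ms):

    # consumer.properties -- 1.x Java consumer defaults (sketch)
    max.poll.interval.ms=300000
    # Deliberately larger than max.poll.interval.ms, because group requests
    # can block for up to that long on pre-2.0 consumers.
    request.timeout.ms=305000
    session.timeout.ms=10000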

How many are supported by kafka cluster

2018-05-11 Thread Sathish Anandapu
Hi, I would like to know *how many streams we can support before noticing Kafka cluster degradation and needing to scale up the cluster*. How can I get information related to the above query, and on what basis will I get that information? -- *Thanks & Regards,* *Sathish Kumar A*

Re: Removing the Kafka DEBUG logs in catalina.out using log4j.properties

2018-05-11 Thread Karthick Kumar
Hi, I'm using a Tomcat node as a Kafka consumer, and it prints INFO, DEBUG, and ERROR logs. When I analyzed the log file, the DEBUG logs were taking the most space, so I'm having a disk space issue. I'm using *log4j.properties* for managing the logs, and now I want to remove the DEBUG logs from my log file.
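
A common way to do this with log4j 1.x is to raise the logging threshold so DEBUG is never emitted. A sketch of the relevant log4j.properties lines (the appender name "stdout" and the package name are illustrative; adjust to your existing file):

    # Raise the root level so only INFO and above are logged:
    log4j.rootLogger=INFO, stdout
    # Or silence just the Kafka client packages:
    log4j.logger.org.apache.kafka=WARN
    # An appender-side threshold also works:
    # log4j.appender.stdout.Threshold=INFO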