Re: postgresql consumer

2014-10-18 Thread Sa Li
Hi all, I've just set up a 3-node Kafka cluster (9 brokers, 3 per node), and the performance test is OK. Now I am using TridentKafkaSpout and am able to get data from the producer; see: BrokerHosts zk = new ZkHosts("10.100.70.128:2181"); TridentKafkaConfig spoutConf = new TridentKafkaCon…
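
For context, a minimal sketch of how this spout setup typically continues with the 2014-era storm-kafka API; the topic name and StringScheme are assumptions, not from the original message:

    import backtype.storm.spout.SchemeAsMultiScheme;
    import storm.kafka.BrokerHosts;
    import storm.kafka.StringScheme;
    import storm.kafka.ZkHosts;
    import storm.kafka.trident.OpaqueTridentKafkaSpout;
    import storm.kafka.trident.TridentKafkaConfig;

    public class SpoutSetup {
        // Builds an opaque Trident spout that reads from Kafka via ZooKeeper.
        public static OpaqueTridentKafkaSpout buildSpout() {
            // ZooKeeper address quoted from the post; "test-topic" is assumed.
            BrokerHosts zk = new ZkHosts("10.100.70.128:2181");
            TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "test-topic");
            spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
            return new OpaqueTridentKafkaSpout(spoutConf);
        }
    }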

How to do disaster recovery for a Kafka 0.8 cluster with consumers that use the high-level consumer API?

2014-10-18 Thread Yu Yang
Hi all, we have a Kafka 0.8.1 cluster. We implemented consumers for the topics on the Kafka 0.8 cluster using the high-level consumer API. We observed that if the Kafka cluster goes down and gets rebooted while the consumer is running, the consumer will fail to read a few topic partitions due to negati…
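
For reference, a minimal sketch of a consumer built on the 0.8 high-level API; the ZooKeeper address, group id, and topic are placeholders. The auto.offset.reset property governs what the consumer does when a committed offset is out of range, which is often the setting in play after a cluster restart:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class HighLevelConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ensemble
            props.put("group.id", "example-group");           // placeholder group id
            props.put("auto.offset.reset", "smallest");       // fall back to earliest offset

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // Ask for one stream on one (placeholder) topic.
            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put("example-topic", 1);
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(topicCountMap);

            ConsumerIterator<byte[], byte[]> it =
                    streams.get("example-topic").get(0).iterator();
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        }
    }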

Re: Too much logging for kafka.common.KafkaException

2014-10-18 Thread xingcan
Ewen, thanks for your prompt reply. I've read the issue; however, it seems to be a different problem. While the logs were growing, all consumers stayed down, and all nodes (including brokers and consumers) were disconnected due to the network problem. Besides, our newborn cluster got no…

Re: Too much logging for kafka.common.KafkaException

2014-10-18 Thread Ewen Cheslack-Postava
This looks very similar to the error and stacktrace I see when reproducing https://issues.apache.org/jira/browse/KAFKA-1196 -- that's an overflow where the data returned in a FetchResponse exceeds 2GB. (It triggers the error you're seeing because FetchResponse's size overflows to become negative, w…
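
A minimal illustration of the 32-bit wraparound that KAFKA-1196 describes (not Kafka's actual code): once a byte count held in an int passes Integer.MAX_VALUE, it turns negative.

    public class FetchSizeOverflow {
        public static void main(String[] args) {
            int perPartition = 600 * 1024 * 1024; // 600 MB per partition (assumed figure)
            int total = 0;
            for (int i = 0; i < 4; i++) {
                total += perPartition; // 4 x 600 MB = 2.4 GB, past Integer.MAX_VALUE (~2.15 GB)
            }
            System.out.println(total); // prints -1778384896: the size has wrapped negative
        }
    }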

Too much logging for kafka.common.KafkaException

2014-10-18 Thread xingcan
Hi all, recently I upgraded my Kafka cluster to 0.8.1.1 and set up replication with num.replica.fetchers=5. Last night something went wrong with the network. Soon I found that the server.log files (not the data logs!) on every node had reached 4GB in an hour. I am not sure if it's my inappropriate configurat…
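
One way to bound broker log growth while debugging is a size-based rolling appender in config/log4j.properties. This is a hypothetical snippet, not Kafka's stock configuration (which ships a daily rolling appender), so treat it as a sketch:

    # Hypothetical log4j.properties snippet: cap server.log at ~1 GB total.
    # Swapping Kafka's default DailyRollingFileAppender for a size-based
    # RollingFileAppender is an assumption, not the shipped default.
    log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
    log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
    log4j.appender.kafkaAppender.MaxFileSize=100MB
    log4j.appender.kafkaAppender.MaxBackupIndex=10
    log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n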