Auto offset commit failed

2016-08-30 Thread yuanjia8...@163.com
Hi All, my Kafka cluster is running Kafka 0.10.0. I have found two reasons for "Auto offset commit failed" in the log file. One is "Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member", and the other is "Commit offsets failed with retriable exc
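The rebalance-related failure usually means the consumer spent longer between poll() calls than the session allows, so the coordinator kicked it out of the group. A hedged sketch of consumer settings commonly tuned for this on 0.10.0 (values are illustrative placeholders, not recommendations):

```properties
# Give the consumer more headroom before the coordinator considers it
# dead and rebalances. In 0.10.0, poll liveness is tied to
# session.timeout.ms (max.poll.interval.ms only arrives in 0.10.1):
session.timeout.ms=30000
heartbeat.interval.ms=3000

# Or reduce work per poll so the consumer returns to poll() sooner:
max.poll.records=100

# Alternatively, disable auto-commit and commit manually at safe points:
enable.auto.commit=false
```

The retriable-exception variant, by contrast, is generally transient (e.g. a coordinator move) and the auto-commit simply retries on the next interval.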

Re: Re: kafka-consumer-groups.sh delete group with new-consumer

2016-08-14 Thread yuanjia8...@163.com
offsets have all been expired, the group registry will be deleted. Guozhang On Thu, Aug 11, 2016 at 8:55 PM, yuanjia8...@163.com wrote: > Hi all, > When I use kafka-consumer-groups.sh to delete new-consumer's group, > the shell note that "Option [delete] is not valid

kafka-consumer-groups.sh delete group with new-consumer

2016-08-11 Thread yuanjia8...@163.com
Hi all, when I use kafka-consumer-groups.sh to delete a new-consumer group, the shell notes that "Option [delete] is not valid with [new-consumer]. Note that there's no need to delete group metadata for the new consumer as it is automatically deleted when the last member leaves." How to u
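For reference, a sketch of the two invocation styles on 0.10.0 (broker addresses and the group name are placeholders): explicit deletion only applies to old ZooKeeper-based groups, while new-consumer groups can only be inspected and are removed automatically once the last member leaves and the stored offsets expire.

```shell
# Old-consumer (ZooKeeper-based) groups can be deleted explicitly:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 \
    --delete --group my-group

# New-consumer groups can only be listed/described; cleanup is automatic:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --new-consumer --describe --group my-group
```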

Re: Re: KafkaConsumer position block

2016-07-25 Thread yuanjia8...@163.com
et the enable delete config right, and re-try to see if this issue is re-producible. Guozhang On Thu, Jul 21, 2016 at 11:52 PM, yuanjia8...@163.com wrote: > Hi Guozhang, > > I want to get the latest offset, code as follows: > consumer.assign(topicPartitionList); > consumer.se

Re: Re: KafkaConsumer position block

2016-07-21 Thread yuanjia8...@163.com
r position block Hello Yuanjia, Could you share your code example on calling consumer.position()? Is the partition that you are getting the offset from assigned to the consumer? Guozhang On Wed, Jul 20, 2016 at 11:50 PM, yuanjia8...@163.com wrote: > Hi, > With kafka-clients

KafkaConsumer position block

2016-07-20 Thread yuanjia8...@163.com
Hi, with kafka-clients-0.10.0.0, I use KafkaConsumer.position() to get the offset, and the process blocks in ConsumerNetworkClient.awaitMetadataUpdate, waiting until the metadata has been refreshed. My questions are: 1. Why does the metadata not refresh? 2. Could it use a timeout or throw ex
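As the thread above discusses, position() is only meaningful for a partition that is actually assigned to the consumer. A minimal sketch of the assign-then-position pattern, assuming kafka-clients 0.10.0.x on the classpath and a reachable broker (topic name and addresses are hypothetical):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LatestOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "offset-probe");            // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
            // position() requires the partition to be assigned first:
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToEnd(Collections.singletonList(tp));
            // May still block in awaitMetadataUpdate if the cluster
            // metadata cannot be fetched (e.g. broker unreachable):
            long latest = consumer.position(tp);
            System.out.println("latest offset = " + latest);
        }
    }
}
```

Note that in 0.10.0 there is no timeout-taking overload of position(); bounded variants only appear in later client versions.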

one broker id exists in topic ISR but does not exist in broker ids

2016-05-04 Thread yuanjia8...@163.com
Hi all, I have a problem where broker id 1 exists in one topic's ISR, and is the only member, but id 1 does not exist in the ZooKeeper path /brokers/ids. Any idea why this is happening? Is it split-brain? Thanks. LiYuanJia
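A sketch of how to compare the two views directly in ZooKeeper (addresses and the topic name are placeholders; the registration path is /brokers/ids, plural):

```shell
# Broker ids currently registered (ephemeral nodes; a broker that lost
# its ZooKeeper session disappears from here while still running):
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

# The ISR recorded in the partition's state node, for comparison:
bin/zookeeper-shell.sh localhost:2181 \
    get /brokers/topics/my-topic/partitions/0/state
```

A broker appearing in an ISR but missing from /brokers/ids typically points to an expired ZooKeeper session rather than true split-brain.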

Re: Optimize the performance of inserting data to Cassandra with Kafka and Spark Streaming

2016-02-17 Thread yuanjia8...@163.com
Hi Jerry, 1. Make sure that 1000 messages have been sent to Kafka before consuming. 2. If you don't care about the ordering between messages, you can use multiple partitions and more consumers. LiYuanJia From: Jerry Wong Date: 2016-02-17 05:33 To: users Subject: Optimize the performance