Re: replication factor increased automatically !

2015-11-23 Thread sunil kalva
hi, sorry, I missed the header in my prev mail:

kafka_2.11-0.8.2.1/bin/kafka-topics.sh --describe --zookeeper zookeeperhost:2181 --topic testkafka
Topic:testkafka  PartitionCount:10  ReplicationFactor:8  Configs:
Topic: testkafka  Partition: 0  Leader: 2  Replicas: 5,10,1,9,2,7,3,8  Isr: 9,8,2

Re: replication factor increased automatically !

2015-11-23 Thread sunil kalva
Topic: testkafka  Partition: 0  Leader: 5   Replicas: 5,10,1,9,2,7,3,8  Isr: 5,10,1,9,7,3,8
Topic: testkafka  Partition: 1  Leader: 10  Replicas: 10,1,6,9,2,3,8,4  Isr: 10,6,9,3,8,4
Topic: testkafka  Partition: 2  Leader: 4   Replicas: 5,10,1,9,2,7,3,4  Isr: 10,4
Topic: testkafka  Partition: 3  Leader: 5

Re: replication factor increased automatically !

2015-11-23 Thread sunil kalva
We found the issue: it was our UI tool, which had a bug in its partition reassignment feature. Nothing to do with Kafka. Sorry for spamming. Sunil On Mon, Nov 23, 2015 at 4:13 PM, sunil kalva wrote: > hi > sorry missed the header in my prev mail > >

Re: message order problem

2015-11-23 Thread Yonghui Zhao
The ack mode is the default value, 0: "0, which means that the producer never waits for an acknowledgement from the broker (the same behavior as 0.7). This option provides the lowest latency but the weakest durability guarantees (some data will be lost when a server fails)." And msg1 and

cannot decode consumer metadata response

2015-11-23 Thread Fredo Lee
I have four kafka nodes with broker ids 1,2,3,4. My kafka version is 0.8.2.2. I wrote a consumer client according to the kafka protocol. I killed one kafka node and wanted to re-fetch the coordinator broker. I got this binary string: 0,0,0,0,0,0,0,1,0,4,98,108,99,115,0,0,0,1,0,0,0,7,0,0,0,0,0,0,0,
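For readers decoding this by hand, here is a minimal sketch (Python, with made-up sample bytes, not an attempt to parse the exact bytes above) of the ConsumerMetadataResponse (GroupCoordinatorResponse v0) layout per the Kafka protocol guide: correlation id (int32), error code (int16), coordinator id (int32), host (int16-length-prefixed string), port (int32), all big-endian:

```python
import struct

def decode_consumer_metadata_response(buf: bytes) -> dict:
    """Decode a ConsumerMetadataResponse (GroupCoordinatorResponse v0).

    Wire layout (big-endian), per the Kafka protocol guide:
      correlation_id int32, error_code int16, coordinator_id int32,
      host (int16 length prefix + bytes), port int32.
    """
    correlation_id, error_code, coordinator_id, host_len = \
        struct.unpack_from(">ihih", buf, 0)
    offset = 12  # 4 + 2 + 4 + 2 bytes consumed so far
    host = buf[offset:offset + host_len].decode("utf-8")
    (port,) = struct.unpack_from(">i", buf, offset + host_len)
    return {"correlation_id": correlation_id, "error_code": error_code,
            "coordinator_id": coordinator_id, "host": host, "port": port}

# Made-up sample: correlation id 1, no error, coordinator broker 2 at "blcs":9092.
sample = struct.pack(">ihih", 1, 0, 2, 4) + b"blcs" + struct.pack(">i", 9092)
resp = decode_consumer_metadata_response(sample)
```

Note that an error code of 0 means success, so a decode that "looks wrong" despite error 0 usually points at a misaligned field (for example, reading the int16 error code as an int32).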

Re: cannot decode consumer metadata response

2015-11-23 Thread Fredo Lee
This situation occurs when I kill some nodes in the kafka cluster. Because the error code is zero, I cannot find out the real reason. 2015-11-23 20:45 GMT+08:00 Fredo Lee : > i have four kafka nodes with broker id: 1,2,3,4 > my kafka version is 0.8.2.2

Fwd: cannot decode consumer metadata response

2015-11-23 Thread Fredo Lee
This situation occurs when I kill some nodes in the kafka cluster. Because the error code is zero, I cannot find out the real reason. -- Forwarded message -- From: Fredo Lee Date: 2015-11-23 20:45 GMT+08:00 Subject: cannot decode

Re: replication factor increased automatically !

2015-11-23 Thread Raju Bairishetti
Hello Sunil, Are you able to see 8 replicas (i.e. in the ISR) for all the partitions? Can you paste the output of the --describe topic command? kafka-topics --zookeeper ... --describe On Mon, Nov 23, 2015 at 4:29 PM, sunil kalva wrote: > Hi > We have a 10 node kafka

Re: replication factor increased automatically !

2015-11-23 Thread Prabhjot Bharaj
Hi Sunil, Please paste the full output, including the first line of the output of kafka-topics.sh, e.g.:

root@x.x.x.x:~# kafka-topics.sh --describe --zookeeper localhost:2182 --topic part_2_repl_3
Topic:part_2_repl_3  PartitionCount:2  ReplicationFactor:3  Configs:
Topic: part_2_repl_3
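As an aside, the summary line being asked for here is easy to check mechanically; a small sketch (hypothetical helper, Python) that splits the `key:value` tokens of the first `--describe` output line:

```python
def parse_describe_header(line: str) -> dict:
    """Parse the first line of `kafka-topics.sh --describe` output,
    e.g. "Topic:foo PartitionCount:2 ReplicationFactor:3 Configs:"."""
    fields = {}
    for token in line.split():
        key, _, value = token.partition(":")  # split at the first colon only
        fields[key] = value
    return fields

hdr = parse_describe_header(
    "Topic:part_2_repl_3 PartitionCount:2 ReplicationFactor:3 Configs:")
```

This makes it trivial to assert, for example, that `ReplicationFactor` still matches what the topic was created with.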

replication factor increased automatically !

2015-11-23 Thread sunil kalva
Hi, We have a 10 node kafka cluster, and we created a topic with replication factor "3". It was running fine for 10 days; after that, the replication factor increased to "8" without our running any command to increase it. We have verified in server logs and also confirmed with the

Re: offset consume in JAVA api

2015-11-23 Thread Yonghui Zhao
Thanks Guozhang. In our scenario, each group only has 1 consumer, so it is safe to do this. 2015-11-21 8:21 GMT+08:00 Guozhang Wang : > Yonghui, > > You can use ZookeeperConsumerConnector.commitOffsets() to commit arbitrary > offsets, but be careful using it to seek

Re: All brokers are running but some partitions' leader is -1

2015-11-23 Thread Qi Xu
Looping in another colleague from our team. On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu wrote: > Hi folks, > We have a 10 node cluster and have several topics. Each topic has about > 256 partitions with 3 replica factor. Now we run into an issue that in some > topic, a few partition (< 10)'s

Re: 0.9.0.0 RC4

2015-11-23 Thread Jun Rao
+1 Thanks, Jun On Fri, Nov 20, 2015 at 5:21 PM, Jun Rao wrote: > This is the fourth candidate for release of Apache Kafka 0.9.0.0. This a > major release that includes (1) authentication (through SSL and SASL) and > authorization, (2) a new java consumer, (3) a Kafka

Re: 0.9.0.0 RC4

2015-11-23 Thread Jun Rao
Thanks everyone for voting. The following are the results of the votes. +1 binding = 4 votes (Neha Narkhede, Sriharsha Chintalapani, Guozhang Wang, Jun Rao) +1 non-binding = 3 votes -1 = 0 votes 0 = 0 votes The vote passes. I will release artifacts to maven central, update the dist svn and

Re: 0.9.0.0 RC4

2015-11-23 Thread Hawin Jiang
We are waiting for this. On Mon, Nov 23, 2015 at 8:49 PM, Jun Rao wrote: > Thanks everyone for voting. > > The following are the results of the votes. > > +1 binding = 4 votes (Neha Narkhede, Sriharsha Chintalapani, Guozhang Wang, > Jun Rao) > +1 non-binding = 3 votes > -1 =

Re: message order problem

2015-11-23 Thread Yonghui Zhao
Thanks Guozhang. Most params are not set in our config, so max retries should be 3 by default. In your explanation:
a. msg1 sent, produce() returned.
b. msg2 sent.
c. msg1 failed, and retried.
d. msg2 acked.
e. msg1 acked.
But if acks is 0, "the retries configuration will not take effect
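The a..e sequence above can be sketched as a toy model (Python, not real producer code): when more than one request is in flight and retries do take effect (i.e. acks >= 1), a retried message lands after messages sent later:

```python
def delivery_order(sends):
    """Toy model of producer retries. sends is a list of
    (msg, first_attempt_succeeds); we model one round of in-flight
    sends followed by one round of retries for the failures."""
    committed, retries = [], []
    for msg, ok in sends:
        (committed if ok else retries).append(msg)
    return committed + retries  # retried messages land after later ones

# msg1's first attempt fails and is retried; msg2 succeeds immediately,
# so the broker commits msg2 before msg1.
order = delivery_order([("msg1", False), ("msg2", True)])
```

With acks=0 there are no acks and hence no retries, so this particular reordering path does not apply; the trade-off is that the failed msg1 is simply lost instead.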

Re: Is re-partition hitless process?

2015-11-23 Thread Gwen Shapira
By re-partition, do you mean adding partitions to an existing topic? There are two things to note in that case:
1. It is "hitless" because all it does is create new partitions where future records can go; it does not actually move data around.
2. You could be "hit" if your consumer code assumes that
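The second point above comes from how default partitioners place keyed records: hash(key) modulo the partition count, so adding partitions changes where keys land. A simplified sketch (Python; Kafka's Java client actually hashes with murmur2, but the modulo effect is the point here):

```python
def partition_for(key_hash: int, num_partitions: int) -> int:
    # Simplified default partitioning: hash of the key, modulo partition count.
    return key_hash % num_partitions

h = 1000  # stand-in for the hash of some record key
p_before = partition_for(h, 8)   # 1000 % 8 == 0: lands on partition 0
p_after = partition_for(h, 12)   # 1000 % 12 == 4: same key, new home
```

So a consumer that assumes "all records for key K are on partition P forever" will see K's new records on a different partition after the count changes.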

Re: All brokers are running but some partitions' leader is -1

2015-11-23 Thread Prabhjot Bharaj
Hi, With the information provided, these are the steps I can think of (based on my experience with kafka):
1. Do a describe on the topic. See if the partitions and replicas are evenly distributed amongst all. If not, you might want to try the 'Reassign Partitions Tool' -
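For the Reassign Partitions Tool suggested in step 1, the tool takes a JSON plan file; a sketch of generating one (Python; the topic, partition, and broker ids here are hypothetical):

```python
import json

# Hypothetical plan moving partition 0 of "testkafka" onto brokers 1, 2, 3,
# in the format accepted by kafka-reassign-partitions.sh via
# --reassignment-json-file.
plan = {
    "version": 1,
    "partitions": [
        {"topic": "testkafka", "partition": 0, "replicas": [1, 2, 3]},
    ],
}
plan_json = json.dumps(plan, indent=2)
```

Generating the file programmatically makes it easy to review the replica spread before executing the reassignment.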

Re: 0.9.0.0 RC4

2015-11-23 Thread Ismael Juma
+1 (non-binding). Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80, quick start on source artifact and Scala 2.11 binary artifact. On Sat, Nov 21, 2015 at 1:21 AM, Jun Rao wrote: > This is the fourth candidate for release of Apache Kafka 0.9.0.0.

Re: Resetted cluster - kafka.common.KafkaException: Should not set log end offset on partition [test,0]'s local replica 1

2015-11-23 Thread Zoltan Fedor
Sorry, I think I found the issue. The hosts files on the two servers referenced the wrong hostnames... Apologies for the question. On Mon, Nov 23, 2015 at 10:12 AM, Zoltan Fedor wrote: > Hi, > I have a two-node Kafka cluster where I had lost one of the nodes, so

Re: 0.9.0.0 RC4

2015-11-23 Thread Ismael Juma
On Mon, Nov 23, 2015 at 4:15 PM, hsy...@gmail.com wrote: > In http://kafka.apache.org/090/documentation.html#newconsumerconfigs > partition.assignment.strategy should be a string, not a list of string? List is correct, I believe; see how it's used in the code: List assignors =

Resetted cluster - kafka.common.KafkaException: Should not set log end offset on partition [test,0]'s local replica 1

2015-11-23 Thread Zoltan Fedor
Hi, I have a two-node Kafka cluster where I had lost one of the nodes, so I had to re-clone it from the other, and since then I have problems with the original node. I went and reset the whole cluster, deleted the kafka data folders, removed /config, /admin, /brokers, /consumers, /controller and

Re: 0.9.0.0 RC4

2015-11-23 Thread hsy...@gmail.com
In http://kafka.apache.org/090/documentation.html#newconsumerconfigs, partition.assignment.strategy should be a string, not a list of string? On Fri, Nov 20, 2015 at 5:21 PM, Jun Rao wrote: > This is the fourth candidate for release of Apache Kafka 0.9.0.0. This a > major release

Subscription to the mailing list

2015-11-23 Thread Deville Deville
Hi. I want to subscribe to the mailing list. Please add me; I have some questions. Thanks in advance.

Re: Commit offsets only work for subscribe(), not assign()

2015-11-23 Thread Guozhang Wang
Siyuan, When you say "I didn't see anything from the time the process was killed", did you mean your consumer did not get any data after it was restarted, or that the messages processed before killing the consumer do not show up again? Guozhang On Mon, Nov 23, 2015 at 8:13 AM, hsy...@gmail.com

kafka java consumer not consuming messsages produced by remote client

2015-11-23 Thread Kudumula, Surender
Hi all, I have a remote java producer producing messages to a topic that my local kafka java client is subscribed to, but it doesn't consume any messages, whereas if I run the consumer from the command line I can consume the messages produced by the remote client. The following command line

Re: 0.9.0.0 RC4

2015-11-23 Thread Jun Rao
I updated the release notes. Since this doesn't affect the release artifacts to be voted upon, we don't have to do another RC. Please vote by 6pm PT today. Thanks, Jun On Mon, Nov 23, 2015 at 8:43 AM, Guozhang Wang wrote: > I think we should update the release notes to

Re: 0.9.0.0 RC4

2015-11-23 Thread Neha Narkhede
+1 (binding). Verified source and binary artifacts, ran unit tests. On Mon, Nov 23, 2015 at 9:32 AM, Jun Rao wrote: > I updated the release notes. Since this doesn't affect the release > artifacts to be voted upon, we don't have to do another RC. > > Please vote by 6pm PT

Is re-partition hitless process?

2015-11-23 Thread Dillian Murphey
Can I do this on a production system and not have downtime? I'm using kafkamanager to make this easier, but it's just running the re-partition task.

Re: Commit offsets only work for subscribe(), not assign()

2015-11-23 Thread hsy...@gmail.com
Hey Jason, The test I did is very simple: I was using manual assignment with its own groupid and clientid. I first started a process to consume data, then produced some data, then killed the process, continued producing more data, and started the process again. I didn't see anything from the time the