Zookeeper hostname/ip change

2016-09-25 Thread brenfield111
I need to change the hostnames and IPs for the ZooKeeper ensemble
serving my Kafka cluster.

Will Kafka carry on as usual, along with its existing ZK nodes, after
making the config changes?
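
For context, each broker finds the ensemble through zookeeper.connect in its
server.properties, so those entries would need updating as part of the change.
A minimal sketch, with placeholder hostnames:

-
# server.properties on each broker (hostnames are hypothetical)
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
-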

Thanks


Re: Zookeeper hostname/ip change

2016-09-25 Thread Ali Akhtar
Perhaps do it as a rolling change: add one new node, take down an existing
node, and repeat?
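
A rough sketch of that rolling approach, assuming a static three-node zoo.cfg
(hostnames and server IDs are placeholders); each replacement means updating
zoo.cfg on every ensemble member and restarting them one at a time, then
pointing the brokers' zookeeper.connect at the new names once the ensemble is
stable again:

-
# zoo.cfg: swap one server entry at a time, keeping the same myid
server.1=zk-new-1.example.com:2888:3888   # replaced zk-old-1.example.com
server.2=zk-old-2.example.com:2888:3888
server.3=zk-old-3.example.com:2888:3888
-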

On Sun, Sep 25, 2016 at 10:37 PM, brenfield111 
wrote:

> I need to change the hostnames and IPs for the ZooKeeper ensemble
> serving my Kafka cluster.
>
> Will Kafka carry on as usual, along with its existing ZK nodes, after
> making the config changes?
>
> Thanks
>


Where does Kafka store consumer offsets in version 0.9.0.0?

2016-09-25 Thread Zhuo Chen
Hi, I use Kafka 0.9.0.0 in CDH 5.6, and my application consumes Kafka
messages with the following code.

-
  // in the enclosing class file:
  // import java.util.Arrays;
  // import java.util.Properties;
  // import org.apache.kafka.clients.consumer.KafkaConsumer;

  String topicName = "tpCz01";
  Properties props = new Properties();

  props.put("bootstrap.servers",
 "192.168.59.121:9092,192.168.59.122:9092,192.168.59.123:9092");
  props.put("group.id", "grpCz");
  props.put("enable.auto.commit", "true");
  props.put("auto.commit.interval.ms", "1000");
  props.put("session.timeout.ms", "30000");
  props.put("key.deserializer",
 "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer",
 "org.apache.kafka.common.serialization.StringDeserializer");

  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

  // Kafka consumer subscribes to a list of topics here.
  consumer.subscribe(Arrays.asList(topicName));
  //..
-

When the application has run once, I can get the consumer offset with
kafka-run-class kafka.tools.ConsumerOffsetChecker, but I cannot find the
offset under the ZooKeeper path /consumers/x.

I wonder where Kafka stores consumer offsets in version 0.9.0.0. Is there
anything wrong with what I did?
Any help would be appreciated. Thank you~
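
For what it's worth, with the new consumer API used above (the one configured
through bootstrap.servers), offsets are committed to the internal Kafka topic
__consumer_offsets rather than to ZooKeeper, which would explain the empty
/consumers path. A hedged sketch of inspecting them on 0.9 (group name and
broker address taken from the snippet above; the exact wrapper script name may
differ by distribution):

-
# new-consumer offsets live in the __consumer_offsets topic, not ZooKeeper
kafka-consumer-groups --new-consumer \
  --bootstrap-server 192.168.59.121:9092 \
  --describe --group grpCz
-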


RE: producer can't push msg sometimes with 1 broker recovered

2016-09-25 Thread FEI Aggie
Kamal,
Thanks for your response. I tried testing with metadata.max.age.ms reduced to
10s, but the behavior did not change, and the producer still can't find the
live broker.

I did more testing and found the pattern below (the topic was created with
"--replication-factor 2 --partitions 1" in both cases):

Case 1:
  node 1            node 2
  down (lead)       down (replica)
  down (replica)    up   (lead)      -> producer send fail !!!

Case 2:
  node 1            node 2
  down (lead)       down (replica)
  up   (lead)       down (replica)   -> producer send ok !!!

If only the node holding the original partition leader comes back up,
everything is fine. If only the node holding the original replica comes back
up, the producer can't connect to the live broker (it keeps trying to connect
to the original leader, node 1 in my case).

Can Kafka not recover from this situation? Does anyone have a clue about this?

Thanks!
Aggie
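
One thing worth checking in the failing case is what the cluster itself
reports as the partition's leader and ISR while only node 2 is up, and how
that compares with the stale metadata the producer keeps using. A sketch,
assuming the stock topic tool and a reachable ZooKeeper (host and topic name
are placeholders):

-
# show the current leader, replicas and ISR for the topic
kafka-topics --describe --zookeeper zkhost:2181 --topic my-topic
-
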
-Original Message-
From: Kamal C [mailto:kamaltar...@gmail.com] 
Sent: Saturday, September 24, 2016 1:37 PM
To: users@kafka.apache.org
Subject: Re: producer can't push msg sometimes with 1 broker recovered

Reduce the metadata refresh interval 'metadata.max.age.ms' from 5 min to your
desired time interval.
This may shrink the window during which the producer still points at an
unavailable broker.

-- Kamal
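
A minimal sketch of that suggestion on the producer side (broker addresses,
the 10s value, and the serializers are only illustrative):

-
  // import java.util.Properties;
  // import org.apache.kafka.clients.producer.KafkaProducer;
  Properties props = new Properties();
  props.put("bootstrap.servers", "node1:9092,node2:9092");
  // refresh cluster metadata every 10s instead of the default 5 min,
  // so a leadership change is noticed sooner
  props.put("metadata.max.age.ms", "10000");
  props.put("key.serializer",
 "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer",
 "org.apache.kafka.common.serialization.StringSerializer");
  KafkaProducer<String, String> producer = new KafkaProducer<>(props);
-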


RE: producer can't push msg sometimes with 1 broker recovered

2016-09-25 Thread FEI Aggie
@kant, we also use third-party software when needed.

-Original Message-
From: kant kodali [mailto:kanth...@gmail.com] 
Sent: Saturday, September 24, 2016 1:50 PM
To: users@kafka.apache.org
Subject: Re: producer can't push msg sometimes with 1 broker recovered

@Fei Just curious why you guys are interested in using Kafka. I thought
Alcatel-Lucent usually creates its own software, no?

On Fri, Sep 23, 2016 10:36 PM, Kamal C kamaltar...@gmail.com
wrote:
Reduce the metadata refresh interval 'metadata.max.age.ms' from 5 min to
your desired time interval.
This may shrink the window during which the producer still points at an
unavailable broker.

-- Kamal