Fetching offset timestamp with new KafkaConsumer

2015-11-30 Thread Wang, Howard
Hi,

I use the new KafkaConsumer from the just-released Kafka (kafka-clients) 0.9.0.0 to 
manually commit offsets (consumer.commitAsync()). I have a use case where I 
want to know the metadata related to the offset, such as the timestamp of the 
last offset.

In the Kafka 0.8.* Java API, there is an offset fetch API using OffsetFetchRequest. 
I could not find anything similar in Kafka 0.9. Will there be something like it in 
KafkaConsumer?
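For reference, the new consumer does let you attach a metadata string at commit time: commitAsync() accepts a map of OffsetAndMetadata, and the metadata field is a natural place to stash a commit timestamp. A minimal sketch (class and method names are illustrative, not from any official example):

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitWithTimestamp {
    // Encode the commit time as the offset's metadata string.
    static String timestampMetadata(long epochMillis) {
        return Long.toString(epochMillis);
    }

    static void commit(KafkaConsumer<String, String> consumer,
                       TopicPartition tp, long nextOffset) {
        OffsetAndMetadata om = new OffsetAndMetadata(
            nextOffset, timestampMetadata(System.currentTimeMillis()));
        Map<TopicPartition, OffsetAndMetadata> offsets =
            Collections.singletonMap(tp, om);
        // Asynchronous commit; the callback reports any failure.
        consumer.commitAsync(offsets, (map, ex) -> {
            if (ex != null) ex.printStackTrace();
        });
    }
}
```

The metadata string is stored alongside the offset by the broker and comes back via committed(), so the timestamp survives consumer restarts.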


Also, I am wondering where Kafka offsets are stored internally in ZooKeeper. I 
scanned the ZooKeeper zNode hierarchy and could not find a zNode related 
to the offsets. I kept my KafkaConsumer running with a pre-set group id and 
client id while I did the ZooKeeper search.


Thanks.

Howard

--
 Howard Wang
Engineering - Big Data and Personalization
Washington Post Media

1150 15th St NW, Washington, DC 20071
p. 202-334-9195
Email: howard.w...@washpost.com


Re: Is 0.9 new consumer API compatible with 0.8.x.x broker

2015-11-30 Thread Wang, Howard
Thanks.

I just found that the new KafkaConsumer does have two API functions,
assignment() and
committed(TopicPartition partition). With these two functions, we'll be able
to retrieve the timestamp of the last offset regardless of whether the offset
storage is ZooKeeper or the offsets topic.
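Putting those two calls together, a sketch of reading back the committed offsets and a timestamp stored in the metadata string (the timestamp-in-metadata convention is an assumption, not something the API mandates):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ReadBackCommitted {
    // Parse a timestamp previously stored in the commit metadata;
    // returns -1 if no metadata was attached.
    static long parseTimestamp(String metadata) {
        if (metadata == null || metadata.isEmpty()) return -1L;
        return Long.parseLong(metadata);
    }

    static void printCommitted(KafkaConsumer<String, String> consumer) {
        for (TopicPartition tp : consumer.assignment()) {
            // committed() may return null if nothing was ever committed.
            OffsetAndMetadata om = consumer.committed(tp);
            if (om != null) {
                System.out.printf("%s offset=%d committedAt=%d%n",
                    tp, om.offset(), parseTimestamp(om.metadata()));
            }
        }
    }
}
```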

Howard








On 11/30/15, 3:55 PM, "hsy...@gmail.com"  wrote:

>Is 0.9 new consumer API compatible with 0.8.x.x broker



Kafka 0.9 Consumer Group

2016-01-11 Thread Wang, Howard
Hi,

I have a question regarding the Kafka 0.9 Consumer Group. I manually commit 
offsets using the Kafka 0.9 Consumer created with a consumer group.

However, after my app restarted totally from scratch, the consumer group seems 
to have lost all its offsets. Is it true that consumer offsets are transient 
and will be gone after the consumer group has no members and gets deleted?

Thanks.

Howard


Re: Kafka 0.9 Consumer Group

2016-01-11 Thread Wang, Howard
Hi Jason,

I used kafka-consumer-groups.sh to check my consumer group:
~/GitHub/kafka/bin/kafka-consumer-groups.sh --bootstrap-server <broker:port> --group test.group --describe --new-consumer

I ran this command several times after my app was shut down. I always get
the "Consumer group `test.group` does not exist or is rebalancing." response.



I did set enable.auto.commit to false. Below is how I set up my
KafkaConsumer:

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, myAppConfig.getKafkaBroker());
props.put(ConsumerConfig.GROUP_ID_CONFIG, myAppConfig.getKafkaConsumerGroup());
props.put(ConsumerConfig.CLIENT_ID_CONFIG, myAppConfig.getRandomNodeId());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, myAppConfig.getKafkaConsumerSessionTimeout());
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
consumer = new KafkaConsumer<>(props);




Thanks.

Howard

On 1/11/16, 12:55 PM, "Jason Gustafson"  wrote:

>Sorry, wrong property, I meant enable.auto.commit.
>
>-Jason
>
>On Mon, Jan 11, 2016 at 9:52 AM, Jason Gustafson 
>wrote:
>
>> Hi Howard,
>>
>> The offsets are persisted in the __consumer_offsets topic indefinitely.
>> Since you're using manual commit, have you ensured that
>>auto.offset.reset
>> is disabled? It might also help if you provide a little more detail on
>>how
>> you're verifying that offsets were lost.
>>
>> -Jason
>>
>> On Mon, Jan 11, 2016 at 7:42 AM, Wang, Howard 
>> wrote:
>>
>>> Hi,
>>>
>>> I have a question regarding the Kafka 0.9 Consumer Group . I manually
>>> commit offsets using the  Kafka 0.9 Consumer created with a consumer
>>>group.
>>>
>>> However, after my app restarted totally from scratch, the consumer
>>>group
>>> seems to lose all the offsets. Is that true that the consumer offsets
>>>are
>>> transient and will be gone after the consumer group has no member and
>>>gets
>>> deleted?
>>>
>>> Thanks.
>>>
>>> Howard
>>> --
>>>  Howard Wang
>>> Engineering - Big Data and Personalization
>>> Washington Post Media
>>>
>>> 1150 15th St NW, Washington, DC 20071
>>> p. 202-334-9195
>>> Email: howard.w...@washpost.com
>>>
>>
>>



Listener for Lead broker change for topic partition in the 0.9.* consumer

2016-04-01 Thread Wang, Howard
Hi,

I have a use case where I need to be notified about a change of lead broker 
for my topic partitions. I'm using the new 0.9.0 API. Is there any way of doing 
this in the 0.9.* API?
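As far as I know, the 0.9 consumer exposes no leader-change callback, so one workaround is to poll partitionsFor() periodically and diff the leader ids yourself. A sketch of that idea (the polling approach is my own workaround, not an official listener):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class LeaderWatch {
    private final Map<Integer, Integer> lastLeader = new HashMap<>();

    // Record the current leader for a partition; returns true if it changed
    // since the previous call for that partition.
    static boolean recordLeader(Map<Integer, Integer> seen, int partition, int leaderId) {
        Integer prev = seen.put(partition, leaderId);
        return prev != null && prev != leaderId;
    }

    // Call this on a timer; returns true if any partition's leader moved.
    boolean leadersChanged(KafkaConsumer<?, ?> consumer, String topic) {
        boolean changed = false;
        for (PartitionInfo pi : consumer.partitionsFor(topic)) {
            int leaderId = pi.leader() == null ? -1 : pi.leader().id();
            if (recordLeader(lastLeader, pi.partition(), leaderId)) changed = true;
        }
        return changed;
    }
}
```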

Thanks.

Howard



Using KafkaConsumer to consume the internal topic __consumer_offsets in Kafka 0.10, using Java

2017-02-01 Thread Wang, Howard
Hi,

I have a use case where I need to programmatically consume the internal topic 
__consumer_offsets in Kafka 0.10 for data recovery.

I am able to use the console consumer to consume the internal topic. But I 
wonder if I can use the KafkaConsumer class from my Java program to do the same 
thing, and if so, how?

The console consumer that works for me looks like the following:
echo "exclude.internal.topics=false" > /tmp/consumer.config; ~/tools/kafka_2.11-0.10.0.1/bin/kafka-console-consumer.sh --zookeeper zk- --topic __consumer_offsets --consumer.config /tmp/consumer.config --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"
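In principle the same thing should work from Java: set exclude.internal.topics=false, use byte-array deserializers, and subscribe to __consumer_offsets. Decoding the records still requires the broker's internal GroupMetadataManager format, so this sketch only dumps raw record sizes; the bootstrap address and group id are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetsTopicDump {
    // Null-safe byte-array length helper.
    static int len(byte[] b) {
        return b == null ? 0 : b.length;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "offsets-topic-reader");    // placeholder
        props.put("exclude.internal.topics", "false");    // allow subscribing to internal topics
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("__consumer_offsets"));
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<byte[], byte[]> r : records) {
                // Key/value are in the broker's internal binary format
                // (kafka.coordinator.GroupMetadataManager), not plain strings.
                System.out.printf("partition=%d offset=%d keyBytes=%d valueBytes=%d%n",
                    r.partition(), r.offset(), len(r.key()), len(r.value()));
            }
        }
    }
}
```

To actually interpret the payloads you would still need the broker-side formatter classes (as the console command above uses), since their wire format is not part of the public client API.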


Thanks.

Howard