Re: How to achieve Failover or HighAvailable in SimpleConsumer?
I guess it is called SimpleConsumer for a reason. SimpleConsumer is really simple and does not support any failure recovery. You might need to implement your own logic, and it is probably not trivial. As a reference, the high-level consumer uses a Zookeeper ephemeral path to monitor the liveness of consumers. Also, you might want to take a look at KafkaConsumer in the latest trunk; it is designed to replace most SimpleConsumer use cases. If you are going to implement your own auto-recovery logic, it may be better to use KafkaConsumer instead of SimpleConsumer.

Thanks,
Jiangjie (Becket) Qin

On 7/5/15, 7:37 PM, luo.fucong bayinam...@gmail.com wrote:

Hi all:

The failover or re-balancing support seems to exist only in the High Level Consumer. But we have some special considerations that force us to go with the SimpleConsumer. I googled the problem but there are no answers. When the SimpleConsumer goes down (due to hardware errors or other unexpected causes), what can I do to resume the consuming **automatically**?
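To illustrate the "implement your own logic" suggestion above, here is a minimal retry-with-backoff sketch in generic Java (nothing here is Kafka API code; `fetchWithRetry` and the `Supplier` stand-in for a fetch call are illustrative). A real SimpleConsumer recovery loop would also re-issue a TopicMetadataRequest on each failure to find the new partition leader before reconnecting; that part is elided.

```java
import java.util.function.Supplier;

// Minimal auto-recovery sketch: retry a fetch with linear backoff when the
// connection fails. A real SimpleConsumer version would re-discover the
// partition leader between attempts.
public class RetryingFetcher {

    public static <T> T fetchWithRetry(Supplier<T> fetchOnce, int maxAttempts, long backoffMs)
            throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return fetchOnce.get();            // one fetch attempt
            } catch (RuntimeException e) {
                last = e;                          // e.g. broker down, leader moved
                Thread.sleep(backoffMs * attempt); // linear backoff before retrying
            }
        }
        throw last;                                // give up after maxAttempts
    }

    public static void main(String[] args) throws InterruptedException {
        int[] failures = {2};                      // simulate a broker that fails twice
        String result = fetchWithRetry(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("leader not available");
            return "messages";
        }, 5, 10);
        System.out.println(result);                // prints "messages" after two retries
    }
}
```

The key design point is that each failure is swallowed and retried up to a bound, rather than propagating and killing the consuming thread.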
Sending a list of ProducerRecords
Hi,

I am currently attempting to use v0.8.2 and write my own producer, and I was wondering if it is still possible/beneficial to send a list of ProducerRecords as opposed to only a single one at a time? Examining the API, it seems that org.apache.kafka.clients.producer.KafkaProducer does not have the ability to send multiple ProducerRecords in one send call. However, with the producer under javaapi, it's possible to send a list of KeyedMessages. Can someone explain the difference or point me in the right direction? Much appreciated.

--
Jiefu Gong
University of California, Berkeley | Class of 2017
B.A Computer Science | College of Letters and Sciences
jg...@berkeley.edu elise...@berkeley.edu | (925) 400-3427
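For context on the question above: the new producer's send() takes a single record, but it is asynchronous and batches records internally (governed by settings such as batch.size and linger.ms), so looping over a list is the intended pattern. A generic sketch of that loop (`sendAll` and the `Function` standing in for `producer::send` are illustrative helpers, not part of the Kafka API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch: "sending a list" with the new producer is just a loop, because each
// send(record) call returns immediately and the client batches behind the
// scenes. 'send' stands in for producer::send; in real code F would be
// Future<RecordMetadata>, which the caller can block on for delivery.
public class BatchSend {

    public static <R, F> List<F> sendAll(List<R> records, Function<R, F> send) {
        List<F> futures = new ArrayList<>();
        for (R record : records) {
            futures.add(send.apply(record));  // non-blocking per-record call
        }
        return futures;                       // caller may wait on these later
    }

    public static void main(String[] args) {
        // Stand-ins for a list of ProducerRecords and an async send function.
        List<String> records = List.of("a", "bb", "ccc");
        List<Integer> acks = sendAll(records, String::length);
        System.out.println(acks);             // [1, 2, 3]
    }
}
```

In other words, the old javaapi batch-send convenience was dropped because batching moved inside the client; a per-record loop does not mean one network round-trip per record.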
Re: Origin of product name Kafka
I just found the answer on Quora: http://www.quora.com/What-is-the-relation-between-Kafka-the-writer-and-Apache-Kafka-the-distributed-messaging-system

On July 6, 2015, at 8:30 PM, jakob.vollenwei...@bkw.ch wrote:

Hi Admin,

I'm just wondering where the product name Kafka originated, particularly since the general mood in Kafka's famous novels is often marked by a senseless, disorienting, often menacing complexity.

Thanks,
Jakob
Re: Origin of product name Kafka
Nice :) I always thought it was a reference to the Kafkaesque process of building data pipelines in a large organization :)

On Mon, Jul 6, 2015 at 6:52 PM, luo.fucong bayinam...@gmail.com wrote:

I just found the answer on Quora: http://www.quora.com/What-is-the-relation-between-Kafka-the-writer-and-Apache-Kafka-the-distributed-messaging-system

On July 6, 2015, at 8:30 PM, jakob.vollenwei...@bkw.ch wrote:

Hi Admin,

I'm just wondering where the product name Kafka originated, particularly since the general mood in Kafka's famous novels is often marked by a senseless, disorienting, often menacing complexity.

Thanks,
Jakob
Re: Origin of product name Kafka
:) Maybe we should put this into the Kafka FAQ?

On 7/6/15, 7:25 PM, Gwen Shapira gshap...@cloudera.com wrote:

Nice :) I always thought it was a reference to the Kafkaesque process of building data pipelines in a large organization :)

On Mon, Jul 6, 2015 at 6:52 PM, luo.fucong bayinam...@gmail.com wrote:

I just found the answer on Quora: http://www.quora.com/What-is-the-relation-between-Kafka-the-writer-and-Apache-Kafka-the-distributed-messaging-system

On July 6, 2015, at 8:30 PM, jakob.vollenwei...@bkw.ch wrote:

Hi Admin,

I'm just wondering where the product name Kafka originated, particularly since the general mood in Kafka's famous novels is often marked by a senseless, disorienting, often menacing complexity.

Thanks,
Jakob
Re: retention.ms is not respected after upgrade to 0.8.2
I can't see anything obviously wrong in those configs or the code (after just a brief look). Are you sure the system on which you are running Kafka has its date/time set correctly?

-Jaikiran

On Monday 29 June 2015 12:06 PM, Krzysztof Zarzycki wrote:

Greetings! I have a problem with Kafka. I had a cluster of 3 brokers in version 0.8.1, with a very important topic of raw events that had the config retention.ms={365 days in ms}. It all worked fine, and data was not being deleted. But now I have upgraded all brokers to 0.8.2 and suddenly the brokers delete the data! They don't respect retention.ms. I have no other retention settings in place: the global log.retention.hours is at the default 168, and log.retention.bytes is not set.

Some more info:

1. I tried looking into the ZK config and it looks fine:

$ get /kafka081/config/topics/my_topic
{"version":1,"config":{"retention.ms":"3153600"}}
cZxid = 0xa0006412c
ctime = Tue Mar 31 15:02:20 CEST 2015
mZxid = 0x116bb7
mtime = Fri Jun 26 22:28:40 CEST 2015
pZxid = 0xa0006412c
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 53
numChildren = 0

2. I tried to overwrite retention.ms once again for my topic. It didn't help.

3. I looked into the logs of the broker and found that it does indeed delete the data, but it doesn't print *why* (based on what rule) it deletes it:

[2015-06-29 07:35:40,861] INFO Deleting segment 89226232 from log my_topic-1. (kafka.log.Log)
[2015-06-29 07:35:40,993] INFO Deleting index /var/lib/kafka/kafka-logs-1/my_topic-1/89226232.index.deleted (kafka.log.OffsetIndex)

Please help me, I have no idea what to do about it. Any hint on at least how to debug the problem would be great!

Cheers,
Krzysztof Zarzycki
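One thing worth double-checking in retention threads like this is the unit arithmetic, since retention.ms is specified in milliseconds: 365 days should come out near 3.15e10. Whether or not that is the issue here, a quick generic check (illustrative Java, not from the thread):

```java
// retention.ms is interpreted in milliseconds. Unit check: 365 days is
// 31,536,000,000 ms, while a value like 3153600 ms is under an hour, so a
// dropped factor of 1000 would make the broker delete segments aggressively.
public class RetentionMath {

    public static long daysToMs(long days) {
        return days * 24L * 60L * 60L * 1000L;    // days -> hours -> minutes -> seconds -> ms
    }

    public static long msToMinutes(long ms) {
        return ms / 60_000L;                      // whole minutes
    }

    public static void main(String[] args) {
        System.out.println(daysToMs(365));        // prints 31536000000
        System.out.println(msToMinutes(3153600)); // prints 52
    }
}
```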
Origin of product name Kafka
Hi Admin,

I'm just wondering where the product name Kafka originated, particularly since the general mood in Kafka's famous novels is often marked by a senseless, disorienting, often menacing complexity.

Thanks,
Jakob