Message Timeout

2014-06-27 Thread Klaus Schaefers
Hi, I am new to Kafka and I have a question about how Kafka handles scenarios where no consumer is available. Can I configure Kafka in such a way that messages will be dropped after x seconds? Otherwise I would be afraid that the queues would overflow... Cheers, Klaus -- Klaus Schaefers

Re: Message Timeout

2014-06-27 Thread cac...@gmail.com
Message retention in Kafka is disconnected from message consumption. Messages are all persisted to disk and, unlike some other systems, the queues do not need to fit in RAM. There are configuration values that control the maximum log size in MB and the duration of retention, which is typically
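As a rough illustration of the settings being referred to, a minimal server.properties excerpt (property names are from the 0.8-era broker config; the values are examples, not recommendations):

```properties
# Keep log segments for up to 7 days...
log.retention.hours=168
# ...or until a partition's log grows past ~1 GB, whichever limit is hit first.
log.retention.bytes=1073741824
```

Older data is deleted regardless of whether any consumer has read it, so an idle or absent consumer cannot make the log grow without bound.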

Re: Question on message content, compression, multiple messages per kafka message?

2014-06-27 Thread Chris Hogue
As a data point from one system: while Snappy compression is significantly better than gzip, for our system it wasn't enough to offset the decompress/compress on the broker. No matter how fast the compression is, doing it on the broker will always be slower than not doing it. We went the route the
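For context, end-to-end compression in the 0.8-era producer is turned on with a producer property; a minimal sketch (the codec and topic names are examples):

```properties
# Producer config: compress message batches before sending.
compression.codec=snappy
# Optionally compress only specific topics (comma-separated list).
compressed.topics=events,logs
```

The trade-off discussed above is that the broker still decompresses and recompresses those batches when assigning offsets, which is the cost the poster measured.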

Re: Experiences with larger message sizes

2014-06-27 Thread Luke Forehand
I am using gzip compression. Too big is really difficult to define because it always depends (for example, on what your hardware can handle), but I would say no more than a few megabytes. Having said that, we are still successfully using 50MB messages in production for some things, but it comes at a
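As a sketch of the knobs that typically need to move together for multi-megabyte messages (property names are from the 0.8-era configs; the 50MB value just mirrors the figure above):

```properties
# Broker (server.properties): largest message the broker will accept,
# and replica fetches must be at least as large so replication keeps working.
message.max.bytes=52428800
replica.fetch.max.bytes=52428800

# Consumer (consumer.properties): must be at least the size of the largest message.
fetch.message.max.bytes=52428800
```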

Re: Apache Kafka NYC Users Group!

2014-06-27 Thread Jay Kreps
This is great! -Jay On Thu, Jun 26, 2014 at 5:47 PM, Joe Stein joe.st...@stealth.ly wrote: Hi folks, I just started a new Meetup specifically for Apache Kafka in NYC (everyone is welcome of course) http://www.meetup.com/Apache-Kafka-NYC/ For the last couple of years we have been piggy

Re: Message Timeout

2014-06-27 Thread Neha Narkhede
You can control retention using log.retention.hours, log.retention.minutes or log.retention.bytes. On Fri, Jun 27, 2014 at 2:06 AM, cac...@gmail.com cac...@gmail.com wrote: Message retention in Kafka is disconnected from message consumption. Messages are all persisted to disk and the queues

RE: Failed to send messages after 3 tries

2014-06-27 Thread England, Michael
Neha, Apologies for the slow response. I was out yesterday. To answer your questions -- Is the LeaderNotAvailableException repeatable? Yes. It happens whenever I send a message to that topic. -- Are you running Kafka in the cloud? No. Does this problem indicate that the topic is corrupt?

Partition reassign Kafka 0.8.1.1

2014-06-27 Thread Kashyap Mhaisekar
Hi, I was testing out Kafka 0.8.1.1 and found that I get the following exception during partition re-assignment: ./kafka-reassign-partitions.sh --path-to-json-file ritest.json --zookeeper localhost:2181 Partitions reassignment failed due to Partition reassignment currently in progress for

Re: Partition reassign Kafka 0.8.1.1

2014-06-27 Thread Guozhang Wang
Hi Kashyap, If a previous reassignment is still ongoing, the current one cannot proceed. Did you trigger another reassignment before this one? Guozhang On Fri, Jun 27, 2014 at 10:05 AM, Kashyap Mhaisekar kashya...@gmail.com wrote: Hi, I was testing out Kafka 0.8.1.1 and found that I get the

Re: Partition reassign Kafka 0.8.1.1

2014-06-27 Thread Neha Narkhede
Partitions reassignment failed due to Partition reassignment currently in progress for Map(). The map is empty, so this seems like a bug. Can you please file a JIRA and also attach your server's log4j logs to it? On Fri, Jun 27, 2014 at 10:05 AM, Kashyap Mhaisekar kashya...@gmail.com wrote: Hi,

Re: Failed to send messages after 3 tries

2014-06-27 Thread Neha Narkhede
I'm not so sure what is causing those exceptions. When you send data, do you see any errors in the server logs? Could you send it around? On Fri, Jun 27, 2014 at 10:00 AM, England, Michael mengl...@homeadvisor.com wrote: Neha, Apologies for the slow response. I was out yesterday. To

Re: message stuck, possible problem setting fetch.message.max.bytes

2014-06-27 Thread Neha Narkhede
"but I found one message (5.1MB in size) which is clogging my pipeline up" -- Have you ensured that the fetch.message.max.bytes on the consumer config is set to 5.1 MB? On Thu, Jun 26, 2014 at 6:14 PM, Louis Clark sfgypsy...@gmail.com wrote: in the consumer.properties file, I've got (default?):

Re: message stuck, possible problem setting fetch.message.max.bytes

2014-06-27 Thread Louis Clark
I believe so. I have set fetch.message.max.bytes=10485760 in both the consumer.properties and the server.properties config files, then restarted Kafka - same problem. I'm following up on some of Guozhang's other suggestions now. One thing I'm confused about (I should read the docs again) is

Re: Intercept broker operation in Kafka

2014-06-27 Thread Jay Kreps
Hey Ravi, I think what you want is available via log4j and JMX. Log4j is pluggable, so you can plug in any Java code you want at runtime to handle the log events. JMX can be called in any way you like too. -Jay On Mon, Jun 23, 2014 at 11:51 PM, ravi singh rrs120...@gmail.com wrote: Primarily we
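To make the log4j route concrete, a minimal sketch of a custom appender that could be attached to the broker's logging; the class name and the logger-name filter are hypothetical, and it assumes the log4j 1.x API that Kafka 0.8 ships with:

```scala
import org.apache.log4j.AppenderSkeleton
import org.apache.log4j.spi.LoggingEvent

// Hypothetical appender: receives every log event the broker emits.
class BrokerEventAppender extends AppenderSkeleton {
  override def append(event: LoggingEvent): Unit = {
    // React only to state-change events (the logger-name check is an assumption).
    if (event.getLoggerName.contains("state.change"))
      println(s"broker state change: ${event.getRenderedMessage}")
  }
  override def close(): Unit = {}
  override def requiresLayout(): Boolean = false
}
```

The class would then be wired into the broker's log4j.properties like any other appender.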

Re: Apache Kafka NYC Users Group!

2014-06-27 Thread Jude
Excellent decision. Sent from my iPhone On Jun 27, 2014, at 12:06 PM, Jay Kreps jay.kr...@gmail.com wrote: This is great! -Jay On Thu, Jun 26, 2014 at 5:47 PM, Joe Stein joe.st...@stealth.ly wrote: Hi folks, I just started a new Meetup specifically for Apache Kafka in NYC (everyone

RE: Failed to send messages after 3 tries

2014-06-27 Thread England, Michael
Neha, In state-change.log I see lots of logging from when I last started up Kafka, and nothing after that. I do see a bunch of errors of the form: [2014-06-25 13:21:37,124] ERROR Controller 1 epoch 11 initiated state change for partition [lead.indexer,37] from OfflinePartition to

kafka 0.8.1.1 log.retention.minutes NOT being honored

2014-06-27 Thread Virendra Pratap Singh
Running a mixed 2-broker cluster. Mixed as in broker1 is running 0.8.0 and broker2 is running 0.8.1.1 (from the Apache release link, directly using the tarball, no local build). I have set log.retention.minutes=10. However, the broker is not honoring the setting. I see it's not

Re: message stuck, possible problem setting fetch.message.max.bytes

2014-06-27 Thread Louis Clark
Thanks for the help. For others who happen upon this thread, the problem was indeed on the consumer side. Spark (0.9.1) needs a bit of help setting the Kafka properties for big messages. // setup Kafka with manual parameters to allow big messaging //see
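For anyone hitting the same problem, a sketch of what passing the consumer properties through Spark Streaming's Kafka receiver might look like (Spark 0.9-era API; the topic, group id, and addresses are placeholders):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BigMessageConsumer {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("big-messages").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Pass the consumer properties explicitly so fetch.message.max.bytes is
    // large enough for the ~5 MB messages (10 MB here).
    val kafkaParams = Map(
      "zookeeper.connect"       -> "localhost:2181",
      "group.id"                -> "big-message-group",
      "fetch.message.max.bytes" -> "10485760")

    val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Map("mytopic" -> 1), StorageLevel.MEMORY_AND_DISK_SER)

    stream.map(_._2).print()   // just show the message payloads
    ssc.start()
    ssc.awaitTermination()
  }
}
```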

kafka producer pulling from custom restAPI

2014-06-27 Thread sonali.parthasarathy
Hi, I have a quick question. Say I have a custom REST API with data in JSON format. Can the Kafka Producer read directly from the REST API and write to a topic? If so, can you point me to the code base? Thanks, Sonali -- Sonali Parthasarathy, R&D Developer, Data Insights, Accenture Technology Labs

Re: kafka producer pulling from custom restAPI

2014-06-27 Thread Steve Morin
The answer is no, it doesn't work that way. You would have to write a back-end process that consumes from the REST API and writes the data to Kafka On Jun 27, 2014, at 19:35, sonali.parthasara...@accenture.com wrote: Hi, I have a quick question. Say I have a custom REST API with data in
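To make that concrete, a rough sketch of such a bridge process using the 0.8 Scala producer API; the endpoint URL, topic name, and polling interval are all placeholders:

```scala
import java.util.Properties
import scala.io.Source
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object RestToKafkaBridge {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val producer = new Producer[String, String](new ProducerConfig(props))

    // Poll the REST endpoint and forward each JSON payload to a topic.
    while (true) {
      val json = Source.fromURL("http://example.com/api/data").mkString
      producer.send(new KeyedMessage[String, String]("rest-data", json))
      Thread.sleep(5000)
    }
  }
}
```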