Dealing with large messages

2015-10-05 Thread Pradeep Gollakota
Fellow Kafkaers, We have a pretty heavyweight legacy event logging system for batch processing. We're now sending the events into Kafka for realtime analytics. But we have some pretty large messages (> 40 MB). I'm wondering if any of you have use cases where you have to send large messages

Re: Dealing with large messages

2015-10-05 Thread Rahul Jain
In addition to the config changes mentioned in that post, you may also have to change the producer config if you are using the new producer. Specifically, *max.request.size* and *request.timeout.ms* have to be increased to allow the producer to send large messages.
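A minimal sketch of what Rahul describes, assuming a local broker at `localhost:9092` and a message size budget of ~50 MB (both are illustrative assumptions, not values from the thread). The config keys are the standard new-producer settings; note that the broker (`message.max.bytes`) and consumers (fetch sizes) must be raised to match, as the linked article discusses.

```java
import java.util.Properties;

public class LargeMessageProducerConfig {
    public static Properties buildProps() {
        Properties props = new Properties();
        // Assumed broker address; replace with your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        // Allow requests up to ~50 MB so a 40 MB message fits (default is ~1 MB).
        props.put("max.request.size", Integer.toString(50 * 1024 * 1024));
        // Give the broker more time to handle a large request (default is 30 s).
        props.put("request.timeout.ms", "120000");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }
}
```

These `Properties` would then be passed to a `KafkaProducer` constructor; the broker-side limits still govern whether the record is accepted.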

Re: Dealing with large messages

2015-10-05 Thread James Cheng
Here’s an article that Gwen wrote earlier this year on handling large messages in Kafka: http://ingest.tips/2015/01/21/handling-large-messages-kafka/ -James

Re: Experiences with corrupted messages

2015-10-05 Thread Alexey Sverdelov
Hi Marina, this is how I "fixed" this problem: http://stackoverflow.com/questions/32904383/apache-kafka-with-high-level-consumer-skip-corrupted-messages/32945841 This is a workaround, and I hope it will be fixed in one of the next Kafka releases. Have a nice day, Alexey

Re: Offset rollover/overflow?

2015-10-05 Thread Grant Henke
I can't be sure of how every client will handle it, it is probably not likely, and there could potentially be unforeseen issues. That said, given that offsets are stored in a (signed) Long, I would suspect that it would roll over to negative values and increment from there. That means instead of
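The wraparound Grant suspects is just Java's two's-complement arithmetic on `long`; a tiny illustration (plain arithmetic, no Kafka API involved):

```java
public class OffsetRollover {
    // Incrementing past Long.MAX_VALUE silently wraps to Long.MIN_VALUE
    // in Java's two's-complement arithmetic; no exception is thrown.
    public static long next(long offset) {
        return offset + 1;
    }

    public static void main(String[] args) {
        long max = Long.MAX_VALUE;      // 9223372036854775807
        System.out.println(next(max));  // prints -9223372036854775808
    }
}
```

So a client that blindly increments its offset counter would start seeing negative offsets rather than failing fast.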

How to verify offsets topic exists?

2015-10-05 Thread Stevo Slavić
Hello Apache Kafka community, In my integration tests, with a single 0.8.2.2 broker, for a newly created topic with a single partition, after determining through a topic metadata request that the partition has a lead broker assigned, when I try to reset the offset for a given consumer group, I first try to discover

Re: How to verify offsets topic exists?

2015-10-05 Thread Grant Henke
Hi Stevo, There are a couple of options to verify the topic exists: 1. Consume from a topic with "offsets.storage=kafka". If it's not already created, this should create it. 2. List and describe the topic using the Kafka topics script. Ex: bin/kafka-topics.sh --zookeeper localhost:2181
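A sketch of Grant's second option, assuming ZooKeeper is reachable at localhost:2181 and the default internal offsets topic name __consumer_offsets (which is what the broker creates when offsets.storage=kafka on 0.8.2); these commands only read metadata against a running cluster:

```shell
# List all topics; the internal offsets topic appears as __consumer_offsets
bin/kafka-topics.sh --zookeeper localhost:2181 --list

# Describe it to confirm partition count and that leaders are assigned
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
```

If the --describe output is empty, the topic has not been created yet, which matches option 1: a consume with offsets.storage=kafka should trigger its creation.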