Re: LeaderNotAvailableException

2014-08-13 Thread Ryan Williams
Ok, will do. Still with 0.8.1 on the same instance; after being reset it has been running for about 48 hours now without a recurrence yet.

On Wed, Aug 13, 2014 at 10:20 PM, Neha Narkhede wrote:
> Due to KAFKA-1393, the server probably never ended up completely creating
> the replicas. Let us know how 0.8.1.1 goes. [...]

Re: LeaderNotAvailableException

2014-08-13 Thread Neha Narkhede
Due to KAFKA-1393, the server probably never ended up completely creating the replicas. Let us know how 0.8.1.1 goes.

Thanks,
Neha

On Tue, Aug 12, 2014 at 10:12 AM, Ryan Williams wrote:
> Using version 0.8.1.
>
> Looking to update to 0.8.1.1 now probably.
>
> On Tue, Aug 12, 2014 at 9:25 AM, [...]

RE: Blocking Recursive parsing from kafka.consumer.TopicCount$.constructTopicCount

2014-08-13 Thread Jagbir Hooda
Hi Jun,

The parser is being used by kafka/core/src/main/scala/kafka/consumer/TopicCount.scala:56. As per your suggestion I've filed the JIRA: https://issues.apache.org/jira/browse/KAFKA-1595

Thanks for looking into it.
jsh

> Date: Wed, 13 Aug 2014 08:22:22 -0700
> Subject: Re: Blocking Recursive parsing from kafka.consumer.TopicCount$.constructTopicCount [...]

Re: Strange topic-corruption issue?

2014-08-13 Thread Steve Miller
Sure. I ran:

/opt/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files .log --deep-iteration

and got (in addition to the same non-consecutive offsets error):

[ ... ] offset: 1320 position: 344293 isvalid: true payloadsize: 208 magic: 0 compresscodec: N [...]

Re: Using the kafka dissector in wireshark/tshark 1.12

2014-08-13 Thread Neha Narkhede
Thanks for sharing this, Steve!

On Tue, Aug 12, 2014 at 11:03 AM, Steve Miller wrote:
> I'd seen references to there being a Kafka protocol dissector built
> into wireshark/tshark 1.12, but what I could find on that was a bit light
> on the specifics as to how to get it to do anything -- at [...]

Re: Correct way to handle ConsumerTimeoutException

2014-08-13 Thread Neha Narkhede
> I am using consumer.timeout.ms to force the consumer to jump out of the
> hasNext call, which will throw ConsumerTimeoutException.

Yes, this is the downside of the blocking iterator approach. If you want to pull data in batches and process messages, the iterator is not the best API, as it can block at any time [...]
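The blocking-iterator trade-off described above can be illustrated without Kafka at all. The sketch below is a hypothetical stand-in in plain Python (not the Kafka API): a queue-backed iterator whose `next()` blocks until a message arrives or a timeout elapses, roughly analogous to the behavior that `consumer.timeout.ms` and `ConsumerTimeoutException` give the high-level consumer. All names here (`TimeoutIterator`, `drain`) are invented for illustration.

```python
import queue

class TimeoutIterator:
    """Illustrative stand-in for a blocking consumer iterator:
    next() blocks until a message arrives or the timeout elapses."""

    class Timeout(Exception):
        """Raised when no message arrives in time (analogous to
        Kafka's ConsumerTimeoutException)."""

    def __init__(self, timeout_secs):
        self._q = queue.Queue()
        self._timeout = timeout_secs

    def put(self, msg):
        self._q.put(msg)

    def next(self):
        try:
            return self._q.get(timeout=self._timeout)
        except queue.Empty:
            raise TimeoutIterator.Timeout()

def drain(it):
    """Consume until the iterator times out, then return what we got.
    Without the timeout, the final next() would block forever."""
    out = []
    while True:
        try:
            out.append(it.next())
        except TimeoutIterator.Timeout:
            return out

it = TimeoutIterator(timeout_secs=0.05)
for m in ("a", "b", "c"):
    it.put(m)
print(drain(it))  # → ['a', 'b', 'c']
```

The timeout turns an unbounded block into a bounded one, which is exactly why it is used to "jump out" of `hasNext` -- at the cost of treating the timeout as a control-flow signal rather than an error.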

Re: Most common kafka client consumer implementations?

2014-08-13 Thread Neha Narkhede
Option 1 would take a throughput hit, as you are trying to commit one message at a time. Option 2 is pretty widely used at LinkedIn, and I am pretty sure at several other places as well. Option 3 is essentially what the high-level consumer does under the covers already. It prefetches data in batches from [...]

Re: Consuming messages from Kafka and pushing on to a JMS queue

2014-08-13 Thread Neha Narkhede
The power of the consumption APIs and the general performance offered by Kafka are only useful if you can send the data to Kafka and use the consumer APIs. Apache Storm will not solve the problem if you are trying to avoid using the Kafka consumer APIs. I would rethink the architecture you currently [...]

Re: Blocking Recursive parsing from kafka.consumer.TopicCount$.constructTopicCount

2014-08-13 Thread Jun Rao
> Are you using Scala JSON in your consumer application?

Yes, we probably need to move off Scala JSON since it's being deprecated. Could you file a JIRA and put the link there?

Thanks,
Jun

On Tue, Aug 12, 2014 at 11:14 PM, Jagbir Hooda wrote:
> > Date: Tue, 12 Aug 2014 16:35:35 -0700
> > Sub [...]

Re: Strange topic-corruption issue?

2014-08-13 Thread Jun Rao
Interesting, could you run DumpLogSegments with and without --deep-iteration and send the output around offset 1327?

Thanks,
Jun

On Tue, Aug 12, 2014 at 5:42 PM, Steve Miller wrote:
> [ "Aha!", you say, "now I know why this guy's been doing so much tshark
> stuff!" (-: ]
>
> Hi. I'm running int [...]

Consuming messages from Kafka and pushing on to a JMS queue

2014-08-13 Thread Andrew Longwill
Hi,

We have an application that currently uses Camel to forward a JMS message from HornetQ to multiple consumers (JMS queues). The one challenge we have is handling failure of one of the consumers. Kafka seems to provide a solution here by allowing different consumers to keep track of their own offsets [...]
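The property being relied on here -- each consumer owns its own offset into a shared log, so a failed or slow consumer never holds up the others -- can be sketched in a few lines. This is a hypothetical illustration in plain Python (the `Consumer` class and `poll` method are invented for the sketch, not Kafka's API):

```python
# Stand-in for one partition's log: an append-only list shared by all readers.
LOG = ["m1", "m2", "m3", "m4"]

class Consumer:
    """Each consumer tracks its own position; readers are independent."""
    def __init__(self, name):
        self.name = name
        self.offset = 0      # this consumer's private position in the log
        self.seen = []

    def poll(self, max_records):
        batch = LOG[self.offset:self.offset + max_records]
        self.seen.extend(batch)
        self.offset += len(batch)
        return batch

fast, slow = Consumer("fast"), Consumer("slow")
fast.poll(4)   # fast consumer reads everything immediately
slow.poll(1)   # slow (or recovering) consumer has only read m1 so far
slow.poll(10)  # ...and later resumes from its own offset, missing nothing
print(fast.seen == slow.seen)  # → True: both eventually see every message
```

This is the contrast with a shared JMS queue, where a message delivered to one consumer is gone for the others and a stuck consumer can require broker-side redelivery handling.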

Re: Most common kafka client consumer implementations?

2014-08-13 Thread Anand Nalya
Hi Jim,

In one of the applications, we implemented option #1:

messageList = getNext(1000)
process(messageList)
commit()

In case of failure, this resulted in duplicate processing of at most 1000 records per partition.

Regards,
Anand

On 1 August 2014 20:35, Jim wrote:
> Thanks Guozhang, [...]
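The "at most 1000 duplicates per partition" bound follows from committing only after a whole batch is processed: a crash between process and commit makes the restart re-read exactly the uncommitted batch. A minimal simulation of that failure mode, in plain Python with invented names (`get_next`, `run` are stand-ins, not Kafka calls):

```python
# Illustrative sketch (not the Kafka API): fetch in batches, commit after
# processing. A crash after process() but before commit() means a restart
# re-reads from the last committed offset -- at most one batch is redone.

LOG = list(range(2500))   # stand-in for one partition's log
BATCH = 1000

def get_next(offset, n):
    return LOG[offset:offset + n]

def run(committed, crash_before_commit_on_batch=None):
    """Process from the committed offset; optionally crash once."""
    processed = []
    offset = committed
    batch_no = 0
    while True:
        batch = get_next(offset, BATCH)
        if not batch:
            return committed, processed
        processed.extend(batch)           # process(messageList)
        if batch_no == crash_before_commit_on_batch:
            return committed, processed   # crash: commit never happens
        offset += len(batch)
        committed = offset                # commit()
        batch_no += 1

# First run crashes after processing batch 1 (offsets 1000-1999) but before
# committing it; the restarted run re-reads exactly that batch.
committed, first = run(0, crash_before_commit_on_batch=1)
_, second = run(committed)
duplicates = set(first) & set(second)
print(len(duplicates))  # → 1000: one batch reprocessed, never more
```

Shrinking BATCH shrinks the duplicate window toward option 1's per-message commit, trading throughput for fewer duplicates, which is the trade-off discussed in this thread.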