Re: NotEnoughReplication

2016-12-12 Thread Mohit Anchlia
It looks like replicas never catch up even when there is no load. Am I
missing something?

On Sat, Dec 10, 2016 at 8:09 PM, Mohit Anchlia 
wrote:

> Does Kafka automatically replicate the under replicated partitions?
>
> I looked at these metrics through jmxterm; the value of
> UnderReplicatedPartitions came out to be 0. What are the additional places
> or metrics to look at? There seems to be a lack of documentation on Kafka
> administration when it comes to situations like these.
>
> On Sat, Dec 10, 2016 at 6:49 PM, Ewen Cheslack-Postava 
> wrote:
>
>> This error doesn't necessarily mean that a broker is down, it can also
>> mean
>> that too many replicas for that topic partition have fallen behind the
>> leader. This indicates your replication is lagging for some reason.
>>
>> You'll want to be monitoring some of the metrics listed here:
>> http://kafka.apache.org/documentation.html#monitoring to help you
>> understand a) when this occurs (e.g. # of under replicated partitions
>> being
>> a critical one) and b) what the cause might be (e.g. a saturated network,
>> request processing slowed by some other resource contention, etc.).
>>
>> -Ewen
>>
>> On Fri, Dec 9, 2016 at 5:20 PM, Mohit Anchlia 
>> wrote:
>>
>> > What's the best way to fix NotEnoughReplication given all the nodes are
>> up
>> > and running? Zookeeper did go down momentarily. We are on Kafka 0.10
>> >
>> > org.apache.kafka.common.errors.NotEnoughReplicasException: Number of
>> > insync
>> > replicas for partition [__consumer_offsets,20] is [1], below required
>> > minimum [2]
>> >
>>
>>
>>
>> --
>> Thanks,
>> Ewen
>>
>
>
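For anyone hitting this later: the UnderReplicatedPartitions gauge read through jmxterm above can also be polled programmatically with the JDK's own JMX client. A minimal sketch, assuming a broker started with remote JMX enabled on port 9999 (host and port are assumptions; the MBean name is the one from the monitoring docs):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UnderReplicatedCheck {
    // Broker metric listed at kafka.apache.org/documentation.html#monitoring;
    // a non-zero value means some partitions are missing in-sync replicas.
    static final String MBEAN =
        "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions";

    static int underReplicated(String host, int jmxPort) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + jmxPort + "/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            return ((Number) conn.getAttribute(
                new ObjectName(MBEAN), "Value")).intValue();
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumes the broker was started with JMX_PORT=9999 (hypothetical setup).
        System.out.println("UnderReplicatedPartitions = "
            + underReplicated("localhost", 9999));
    }
}
```

Polling this in a loop (or from an alerting system) answers the "when does it occur" half of Ewen's advice without a UI tool.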


Re: NotEnoughReplication

2016-12-10 Thread Mohit Anchlia
Does Kafka automatically replicate the under replicated partitions?

I looked at these metrics through jmxterm; the value of
UnderReplicatedPartitions came out to be 0. What are the additional places
or metrics to look at? There seems to be a lack of documentation on Kafka
administration when it comes to situations like these.

On Sat, Dec 10, 2016 at 6:49 PM, Ewen Cheslack-Postava 
wrote:

> This error doesn't necessarily mean that a broker is down, it can also mean
> that too many replicas for that topic partition have fallen behind the
> leader. This indicates your replication is lagging for some reason.
>
> You'll want to be monitoring some of the metrics listed here:
> http://kafka.apache.org/documentation.html#monitoring to help you
> understand a) when this occurs (e.g. # of under replicated partitions being
> a critical one) and b) what the cause might be (e.g. a saturated network,
> request processing slowed by some other resource contention, etc.).
>
> -Ewen
>
> On Fri, Dec 9, 2016 at 5:20 PM, Mohit Anchlia 
> wrote:
>
> > What's the best way to fix NotEnoughReplication given all the nodes are
> up
> > and running? Zookeeper did go down momentarily. We are on Kafka 0.10
> >
> > org.apache.kafka.common.errors.NotEnoughReplicasException: Number of
> > insync
> > replicas for partition [__consumer_offsets,20] is [1], below required
> > minimum [2]
> >
>
>
>
> --
> Thanks,
> Ewen
>


NotEnoughReplication

2016-12-09 Thread Mohit Anchlia
What's the best way to fix NotEnoughReplication given all the nodes are up
and running? Zookeeper did go down momentarily. We are on Kafka 0.10

org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync
replicas for partition [__consumer_offsets,20] is [1], below required
minimum [2]
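For readers, the rule behind this exception: with acks=all, the broker rejects a produce request while the partition's in-sync replica count is below min.insync.replicas; here [__consumer_offsets,20] had 1 in-sync replica against a minimum of 2. A hypothetical, dependency-free illustration of that rule (not Kafka's actual code):

```java
public class MinIsrRule {
    // Hypothetical model of the NotEnoughReplicasException condition:
    // with acks=all, writes are rejected while |ISR| < min.insync.replicas.
    static boolean produceAllowed(int inSyncReplicas, int minInsyncReplicas,
                                  boolean acksAll) {
        return !acksAll || inSyncReplicas >= minInsyncReplicas;
    }

    public static void main(String[] args) {
        // The situation in the error message: 1 in-sync replica, minimum 2.
        System.out.println(produceAllowed(1, 2, true));  // false -> NotEnoughReplicasException
        // Once the lagging replica rejoins the ISR, produces succeed again.
        System.out.println(produceAllowed(2, 2, true));  // true
    }
}
```

So "fixing" the error means getting the lagging replica back into the ISR (or, at the cost of durability, lowering min.insync.replicas), not restarting the producer.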


Re: Consumer poll - no results

2016-12-07 Thread Mohit Anchlia
Is auto.offset.reset honored only the first time the consumer starts
polling? In other words, does the consumer start from the beginning every
time it starts, even if it has already read those messages?

On Wed, Dec 7, 2016 at 1:43 AM, Harald Kirsch 
wrote:

> Have you defined
>
> auto.offset.reset: earliest
>
> or otherwise made sure (KafkaConsumer.position()) that the consumer does
> not just wait for *new* messages to arrive?
>
> Harald.
>
>
>
> On 06.12.2016 20:11, Mohit Anchlia wrote:
>
>> I see this message in the logs:
>>
>> [2016-12-06 13:54:16,586] INFO [GroupCoordinator 0]: Preparing to
>> restabilize group DemoConsumer with old generation 3
>> (kafka.coordinator.GroupCoordinator)
>>
>>
>>
>> On Tue, Dec 6, 2016 at 10:53 AM, Mohit Anchlia 
>> wrote:
>>
>> I have a consumer polling a topic on Kafka 0.10. Even though the topic has
>>> messages, the consumer poll is not fetching them. The thread dump
>>> reveals:
>>>
>>> "main" #1 prio=5 os_prio=0 tid=0x7f3ba4008800 nid=0x798 runnable
>>> [0x7f3baa6c3000]
>>>java.lang.Thread.State: RUNNABLE
>>> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>>> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>>> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.
>>> java:93)
>>> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>>> - locked <0x0006c6d1f8b8> (a sun.nio.ch.Util$3)
>>> - locked <0x0006c6d1f8a8> (a java.util.Collections$
>>> UnmodifiableSet)
>>> - locked <0x0006c6d1f0b8> (a sun.nio.ch.EPollSelectorImpl)
>>> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>>> at org.apache.kafka.common.network.Selector.select(
>>> Selector.java:470)
>>> at org.apache.kafka.common.network.Selector.poll(
>>> Selector.java:286)
>>> at org.apache.kafka.clients.NetworkClient.poll(
>>> NetworkClient.java:260)
>>> at org.apache.kafka.clients.consumer.internals.
>>> ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
>>> - locked <0x0006c6d1eff8> (a org.apache.kafka.clients.
>>> consumer.internals.ConsumerNetworkClient)
>>> at org.apache.kafka.clients.consumer.KafkaConsumer.
>>> pollOnce(KafkaConsumer.java:1031)
>>>
>>>
>>
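To the question above: auto.offset.reset is consulted only when the group has no committed offset for a partition (first start, or a committed offset that is out of range). Once offsets are committed, a restarted consumer resumes from the committed position and does not re-read. A minimal configuration sketch under those semantics; the broker address and group id are assumptions, and the KafkaConsumer construction is left in a comment since it needs the kafka-clients jar:

```java
import java.util.Properties;

public class ConsumerProps {
    static Properties baseConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "DemoConsumer");             // group from the log above
        // Consulted ONLY when this group has no committed offset for a
        // partition (first start, expired or out-of-range offsets):
        props.put("auto.offset.reset", "earliest");
        // With auto-commit on, later restarts resume from committed offsets,
        // so already-read messages are not re-read.
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // new KafkaConsumer<String, String>(baseConfig())
        //   -- requires kafka-clients on the classpath
        System.out.println(baseConfig().getProperty("auto.offset.reset"));
    }
}
```

The default is "latest", which matches Harald's suggestion: without this setting, a consumer with no committed offsets silently waits for new messages only.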


Re: Consumer poll - no results

2016-12-06 Thread Mohit Anchlia
I see this message in the logs:

[2016-12-06 13:54:16,586] INFO [GroupCoordinator 0]: Preparing to
restabilize group DemoConsumer with old generation 3
(kafka.coordinator.GroupCoordinator)



On Tue, Dec 6, 2016 at 10:53 AM, Mohit Anchlia 
wrote:

> I have a consumer polling a topic on Kafka 0.10. Even though the topic has
> messages, the consumer poll is not fetching them. The thread dump
> reveals:
>
> "main" #1 prio=5 os_prio=0 tid=0x7f3ba4008800 nid=0x798 runnable
> [0x7f3baa6c3000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.
> java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0006c6d1f8b8> (a sun.nio.ch.Util$3)
> - locked <0x0006c6d1f8a8> (a java.util.Collections$
> UnmodifiableSet)
> - locked <0x0006c6d1f0b8> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(
> Selector.java:470)
> at org.apache.kafka.common.network.Selector.poll(
> Selector.java:286)
> at org.apache.kafka.clients.NetworkClient.poll(
> NetworkClient.java:260)
> at org.apache.kafka.clients.consumer.internals.
> ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> - locked <0x0006c6d1eff8> (a org.apache.kafka.clients.
> consumer.internals.ConsumerNetworkClient)
> at org.apache.kafka.clients.consumer.KafkaConsumer.
> pollOnce(KafkaConsumer.java:1031)
>


Consumer poll - no results

2016-12-06 Thread Mohit Anchlia
I have a consumer polling a topic on Kafka 0.10. Even though the topic has
messages, the consumer poll is not fetching them. The thread dump
reveals:

"main" #1 prio=5 os_prio=0 tid=0x7f3ba4008800 nid=0x798 runnable
[0x7f3baa6c3000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x0006c6d1f8b8> (a sun.nio.ch.Util$3)
- locked <0x0006c6d1f8a8> (a
java.util.Collections$UnmodifiableSet)
- locked <0x0006c6d1f0b8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at
org.apache.kafka.common.network.Selector.select(Selector.java:470)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
- locked <0x0006c6d1eff8> (a
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1031)


Re: Kafka Streaming

2016-11-28 Thread Mohit Anchlia
I just cloned 3.1x and tried to run a test. I am still seeing a RocksDB error:

Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError:
C:\Users\manchlia\AppData\Local\Temp\librocksdbjni108789031344273.dll:
Can't find dependent libraries


On Mon, Oct 24, 2016 at 11:26 AM, Matthias J. Sax 
wrote:

>
> It's a client issue... But CP 3.1 should be out in about 2 weeks...
> Of course, you can use Kafka 0.10.1.0 for now. It was released last
> week and does contain the fix.
>
> - -Matthias
>
> On 10/24/16 9:19 AM, Mohit Anchlia wrote:
> > Would this be an issue if I connect to a remote Kafka instance
> > running on the Linux box? Or is this a client issue? What's RocksDB
> > used for, to keep state?
> >
> > On Mon, Oct 24, 2016 at 12:08 AM, Matthias J. Sax
> >  wrote:
> >
> > Kafka 0.10.1.0, which was released last week, does contain the fix
> > already. The fix will be in CP 3.1 coming up soon!
> >
> > (sorry that I did mix up versions in a previous email)
> >
> > -Matthias
> >
> > On 10/23/16 12:10 PM, Mohit Anchlia wrote:
> >>>> So if I get it right I will not have this fix until 4
> >>>> months? Should I just create my own example with the next
> >>>> version of Kafka?
> >>>>
> >>>> On Sat, Oct 22, 2016 at 9:04 PM, Matthias J. Sax
> >>>>  wrote:
> >>>>
> >>>> Current version is 3.0.1; CP 3.1 should be released in the next
> >>>> weeks
> >>>>
> >>>> So CP 3.2 should be there in about 4 months (Kafka follows a
> >>>> time-based release cycle of 4 months and CP usually aligns with
> >>>> Kafka releases)
> >>>>
> >>>> -Matthias
> >>>>
> >>>>
> >>>> On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> >>>>>>> Any idea of when 3.2 is coming?
> >>>>>>>
> >>>>>>> On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax
> >>>>>>>  wrote:
> >>>>>>>
> >>>>>>> No problem. Asking questions is the purpose of mailing
> >>>>>>> lists. :)
> >>>>>>>
> >>>>>>> The issue will be fixed in the next version of the examples
> >>>>>>> branch.
> >>>>>>>
> >>>>>>> Examples branch is built with CP dependency and not
> >>>>>>> with Kafka dependency. CP-3.2 is not available yet;
> >>>>>>> only Kafka 0.10.1.0. Nevertheless, they should work
> >>>>>>> with Kafka dependency, too. I never tried it, but you
> >>>>>>> should give it a shot...
> >>>>>>>
> >>>>>>> But you should use example master branch because of
> >>>>>>> API changes from 0.10.0.x to 0.10.1 (and thus, changing
> >>>>>>> CP-3.1 to 0.10.1.0 will not be compatible and will not
> >>>>>>> compile, while changing CP-3.2-SNAPSHOT to 0.10.1.0
> >>>>>>> should work -- hopefully ;) )
> >>>>>>>
> >>>>>>>
> >>>>>>> -Matthias
> >>>>>>>
> >>>>>>> On 10/20/16 4:02 PM, Mohit Anchlia wrote:
> >>>>>>>>>> So this issue I am seeing is fixed in the next
> >>>>>>>>>> version of example branch? Can I change my pom to
> >>>>>>>>>> point to the higher version of Kafka if that is
> >>>>>>>>>> the issue? Or do I need to wait until new branch
> >>>>>>>>>> is made available? Sorry lot of questions :)
> >>>>>>>>>>
> >>>>>>>>>> On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax
> >>>>>>>>>>  wrote:
> >>>>>>>>>>
> >>>>>>>>>> The branch is 0.10.0.1 and not 0.10.1.0 (sorry
> >>>>>>>>>> for so many zeros and ones -- super easy to mix
> >>>>>>>>>> up)
> >>>>>>>>>>
> >>>>>>>>>> However, examples master branch uses
> >>>>>>>>>> CP-3.1-SNAPSHOT (ie, Kafka 0.10.1.0) -- there
> >>>>>>>>>> will be a 0.10.1 examples branch, after

Re: Kafka Streaming

2016-10-24 Thread Mohit Anchlia
Would this be an issue if I connect to a remote Kafka instance running on
the Linux box? Or is this a client issue? What's RocksDB used for, to keep
state?

On Mon, Oct 24, 2016 at 12:08 AM, Matthias J. Sax 
wrote:

>
> Kafka 0.10.1.0, which was released last week, does contain the fix
> already. The fix will be in CP 3.1 coming up soon!
>
> (sorry that I did mix up versions in a previous email)
>
> - -Matthias
>
> On 10/23/16 12:10 PM, Mohit Anchlia wrote:
> > So if I get it right I will not have this fix until 4 months?
> > Should I just create my own example with the next version of
> > Kafka?
> >
> > On Sat, Oct 22, 2016 at 9:04 PM, Matthias J. Sax
> >  wrote:
> >
> > Current version is 3.0.1; CP 3.1 should be released in the next weeks
> >
> > So CP 3.2 should be there in about 4 months (Kafka follows a
> > time-based release cycle of 4 months and CP usually aligns with Kafka
> > releases)
> >
> > -Matthias
> >
> >
> > On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> >>>> Any idea of when 3.2 is coming?
> >>>>
> >>>> On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax
> >>>>  wrote:
> >>>>
> >>>> No problem. Asking questions is the purpose of mailing lists.
> >>>> :)
> >>>>
> >>>> The issue will be fixed in the next version of the examples branch.
> >>>>
> >>>> Examples branch is built with CP dependency and not with
> >>>> Kafka dependency. CP-3.2 is not available yet; only Kafka
> >>>> 0.10.1.0. Nevertheless, they should work with Kafka
> >>>> dependency, too. I never tried it, but you should give it a
> >>>> shot...
> >>>>
> >>>> But you should use example master branch because of API
> >>>> changes from 0.10.0.x to 0.10.1 (and thus, changing CP-3.1 to
> >>>> 0.10.1.0 will not be compatible and will not compile, while
> >>>> changing CP-3.2-SNAPSHOT to 0.10.1.0 should work -- hopefully
> >>>> ;) )
> >>>>
> >>>>
> >>>> -Matthias
> >>>>
> >>>> On 10/20/16 4:02 PM, Mohit Anchlia wrote:
> >>>>>>> So this issue I am seeing is fixed in the next version
> >>>>>>> of the examples branch? Can I change my pom to point to the
> >>>>>>> higher version of Kafka if that is the issue? Or do I
> >>>>>>> need to wait until new branch is made available? Sorry
> >>>>>>> lot of questions :)
> >>>>>>>
> >>>>>>> On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax
> >>>>>>>  wrote:
> >>>>>>>
> >>>>>>> The branch is 0.10.0.1 and not 0.10.1.0 (sorry for so
> >>>>>>> many zeros and ones -- super easy to mix up)
> >>>>>>>
> >>>>>>> However, examples master branch uses CP-3.1-SNAPSHOT
> >>>>>>> (ie, Kafka 0.10.1.0) -- there will be a 0.10.1 examples
> >>>>>>> branch, after CP-3.1 was released
> >>>>>>>
> >>>>>>>
> >>>>>>> -Matthias
> >>>>>>>
> >>>>>>> On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> >>>>>>>>>> I just now cloned this repo. It seems to be using
> >>>>>>>>>> 10.1
> >>>>>>>>>>
> >>>>>>>>>> https://github.com/confluentinc/examples and
> >>>>>>>>>> running examples in
> >>>>>>>>>> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-
> cp-
> >
> >>>>>>>>>>
> 3.0
> >>>>
> >>>>>>>>>>
> > .1/
> >>>>>>>
> >>>>>>>>>>
> >>>> kafka-streams
> >>>>>>>>>>
> >>>>>>>>>> On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll
> >>>>>>>>>>  wrote:
> >>>>>>>>>>
> >>>>>>>>>>> I suspect you are running Kafka 0.10.0.x on
> >>>>>>>>>>> Windows? If so, this is a known issue that is
> >>>>>>>>>>> fixed in Kafka 0.10

Re: Kafka Streaming

2016-10-23 Thread Mohit Anchlia
So if I get it right I will not have this fix for 4 months? Should I just
create my own example with the next version of Kafka?

On Sat, Oct 22, 2016 at 9:04 PM, Matthias J. Sax 
wrote:

>
> Current version is 3.0.1
> CP 3.1 should be released in the next weeks
>
> So CP 3.2 should be there in about 4 months (Kafka follows a time-based
> release cycle of 4 months and CP usually aligns with Kafka releases)
>
> - -Matthias
>
>
> On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> > Any idea of when 3.2 is coming?
> >
> > On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax
> >  wrote:
> >
> > No problem. Asking questions is the purpose of mailing lists. :)
> >
> > The issue will be fixed in the next version of the examples branch.
> >
> > Examples branch is built with CP dependency and not with Kafka
> > dependency. CP-3.2 is not available yet; only Kafka 0.10.1.0.
> > Nevertheless, they should work with Kafka dependency, too. I never
> > tried it, but you should give it a shot...
> >
> > But you should use the examples master branch because of API changes
> > from 0.10.0.x to 0.10.1 (and thus, changing CP-3.1 to 0.10.1.0 will
> > not be compatible and will not compile, while changing CP-3.2-SNAPSHOT
> > to 0.10.1.0 should work -- hopefully ;) )
> >
> >
> > -Matthias
> >
> > On 10/20/16 4:02 PM, Mohit Anchlia wrote:
> >>>> So this issue I am seeing is fixed in the next version of
> >>>> the examples branch? Can I change my pom to point to the higher
> >>>> version of Kafka if that is the issue? Or do I need to wait
> >>>> until new branch is made available? Sorry lot of questions
> >>>> :)
> >>>>
> >>>> On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax
> >>>>  wrote:
> >>>>
> >>>> The branch is 0.10.0.1 and not 0.10.1.0 (sorry for so many
> >>>> zeros and ones -- super easy to mix up)
> >>>>
> >>>> However, examples master branch uses CP-3.1-SNAPSHOT (ie,
> >>>> Kafka 0.10.1.0) -- there will be a 0.10.1 examples branch,
> >>>> after CP-3.1 was released
> >>>>
> >>>>
> >>>> -Matthias
> >>>>
> >>>> On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> >>>>>>> I just now cloned this repo. It seems to be using 10.1
> >>>>>>>
> >>>>>>> https://github.com/confluentinc/examples and running
> >>>>>>> examples in
> >>>>>>> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-
> 3.0
> >
> >>>>>>>
> .1/
> >>>>
> >>>>>>>
> > kafka-streams
> >>>>>>>
> >>>>>>> On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll
> >>>>>>>  wrote:
> >>>>>>>
> >>>>>>>> I suspect you are running Kafka 0.10.0.x on Windows?
> >>>>>>>> If so, this is a known issue that is fixed in Kafka
> >>>>>>>> 0.10.1 that was just released today.
> >>>>>>>>
> >>>>>>>> Also: which examples are you referring to?  And, to
> >>>>>>>> confirm: which git branch / Kafka version / OS in
> >>>>>>>> case my guess above was wrong.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Thursday, October 20, 2016, Mohit Anchlia
> >>>>>>>>  wrote:
> >>>>>>>>
> >>>>>>>>> I am trying to run the examples from git. While
> >>>>>>>>> running the wordcount example I see this error:
> >>>>>>>>>
> >>>>>>>>> Caused by: *java.lang.RuntimeException*:
> >>>>>>>>> librocksdbjni-win64.dll was not found inside JAR.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Am I expected to include this jar locally?
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> -- *Michael G. Noll* Product Manager | Confluent +1
> >>>>>>>> 650 453 5860 | @miguno <https://twitter.com/miguno>
> >>>>>>>> Follow us: Twitter <https://twitter.com/ConfluentInc>
> >>>>>>>> | Blog <http://www.confluent.io/blog>
> >>>>>>>>
> >>>>>>>
> >>>>>
> >>>>
> >>
> >
>


Re: Kafka Streaming

2016-10-20 Thread Mohit Anchlia
Any idea of when 3.2 is coming?

On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax 
wrote:

>
> No problem. Asking questions is the purpose of mailing lists. :)
>
> The issue will be fixed in the next version of the examples branch.
>
> Examples branch is built with CP dependency and not with Kafka
> dependency. CP-3.2 is not available yet; only Kafka 0.10.1.0.
> Nevertheless, they should work with Kafka dependency, too. I never
> tried it, but you should give it a shot...
>
> But you should use the examples master branch because of API changes from
> 0.10.0.x to 0.10.1 (and thus, changing CP-3.1 to 0.10.1.0 will not be
> compatible and will not compile, while changing CP-3.2-SNAPSHOT to 0.10.1.0
> should work -- hopefully ;) )
>
>
> - -Matthias
>
> On 10/20/16 4:02 PM, Mohit Anchlia wrote:
> > So this issue I am seeing is fixed in the next version of the examples
> > branch? Can I change my pom to point to the higher version of Kafka
> > if that is the issue? Or do I need to wait until the new branch is made
> > available? Sorry, lots of questions :)
> >
> > On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax
> >  wrote:
> >
> > The branch is 0.10.0.1 and not 0.10.1.0 (sorry for so many zeros
> > and ones -- super easy to mix up)
> >
> > However, examples master branch uses CP-3.1-SNAPSHOT (ie, Kafka
> > 0.10.1.0) -- there will be a 0.10.1 examples branch, after CP-3.1
> > was released
> >
> >
> > -Matthias
> >
> > On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> >>>> I just now cloned this repo. It seems to be using 10.1
> >>>>
> >>>> https://github.com/confluentinc/examples and running examples
> >>>> in
> >>>> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0
> .1/
> >
> >>>>
> kafka-streams
> >>>>
> >>>> On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll
> >>>>  wrote:
> >>>>
> >>>>> I suspect you are running Kafka 0.10.0.x on Windows?  If
> >>>>> so, this is a known issue that is fixed in Kafka 0.10.1
> >>>>> that was just released today.
> >>>>>
> >>>>> Also: which examples are you referring to?  And, to
> >>>>> confirm: which git branch / Kafka version / OS in case my
> >>>>> guess above was wrong.
> >>>>>
> >>>>>
> >>>>> On Thursday, October 20, 2016, Mohit Anchlia
> >>>>>  wrote:
> >>>>>
> >>>>>> I am trying to run the examples from git. While running
> >>>>>> the wordcount example I see this error:
> >>>>>>
> >>>>>> Caused by: *java.lang.RuntimeException*:
> >>>>>> librocksdbjni-win64.dll was not found inside JAR.
> >>>>>>
> >>>>>>
> >>>>>> Am I expected to include this jar locally?
> >>>>>>
> >>>>>
> >>>>>
> >>>>> -- *Michael G. Noll* Product Manager | Confluent +1 650 453
> >>>>> 5860 | @miguno <https://twitter.com/miguno> Follow us:
> >>>>> Twitter <https://twitter.com/ConfluentInc> | Blog
> >>>>> <http://www.confluent.io/blog>
> >>>>>
> >>>>
> >>
> >
>


Re: Kafka Streaming

2016-10-20 Thread Mohit Anchlia
So this issue I am seeing is fixed in the next version of the examples
branch? Can I change my pom to point to the higher version of Kafka if that
is the issue? Or do I need to wait until the new branch is made available?
Sorry, lots of questions :)
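For reference, pointing the pom at the plain Apache release rather than a CP snapshot would look roughly like the fragment below. This is a sketch only: 0.10.1.0 is the release Matthias mentions, and whether the examples compile against it is exactly the untested part he flags.

```xml
<!-- Sketch only: replace the CP streams dependency with the Apache release. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>0.10.1.0</version>
</dependency>
```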

On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax 
wrote:

>
> The branch is 0.10.0.1 and not 0.10.1.0 (sorry for so many zeros and
> ones -- super easy to mix up)
>
> However, the examples master branch uses CP-3.1-SNAPSHOT (i.e., Kafka
> 0.10.1.0) -- there will be a 0.10.1 examples branch, after CP-3.1 was
> released
>
>
> - -Matthias
>
> On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> > I just now cloned this repo. It seems to be using 10.1
> >
> > https://github.com/confluentinc/examples and running examples in
> > https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/
> kafka-streams
> >
> >  On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll
> >  wrote:
> >
> >> I suspect you are running Kafka 0.10.0.x on Windows?  If so, this
> >> is a known issue that is fixed in Kafka 0.10.1 that was just
> >> released today.
> >>
> >> Also: which examples are you referring to?  And, to confirm:
> >> which git branch / Kafka version / OS in case my guess above was
> >> wrong.
> >>
> >>
> >> On Thursday, October 20, 2016, Mohit Anchlia
> >>  wrote:
> >>
> >>> I am trying to run the examples from git. While running the
> >>> wordcount example I see this error:
> >>>
> >>> Caused by: *java.lang.RuntimeException*:
> >>> librocksdbjni-win64.dll was not found inside JAR.
> >>>
> >>>
> >>> Am I expected to include this jar locally?
> >>>
> >>
> >>
> >> -- *Michael G. Noll* Product Manager | Confluent +1 650 453 5860
> >> | @miguno <https://twitter.com/miguno> Follow us: Twitter
> >> <https://twitter.com/ConfluentInc> | Blog
> >> <http://www.confluent.io/blog>
> >>
> >
>


Re: Kafka Streaming

2016-10-20 Thread Mohit Anchlia
I just now cloned this repo. It seems to be using 10.1

https://github.com/confluentinc/examples and running examples in
https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/kafka-streams

On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll  wrote:

> I suspect you are running Kafka 0.10.0.x on Windows?  If so, this is a
> known issue that is fixed in Kafka 0.10.1 that was just released today.
>
> Also: which examples are you referring to?  And, to confirm: which git
> branch / Kafka version / OS in case my guess above was wrong.
>
>
> On Thursday, October 20, 2016, Mohit Anchlia 
> wrote:
>
> > I am trying to run the examples from git. While running the wordcount
> > example I see this error:
> >
> > Caused by: *java.lang.RuntimeException*: librocksdbjni-win64.dll was not
> > found inside JAR.
> >
> >
> > Am I expected to include this jar locally?
> >
>
>
> --
> *Michael G. Noll*
> Product Manager | Confluent
> +1 650 453 5860 | @miguno <https://twitter.com/miguno>
> Follow us: Twitter <https://twitter.com/ConfluentInc> | Blog
> <http://www.confluent.io/blog>
>


Kafka Streaming

2016-10-20 Thread Mohit Anchlia
I am trying to run the examples from git. While running the wordcount
example I see this error:

Caused by: *java.lang.RuntimeException*: librocksdbjni-win64.dll was not
found inside JAR.


Am I expected to include this jar locally?


Security in Kafka

2016-01-06 Thread Mohit Anchlia
It's not clear whether security features such as LDAP authentication and
authorization are available in the 0.9 release. If authN and authZ are
available, can somebody point me to relevant documentation that shows how to
configure Kafka to enable them?
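For later readers, a hedged note: 0.9 does not ship LDAP integration out of the box; authentication is via SSL or SASL (Kerberos), and authorization is via a pluggable Authorizer whose ACLs are managed with bin/kafka-acls.sh. A rough broker-side sketch under those assumptions (host names and ports are placeholders):

```properties
# Broker-side sketch (0.9-era property names; adjust hosts/ports for your site)
listeners=SASL_SSL://broker1:9093
security.inter.broker.protocol=SASL_SSL

# Authorization: the pluggable Authorizer shipped with 0.9;
# ACLs are managed with bin/kafka-acls.sh
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
```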


Re: Kafka Client Error

2015-11-20 Thread Mohit Anchlia
On the server side this is what I see:

[2015-11-20 14:45:31,849] INFO Closing socket connection to /177.40.23.2.
(kafka.network.Processor)

On Fri, Nov 20, 2015 at 11:51 AM, Mohit Anchlia 
wrote:

> I am using the latest stable release of Kafka and am trying to post a
> message. However, I see this error:
>
> Client:
>
> Exception in thread "main" *kafka.common.FailedToSendMessageException*:
> Failed to send messages after 3 tries.
>
> at kafka.producer.async.DefaultEventHandler.handle(
> *DefaultEventHandler.scala:90*)
>
> at kafka.producer.Producer.send(*Producer.scala:77*)
>
> at kafka.javaapi.producer.Producer.send(*Producer.scala:33*)
>
> at com.kafka.test.KafkaProducer.sendMessage(*KafkaProducer.java:32*)
>
> at com.kafka.test.KafkaClient.main(*KafkaClient.java:7*)
>
>
> I don't see anything on the server.
>


Kafka Client Error

2015-11-20 Thread Mohit Anchlia
I am using the latest stable release of Kafka and am trying to post a
message. However, I see this error:

Client:

Exception in thread "main" *kafka.common.FailedToSendMessageException*:
Failed to send messages after 3 tries.

at kafka.producer.async.DefaultEventHandler.handle(
*DefaultEventHandler.scala:90*)

at kafka.producer.Producer.send(*Producer.scala:77*)

at kafka.javaapi.producer.Producer.send(*Producer.scala:33*)

at com.kafka.test.KafkaProducer.sendMessage(*KafkaProducer.java:32*)

at com.kafka.test.KafkaClient.main(*KafkaClient.java:7*)


I don't see anything on the server.
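A note for later readers: with the old (0.8.x-era) Scala producer shown in this stack trace, a common cause of FailedToSendMessageException paired with a silently closed server-side socket is the broker advertising a host name the client cannot resolve (advertised.host.name on the broker). The client-side properties that producer needs look roughly like this; the broker address is an assumption, and the Producer construction is left in comments since it needs the Kafka jar:

```java
import java.util.Properties;

public class OldProducerProps {
    static Properties config() {
        Properties props = new Properties();
        // Old (pre-new-producer) property names:
        props.put("metadata.broker.list", "broker1:9092");  // assumed broker address
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");            // wait for the leader ack
        return props;
    }

    public static void main(String[] args) {
        // With the Kafka jar on the classpath:
        //   kafka.javaapi.producer.Producer<String, String> p =
        //       new kafka.javaapi.producer.Producer<>(
        //           new kafka.producer.ProducerConfig(config()));
        //   p.send(new kafka.producer.KeyedMessage<>("test-topic", "hello"));
        System.out.println(config().getProperty("metadata.broker.list"));
    }
}
```

If the advertised host is wrong, metadata fetches succeed but the subsequent produce connection fails, which matches "3 tries" exhausting with nothing useful in the broker log.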


Tools to monitor kafka

2015-11-20 Thread Mohit Anchlia
Are there any command-line or UI tools available to monitor Kafka?


Release of 0.9.0

2015-11-19 Thread Mohit Anchlia
Is there a tentative date for the release of 0.9.0? I tried looking at Jira
tickets; however, there is no mention of a tentative date when 0.9.0 is going
to be released.


Release 0.9.0

2015-11-03 Thread Mohit Anchlia
Is there a tentative release date for Kafka 0.9.0?


Re: Questions about .9 consumer API

2015-10-26 Thread Mohit Anchlia
There is a very basic example here:

https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/Consumer.java

However, I have a question on the failure scenario: say there is an
error in the for loop, is the expectation that the developer set the offset
back to offset - 1?

On Sat, Oct 24, 2015 at 11:13 PM, Guozhang Wang  wrote:

> Mohit,
>
> We will update the java docs page to include more examples using the APIs
> soon, will keep you posted.
>
> Guozhang
>
> On Fri, Oct 23, 2015 at 9:30 AM, Mohit Anchlia 
> wrote:
>
> > Can I get a link to other types of examples? I would like to see how to
> > write the API code correctly.
> >
> > On Fri, Oct 23, 2015 at 8:23 AM, Gwen Shapira  wrote:
> >
> > > There are some examples that include error handling. These are to
> > > demonstrate the new and awesome seek() method.
> > > You don't have to handle errors that way, we are just showing that you
> > can.
> > >
> > > On Thu, Oct 22, 2015 at 8:34 PM, Mohit Anchlia  >
> > > wrote:
> > > > It's in this link. Most of the examples have some kind of error
> > handling
> > > >
> > > >
> http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> > > >
> > > > On Thu, Oct 22, 2015 at 7:45 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > >> Could you point me to the exact examples that indicate user error
> > > handling?
> > > >>
> > > >> Guozhang
> > > >>
> > > >> On Thu, Oct 22, 2015 at 5:43 PM, Mohit Anchlia <
> > mohitanch...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > The examples in the javadoc seem to imply that developers need to manage
> > > >> > all of the aspects around failures. Those examples are for rewinding
> > > >> > offsets, dealing with failed partitions, for instance.
> > > >> >
> > > >> > On Thu, Oct 22, 2015 at 11:17 AM, Guozhang Wang <
> wangg...@gmail.com
> > >
> > > >> > wrote:
> > > >> >
> > > >> > > Hi Mohit:
> > > >> > >
> > > >> > > In general new consumers will abstract developers from any
> network
> > > >> > > failures. More specifically.
> > > >> > >
> > > >> > > 1) consumers will automatically try to re-fetch the messages if
> > the
> > > >> > > previous fetch has failed.
> > > >> > > 2) consumers will remember the currently fetch positions after
> > each
> > > >> > > successful fetch, and can periodically commit these offsets back
> > to
> > > >> > Kafka.
> > > >> > >
> > > >> > > Guozhang
> > > >> > >
> > > >> > > On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia <
> > > >> mohitanch...@gmail.com>
> > > >> > > wrote:
> > > >> > >
> > > >> > > > It looks like the new consumer API expects developers to
> manage
> > > the
> > > >> > > > failures? Or is there some other API that can abstract the
> > > failures,
> > > >> > > > primarily:
> > > >> > > >
> > > >> > > > 1) Automatically resent failed messages because of network
> issue
> > > or
> > > >> > some
> > > >> > > > other issue between the broker and the consumer
> > > >> > > > 2) Ability to acknowledge receipt of a message by the consumer
> > > such
> > > >> > that
> > > >> > > > message is sent again if consumer fails to acknowledge the
> > > receipt.
> > > >> > > >
> > > >> > > > Is there such an API or are the clients expected to deal with
> > > failure
> > > >> > > > scenarios?
> > > >> > > >
> > > >> > > > Docs I am looking at are here:
> > > >> > > >
> > > >> > > >
> > > >>
> > http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > > -- Guozhang
> > > >> > >
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> -- Guozhang
> > > >>
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: Questions about .9 consumer API

2015-10-23 Thread Mohit Anchlia
Can I get a link to other types of examples? I would like to see how to
write the API code correctly.

On Fri, Oct 23, 2015 at 8:23 AM, Gwen Shapira  wrote:

> There are some examples that include error handling. These are to
> demonstrate the new and awesome seek() method.
> You don't have to handle errors that way, we are just showing that you can.
>
> On Thu, Oct 22, 2015 at 8:34 PM, Mohit Anchlia 
> wrote:
> > It's in this link. Most of the examples have some kind of error handling
> >
> > http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> >
> > On Thu, Oct 22, 2015 at 7:45 PM, Guozhang Wang 
> wrote:
> >
> >> Could you point me to the exact examples that indicate user error
> handling?
> >>
> >> Guozhang
> >>
> >> On Thu, Oct 22, 2015 at 5:43 PM, Mohit Anchlia 
> >> wrote:
> >>
> >> > The examples in the javadoc seems to imply that developers need to
> manage
> >> > all of the aspects around failures. Those examples are for rewinding
> >> > offsets, dealing with failed portioned for instance.
> >> >
> >> > On Thu, Oct 22, 2015 at 11:17 AM, Guozhang Wang 
> >> > wrote:
> >> >
> >> > > Hi Mohit:
> >> > >
> >> > > In general new consumers will abstract developers from any network
> >> > > failures. More specifically.
> >> > >
> >> > > 1) consumers will automatically try to re-fetch the messages if the
> >> > > previous fetch has failed.
> >> > > 2) consumers will remember the currently fetch positions after each
> >> > > successful fetch, and can periodically commit these offsets back to
> >> > Kafka.
> >> > >
> >> > > Guozhang
> >> > >
> >> > > On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia <
> >> mohitanch...@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > It looks like the new consumer API expects developers to manage
> the
> >> > > > failures? Or is there some other API that can abstract the
> failures,
> >> > > > primarily:
> >> > > >
> >> > > > 1) Automatically resent failed messages because of network issue
> or
> >> > some
> >> > > > other issue between the broker and the consumer
> >> > > > 2) Ability to acknowledge receipt of a message by the consumer
> such
> >> > that
> >> > > > message is sent again if consumer fails to acknowledge the
> receipt.
> >> > > >
> >> > > > Is there such an API or are the clients expected to deal with
> failure
> >> > > > scenarios?
> >> > > >
> >> > > > Docs I am looking at are here:
> >> > > >
> >> > > >
> >> http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > -- Guozhang
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
>


Re: Questions about .9 consumer API

2015-10-22 Thread Mohit Anchlia
It's in this link. Most of the examples have some kind of error handling

http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/

On Thu, Oct 22, 2015 at 7:45 PM, Guozhang Wang  wrote:

> Could you point me to the exact examples that indicate user error handling?
>
> Guozhang
>
> On Thu, Oct 22, 2015 at 5:43 PM, Mohit Anchlia 
> wrote:
>
> > The examples in the javadoc seems to imply that developers need to manage
> > all of the aspects around failures. Those examples are for rewinding
> > offsets, dealing with failed portioned for instance.
> >
> > On Thu, Oct 22, 2015 at 11:17 AM, Guozhang Wang 
> > wrote:
> >
> > > Hi Mohit:
> > >
> > > In general new consumers will abstract developers from any network
> > > failures. More specifically.
> > >
> > > 1) consumers will automatically try to re-fetch the messages if the
> > > previous fetch has failed.
> > > 2) consumers will remember the currently fetch positions after each
> > > successful fetch, and can periodically commit these offsets back to
> > Kafka.
> > >
> > > Guozhang
> > >
> > > On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > > wrote:
> > >
> > > > It looks like the new consumer API expects developers to manage the
> > > > failures? Or is there some other API that can abstract the failures,
> > > > primarily:
> > > >
> > > > 1) Automatically resent failed messages because of network issue or
> > some
> > > > other issue between the broker and the consumer
> > > > 2) Ability to acknowledge receipt of a message by the consumer such
> > that
> > > > message is sent again if consumer fails to acknowledge the receipt.
> > > >
> > > > Is there such an API or are the clients expected to deal with failure
> > > > scenarios?
> > > >
> > > > Docs I am looking at are here:
> > > >
> > > >
> http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: Questions about .9 consumer API

2015-10-22 Thread Mohit Anchlia
The examples in the javadoc seem to imply that developers need to manage
all of the aspects around failures. Those examples cover rewinding
offsets and dealing with failed partitions, for instance.

On Thu, Oct 22, 2015 at 11:17 AM, Guozhang Wang  wrote:

> Hi Mohit:
>
> In general new consumers will abstract developers from any network
> failures. More specifically.
>
> 1) consumers will automatically try to re-fetch the messages if the
> previous fetch has failed.
> 2) consumers will remember the currently fetch positions after each
> successful fetch, and can periodically commit these offsets back to Kafka.
>
> Guozhang
>
> On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia 
> wrote:
>
> > It looks like the new consumer API expects developers to manage the
> > failures? Or is there some other API that can abstract the failures,
> > primarily:
> >
> > 1) Automatically resent failed messages because of network issue or some
> > other issue between the broker and the consumer
> > 2) Ability to acknowledge receipt of a message by the consumer such that
> > message is sent again if consumer fails to acknowledge the receipt.
> >
> > Is there such an API or are the clients expected to deal with failure
> > scenarios?
> >
> > Docs I am looking at are here:
> >
> > http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> >
>
>
>
> --
> -- Guozhang
>


Questions about .9 consumer API

2015-10-22 Thread Mohit Anchlia
It looks like the new consumer API expects developers to manage the
failures? Or is there some other API that can abstract the failures,
primarily:

1) Automatically resend failed messages caused by a network issue or some
other issue between the broker and the consumer
2) The ability for the consumer to acknowledge receipt of a message, such
that the message is sent again if the consumer fails to acknowledge the
receipt.

Is there such an API or are the clients expected to deal with failure
scenarios?

Docs I am looking at are here:

http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
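On point 2 above, Kafka's "acknowledgement" is an offset commit rather than a per-message ack. The redelivery semantics can be sketched with a toy simulation (not a Kafka API; the log contents, crash point, and function names are all illustrative): a consumer that dies before committing an offset will see the uncommitted messages again after restart.

```python
# Toy sketch of at-least-once delivery: commit = acknowledge. A consumer
# that crashes before committing resumes from the last committed offset,
# so unacknowledged messages are delivered again. Names are illustrative.

log = ["m0", "m1", "m2", "m3"]
committed = 0        # durable last-committed offset (next one to read)

def run_consumer(crash_after):
    """Process from the committed offset; crash after N messages if asked."""
    global committed
    delivered = []
    for offset in range(committed, len(log)):
        delivered.append(log[offset])
        if crash_after is not None and len(delivered) == crash_after:
            return delivered          # crashed before committing this offset
        committed = offset + 1        # commit = acknowledge
    return delivered

first = run_consumer(crash_after=3)    # dies after reading m2, m2 uncommitted
second = run_consumer(crash_after=None)

print(first)    # ['m0', 'm1', 'm2']
print(second)   # ['m2', 'm3'] -- m2 redelivered because it was never acked
```

So the answer to "is there such an API" is that redelivery falls out of the offset-commit model; the application only has to commit after successful processing.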


Re: Client consumer question

2015-10-21 Thread Mohit Anchlia
I read through the documentation; however, when I try to access the Java
API through the link posted on the design page I get "no page found":

http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/kafka/clients/consumer/KafkaConsumer.html

On Wed, Oct 21, 2015 at 9:59 AM, Mohit Anchlia 
wrote:

> never mind, I found the documentation
>
> On Wed, Oct 21, 2015 at 9:50 AM, Mohit Anchlia 
> wrote:
>
>> Thanks. Where can I find new  Java consumer API documentation with
>> examples?
>>
>> On Tue, Oct 20, 2015 at 6:37 PM, Guozhang Wang 
>> wrote:
>>
>>> There are a bunch of new features added in 0.9 plus quite a lot of bug
>>> fixes as well, a complete ticket list can be found here:
>>>
>>>
>>> https://issues.apache.org/jira/browse/KAFKA-1686?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
>>>
>>> In a short summary of the new features, 0.9 introduces:
>>>
>>> 1) security and quota management on the brokers.
>>> 2) new Java consumer.
>>> 3) copycat framework for ingress / egress of Kafka.
>>>
>>> Guozhang
>>>
>>> On Tue, Oct 20, 2015 at 4:32 PM, Mohit Anchlia 
>>> wrote:
>>>
>>> > Thanks. Are there any other major changes in .9 release other than the
>>> > Consumer changes. Should I wait for .9 or go ahead and performance test
>>> > with .8?
>>> >
>>> > On Tue, Oct 20, 2015 at 3:54 PM, Guozhang Wang 
>>> wrote:
>>> >
>>> > > We will have a release document for that on the release date, it is
>>> not
>>> > > complete yet.
>>> > >
>>> > > Guozhang
>>> > >
>>> > > On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia <
>>> mohitanch...@gmail.com>
>>> > > wrote:
>>> > >
>>> > > > Is there a wiki page where I can find all the major design changes
>>> in
>>> > > > 0.9.0?
>>> > > >
>>> > > > On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang >> >
>>> > > wrote:
>>> > > >
>>> > > > > It is not released yet, we are shooting for Nov. for 0.9.0.
>>> > > > >
>>> > > > > Guozhang
>>> > > > >
>>> > > > > On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia <
>>> > mohitanch...@gmail.com
>>> > > >
>>> > > > > wrote:
>>> > > > >
>>> > > > > > Is 0.9.0 still under development? I don't see it here:
>>> > > > > > http://kafka.apache.org/downloads.html
>>> > > > > >
>>> > > > > > On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang <
>>> wangg...@gmail.com
>>> > >
>>> > > > > wrote:
>>> > > > > >
>>> > > > > > > The links you are referring are for the old consumer.
>>> > > > > > >
>>> > > > > > > If you are using the ZooKeeper based high-level version of
>>> the
>>> > old
>>> > > > > > consumer
>>> > > > > > > which is described in the second link, then failures are
>>> handled
>>> > > and
>>> > > > > > > abstracted from you so that if there is a failure in the
>>> current
>>> > > > > process,
>>> > > > > > > its fetching partitions will be re-assigned to other
>>> consumers
>>> > > within
>>> > > > > the
>>> > > > > > > same group starting at the last checkpointed offset. And
>>> offsets
>>> > > can
>>> > > > be
>>> > > > > > > either checkpointed periodically or manually throw
>>> > > consumer.commit()
>>> > > > > > calls.
>>> > > > > > >
>>> > > > > > > BTW, in the coming 0.9.0 release there is a new consumer
>>> written
>>> > in
>>> > > > > Java
>>> > > > > > > which uses a poll() based API instead of a stream iterating
>>> API.
>>> > > More
>>> > > > > > > details can be found here in case you are interested in
>>> trying it
>>> > > > out:
>>> > > >

Re: Client consumer question

2015-10-21 Thread Mohit Anchlia
never mind, I found the documentation

On Wed, Oct 21, 2015 at 9:50 AM, Mohit Anchlia 
wrote:

> Thanks. Where can I find new  Java consumer API documentation with
> examples?
>
> On Tue, Oct 20, 2015 at 6:37 PM, Guozhang Wang  wrote:
>
>> There are a bunch of new features added in 0.9 plus quite a lot of bug
>> fixes as well, a complete ticket list can be found here:
>>
>>
>> https://issues.apache.org/jira/browse/KAFKA-1686?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
>>
>> In a short summary of the new features, 0.9 introduces:
>>
>> 1) security and quota management on the brokers.
>> 2) new Java consumer.
>> 3) copycat framework for ingress / egress of Kafka.
>>
>> Guozhang
>>
>> On Tue, Oct 20, 2015 at 4:32 PM, Mohit Anchlia 
>> wrote:
>>
>> > Thanks. Are there any other major changes in .9 release other than the
>> > Consumer changes. Should I wait for .9 or go ahead and performance test
>> > with .8?
>> >
>> > On Tue, Oct 20, 2015 at 3:54 PM, Guozhang Wang 
>> wrote:
>> >
>> > > We will have a release document for that on the release date, it is
>> not
>> > > complete yet.
>> > >
>> > > Guozhang
>> > >
>> > > On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia <
>> mohitanch...@gmail.com>
>> > > wrote:
>> > >
>> > > > Is there a wiki page where I can find all the major design changes
>> in
>> > > > 0.9.0?
>> > > >
>> > > > On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang 
>> > > wrote:
>> > > >
>> > > > > It is not released yet, we are shooting for Nov. for 0.9.0.
>> > > > >
>> > > > > Guozhang
>> > > > >
>> > > > > On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia <
>> > mohitanch...@gmail.com
>> > > >
>> > > > > wrote:
>> > > > >
>> > > > > > Is 0.9.0 still under development? I don't see it here:
>> > > > > > http://kafka.apache.org/downloads.html
>> > > > > >
>> > > > > > On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang <
>> wangg...@gmail.com
>> > >
>> > > > > wrote:
>> > > > > >
>> > > > > > > The links you are referring are for the old consumer.
>> > > > > > >
>> > > > > > > If you are using the ZooKeeper based high-level version of the
>> > old
>> > > > > > consumer
>> > > > > > > which is described in the second link, then failures are
>> handled
>> > > and
>> > > > > > > abstracted from you so that if there is a failure in the
>> current
>> > > > > process,
>> > > > > > > its fetching partitions will be re-assigned to other consumers
>> > > within
>> > > > > the
>> > > > > > > same group starting at the last checkpointed offset. And
>> offsets
>> > > can
>> > > > be
>> > > > > > > either checkpointed periodically or manually throw
>> > > consumer.commit()
>> > > > > > calls.
>> > > > > > >
>> > > > > > > BTW, in the coming 0.9.0 release there is a new consumer
>> written
>> > in
>> > > > > Java
>> > > > > > > which uses a poll() based API instead of a stream iterating
>> API.
>> > > More
>> > > > > > > details can be found here in case you are interested in
>> trying it
>> > > > out:
>> > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
>> > > > > > >
>> > > > > > > Guozhang
>> > > > > > >
>> > > > > > > On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia <
>> > > > mohitanch...@gmail.com
>> > > > > >
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > By old consumer you mean version < .8?
>> > > > > >

Re: Client consumer question

2015-10-21 Thread Mohit Anchlia
Thanks. Where can I find the new Java consumer API documentation with examples?

On Tue, Oct 20, 2015 at 6:37 PM, Guozhang Wang  wrote:

> There are a bunch of new features added in 0.9 plus quite a lot of bug
> fixes as well, a complete ticket list can be found here:
>
>
> https://issues.apache.org/jira/browse/KAFKA-1686?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
>
> In a short summary of the new features, 0.9 introduces:
>
> 1) security and quota management on the brokers.
> 2) new Java consumer.
> 3) copycat framework for ingress / egress of Kafka.
>
> Guozhang
>
> On Tue, Oct 20, 2015 at 4:32 PM, Mohit Anchlia 
> wrote:
>
> > Thanks. Are there any other major changes in .9 release other than the
> > Consumer changes. Should I wait for .9 or go ahead and performance test
> > with .8?
> >
> > On Tue, Oct 20, 2015 at 3:54 PM, Guozhang Wang 
> wrote:
> >
> > > We will have a release document for that on the release date, it is not
> > > complete yet.
> > >
> > > Guozhang
> > >
> > > On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia  >
> > > wrote:
> > >
> > > > Is there a wiki page where I can find all the major design changes in
> > > > 0.9.0?
> > > >
> > > > On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > It is not released yet, we are shooting for Nov. for 0.9.0.
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia <
> > mohitanch...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Is 0.9.0 still under development? I don't see it here:
> > > > > > http://kafka.apache.org/downloads.html
> > > > > >
> > > > > > On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang <
> wangg...@gmail.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > The links you are referring are for the old consumer.
> > > > > > >
> > > > > > > If you are using the ZooKeeper based high-level version of the
> > old
> > > > > > consumer
> > > > > > > which is described in the second link, then failures are
> handled
> > > and
> > > > > > > abstracted from you so that if there is a failure in the
> current
> > > > > process,
> > > > > > > its fetching partitions will be re-assigned to other consumers
> > > within
> > > > > the
> > > > > > > same group starting at the last checkpointed offset. And
> offsets
> > > can
> > > > be
> > > > > > > either checkpointed periodically or manually throw
> > > consumer.commit()
> > > > > > calls.
> > > > > > >
> > > > > > > BTW, in the coming 0.9.0 release there is a new consumer
> written
> > in
> > > > > Java
> > > > > > > which uses a poll() based API instead of a stream iterating
> API.
> > > More
> > > > > > > details can be found here in case you are interested in trying
> it
> > > > out:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia <
> > > > mohitanch...@gmail.com
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > By old consumer you mean version < .8?
> > > > > > > >
> > > > > > > > Here are the links:
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> > > > > > > >
> > > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+E

Re: Client consumer question

2015-10-20 Thread Mohit Anchlia
Thanks. Are there any other major changes in the .9 release besides the
consumer changes? Should I wait for .9 or go ahead and performance test
with .8?

On Tue, Oct 20, 2015 at 3:54 PM, Guozhang Wang  wrote:

> We will have a release document for that on the release date, it is not
> complete yet.
>
> Guozhang
>
> On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia 
> wrote:
>
> > Is there a wiki page where I can find all the major design changes in
> > 0.9.0?
> >
> > On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang 
> wrote:
> >
> > > It is not released yet, we are shooting for Nov. for 0.9.0.
> > >
> > > Guozhang
> > >
> > > On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia  >
> > > wrote:
> > >
> > > > Is 0.9.0 still under development? I don't see it here:
> > > > http://kafka.apache.org/downloads.html
> > > >
> > > > On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > The links you are referring are for the old consumer.
> > > > >
> > > > > If you are using the ZooKeeper based high-level version of the old
> > > > consumer
> > > > > which is described in the second link, then failures are handled
> and
> > > > > abstracted from you so that if there is a failure in the current
> > > process,
> > > > > its fetching partitions will be re-assigned to other consumers
> within
> > > the
> > > > > same group starting at the last checkpointed offset. And offsets
> can
> > be
> > > > > either checkpointed periodically or manually throw
> consumer.commit()
> > > > calls.
> > > > >
> > > > > BTW, in the coming 0.9.0 release there is a new consumer written in
> > > Java
> > > > > which uses a poll() based API instead of a stream iterating API.
> More
> > > > > details can be found here in case you are interested in trying it
> > out:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia <
> > mohitanch...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > By old consumer you mean version < .8?
> > > > > >
> > > > > > Here are the links:
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> > > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> > > > > >
> > > > > > On Mon, Oct 19, 2015 at 12:52 PM, Guozhang Wang <
> > wangg...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Mohit,
> > > > > > >
> > > > > > > Are you referring to the new Java consumer or the old consumer?
> > Or
> > > > more
> > > > > > > specifically what examples doc are you referring to?
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia <
> > > > > mohitanch...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I see most of the consumer examples create a while/for loop
> and
> > > > then
> > > > > > > fetch
> > > > > > > > messages iteratively. Is that the only way by which clients
> can
> > > > > > consumer
> > > > > > > > messages? If this is the preferred way then how do you deal
> > with
> > > > > > > failures,
> > > > > > > > exceptions such that messages are not lost.
> > > > > > > >
> > > > > > > > Also, please point me to examples that one would consider as
> a
> > > > robust
> > > > > > way
> > > > > > > > of coding consumers.
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -- Guozhang
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: Client consumer question

2015-10-20 Thread Mohit Anchlia
Is there a wiki page where I can find all the major design changes in 0.9.0?

On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang  wrote:

> It is not released yet, we are shooting for Nov. for 0.9.0.
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia 
> wrote:
>
> > Is 0.9.0 still under development? I don't see it here:
> > http://kafka.apache.org/downloads.html
> >
> > On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang 
> wrote:
> >
> > > The links you are referring are for the old consumer.
> > >
> > > If you are using the ZooKeeper based high-level version of the old
> > consumer
> > > which is described in the second link, then failures are handled and
> > > abstracted from you so that if there is a failure in the current
> process,
> > > its fetching partitions will be re-assigned to other consumers within
> the
> > > same group starting at the last checkpointed offset. And offsets can be
> > > either checkpointed periodically or manually throw consumer.commit()
> > calls.
> > >
> > > BTW, in the coming 0.9.0 release there is a new consumer written in
> Java
> > > which uses a poll() based API instead of a stream iterating API. More
> > > details can be found here in case you are interested in trying it out:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
> > >
> > > Guozhang
> > >
> > > On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia  >
> > > wrote:
> > >
> > > > By old consumer you mean version < .8?
> > > >
> > > > Here are the links:
> > > >
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> > > >
> > > > On Mon, Oct 19, 2015 at 12:52 PM, Guozhang Wang 
> > > > wrote:
> > > >
> > > > > Hi Mohit,
> > > > >
> > > > > Are you referring to the new Java consumer or the old consumer? Or
> > more
> > > > > specifically what examples doc are you referring to?
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > I see most of the consumer examples create a while/for loop and
> > then
> > > > > fetch
> > > > > > messages iteratively. Is that the only way by which clients can
> > > > consumer
> > > > > > messages? If this is the preferred way then how do you deal with
> > > > > failures,
> > > > > > exceptions such that messages are not lost.
> > > > > >
> > > > > > Also, please point me to examples that one would consider as a
> > robust
> > > > way
> > > > > > of coding consumers.
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: Client consumer question

2015-10-19 Thread Mohit Anchlia
Is 0.9.0 still under development? I don't see it here:
http://kafka.apache.org/downloads.html

On Mon, Oct 19, 2015 at 4:05 PM, Guozhang Wang  wrote:

> The links you are referring are for the old consumer.
>
> If you are using the ZooKeeper based high-level version of the old consumer
> which is described in the second link, then failures are handled and
> abstracted from you so that if there is a failure in the current process,
> its fetching partitions will be re-assigned to other consumers within the
> same group starting at the last checkpointed offset. And offsets can be
> either checkpointed periodically or manually through consumer.commit() calls.
>
> BTW, in the coming 0.9.0 release there is a new consumer written in Java
> which uses a poll() based API instead of a stream iterating API. More
> details can be found here in case you are interested in trying it out:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia 
> wrote:
>
> > By old consumer you mean version < .8?
> >
> > Here are the links:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> >
> > On Mon, Oct 19, 2015 at 12:52 PM, Guozhang Wang 
> > wrote:
> >
> > > Hi Mohit,
> > >
> > > Are you referring to the new Java consumer or the old consumer? Or more
> > > specifically what examples doc are you referring to?
> > >
> > > Guozhang
> > >
> > > On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > > wrote:
> > >
> > > > I see most of the consumer examples create a while/for loop and then
> > > fetch
> > > > messages iteratively. Is that the only way by which clients can
> > consumer
> > > > messages? If this is the preferred way then how do you deal with
> > > failures,
> > > > exceptions such that messages are not lost.
> > > >
> > > > Also, please point me to examples that one would consider as a robust
> > way
> > > > of coding consumers.
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: Client consumer question

2015-10-19 Thread Mohit Anchlia
By old consumer you mean version < .8?

Here are the links:

https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example

On Mon, Oct 19, 2015 at 12:52 PM, Guozhang Wang  wrote:

> Hi Mohit,
>
> Are you referring to the new Java consumer or the old consumer? Or more
> specifically what examples doc are you referring to?
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia 
> wrote:
>
> > I see most of the consumer examples create a while/for loop and then
> fetch
> > messages iteratively. Is that the only way by which clients can consumer
> > messages? If this is the preferred way then how do you deal with
> failures,
> > exceptions such that messages are not lost.
> >
> > Also, please point me to examples that one would consider as a robust way
> > of coding consumers.
> >
>
>
>
> --
> -- Guozhang
>


Client consumer question

2015-10-19 Thread Mohit Anchlia
I see most of the consumer examples create a while/for loop and then fetch
messages iteratively. Is that the only way by which clients can consume
messages? If this is the preferred way, how do you deal with failures and
exceptions such that messages are not lost?

Also, please point me to examples that one would consider as a robust way
of coding consumers.
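The "messages are not lost" part of the question comes down to where the loop commits its offset relative to processing. The contrast can be sketched with a toy simulation (not a Kafka API; the log, the crash point, and `consume` are made up): committing before processing risks losing a message on a crash (at-most-once), while committing after processing never loses one, at the cost of possible redelivery (at-least-once).

```python
# Toy contrast of commit-before vs commit-after processing in a consumer
# loop. A crash mid-loop is simulated by skipping one iteration once.

log = ["m0", "m1", "m2"]

def consume(commit_before, crash_at):
    committed = 0
    processed = []
    restarted = False
    while committed < len(log):
        offset = committed
        if commit_before:
            committed = offset + 1    # ack before doing the work
        if offset == crash_at and not restarted:
            restarted = True          # simulate crash + restart here
            continue                  # resume loop from committed offset
        processed.append(log[offset])
        if not commit_before:
            committed = offset + 1    # ack only after the work succeeded
    return processed

print(consume(commit_before=True,  crash_at=1))  # ['m0', 'm2'] -- m1 lost
print(consume(commit_before=False, crash_at=1))  # ['m0', 'm1', 'm2']
```

A robust consumer loop therefore wraps processing in error handling and only advances the committed offset once processing has succeeded.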


kafka.common.ConsumerRebalanceFailedException

2014-10-24 Thread Mohit Anchlia
I am seeing the following exception and don't understand the issue here. Is
there a way to resolve this error?

client consumer logs:


Exception in thread "main" kafka.common.ConsumerRebalanceFailedException:
groupB_ip-10-38-19-230-1414174925481-97fa3f2a can't rebalance after 4
retries
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:432)
at
kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:722)
at
kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:212)
at kafka.javaapi.consumer.Zookeep


server logs:



[2014-10-24 14:21:47,327] INFO Got user-level KeeperException when
processing sessionid:0x149435a553d007d type:create cxid:0x97 zxid:0xb4e
txntype:-1 reqpath:n/a Error Path:/consumers/groupB/owners/topicA/28
Error:KeeperErrorCode = NodeExists for /consumers/groupB/owners/topicA/28
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-24 14:21:47,329] INFO Got user-level KeeperException when
processing sessionid:0x149435a553d007d type:create cxid:0x99 zxid:0xb4f
txntype:-1 reqpath:n/a Error Path:/consumers/groupB/owners/topicA/23
Error:KeeperErrorCode = NodeExists for /consumers/groupB/owners/topicA/23
(org.apache.zookeeper.server.PrepRequestProcessor)
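The "can't rebalance after 4 retries" in the trace matches the old ZooKeeper-based consumer's default `rebalance.max.retries` of 4. A commonly suggested mitigation (an assumption on my part, not something confirmed in this thread) is to give the consumer more room to win the rebalance race; the property names below are from the 0.8 configuration docs, and the values are only illustrative:

```properties
# Old (ZooKeeper-based) consumer properties; tune for your deployment.
rebalance.max.retries=10
rebalance.backoff.ms=4000
zookeeper.session.timeout.ms=10000
```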


Re: Performance issues

2014-10-23 Thread Mohit Anchlia
By increasing partitions and using Kafka from the master branch I was able
to cut the response times in half. But latency still seems high, and there
still appears to be a delay between a successful post and the first time a
message is seen by the consumers. There are plenty of resources available.

Is there a way to easily check the breakdown of latency at every tier, e.g.
producer -> broker -> consumer?
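One language-agnostic way to get a per-tier breakdown is to stamp timestamps into the message flow yourself; a minimal sketch follows. This assumes you embed the producer's send time in your own payload (Kafka 0.8 messages carry no timestamp of their own), that a broker-side timestamp is available from broker logs or metrics, and that host clocks are synchronized (e.g. via NTP); the tier names and numbers are made up for illustration.

```python
# Sketch of timestamp-stamping to break end-to-end latency into
# producer->broker and broker->consumer legs. Values are illustrative.
import json

def make_payload(body, send_ts):
    # Producer stamps its clock into the message it sends.
    return json.dumps({"body": body, "producer_ts": send_ts})

def measure(payload, broker_ts, consume_ts):
    # broker_ts would come from broker-side logging/metrics; cross-host
    # deltas only mean something if clocks are synchronized.
    msg = json.loads(payload)
    return {
        "producer_to_broker_ms": broker_ts - msg["producer_ts"],
        "broker_to_consumer_ms": consume_ts - broker_ts,
        "end_to_end_ms": consume_ts - msg["producer_ts"],
    }

payload = make_payload("hello", send_ts=1000)
breakdown = measure(payload, broker_ts=1040, consume_ts=1140)
print(breakdown)
# {'producer_to_broker_ms': 40, 'broker_to_consumer_ms': 100, 'end_to_end_ms': 140}
```

Aggregating these deltas over many messages shows which hop contributes the delay between a successful post and the first consumer read.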
On Wed, Oct 22, 2014 at 2:37 PM, Neha Narkhede 
wrote:

> the server.properties file doesn't have all the properties. You can add it
> there and try your test.
>
> On Wed, Oct 22, 2014 at 11:41 AM, Mohit Anchlia 
> wrote:
>
> > I can't find this property in server.properties file. Is that the right
> > place to set this parameter?
> > On Tue, Oct 21, 2014 at 6:27 PM, Jun Rao  wrote:
> >
> > > Could you also set replica.fetch.wait.max.ms in the broker to sth much
> > > smaller?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Oct 21, 2014 at 2:15 PM, Mohit Anchlia  >
> > > wrote:
> > >
> > > > I set the property to 1 in the consumer code that is passed to
> > > > "createJavaConsumerConnector"
> > > > code, but it didn't seem to help
> > > >
> > > > props.put("fetch.wait.max.ms", fetchMaxWait);
> > > >
> > > > On Tue, Oct 21, 2014 at 1:21 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > This is a consumer config:
> > > > >
> > > > > fetch.wait.max.ms
> > > > >
> > > > > On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Is this a parameter I need to set it in kafka server or on the
> > client
> > > > > side?
> > > > > > Also, can you help point out which one exactly is consumer max
> wait
> > > > time
> > > > > > from this list?
> > > > > >
> > > > > > https://kafka.apache.org/08/configuration.html
> > > > > >
> > > > > > On Tue, Oct 21, 2014 at 11:35 AM, Jay Kreps  >
> > > > wrote:
> > > > > >
> > > > > > > There was a bug that could lead to the fetch request from the
> > > > consumer
> > > > > > > hitting it's timeout instead of being immediately triggered by
> > the
> > > > > > produce
> > > > > > > request. To see if you are effected by that set you consumer
> max
> > > wait
> > > > > > time
> > > > > > > to 1 ms and see if the latency drops to 1 ms (or, alternately,
> > try
> > > > with
> > > > > > > trunk and see if that fixes the problem).
> > > > > > >
> > > > > > > The reason I suspect this problem is because the default
> timeout
> > in
> > > > the
> > > > > > > java consumer is 100ms.
> > > > > > >
> > > > > > > -Jay
> > > > > > >
> > > > > > > On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia <
> > > > > mohitanch...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > This is the version I am using: kafka_2.10-0.8.1.1
> > > > > > > >
> > > > > > > > I think this is fairly recent version
> > > > > > > > On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps <
> > jay.kr...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > What version of Kafka is this? Can you try the same test
> > > against
> > > > > > trunk?
> > > > > > > > We
> > > > > > > > > fixed a couple of latency related bugs which may be the
> > cause.
> > > > > > > > >
> > > > > > > > > -Jay
> > > > > > > > >
> > > > > > > > > On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia <
> > > > > > > mohitanch...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > It's consistently close to 100ms which makes me believe
> > that
> >

Re: Performance issues

2014-10-22 Thread Mohit Anchlia
I can't find this property in the server.properties file. Is that the right
place to set this parameter?
On Tue, Oct 21, 2014 at 6:27 PM, Jun Rao  wrote:

> Could you also set replica.fetch.wait.max.ms in the broker to sth much
> smaller?
>
> Thanks,
>
> Jun
>
> On Tue, Oct 21, 2014 at 2:15 PM, Mohit Anchlia 
> wrote:
>
> > I set the property to 1 in the consumer code that is passed to
> > "createJavaConsumerConnector"
> > code, but it didn't seem to help
> >
> > props.put("fetch.wait.max.ms", fetchMaxWait);
> >
> > On Tue, Oct 21, 2014 at 1:21 PM, Guozhang Wang 
> wrote:
> >
> > > This is a consumer config:
> > >
> > > fetch.wait.max.ms
> > >
> > > On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > > wrote:
> > >
> > > > Is this a parameter I need to set it in kafka server or on the client
> > > side?
> > > > Also, can you help point out which one exactly is consumer max wait
> > time
> > > > from this list?
> > > >
> > > > https://kafka.apache.org/08/configuration.html
> > > >
> > > > On Tue, Oct 21, 2014 at 11:35 AM, Jay Kreps 
> > wrote:
> > > >
> > > > > There was a bug that could lead to the fetch request from the
> > consumer
> > > > > hitting it's timeout instead of being immediately triggered by the
> > > > produce
> > > > > request. To see if you are effected by that set you consumer max
> wait
> > > > time
> > > > > to 1 ms and see if the latency drops to 1 ms (or, alternately, try
> > with
> > > > > trunk and see if that fixes the problem).
> > > > >
> > > > > The reason I suspect this problem is because the default timeout in
> > the
> > > > > java consumer is 100ms.
> > > > >
> > > > > -Jay
> > > > >
> > > > > On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > This is the version I am using: kafka_2.10-0.8.1.1
> > > > > >
> > > > > > I think this is fairly recent version
> > > > > > On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps  >
> > > > wrote:
> > > > > >
> > > > > > > What version of Kafka is this? Can you try the same test
> against
> > > > trunk?
> > > > > > We
> > > > > > > fixed a couple of latency related bugs which may be the cause.
> > > > > > >
> > > > > > > -Jay
> > > > > > >
> > > > > > > On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia <
> > > > > mohitanch...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > It's consistently close to 100ms which makes me believe that
> > > there
> > > > > are
> > > > > > > some
> > > > > > > > settings that I might have to tweak, however, I am not sure
> how
> > > to
> > > > > > > confirm
> > > > > > > > that assumption :)
> > > > > > > > On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia <
> > > > > mohitanch...@gmail.com
> > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > I have a java test that produces messages and then consumer
> > > > > consumers
> > > > > > > it.
> > > > > > > > > Consumers are active all the time. There is 1 consumer for
> 1
> > > > > > producer.
> > > > > > > I
> > > > > > > > am
> > > > > > > > > measuring the time between the message is successfully
> > written
> > > to
> > > > > the
> > > > > > > > queue
> > > > > > > > > and the time consumer picks it up.
> > > > > > > > >
> > > > > > > > > On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede <
> > > > > > > neha.narkh...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > >> Can you give more information about the performance test?
> > > Which
> > > > > > test?
> > > > > > > > >> Which
> > > > > > > > >> queue? How did you measure the dequeue latency.
> > > > > > > > >>
> > > > > > > > >> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia <
> > > > > > > mohitanch...@gmail.com>
> > > > > > > > >> wrote:
> > > > > > > > >>
> > > > > > > > >> > I am running a performance test and from what I am
> seeing
> > is
> > > > > that
> > > > > > > > >> messages
> > > > > > > > >> > are taking about 100ms to pop from the queue itself and
> > > hence
> > > > > > making
> > > > > > > > the
> > > > > > > > >> > test slow. I am looking for pointers of how I can
> > > troubleshoot
> > > > > > this
> > > > > > > > >> issue.
> > > > > > > > >> >
> > > > > > > > >> > There seems to be plenty of CPU and IO available. I am
> > > running
> > > > > 22
> > > > > > > > >> producers
> > > > > > > > >> > and 22 consumers in the same group.
> > > > > > > > >> >
> > > > > > > > >>
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: Performance issues

2014-10-21 Thread Mohit Anchlia
Most of the consumer threads seem to be waiting:

"ConsumerFetcherThread-groupA_ip-10-38-19-230-1413925671158-3cc3e22f-0-0"
prio=10 tid=0x7f0aa84db800 nid=0x5be9 runnable [0x7f0a5a618000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x9515bec0> (a sun.nio.ch.Util$2)
- locked <0x9515bea8> (a
java.util.Collections$UnmodifiableSet)
- locked <0x95511d00> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:221)
- locked <0x9515bd28> (a java.lang.Object)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
- locked <0x95293828> (a
sun.nio.ch.SocketAdaptor$SocketInputStream)
at
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
- locked <0x9515bcb0> (a java.lang.Object)
at kafka.utils.Utils$.read(Utils.scala:375)

On Tue, Oct 21, 2014 at 2:15 PM, Mohit Anchlia 
wrote:

> I set the property to 1 in the consumer code that is passed to 
> "createJavaConsumerConnector"
> code, but it didn't seem to help
>
> props.put("fetch.wait.max.ms", fetchMaxWait);
>
> On Tue, Oct 21, 2014 at 1:21 PM, Guozhang Wang  wrote:
>
>> This is a consumer config:
>>
>> fetch.wait.max.ms
>>
>> On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia 
>> wrote:
>>
>> > Is this a parameter I need to set it in kafka server or on the client
>> side?
>> > Also, can you help point out which one exactly is consumer max wait time
>> > from this list?
>> >
>> > https://kafka.apache.org/08/configuration.html
>> >
>> > On Tue, Oct 21, 2014 at 11:35 AM, Jay Kreps 
>> wrote:
>> >
>> > > There was a bug that could lead to the fetch request from the consumer
>> > > hitting it's timeout instead of being immediately triggered by the
>> > produce
>> > > request. To see if you are effected by that set you consumer max wait
>> > time
>> > > to 1 ms and see if the latency drops to 1 ms (or, alternately, try
>> with
>> > > trunk and see if that fixes the problem).
>> > >
>> > > The reason I suspect this problem is because the default timeout in
>> the
>> > > java consumer is 100ms.
>> > >
>> > > -Jay
>> > >
>> > > On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia <
>> mohitanch...@gmail.com>
>> > > wrote:
>> > >
>> > > > This is the version I am using: kafka_2.10-0.8.1.1
>> > > >
>> > > > I think this is fairly recent version
>> > > > On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps 
>> > wrote:
>> > > >
>> > > > > What version of Kafka is this? Can you try the same test against
>> > trunk?
>> > > > We
>> > > > > fixed a couple of latency related bugs which may be the cause.
>> > > > >
>> > > > > -Jay
>> > > > >
>> > > > > On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia <
>> > > mohitanch...@gmail.com>
>> > > > > wrote:
>> > > > >
>> > > > > > It's consistently close to 100ms which makes me believe that
>> there
>> > > are
>> > > > > some
>> > > > > > settings that I might have to tweak, however, I am not sure how
>> to
>> > > > > confirm
>> > > > > > that assumption :)
>> > > > > > On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia <
>> > > mohitanch...@gmail.com
>> > > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > I have a java test that produces messages and then consumer
>> > > consumers
>> > > > > it.
>> > > > > > > Consumers are active all the time. There is 1 consumer for 1
>> > > > producer.
>> > > > > I
>> > > > > > am
>> > > > > > > measuring the time between the message is successfully
>> written to
>> > > the
> >

Re: Performance issues

2014-10-21 Thread Mohit Anchlia
I set the property to 1 in the consumer config that is passed to
"createJavaConsumerConnector", but it didn't seem to help

props.put("fetch.wait.max.ms", fetchMaxWait);

On Tue, Oct 21, 2014 at 1:21 PM, Guozhang Wang  wrote:

> This is a consumer config:
>
> fetch.wait.max.ms
>
> On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia 
> wrote:
>
> > Is this a parameter I need to set it in kafka server or on the client
> side?
> > Also, can you help point out which one exactly is consumer max wait time
> > from this list?
> >
> > https://kafka.apache.org/08/configuration.html
> >
> > On Tue, Oct 21, 2014 at 11:35 AM, Jay Kreps  wrote:
> >
> > > There was a bug that could lead to the fetch request from the consumer
> > > hitting it's timeout instead of being immediately triggered by the
> > produce
> > > request. To see if you are effected by that set you consumer max wait
> > time
> > > to 1 ms and see if the latency drops to 1 ms (or, alternately, try with
> > > trunk and see if that fixes the problem).
> > >
> > > The reason I suspect this problem is because the default timeout in the
> > > java consumer is 100ms.
> > >
> > > -Jay
> > >
> > > On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > > wrote:
> > >
> > > > This is the version I am using: kafka_2.10-0.8.1.1
> > > >
> > > > I think this is fairly recent version
> > > > On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps 
> > wrote:
> > > >
> > > > > What version of Kafka is this? Can you try the same test against
> > trunk?
> > > > We
> > > > > fixed a couple of latency related bugs which may be the cause.
> > > > >
> > > > > -Jay
> > > > >
> > > > > On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > It's consistently close to 100ms which makes me believe that
> there
> > > are
> > > > > some
> > > > > > settings that I might have to tweak, however, I am not sure how
> to
> > > > > confirm
> > > > > > that assumption :)
> > > > > > On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia <
> > > mohitanch...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > I have a java test that produces messages and then consumer
> > > consumers
> > > > > it.
> > > > > > > Consumers are active all the time. There is 1 consumer for 1
> > > > producer.
> > > > > I
> > > > > > am
> > > > > > > measuring the time between the message is successfully written
> to
> > > the
> > > > > > queue
> > > > > > > and the time consumer picks it up.
> > > > > > >
> > > > > > > On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede <
> > > > > neha.narkh...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Can you give more information about the performance test?
> Which
> > > > test?
> > > > > > >> Which
> > > > > > >> queue? How did you measure the dequeue latency.
> > > > > > >>
> > > > > > >> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia <
> > > > > mohitanch...@gmail.com>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >> > I am running a performance test and from what I am seeing is
> > > that
> > > > > > >> messages
> > > > > > >> > are taking about 100ms to pop from the queue itself and
> hence
> > > > making
> > > > > > the
> > > > > > >> > test slow. I am looking for pointers of how I can
> troubleshoot
> > > > this
> > > > > > >> issue.
> > > > > > >> >
> > > > > > >> > There seems to be plenty of CPU and IO available. I am
> running
> > > 22
> > > > > > >> producers
> > > > > > >> > and 22 consumers in the same group.
> > > > > > >> >
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>
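
Guozhang's and Jay's suggestions above amount to lowering the consumer's
long-poll timeout. A minimal sketch of the consumer properties involved,
assuming a local ZooKeeper and a hypothetical group id (the connection
values are placeholders, not taken from the thread):

```java
import java.util.Properties;

public class LowLatencyConsumerProps {
    // Build 0.8 high-level consumer properties with a 1 ms fetch wait,
    // as suggested in the thread. Connection values are placeholders.
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "latency-test");            // placeholder
        // The default fetch.wait.max.ms is 100 ms, which matches the
        // observed ~100 ms dequeue latency; drop it for the experiment.
        props.put("fetch.wait.max.ms", "1");
        // Return as soon as a single byte is available instead of batching.
        props.put("fetch.min.bytes", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("fetch.wait.max.ms"));
    }
}
```

These properties would then be passed to
Consumer.createJavaConsumerConnector(new ConsumerConfig(props)). Jun's
parallel suggestion applies the same idea broker-side: add
replica.fetch.wait.max.ms with a small value to each broker's
server.properties (it is a broker setting, so it does not appear in the
stock file until you add it).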


Re: Performance issues

2014-10-21 Thread Mohit Anchlia
Is this a parameter I need to set on the Kafka server or on the client side?
Also, can you help point out which one exactly is the consumer max wait time
in this list?

https://kafka.apache.org/08/configuration.html

On Tue, Oct 21, 2014 at 11:35 AM, Jay Kreps  wrote:

> There was a bug that could lead to the fetch request from the consumer
> hitting it's timeout instead of being immediately triggered by the produce
> request. To see if you are effected by that set you consumer max wait time
> to 1 ms and see if the latency drops to 1 ms (or, alternately, try with
> trunk and see if that fixes the problem).
>
> The reason I suspect this problem is because the default timeout in the
> java consumer is 100ms.
>
> -Jay
>
> On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia 
> wrote:
>
> > This is the version I am using: kafka_2.10-0.8.1.1
> >
> > I think this is fairly recent version
> > On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps  wrote:
> >
> > > What version of Kafka is this? Can you try the same test against trunk?
> > We
> > > fixed a couple of latency related bugs which may be the cause.
> > >
> > > -Jay
> > >
> > > On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > > wrote:
> > >
> > > > It's consistently close to 100ms which makes me believe that there
> are
> > > some
> > > > settings that I might have to tweak, however, I am not sure how to
> > > confirm
> > > > that assumption :)
> > > > On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia <
> mohitanch...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > I have a java test that produces messages and then consumer
> consumers
> > > it.
> > > > > Consumers are active all the time. There is 1 consumer for 1
> > producer.
> > > I
> > > > am
> > > > > measuring the time between the message is successfully written to
> the
> > > > queue
> > > > > and the time consumer picks it up.
> > > > >
> > > > > On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede <
> > > neha.narkh...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> Can you give more information about the performance test? Which
> > test?
> > > > >> Which
> > > > >> queue? How did you measure the dequeue latency.
> > > > >>
> > > > >> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > I am running a performance test and from what I am seeing is
> that
> > > > >> messages
> > > > >> > are taking about 100ms to pop from the queue itself and hence
> > making
> > > > the
> > > > >> > test slow. I am looking for pointers of how I can troubleshoot
> > this
> > > > >> issue.
> > > > >> >
> > > > >> > There seems to be plenty of CPU and IO available. I am running
> 22
> > > > >> producers
> > > > >> > and 22 consumers in the same group.
> > > > >> >
> > > > >>
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: Performance issues

2014-10-21 Thread Mohit Anchlia
This is the version I am using: kafka_2.10-0.8.1.1

I think this is a fairly recent version.
On Tue, Oct 21, 2014 at 10:57 AM, Jay Kreps  wrote:

> What version of Kafka is this? Can you try the same test against trunk? We
> fixed a couple of latency related bugs which may be the cause.
>
> -Jay
>
> On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia 
> wrote:
>
> > It's consistently close to 100ms which makes me believe that there are
> some
> > settings that I might have to tweak, however, I am not sure how to
> confirm
> > that assumption :)
> > On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia 
> > wrote:
> >
> > > I have a java test that produces messages and then consumer consumers
> it.
> > > Consumers are active all the time. There is 1 consumer for 1 producer.
> I
> > am
> > > measuring the time between the message is successfully written to the
> > queue
> > > and the time consumer picks it up.
> > >
> > > On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede <
> neha.narkh...@gmail.com>
> > > wrote:
> > >
> > >> Can you give more information about the performance test? Which test?
> > >> Which
> > >> queue? How did you measure the dequeue latency.
> > >>
> > >> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia <
> mohitanch...@gmail.com>
> > >> wrote:
> > >>
> > >> > I am running a performance test and from what I am seeing is that
> > >> messages
> > >> > are taking about 100ms to pop from the queue itself and hence making
> > the
> > >> > test slow. I am looking for pointers of how I can troubleshoot this
> > >> issue.
> > >> >
> > >> > There seems to be plenty of CPU and IO available. I am running 22
> > >> producers
> > >> > and 22 consumers in the same group.
> > >> >
> > >>
> > >
> > >
> >
>


Re: Performance issues

2014-10-21 Thread Mohit Anchlia
It's consistently close to 100 ms, which makes me believe that there are some
settings that I might have to tweak; however, I am not sure how to confirm
that assumption :)
On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia 
wrote:

> I have a java test that produces messages and then consumer consumers it.
> Consumers are active all the time. There is 1 consumer for 1 producer. I am
> measuring the time between the message is successfully written to the queue
> and the time consumer picks it up.
>
> On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede 
> wrote:
>
>> Can you give more information about the performance test? Which test?
>> Which
>> queue? How did you measure the dequeue latency.
>>
>> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia 
>> wrote:
>>
>> > I am running a performance test and from what I am seeing is that
>> messages
>> > are taking about 100ms to pop from the queue itself and hence making the
>> > test slow. I am looking for pointers of how I can troubleshoot this
>> issue.
>> >
>> > There seems to be plenty of CPU and IO available. I am running 22
>> producers
>> > and 22 consumers in the same group.
>> >
>>
>
>


Re: Performance issues

2014-10-21 Thread Mohit Anchlia
I have a java test that produces messages and then a consumer consumes them.
Consumers are active all the time. There is 1 consumer for 1 producer. I am
measuring the time between when a message is successfully written to the
queue and when the consumer picks it up.
On Tue, Oct 21, 2014 at 8:32 AM, Neha Narkhede 
wrote:

> Can you give more information about the performance test? Which test? Which
> queue? How did you measure the dequeue latency.
>
> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia 
> wrote:
>
> > I am running a performance test and from what I am seeing is that
> messages
> > are taking about 100ms to pop from the queue itself and hence making the
> > test slow. I am looking for pointers of how I can troubleshoot this
> issue.
> >
> > There seems to be plenty of CPU and IO available. I am running 22
> producers
> > and 22 consumers in the same group.
> >
>


Performance issues

2014-10-20 Thread Mohit Anchlia
I am running a performance test, and from what I am seeing, messages are
taking about 100 ms to pop from the queue itself, hence making the test
slow. I am looking for pointers on how I can troubleshoot this issue.

There seems to be plenty of CPU and IO available. I am running 22 producers
and 22 consumers in the same group.
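
One way to measure the enqueue-to-dequeue latency described here is to stamp
each message with its send time and have the consumer compute the delta on
arrival. A sketch of the idea, using an in-process BlockingQueue as a
stand-in for the broker (an assumption for illustration; with Kafka the same
timestamped payload would travel as the message value):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LatencyProbe {
    // Producer side: prefix the payload with the send timestamp.
    static void send(BlockingQueue<String> queue, String body)
            throws InterruptedException {
        queue.put(System.nanoTime() + "|" + body);
    }

    // Consumer side: parse the timestamp back out and compute the latency.
    static long receiveLatencyMs(BlockingQueue<String> queue)
            throws InterruptedException {
        String msg = queue.take();
        long sentAt = Long.parseLong(msg.split("\\|", 2)[0]);
        return (System.nanoTime() - sentAt) / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(16);
        send(q, "hello");
        System.out.println("latency-ms=" + receiveLatencyMs(q));
    }
}
```

Recording the timestamps at two points — when the producer's send returns
and when the consumer takes the message — splits the observed latency
between the producer-to-broker and broker-to-consumer tiers.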


Re: Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
Thanks for the info. I see there are tons of parameters, but is there a
place that lists some important performance-specific parameters?
On Fri, Oct 17, 2014 at 2:43 PM, Gwen Shapira  wrote:

> If I understand correctly (and I'll be happy if someone who knows more
> will jump in and correct me):
>
> The Sync/Async part is not between the producer and the broker. Its
> between you and the producer. The Sync producer takes your message and
> immediately contacts the broker, sends the message, either wait for
> acks or not and returns. The Async producer takes your message and
> immediately returns. The producer will send the message to the broker
> at some time later, batching multiple requests for efficiency and
> throughput.
>
> So yeah, I think you got it mostly right. Just note that the producer
> doesn't wait on .send, the producer executes the send - either
> returning immediately (if async) or when it managed to contact the
> broker (if sync).
>
> Gwen
>
> On Fri, Oct 17, 2014 at 4:38 PM, Mohit Anchlia 
> wrote:
> > My understanding of sync is that producer waits on .send until Kafka
> > receives the message. And async means it just dispatches the message
> > without any gurantees that message is delivered. Did I get that part
> right?
> > On Fri, Oct 17, 2014 at 1:28 PM, Gwen Shapira 
> wrote:
> >
> >> Sorry if I'm confusing you :)
> >>
> >> Kafka 0.8.1.1 has two producers sync and async. You are using the sync
> >> producer without waiting for acks. I hope this helps?
> >>
> >> Regardless, did you check if the partition got created? are you able
> >> to produce messages? are you able to consume them?
> >>
> >> Gwen
> >>
> >> On Fri, Oct 17, 2014 at 4:13 PM, Mohit Anchlia 
> >> wrote:
> >> > Still don't understand the difference. If it's not waiting for the ack
> >> then
> >> > doesn't it make async?
> >> > On Fri, Oct 17, 2014 at 12:55 PM,  wrote:
> >> >
> >> >> Its using the sync producer without waiting for any broker to
> >> acknowledge
> >> >> the write.  This explains the lack of errors you are seeing.
> >> >>
> >> >> —
> >> >> Sent from Mailbox
> >> >>
> >> >> On Fri, Oct 17, 2014 at 3:15 PM, Mohit Anchlia <
> mohitanch...@gmail.com>
> >> >> wrote:
> >> >>
> >> >> > Little confused :) From one of the examples I am using property
> >> >> > request.required.acks=0,
> >> >> > I thought this sets the producer to be async?
> >> >> > On Fri, Oct 17, 2014 at 11:59 AM, Gwen Shapira <
> gshap...@cloudera.com
> >> >
> >> >> > wrote:
> >> >> >> 0.8.1.1 producer is Sync by default, and you can set
> producer.type to
> >> >> >> async if needed.
> >> >> >>
> >> >> >> On Fri, Oct 17, 2014 at 2:57 PM, Mohit Anchlia <
> >> mohitanch...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Thanks! How can I tell if I am using async producer? I thought
> all
> >> the
> >> >> >> > sends are async in nature
> >> >> >> > On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira <
> >> gshap...@cloudera.com
> >> >> >
> >> >> >> > wrote:
> >> >> >> >
> >> >> >> >> If you have "auto.create.topics.enable" set to "true"
> (default),
> >> >> >> >> producing to a topic creates it.
> >> >> >> >>
> >> >> >> >> Its a bit tricky because the "send" that creates the topic can
> >> fail
> >> >> >> >> with "leader not found" or similar issue. retrying few times
> will
> >> >> >> >> eventually succeed as the topic gets created and the leader
> gets
> >> >> >> >> elected.
> >> >> >> >>
> >> >> >> >> Is it possible that you are not getting errors because you are
> >> using
> >> >> >> >> async producer?
> >> >> >> >>
> >> >> >> >> Also "no messages are delivered" can have many causes. Check if
> >> the
> >> >> >> >> topic exists using:
> >> >> >> >> bin/kafka-topics.sh --list --zookeeper localhost:2181
> >> >> >> >>
> >> >> >> >> Perhaps the topic was created and the issue is elsewhere (the
> >> >> consumer
> >> >> >> >> is a usual suspect! perhaps look in the FAQ for tips with that
> >> issue)
> >> >> >> >>
> >> >> >> >> Gwen
> >> >> >> >>
> >> >> >> >> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia <
> >> >> mohitanch...@gmail.com
> >> >> >> >
> >> >> >> >> wrote:
> >> >> >> >> > Is Kafka supposed to throw exception if topic doesn't exist?
> It
> >> >> >> appears
> >> >> >> >> > that there is no exception thrown even though no messages are
> >> >> >> delivered
> >> >> >> >> and
> >> >> >> >> > there are errors logged in Kafka logs.
> >> >> >> >>
> >> >> >>
> >> >>
> >>
>
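
Gwen's point above is that sync/async dispatch (producer.type) and broker
acknowledgment (request.required.acks) are independent knobs in the 0.8
producer. A sketch contrasting the two configurations, using plain
Properties (the broker address is a placeholder):

```java
import java.util.Properties;

public class ProducerModes {
    // Sync dispatch, no broker acknowledgment: send() contacts the broker
    // inline but never waits for an ack, so failures surface silently.
    public static Properties syncNoAck() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder
        props.put("producer.type", "sync");        // the 0.8.1.1 default
        props.put("request.required.acks", "0");   // fire and forget
        return props;
    }

    // Async dispatch: send() returns immediately and a background thread
    // batches writes; acks=1 still waits for the leader on each request.
    public static Properties asyncLeaderAck() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder
        props.put("producer.type", "async");
        props.put("request.required.acks", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(syncNoAck().getProperty("producer.type"));
    }
}
```

Either Properties object would be handed to the 0.8
kafka.javaapi.producer.Producer via a ProducerConfig; only producer.type
decides whether send() blocks on the network call.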


Re: Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
My understanding of sync is that the producer waits on .send until Kafka
receives the message, and async means it just dispatches the message
without any guarantees that the message is delivered. Did I get that part
right?
On Fri, Oct 17, 2014 at 1:28 PM, Gwen Shapira  wrote:

> Sorry if I'm confusing you :)
>
> Kafka 0.8.1.1 has two producers sync and async. You are using the sync
> producer without waiting for acks. I hope this helps?
>
> Regardless, did you check if the partition got created? are you able
> to produce messages? are you able to consume them?
>
> Gwen
>
> On Fri, Oct 17, 2014 at 4:13 PM, Mohit Anchlia 
> wrote:
> > Still don't understand the difference. If it's not waiting for the ack
> then
> > doesn't it make async?
> > On Fri, Oct 17, 2014 at 12:55 PM,  wrote:
> >
> >> Its using the sync producer without waiting for any broker to
> acknowledge
> >> the write.  This explains the lack of errors you are seeing.
> >>
> >> —
> >> Sent from Mailbox
> >>
> >> On Fri, Oct 17, 2014 at 3:15 PM, Mohit Anchlia 
> >> wrote:
> >>
> >> > Little confused :) From one of the examples I am using property
> >> > request.required.acks=0,
> >> > I thought this sets the producer to be async?
> >> > On Fri, Oct 17, 2014 at 11:59 AM, Gwen Shapira  >
> >> > wrote:
> >> >> 0.8.1.1 producer is Sync by default, and you can set producer.type to
> >> >> async if needed.
> >> >>
> >> >> On Fri, Oct 17, 2014 at 2:57 PM, Mohit Anchlia <
> mohitanch...@gmail.com>
> >> >> wrote:
> >> >> > Thanks! How can I tell if I am using async producer? I thought all
> the
> >> >> > sends are async in nature
> >> >> > On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira <
> gshap...@cloudera.com
> >> >
> >> >> > wrote:
> >> >> >
> >> >> >> If you have "auto.create.topics.enable" set to "true" (default),
> >> >> >> producing to a topic creates it.
> >> >> >>
> >> >> >> Its a bit tricky because the "send" that creates the topic can
> fail
> >> >> >> with "leader not found" or similar issue. retrying few times will
> >> >> >> eventually succeed as the topic gets created and the leader gets
> >> >> >> elected.
> >> >> >>
> >> >> >> Is it possible that you are not getting errors because you are
> using
> >> >> >> async producer?
> >> >> >>
> >> >> >> Also "no messages are delivered" can have many causes. Check if
> the
> >> >> >> topic exists using:
> >> >> >> bin/kafka-topics.sh --list --zookeeper localhost:2181
> >> >> >>
> >> >> >> Perhaps the topic was created and the issue is elsewhere (the
> >> consumer
> >> >> >> is a usual suspect! perhaps look in the FAQ for tips with that
> issue)
> >> >> >>
> >> >> >> Gwen
> >> >> >>
> >> >> >> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia <
> >> mohitanch...@gmail.com
> >> >> >
> >> >> >> wrote:
> >> >> >> > Is Kafka supposed to throw exception if topic doesn't exist? It
> >> >> appears
> >> >> >> > that there is no exception thrown even though no messages are
> >> >> delivered
> >> >> >> and
> >> >> >> > there are errors logged in Kafka logs.
> >> >> >>
> >> >>
> >>
>


Re: Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
Still don't understand the difference. If it's not waiting for the ack,
then doesn't that make it async?
On Fri, Oct 17, 2014 at 12:55 PM,  wrote:

> Its using the sync producer without waiting for any broker to acknowledge
> the write.  This explains the lack of errors you are seeing.
>
> —
> Sent from Mailbox
>
> On Fri, Oct 17, 2014 at 3:15 PM, Mohit Anchlia 
> wrote:
>
> > Little confused :) From one of the examples I am using property
> > request.required.acks=0,
> > I thought this sets the producer to be async?
> > On Fri, Oct 17, 2014 at 11:59 AM, Gwen Shapira 
> > wrote:
> >> 0.8.1.1 producer is Sync by default, and you can set producer.type to
> >> async if needed.
> >>
> >> On Fri, Oct 17, 2014 at 2:57 PM, Mohit Anchlia 
> >> wrote:
> >> > Thanks! How can I tell if I am using async producer? I thought all the
> >> > sends are async in nature
> >> > On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira  >
> >> > wrote:
> >> >
> >> >> If you have "auto.create.topics.enable" set to "true" (default),
> >> >> producing to a topic creates it.
> >> >>
> >> >> Its a bit tricky because the "send" that creates the topic can fail
> >> >> with "leader not found" or similar issue. retrying few times will
> >> >> eventually succeed as the topic gets created and the leader gets
> >> >> elected.
> >> >>
> >> >> Is it possible that you are not getting errors because you are using
> >> >> async producer?
> >> >>
> >> >> Also "no messages are delivered" can have many causes. Check if the
> >> >> topic exists using:
> >> >> bin/kafka-topics.sh --list --zookeeper localhost:2181
> >> >>
> >> >> Perhaps the topic was created and the issue is elsewhere (the
> consumer
> >> >> is a usual suspect! perhaps look in the FAQ for tips with that issue)
> >> >>
> >> >> Gwen
> >> >>
> >> >> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia <
> mohitanch...@gmail.com
> >> >
> >> >> wrote:
> >> >> > Is Kafka supposed to throw exception if topic doesn't exist? It
> >> appears
> >> >> > that there is no exception thrown even though no messages are
> >> delivered
> >> >> and
> >> >> > there are errors logged in Kafka logs.
> >> >>
> >>
>


Re: Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
A little confused :) In one of the examples I am using the property
request.required.acks=0;
I thought this sets the producer to be async?
On Fri, Oct 17, 2014 at 11:59 AM, Gwen Shapira 
wrote:

> 0.8.1.1 producer is Sync by default, and you can set producer.type to
> async if needed.
>
> On Fri, Oct 17, 2014 at 2:57 PM, Mohit Anchlia 
> wrote:
> > Thanks! How can I tell if I am using an async producer? I thought all
> > the sends are async in nature.
> > On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira 
> > wrote:
> >
> >> If you have "auto.create.topics.enable" set to "true" (default),
> >> producing to a topic creates it.
> >>
> >> It's a bit tricky because the "send" that creates the topic can fail
> >> with a "leader not found" or similar issue. Retrying a few times will
> >> eventually succeed as the topic gets created and the leader gets
> >> elected.
> >>
> >> Is it possible that you are not getting errors because you are using
> >> an async producer?
> >>
> >> Also "no messages are delivered" can have many causes. Check if the
> >> topic exists using:
> >> bin/kafka-topics.sh --list --zookeeper localhost:2181
> >>
> >> Perhaps the topic was created and the issue is elsewhere (the consumer
> >> is a usual suspect! perhaps look in the FAQ for tips with that issue)
> >>
> >> Gwen
> >>
> >> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia  >
> >> wrote:
> >> > Is Kafka supposed to throw an exception if a topic doesn't exist? It
> appears
> >> > that there is no exception thrown even though no messages are
> delivered
> >> and
> >> > there are errors logged in Kafka logs.
> >>
>
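For what it's worth, in the 0.8.x producer these are two independent settings: producer.type selects the sync vs. async send path, while request.required.acks only controls how many broker acknowledgments are required before a send is considered successful. A sketch of a producer.properties (host/port illustrative):

```properties
# Broker(s) to bootstrap topic metadata from
metadata.broker.list=localhost:9092
# sync (the 0.8.1.1 default) or async; this setting, not
# request.required.acks, controls whether sends are asynchronous
producer.type=sync
# 0 = don't wait for an ack, 1 = leader ack, -1 = all in-sync replicas
request.required.acks=1
```

So request.required.acks=0 leaves the producer in sync mode; it only means the producer does not wait for the broker to acknowledge each message.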


Re: Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
Thanks! How can I tell if I am using an async producer? I thought all the
sends are async in nature.
On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira 
wrote:

> If you have "auto.create.topics.enable" set to "true" (default),
> producing to a topic creates it.
>
> It's a bit tricky because the "send" that creates the topic can fail
> with a "leader not found" or similar issue. Retrying a few times will
> eventually succeed as the topic gets created and the leader gets
> elected.
>
> Is it possible that you are not getting errors because you are using
> an async producer?
>
> Also "no messages are delivered" can have many causes. Check if the
> topic exists using:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
>
> Perhaps the topic was created and the issue is elsewhere (the consumer
> is a usual suspect! perhaps look in the FAQ for tips with that issue)
>
> Gwen
>
> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia 
> wrote:
> > Is Kafka supposed to throw an exception if a topic doesn't exist? It appears
> > that there is no exception thrown even though no messages are delivered
> and
> > there are errors logged in Kafka logs.
>
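The "retry a few times" advice above can be sketched as a plain retry loop. This is illustrative only: RetrySend, withRetries, and the simulated failure are hypothetical names, and a real 0.8 producer would fail with Kafka-specific exceptions (e.g. kafka.common.LeaderNotAvailableException) rather than the RuntimeException simulated here.

```java
import java.util.concurrent.Callable;

// Sketch of retrying a send until topic auto-creation finishes and a
// leader is elected. The send operation is simulated with a Callable.
public class RetrySend {
    static <T> T withRetries(Callable<T> op, int maxAttempts, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {      // e.g. "leader not found"
                last = e;
                Thread.sleep(backoffMs); // back off before retrying
            }
        }
        throw last;                      // give up after maxAttempts
    }

    public static void main(String[] args) throws Exception {
        // Simulated send: fails twice (leader not yet elected), then succeeds.
        final int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("leader not found");
            return "sent";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: sent after 3 attempts
    }
}
```

Pre-creating the topic with bin/kafka-topics.sh avoids the need for this first-send retry entirely.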


Topic doesn't exist exception

2014-10-17 Thread Mohit Anchlia
Is Kafka supposed to throw an exception if a topic doesn't exist? It appears
that there is no exception thrown even though no messages are delivered and
there are errors logged in Kafka logs.


Missing jars

2014-10-15 Thread Mohit Anchlia
I added the following dependency to my pom file; however, after I add the
dependency I get errors:



<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
</dependency>




Errors:

ArtifactTransferException: Failure to transfer
com.sun.jdmk:jmxtools:jar:1.2.1 from

Missing artifact com.sun.jmx:jmxri:jar:1.2.1
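Not from this thread, but a commonly reported cause: kafka_2.10 0.8.1.1 declares dependencies on com.sun.jdmk:jmxtools and com.sun.jmx:jmxri, which are not available in Maven Central. A typical workaround is to exclude them (they back an optional JMX reporter):

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```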


Re: Error running example

2014-10-14 Thread Mohit Anchlia
Could somebody help shed some light on why my commands might be hanging?
What's the easiest way to monitor and debug this problem?




On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia 
wrote:

> I am new to Kafka and I just installed Kafka. I am getting the following
> error. Zookeeper seems to be running.
>
> [ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
> bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
> details.
> [2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:setData cxid:0x37
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/config/topics/test Error:KeeperErrorCode = NoNode for
> /config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x38
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,568] INFO Topic creation
> {"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
> [2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
> with 2 partitions and replication factor 1 is successful!
> (kafka.server.KafkaApis)
> [2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
>
>
> [2014-10-13 20:04:40,658] WARN Error while fetching metadata
> [{TopicMetadata for topic test ->
> No partition metadata for topic test due to
> kafka.common.LeaderNotAvailableException}] for topic [test]: class
> kafka.common.LeaderNotAvailableException
> (kafka.producer.BrokerPartitionInfo)
>
>
>
>
> [2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x43
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
> /brokers/topics/test/partitions/1
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x44
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
> /brokers/topics/test/partitions
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
> [2014-10-13 20:04:40,678] WARN Error while fetching metadata
> [{TopicMetadata for topic test ->
> No partition metadata for topic test due to
> kafka.common.LeaderNotAvailableException}] for topic [test]: class
> kafka.common.LeaderNotAvailableException
> (kafka.producer.BrokerPartitionInfo)
> [2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
> partition due to: Failed to fetch topic metadata for topic: test
> (kafka.producer.async.DefaultEventHandler)
>


Re: Error running example

2014-10-13 Thread Mohit Anchlia
I ran it again but it seems to be hanging:


[2014-10-13 20:43:36,341] INFO Closing socket connection to /10.231.154.117.
(kafka.network.Processor)
[ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.

[2014-10-13 20:43:59,397] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)


On Mon, Oct 13, 2014 at 5:29 PM, Jun Rao  wrote:

> Is that error transient or persistent?
>
> Thanks,
>
> Jun
>
> On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia 
> wrote:
>
> > I am new to Kafka and I just installed Kafka. I am getting the following
> > error. Zookeeper seems to be running.
> >
> > [ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
> > bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
> > SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> > SLF4J: Defaulting to no-operation (NOP) logger implementation
> > SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
> further
> > details.
> > [2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:setData cxid:0x37
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/config/topics/test Error:KeeperErrorCode = NoNode for
> > /config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x38
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,568] INFO Topic creation
> > {"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
> > [2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
> > with 2 partitions and replication factor 1 is successful!
> > (kafka.server.KafkaApis)
> > [2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
> > (kafka.network.Processor)
> >
> >
> > [2014-10-13 20:04:40,658] WARN Error while fetching metadata
> > [{TopicMetadata for topic test ->
> > No partition metadata for topic test due to
> > kafka.common.LeaderNotAvailableException}] for topic [test]: class
> > kafka.common.LeaderNotAvailableException
> > (kafka.producer.BrokerPartitionInfo)
> >
> >
> >
> >
> > [2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x43
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
> > /brokers/topics/test/partitions/1
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x44
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
> > /brokers/topics/test/partitions
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
> > (kafka.network.Processor)
> > [2014-10-13 20:04:40,678] WARN Error while fetching metadata
> [{TopicMetadata
> > for topic test ->
> > No partition metadata for topic test due to
> > kafka.common.LeaderNotAvailableException}] for topic [test]: class
> > kafka.common.LeaderNotAvailableException
> > (kafka.producer.BrokerPartitionInfo)
> > [2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
> > partition due to: Failed to fetch topic metadata for topic: test
> > (kafka.producer.async.DefaultEventHandler)
> >
>


Error running example

2014-10-13 Thread Mohit Anchlia
I am new to Kafka and I just installed Kafka. I am getting the following
error. Zookeeper seems to be running.

[ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
[2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:setData cxid:0x37
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/config/topics/test Error:KeeperErrorCode = NoNode for
/config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x38
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,568] INFO Topic creation
{"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
[2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
with 2 partitions and replication factor 1 is successful!
(kafka.server.KafkaApis)
[2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)


[2014-10-13 20:04:40,658] WARN Error while fetching metadata
[{TopicMetadata for topic test ->
No partition metadata for topic test due to
kafka.common.LeaderNotAvailableException}] for topic [test]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)




[2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x43
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
/brokers/topics/test/partitions/1
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x44
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
/brokers/topics/test/partitions
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
[2014-10-13 20:04:40,678] WARN Error while fetching metadata [{TopicMetadata
for topic test ->
No partition metadata for topic test due to
kafka.common.LeaderNotAvailableException}] for topic [test]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)
[2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
partition due to: Failed to fetch topic metadata for topic: test
(kafka.producer.async.DefaultEventHandler)


Re: Installation and Java Examples

2014-10-13 Thread Mohit Anchlia
By request/reply pattern I meant this:


http://www.eaipatterns.com/RequestReply.html


In this pattern the client posts a request on one queue and the server sends
the response on another queue. The JMSReplyTo header on a JMS message is
commonly used to identify the response queue.
On Fri, Oct 10, 2014 at 4:58 PM, Harsha  wrote:

> Mohit,
> Kafka uses Gradle to build the project; check the README.md
> under the source dir for details on how to build and run unit
> tests.
> You can find consumer and producer api here
> http://kafka.apache.org/documentation.html and also more details on
> consumer http://kafka.apache.org/documentation.html#theconsumer
> 1) Follow request/reply pattern
>  In case you are looking for producers to wait for an acknowledgment
>  from the broker that the message was successfully received, yes, there
>  is a configurable option "request.required.acks" in the producer config.
> 2) Normal pub/sub with multi-threaded consumers
> Here is a producer example
>
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example
>and consumer
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> 3) multi-threaded consumers with different group ids.
>you can use the same consumer group example with a different group
>id to run it
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
>
> "I see in some examples that iterator is being used, is there also a
> notion of listeners or is everything
> iterators?"
>The Kafka consumer works by making fetch requests to the brokers. There
>is no need to place a while loop over the iterator;
>ConsumerIterator will take care of it for you. It uses long polling
>to listen for messages on the broker and blocks those fetch requests
>until there is data available.
>
> hope that helps.
>
> -Harsha
> On Fri, Oct 10, 2014, at 12:32 PM, Mohit Anchlia wrote:
> > I am new to Kafka and have very little familiarity with Scala. I see that
> > the build requires the "sbt" tool, but do I also need to install Scala
> > separately?
> > Is there detailed documentation on software requirements for the broker
> > machine?
> >
> > I am also looking for 3 different types of Java examples: 1) follow the
> > request/reply pattern, 2) normal pub/sub with multi-threaded consumers,
> > 3) multi-threaded consumers with different group ids. I am trying to
> > understand how the code works for these 3 scenarios.
> >
> > Last question is around consumers. I see in some examples that an
> > iterator is being used; is there also a notion of listeners, or is
> > everything iterators? In other words, in the real world would we place
> > the iterator in a while loop to continuously grab messages? It would be
> > helpful to see some practical examples.
>
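For scenario 3, only the group.id changes between runs: consumers in different groups each receive every message (pub/sub semantics), while consumers sharing a group.id split the partitions between them (queue-like semantics). A sketch of the 0.8 high-level consumer properties (host/port and group names illustrative):

```properties
# ZooKeeper ensemble used by the 0.8 high-level consumer
zookeeper.connect=localhost:2181
# Change this value (e.g. to group-b) to run a second, independent group
group.id=group-a
```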


Installation and Java Examples

2014-10-10 Thread Mohit Anchlia
I am new to Kafka and have very little familiarity with Scala. I see that the
build requires the "sbt" tool, but do I also need to install Scala separately?
Is there detailed documentation on software requirements for the broker
machine?

I am also looking for 3 different types of Java examples: 1) follow the
request/reply pattern, 2) normal pub/sub with multi-threaded consumers, 3)
multi-threaded consumers with different group ids. I am trying to
understand how the code works for these 3 scenarios.

Last question is around consumers. I see in some examples that an iterator is
being used; is there also a notion of listeners, or is everything iterators?
In other words, in the real world would we place the iterator in a while loop
to continuously grab messages? It would be helpful to see some practical
examples.