Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Daniel Compton
Hi Gerrit

Thanks for your contribution, I'm sure everyone here appreciates it, especially 
Clojure developers like myself. I do have one question: what are the guarantees 
you offer to users of your library under failures, particularly when Redis 
fails?

--
Daniel

> On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren  
> wrote:
> 
> Hi,
> 
> Just thought I'd put this out for the kafka community to see (if anyone
> finds it useful, great!!).
> 
> Kafka-fast is a 100% pure Clojure implementation for kafka, but it is not
> just meant for Clojure, because it has a Java API wrapper that can be used
> from Java, Groovy, JRuby or Scala.
> 
> This library does not wrap the Scala client; instead it communicates
> directly with the kafka cluster.
> 
> It also uses redis for offsets instead of zookeeper, and removes the
> one-consumer-per-partition limit that all other kafka libraries have, by
> dividing offsets into logical work units and running consumption via a
> queue (list) in redis.
> 
> https://github.com/gerritjvv/kafka-fast
> 
> Any constructive critique is more than welcome :)


Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Gerrit Jansen van Vuuren
Hi Daniel,

At the moment redis is a SPOF (single point of failure) in the
architecture, but you can set up replication, and I'm seriously looking
into using redis cluster to eliminate this.
   Some docs that point to this are:
   http://redis.io/topics/cluster-tutorial
   http://redis.io/topics/sentinel


Consumer:

Consumption is split into logical work units of, by default, 10K offsets.

If redis fails, the messages currently read will all be consumed, while the
redis connection threads go mad trying to reconnect.
No data loss should occur (albeit I'm still setting up scenarios in which I
will test this).
The consumer threads read work assignments from a redis list
using brpoplpush (see http://redis.io/commands/BRPOPLPUSH), which atomically
pushes the current work unit onto a "working" queue. If any errors are
encountered while consuming, the current work unit is not marked done;
on recovery (startup) these "working" queues are scanned and any work units
found are placed back onto the primary queue, ready for consumption.
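The queue-plus-"working"-queue recovery scheme described above can be sketched with in-memory deques standing in for the two Redis lists. This is an illustration only: the class and method names are hypothetical, and the real library does the atomic move with BRPOPLPUSH against Redis, not in memory.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the reliable-queue pattern: a work unit is atomically moved
// from the primary queue to a "working" queue before processing (Redis
// does this with BRPOPLPUSH); on clean completion it is removed from the
// working queue, and on startup any units still found in the working
// queue are pushed back onto the primary queue.
public class WorkUnitQueue {
    final Deque<String> primary = new ArrayDeque<>();
    final Deque<String> working = new ArrayDeque<>();

    // Analogue of BRPOPLPUSH primary working: pop from primary, push to
    // working, return the unit (or null if the primary queue is empty).
    String take() {
        String unit = primary.pollLast();
        if (unit != null) working.addFirst(unit);
        return unit;
    }

    // Mark a unit done: remove it from the working queue.
    void done(String unit) {
        working.remove(unit);
    }

    // On (re)start: any unit left in "working" was interrupted
    // mid-consume, so place it back on the primary queue.
    void recover() {
        String unit;
        while ((unit = working.pollLast()) != null) {
            primary.addFirst(unit);
        }
    }
}
```

The key property is that a unit is never only "in flight": it is always on exactly one of the two lists, so a crash between take() and done() leaves it recoverable.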

If an error message is received from kafka, e.g. error-code 6, the consumer
will try to recreate the metadata for the work-unit; while this is done the
work-unit resides on a "working" queue.

Another idea I'm going to play with is disabling consumption per
machine/topic, to allow a machine to completely drain its messages in a
normal operational way, so that a safe shutdown can be scheduled for any
machine without having to rely on multiple threads synching etc.

Producer:

All sends are async, and a broker is randomly selected.
The producer has a retry cache (implemented using http://www.mapdb.org/):
if any failure occurs during sending (network or connection errors), the
message is saved and retried.
If acks == 1 and a failure response is received from kafka, the message is
retried N times.
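The save-and-retry behaviour might look roughly like the sketch below. The names here are hypothetical stand-ins, not the library's actual API, and the real implementation persists the cache with MapDB so it survives a restart; this version keeps it in memory only.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Rough sketch of a retry cache: failed messages are stored and retried
// up to maxRetries times, after which they are dropped.
public class RetryCache {
    static final class Entry {
        final String topic; final byte[] msg; int attempts;
        Entry(String topic, byte[] msg) { this.topic = topic; this.msg = msg; }
    }

    private final Deque<Entry> cache = new ArrayDeque<>();
    private final int maxRetries;

    RetryCache(int maxRetries) { this.maxRetries = maxRetries; }

    // Try to send; on failure, save the message for later retry.
    void send(String topic, byte[] msg, Predicate<byte[]> trySend) {
        if (!trySend.test(msg)) cache.addLast(new Entry(topic, msg));
    }

    // Re-attempt everything currently cached; re-queue failures that have
    // retries left. Returns the number of messages still pending.
    int retryAll(Predicate<byte[]> trySend) {
        int pending = cache.size();
        for (int i = 0; i < pending; i++) {
            Entry e = cache.pollFirst();
            if (trySend.test(e.msg)) continue;            // sent this time
            if (++e.attempts < maxRetries) cache.addLast(e); // retry later
        }
        return cache.size();
    }
}
```

A caller would invoke retryAll periodically (or on reconnect) with the real network send as the predicate.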

The low-level producer API allows you to bypass all this logic and send to
any broker you want; the message send is still async, but you can
wait/block on a response from the server and then react as required.
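That "async send you can optionally block on" idea can be sketched with a CompletableFuture; the send function below is a hypothetical stand-in, not the library's actual low-level API.

```java
import java.util.concurrent.CompletableFuture;

// Sketch: the send returns immediately with a future; the caller may
// ignore it (fire-and-forget) or block on it and react to the response.
public class AsyncSend {
    // Stand-in for a real network send; a real implementation would
    // complete the future when the broker's response (or an error) arrives.
    static CompletableFuture<String> send(String topic, byte[] msg) {
        return CompletableFuture.supplyAsync(() -> "ack:" + topic);
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> f = send("my-topic", "Hi".getBytes("UTF-8"));
        // Fire-and-forget would stop here; to react to the server's
        // response, block with get() or attach a callback with thenAccept.
        System.out.println(f.get());
    }
}
```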

Auditing:

There is an event channel (clojure.core.async) that receives consumed
work-units, with a status field for errors.
This can be used to audit consumers; I normally send this data to a
riemann instance, or just to disk.


Redis is fast and stable, and I've found that even running it on a
mixed-service box (e.g. alongside mysql etc.) works fine, but this doesn't
mean that the physical hardware underneath can't fail.
In two years of production use of redis I've had only one outage
(due to hardware failure).


Hope this answers some of your questions.
Any ideas for improving this are welcome; feel free to contribute and/or
create issue tickets with possible features ;)

Regards,


On Mon, Oct 13, 2014 at 9:58 AM, Daniel Compton 
wrote:

> Hi Gerrit
>
> Thanks for your contribution, I'm sure everyone here appreciates it,
> especially Clojure developers like myself. I do have one question: what are
> the guarantees you offer to users of your library under failures,
> particularly when Redis fails?
>
> --
> Daniel
>
> > On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
> >
> > Hi,
> >
> > Just thought I'll put this out for the kafka community to see (if anyone
> > finds it useful great!!).
> >
> > Kafka-fast is 100% pure clojure implementation for kafka, but not just
> > meant for clojure because it has a Java API wrapper that can be used from
> > Java, Groovy, JRuby or Scala.
> >
> > This library does not wrap scala instead it directly communicates with
> the
> > kafka cluster.
> >
> > It also uses redis for offsets instead of zookeeper and removes the one
> > consumer per partition limit that all other kafka libraries have, by
> diving
> > offsets into logical work units and running consumption via a queue(list)
> > in redis.
> >
> > https://github.com/gerritjvv/kafka-fast
> >
> > Any constructive critique is more than welcome :)
>


Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Steven Schlansker
Couple of mostly-uninformed comments inline,


On Oct 13, 2014, at 2:00 AM, Gerrit Jansen van Vuuren  
wrote:

> Hi Daniel,
> 
> At the moment redis is a spof in the architecture, but you can setup
> replication and I'm seriously looking into using redis cluster to eliminate
> this.
>   Some docs that point to this are:
>   http://redis.io/topics/cluster-tutorial
>   http://redis.io/topics/sentinel

There's some evidence that redis clusters are *not* good for managing state
in the way that you are using it:

http://aphyr.com/posts/283-call-me-maybe-redis

> If you can’t tolerate data loss, Redis Sentinel (and by extension Redis 
> Cluster) is not safe for use as:
> 
>   • A lock service
>   • A queue
>   • A database


>> 
>>> On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
>> gerrit...@gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> Just thought I'll put this out for the kafka community to see (if anyone
>>> finds it useful great!!).
>>> 
>>> Kafka-fast is 100% pure clojure implementation for kafka, but not just
>>> meant for clojure because it has a Java API wrapper that can be used from
>>> Java, Groovy, JRuby or Scala.

One thing that frustrates me with the Kafka library is that despite its
claim that the Scala code is interoperable with Java, it really isn't. You
end up having to work around the Scala compiler 'magic' in increasingly
bizarre ways, e.g. default arguments:

kafka = new KafkaServer(createConfig(), KafkaServer.init$default$2());

which is both magical and fragile. I don't know whether Clojure is the same
way; I just want to point out that if you don't take particular care of us
old-fart Java nuts, you'll lose us quickly :)

Another example, from your docs:

Object connector = Producer.createConnector(new BrokerConf("192.168.4.40", 
9092));
Producer.sendMsg(connector, "my-topic", "Hi".getBytes("UTF-8"));

This is downright bizarre to me, I would instead expect:

Producer connector = Producer.createConnector(...)
connector.sendMsg("my-topic", bytes)

which is IMO shorter, cleaner, and easier for testing (especially mocking).


Hope some of my ravings are helpful,
Steven



Create topic programmatically

2014-10-13 Thread hsy...@gmail.com
Hi guys,

Besides TopicCommand, which I believe is not intended for creating topics
programmatically, is there any other way to automate topic creation in
code? Thanks!

Best,
Siyuan


Re: Create topic programmatically

2014-10-13 Thread Jonathan Weeks
Sure — take a look at the kafka unit tests as well as admin.AdminUtils, e.g.:

import kafka.admin.AdminUtils
AdminUtils.createTopic(zkClient, topicNameString, 10, 1)

Best Regards,

-Jonathan

On Oct 13, 2014, at 9:58 AM, hsy...@gmail.com wrote:

> Hi guys,
> 
> Besides TopicCommand, which I believe is not provided to create topic
> programmatically, is there any other way to automate creating topic in
> code? Thanks!
> 
> Best,
> Siyuan



Re: Create topic programmatically

2014-10-13 Thread Guozhang Wang
A side note: since AdminUtils.createTopic is async, you may want to use a
waitUntil-style check to confirm the topic has been created after calling
it.
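That check can be a generic poll-until-true helper. A sketch follows; the helper name is made up, and the existence check you pass in (for example an AdminUtils lookup appropriate to your Kafka version) is an assumption, not a specific API being recommended here.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Poll a condition (e.g. "topic metadata exists in ZooKeeper") until it
// holds or a timeout expires; returns whether the condition ever held.
public class WaitUntil {
    static boolean waitUntil(BooleanSupplier cond, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) return true;
            TimeUnit.MILLISECONDS.sleep(pollMs);
        }
        return cond.getAsBoolean(); // one last check at the deadline
    }
}
```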

On Mon, Oct 13, 2014 at 10:07 AM, Jonathan Weeks 
wrote:

> Sure — take a look at the kafka unit tests as well as admin.AdminUtils ,
> e.g.:
>
> import kafka.admin.AdminUtils
>AdminUtils.createTopic(zkClient, topicNameString, 10, 1)
>
> Best Regards,
>
> -Jonathan
>
> On Oct 13, 2014, at 9:58 AM, hsy...@gmail.com wrote:
>
> > Hi guys,
> >
> > Besides TopicCommand, which I believe is not provided to create topic
> > programmatically, is there any other way to automate creating topic in
> > code? Thanks!
> >
> > Best,
> > Siyuan
>
>


-- 
-- Guozhang


Re: Installation and Java Examples

2014-10-13 Thread Mohit Anchlia
By request/reply pattern I meant this:


http://www.eaipatterns.com/RequestReply.html


In this pattern the client posts a request on one queue and the server
sends the response on another queue. The jmsReplyTo property on a JMS
message is commonly used to identify the response queue name.
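The pattern can be sketched with in-memory BlockingQueues standing in for the two real queues; this is an illustration of the mechanics (correlation id, separate reply channel), not any particular messaging API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of request/reply over two queues: the client posts a request
// carrying a correlation id (in JMS, jmsReplyTo also names the reply
// queue); the server posts its answer, tagged with the same id, on the
// reply queue, where the client blocks for it.
public class RequestReply {
    static final class Message {
        final String correlationId, body;
        Message(String correlationId, String body) {
            this.correlationId = correlationId;
            this.body = body;
        }
    }

    // One full round trip: a server thread answers a single request;
    // the client sends it and blocks for the matching reply.
    static String roundTrip(String correlationId, String body)
            throws InterruptedException {
        BlockingQueue<Message> requests = new ArrayBlockingQueue<>(16);
        BlockingQueue<Message> replies  = new ArrayBlockingQueue<>(16);

        Thread server = new Thread(() -> {
            try {
                Message req = requests.take();
                // Echo back uppercased, keeping the correlation id.
                replies.put(new Message(req.correlationId,
                                        req.body.toUpperCase()));
            } catch (InterruptedException ignored) { }
        });
        server.start();

        requests.put(new Message(correlationId, body));
        Message reply = replies.take(); // real code would match on the id
        server.join();
        return reply.correlationId + ":" + reply.body;
    }
}
```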
On Fri, Oct 10, 2014 at 4:58 PM, Harsha  wrote:

> Mohit,
> Kafka uses gradle to build the project, check the README.md
> under source dir for details on how to build and run unit
> tests.
> You can find consumer and producer api here
> http://kafka.apache.org/documentation.html and also more details on
> consumer http://kafka.apache.org/documentation.html#theconsumer
> 1) Follow request/reply pattern
>  In case you are looking for producers waiting for a reply from the
>  broker confirming that the message was successfully received: yes, there
>  is a configurable option "request.required.acks" in the producer config.
> 2) Normal pub/sub with multi-threaded consumers
> Here is a producer example
>
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example
>and consumer
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> 3) multi threaded consumers with different group ids.
>you can use the same consumer group example and use different group
>id to run it
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
>
> "I see in some examples that iterator is being used, is there also a
> notion of listeners or is everything
> iterators?"
>Kafka consumer works by making fetch requests to the brokers. There
>is no need to place a while loop over the iterator;
>ConsumerIterator will take care of it for you. It uses long polling
>to listen for messages on the broker and blocks those fetch requests
>until there is data available.
>
> hope that helps.
>
> -Harsha
> On Fri, Oct 10, 2014, at 12:32 PM, Mohit Anchlia wrote:
> > I am new to Kafka and very little familiarity with Scala. I see that the
> > build requires "sbt" tool, but do I also need to install Scala
> > separately?
> > Is there a detailed documentation on software requirements on the broker
> > machine.
> >
> > I am also looking for 3 different types of java examples 1) Follow
> > request/reply pattern 2) Normal pub/sub with multi-threaded consumers 3)
> > multi threaded consumers with different group ids. I am trying to
> > understand how the code works for these 2 scenarios.
> >
> > Last question is around consumers. I see in some examples that iterator
> > is
> > being used, is there also a notion of listeners or is everything
> > iterators?
> > In other words in real world would we place the iterator in a while loop
> > to
> > continuously grab messages? It would be helpful to see some practical
> > examples.
>


Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Gerrit Jansen van Vuuren
Hi Steven,

Redis:

  I've had a discussion on redis today, and one architecture that does come
up is using a master-slave setup: if the master fails, have the
application start writing to the slave. Writing to a slave is possible in
redis, albeit you cannot fail back to the master, because writes to a slave
will not be automatically replicated to the master.

  Any suggestions are welcome.

Java:

  I like Java, have been with it a long time: did groovy, went back to
Java; tried scala, went back to Java; then tried clojure and got sold on it
(mainly because it fits my way of thinking). OtherJVMLang -> Java interop
is always better than Java -> OtherJVMLang interop. Clojure interop to Java
is really great, but from Java to Clojure you need to use things like:
  RT.var("kafka-clj.consumer.node", "read-msg!").invoke(connector,
timeoutMillis)


 I've refactored the Java API to be more "Java like" (plus made Consumer
Iterable) and made a new release, "2.3.9"; see the updated examples, and
also have a look at
https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/doc/vagrant.md
.

My idea for this library is that from Java/Groovy etc. you do not need to
know about Clojure behind the scenes (barring the stack traces): you just
get Java, and obviously if you're using Clojure you just get Clojure ;).

Cheers,
 Gerrit


On Mon, Oct 13, 2014 at 6:52 PM, Steven Schlansker <
sschlans...@opentable.com> wrote:

> Couple of mostly-uninformed comments inline,
>
>
> On Oct 13, 2014, at 2:00 AM, Gerrit Jansen van Vuuren 
> wrote:
>
> > Hi Daniel,
> >
> > At the moment redis is a spof in the architecture, but you can setup
> > replication and I'm seriously looking into using redis cluster to
> eliminate
> > this.
> >   Some docs that point to this are:
> >   http://redis.io/topics/cluster-tutorial
> >   http://redis.io/topics/sentinel
>
> There's some evidence that redis clusters are *not* good for managing state
> in the way that you are using it:
>
> http://aphyr.com/posts/283-call-me-maybe-redis
>
> > If you can’t tolerate data loss, Redis Sentinel (and by extension Redis
> Cluster) is not safe for use as:
> >
> >   • A lock service
> >   • A queue
> >   • A database
>
>
> >>
> >>> On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
> >> gerrit...@gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> Just thought I'll put this out for the kafka community to see (if
> anyone
> >>> finds it useful great!!).
> >>>
> >>> Kafka-fast is 100% pure clojure implementation for kafka, but not just
> >>> meant for clojure because it has a Java API wrapper that can be used
> from
> >>> Java, Groovy, JRuby or Scala.
>
> One thing that frustrates me with the Kafka library is that despite it
> claiming
> that the Scala code is interoperable with Java, it really isn't.  You end
> up
> having to work around the Scala compiler 'magic' in increasingly bizarre
> ways,
> e.g. default arguments:
>
> kafka = new KafkaServer(createConfig(), KafkaServer.init$default$2());
>
> which is both magical and fragile.  I don't know whether Clojure is the
> same way,
> just want to point out that if you don't take particular care of us old
> fart Java
> nuts, you'll lose us quickly :)
>
> Another example, from your docs:
>
> Object connector = Producer.createConnector(new BrokerConf("192.168.4.40",
> 9092));
> Producer.sendMsg(connector, "my-topic", "Hi".getBytes("UTF-8"));
>
> This is downright bizarre to me, I would instead expect:
>
> Producer connector = Producer.createConnector(...)
> connector.sendMsg("my-topic", bytes)
>
> which is IMO shorter, cleaner, and easier for testing (especially mocking).
>
>
> Hope some of my ravings are helpful,
> Steven
>
>


Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Gwen Shapira
Out of curiosity: did you choose Redis because ZooKeeper is not well
supported in Clojure? Or were there other reasons?

On Mon, Oct 13, 2014 at 2:04 PM, Gerrit Jansen van Vuuren
 wrote:
> Hi Steven,
>
> Redis:
>
>   I've had a discussion on redis today, and one architecture that does come
> up is using a master slave, then if the master fails the have the
> application start writing to the slave. Writing to a slave is possible in
> redis, albeit you cannot fail back to the master because writes to a slave
> will not be automatically replicated to the master.
>
>   Any suggestions are welcome.
>
> Java:
>
>   I like Java, have been with it a long time, did groovy, went back to
> Java, tried scala, went back to Java, then tried clojure and got sold on it
> (mainly because it fits my way of thinking). OtherJVMLang -> Java interop
> is always better than Java -> OtherJVMLang interop. Clojure interop to Java
> is really great, but Java to Clojure you need to use things like:
>   RT.var("kafka-clj.consumer.node", "read-msg!").invoke(connector,
> timeoutMillis))
>
>
>  I've refactored the Java API to be more "Java like" (plus made Consumer
> Iterable), and made a new release "2.3.9", see the updated examples, also
> have a look at
> https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/doc/vagrant.md
> .
>
> My idea for this library is that from Java/Groovy etc you do not need to
> know about Clojure behind the scenes (barring the stacktraces), you just
> get Java, and obviously if your using Clojure you just get Clojure ;).
>
> Cheers,
>  Gerrit
>
>
> On Mon, Oct 13, 2014 at 6:52 PM, Steven Schlansker <
> sschlans...@opentable.com> wrote:
>
>> Couple of mostly-uninformed comments inline,
>>
>>
>> On Oct 13, 2014, at 2:00 AM, Gerrit Jansen van Vuuren 
>> wrote:
>>
>> > Hi Daniel,
>> >
>> > At the moment redis is a spof in the architecture, but you can setup
>> > replication and I'm seriously looking into using redis cluster to
>> eliminate
>> > this.
>> >   Some docs that point to this are:
>> >   http://redis.io/topics/cluster-tutorial
>> >   http://redis.io/topics/sentinel
>>
>> There's some evidence that redis clusters are *not* good for managing state
>> in the way that you are using it:
>>
>> http://aphyr.com/posts/283-call-me-maybe-redis
>>
>> > If you can’t tolerate data loss, Redis Sentinel (and by extension Redis
>> Cluster) is not safe for use as:
>> >
>> >   • A lock service
>> >   • A queue
>> >   • A database
>>
>>
>> >>
>> >>> On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
>> >> gerrit...@gmail.com> wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> Just thought I'll put this out for the kafka community to see (if
>> anyone
>> >>> finds it useful great!!).
>> >>>
>> >>> Kafka-fast is 100% pure clojure implementation for kafka, but not just
>> >>> meant for clojure because it has a Java API wrapper that can be used
>> from
>> >>> Java, Groovy, JRuby or Scala.
>>
>> One thing that frustrates me with the Kafka library is that despite it
>> claiming
>> that the Scala code is interoperable with Java, it really isn't.  You end
>> up
>> having to work around the Scala compiler 'magic' in increasingly bizarre
>> ways,
>> e.g. default arguments:
>>
>> kafka = new KafkaServer(createConfig(), KafkaServer.init$default$2());
>>
>> which is both magical and fragile.  I don't know whether Clojure is the
>> same way,
>> just want to point out that if you don't take particular care of us old
>> fart Java
>> nuts, you'll lose us quickly :)
>>
>> Another example, from your docs:
>>
>> Object connector = Producer.createConnector(new BrokerConf("192.168.4.40",
>> 9092));
>> Producer.sendMsg(connector, "my-topic", "Hi".getBytes("UTF-8"));
>>
>> This is downright bizarre to me, I would instead expect:
>>
>> Producer connector = Producer.createConnector(...)
>> connector.sendMsg("my-topic", bytes)
>>
>> which is IMO shorter, cleaner, and easier for testing (especially mocking).
>>
>>
>> Hope some of my ravings are helpful,
>> Steven
>>
>>


Re: kafka java api (written in 100% clojure)

2014-10-13 Thread Gerrit Jansen van Vuuren
Hi Gwen,

For other reasons :). I've used zookeeper (from Java) in the past to store
similar information and have found it doesn't work well enough for this use
case in particular. Zk is great for config and locks, but not for data that
changes fast and can grow beyond a certain limit. In particular, my worst
experience with zk has been sync data becoming too big to flush to disk,
and snapshots taking too long (if ever) to be read by quorum nodes after a
leader election; sometimes the only real option was to delete all data from
disk and start a fresh zookeeper cluster.

I've also moved away from the consumer = topic/partition constraint by
using a list/queue in redis (agreeably something I could also do in zk),
but redis again just feels like a better fit for it, especially with things
like brpoplpush.


On Tue, Oct 14, 2014 at 12:09 AM, Gwen Shapira 
wrote:

> Out of curiosity: did you choose Redis because ZooKeeper is not well
> supported in Clojure? Or were there other reasons?
>
> On Mon, Oct 13, 2014 at 2:04 PM, Gerrit Jansen van Vuuren
>  wrote:
> > Hi Steven,
> >
> > Redis:
> >
> >   I've had a discussion on redis today, and one architecture that does
> come
> > up is using a master slave, then if the master fails the have the
> > application start writing to the slave. Writing to a slave is possible in
> > redis, albeit you cannot fail back to the master because writes to a
> slave
> > will not be automatically replicated to the master.
> >
> >   Any suggestions are welcome.
> >
> > Java:
> >
> >   I like Java, have been with it a long time, did groovy, went back to
> > Java, tried scala, went back to Java, then tried clojure and got sold on
> it
> > (mainly because it fits my way of thinking). OtherJVMLang -> Java interop
> > is always better than Java -> OtherJVMLang interop. Clojure interop to
> Java
> > is really great, but Java to Clojure you need to use things like:
> >   RT.var("kafka-clj.consumer.node", "read-msg!").invoke(connector,
> > timeoutMillis))
> >
> >
> >  I've refactored the Java API to be more "Java like" (plus made Consumer
> > Iterable), and made a new release "2.3.9", see the updated examples, also
> > have a look at
> >
> https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/doc/vagrant.md
> > .
> >
> > My idea for this library is that from Java/Groovy etc you do not need to
> > know about Clojure behind the scenes (barring the stacktraces), you just
> > get Java, and obviously if your using Clojure you just get Clojure ;).
> >
> > Cheers,
> >  Gerrit
> >
> >
> > On Mon, Oct 13, 2014 at 6:52 PM, Steven Schlansker <
> > sschlans...@opentable.com> wrote:
> >
> >> Couple of mostly-uninformed comments inline,
> >>
> >>
> >> On Oct 13, 2014, at 2:00 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com>
> >> wrote:
> >>
> >> > Hi Daniel,
> >> >
> >> > At the moment redis is a spof in the architecture, but you can setup
> >> > replication and I'm seriously looking into using redis cluster to
> >> eliminate
> >> > this.
> >> >   Some docs that point to this are:
> >> >   http://redis.io/topics/cluster-tutorial
> >> >   http://redis.io/topics/sentinel
> >>
> >> There's some evidence that redis clusters are *not* good for managing
> state
> >> in the way that you are using it:
> >>
> >> http://aphyr.com/posts/283-call-me-maybe-redis
> >>
> >> > If you can’t tolerate data loss, Redis Sentinel (and by extension
> Redis
> >> Cluster) is not safe for use as:
> >> >
> >> >   • A lock service
> >> >   • A queue
> >> >   • A database
> >>
> >>
> >> >>
> >> >>> On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
> >> >> gerrit...@gmail.com> wrote:
> >> >>>
> >> >>> Hi,
> >> >>>
> >> >>> Just thought I'll put this out for the kafka community to see (if
> >> anyone
> >> >>> finds it useful great!!).
> >> >>>
> >> >>> Kafka-fast is 100% pure clojure implementation for kafka, but not
> just
> >> >>> meant for clojure because it has a Java API wrapper that can be used
> >> from
> >> >>> Java, Groovy, JRuby or Scala.
> >>
> >> One thing that frustrates me with the Kafka library is that despite it
> >> claiming
> >> that the Scala code is interoperable with Java, it really isn't.  You
> end
> >> up
> >> having to work around the Scala compiler 'magic' in increasingly bizarre
> >> ways,
> >> e.g. default arguments:
> >>
> >> kafka = new KafkaServer(createConfig(), KafkaServer.init$default$2());
> >>
> >> which is both magical and fragile.  I don't know whether Clojure is the
> >> same way,
> >> just want to point out that if you don't take particular care of us old
> >> fart Java
> >> nuts, you'll lose us quickly :)
> >>
> >> Another example, from your docs:
> >>
> >> Object connector = Producer.createConnector(new
> BrokerConf("192.168.4.40",
> >> 9092));
> >> Producer.sendMsg(connector, "my-topic", "Hi".getBytes("UTF-8"));
> >>
> >> This is downright bizarre to me, I would instead expect:
> >>
> >> Producer connector = Producer.createConnector(...)
> >> connector.sendMsg("my-topic", bytes)

Error running example

2014-10-13 Thread Mohit Anchlia
I am new to Kafka and have just installed it. I am getting the following
error; Zookeeper seems to be running.

[ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
[2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:setData cxid:0x37
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/config/topics/test Error:KeeperErrorCode = NoNode for
/config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x38
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,568] INFO Topic creation
{"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
[2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
with 2 partitions and replication factor 1 is successful!
(kafka.server.KafkaApis)
[2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)


[2014-10-13 20:04:40,658] WARN Error while fetching metadata
[{TopicMetadata for topic test ->
No partition metadata for topic test due to
kafka.common.LeaderNotAvailableException}] for topic [test]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)




[2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x43
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
/brokers/topics/test/partitions/1
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
processing sessionid:0x1490bea316f type:create cxid:0x44
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
/brokers/topics/test/partitions
(org.apache.zookeeper.server.PrepRequestProcessor)
[2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
[2014-10-13 20:04:40,678] WARN Error while fetching metadata [{TopicMetadata
for topic test ->
No partition metadata for topic test due to
kafka.common.LeaderNotAvailableException}] for topic [test]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)
[2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
partition due to: Failed to fetch topic metadata for topic: test
(kafka.producer.async.DefaultEventHandler)


Re: Error running example

2014-10-13 Thread Jun Rao
Is that error transient or persistent?

Thanks,

Jun

On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia 
wrote:

> I am new to Kafka and I just installed Kafka. I am getting the following
> error. Zookeeper seems to be running.
>
> [ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
> bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
> details.
> [2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:setData cxid:0x37
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/config/topics/test Error:KeeperErrorCode = NoNode for
> /config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x38
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,568] INFO Topic creation
> {"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
> [2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
> with 2 partitions and replication factor 1 is successful!
> (kafka.server.KafkaApis)
> [2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
>
>
> [2014-10-13 20:04:40,658] WARN Error while fetching metadata
> [{TopicMetadata for topic test ->
> No partition metadata for topic test due to
> kafka.common.LeaderNotAvailableException}] for topic [test]: class
> kafka.common.LeaderNotAvailableException
> (kafka.producer.BrokerPartitionInfo)
>
>
>
>
> [2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x43
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
> /brokers/topics/test/partitions/1
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
> processing sessionid:0x1490bea316f type:create cxid:0x44
> zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
> /brokers/topics/test/partitions
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
> [
>
>
> 2014-10-13 20:04:40,678] WARN Error while fetching metadata [{TopicMetadata
> for topic test ->
> No partition metadata for topic test due to
> kafka.common.LeaderNotAvailableException}] for topic [test]: class
> kafka.common.LeaderNotAvailableException
> (kafka.producer.BrokerPartitionInfo)
> [2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
> partition due to: Failed to fetch topic metadata for topic: test
> (kafka.producer.async.DefaultEventHandler)
>


Kafka - NotLeaderForPartitionException / LeaderNotAvailableException

2014-10-13 Thread Abraham Jacob
Hi All,

I have an 8-node Kafka cluster (broker.id 1..8). On this cluster I have a
topic "wordcount", which has 8 partitions with a replication factor of 3.

So a describe of topic wordcount
# bin/kafka-topics.sh --describe --zookeeper
tr-pan-hclstr-08.amers1b.ciscloud:2181/kafka/kafka-clstr-01 --topic
wordcount


Topic:wordcount PartitionCount:8ReplicationFactor:3 Configs:
Topic: wordcountPartition: 0Leader: 6   Replicas: 7,6,8
Isr: 6,7,8
Topic: wordcountPartition: 1Leader: 7   Replicas: 8,7,1
Isr: 7
Topic: wordcountPartition: 2Leader: 8   Replicas: 1,8,2
Isr: 8
Topic: wordcountPartition: 3Leader: 3   Replicas: 2,1,3
Isr: 3
Topic: wordcountPartition: 4Leader: 3   Replicas: 3,2,4
Isr: 3,2,4
Topic: wordcountPartition: 5Leader: 3   Replicas: 4,3,5
Isr: 3,5
Topic: wordcountPartition: 6Leader: 6   Replicas: 5,4,6
Isr: 6,5
Topic: wordcountPartition: 7Leader: 6   Replicas: 6,5,7
Isr: 6,5,7

I wrote a simple producer to write to this topic. However when running I
get these messages in the logs -

2014-10-13 19:02:32,459 INFO [main] kafka.client.ClientUtils$: Fetching
metadata from broker id:0,host:tr-pan-hclstr-11.amers1b.ciscloud,port:9092
with correlation id 0 for 1 topic(s) Set(wordcount)
2014-10-13 19:02:32,464 INFO [main] kafka.producer.SyncProducer: Connected
to tr-pan-hclstr-11.amers1b.ciscloud:9092 for producing
2014-10-13 19:02:32,551 INFO [main] kafka.producer.SyncProducer:
Disconnecting from tr-pan-hclstr-11.amers1b.ciscloud:9092
2014-10-13 19:02:32,611 WARN [main] kafka.producer.BrokerPartitionInfo:
Error while fetching metadata  partition 4  leader: none  replicas: 3
(tr-pan-hclstr-13.amers1b.ciscloud:9092),2
(tr-pan-hclstr-12.amers1b.ciscloud:9092),4
(tr-pan-hclstr-14.amers1b.ciscloud:9092)  isr:  isUnderReplicated:
true for topic partition [wordcount,4]: [class
kafka.common.LeaderNotAvailableException]
2014-10-13 19:02:33,505 INFO [main] kafka.producer.SyncProducer: Connected
to tr-pan-hclstr-15.amers1b.ciscloud:9092 for producing
2014-10-13 19:02:33,543 WARN [main]
kafka.producer.async.DefaultEventHandler: Produce request with correlation
id 20611 failed due to [wordcount,5]:
kafka.common.NotLeaderForPartitionException,[wordcount,6]:
kafka.common.NotLeaderForPartitionException,[wordcount,7]:
kafka.common.NotLeaderForPartitionException
2014-10-13 19:02:33,694 INFO [main] kafka.producer.SyncProducer: Connected
to tr-pan-hclstr-18.amers1b.ciscloud:9092 for producing
2014-10-13 19:02:33,725 WARN [main]
kafka.producer.async.DefaultEventHandler: Produce request with correlation
id 20612 failed due to [wordcount,0]:
kafka.common.NotLeaderForPartitionException
2014-10-13 19:02:33,861 INFO [main] kafka.producer.SyncProducer: Connected
to tr-pan-hclstr-11.amers1b.ciscloud:9092 for producing
2014-10-13 19:02:33,983 WARN [main]
kafka.producer.async.DefaultEventHandler: Failed to send data since
partitions [wordcount,4] don't have a leader


Obviously something is terribly wrong... I am quite new to Kafka, so these
messages don't make much sense to me, except that they are telling me that
some of the partitions don't have a leader.

Could somebody be kind enough to explain the above message?

A few more questions -

(1) How does one get into this state?
(2) How can I get out of this state?
(3) I have set auto.leader.rebalance.enable=true on all brokers. Shouldn't
the partitions be balanced across all the brokers?
(4) I can see that the Kafka service is running on all 8 nodes (I used
ps ax -o "pid pgid args" and can see the Kafka Java process on each).
(5) Is there a way I can force a rebalance?
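
(For reference on (5): Kafka 0.8.x ships a preferred-replica election tool
that asks the controller to hand leadership back to the first, "preferred",
replica of each partition. A sketch, assuming the same zookeeper path as the
describe command above:

```shell
# Trigger preferred-replica leader election for all partitions;
# leadership moves back to the first replica in each replica list
bin/kafka-preferred-replica-election.sh \
  --zookeeper tr-pan-hclstr-08.amers1b.ciscloud:2181/kafka/kafka-clstr-01
```

This only helps once the preferred replicas are alive and back in the ISR.)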



Regards,
Jacob



-- 
~


Re: kafka-web-console

2014-10-13 Thread Sa Li
All, 

Again, I am still unable to install; it seems to be stuck on ivy.lock. Any
ideas how to continue?

thanks

Alec

On Oct 12, 2014, at 7:38 PM, Sa Li  wrote:

> Hi



Re: kafka-web-console

2014-10-13 Thread Joe Stein
rm -f /root/.ivy2/.sbt.ivy.lock
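
(That lock file is typically left behind when an sbt/play build is killed
mid-resolution. A slightly more portable sketch, assuming the default ivy
home; adjust the path if the build runs as another user:

```shell
# Stale sbt/ivy lock in the default ivy home (override via IVY_HOME);
# only remove it after confirming no sbt process is still running
LOCK="${IVY_HOME:-$HOME/.ivy2}/.sbt.ivy.lock"
rm -f "$LOCK"
```

)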

On Mon, Oct 13, 2014 at 8:39 PM, Sa Li  wrote:

> All,
>
> Again, I am still unable to install, seems to stuck on ivy.lock, any ideas
> to continue?
>
> thanks
>
> Alec
>
> On Oct 12, 2014, at 7:38 PM, Sa Li  wrote:
>
> > Hi
>
>


Re: Error running example

2014-10-13 Thread Mohit Anchlia
I ran it again but it seems to be hanging:


[2014-10-13 20:43:36,341] INFO Closing socket connection to /10.231.154.117.
(kafka.network.Processor)
[ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.

[2014-10-13 20:43:59,397] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)


On Mon, Oct 13, 2014 at 5:29 PM, Jun Rao  wrote:

> Is that error transient or persistent?
>
> Thanks,
>
> Jun
>
> On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia 
> wrote:
>
> > I am new to Kafka and I just installed Kafka. I am getting the following
> > error. Zookeeper seems to be running.
> >
> > [ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
> > bin/kafka-console-producer.sh  --broker-list localhost:9092 --topic test
> > SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> > SLF4J: Defaulting to no-operation (NOP) logger implementation
> > SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
> further
> > details.
> > [2014-10-13 20:04:40,559] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:setData cxid:0x37
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/config/topics/test Error:KeeperErrorCode = NoNode for
> > /config/topics/test (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,562] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x38
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,568] INFO Topic creation
> > {"version":1,"partitions":{"1":[0],"0":[0]}} (kafka.admin.AdminUtils$)
> > [2014-10-13 20:04:40,574] INFO [KafkaApi-0] Auto creation of topic test
> > with 2 partitions and replication factor 1 is successful!
> > (kafka.server.KafkaApis)
> > [2014-10-13 20:04:40,650] INFO Closing socket connection to /127.0.0.1.
> > (kafka.network.Processor)
> >
> >
> > [2014-10-13 20:04:40,658] WARN Error while fetching metadata
> > [{TopicMetadata for topic test ->
> > No partition metadata for topic test due to
> > kafka.common.LeaderNotAvailableException}] for topic [test]: class
> > kafka.common.LeaderNotAvailableException
> > (kafka.producer.BrokerPartitionInfo)
> >
> >
> >
> >
> > [2014-10-13 20:04:40,661] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x43
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/brokers/topics/test/partitions/1 Error:KeeperErrorCode = NoNode for
> > /brokers/topics/test/partitions/1
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,668] INFO Got user-level KeeperException when
> > processing sessionid:0x1490bea316f type:create cxid:0x44
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> > Path:/brokers/topics/test/partitions Error:KeeperErrorCode = NoNode for
> > /brokers/topics/test/partitions
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2014-10-13 20:04:40,678] INFO Closing socket connection to /127.0.0.1.
> > (kafka.network.Processor)
> > [
> >
> >
> > 2014-10-13 20:04:40,678] WARN Error while fetching metadata
> [{TopicMetadata
> > for topic test ->
> > No partition metadata for topic test due to
> > kafka.common.LeaderNotAvailableException}] for topic [test]: class
> > kafka.common.LeaderNotAvailableException
> > (kafka.producer.BrokerPartitionInfo)
> > [2014-10-13 20:04:40,679] ERROR Failed to collate messages by topic,
> > partition due to: Failed to fetch topic metadata for topic: test
> > (kafka.producer.async.DefaultEventHandler)
> >
>