Re: Kafka Sync Producer threads(ack=0) are blocked

2015-03-27 Thread ankit tyagi
any insight would be useful.

On Tue, Mar 24, 2015 at 11:06 AM, ankit tyagi ankittyagi.mn...@gmail.com
wrote:

 Hi All,

 Currently we are using kafka_2.8.0-0.8.0-beta1 in our production system. I
 am using a sync producer with ack=0 to send events to the broker.

 However, I am seeing that most of my producer threads are blocked.

 jmsListnerTaskExecutor-818 prio=10 tid=0x7f3f5c05a800 nid=0x1719
 waiting for monitor entry [0x7f405935e000]
    java.lang.Thread.State: BLOCKED (on object monitor)
  at
  kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:53)
  - waiting to lock 0x000602358ee8 (a java.lang.Object)
 at kafka.producer.Producer.send(Producer.scala:74)
 at kafka.javaapi.producer.Producer.send(Producer.scala:32)
 at
 com.snapdeal.coms.kafka.KafkaProducer.send(KafkaProducer.java:135)
 at
 com.snapdeal.coms.kafka.KafkaEventPublisher.publishCOMSEventOnESB(KafkaEventPublisher.java:61)
 at
 com.snapdeal.coms.service.EventsPublishingService.publishStateChangeEvent(EventsPublishingService.java:88)
 at
 com.snapdeal.coms.publisher.core.PublisherEventPublishingService.publishUploadIdState(PublisherEventPublishingService.java:46)
 at
 com.snapdeal.coms.publisher.splitter.VendorProductUpdateSplitter.split(VendorProductUpdateSplitter.java:112)
 at sun.reflect.GeneratedMethodAccessor227.invoke(Unknown Source)



 [image: Inline image 1]

 JVisualVM also shows that the producer threads are in a blocked state most
 of the time, though I don't see any exceptions in the Kafka server logs. Any insight?


Re: New Java Producer Client handling unreachable Kafka

2015-03-27 Thread ankit tyagi
Hi Samuel,

you can use the metadata.fetch.timeout.ms property to reduce the blocking
time while the cluster is unreachable. The default value of this property is
1 minute.


waitOnMetadata() is a blocking call that blocks the current thread waiting
for the metadata fetch to succeed before throwing an exception back to the
client.
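
A minimal sketch combining this property with the Callback-based error
handling discussed below (Scala, against the 0.8.2-era new producer API; the
broker address, topic name and the 5-second timeout are illustrative
assumptions, not values from this thread):

import java.util.Properties
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}

object FailFastProducerSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // illustrative address
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  // Wait at most 5 s for metadata instead of the default 1 min, so an
  // unreachable cluster surfaces as a quick failure rather than a long block.
  props.put("metadata.fetch.timeout.ms", "5000")

  val producer = new KafkaProducer[String, String](props)
  try {
    val record = new ProducerRecord[String, String]("my-topic", "key", "value")
    producer.send(record, new Callback {
      // Runs once the broker responds (or the request ultimately fails).
      def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
        if (exception != null) println("send failed: " + exception.getMessage)
    })
  } catch {
    // waitOnMetadata() surfaces its timeout here, as an exception from send().
    case e: Exception => println("metadata fetch failed: " + e.getMessage)
  } finally {
    producer.close()
  }
}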

On Fri, Mar 20, 2015 at 11:29 AM, Samuel Chase samebch...@gmail.com wrote:

 Hello Everyone,

 In the new Java Producer API, the Callback code in
 KafkaProducer.send is run after there is a response from the Kafka
 server. This can be used if some error handling needs to be done based
 on the response.

 When using the new Java Kafka Producer, I've noticed that when the
 Kafka server is down/unreachable, KafkaProducer.send blocks until the
 Kafka server is back up again.

 We've been using the older Scala Producer and when Kafka is
 unreachable it throws an exception after a few retries. This exception
 is caught and then some error handling code is run.

 - What is the recommended way of using the new Java Producer API to
 handle the case where Kafka is unreachable temporarily?

 I don't want to wait until it is reachable again before I know that
 the send failed.

 Any help or advice would be much appreciated.

 Samuel



Re: Dropping support for Scala 2.9.x

2015-03-27 Thread Stevo Slavić
+1 for dropping 2.9.x support

Kind regards,
Stevo Slavic.

On Fri, Mar 27, 2015 at 3:20 PM, Ismael Juma mli...@juma.me.uk wrote:

 Hi all,

 The Kafka build currently includes support for Scala 2.9, which means that
 it cannot take advantage of features introduced in Scala 2.10 or depend on
 libraries that require it.

 This restricts the solutions available while trying to solve existing
 issues. I was browsing JIRA looking for areas to contribute and I quickly
 ran into two issues where this is the case:

 * KAFKA-1351: String.format is very expensive in Scala could be solved
 nicely by using the String interpolation feature introduced in Scala 2.10.

 * KAFKA-1595: Remove deprecated and slower scala JSON parser from
 kafka.consumer.TopicCount could be solved by using an existing JSON
 library, but both jackson-scala and play-json require 2.10 (argonaut
 supports Scala 2.9, but it brings other dependencies like scalaz). We can
 work around this by writing our own code instead of using libraries, of
 course, but it's not ideal.

 Other features like Scala Futures and value classes would also be useful in
 some situations, I would think (for a more extensive list of new features,
 see

 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
 ).

 Another pain point of supporting 2.9.x is that it doubles the number of
 build and test configurations required from 2 to 4 (because the 2.9.x
 series was not necessarily binary compatible).

 A strong argument for maintaining support for 2.9.x was the client library,
 but that has been rewritten in Java.

 It's also worth mentioning that Scala 2.9.1 was released in August 2011
 (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
 any sort since early 2013. Scala 2.10.0, in turn, was released in January
 2013 (over 2 years ago) and 2.10.5, the last planned release in the 2.10.x
 series, has been recently released (so even 2.10.x won't be receiving
 updates any longer).

 All in all, I think it would not be unreasonable to drop support for Scala
 2.9.x in a future release, but I may be missing something. What do others
 think?

 Ismael



Dropping support for Scala 2.9.x

2015-03-27 Thread Ismael Juma
Hi all,

The Kafka build currently includes support for Scala 2.9, which means that
it cannot take advantage of features introduced in Scala 2.10 or depend on
libraries that require it.

This restricts the solutions available while trying to solve existing
issues. I was browsing JIRA looking for areas to contribute and I quickly
ran into two issues where this is the case:

* KAFKA-1351 ("String.format is very expensive in Scala") could be solved
nicely by using the string interpolation feature introduced in Scala 2.10
(see the sketch after this list).

* KAFKA-1595: Remove deprecated and slower scala JSON parser from
kafka.consumer.TopicCount could be solved by using an existing JSON
library, but both jackson-scala and play-json require 2.10 (argonaut
supports Scala 2.9, but it brings other dependencies like scalaz). We can
work around this by writing our own code instead of using libraries, of
course, but it's not ideal.
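
To make the KAFKA-1351 point concrete, here is a minimal sketch (Scala 2.10+,
with illustrative values):

object InterpolationSketch extends App {
  val topic = "events"
  val partition = 3

  // Scala 2.9 style: the format string is parsed at runtime and the
  // arguments are boxed, which is comparatively expensive on hot paths.
  val viaFormat = "Fetching topic %s, partition %d".format(topic, partition)

  // Scala 2.10+ s-interpolator: desugars to simple string concatenation
  // at compile time, with no format-string parsing.
  val viaInterpolation = s"Fetching topic $topic, partition $partition"

  println(viaFormat == viaInterpolation) // true
}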

Other features like Scala Futures and value classes would also be useful in
some situations, I would think (for a more extensive list of new features,
see
http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
).

Another pain point of supporting 2.9.x is that it doubles the number of
build and test configurations required from 2 to 4 (because the 2.9.x
series was not necessarily binary compatible).

A strong argument for maintaining support for 2.9.x was the client library,
but that has been rewritten in Java.

It's also worth mentioning that Scala 2.9.1 was released in August 2011
(more than 3.5 years ago) and the 2.9.x series hasn't received updates of
any sort since early 2013. Scala 2.10.0, in turn, was released in January
2013 (over 2 years ago) and 2.10.5, the last planned release in the 2.10.x
series, has been recently released (so even 2.10.x won't be receiving
updates any longer).

All in all, I think it would not be unreasonable to drop support for Scala
2.9.x in a future release, but I may be missing something. What do others
think?

Ismael


Re: Dropping support for Scala 2.9.x

2015-03-27 Thread Stephen Boesch
+1. I was on a project that ended up not using Kafka, and this was one
reason: many other third-party libraries do not even have 2.9 versions, so
the interdependencies did not work.

2015-03-27 7:34 GMT-07:00 Stevo Slavić ssla...@gmail.com:

 +1 for dropping 2.9.x support

 Kind regards,
 Stevo Slavic.

 On Fri, Mar 27, 2015 at 3:20 PM, Ismael Juma mli...@juma.me.uk wrote:

  Hi all,
 
  The Kafka build currently includes support for Scala 2.9, which means
 that
  it cannot take advantage of features introduced in Scala 2.10 or depend
 on
  libraries that require it.
 
  This restricts the solutions available while trying to solve existing
  issues. I was browsing JIRA looking for areas to contribute and I quickly
  ran into two issues where this is the case:
 
  * KAFKA-1351: String.format is very expensive in Scala could be solved
  nicely by using the String interpolation feature introduced in Scala
 2.10.
 
  * KAFKA-1595: Remove deprecated and slower scala JSON parser from
  kafka.consumer.TopicCount could be solved by using an existing JSON
  library, but both jackson-scala and play-json require 2.10 (argonaut
  supports Scala 2.9, but it brings other dependencies like scalaz). We can
  work around this by writing our own code instead of using libraries, of
  course, but it's not ideal.
 
  Other features like Scala Futures and value classes would also be useful
 in
  some situations, I would think (for a more extensive list of new
 features,
  see
 
 
 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
  ).
 
  Another pain point of supporting 2.9.x is that it doubles the number of
  build and test configurations required from 2 to 4 (because the 2.9.x
  series was not necessarily binary compatible).
 
  A strong argument for maintaining support for 2.9.x was the client
 library,
  but that has been rewritten in Java.
 
  It's also worth mentioning that Scala 2.9.1 was released in August 2011
  (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
  any sort since early 2013. Scala 2.10.0, in turn, was released in January
  2013 (over 2 years ago) and 2.10.5, the last planned release in the
 2.10.x
  series, has been recently released (so even 2.10.x won't be receiving
  updates any longer).
 
  All in all, I think it would not be unreasonable to drop support for
 Scala
  2.9.x in a future release, but I may be missing something. What do others
  think?
 
  Ismael
 



Re: Dropping support for Scala 2.9.x

2015-03-27 Thread Tong Li

+1. But can we also look at this from the deployment-base point of view and
find out how many production deployments are still using 2.9? If there
aren't any, dropping it is really an easy decision.

Thanks

Sent from my iPhone

 On Mar 27, 2015, at 8:21 AM, Ismael Juma mli...@juma.me.uk wrote:

 Hi all,

 The Kafka build currently includes support for Scala 2.9, which means
that
 it cannot take advantage of features introduced in Scala 2.10 or depend
on
 libraries that require it.

 This restricts the solutions available while trying to solve existing
 issues. I was browsing JIRA looking for areas to contribute and I quickly
 ran into two issues where this is the case:

 * KAFKA-1351: String.format is very expensive in Scala could be solved
 nicely by using the String interpolation feature introduced in Scala
2.10.

 * KAFKA-1595: Remove deprecated and slower scala JSON parser from
 kafka.consumer.TopicCount could be solved by using an existing JSON
 library, but both jackson-scala and play-json require 2.10 (argonaut
 supports Scala 2.9, but it brings other dependencies like scalaz). We can
 work around this by writing our own code instead of using libraries, of
 course, but it's not ideal.

 Other features like Scala Futures and value classes would also be useful
in
 some situations, I would think (for a more extensive list of new
features,
 see

http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html

 ).

 Another pain point of supporting 2.9.x is that it doubles the number of
 build and test configurations required from 2 to 4 (because the 2.9.x
 series was not necessarily binary compatible).

 A strong argument for maintaining support for 2.9.x was the client
library,
 but that has been rewritten in Java.

 It's also worth mentioning that Scala 2.9.1 was released in August 2011
 (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
 any sort since early 2013. Scala 2.10.0, in turn, was released in January
 2013 (over 2 years ago) and 2.10.5, the last planned release in the
2.10.x
 series, has been recently released (so even 2.10.x won't be receiving
 updates any longer).

 All in all, I think it would not be unreasonable to drop support for
Scala
 2.9.x in a future release, but I may be missing something. What do others
 think?

 Ismael

Re: Kafka Sync Producer threads(ack=0) are blocked

2015-03-27 Thread Manu Zhang
Hi ankit, I think you could find out which thread holds the lock
0x000602358ee8 that blocks the producer: in the same thread dump, the owning
thread's stack trace will contain a line like "- locked <0x000602358ee8>".

Thanks,
Manu

On Fri, Mar 27, 2015 at 3:11 PM ankit tyagi ankittyagi.mn...@gmail.com
wrote:

 any insight would be useful.

 On Tue, Mar 24, 2015 at 11:06 AM, ankit tyagi ankittyagi.mn...@gmail.com
 wrote:

 Hi All,

 Currently we are using kafka_2.8.0-0.8.0-beta1 in our production system.
 I am using a sync producer with ack=0 to send events to the broker.

 However, I am seeing that most of my producer threads are blocked.

 jmsListnerTaskExecutor-818 prio=10 tid=0x7f3f5c05a800 nid=0x1719
 waiting for monitor entry [0x7f405935e000]
    java.lang.Thread.State: BLOCKED (on object monitor)
  at
  kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:53)
  - waiting to lock 0x000602358ee8 (a java.lang.Object)
 at kafka.producer.Producer.send(Producer.scala:74)
 at kafka.javaapi.producer.Producer.send(Producer.scala:32)
 at
 com.snapdeal.coms.kafka.KafkaProducer.send(KafkaProducer.java:135)
 at
 com.snapdeal.coms.kafka.KafkaEventPublisher.publishCOMSEventOnESB(KafkaEventPublisher.java:61)
 at
 com.snapdeal.coms.service.EventsPublishingService.publishStateChangeEvent(EventsPublishingService.java:88)
 at
 com.snapdeal.coms.publisher.core.PublisherEventPublishingService.publishUploadIdState(PublisherEventPublishingService.java:46)
 at
 com.snapdeal.coms.publisher.splitter.VendorProductUpdateSplitter.split(VendorProductUpdateSplitter.java:112)
 at sun.reflect.GeneratedMethodAccessor227.invoke(Unknown Source)



 [image: Inline image 1]

 JVisualVM also shows that the producer threads are in a blocked state most
 of the time, though I don't see any exceptions in the Kafka server logs.
 Any insight?


Re: Kafka 8.2.1 Offset fetch Request

2015-03-27 Thread Mayuresh Gharat
Hi Madhukar,

I am going through your code now. Let me see what I can find.

Where were you storing your offsets before?
Was it always Zookeeper or was it Kafka?
If it was Zookeeper, the correct way to migrate from zookeeper to kafka
based offsets is this :

1) Config Change :
 - offsets.storage = kafka
 - dual.commit.enabled = true
2) Rolling Bounce
3) Config Change :
 - dual.commit.enabled=false
4) Rolling Bounce.
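
In consumer.properties terms, the two phases would look like this (a sketch
using the documented property names):

# phase 1 (then rolling bounce): offsets go to Kafka, commits mirrored to ZK
offsets.storage=kafka
dual.commit.enabled=true

# phase 2 (then rolling bounce): Kafka only
offsets.storage=kafka
dual.commit.enabled=false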

For more info on Offset Management, you can also refer these slides from
Kafka Meetup:
http://www.slideshare.net/jjkoshy/offset-management-in-kafka


Apart from that, for Kafka-based offsets you don't need a consumer to issue
an offset fetch or offset commit request. You only need to know the groupId:
connect to Kafka and issue a ConsumerMetadataRequest, which will give you the
offset manager for that groupId; you can then issue the fetch or commit
request to that offset manager.
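
A rough sketch of that flow (Scala, using the 0.8.2-era core classes from the
wiki linked above; host, port, group and topic are illustrative, and the
exact signatures may differ slightly between versions, so treat this as an
outline rather than a reference):

import kafka.api.{ConsumerMetadataRequest, ConsumerMetadataResponse, OffsetFetchRequest, OffsetFetchResponse}
import kafka.common.TopicAndPartition
import kafka.network.BlockingChannel

object OffsetFetchSketch extends App {
  val group = "my-group" // illustrative

  // 1. Connect to any broker and ask who the offset manager for the group is.
  var channel = new BlockingChannel("broker-host", 9092,
    BlockingChannel.UseDefaultBufferSize, BlockingChannel.UseDefaultBufferSize, 5000)
  channel.connect()
  channel.send(ConsumerMetadataRequest(group))
  val metadataResponse = ConsumerMetadataResponse.readFrom(channel.receive().buffer)

  // 2. Reconnect to the coordinator and issue the OffsetFetchRequest there.
  metadataResponse.coordinatorOpt.foreach { coordinator =>
    channel.disconnect()
    channel = new BlockingChannel(coordinator.host, coordinator.port,
      BlockingChannel.UseDefaultBufferSize, BlockingChannel.UseDefaultBufferSize, 5000)
    channel.connect()
    // versionId = 1 selects Kafka-based (rather than ZK-based) offset storage.
    channel.send(OffsetFetchRequest(group, Seq(TopicAndPartition("my-topic", 0)), versionId = 1))
    val fetchResponse = OffsetFetchResponse.readFrom(channel.receive().buffer)
    fetchResponse.requestInfo.foreach { case (tp, result) => println(tp + " -> " + result.offset) }
  }
  channel.disconnect()
}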

BTW, we are coming up with an offsetClient soon.

Thanks,

Mayuresh

On Fri, Mar 27, 2015 at 1:53 AM, Madhukar Bharti bhartimadhu...@gmail.com
wrote:

 Hi Mayuresh,

 Please check this
 https://github.com/madhukarbharti/kafka-8.2.1-test/blob/master/src/com/bharti/kafka/offset/OffsetRequester.java
  program.
 Am I making a mistake anywhere?

 Thanks


 On Thu, Mar 26, 2015 at 6:27 PM, Madhukar Bharti bhartimadhu...@gmail.com
  wrote:

 Hi Mayuresh,

 I have tried to fetch the offset using OffsetFetchRequest as given in
 this wiki


 https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka

 But it only works if we set dual.commit.enabled to true and
 offsets.storage to kafka. Otherwise it returns -1.

 Do I need to change anything?


 Thanks in advance!





-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Kafka 8.2.1 Offset fetch Request

2015-03-27 Thread Mayuresh Gharat
Another thing: if you are using SimpleConsumer, it is up to your app to do
the offset management. ZK-based or Kafka-based offsets will work if you are
using the high-level consumer.

Thanks,

Mayuresh

On Fri, Mar 27, 2015 at 9:17 AM, Mayuresh Gharat gharatmayures...@gmail.com
 wrote:

 Hi Madhukar,

 I am going through your code now. Let me see what I can find.

 Where were you storing your offsets before?
 Was it always Zookeeper or was it Kafka?
 If it was Zookeeper, the correct way to migrate from zookeeper to kafka
 based offsets is this :

 1) Config Change :
  - offsets.storage = kafka
  - dual.commit.enabled = true
 2) Rolling Bounce
 3) Config Change :
  - dual.commit.enabled=false
 4) Rolling Bounce.

 For more info on Offset Management, you can also refer these slides from
 Kafka Meetup:
 http://www.slideshare.net/jjkoshy/offset-management-in-kafka


 Apart from that, for Kafka-based offsets you don't need a consumer to issue
 an offset fetch or offset commit request. You only need to know the groupId:
 connect to Kafka and issue a ConsumerMetadataRequest, which will give you the
 offset manager for that groupId; you can then issue the fetch or commit
 request to that offset manager.

 BTW, we are coming up with an offsetClient soon.

 Thanks,

 Mayuresh

 On Fri, Mar 27, 2015 at 1:53 AM, Madhukar Bharti bhartimadhu...@gmail.com
  wrote:

 Hi Mayuresh,

 Please check this
 https://github.com/madhukarbharti/kafka-8.2.1-test/blob/master/src/com/bharti/kafka/offset/OffsetRequester.java
  program.
  Am I making a mistake anywhere?

 Thanks


 On Thu, Mar 26, 2015 at 6:27 PM, Madhukar Bharti 
 bhartimadhu...@gmail.com wrote:

 Hi Mayuresh,

 I have tried to fetch the offset using OffsetFetchRequest as given in
 this wiki


 https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka

 But it only works if we set dual.commit.enabled to true and
 offsets.storage to kafka. Otherwise it returns -1.

 Do I need to change anything?


 Thanks in advance!





 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Kafka 8.2.1 Offset fetch Request

2015-03-27 Thread Mayuresh Gharat
In your case you are issuing an OffsetRequest, not an OffsetFetchRequest. I
know this is a little confusing.

Let me point you to a Scala patch which has a client for fetching and
committing offsets.

I am going to rewrite that in java. Here is the Kafka ticket :

https://issues.apache.org/jira/browse/KAFKA-1013

You can look at the RB and see how it is done. If you have any further
questions I will be happy to answer them.

Thanks,

Mayuresh


On Fri, Mar 27, 2015 at 9:30 AM, Mayuresh Gharat gharatmayures...@gmail.com
 wrote:

 Another thing: if you are using SimpleConsumer, it is up to your app to do
 the offset management. ZK-based or Kafka-based offsets will work if you are
 using the high-level consumer.

 Thanks,

 Mayuresh

 On Fri, Mar 27, 2015 at 9:17 AM, Mayuresh Gharat 
 gharatmayures...@gmail.com wrote:

 Hi Madhukar,

 I am going through your code now. Let me see what I can find.

 Where were you storing your offsets before?
 Was it always Zookeeper or was it Kafka?
 If it was Zookeeper, the correct way to migrate from zookeeper to kafka
 based offsets is this :

 1) Config Change :
  - offsets.storage = kafka
  - dual.commit.enabled = true
 2) Rolling Bounce
 3) Config Change :
  - dual.commit.enabled=false
 4) Rolling Bounce.

 For more info on Offset Management, you can also refer these slides from
 Kafka Meetup:
 http://www.slideshare.net/jjkoshy/offset-management-in-kafka


 Apart from that, for Kafka-based offsets you don't need a consumer to issue
 an offset fetch or offset commit request. You only need to know the groupId:
 connect to Kafka and issue a ConsumerMetadataRequest, which will give you the
 offset manager for that groupId; you can then issue the fetch or commit
 request to that offset manager.

 BTW, we are coming up with an offsetClient soon.

 Thanks,

 Mayuresh

 On Fri, Mar 27, 2015 at 1:53 AM, Madhukar Bharti 
 bhartimadhu...@gmail.com wrote:

 Hi Mayuresh,

 Please check this
 https://github.com/madhukarbharti/kafka-8.2.1-test/blob/master/src/com/bharti/kafka/offset/OffsetRequester.java
  program.
  Am I making a mistake anywhere?

 Thanks


 On Thu, Mar 26, 2015 at 6:27 PM, Madhukar Bharti 
 bhartimadhu...@gmail.com wrote:

 Hi Mayuresh,

 I have tried to fetch the offset using OffsetFetchRequest as given in
 this wiki


 https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka

 But it only works if we set dual.commit.enabled to true and
 offsets.storage to kafka. Otherwise it returns -1.

 Do I need to change anything?


 Thanks in advance!





 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125




 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: 2 kafka cluster sharing same ZK Ensemble.

2015-03-27 Thread Rendy Bambang Junior
Based on the documentation, as long as you define a different ZooKeeper
chroot path in each broker's configuration, it should be OK. Cmiiw.

Disclaimer: myself never tried this scheme.

Rendy
On Mar 28, 2015 2:14 AM, Shrikant Patel spa...@pdxinc.com wrote:

 Can 2 separate Kafka clusters share the same ZK ensemble?
 If yes, how does ZK deal with the 2 clusters having brokers with the same id?

 Thanks,
 Shri


 



Re: 2 kafka cluster sharing same ZK Ensemble.

2015-03-27 Thread Alexander Makarenko
Well, I'm using 1 ZK cluster with 3 Kafka clusters. Chroot works perfectly.
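
As an illustration of why identical broker ids don't collide: each cluster
registers under its own chroot, so the znode paths differ (hosts and paths
below are illustrative):

# cluster A, server.properties
broker.id=1
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-clusterA

# cluster B, server.properties -- same broker.id, different chroot
broker.id=1
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-clusterB

Cluster A's broker registers at /kafka-clusterA/brokers/ids/1 and cluster B's
at /kafka-clusterB/brokers/ids/1, so ZooKeeper never sees a conflict.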

On Fri, Mar 27, 2015 at 10:20 PM Rendy Bambang Junior 
rendy.b.jun...@gmail.com wrote:

 Based on the documentation, as long as you define a different ZooKeeper
 chroot path in each broker's configuration, it should be OK. Cmiiw.

 Disclaimer: myself never tried this scheme.

 Rendy
 On Mar 28, 2015 2:14 AM, Shrikant Patel spa...@pdxinc.com wrote:

  Can 2 separate Kafka clusters share the same ZK ensemble?
  If yes, how does ZK deal with the 2 clusters having brokers with the same
 id?
 
  Thanks,
  Shri
 
 
  
 



Re: A kafka web monitor

2015-03-27 Thread Yuheng Du
Hi Wan,

I tried to install DCMonitor, but when I try to clone the project it gives
me "Permission denied, the remote end hung up unexpectedly". Can you provide
any suggestions for this issue?

Thanks.

best,
Yuheng

On Mon, Mar 23, 2015 at 8:54 AM, Wan Wei flowbeha...@gmail.com wrote:

 We have made a simple web console to monitor Kafka information like
 consumer offsets and log size.

 https://github.com/shunfei/DCMonitor


 Hope you like it and offer your help to make it better :)
 Regards
 Flow



2 kafka cluster sharing same ZK Ensemble.

2015-03-27 Thread Shrikant Patel
Can 2 separate Kafka clusters share the same ZK ensemble?
If yes, how does ZK deal with the 2 clusters having brokers with the same id?

Thanks,
Shri





RE: A kafka web monitor

2015-03-27 Thread Emmanuel
For a Kafka web console, I've been using this one and it worked well for me.
Just make sure to install the right version of the Play framework (in ReadMe.md):
https://github.com/claudemamo/kafka-web-console


 Date: Fri, 27 Mar 2015 15:28:09 -0400
 Subject: Re: A kafka web monitor
 From: yuheng.du.h...@gmail.com
 To: users@kafka.apache.org
 
 Hi Wan,
 
  I tried to install DCMonitor, but when I try to clone the project it gives
  me "Permission denied, the remote end hung up unexpectedly". Can you
  provide any suggestions for this issue?
 
 Thanks.
 
 best,
 Yuheng
 
 On Mon, Mar 23, 2015 at 8:54 AM, Wan Wei flowbeha...@gmail.com wrote:
 
   We have made a simple web console to monitor Kafka information like
   consumer offsets and log size.
 
  https://github.com/shunfei/DCMonitor
 
 
  Hope you like it and offer your help to make it better :)
  Regards
  Flow