Re: Node-Kafka Client Review and Question

2013-04-24 Thread Christopher Alexander
We examined the Tagged branch, but the version was 0.1.7 and it's been static
for over a year now. I will say, however, that the Node-Kafka (v2.3) producer
has been stable. A previous thread concerning Node-Kafka client development
revealed that a C library will be out for 0.8, supporting more stable client
library development (hint hint too ;-)

Unfortunately, we are moving rapidly with what works now to get our platform to
an MVP state. Hopefully, there won't be too much refactoring when the next-gen
Node-Kafka client is complete. :(

Chris

- Original Message -
From: "Taylor Gautier" 
To: users@kafka.apache.org
Sent: Wednesday, April 24, 2013 4:39:24 PM
Subject: Re: Node-Kafka Client Review and Question

This implementation is what I worked on while at Tagged, which was forked
from Marcus' version, but I don't think it was ever merged back into Marcus':

https://github.com/tagged/node-kafka

It had been in production for about a year when I left Tagged about 6 months
ago.  I know that there were some internal fixes that never made it back
out.  I think the version that is public is pretty stable, but it's hard
for me to say for sure since I am no longer at Tagged and don't have access
to the internal repos to see which fixes are still private and which have
been published back.

That said, I think you should definitely give this version a shot first
before moving on.

Finally, if I were to do it all over again, with about a solid additional
year of node programming experience under my belt, I'd probably rewrite
everything from the interfaces to the implementation.

In my current job there's a strong possibility of us adopting a Node.js +
Kafka implementation, but not for at least a few months, so I wouldn't
expect to be back in the community to work on this for a little while.
 Also, I'm kind of waiting for 0.8 (hint hint ;-)



On Wed, Apr 24, 2013 at 10:45 AM, Christian Carollo wrote:

> Hi Everyone,
>
> I have been experimenting with the libraries listed below and experienced
> the same problems.
> I have not found any other node clients. I am interested in
> finding a node solution as well.
> Happy to contribute to a common solution.
>
> Christian Carollo
>
> On Apr 24, 2013, at 10:19 AM, Christopher Alexander <
> calexan...@gravycard.com> wrote:
>
> > Hi Everyone,
> >
> > I just wanted to follow-up on a previous thread concerning our
> investigation of identifying a stable Node-Kafka client. To date we have
> tested the following:
> >
> > 1. Franz-Kafka (https://github.com/dannycoates/franz-kafka)
> > 2. Node-Kafka (v2.1, https://github.com/radekg/node-kafka)
> > 3. Node-Kafka (v2.3, https://github.com/marcuswestin/node-kafka)
> > 4. Prozess (v0.3.5, https://github.com/cainus/Prozess)
> >
> > Results:
> >
> > 1. Could not get Franz-Kafka and Prozess to work. Both require funky
> dependencies.
> > 2. Node-Kafka, v2.1 was successfully set up but was less stable
> than #3.
> > 3. Node-Kafka, v2.3 was successfully set up and exhibited the best
> performance profile, but the consumer is highly inconsistent - specifically,
> the consumer object remained in memory regardless of what we did (e.g. var
> consumer = undefined after receiving a message). Nothing appears to mitigate
> this, and ALL consumed messages get replayed on receipt of a new message.
> >
> > With this said, is there a Node-Kafka client people are actually using
> in production that doesn't exhibit the behavior we have seen? We have
> scaled back to using Node-Kafka (v2.3) only to produce messages and rely on
> Redis PubSub channels for asynchronous acking of these messages. We would
> be willing to roll up our sleeves with the community to develop a much more
> stable Node-Kafka client.
> >
> > Kind regards,
> >
> > Chris Alexander
> > Chief Technical Architect and Engineer
> > Gravy, Inc.
>
>


Node-Kafka Client Review and Question

2013-04-24 Thread Christopher Alexander
Hi Everyone,

I just wanted to follow up on a previous thread concerning our investigation of
identifying a stable Node-Kafka client. To date we have tested the following:

1. Franz-Kafka (https://github.com/dannycoates/franz-kafka)
2. Node-Kafka (v2.1, https://github.com/radekg/node-kafka)
3. Node-Kafka (v2.3, https://github.com/marcuswestin/node-kafka)
4. Prozess (v0.3.5, https://github.com/cainus/Prozess)

Results:

1. Could not get Franz-Kafka and Prozess to work. Both require funky dependencies.
2. Node-Kafka, v2.1 was successfully set up but was less stable than #3.
3. Node-Kafka, v2.3 was successfully set up and exhibited the best performance
profile, but the consumer is highly inconsistent - specifically, the consumer
object remained in memory regardless of what we did (e.g. var consumer =
undefined after receiving a message). Nothing appears to mitigate this, and ALL
consumed messages get replayed on receipt of a new message (see the sketch
below).
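
A minimal sketch of what we think is going on, using a stand-in FakeConsumer
rather than the real node-kafka v2.3 API (which may differ): reassigning the
variable that references a consumer does not detach its listeners or stop its
I/O, so the object stays alive, and if the client re-emits its internal buffer
on every fetch, every previously consumed message gets replayed.

// Illustrative stand-in only - not the node-kafka v2.3 API.
var EventEmitter = require('events').EventEmitter;
var util = require('util');

function FakeConsumer() {
  EventEmitter.call(this);
  var self = this;
  self.buffer = [];                      // messages the client keeps internally
  self.timer = setInterval(function () {
    self.buffer.push('msg-' + self.buffer.length);
    // Simulate a client that re-emits its whole buffer on every fetch:
    self.buffer.forEach(function (m) { self.emit('message', m); });
  }, 1000);
}
util.inherits(FakeConsumer, EventEmitter);

var consumer = new FakeConsumer();
consumer.on('message', function (m) { console.log('got', m); });

// This only drops our reference. The interval callback and the registered
// listener still reference the object, so it is never collected and old
// messages keep being replayed whenever a new one arrives.
consumer = undefined;

If the client exposes a close() or unsubscribe call, tearing the consumer down
that way (and removing its listeners) is what we would try before simply
dropping the reference.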

With this said, is there a Node-Kafka client people are actually using in
production that doesn't exhibit the behavior we have seen? We have scaled back
to using Node-Kafka (v2.3) only to produce messages and rely on Redis PubSub
channels for asynchronous acking of these messages (sketched below). We would be
willing to roll up our sleeves with the community to develop a much more stable
Node-Kafka client.
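
For reference, a rough sketch of that produce-plus-Redis-ack arrangement. The
Redis calls are the standard node_redis subscribe/publish API; the Kafka
produce step is a stub, since the exact node-kafka v2.3 producer calls are not
shown here.

var redis = require('redis');

// Stand-in for the actual node-kafka v2.3 producer call; the real send goes here.
function produceToKafka(message, callback) {
  console.log('producing to Kafka:', message.id);
  process.nextTick(callback);
}

// Subscribe to the ack channel before producing so no ack is missed.
var sub = redis.createClient();
sub.subscribe('mytopic-acks');
sub.on('message', function (channel, ackedId) {
  console.log('message %s acknowledged by the consuming side', ackedId);
});

produceToKafka({ id: 'abc123', body: 'hello' }, function () {
  console.log('handed off to Kafka; waiting for ack over Redis');
});

// The consuming process, after handling a message, publishes the ack:
//   redis.createClient().publish('mytopic-acks', 'abc123');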

Kind regards,

Chris Alexander
Chief Technical Architect and Engineer
Gravy, Inc.


Re: kafka.common.OffsetOutOfRangeException

2013-03-15 Thread Christopher Alexander
I would appreciate it if someone could provide some guidance on how to handle a
consumer offset reset. I know this feature is expected to be baked into 0.8.0
(I'm using 0.7.2). Although I am in a local development environment, such an
exercise would allow me to understand Kafka better and build a troubleshooting
solution set for an eventual production release.

Thanks gurus. :)

Chris

- Original Message -
From: "Christopher Alexander" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 11:22:27 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

OK, I re-reviewed the Kafka design doc and looked at the topic file mytopic-0.
It definitely isn't 562949953452239 in size (just 293476). Since I am in a
local test configuration, how and where should I resolve the offset drift:
1. In ZK by wiping a snapshot.XXX file? This would also affect another app that
is using ZK.
is also using ZK.
2. In Kafka by wiping the mytopic-0 file?

Thanks,

Chris

- Original Message -
From: "Christopher Alexander" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 11:02:57 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

Thanks Jun,

I don't mean to be obtuse, but could you please provide an example? Which file 
should I determine size for?

Thanks,

Chris

- Original Message -
From: "Jun Rao" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 12:18:31 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

Chris,

The last offset can be calculated by adding the file size to the name of
the last Kafka segment file. Then you can see if your offset is in the
range.

Thanks,

Jun
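
A small sketch of the calculation Jun describes, for a 0.7-style log directory
where each segment file under <log.dir>/<topic>-<partition> is named after the
offset at which it starts. The path is an example; the script itself is only an
illustration.

var fs = require('fs');
var path = require('path');

var dir = '/tmp/kafka-logs/mytopic-0';   // log.dir + topic-partition

var segments = fs.readdirSync(dir)
  .filter(function (f) { return /\.kafka$/.test(f); })
  .sort();                               // names are zero-padded, so lexical sort works

var newest = segments[segments.length - 1];
var start  = parseInt(path.basename(newest, '.kafka'), 10);
var size   = fs.statSync(path.join(dir, newest)).size;
var first  = parseInt(path.basename(segments[0], '.kafka'), 10);

console.log('valid offsets for this partition: %d .. %d', first, start + size);

A fetch for any offset outside that range produces the
OffsetOutOfRangeException shown below.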

On Wed, Mar 13, 2013 at 2:53 PM, Christopher Alexander <
calexan...@gravycard.com> wrote:

> Thanks for the reply Philip. I am new to Kafka so please bear with me if
> I say something that's "noobish".
>
> I am running in a localhost configuration for testing. If I checkout kafka
> logs:
>
> > cd /tmp/kafka-logs
> > ls mytopic-0 # my topic is present
> > cd mytopic-0
> > ls .kafka
> > vi .kafka
>
> This reveals a hexstring (or another format). No offset visible.
>
> If I checkout zk:
>
> > cd /tmp/zookeeper/version-2
> > ls # I see log and snapshot files
> > vi filename
>
> This reveals binary data. No offset visible.
>
> Are these the locations for finding the current/recent offsets? Thanks.
>
>
> - Original Message -
> From: "Philip O'Toole" 
> To: users@kafka.apache.org
> Sent: Wednesday, March 13, 2013 5:04:01 PM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> Is offset 562949953452239, partition 0, actually available on the
> Kafka broker? Have you checked?
>
> Philip
>
> On Wed, Mar 13, 2013 at 1:53 PM, Christopher Alexander
>  wrote:
> > Hello All,
> >
> > I am using Node-Kafka to connect to Kafka 0.7.2. Things were working
> fine but we are experiencing repeated exceptions of the following:
> >
> > [2013-03-13 16:45:14,615] ERROR error when processing request
> FetchRequest(topic:promoterregack, part:0 offset:562949953452239
> maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> > kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of
> range
> > at kafka.log.Log$.findRange(Log.scala:46)
> > at kafka.log.Log.read(Log.scala:264)
> > at
> kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
> > at
> kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at kafka.network.Processor.handle(SocketServer.scala:296)
> > at kafka.network.Processor.read(SocketServer.scala:319)
> > at kafka.network.Processor.run(SocketServer.scala:214)
> > at java.lang.Thread.run(Thread.java:662)
> >
> >
> > Is there some configuration or manual maintenance I need to perform on
> Kafka to remediate the exception? Thank you in advance for your assistance.
> >
> > Kind regards,
> >
> > Chris Alexander
> > Technical Architect and Engineer
> > Gravy, Inc.
>


Re: kafka.common.OffsetOutOfRangeException

2013-03-14 Thread Christopher Alexander
OK, I re-reviewed the Kafka design doc and looked at the topic file mytopic-0.
It definitely isn't 562949953452239 in size (just 293476). Since I am in a
local test configuration, how and where should I resolve the offset drift:

1. In ZK by wiping a snapshot.XXX file? This would also affect another app that
is using ZK.
2. In Kafka by wiping the mytopic-0 file?

Thanks,

Chris

- Original Message -
From: "Christopher Alexander" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 11:02:57 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

Thanks Jun,

I don't mean to be obtuse, but could you please provide an example? Which file 
should I determine size for?

Thanks,

Chris

- Original Message -
From: "Jun Rao" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 12:18:31 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

Chris,

The last offset can be calculated by adding the file size to the name of
the last Kafka segment file. Then you can see if your offset is in the
range.

Thanks,

Jun

On Wed, Mar 13, 2013 at 2:53 PM, Christopher Alexander <
calexan...@gravycard.com> wrote:

> Thanks for the reply Philip. I am new to Kafka so please bear with me if
> I say something that's "noobish".
>
> I am running in a localhost configuration for testing. If I checkout kafka
> logs:
>
> > cd /tmp/kafka-logs
> > ls mytopic-0 # my topic is present
> > cd mytopic-0
> > ls .kafka
> > vi .kafka
>
> This reveals a hexstring (or another format). No offset visible.
>
> If I checkout zk:
>
> > cd /tmp/zookeeper/version-2
> > ls # I see log and snapshot files
> > vi filename
>
> This reveals binary data. No offset visible.
>
> Are these the locations for finding the current/recent offsets? Thanks.
>
>
> - Original Message -
> From: "Philip O'Toole" 
> To: users@kafka.apache.org
> Sent: Wednesday, March 13, 2013 5:04:01 PM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> Is offset 562949953452239, partition 0, actually available on the
> Kafka broker? Have you checked?
>
> Philip
>
> On Wed, Mar 13, 2013 at 1:53 PM, Christopher Alexander
>  wrote:
> > Hello All,
> >
> > I am using Node-Kafka to connect to Kafka 0.7.2. Things were working
> fine but we are experiencing repeated exceptions of the following:
> >
> > [2013-03-13 16:45:14,615] ERROR error when processing request
> FetchRequest(topic:promoterregack, part:0 offset:562949953452239
> maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> > kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of
> range
> > at kafka.log.Log$.findRange(Log.scala:46)
> > at kafka.log.Log.read(Log.scala:264)
> > at
> kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
> > at
> kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at kafka.network.Processor.handle(SocketServer.scala:296)
> > at kafka.network.Processor.read(SocketServer.scala:319)
> > at kafka.network.Processor.run(SocketServer.scala:214)
> > at java.lang.Thread.run(Thread.java:662)
> >
> >
> > Is there some configuration or manual maintenance I need to perform on
> Kafka to remediate the exception? Thank you in advance for your assistance.
> >
> > Kind regards,
> >
> > Chris Alexander
> > Technical Architect and Engineer
> > Gravy, Inc.
>


Re: kafka.common.OffsetOutOfRangeException

2013-03-14 Thread Christopher Alexander
Thanks Jun,

I don't mean to be obtuse, but could you please provide an example? Which file
should I determine the size of?

Thanks,

Chris

- Original Message -
From: "Jun Rao" 
To: users@kafka.apache.org
Sent: Thursday, March 14, 2013 12:18:31 AM
Subject: Re: kafka.common.OffsetOutOfRangeException

Chris,

The last offset can be calculated by adding the file size to the name of
the last Kafka segment file. Then you can see if your offset is in the
range.

Thanks,

Jun

On Wed, Mar 13, 2013 at 2:53 PM, Christopher Alexander <
calexan...@gravycard.com> wrote:

> Thanks for the reply Philip. I am new to Kafka so please bear with me if
> I say something that's "noobish".
>
> I am running in a localhost configuration for testing. If I checkout kafka
> logs:
>
> > cd /tmp/kafka-logs
> > ls mytopic-0 # my topic is present
> > cd mytopic-0
> > ls .kafka
> > vi .kafka
>
> This reveals a hexstring (or another format). No offset visible.
>
> If I checkout zk:
>
> > cd /tmp/zookeeper/version-2
> > ls # I see log and snapshot files
> > vi filename
>
> This reveals binary data. No offset visible.
>
> Are these the locations for finding the current/recent offsets? Thanks.
>
>
> - Original Message -
> From: "Philip O'Toole" 
> To: users@kafka.apache.org
> Sent: Wednesday, March 13, 2013 5:04:01 PM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> Is offset 562949953452239, partition 0, actually available on the
> Kafka broker? Have you checked?
>
> Philip
>
> On Wed, Mar 13, 2013 at 1:53 PM, Christopher Alexander
>  wrote:
> > Hello All,
> >
> > I am using Node-Kafka to connect to Kafka 0.7.2. Things were working
> fine but we are experiencing repeated exceptions of the following:
> >
> > [2013-03-13 16:45:14,615] ERROR error when processing request
> FetchRequest(topic:promoterregack, part:0 offset:562949953452239
> maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> > kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of
> range
> > at kafka.log.Log$.findRange(Log.scala:46)
> > at kafka.log.Log.read(Log.scala:264)
> > at
> kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
> > at
> kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > at kafka.network.Processor.handle(SocketServer.scala:296)
> > at kafka.network.Processor.read(SocketServer.scala:319)
> > at kafka.network.Processor.run(SocketServer.scala:214)
> > at java.lang.Thread.run(Thread.java:662)
> >
> >
> > Is there some configuration or manual maintenance I need to perform on
> Kafka to remediate the exception? Thank you in advance for your assistance.
> >
> > Kind regards,
> >
> > Chris Alexander
> > Technical Architect and Engineer
> > Gravy, Inc.
>


Re: kafka.common.OffsetOutOfRangeException

2013-03-13 Thread Christopher Alexander
Thanks for the reply Philip. I am new to Kafka so please bear with me if I say
something that's "noobish".

I am running in a localhost configuration for testing. If I check out the Kafka logs:

> cd /tmp/kafka-logs
> ls mytopic-0 # my topic is present
> cd mytopic-0
> ls .kafka
> vi .kafka 

This reveals a hex string (or some other format). No offset visible.

If I check out ZK:

> cd /tmp/zookeeper/version-2
> ls # I see log and snapshot files
> vi filename

This reveals binary data. No offset visible.

Are these the locations for finding the current/recent offsets? Thanks.


- Original Message -
From: "Philip O'Toole" 
To: users@kafka.apache.org
Sent: Wednesday, March 13, 2013 5:04:01 PM
Subject: Re: kafka.common.OffsetOutOfRangeException

Is offset 562949953452239, partition 0, actually available on the
Kafka broker? Have you checked?

Philip

On Wed, Mar 13, 2013 at 1:53 PM, Christopher Alexander
 wrote:
> Hello All,
>
> I am using Node-Kafka to connect to Kafka 0.7.2. Things were working fine but 
> we are experiencing repeated exceptions of the following:
>
> [2013-03-13 16:45:14,615] ERROR error when processing request 
> FetchRequest(topic:promoterregack, part:0 offset:562949953452239 
> maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of range
> at kafka.log.Log$.findRange(Log.scala:46)
> at kafka.log.Log.read(Log.scala:264)
> at 
> kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
> at 
> kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
> at 
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> at 
> kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> at kafka.network.Processor.handle(SocketServer.scala:296)
> at kafka.network.Processor.read(SocketServer.scala:319)
> at kafka.network.Processor.run(SocketServer.scala:214)
> at java.lang.Thread.run(Thread.java:662)
>
>
> Is there some configuration or manual maintenance I need to perform on Kafka 
> to remediate the exception? Thank you in advance for your assistance.
>
> Kind regards,
>
> Chris Alexander
> Technical Architect and Engineer
> Gravy, Inc.


kafka.common.OffsetOutOfRangeException

2013-03-13 Thread Christopher Alexander
Hello All,

I am using Node-Kafka to connect to Kafka 0.7.2. Things were working fine, but
we are now repeatedly seeing the following exception:

[2013-03-13 16:45:14,615] ERROR error when processing request 
FetchRequest(topic:promoterregack, part:0 offset:562949953452239 
maxSize:1048576) (kafka.server.KafkaRequestHandlers)
kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of range
at kafka.log.Log$.findRange(Log.scala:46)
at kafka.log.Log.read(Log.scala:264)
at 
kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
at 
kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
at 
kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
at 
kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
at kafka.network.Processor.handle(SocketServer.scala:296)
at kafka.network.Processor.read(SocketServer.scala:319)
at kafka.network.Processor.run(SocketServer.scala:214)
at java.lang.Thread.run(Thread.java:662)


Is there some configuration or manual maintenance I need to perform on Kafka to 
remediate the exception? Thank you in advance for your assistance.

Kind regards,

Chris Alexander
Technical Architect and Engineer
Gravy, Inc.


Re: Kafka Node.js Integration Questions/Advice

2012-12-28 Thread Christopher Alexander
 Another kafka module is this one:
> >>>>> https://github.com/dannycoates/franz-kafka.
> >>>>>
> >>>>> Kind regards,
> >>>>> Radek Gruchalski
> >>>>> radek.gruchal...@technicolor.com (mailto:
> >>> radek.gruchal...@technicolor.com)
> >>>>> | radek.gruchal...@portico.io (mailto:radek.gruchal...@portico.io) |
> >>>>> ra...@gruchalski.com (mailto:ra...@gruchalski.com)
> >>>>> 00447889948663
> >>>>>
> >>>>>
> >>>>> On Thursday, 20 December 2012 at 18:31, Jun Rao wrote:
> >>>>>
> >>>>>> Chris,
> >>>>>>
> >>>>>> Not sure how stable those node.js clients are. In 0.8, we plan to
> >>>>> provide a
> >>>>>> native C version of the producer. A thin node.js layer can
> potentially
> >>> be
> >>>>>> built on top of that.
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Jun
> >>>>>>
> >>>>>> On Thu, Dec 20, 2012 at 8:46 AM, Christopher Alexander <
> >>>>>> calexan...@gravycard.com (mailto:calexan...@gravycard.com)> wrote:
> >>>>>>
> >>>>>>> During my due diligence to assess use of Kafka for both our
> activity
> >>>>> and
> >>>>>>> log message streams, I would like to ask the project committers and
> >>>>>>> community users about using Kafka with Node.js. Yes, I am aware
> that a
> >>>>>>> Kafka client exists for Node.js (
> >>>>>>> https://github.com/marcuswestin/node-kafka), which has spurred
> >>> further
> >>>>>>> interest by our front-end team. Here are my questions, excuse me if
> >>>>> they
> >>>>>>> seem "noobish".
> >>>>>>>
> >>>>>>> 1. How reliable is the Node.js client (
> >>>>>>> https://github.com/marcuswestin/node-kafka) in production
> >>>>> applications?
> >>>>>>> If there are issues, what are they (the GitHub repo currently lists
> >>>>> none)?
> >>>>>>> 2. To support real-time activity streams within Node.js, what is
> the
> >>>>>>> recommended consumer polling interval?
> >>>>>>> 3. General advise observations on integrating a front-end based
> >>> Node.js
> >>>>>>> application with Kafka mediated messaging.
> >>>>>>>
> >>>>>>> Thanks you!
> >>>>>>>
> >>>>>>> Chris
> >>>
>



-- 
Thanks & Regards,
Apoorva


Re: Kafka Node.js Integration Questions/Advice

2012-12-20 Thread Christopher Alexander
Thanks David. Yes, I am aware of the Prozess Node lib also. I forgot to include 
it in my posting. Good catch!

- Original Message -
From: "David Arthur" 
To: users@kafka.apache.org
Sent: Thursday, December 20, 2012 11:58:45 AM
Subject: Re: Kafka Node.js Integration Questions/Advice


On 12/20/12 11:46 AM, Christopher Alexander wrote:
> During my due diligence to assess use of Kafka for both our activity and log 
> message streams, I would like to ask the project committers and community 
> users about using Kafka with Node.js. Yes, I am aware that a Kafka client 
> exists for Node.js (https://github.com/marcuswestin/node-kafka), which has 
> spurred further interest by our front-end team. Here are my questions, excuse 
> me if they seem "noobish".
>
> 1. How reliable is the Node.js client 
> (https://github.com/marcuswestin/node-kafka) in production applications? If 
> there are issues, what are they (the GitHub repo currently lists none)?
Just FYI, there is another node.js library 
https://github.com/cainus/Prozess. I have no experience with either, so 
I cannot say how reliable they are.
> 2. To support real-time activity streams within Node.js, what is the 
> recommended consumer polling interval?
What kind of data velocity do you expect? You should only have to poll 
if your consumer catches up to the broker and there's no more data. 
Blocking/polling behavior of the consumer depends entirely on the client 
implementation.
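
A rough sketch of that fetch-then-poll pattern. The fetchOnce callback below is
a hypothetical stand-in for whichever fetch/consume call the chosen client
actually provides:

// Keep fetching while there is data; back off to a polling delay once caught up.
function consumeLoop(fetchOnce, handle, idleDelayMs) {
  (function next() {
    fetchOnce(function (err, messages) {
      if (err) return setTimeout(next, idleDelayMs);   // transient error: retry later
      messages.forEach(handle);
      if (messages.length > 0) return next();          // still behind the broker: fetch again
      setTimeout(next, idleDelayMs);                   // caught up: poll after a short delay
    });
  })();
}

// Example wiring with a fake asynchronous fetch that sometimes has data:
consumeLoop(
  function fakeFetch(cb) {
    setTimeout(function () { cb(null, Math.random() < 0.3 ? ['event'] : []); }, 10);
  },
  function handle(m) { console.log('handled', m); },
  200
);
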
> 3. General advise observations on integrating a front-end based Node.js 
> application with Kafka mediated messaging.
>
> Thanks you!
>
> Chris



Kafka Node.js Integration Questions/Advice

2012-12-20 Thread Christopher Alexander
During my due diligence to assess use of Kafka for both our activity and log 
message streams, I would like to ask the project committers and community users 
about using Kafka with Node.js. Yes, I am aware that a Kafka client exists for 
Node.js (https://github.com/marcuswestin/node-kafka), which has spurred further 
interest by our front-end team. Here are my questions; excuse me if they seem
"noobish".

1. How reliable is the Node.js client 
(https://github.com/marcuswestin/node-kafka) in production applications? If 
there are issues, what are they (the GitHub repo currently lists none)?
2. To support real-time activity streams within Node.js, what is the 
recommended consumer polling interval?
3. General advice/observations on integrating a front-end Node.js
application with Kafka-mediated messaging.

Thank you!

Chris


Re: Unable To Run QuickStart From CLI

2012-12-20 Thread Christopher Alexander
Thanks Jun and Joel. I've got my Kafka development instance up and running. I
do have a few questions though:

1. In /bin I note that there is kafka-server-stop.sh and 
zookeeper-server-stop.sh. I assume the scripts should be executed in this order 
for a clean shutdown?
2. Clearly absent are shutdown scripts for the producer and consumer. Does this
reflect the overall design decision behind Kafka's push/pull model, wherein a
producer or consumer shutting down will not impact the broker? If so, there is
no need for an explicit shutdown script. Is that correct?

- Original Message -
From: "Joel Koshy" 
To: users@kafka.apache.org
Sent: Wednesday, December 19, 2012 2:11:22 PM
Subject: Re: Unable To Run QuickStart From CLI

You will need to use the ConsoleConsumer (see the bin directory) or create
a Java/Scala consumer connector.


On Wed, Dec 19, 2012 at 9:41 AM, Christopher Alexander <
calexan...@gravycard.com> wrote:

> Hi Jun,
>
> Although this may not be the ideal method, I did get it working after I
> issued the following commands:
>
> ./sbt clean
> ./sbt clean-cache
> ./sbt update
> ./sbt package
>
> And then reran the FAQ QuickStart. Maybe newly created directories finally
> recursively inherited my permissions.
>
> I see /tmp/kafka-logs/test0/.kafka but unable to open
> the file in a standard editor to view what has been logged - file is
> binary. What is the recommend application to view the topic log?
>
> - Original Message -
> From: "Jun Rao" 
> To: users@kafka.apache.org
> Sent: Wednesday, December 19, 2012 11:53:56 AM
> Subject: Re: Unable To Run QuickStart From CLI
>
> These exceptions are at the info level and are normal. Did you see data in
> the kafka broker log?
>
> Thanks,
>
> Jun
>
> On Wed, Dec 19, 2012 at 7:59 AM, Christopher Alexander <
> calexan...@gravycard.com> wrote:
>
> > Hello All,
> >
> > I am in the early stages of exploring the use of Kafka for large-scale
> > application I am authoring. Generally, the documentation has been pretty
> > good and quite pleased with getting things set-up in a couple of hours.
> > However, I am attempting to run the QuickStart using CLI to locally
> confirm
> > that producers/consumers works through Zookeeper and Kafka. I get the
> > following exception when a producer connects to Zookeeper. Subsequently,
> I
> > am unable to send/receive message. The exception is:
> >
> > [2012-12-19 10:38:15,993] INFO Got user-level KeeperException when
> > processing sessionid:0x13bb3cfbcde type:create cxid:0x1
> > zxid:0xfffe txntype:unknown reqpath:n/a Error
> Path:/brokers/ids
> > Error:KeeperErrorCode = NoNode for /brokers/ids
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> > [2012-12-19 10:38:16,032] INFO Got user-level KeeperException when
> > processing sessionid:0x13bb3cfbcde type:create cxid:0x2
> > zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/brokers
> > Error:KeeperErrorCode = NoNode for /brokers
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > I would appreciate it if someone could point me in the right direction.
> >
> >
> > Kind regards,
> >
> > Chris Alexander
> > Technical Architect and Engineer
> > Gravy, Inc.
> >
> > W: http://www.gravycard.com
> >
> >
>


Re: Unable To Run QuickStart From CLI

2012-12-19 Thread Christopher Alexander
Hi Jun,

Although this may not be the ideal method, I did get it working after I issued 
the following commands:

./sbt clean
./sbt clean-cache
./sbt update
./sbt package

And then reran the FAQ QuickStart. Maybe newly created directories finally 
recursively inherited my permissions.

I see /tmp/kafka-logs/test0/.kafka but I am unable to open the
file in a standard editor to view what has been logged - the file is binary.
What is the recommended application to view the topic log?

- Original Message -
From: "Jun Rao" 
To: users@kafka.apache.org
Sent: Wednesday, December 19, 2012 11:53:56 AM
Subject: Re: Unable To Run QuickStart From CLI

These exceptions are at the info level and are normal. Did you see data in
the kafka broker log?

Thanks,

Jun

On Wed, Dec 19, 2012 at 7:59 AM, Christopher Alexander <
calexan...@gravycard.com> wrote:

> Hello All,
>
> I am in the early stages of exploring the use of Kafka for large-scale
> application I am authoring. Generally, the documentation has been pretty
> good and quite pleased with getting things set-up in a couple of hours.
> However, I am attempting to run the QuickStart using CLI to locally confirm
> that producers/consumers works through Zookeeper and Kafka. I get the
> following exception when a producer connects to Zookeeper. Subsequently, I
> am unable to send/receive message. The exception is:
>
> [2012-12-19 10:38:15,993] INFO Got user-level KeeperException when
> processing sessionid:0x13bb3cfbcde type:create cxid:0x1
> zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/brokers/ids
> Error:KeeperErrorCode = NoNode for /brokers/ids
> (org.apache.zookeeper.server.PrepRequestProcessor)
> [2012-12-19 10:38:16,032] INFO Got user-level KeeperException when
> processing sessionid:0x13bb3cfbcde type:create cxid:0x2
> zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/brokers
> Error:KeeperErrorCode = NoNode for /brokers
> (org.apache.zookeeper.server.PrepRequestProcessor)
>
> I would appreciate it if someone could point me in the right direction.
>
>
> Kind regards,
>
> Chris Alexander
> Technical Architect and Engineer
> Gravy, Inc.
>
> W: http://www.gravycard.com
>
>


Unable To Run QuickStart From CLI

2012-12-19 Thread Christopher Alexander
Hello All,

I am in the early stages of exploring the use of Kafka for a large-scale
application I am authoring. Generally, the documentation has been pretty good,
and I am quite pleased with getting things set up in a couple of hours. However,
I am attempting to run the QuickStart from the CLI to locally confirm that
producers/consumers work through Zookeeper and Kafka. I get the following
exception when a producer connects to Zookeeper. Subsequently, I am unable to
send/receive messages. The exception is:

[2012-12-19 10:38:15,993] INFO Got user-level KeeperException when processing 
sessionid:0x13bb3cfbcde type:create cxid:0x1 zxid:0xfffe 
txntype:unknown reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = 
NoNode for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2012-12-19 10:38:16,032] INFO Got user-level KeeperException when processing 
sessionid:0x13bb3cfbcde type:create cxid:0x2 zxid:0xfffe 
txntype:unknown reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode 
for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)

I would appreciate it if someone could point me in the right direction.


Kind regards,

Chris Alexander
Technical Architect and Engineer
Gravy, Inc.

W: http://www.gravycard.com