Re: macbook air and kafka

2016-05-28 Thread S Ahmed
> On Thu, May 26, 2016 at 11:58 AM, S Ahmed wrote: > > > I just pulled latest on the same old 2010 MBP and the build took over 4 > > minutes. > > > > Have things changed so much since 2013? :) > > > > I ran: ./gradlew jar > > > > On Tue, Jun 4

Re: macbook air and kafka

2016-05-26 Thread S Ahmed
> Thanks, > Neha > > > > On Tue, Jun 4, 2013 at 5:56 AM, S Ahmed wrote: > > > I have a 1st gen i7 with 8GB RAM, and mine takes 77 seconds: > > > > >clean > > >++2.9.2 package > > > > [info] Packaging > > > > > /Users/abc

Re: steps path to kafka mastery

2016-03-29 Thread S Ahmed
The book says Feb. 2016 release date, so it looks like the producer is slower than the consumers (sorry, couldn't resist). On Tue, Mar 29, 2016 at 11:56 AM, S Ahmed wrote: > It was some Packt Pub one. > > Yeah, I am waiting for that book to be released! > > On Tue, Mar 29, 2016 at 6

Re: steps path to kafka mastery

2016-03-29 Thread S Ahmed
vestment. > > B > > On 29 Mar 2016, at 03:40, S Ahmed wrote: > > > > Hello, > > > > This may be a silly question for some but here goes :) > > > > Without real production experience, what steps do you suggest one take to > > really have some solid

RE: steps path to kafka mastery

2016-03-28 Thread S Ahmed
Hello, This may be a silly question for some but here goes :) Without real production experience, what steps do you suggest one take to really have some solid skillz in kafka? I tend to learn in a structured way, but it just seems that since kafka is a general purpose tool there isn't really a c

kafka used for a chat system backend

2015-11-06 Thread S Ahmed
Hello, I have read a few sites that *might* be using kafka for a real-time chat system. I say might because in their job descriptions they hinted toward this. If there are people out there using kafka as a backend for a real-time chat system, can you explain at a high level how the data flows?

jvm processes to consume messages

2014-12-02 Thread S Ahmed
Hi, I have a light-load scenario but I am starting off with kafka because I like how the messages are durable etc. If I have 4-5 topics, am I required to create the same # of consumers? I am assuming each consumer runs in a long-running JVM process, correct? Are there any consumer examples that

Re: refactoring ZK so it is pluggable, would this make sense?

2014-10-09 Thread S Ahmed
he ZK dependency? > > Thanks, > > Jun > > On Thu, Oct 9, 2014 at 8:20 AM, S Ahmed wrote: > > > Hi, > > > > I was wondering if the zookeeper library (zkutils.scala etc) was designed > > in a more modular way, would it make it possible to run a more "lean"

RE: refactoring ZK so it is pluggable, would this make sense?

2014-10-09 Thread S Ahmed
Hi, I was wondering: if the zookeeper library (zkutils.scala etc.) were designed in a more modular way, would it be possible to run a more "lean" version of kafka? The idea is I want to run kafka but with less emphasis on it being durable with failover and more on it being a replacement for a

do apps with producers have to be restarted if cluster goes down and comes back up?

2014-06-26 Thread S Ahmed
Hi, A few questions on timing related issues when certain parts of kafka go down. 1. If zookeeper goes down, then I bring it back online, do I have to restart the brokers? 2. If the brokers go down, producers will be erroring out. When the brokers are back online, do I have to restart the proc

is it smarter to go with a java class for message serialization/des?

2014-06-17 Thread S Ahmed
My app is in scala, and from a quick search it seems serializing a scala class can have issues across different scala versions (I could be wrong as I only did a quick search). Is it generally just a better idea to use plain old java classes for kafka messages? i.e. I simply use jackson like: publ
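
A minimal sketch of the plain-JSON approach hinted at above, assuming jackson-databind and jackson-module-scala are on the classpath; PageView and JsonCodec are hypothetical names used only for illustration:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule

    // Messages travel as JSON bytes, so nothing depends on Scala's (or Java's)
    // built-in serialization and the payload survives Scala version upgrades.
    case class PageView(userId: Long, url: String, timestampMs: Long)

    object JsonCodec {
      private val mapper = new ObjectMapper()
      mapper.registerModule(DefaultScalaModule)

      def toBytes(view: PageView): Array[Byte] = mapper.writeValueAsBytes(view)
      def fromBytes(bytes: Array[Byte]): PageView = mapper.readValue(bytes, classOf[PageView])
    }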

linkedin and pageview producer + when kafka is down

2014-06-16 Thread S Ahmed
I'd love to get some insights on how things work at linkedin in terms of your web servers and kafka producers. You guys probably connect to multiple kafka clusters, so let's assume you are only connecting to a single cluster. 1. do you use a single producer for all message types/topics? 2. For y

kafka producer, one per web app?

2014-06-16 Thread S Ahmed
In my web application, I should be creating a single instance of a producer, correct? So in scala I should be doing something like: object KafkaProducer { // props... val producer = new Producer[AnyRef, AnyRef](new ProducerConfig(props)) } And then, say, in my QueueService I would do: class Q
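
A rough sketch of that singleton pattern against the 0.8-era Scala producer API; the object name, broker address, topic, and the async setting below are placeholders, not a recommendation:

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    // One producer shared by the whole web app; the client is thread-safe,
    // so request handlers can call send() concurrently.
    object KafkaProducerHolder {
      private val props = new Properties()
      props.put("metadata.broker.list", "localhost:9092") // assumed broker address
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      props.put("producer.type", "async") // buffer and batch in the background

      val producer = new Producer[String, String](new ProducerConfig(props))

      def send(topic: String, message: String): Unit =
        producer.send(new KeyedMessage[String, String](topic, message))

      def shutdown(): Unit = producer.close()
    }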

Re: re-writing old fetch request to work with 0.8 version

2014-06-13 Thread S Ahmed
dAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) On Fri, Jun 13, 2014 at 4:51 PM, S Ahm

re-writing old fetch request to work with 0.8 version

2014-06-13 Thread S Ahmed
I found this embedded kafka example online ( https://gist.github.com/mardambey/2650743) which I am re-writing to work with 0.8 Can someone help me re-write this portion: val cons = new SimpleConsumer("localhost", 9090, 100, 1024) var offset = 0L var i = 0 while (true) { val fetchR
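
For orientation, a rough equivalent of that loop against the 0.8 SimpleConsumer API, as a sketch only — host, port, client id, topic, partition, and fetch sizes are placeholders rather than values from the gist:

    import kafka.api.FetchRequestBuilder
    import kafka.consumer.SimpleConsumer

    object EmbeddedFetcher extends App {
      // In 0.8 every fetch names an explicit (topic, partition, offset).
      val consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "embedded-fetcher")
      var offset = 0L

      while (true) {
        val request = new FetchRequestBuilder()
          .clientId("embedded-fetcher")
          .addFetch("test-topic", 0, offset, 1024 * 1024)
          .build()
        val response = consumer.fetch(request)
        for (messageAndOffset <- response.messageSet("test-topic", 0)) {
          // hand messageAndOffset.message to whatever processes it
          offset = messageAndOffset.nextOffset
        }
        Thread.sleep(500) // crude poll interval, fine for a local test
      }
    }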

Re: scala based service/daemon consumer example

2014-06-12 Thread S Ahmed
> Guozhang > > > On Thu, Jun 12, 2014 at 11:13 AM, S Ahmed wrote: > > > Is there a simple example (scala preferred) where there is a consumer > that > > is written to run as a daemon, i.e. it keeps an open connection into a > > broker and reads off new messages > > > > > > -- > -- Guozhang >

scala based service/daemon consumer example

2014-06-12 Thread S Ahmed
Is there a simple example (scala preferred) where there is a consumer that is written to run as a daemon, i.e. it keeps an open connection into a broker and reads off new messages
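
One possible shape for such a daemon, sketched against the 0.8 high-level consumer; the ZooKeeper address, group id, and topic are assumptions. The stream iterator is what blocks while waiting for new messages, which is what keeps the process alive between them:

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}

    object DaemonConsumer extends App {
      val props = new Properties()
      props.put("zookeeper.connect", "localhost:2181") // assumed ZK address
      props.put("group.id", "daemon-example")
      props.put("auto.offset.reset", "smallest")

      val connector = Consumer.create(new ConsumerConfig(props))
      val streams = connector.createMessageStreams(Map("events" -> 1))
      val stream = streams("events").head

      // Iteration blocks until the broker has something new to hand us.
      for (msgAndMeta <- stream) {
        println(new String(msgAndMeta.message()))
      }
    }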

ec2 suggested setup for a minimum kafka setup; is zookeeper bad in ec2?

2014-06-11 Thread S Ahmed
For those of you hosting on ec2, could someone suggest a "minimum" recommended setup for kafka? i.e. the # and type of instance that you would say is the bare minimum to get started with kafka in ec2. My guess is the suggested route is the m3 instance type? How about: m3.medium 1 cpu, 3.75GB

Re: are consumer offsets stored in a log?

2014-06-04 Thread S Ahmed
s a > special log. > > > https://cwiki.apache.org/confluence/display/KAFKA/Inbuilt+Consumer+Offset+Management > > The code is in trunk, and it is running in production at LinkedIn now. > > Guozhang > > > On Wed, Jun 4, 2014 at 10:00 AM, S Ahmed wrote: > > >

are consumer offsets stored in a log?

2014-06-04 Thread S Ahmed
I swear I read that Jay Kreps wrote somewhere that consumers now write their offsets in a logfile (not in zookeeper). Is this true or did I misread? Sorry I can't find the article I was reading.

Re: starting off at a small scale, single ec2 instance with 7.5 GB RAM with kafka

2014-05-20 Thread S Ahmed
ended to install both kafka and zookeeper on the same box > as both would fight for the available memory and performance will degrade. > > Thanks > Neha > > > On Mon, May 19, 2014 at 7:29 AM, S Ahmed wrote: > > > Hi, > > > > I like how kafka op

RE: starting off at a small scale, single ec2 instance with 7.5 GB RAM with kafka

2014-05-19 Thread S Ahmed
Hi, I like how kafka operates, but I'm wondering if it is possible to run everything on a single ec2 instance with 7.5 GB RAM. So that would be zookeeper and a single kafka broker. I would have a separate server to consume from the broker. Producers would be from my web servers. I don't want

Re: New Consumer API discussion

2014-02-28 Thread S Ahmed
A few clarifications: 1. "The new consumer API is non blocking and instead of returning a blocking iterator, the consumer provides a poll() API that returns a list of records. " So this means the consumer polls, and if there are new messages it pulls them down and then disconnects? 2. " The consum
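
This thread predates the final API, but for orientation, here is a minimal poll loop against the consumer as it eventually shipped (0.9+); the bootstrap address, group id, and topic are placeholders. poll() does not connect and disconnect per call — the consumer keeps its broker connections open, and each call simply returns whatever records (possibly none) arrived since the last one:

    import java.util.Properties
    import scala.collection.JavaConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer

    object PollLoop extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("group.id", "poll-example")
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(java.util.Arrays.asList("events"))

      while (true) {
        val records = consumer.poll(1000L) // wait up to 1s for new records
        for (record <- records.asScala) {
          println(s"${record.offset}: ${record.value}")
        }
      }
    }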

code + sbt tips

2014-02-10 Thread S Ahmed
A few quick questions that I hope people can help me with: 1. most of you guys use intellij, do you always build using sbt? i.e. you lose out on IDE build features like clicking on an error to jump to that part of the code etc. 2. do you just build using the default scala version 2.8

Re: New Producer Public API

2014-02-06 Thread S Ahmed
How about the following use case: just before the producer actually sends the payload to kafka, could an event be exposed that would allow one to loop through the messages and potentially delete some of them? Example: say you have 100 messages, but before you send these messages to kafka, you ca

Re: Surprisingly high network traffic between kafka servers

2014-02-05 Thread S Ahmed
Sorry, I'm not an ops person, but what tools do you use to monitor traffic between servers? On Tue, Feb 4, 2014 at 11:46 PM, Carl Lerche wrote: > Hello, > > I'm running a 0.8.0 Kafka cluster of 3 servers. The service that it is > for is not in full production yet, so the data written to cluster i

Why does the high level consumer block, or rather where does it?

2014-01-05 Thread S Ahmed
I'm trying to trace through the codebase and figure out where exactly the blocking occurs in the high-level consumer. public void run() { ConsumerIterator it = m_stream.iterator(); while (it.hasNext()) System.out.println("Thread " + m_threadNumber + ": " + new String(it.next().mess

kafka + storm

2014-01-01 Thread S Ahmed
I have briefly looked at storm, but just a quick question: storm seems to have all these workers, but the way it looks to me, the order in which items are processed off the queue is fairly random, correct? In my use case order is very important, so using something like storm would not be suitable

Re: redis versus zookeeper to track consumer offsets

2013-12-17 Thread S Ahmed
lustered, consistent, highly available > store for this sort of data and it works extremely well. Redis wasn't and I > don't know anyone using Redis in production, including me, who doesn't have > stories of Redis losing data. I'm sticking with ZK. > > > On

redis versus zookeeper to track consumer offsets

2013-12-17 Thread S Ahmed
I am leaning towards using redis to track consumer offsets etc., but I see how using zookeeper makes sense since it is already part of the kafka infra. One thing which bothers me is, how are you guys keeping track of the load on zookeeper? How do you get an idea when your zookeeper cluster is underp

Re: storing last processed offset, recovery of failed message processing etc.

2013-12-11 Thread S Ahmed
er-case, or simply a restart, it doesn't matter. > > Philip > > > On Mon, Dec 9, 2013 at 12:28 PM, S Ahmed wrote: > > > I was hoping people could comment on how they handle the following > > scenarios: > > > > 1. Storing the last successfully processe

Re: Anyone working on a Kafka book?

2013-12-10 Thread S Ahmed
> > > > > > > On Tue, Dec 10, 2013 at 8:40 AM, David Arthur wrote: > > >> There was some talk a few months ago, not sure what the current status > is. > >> > >> > >> On 12/10/13 10:01 AM, S Ahmed wrote: > >> >

Re: Anyone working on a Kafka book?

2013-12-10 Thread S Ahmed
Is there a book, or was this just an idea? On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin wrote: > Thanks Jun, > > I've updated the example with this information. > > I've also removed some of the unnecessary newlines. > > Thanks, > > Chris > > > On Mon, Mar 25, 2013 at 12:04 PM, Jun Rao wrote:

Re: storing last processed offset, recovery of failed message processing etc.

2013-12-09 Thread S Ahmed
standard with Kafka. > > Our systems are idempotent, so we only store offsets when the message is > fully processed. If this means we occasionally replay a message due to some > corner-case, or simply a restart, it doesn't matter. > > Philip > > > On Mon, Dec 9, 2013 at

RE: storing last processed offset, recovery of failed message processing etc.

2013-12-09 Thread S Ahmed
I was hoping people could comment on how they handle the following scenarios: 1. Storing the last successfully processed messageId/offset. Are people using mysql, redis, etc.? What are the tradeoffs here? 2. How do you handle recovering from an error while processing a given event? There are

Re: Loggly's use of Kafka on AWS

2013-12-02 Thread S Ahmed
Interesting. So twitter storm is used to basically process the messages on kafka? I'll have to read-up on storm b/c I always thought the use case was a bit different. On Sun, Dec 1, 2013 at 9:59 PM, Joe Stein wrote: > Awesome Philip, thanks for sharing! > > On Sun, Dec 1, 2013 at 9:17 PM, P

producer (or consumer?) statistics that was using metrics

2013-10-17 Thread S Ahmed
I remember a while back Jay was looking for someone to work on producer (or was it consumer) statistics which were going to use metrics. Was this ever implemented?

Anyone running kafka with a single broker in production? what about only 8GB ram?

2013-10-10 Thread S Ahmed
Is anyone out there running a single-broker kafka setup? How about with only 8 GB RAM? I'm looking at one of the better dedicated server providers, and an 8GB server is pretty much what I want to spend at the moment, would it make sense going this route? This same server would also potentially be

Re: who is using kafka to store large messages?

2013-10-07 Thread S Ahmed
ing lots of small messages with > the same content, should not be any slower. But you want to be careful on > the batch size since you don't want the compressed message to exceed the > message size limit. > > Thanks, > Neha > > > On Mon, Oct 7, 2013 at 9:10 AM, S Ahme

Re: who is using kafka to store large messages?

2013-10-07 Thread S Ahmed
0s of KB. This is mostly because we > batch a set of messages and send them as a single compressed message. > > Thanks, > > Jun > > > On Mon, Oct 7, 2013 at 7:44 AM, S Ahmed wrote: > > > When people using message queues, the message size is usually pretty > sm

who is using kafka to store large messages?

2013-10-07 Thread S Ahmed
When people use message queues, the message size is usually pretty small. I want to know who out there is using kafka with larger payload sizes. In the configuration, the maximum message size by default is set to 1 megabyte (message.max.bytes = 1000000). My message sizes will probably be aroun

use case with high rate of duplicate messages

2013-10-01 Thread S Ahmed
I have a use case where thousands of servers send status-type messages, which I am currently handling in real time w/o any kind of queueing system. So currently, when I receive a message, I perform an md5 hash of the message, then a lookup in my database to see if this is a duplicate; if not, I st
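
A small sketch of that hash-then-lookup step, with the database calls stubbed out as function parameters since the actual store isn't named above; Dedup, alreadySeen, and store are hypothetical names:

    import java.security.MessageDigest

    object Dedup {
      // Hex md5 of the raw payload, used as the duplicate-detection key.
      private def md5Hex(payload: Array[Byte]): String =
        MessageDigest.getInstance("MD5").digest(payload).map("%02x".format(_)).mkString

      // alreadySeen and store stand in for the real database lookup/insert.
      def handle(payload: Array[Byte],
                 alreadySeen: String => Boolean,
                 store: (String, Array[Byte]) => Unit): Unit = {
        val key = md5Hex(payload)
        if (!alreadySeen(key)) store(key, payload)
      }
    }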

Re: Logo

2013-07-22 Thread S Ahmed
Similar, yet different. I like it! On Mon, Jul 22, 2013 at 1:25 PM, Jay Kreps wrote: > Yeah, good point. I hadn't seen that before. > > -Jay > > > On Mon, Jul 22, 2013 at 10:20 AM, Radek Gruchalski < > radek.gruchal...@portico.io> wrote: > > > 296 looks familiar: https://www.nodejitsu.com/ > >

byte offset -> sequential

2013-07-03 Thread S Ahmed
When you guys refactored the offset to be human-friendly, i.e. numerical versus the byte offset, what was involved in that refactor? Is there a wiki page for that? I'm guessing there was an index file created for this, or is this currently managed in zookeeper? This wiki is related to th

Re: Kafka User Group Meeting Jun. 27

2013-07-02 Thread S Ahmed
Was this recorded by any chance? On Wed, Jun 26, 2013 at 11:22 AM, Jun Rao wrote: > Hi, Everyone, > > We have finalized our agenda of the meetup Thursday evening, with Speakers > from LinkedIn, RichRelevance, Netflix and Square. Please RSVP to the meetup > link below. For remote people, we will

RE: production consumer

2013-06-17 Thread S Ahmed
Greetings. Are there any code samples of a consumer that is used in production? I'm looking for something that is a daemon that has x number of threads and reads messages off kafka. I've looked at the sampleconsumer etc., but those are just one-off runs that exit. I'm guessing that you would also

RE: shipping logs to s3 or other servers for backups

2013-06-13 Thread S Ahmed
Hi, In my application, I am storing user events, and I want to partition the storage by day. So at the end of a day, I want to take that "file" and ship it to s3 or another server as a backup. This way I can replay the events for a specific day if needed. These events also have to be in order.

Re: kafka 0.8

2013-06-11 Thread S Ahmed
Yeah probably a good idea to upgrade your end first, I'm sure some things might come up :) On Tue, Jun 11, 2013 at 1:02 PM, Soby Chacko wrote: > Jun, > > That is great to hear. Looking forward to it. > > Thanks, > Soby > > > On Tue, Jun 11, 2013 at 12:20 PM, Jun Rao wrote: > > > Soby, > > > >

RE: message order, guaranteed?

2013-06-09 Thread S Ahmed
I understand that there are no guarantees per se that a message won't be a duplicate (it's the consumer's job to handle that), but when it comes to message order, is kafka built in such a way that it is impossible to get messages in the wrong order? Certain use cases might not be sensitive to ord

Re: macbook air and kafka

2013-06-04 Thread S Ahmed
purposes it works like a charm. I'm only doing simple tests (ie no perf > testing) and at least in my case, heat is pretty low. I have more issues > with heat when I run Windows on Fusion ;-) > > > On Jun 3, 2013, at 7:17 PM, S Ahmed wrote: > > > So what are your s

Re: macbook air and kafka

2013-06-03 Thread S Ahmed
ile > from scratch on my macbook air. > > Thanks, > Neha > > > On Mon, Jun 3, 2013 at 6:33 PM, S Ahmed wrote: > > > Hi, Curious if anyone uses a macbook air to develop with kafka. > > > > How are compile times? Does the fan go crazy after a while along

RE: macbook air and kafka

2013-06-03 Thread S Ahmed
Hi, Curious if anyone uses a macbook air to develop with kafka. How are compile times? Does the fan go crazy after a while, along with a fairly hot keyboard? Just pondering getting a macbook air, and wanted your experiences.

Re: Apache Kafka in AWS

2013-05-29 Thread S Ahmed
Is the code you used to benchmark open source by any chance? On Tue, May 28, 2013 at 4:29 PM, Jason Weiss wrote: > Nope, sorry. > > > ____ > From: S Ahmed [sahmed1...@gmail.com] > Sent: Tuesday, May 28, 2013 15:47 > To: users@kafka.apa

Re: Apache Kafka in AWS

2013-05-28 Thread S Ahmed
Curious if you tested with larger message sizes, like around 20-30kb (you mentioned 2kb). Any numbers on that size? On Thu, May 23, 2013 at 10:12 AM, Jason Weiss wrote: > Bummer. > > Yes, but it will be several days. I'll post back to the forum with a URL > once I'm done. > > Jason > > > > On 5

is 0.8 stable?

2013-05-27 Thread S Ahmed
Hi, Sorry if I missed the announcement, but is 0.8 stable/production worthy as of yet? Is anyone using it in the wild?

Re: Analysis of producer performance -- and Producer-Kafka reliability

2013-04-12 Thread S Ahmed
Interesting topic. How would buffering in RAM help in reality though (just trying to work through the scenario in my head): the producer tries to connect to a broker, it fails, so it appends the message to an in-memory store. If the broker is down for say 20 minutes and then comes back online, won't

Re: log.file.size limit?

2013-03-25 Thread S Ahmed
nnel.html#map(java.nio.channels.FileChannel.MapMode<http://docs.oracle.com/javase/6/docs/api/java/nio/channels/FileChannel.html#map(java.nio.channels.FileChannel.MapMode>, > long, long) > > > > On 3/25/13 2:42 PM, S Ahmed wrote: > >> Is there any limit to how large a log

log.file.size limit?

2013-03-25 Thread S Ahmed
Is there any limit to how large a log file can be? I swear I read somewhere that java's memory mapped implementation is limited to 2GB but I'm not sure.

Re: Anyone working on a Kafka book?

2013-03-19 Thread S Ahmed
I guess the challenge would be that kafka is still at version 0.8, so by the time your book comes out it might be at version 1.0, i.e. it's a moving target. Sounds like a great idea though! On Tue, Mar 19, 2013 at 12:20 PM, Jun Rao wrote: > Hi, David, > > At LinkedIn, committers are too busy to

Re: Consume from X messages ago

2013-03-19 Thread S Ahmed
I thought since the offsets in 0.8 are numeric and not byte offsets like in 0.7.x, you can simply take, say, the current offset - 1. On Tue, Mar 19, 2013 at 12:16 PM, Neha Narkhede wrote: > Jim, > > You can leverage the ExportZkOffsets/ImportZkOffsets tools to do this. > ExportZkOffsets exp

Re: Kafka replication presentation at ApacheCon

2013-02-28 Thread S Ahmed
Excellent, thanks! BTW, the slides show that the message 'm3' is lost. I guess the leader is then the single point of failure when a producer sends a message, meaning it can never bypass the leader and write to the followers in case of leader failure, right? On Thu, Feb 28, 2013 at 8:35 AM

Re: long running (continuous) benchmark test

2013-02-06 Thread S Ahmed
Another reason is to test kafka with potentially larger payloads (10-100K). On Wed, Feb 6, 2013 at 2:26 PM, S Ahmed wrote: > I plan on creating some sort of a long-running test with kafka, curious if > anyone has already done this and released the code. > > I want to setup some e

Re: Kafka 0.8 producer within Play Framework?

2013-02-05 Thread S Ahmed
Shouldn't your producer be at the controller scope? Instantiating it every time is probably not the correct pattern. You probably want to use an async producer too, right? On Mon, Feb 4, 2013 at 7:12 PM, charlie w wrote: > It seems the issue is related to Scala versions. I grabbed from > ht

Re: S3 Archiving for Kafka topics (with Zookeeper resume)

2013-01-31 Thread S Ahmed
Great, thanks. BTW, that's not a daemon server, is it? Is it something you have to wrap to run as a service/daemon? On Thu, Jan 31, 2013 at 4:29 AM, Jibran Saithi wrote: > Hey, > > I know this has come up a few times, so thought I'd share a bit of code > we've been using to archive topics to S3.

Re: anecdotal uptime and service monitoring

2013-01-29 Thread S Ahmed
mers). For ZK, ideally, one should monitor ZK > request latency and GCs. > > Thanks, > > Jun > > On Fri, Dec 28, 2012 at 7:27 AM, S Ahmed wrote: > > > Curious what kind of uptime have you guys experienced using kafka? > > > > What sort of monitoring do you s

Re: Payload size exception

2013-01-29 Thread S Ahmed
Ok so it might be an issue somewhere in the pipeline (I'm guessing memory issues?). They are xml files, and that 30-100 was uncompressed. On Tue, Jan 29, 2013 at 12:28 PM, Neha Narkhede wrote: > > At linkedin, what is the largest payload size per message you guys have > in > > production? > > >

Re: Payload size exception

2013-01-29 Thread S Ahmed
Neha/Jay, At linkedin, what is the largest payload size per message you guys have in production? My app's messages might be around 20-100 kilobytes in size and I am hoping to get an idea if others have large messages like this for any production use case. On Tue, Jan 29, 2013 at 11:35 AM, Neha Narkhede wr

Re: How do you keep track of offset in a partition

2013-01-28 Thread S Ahmed
Once you have an offset, is it possible to know how many messages there are from that point to the end? (or at least for the particular topic partition that you are requesting data from?). The idea is to get a sense of how far behind the consumers are from the # of messages coming in etc. I'm guessing t

Re: are topics and partitions dynamic?

2013-01-27 Thread S Ahmed
es that topic with > num.partitions and default.replication.factor. > 2. Use the admin command bin/kafka-create-topic.sh > > Thanks, > Neha > > > On Sun, Jan 27, 2013 at 12:23 PM, S Ahmed wrote: > > > I'm looking at the kafka server.properties that is in /conf

Re: first steps with the codebase

2012-12-12 Thread S Ahmed
y invoke error > conditions, and some do their cleanup by interrupting I/O both of which may > produce exceptions. But provided they are just logged that is fine. > > -Jay > > > On Tue, Dec 11, 2012 at 2:16 PM, S Ahmed wrote: > > > And my question before that regarding wh

Re: first steps with the codebase

2012-12-11 Thread S Ahmed
eck in config/server.properties. By default it writes in > /tmp/kafka-logs/ . > > -Original Message- > From: S Ahmed [mailto:sahmed1...@gmail.com] > Sent: 12 December 2012 02:51 > To: users@kafka.apache.org > Subject: Re: first steps with the codebase > > help anyone? :)

Re: first steps with the codebase

2012-12-11 Thread S Ahmed
help anyone? :) Much much appreciated! On Tue, Dec 11, 2012 at 12:03 AM, S Ahmed wrote: > BTW, where exactly will the broker be writing these messages? Is it in a > /tmp folder? > > > On Tue, Dec 11, 2012 at 12:02 AM, S Ahmed wrote: > >> Neha, >> >>

Re: first steps with the codebase

2012-12-10 Thread S Ahmed
BTW, where exactly will the broker be writing these messages? Is it in a /tmp folder? On Tue, Dec 11, 2012 at 12:02 AM, S Ahmed wrote: > Neha, > > But what do I need to start before running the tests, I tried to run the > test "testAsyncSendCanCorrectlyFailWithTimeou

Re: first steps with the codebase

2012-12-10 Thread S Ahmed
t option. > > Thanks, > Neha > > On Mon, Dec 10, 2012 at 7:31 PM, S Ahmed wrote: > > Hi, > > > > So I followed the instructions from here: > > https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup > > > > So I pulled down the latest fro

Re: tracking page views at linkedin

2012-12-10 Thread S Ahmed
it works. We do not log out to disk on the web service > machines, rather we use the async setting in the kafka producer from the > app and it directly sends all tracking and monitoring data to the kafka > cluster. > > > On Sun, Dec 9, 2012 at 12:47 PM, S Ahmed wrote: > >

Re: steps to mitigate hardware failure in a replicate

2012-11-30 Thread S Ahmed
> Joel > > > > On Fri, Nov 30, 2012 at 11:22 AM, S Ahmed wrote: > > > Hello, > > > > I am watching the video of your meetup, where Jun is going over the new > .8 > > replica feature. > > > > Since the replica's for a master broker ar

Re: consumer read process

2012-11-29 Thread S Ahmed
testing new list On Wed, Nov 28, 2012 at 10:44 AM, Jun Rao wrote: > You can find the information at > http://incubator.apache.org/kafka/design.html > > Look for consumer registration algorithm and consumer rebalancing > algorithm. > > Thanks, > > Jun > > >

Re: consumer read process

2012-11-28 Thread S Ahmed
If you read from offset x last, what information can you get regarding how many messages are left to process? On Wed, Nov 28, 2012 at 10:13 AM, S Ahmed wrote: > Can someone go over how a consumer goes about reading from a broker? > > example: > > 1. connect to zookeeper, get inf