Fwd: Pulling metrics from kafka-console-producer.

2015-09-09 Thread Pavan Kumar
-- Forwarded message --
From: Pavan Kumar 
Date: Tue, Sep 8, 2015 at 3:23 PM
Subject: Pulling metrics from kafka-console-producer.
To: jmxtrans 


Hi,
  I have been trying to pull metrics from kafka-console-producer with
jmxtrans as follows:

  bin/kafka-run-class kafka.tools.JmxTool --object-name 'kafka.producer:type=producer-metrics,*' \
      --jmx-url service:jmx:rmi:///jndi/rmi://localhost:2002/jmxrmi

I am able to pull metrics from the broker (kafka-server) but not from the
consumers and producers (using console-producer and console-consumer as the
producers and consumers). Please reply if someone is aware of this
situation. Your help is much appreciated. Thanks in advance.
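
(Note: the producer-metrics MBeans live in the producer's own JVM, not on the broker, so the
--jmx-url has to point at the console-producer process; kafka-run-class.sh exposes remote JMX
when JMX_PORT is set in its environment. Below is a minimal sketch of reading the same MBeans
with the standard javax.management API, assuming the console producer was started on localhost
with JMX_PORT=2002. The object-name pattern is the one from the command above and may need
adjusting for the older Scala producer, which registers differently named MBeans.)

import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ProducerMetricsDump {
    public static void main(String[] args) throws Exception {
        // Point at the console-producer JVM (started with JMX_PORT=2002), not at a broker.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:2002/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Same object-name pattern as the JmxTool command above.
            ObjectName pattern = new ObjectName("kafka.producer:type=producer-metrics,*");
            for (ObjectName name : mbsc.queryNames(pattern, null)) {
                for (MBeanAttributeInfo attr : mbsc.getMBeanInfo(name).getAttributes()) {
                    try {
                        System.out.println(name + " " + attr.getName() + " = "
                            + mbsc.getAttribute(name, attr.getName()));
                    } catch (Exception e) {
                        // some attributes may not be readable; skip them
                    }
                }
            }
        } finally {
            connector.close();
        }
    }
}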




-- 


Thanks & Regards,
Pavan Kumar Reddy Sannadi


Issue in pulling metrics from kafka-console-producer.

2015-09-09 Thread Pavan Kumar
I have installed JMXReporter to pull metrics from Apache
Kafka 2.11-0.8.2.0, and it is exposing all the Server Metrics listed on the
page http://docs.confluent.io/1.0/kafka/monitoring.html, but not the Producer
and Consumer Metrics. Is there any way I can fetch them from the Kafka
server side, or does the Producer/Consumer side need to do something to be
able to fetch/emit them?
I will be very thankful if you could share your thoughts on this.

Thanks In Advance!!



Thanks & Regards,
Pavan Kumar Reddy Sannadi


Fwd: Issue in pulling metrics from kafka-console-producer.

2015-09-09 Thread Pavan Kumar
-- Forwarded message --
From: Pavan Kumar 
Date: Thu, Sep 10, 2015 at 12:16 PM
Subject: Issue in pulling metrics from kafka-console-producer.
To: users@kafka.apache.org






I have installed JMXReporter to pull metrics from Apache
Kafka 2.11-0.8.2.0, and it is exposing all the Server Metrics listed on the
page http://docs.confluent.io/1.0/kafka/monitoring.html, but not the Producer
and Consumer Metrics. Is there any way I can fetch them from the Kafka
server side, or does the Producer/Consumer side need to do something to be
able to fetch/emit them?
I will be very thankful if you could share your thoughts on this.

Thanks In Advance!!



Thanks & Regards,
Pavan Kumar Reddy Sannadi



-- 


Thanks & Regards,
Pavan Kumar Reddy Sannadi


Re: Resetting consumer offsets after moving to offset.storage=kafka

2015-09-09 Thread Ye Hong
Thank you very much, Erik.
Yes, the SimpleConsumer definitely would achieve the goal. It was easier with
ImportZkOffsets, as it only takes a couple of lines of commands. If there is no
equivalent of ImportZkOffsets, I will go with the SimpleConsumer.

As for the data loss, it took place in the downstream processing when data were 
accidentally deleted. It has nothing to do with Kafka. We just thought it might 
be a good idea to build tools enabling us to recoup future data loss. 

Best,

Ye

> On Sep 9, 2015, at 2:34 PM, Helleren, Erik  wrote:
> 
> It is possible to commit offsets using the SimpleConsumer API to kafka or
> zookeeper for any GroupID, topic, and partition tuple.  There are some
> difficulties with the SimpleConsumer, but it should be able to make the
> call within your app.  See the scala Doc here:
> http://apache.mirrorcatalogs.com/kafka/0.8.2-beta/scala-doc/index.html#kafk
> a.javaapi.consumer.SimpleConsumer And look for the commitOffsets function.
> 
> 
> I am curious, in what situations are there data loss?
> -Erik  
> 
> 
> On 9/9/15, 4:17 PM, "Ye Hong"  wrote:
> 
>> Hi,
>> 
>> We have a consumer that under certain circumstances may lose data. To
>> guard against such data loss, we have a tool that periodically pulls and
>> stores offsets from zk. Once a data loss takes place, we use our
>> historical offsets to reset the consumer offset on zk.
>> With offset.storage=zookeeper, the tool just simply calls
>> kafka-run-class.sh kafka.tools.ExportZkOffsets/ImportZkOffsets. However,
>> after moving to offset.storage=kafka, we can no longer call
>> ExportZkOffsets/ImportZkOffsets.
>> For offset export, I suppose we can call the REST API of Burrow to get
>> the same results. However, I couldn't find an easy way to reset offsets
>> that's comparable to ImportZkOffsets. Could someone shed some lights on
>> what we should do?
>> 
>> Thanks!
> 



Re: Resetting consumer offsets after moving to offset.storage=kafka

2015-09-09 Thread Helleren, Erik
It is possible to commit offsets using the SimpleConsumer API to kafka or
zookeeper for any GroupID, topic, and partition tuple.  There are some
difficulties with the SimpleConsumer, but it should be able to make the
call within your app.  See the scala Doc here:
http://apache.mirrorcatalogs.com/kafka/0.8.2-beta/scala-doc/index.html#kafka.javaapi.consumer.SimpleConsumer
and look for the commitOffsets function.
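
For reference, a rough sketch of what such a commit might look like with the 0.8.2 javaapi.
The constructor arguments below are my assumption from that era's docs and may differ slightly
between 0.8.x releases, so verify against the scaladoc above; group, topic, partition and
offset are placeholders.

import java.util.Collections;
import java.util.Map;

import kafka.common.OffsetAndMetadata;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetCommitRequest;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetResetSketch {
    public static void main(String[] args) {
        // Host, port, topic, group and offset below are placeholders.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "offset-reset-tool");
        try {
            TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
            // args assumed to be (offset, metadata, timestamp) in the 0.8.2 javaapi
            Map<TopicAndPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
                tp, new OffsetAndMetadata(12345L, "restored", System.currentTimeMillis()));
            // versionId 0 writes the offset to ZooKeeper, 1 (and above) to the Kafka offsets topic
            OffsetCommitRequest request = new OffsetCommitRequest(
                "my-consumer-group", offsets, 0 /* correlationId */, "offset-reset-tool", (short) 1);
            consumer.commitOffsets(request);
        } finally {
            consumer.close();
        }
    }
}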
 

I am curious, in what situations are there data loss?
-Erik  


On 9/9/15, 4:17 PM, "Ye Hong"  wrote:

>Hi,
>
>We have a consumer that under certain circumstances may lose data. To
>guard against such data loss, we have a tool that periodically pulls and
>stores offsets from zk. Once a data loss takes place, we use our
>historical offsets to reset the consumer offset on zk.
>With offset.storage=zookeeper, the tool just simply calls
>kafka-run-class.sh kafka.tools.ExportZkOffsets/ImportZkOffsets. However,
>after moving to offset.storage=kafka, we can no longer call
>ExportZkOffsets/ImportZkOffsets.
>For offset export, I suppose we can call the REST API of Burrow to get
>the same results. However, I couldn't find an easy way to reset offsets
>that's comparable to ImportZkOffsets. Could someone shed some lights on
>what we should do?
>
>Thanks!



Resetting consumer offsets after moving to offset.storage=kafka

2015-09-09 Thread Ye Hong
Hi,

We have a consumer that under certain circumstances may lose data. To guard 
against such data loss, we have a tool that periodically pulls and stores 
offsets from zk. Once a data loss takes place, we use our historical offsets to 
reset the consumer offset on zk.
With offset.storage=zookeeper, the tool just simply calls kafka-run-class.sh 
kafka.tools.ExportZkOffsets/ImportZkOffsets. However, after moving to 
offset.storage=kafka, we can no longer call ExportZkOffsets/ImportZkOffsets.
For offset export, I suppose we can call the REST API of Burrow to get the same 
results. However, I couldn't find an easy way to reset offsets that’s 
comparable to ImportZkOffsets. Could someone shed some light on what we should
do?

Thanks!

Re: latency test

2015-09-09 Thread Yuheng Du
Thank you Erik.

In my test I am using fixed 200-byte messages and I run 500k messages per
producer on 92 physically isolated producers. Each test run takes about 20
minutes. As the broker cluster is migrated into a new physical cluster, I
will perform my test and get the latency results in the next couple of
weeks.

I will keep you posted.

Thanks.

On Wed, Sep 9, 2015 at 4:58 PM, Helleren, Erik 
wrote:

> Yes, and that can really hurt average performance.  All the partitions
> were nearly identical up to the 99%’ile, and had very good performance at
> that level hovering around a few milli’s.  But when looking beyond the
> 99%’ile, there was that clear fork in the distribution where a set of 3
> partitions surged upwards.  This could be for a dozen different reasons:
> Network blips, noisy networks, location in the network, resource
> contention on that broker, etc.  But it affected that one broker more than
> others.  And the reasons for my cluster displaying this behavior could be
> very different than the reason for any other cluster.
>
> It's worth noting that this was more a latency test than a stress test.
> There was a single kafka producer object, very small message sizes (100
> bytes), and it was only pushing through around 5MB/s worth of data. And
> the client was configured to minimize the amount of data that would be on
> the internal queue/buffer waiting to be sent.  The messages that were
> being sent were composed of 10-byte ascii ‘words’ selected randomly
> from a dictionary of 1000 words, which benefits compression while still
> resulting in likely unique messages.  And the test I ran was only for 6
> min, and I did not do the work required to see if there was a burst of
> slower messages which caused this behavior, or if it was a consistent
> issue with that node.
> -Erik
>
>
> On 9/9/15, 2:24 PM, "Yuheng Du"  wrote:
>
> >So are you suggesting that the long delays seen in the top 1% of latencies
> >happen on the slower partitions that are further away? Thanks.
> >
> >On Wed, Sep 9, 2015 at 3:15 PM, Helleren, Erik
> >
> >wrote:
> >
> >> So, I did my own latency test on a cluster of 3 nodes, and there is a
> >> significant difference around the 99%’ile and higher for partitions when
> >> measuring the ack time when configured for a single ack.  The graph
> >> that I wish I could attach or post clearly shows that around 1/3 of the
> >> partitions significantly diverge from the other two.  So, at least in my
> >> case, one of my brokers is further than the others.
> >> -Erik
> >>
> >> On 9/4/15, 1:06 PM, "Yuheng Du"  wrote:
> >>
> >> >No problem. Thanks for your advice. I think it would be fun to
> >>explore. I
> >> >only know how to program in java though. Hope it will work.
> >> >
> >> >On Fri, Sep 4, 2015 at 2:03 PM, Helleren, Erik
> >> >
> >> >wrote:
> >> >
> >> >> I thing the suggestion is to have partitions/brokers >=1, so 32
> >>should
> >> >>be
> >> >> enough.
> >> >>
> >> >> As for latency tests, there isn’t a lot of code to do a latency test.
> >> >>If
> >> >> you just want to measure ack time its around 100 lines.  I will try
> >>to
> >> >> push out some good latency testing code to github, but my company is
> >> >> scared of open sourcing code… so it might be a while…
> >> >> -Erik
> >> >>
> >> >>
> >> >> On 9/4/15, 12:55 PM, "Yuheng Du"  wrote:
> >> >>
> >> >> >Thanks for your reply Erik. I am running some more tests according
> >>to
> >> >>your
> >> >> >suggestions now and I will share with my results here. Is it
> >>necessary
> >> >>to
> >> >> >use a fixed number of partitions (32 partitions maybe) for my test?
> >> >> >
> >> >> >I am testing 2, 4, 8, 16 and 32 brokers scenarios, all of them are
> >> >>running
> >> >> >on individual physical nodes. So I think using at least 32
> >>partitions
> >> >>will
> >> >> >make more sense? I have seen latencies increase as the number of
> >> >> >partitions
> >> >> >goes up in my experiments.
> >> >> >
> >> >> >To get the latency of each event data recorded, are you suggesting
> >> >>that I
> >> >> >rewrite my own test program (in Java perhaps) or I can just modify
> >>the
> >> >> >standard test program provided by kafka (
> >> >> >https://gist.github.com/jkreps/c7ddb4041ef62a900e6c )? I guess I
> >>need
> >> >>to
> >> >> >rebuild the source if I modify the standard java test program
> >> >> >ProducerPerformance provided in kafka, right? Now this standard
> >>program
> >> >> >only has average latencies and percentile latencies but no per event
> >> >> >latencies.
> >> >> >
> >> >> >Thanks.
> >> >> >
> >> >> >On Fri, Sep 4, 2015 at 1:42 PM, Helleren, Erik
> >> >> >
> >> >> >wrote:
> >> >> >
> >> >> >> That is an excellent question!  There are a bunch of ways to
> >>monitor
> >> >> >> jitter and see when that is happening.  Here are a few:
> >> >> >>
> >> >> >> - You could slice the histogram every few seconds, save it out
> >>with a
> >> >> >> timestamp, and then look at how they compare.  This would be
> >>mostly
> >> >> >> manual, or

Re: latency test

2015-09-09 Thread Helleren, Erik
Yes, and that can really hurt average performance.  All the partitions
were nearly identical up to the 99%’ile, and had very good performance at
that level hovering around a few milli’s.  But when looking beyond the
99%’ile, there was that clear fork in the distribution where a set of 3
partitions surged upwards.  This could be for a dozen different reasons:
Network blips, noisy networks, location in the network, resource
contention on that broker, etc.  But it affected that one broker more than
others.  And the reasons for my cluster displaying this behavior could be
very different than the reason for any other cluster.

It's worth noting that this was more a latency test than a stress test.
There was a single kafka producer object, very small message sizes (100
bytes), and it was only pushing through around 5MB/s worth of data. And
the client was configured to minimize the amount of data that would be on
the internal queue/buffer waiting to be sent.  The messages that were
being sent were composed of 10-byte ascii ‘words’ selected randomly
from a dictionary of 1000 words, which benefits compression while still
resulting in likely unique messages.  And the test I ran was only for 6
min, and I did not do the work required to see if there was a burst of
slower messages which caused this behavior, or if it was a consistent
issue with that node.
-Erik
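
(For anyone reproducing this setup, a rough sketch of that payload scheme: each message is ten
10-byte ASCII "words" drawn from a fixed 1000-word dictionary, so payloads compress well but are
still almost always unique. The dictionary below is synthetic; the real test presumably used its
own word list.)

import java.util.Random;

public class WordPayloadSketch {
    static final String[] DICT = new String[1000];
    static {
        // Build a synthetic dictionary of 1000 random 10-character lowercase "words".
        Random seed = new Random(42);
        for (int i = 0; i < DICT.length; i++) {
            StringBuilder w = new StringBuilder(10);
            for (int c = 0; c < 10; c++) w.append((char) ('a' + seed.nextInt(26)));
            DICT[i] = w.toString();
        }
    }

    // 10 words x 10 bytes = a 100-byte message, as described above.
    static byte[] nextPayload(Random rnd) {
        StringBuilder sb = new StringBuilder(100);
        for (int i = 0; i < 10; i++) sb.append(DICT[rnd.nextInt(DICT.length)]);
        return sb.toString().getBytes();
    }

    public static void main(String[] args) {
        System.out.println(new String(nextPayload(new Random())));
    }
}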


On 9/9/15, 2:24 PM, "Yuheng Du"  wrote:

>So are you suggesting that the long delays seen in the top 1% of latencies
>happen on the slower partitions that are further away? Thanks.
>
>On Wed, Sep 9, 2015 at 3:15 PM, Helleren, Erik
>
>wrote:
>
>> So, I did my own latency test on a cluster of 3 nodes, and there is a
>> significant difference around the 99%’ile and higher for partitions when
>> measuring the ack time when configured for a single ack.  The graph
>> that I wish I could attach or post clearly shows that around 1/3 of the
>> partitions significantly diverge from the other two.  So, at least in my
>> case, one of my brokers is further than the others.
>> -Erik
>>
>> On 9/4/15, 1:06 PM, "Yuheng Du"  wrote:
>>
>> >No problem. Thanks for your advice. I think it would be fun to
>>explore. I
>> >only know how to program in java though. Hope it will work.
>> >
>> >On Fri, Sep 4, 2015 at 2:03 PM, Helleren, Erik
>> >
>> >wrote:
>> >
>> >> I thing the suggestion is to have partitions/brokers >=1, so 32
>>should
>> >>be
>> >> enough.
>> >>
>> >> As for latency tests, there isn’t a lot of code to do a latency test.
>> >>If
>> >> you just want to measure ack time its around 100 lines.  I will try
>>to
>> >> push out some good latency testing code to github, but my company is
>> >> scared of open sourcing code… so it might be a while…
>> >> -Erik
>> >>
>> >>
>> >> On 9/4/15, 12:55 PM, "Yuheng Du"  wrote:
>> >>
>> >> >Thanks for your reply Erik. I am running some more tests according
>>to
>> >>your
>> >> >suggestions now and I will share with my results here. Is it
>>necessary
>> >>to
>> >> >use a fixed number of partitions (32 partitions maybe) for my test?
>> >> >
>> >> >I am testing 2, 4, 8, 16 and 32 brokers scenarios, all of them are
>> >>running
>> >> >on individual physical nodes. So I think using at least 32
>>partitions
>> >>will
>> >> >make more sense? I have seen latencies increase as the number of
>> >> >partitions
>> >> >goes up in my experiments.
>> >> >
>> >> >To get the latency of each event data recorded, are you suggesting
>> >>that I
>> >> >rewrite my own test program (in Java perhaps) or I can just modify
>>the
>> >> >standard test program provided by kafka (
>> >> >https://gist.github.com/jkreps/c7ddb4041ef62a900e6c )? I guess I
>>need
>> >>to
>> >> >rebuild the source if I modify the standard java test program
>> >> >ProducerPerformance provided in kafka, right? Now this standard
>>program
>> >> >only has average latencies and percentile latencies but no per event
>> >> >latencies.
>> >> >
>> >> >Thanks.
>> >> >
>> >> >On Fri, Sep 4, 2015 at 1:42 PM, Helleren, Erik
>> >> >
>> >> >wrote:
>> >> >
>> >> >> That is an excellent question!  There are a bunch of ways to
>>monitor
>> >> >> jitter and see when that is happening.  Here are a few:
>> >> >>
>> >> >> - You could slice the histogram every few seconds, save it out
>>with a
>> >> >> timestamp, and then look at how they compare.  This would be
>>mostly
>> >> >> manual, or you can graph line charts of the percentiles over time
>>in
>> >> >>excel
>> >> >> where each percentile would be a series.  If you are using HDR
>> >> >>Histogram,
>> >> >> you should look at how to use the Recorder class to do this
>>coupled
>> >> >>with a
>> >> >> ScheduledExecutorService.
>> >> >>
>> >> >> - You can just save the starting timestamp of the event and the
>> >>latency
>> >> >>of
>> >> >> each event.  If you put it into a CSV, you can just load it up
>>into
>> >> >>excel
>> >> >> and graph as a XY chart.  That way you can see every point during
>>the
>> >> >> running of your program and you can see trends.

Re: latency test

2015-09-09 Thread Yuheng Du
So are you suggesting that the long delays seen in the top 1% of latencies
happen on the slower partitions that are further away? Thanks.

On Wed, Sep 9, 2015 at 3:15 PM, Helleren, Erik 
wrote:

> So, I did my own latency test on a cluster of 3 nodes, and there is a
> significant difference around the 99%’ile and higher for partitions when
> measuring the ack time when configured for a single ack.  The graph
> that I wish I could attach or post clearly shows that around 1/3 of the
> partitions significantly diverge from the other two.  So, at least in my
> case, one of my brokers is further than the others.
> -Erik
>
> On 9/4/15, 1:06 PM, "Yuheng Du"  wrote:
>
> >No problem. Thanks for your advice. I think it would be fun to explore. I
> >only know how to program in java though. Hope it will work.
> >
> >On Fri, Sep 4, 2015 at 2:03 PM, Helleren, Erik
> >
> >wrote:
> >
> >> I thing the suggestion is to have partitions/brokers >=1, so 32 should
> >>be
> >> enough.
> >>
> >> As for latency tests, there isn’t a lot of code to do a latency test.
> >>If
> >> you just want to measure ack time its around 100 lines.  I will try to
> >> push out some good latency testing code to github, but my company is
> >> scared of open sourcing code… so it might be a while…
> >> -Erik
> >>
> >>
> >> On 9/4/15, 12:55 PM, "Yuheng Du"  wrote:
> >>
> >> >Thanks for your reply Erik. I am running some more tests according to
> >>your
> >> >suggestions now and I will share with my results here. Is it necessary
> >>to
> >> >use a fixed number of partitions (32 partitions maybe) for my test?
> >> >
> >> >I am testing 2, 4, 8, 16 and 32 brokers scenarios, all of them are
> >>running
> >> >on individual physical nodes. So I think using at least 32 partitions
> >>will
> >> >make more sense? I have seen latencies increase as the number of
> >> >partitions
> >> >goes up in my experiments.
> >> >
> >> >To get the latency of each event data recorded, are you suggesting
> >>that I
> >> >rewrite my own test program (in Java perhaps) or I can just modify the
> >> >standard test program provided by kafka (
> >> >https://gist.github.com/jkreps/c7ddb4041ef62a900e6c )? I guess I need
> >>to
> >> >rebuild the source if I modify the standard java test program
> >> >ProducerPerformance provided in kafka, right? Now this standard program
> >> >only has average latencies and percentile latencies but no per event
> >> >latencies.
> >> >
> >> >Thanks.
> >> >
> >> >On Fri, Sep 4, 2015 at 1:42 PM, Helleren, Erik
> >> >
> >> >wrote:
> >> >
> >> >> That is an excellent question!  There are a bunch of ways to monitor
> >> >> jitter and see when that is happening.  Here are a few:
> >> >>
> >> >> - You could slice the histogram every few seconds, save it out with a
> >> >> timestamp, and then look at how they compare.  This would be mostly
> >> >> manual, or you can graph line charts of the percentiles over time in
> >> >>excel
> >> >> where each percentile would be a series.  If you are using HDR
> >> >>Histogram,
> >> >> you should look at how to use the Recorder class to do this coupled
> >> >>with a
> >> >> ScheduledExecutorService.
> >> >>
> >> >> - You can just save the starting timestamp of the event and the
> >>latency
> >> >>of
> >> >> each event.  If you put it into a CSV, you can just load it up into
> >> >>excel
> >> >> and graph as a XY chart.  That way you can see every point during the
> >> >> running of your program and you can see trends.  You want to be
> >>careful
> >> >> about this one, especially of writing to a file in the callback that
> >> >>kfaka
> >> >> provides.
> >> >>
> >> >> Also, I have noticed that most of the very slow observations are at
> >> >> startup.  But don’t trust me, trust the data and share your findings.
> >> >> Also, having a 99.9 percentile provides a pretty good standard for
> >> >>typical
> >> >> poor case performance.  Average is borderline useless, 50%’ile is a
> >> >>better
> >> >> typical case because that’s the number that says “half of events
> >>will be
> >> >> this slow or faster”, or for values that are high like 99.9%’ile,
> >>“0.1%
> >> >>of
> >> >> all events will be slower than this”.
> >> >> -Erik
> >> >>
> >> >> On 9/4/15, 12:05 PM, "Yuheng Du"  wrote:
> >> >>
> >> >> >Thank you Erik! That's is helpful!
> >> >> >
> >> >> >But also I see jitters of the maximum latencies when running the
> >> >> >experiment.
> >> >> >
> >> >> >The average end to acknowledgement latency from producer to broker
> >>is
> >> >> >around 5ms when using 92 producers and 4 brokers, and the 99.9
> >> >>percentile
> >> >> >latency is 58ms, but the maximum latency goes up to 1359 ms. How to
> >> >>locate
> >> >> >the source of this jitter?
> >> >> >
> >> >> >Thanks.
> >> >> >
> >> >> >On Fri, Sep 4, 2015 at 10:54 AM, Helleren, Erik
> >> >> >
> >> >> >wrote:
> >> >> >
> >> >> >> Well… not to be contrarian, but latency depends much more on the
> >> >>latency
> >> >> >> between the producer and the broker that is the leader for the
> 

Re: latency test

2015-09-09 Thread Helleren, Erik
So, I did my own latency test on a cluster of 3 nodes, and there is a
significant difference around the 99%’ile and higher for partitions when
measuring the ack time when configured for a single ack.  The graph
that I wish I could attach or post clearly shows that around 1/3 of the
partitions significantly diverge from the other two.  So, at least in my
case, one of my brokers is further than the others.
-Erik
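
(In case it helps anyone reproduce this kind of measurement, here is a bare-bones sketch with the
new Java producer: acks=1 and a send callback that records ack latency per partition. Broker
address, topic name and message count are placeholders, and it keeps only an in-memory maximum
per partition so the callback does no I/O.)

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AckLatencySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("acks", "1");                           // time the leader ack only
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        final Map<Integer, Long> maxAckMsByPartition = new ConcurrentHashMap<Integer, Long>();
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
        byte[] payload = new byte[100];                   // small messages, as in the test above

        for (int i = 0; i < 100000; i++) {
            final long sentAtNanos = System.nanoTime();
            producer.send(new ProducerRecord<byte[], byte[]>("latency-test", payload), new Callback() {
                public void onCompletion(RecordMetadata md, Exception e) {
                    if (e != null || md == null) return;
                    // Callbacks run on the producer's single I/O thread, so these updates are serial.
                    long ms = (System.nanoTime() - sentAtNanos) / 1000000L;
                    Long prev = maxAckMsByPartition.get(md.partition());
                    if (prev == null || ms > prev) maxAckMsByPartition.put(md.partition(), ms);
                }
            });
        }
        producer.close();
        System.out.println("max ack latency (ms) per partition: " + maxAckMsByPartition);
    }
}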

On 9/4/15, 1:06 PM, "Yuheng Du"  wrote:

>No problem. Thanks for your advice. I think it would be fun to explore. I
>only know how to program in java though. Hope it will work.
>
>On Fri, Sep 4, 2015 at 2:03 PM, Helleren, Erik
>
>wrote:
>
>> I thing the suggestion is to have partitions/brokers >=1, so 32 should
>>be
>> enough.
>>
>> As for latency tests, there isn’t a lot of code to do a latency test.
>>If
>> you just want to measure ack time its around 100 lines.  I will try to
>> push out some good latency testing code to github, but my company is
>> scared of open sourcing code… so it might be a while…
>> -Erik
>>
>>
>> On 9/4/15, 12:55 PM, "Yuheng Du"  wrote:
>>
>> >Thanks for your reply Erik. I am running some more tests according to
>>your
>> >suggestions now and I will share with my results here. Is it necessary
>>to
>> >use a fixed number of partitions (32 partitions maybe) for my test?
>> >
>> >I am testing 2, 4, 8, 16 and 32 brokers scenarios, all of them are
>>running
>> >on individual physical nodes. So I think using at least 32 partitions
>>will
>> >make more sense? I have seen latencies increase as the number of
>> >partitions
>> >goes up in my experiments.
>> >
>> >To get the latency of each event data recorded, are you suggesting
>>that I
>> >rewrite my own test program (in Java perhaps) or I can just modify the
>> >standard test program provided by kafka (
>> >https://gist.github.com/jkreps/c7ddb4041ef62a900e6c )? I guess I need
>>to
>> >rebuild the source if I modify the standard java test program
>> >ProducerPerformance provided in kafka, right? Now this standard program
>> >only has average latencies and percentile latencies but no per event
>> >latencies.
>> >
>> >Thanks.
>> >
>> >On Fri, Sep 4, 2015 at 1:42 PM, Helleren, Erik
>> >
>> >wrote:
>> >
>> >> That is an excellent question!  There are a bunch of ways to monitor
>> >> jitter and see when that is happening.  Here are a few:
>> >>
>> >> - You could slice the histogram every few seconds, save it out with a
>> >> timestamp, and then look at how they compare.  This would be mostly
>> >> manual, or you can graph line charts of the percentiles over time in
>> >>excel
>> >> where each percentile would be a series.  If you are using HDR
>> >>Histogram,
>> >> you should look at how to use the Recorder class to do this coupled
>> >>with a
>> >> ScheduledExecutorService.
>> >>
>> >> - You can just save the starting timestamp of the event and the
>>latency
>> >>of
>> >> each event.  If you put it into a CSV, you can just load it up into
>> >>excel
>> >> and graph as a XY chart.  That way you can see every point during the
>> >> running of your program and you can see trends.  You want to be
>>careful
>> >> about this one, especially of writing to a file in the callback that
>> >>kfaka
>> >> provides.
>> >>
>> >> Also, I have noticed that most of the very slow observations are at
>> >> startup.  But don’t trust me, trust the data and share your findings.
>> >> Also, having a 99.9 percentile provides a pretty good standard for
>> >>typical
>> >> poor case performance.  Average is borderline useless, 50%’ile is a
>> >>better
>> >> typical case because that’s the number that says “half of events
>>will be
>> >> this slow or faster”, or for values that are high like 99.9%’ile,
>>“0.1%
>> >>of
>> >> all events will be slower than this”.
>> >> -Erik
>> >>
>> >> On 9/4/15, 12:05 PM, "Yuheng Du"  wrote:
>> >>
>> >> >Thank you Erik! That's is helpful!
>> >> >
>> >> >But also I see jitters of the maximum latencies when running the
>> >> >experiment.
>> >> >
>> >> >The average end to acknowledgement latency from producer to broker
>>is
>> >> >around 5ms when using 92 producers and 4 brokers, and the 99.9
>> >>percentile
>> >> >latency is 58ms, but the maximum latency goes up to 1359 ms. How to
>> >>locate
>> >> >the source of this jitter?
>> >> >
>> >> >Thanks.
>> >> >
>> >> >On Fri, Sep 4, 2015 at 10:54 AM, Helleren, Erik
>> >> >
>> >> >wrote:
>> >> >
>> >> >> Well… not to be contrarian, but latency depends much more on the
>> >>latency
>> >> >> between the producer and the broker that is the leader for the
>> >>partition
>> >> >> you are publishing to.  At least when your brokers are not
>>saturated
>> >> >>with
>> >> >> messages, and acks are set to 1.  If acks are set to ALL, latency
>>on
>> >>an
>> >> >> non-saturated kafka cluster will be: Round Trip Latency from
>> >>producer to
>> >> >> leader for partition + Max( slowest Round Trip Latency to a
>>replicas
>> >>of
>> >> >> that partition).  If a cluster is saturated with messages, w

Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Jun Rao
Thanks everyone for voting.

The following are the results of the votes.

+1 binding = 3 votes
+1 non-binding = 3 votes
-1 = 0 votes
0 = 0 votes

The vote passes.

I will release artifacts to maven central, update the dist svn and download
site. I will send out an announcement after that.

Thanks everyone who has contributed to the work in 0.8.2.2!

Jun

On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.8.2.2. This only
> fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy in
> 0.8.2.1.
>
> Release Notes for the 0.8.2.2 release
>
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
>
> /***
>
> Thanks,
>
> Jun
>
>


Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Jun Rao
+1 from me too.

Jun

On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.8.2.2. This only
> fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy in
> 0.8.2.1.
>
> Release Notes for the 0.8.2.2 release
>
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
>
> /***
>
> Thanks,
>
> Jun
>
>


Re: [kafka-clients] [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Joel Koshy
+1 binding

On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> This is the first candidate for release of Apache Kafka 0.8.2.2. This only
> fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy in
> 0.8.2.1.
>
> Release Notes for the 0.8.2.2 release
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
>
> /***
>
> Thanks,
>
> Jun
>


Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Gwen Shapira
+1 non-binding - verified signatures and build.

On Wed, Sep 9, 2015 at 10:28 AM, Ewen Cheslack-Postava 
wrote:

> +1 non-binding. Verified artifacts, unit tests, quick start.
>
> On Wed, Sep 9, 2015 at 10:09 AM, Guozhang Wang  wrote:
>
> > +1 binding, verified unit tests and quick start.
> >
> > On Wed, Sep 9, 2015 at 4:12 AM, Manikumar Reddy 
> > wrote:
> >
> > > +1 (non-binding). verified the artifacts, quick start.
> > >
> > > On Wed, Sep 9, 2015 at 2:41 AM, Ashish 
> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > Ran the build, works fine. All test cases passed
> > > >
> > > > On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> > > > > This is the first candidate for release of Apache Kafka 0.8.2.2.
> This
> > > > only
> > > > > fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to
> > snappy
> > > > in
> > > > > 0.8.2.1.
> > > > >
> > > > > Release Notes for the 0.8.2.2 release
> > > > >
> > > >
> > >
> >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > > > and sha2 (SHA256) checksum.
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
> > > > >
> > > > > * Maven artifacts to be voted upon prior to release:
> > > > > https://repository.apache.org/content/groups/staging/
> > > > >
> > > > > * scala-doc
> > > > >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
> > > > >
> > > > > * java-doc
> > > > >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
> > > > >
> > > > > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2
> tag
> > > > >
> > > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
> > > > >
> > > > > /***
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > >
> > > >
> > > >
> > > > --
> > > > thanks
> > > > ashish
> > > >
> > > > Blog: http://www.ashishpaliwal.com/blog
> > > > My Photo Galleries: http://www.pbase.com/ashishpaliwal
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> Thanks,
> Ewen
>


Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Ewen Cheslack-Postava
+1 non-binding. Verified artifacts, unit tests, quick start.

On Wed, Sep 9, 2015 at 10:09 AM, Guozhang Wang  wrote:

> +1 binding, verified unit tests and quick start.
>
> On Wed, Sep 9, 2015 at 4:12 AM, Manikumar Reddy 
> wrote:
>
> > +1 (non-binding). verified the artifacts, quick start.
> >
> > On Wed, Sep 9, 2015 at 2:41 AM, Ashish  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Ran the build, works fine. All test cases passed
> > >
> > > On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> > > > This is the first candidate for release of Apache Kafka 0.8.2.2. This
> > > only
> > > > fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to
> snappy
> > > in
> > > > 0.8.2.1.
> > > >
> > > > Release Notes for the 0.8.2.2 release
> > > >
> > >
> >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
> > > >
> > > > *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > > and sha2 (SHA256) checksum.
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
> > > >
> > > > * Maven artifacts to be voted upon prior to release:
> > > > https://repository.apache.org/content/groups/staging/
> > > >
> > > > * scala-doc
> > > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
> > > >
> > > > * java-doc
> > > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
> > > >
> > > > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
> > > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
> > > >
> > > > /***
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > >
> > >
> > >
> > > --
> > > thanks
> > > ashish
> > >
> > > Blog: http://www.ashishpaliwal.com/blog
> > > My Photo Galleries: http://www.pbase.com/ashishpaliwal
> > >
> >
>
>
>
> --
> -- Guozhang
>



-- 
Thanks,
Ewen


Re: some producers stuck when one broker is bad

2015-09-09 Thread Mayuresh Gharat
1) any suggestion on how to identify the bad broker(s)?
---> At LinkedIn we have alerts that are set up using our internal scripts
for detecting if a broker has gone bad. We also check the under replicated
partitions and that can tell us which broker has gone bad. By broker going
bad, it can mean different things. Like the broker is alive but not
responding and is completely isolated or the broker has gone down, etc.
Can you tell us what you meant by your BROKER went BAD?

2) why bouncing of the bad broker got the producers recovered automatically
> This is because as you bounced, the leaders for other partitions
changed and the producer sent out a TopicMetadataRequest, which tells the
producer who the new leaders for the partitions are, and the producer
started sending messages to those brokers.

KAFKA-2120 will handle all of this for you automatically.

Thanks,

Mayuresh

On Tue, Sep 8, 2015 at 8:26 PM, Steven Wu  wrote:

> We have observed that some producer instances stopped sending traffic to
> brokers, because the memory buffer is full. those producers got stuck in
> this state permanently. Because we couldn't find out which broker is bad
> here. So I did a rolling restart the all brokers. after the bad broker got
> bounce, those stuck producers out of the woods automatically.
>
> I don't know the exact problem with that bad broker. it seems to me that
> some ZK states are inconsistent.
>
> I know timeout fix from KAFKA-2120 can probably avoid the permanent stuck.
> Here are some additional questions.
> 1) any suggestion on how to identify the bad broker(s)?
> 2) why bouncing of the bad broker got the producers recovered automatically
> (without restarting producers)
>
> producer: 0.8.2.1
> broker: 0.8.2.1
>
> Thanks,
> Steven
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: async producer callback not reliable

2015-09-09 Thread Mayuresh Gharat
Make sure you have inflight requests set to 1 if you want ordered messages.
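
(A rough sketch of that combination with the new Java producer: pipeline a batch of asynchronous
sends, then block once for the whole batch, and set max.in.flight.requests.per.connection=1 so
retries cannot reorder messages. Broker and topic names are placeholders, and on clients new
enough to have it, producer.flush() can replace the loop over the futures, as in Damian's
suggestion quoted below.)

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BatchedSyncSendSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("acks", "all");
        props.put("retries", "3");
        // One in-flight request per connection so a retried batch cannot jump ahead of a later one.
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        List<Future<RecordMetadata>> acks = new ArrayList<Future<RecordMetadata>>();
        for (int i = 0; i < 1000; i++) {
            // send() is asynchronous, so the whole batch is queued without a per-message wait
            acks.add(producer.send(new ProducerRecord<String, String>("my-topic", "msg-" + i)));
        }
        // Block once for the whole batch; each get() throws if that send ultimately failed.
        for (Future<RecordMetadata> ack : acks) {
            ack.get();
        }
        producer.close();
    }
}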

Thanks,

Mayuresh

On Tue, Sep 8, 2015 at 5:55 AM, Damian Guy  wrote:

> Can you do:
> producer.send(...)
> ...
> producer.send(...)
> producer.flush()
>
> By the time the flush returns all of your messages should have been sent
>
> On 8 September 2015 at 11:50, jinxing  wrote:
>
> > if i wanna send the message syncronously i can do as below:
> > future=producer.send(producerRecord, callback);
> > future.get();
> >
> >
> > but the throughput decrease dramatically;
> >
> >
> > is there a method to send the messages by batch but synchronously ?
> >
> >
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Guozhang Wang
+1 binding, verified unit tests and quick start.

On Wed, Sep 9, 2015 at 4:12 AM, Manikumar Reddy 
wrote:

> +1 (non-binding). verified the artifacts, quick start.
>
> On Wed, Sep 9, 2015 at 2:41 AM, Ashish  wrote:
>
> > +1 (non-binding)
> >
> > Ran the build, works fine. All test cases passed
> >
> > On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> > > This is the first candidate for release of Apache Kafka 0.8.2.2. This
> > only
> > > fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy
> > in
> > > 0.8.2.1.
> > >
> > > Release Notes for the 0.8.2.2 release
> > >
> >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > and sha2 (SHA256) checksum.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
> > >
> > > * Maven artifacts to be voted upon prior to release:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * scala-doc
> > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
> > >
> > > * java-doc
> > > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
> > >
> > > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
> > >
> > > /***
> > >
> > > Thanks,
> > >
> > > Jun
> >
> >
> >
> > --
> > thanks
> > ashish
> >
> > Blog: http://www.ashishpaliwal.com/blog
> > My Photo Galleries: http://www.pbase.com/ashishpaliwal
> >
>



-- 
-- Guozhang


MirrorMaker - Not consuming from all partitions

2015-09-09 Thread Craig Swift
Hello,

Hope everyone is doing well. I was hoping to get some assistance with a
strange issue we're experiencing while using the MirrorMaker to pull data
down from an 8 node Kafka cluster in AWS into our data center. Both Kafka
clusters and the mirror are using version 0.8.1.1 with dedicated Zookeeper
clusters for each cluster respectively (running 3.4.5).

The problem we're seeing is that the mirror starts up and begins consuming
from the cluster on a specific topic. It correctly attaches to all 24
partitions for that topic - but inevitably there are a series of partitions
that either don't get read or are read at a very slow rate. Those
partitions are always associated with the same brokers. For example, all
partitions on broker 2 won't be read or all partitions on broker 2 and 4
won't be read. On restarting the mirror, these 'stuck' partitions may stay
the same or move. If they move the backlog is drained very quickly. If we
add more mirrors for additional capacity the same situation happens except
that each mirror has its own set of stuck partitions. I've included the
mirror's configurations below along with samples from the logs.

1) The partition issue seems to happen when the mirror first starts up.
Once in a blue moon it reads from everything normally, but on restart it
can easily get back into this state.

2) We're fairly sure it isn't a processing/throughput issue. We can turn
the mirror off for a while, incur a large backlog of data, and when it is
enabled it chews through the data very quickly minus the handful of stuck
partitions.

3) We've looked at both the zookeeper and broker logs and there doesn't
seem to be anything out of the normal. We see the mirror connecting, there
are a few info messages about zookeeper nodes already existing, etc. No
specific errors.

4) We've enabled debugging on the mirror and we've noticed that during the
zk heartbeat/updates we're missing these messages for the 'stuck'
partitions:

[2015-09-08 18:38:12,157] DEBUG Reading reply sessionid:0x14f956bd57d21ee,
packet:: clientPath:null serverPath:null finished:false header:: 357,5
 replyHeader:: 357,8597251893,0  request::
'/consumers/mirror-kafkablk-kafka-gold-east-to-kafkablk-den/offsets/MessageHeadersBody/5,#34303537353838,-1
 response::
s{4295371756,8597251893,1439969185754,1441759092134,19500,0,0,0,7,0,4295371756}
 (org.apache.zookeeper.ClientCnxn)

i.e. we see this message for all the processing partitions, but never for
the stuck ones. There are no errors in the log prior to this though, and
once in a great while we might see a log entry for one of the stuck
partitions.

5) We've checked latency/response time with zookeeper from the brokers and
the mirror and it appears fine.

Mirror consumer config:
group.id=mirror-kafkablk-kafka-gold-east-to-kafkablk-den
consumer.id=mirror-kafkablk-mirror00-den-kafka-gold-east-to-kafkablk-den
zookeeper.connect=zk.strange.dev.net:2181
fetch.message.max.bytes=15728640
socket.receive.buffer.bytes=6400
socket.timeout.ms=6
zookeeper.connection.timeout.ms=6
zookeeper.session.timeout.ms=3
zookeeper.sync.time.ms=4000
auto.offset.reset=smallest
auto.commit.interval.ms=2

Mirror producer config:
client.id=mirror-kafkablk-mirror00-den-kafka-gold-east-to-kafkablk-den
metadata.broker.list=kafka00.lan.strange.dev.net:9092,
kafka01.lan.strange.dev.net:9092,kafka02.lan.strange.dev.net:9092,
kafka03.lan.strange.dev.net:9092,kafka04.lan.strange.dev.net:9092
request.required.acks=1
producer.type=async
request.timeout.ms=2
retry.backoff.ms=1000
message.send.max.retries=6
serializer.class=kafka.serializer.DefaultEncoder
send.buffer.bytes=134217728
compression.codec=gzip

Mirror startup settings:
--num.streams 2 --num.producers 4

Any thoughts/suggestions would be very helpful. At this point we're running
out of things to try.


Craig J. Swift
Software Engineer


Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Manikumar Reddy
+1 (non-binding). verified the artifacts, quick start.

On Wed, Sep 9, 2015 at 2:41 AM, Ashish  wrote:

> +1 (non-binding)
>
> Ran the build, works fine. All test cases passed
>
> On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> > This is the first candidate for release of Apache Kafka 0.8.2.2. This
> only
> > fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy
> in
> > 0.8.2.1.
> >
> > Release Notes for the 0.8.2.2 release
> >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/
> >
> > * scala-doc
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
> >
> > * java-doc
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
> >
> > /***
> >
> > Thanks,
> >
> > Jun
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>