[jira] [Commented] (KAFKA-3492) support quota based on authenticated user name

2016-04-08 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232520#comment-15232520
 ] 

Aditya Auradkar commented on KAFKA-3492:


[~rsivaram] - Feel free to take this. I'll help with comments and reviews.

> support quota based on authenticated user name
> --
>
> Key: KAFKA-3492
> URL: https://issues.apache.org/jira/browse/KAFKA-3492
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Jun Rao
>
> Currently, quota is based on the client.id set in the client configuration, 
> which can be changed easily. Ideally, quota should be set on the 
> authenticated user name. We will need to have a KIP proposal/discussion on 
> this first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3492) support quota based on authenticated user name

2016-04-01 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222349#comment-15222349
 ] 

Aditya Auradkar commented on KAFKA-3492:


Very cool. Jun - are you planning to drive this?

> support quota based on authenticated user name
> --
>
> Key: KAFKA-3492
> URL: https://issues.apache.org/jira/browse/KAFKA-3492
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Jun Rao
>
> Currently, quota is based on the client.id set in the client configuration, 
> which can be changed easily. Ideally, quota should be set on the 
> authenticated user name. We will need to have a KIP proposal/discussion on 
> this first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3456) In-house KafkaMetric misreports metrics when periodically observed

2016-03-25 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15212514#comment-15212514
 ] 

Aditya Auradkar commented on KAFKA-3456:


Accidentally changed the thread title to gibberish :). Changed it back.

I think the problem (and I'm not convinced it is a problem) is that when you 
have 2 windows, the rate can change significantly when a new window is created. 
You effectively throw away half of your samples and start seeding that data 
again, which can skew the measurement. IIUC, the issue isn't that an incorrect 
rate is being reported; it is simply being reported over a potentially variable 
interval. Configuring a larger number of samples will reduce the time-interval 
variability and can smooth this out significantly.

Extending your example: if you have 10 windows (6 seconds each) and you 
alternate between 999 and 1 req/sec across these samples, your rate over 60 
seconds will be 500. If you roll over your first sample of 999, the rate 
changes to ~450, which seems closer to what you want?
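
To make the arithmetic above concrete, here is a back-of-the-envelope sketch in 
plain Java (not Kafka's actual {{SampledStat}}/{{Rate}} implementation) showing 
how the measured rate shifts when the oldest of 10 six-second windows is rolled 
over. The window count, window size and alternating 999/1 req/sec values come 
from the example above; everything else is illustrative.

{code:java}
public class WindowedRateSketch {
    public static void main(String[] args) {
        int windows = 10;
        double windowSecs = 6.0;                      // 10 windows of 6 seconds = 60 seconds
        double[] perSecRates = new double[windows];
        for (int i = 0; i < windows; i++) {
            perSecRates[i] = (i % 2 == 0) ? 999 : 1;  // alternate 999 and 1 req/sec
        }

        // Rate over the full 60 seconds: total events / total elapsed time.
        double total = 0;
        for (double r : perSecRates) {
            total += r * windowSecs;
        }
        System.out.println("rate over all 10 windows  = " + total / (windows * windowSecs));           // 500.0

        // Roll over the oldest (999 req/sec) window: its events are discarded
        // and the elapsed span shrinks to the 9 remaining windows.
        double remaining = total - perSecRates[0] * windowSecs;
        System.out.println("rate right after rollover = " + remaining / ((windows - 1) * windowSecs)); // ~444.6
    }
}
{code}

With 10 windows the jump is from 500 to roughly 445; with only 2 windows the 
same rollover would swing the reported rate from 500 down to 1, which is the 
smoothing argument above.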

> In-house KafkaMetric misreports metrics when periodically observed
> --
>
> Key: KAFKA-3456
> URL: https://issues.apache.org/jira/browse/KAFKA-3456
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: The Data Lorax
>Assignee: Neha Narkhede
>Priority: Minor
>
> The metrics captured by Kafka through the in-house {{SampledStat}} suffer 
> from misreporting metrics if observed in a periodic manner.
> Consider a {{Rate}} metric that is using the default 2 samples and 30 second 
> sample window i.e. the {{Rate}} is capturing 60 seconds worth of data.  So, 
> to report this metric to some external system we might poll it every 60 
> seconds to observe the current value. Using a shorter period would, in the 
> case of a {{Rate}}, lead to smoothing of the plotted data, and worse, in the 
> case of a {{Count}}, would lead to double counting - so 60 seconds is the 
> only period at which we can poll the metrics if we are to report accurate 
> metrics.
> To demonstrate the issue consider the following somewhat extreme case:
> The {{Rate}}  is capturing data from a system which alternates between a 999 
> per sec rate and a 1 per sec rate every 30 seconds, with the different rates 
> aligned with the sample boundaries within the {{Rate}} instance i.e. after 60 
> seconds the first sample within the {{Rate}} instance will have a rate of 999 
> per sec, and the second 1 per sec. 
> If we were to ask the metric for its value at this 60 second boundary it 
> would correctly report 500 per sec. However, if we asked it again 1 
> millisecond later it would report 1 per sec, as the first sample window has 
> been aged out. Depending on how retarded into the 60 sec period of the metric 
> our periodic poll of the metric was, we would observe a constant rate 
> somewhere in the range of 1 to 500 per second, most likely around the 250 
> mark. 
> Other metrics based off of the {{SampledStat}} type suffer from the same 
> issue e.g. the {{Count}} metric, given a constant rate of 1 per second, will 
> report a constant count somewhere between 30 and 60, rather than the correct 
> 60.
> This can be seen in the following test code:
> {code:java}
> public class MetricsTest {
> private MetricConfig metricsConfig;
> @Before
> public void setUp() throws Exception {
> metricsConfig = new MetricConfig();
> }
> private long t(final int bucket) {
> return metricsConfig.timeWindowMs() * bucket;
> }
> @Test
> public void testHowRateDropsMetrics() throws Exception {
> Rate rate = new Rate();
> metricsConfig.samples(2);
> metricsConfig.timeWindow(30, TimeUnit.SECONDS);
> // First sample window from t0 -> (t1 -1), with rate 999 per second:
> for (long time = t(0); time != t(1); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t1 -> (t2 -1), with rate 1 per second:
> for (long time = t(1); time != t(2); time += 1000) {
> rate.record(metricsConfig, 1, time);
> }
> // Measure at bucket boundary, (though same issue exists all periodic 
> measurements)
> final double m1 = rate.measure(metricsConfig, t(2));// m1 = 1.0
> // Third sample window from t2 -> (t3 -1), with rate 999 per second:
> for (long time = t(2); time != t(3); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t3 -> (t4 -1), with rate 1 per second:
> for (long time = t(3); time != t(4); time += 1000) {
> rate.record(

[jira] [Updated] (KAFKA-3456) bihtfbucbcceinvujekclljidcuf

2016-03-25 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-3456:
---
Summary: bihtfbucbcceinvujekclljidcuf  (was: In-house KafkaMetric 
misreports metrics when periodically observed)

> bihtfbucbcceinvujekclljidcuf
> 
>
> Key: KAFKA-3456
> URL: https://issues.apache.org/jira/browse/KAFKA-3456
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: The Data Lorax
>Assignee: Neha Narkhede
>Priority: Minor
>
> The metrics captured by Kafka through the in-house {{SampledStat}} suffer 
> from misreporting metrics if observed in a periodic manner.
> Consider a {{Rate}} metric that is using the default 2 samples and 30 second 
> sample window i.e. the {{Rate}} is capturing 60 seconds worth of data.  So, 
> to report this metric to some external system we might poll it every 60 
> seconds to observe the current value. Using a shorter period would, in the 
> case of a {{Rate}}, lead to smoothing of the plotted data, and worse, in the 
> case of a {{Count}}, would lead to double counting - so 60 seconds is the 
> only period at which we can poll the metrics if we are to report accurate 
> metrics.
> To demonstrate the issue consider the following somewhat extreme case:
> The {{Rate}}  is capturing data from a system which alternates between a 999 
> per sec rate and a 1 per sec rate every 30 seconds, with the different rates 
> aligned with the sample boundaries within the {{Rate}} instance i.e. after 60 
> seconds the first sample within the {{Rate}} instance will have a rate of 999 
> per sec, and the second 1 per sec. 
> If we were to ask the metric for its value at this 60 second boundary it 
> would correctly report 500 per sec. However, if we asked it again 1 
> millisecond later it would report 1 per sec, as the first sample window has 
> been aged out. Depending on how retarded into the 60 sec period of the metric 
> our periodic poll of the metric was, we would observe a constant rate 
> somewhere in the range of 1 to 500 per second, most likely around the 250 
> mark. 
> Other metrics based off of the {{SampledStat}} type suffer from the same 
> issue e.g. the {{Count}} metric, given a constant rate of 1 per second, will 
> report a constant count somewhere between 30 and 60, rather than the correct 
> 60.
> This can be seen in the following test code:
> {code:java}
> public class MetricsTest {
> private MetricConfig metricsConfig;
> @Before
> public void setUp() throws Exception {
> metricsConfig = new MetricConfig();
> }
> private long t(final int bucket) {
> return metricsConfig.timeWindowMs() * bucket;
> }
> @Test
> public void testHowRateDropsMetrics() throws Exception {
> Rate rate = new Rate();
> metricsConfig.samples(2);
> metricsConfig.timeWindow(30, TimeUnit.SECONDS);
> // First sample window from t0 -> (t1 -1), with rate 999 per second:
> for (long time = t(0); time != t(1); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t1 -> (t2 -1), with rate 1 per second:
> for (long time = t(1); time != t(2); time += 1000) {
> rate.record(metricsConfig, 1, time);
> }
> // Measure at bucket boundary, (though same issue exists all periodic 
> measurements)
> final double m1 = rate.measure(metricsConfig, t(2));// m1 = 1.0
> // Third sample window from t2 -> (t3 -1), with rate 999 per second:
> for (long time = t(2); time != t(3); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t3 -> (t4 -1), with rate 1 per second:
> for (long time = t(3); time != t(4); time += 1000) {
> rate.record(metricsConfig, 1, time);
> }
> // Measure second pair of samples:
> final double m2 = rate.measure(metricsConfig, t(4));// m2 = 1.0
> assertEquals("Measurement of the rate over the first two samples", 
> 500.0, m1, 2.0);
> assertEquals("Measurement of the rate over the last two samples", 
> 500.0, m2, 2.0);
> }
> @Test
> public void testHowRateDropsMetricsWithRetardedObservations() throws 
> Exception {
> final long retardation = 1000;
> Rate rate = new Rate();
> metricsConfig.samples(2);
> metricsConfig.timeWindow(30, TimeUnit.SECONDS);
> // First sample window from t0 -> (t1 -1), with rate 999 per second:
> for (long time = t(0); time != t(1); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t1 -> (t2 -1), with r

[jira] [Updated] (KAFKA-3456) In-house KafkaMetric misreports metrics when periodically observed

2016-03-25 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-3456:
---
Summary: In-house KafkaMetric misreports metrics when periodically observed 
 (was: bihtfbucbcceinvujekclljidcuf)

> In-house KafkaMetric misreports metrics when periodically observed
> --
>
> Key: KAFKA-3456
> URL: https://issues.apache.org/jira/browse/KAFKA-3456
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: The Data Lorax
>Assignee: Neha Narkhede
>Priority: Minor
>
> The metrics captured by Kafka through the in-house {{SampledStat}} suffer 
> from misreporting metrics if observed in a periodic manner.
> Consider a {{Rate}} metric that is using the default 2 samples and 30 second 
> sample window i.e. the {{Rate}} is capturing 60 seconds worth of data.  So, 
> to report this metric to some external system we might poll it every 60 
> seconds to observe the current value. Using a shorter period would, in the 
> case of a {{Rate}}, lead to smoothing of the plotted data, and worse, in the 
> case of a {{Count}}, would lead to double counting - so 60 seconds is the 
> only period at which we can poll the metrics if we are to report accurate 
> metrics.
> To demonstrate the issue consider the following somewhat extreme case:
> The {{Rate}}  is capturing data from a system which alternates between a 999 
> per sec rate and a 1 per sec rate every 30 seconds, with the different rates 
> aligned with the sample boundaries within the {{Rate}} instance i.e. after 60 
> seconds the first sample within the {{Rate}} instance will have a rate of 999 
> per sec, and the second 1 per sec. 
> If we were to ask the metric for its value at this 60 second boundary it 
> would correctly report 500 per sec. However, if we asked it again 1 
> millisecond later it would report 1 per sec, as the first sample window has 
> been aged out. Depending on how retarded into the 60 sec period of the metric 
> our periodic poll of the metric was, we would observe a constant rate 
> somewhere in the range of 1 to 500 per second, most likely around the 250 
> mark. 
> Other metrics based off of the {{SampledStat}} type suffer from the same 
> issue e.g. the {{Count}} metric, given a constant rate of 1 per second, will 
> report a constant count somewhere between 30 and 60, rather than the correct 
> 60.
> This can be seen in the following test code:
> {code:java}
> public class MetricsTest {
> private MetricConfig metricsConfig;
> @Before
> public void setUp() throws Exception {
> metricsConfig = new MetricConfig();
> }
> private long t(final int bucket) {
> return metricsConfig.timeWindowMs() * bucket;
> }
> @Test
> public void testHowRateDropsMetrics() throws Exception {
> Rate rate = new Rate();
> metricsConfig.samples(2);
> metricsConfig.timeWindow(30, TimeUnit.SECONDS);
> // First sample window from t0 -> (t1 -1), with rate 999 per second:
> for (long time = t(0); time != t(1); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t1 -> (t2 -1), with rate 1 per second:
> for (long time = t(1); time != t(2); time += 1000) {
> rate.record(metricsConfig, 1, time);
> }
> // Measure at bucket boundary, (though same issue exists all periodic 
> measurements)
> final double m1 = rate.measure(metricsConfig, t(2));// m1 = 1.0
> // Third sample window from t2 -> (t3 -1), with rate 999 per second:
> for (long time = t(2); time != t(3); time += 1000) {
> rate.record(metricsConfig, 999, time);
> }
> // Second sample window from t3 -> (t4 -1), with rate 1 per second:
> for (long time = t(3); time != t(4); time += 1000) {
> rate.record(metricsConfig, 1, time);
> }
> // Measure second pair of samples:
> final double m2 = rate.measure(metricsConfig, t(4));// m2 = 1.0
> assertEquals("Measurement of the rate over the first two samples", 
> 500.0, m1, 2.0);
> assertEquals("Measurement of the rate over the last two samples", 
> 500.0, m2, 2.0);
> }
> @Test
> public void testHowRateDropsMetricsWithRetardedObservations() throws 
> Exception {
> final long retardation = 1000;
> Rate rate = new Rate();
> metricsConfig.samples(2);
> metricsConfig.timeWindow(30, TimeUnit.SECONDS);
> // First sample window from t0 -> (t1 -1), with rate 999 per second:
> for (long time = t(0); time != t(1); time += 1000) {
> rate.record(metricsConfig, 999, tim

[jira] [Commented] (KAFKA-3427) broker can return incorrect version of fetch response when the broker hits an unknown exception

2016-03-20 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203370#comment-15203370
 ] 

Aditya Auradkar commented on KAFKA-3427:


[~junrao] - Certainly, I can patch 0.9. 

> broker can return incorrect version of fetch response when the broker hits an 
> unknown exception
> ---
>
> Key: KAFKA-3427
> URL: https://issues.apache.org/jira/browse/KAFKA-3427
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> In FetchResponse.handleError(), we generate FetchResponse like the following, 
> which always defaults to version 0 of the response. 
> FetchResponse(correlationId, fetchResponsePartitionData)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-03-01 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15174366#comment-15174366
 ] 

Aditya Auradkar commented on KAFKA-3310:


[~junrao] - can you take a look?

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2016-03-01 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15174090#comment-15174090
 ] 

Aditya Auradkar commented on KAFKA-1215:


Thanks Allen. I'll review it this week.

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Allen Wang
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173223#comment-15173223
 ] 

Aditya Auradkar commented on KAFKA-3310:


[~junrao] - Just making sure: you observe that the response is still delayed, 
right? The throttle time sensor is the last thing recorded, and the element has 
already been added to the delay queue, so the fetchResponseCallback should fire 
after the throttle time. 
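
A toy illustration of the delay-queue mechanism referred to here (hypothetical 
names, not the broker's actual throttling code): the response callback is parked 
in a DelayQueue and can only be taken, and therefore fired, once its throttle 
time has elapsed.

{code:java}
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class ThrottledResponseSketch {
    // A response callback that only becomes available once its throttle time has elapsed.
    static final class DelayedResponse implements Delayed {
        final Runnable callback;
        final long readyAtMs;
        DelayedResponse(Runnable callback, long throttleTimeMs) {
            this.callback = callback;
            this.readyAtMs = System.currentTimeMillis() + throttleTimeMs;
        }
        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(readyAtMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }
        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedResponse> throttled = new DelayQueue<>();
        throttled.put(new DelayedResponse(() -> System.out.println("fetch response sent"), 500));
        // take() blocks until the 500 ms throttle time has elapsed, so the
        // callback fires only after the delay.
        throttled.take().callback.run();
    }
}
{code}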

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173204#comment-15173204
 ] 

Aditya Auradkar commented on KAFKA-3310:


[~junrao] - Let me investigate this. If this is a problem, it should be easy to 
fix by recording 0 on the throttle time sensor every time. 
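
A minimal sketch of that idea (toy classes and hypothetical names, not the 
actual ClientQuotaManager/Sensor code): by recording a zero throttle time on 
every request, the throttle-time sensor is touched as often as the quota sensor, 
so an inactivity-based expiry can never remove one while leaving the other 
registered.

{code:java}
import java.util.concurrent.TimeUnit;

public class ThrottleSensorSketch {
    /** Toy stand-in for a sensor: it just remembers when it was last recorded. */
    static final class Sensor {
        long lastRecordMs = 0;
        void record(double value, long nowMs) { lastRecordMs = nowMs; }
        boolean expired(long nowMs) { return nowMs - lastRecordMs > TimeUnit.HOURS.toMillis(1); }
    }

    public static void main(String[] args) {
        Sensor quotaSensor = new Sensor();
        Sensor throttleTimeSensor = new Sensor();
        long now = 0;

        // Two hours of requests that never exceed the quota.
        for (int minute = 0; minute <= 120; minute++) {
            now = TimeUnit.MINUTES.toMillis(minute);
            quotaSensor.record(1024, now);
            // The suggested fix: record 0 throttle time on every request so the
            // throttle-time sensor stays active alongside the quota sensor.
            throttleTimeSensor.record(0, now);
        }

        // Neither sensor is eligible for expiry, so a later throttled request
        // cannot hit a missing (already expired) throttle-time sensor.
        System.out.println("quotaSensor expired:        " + quotaSensor.expired(now));
        System.out.println("throttleTimeSensor expired: " + throttleTimeSensor.expired(now));
    }
}
{code}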

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2016-02-23 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159491#comment-15159491
 ] 

Aditya Auradkar commented on KAFKA-1215:


[~allenxwang] - Is this patch ready for review? I noticed you added several 
commits recently, but I'm not sure if you are done.

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Allen Wang
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-01-21 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15111924#comment-15111924
 ] 

Aditya Auradkar commented on KAFKA-3088:


I personally prefer preserving the old behavior, i.e. option (1): everyone using 
an empty client-id receives a default quota shared by all such instances.

[~dspeterson] - Are you planning to submit a patch for this? If not, I can.

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Jun Rao
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2015-12-09 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049357#comment-15049357
 ] 

Aditya Auradkar commented on KAFKA-2966:


I'll work on it since I made those changes.

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2015-12-09 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2966:
--

Assignee: Aditya Auradkar

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2310) Add config to prevent broker becoming controller

2015-12-02 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036359#comment-15036359
 ] 

Aditya Auradkar commented on KAFKA-2310:


[~abiletskyi] - I see you submitted a pull request for this recently. 
https://github.com/apache/kafka/pull/614/files

Can you elaborate on the reasoning behind this change a bit more? I think we 
need a KIP to discuss this at the very least (given that it adds a new config). 
I'm not really sure that preventing a broker from becoming a controller solves 
the underlying problem of a broker being overloaded.

> Add config to prevent broker becoming controller
> 
>
> Key: KAFKA-2310
> URL: https://issues.apache.org/jira/browse/KAFKA-2310
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrii Biletskyi
>Assignee: Andrii Biletskyi
> Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, 
> KAFKA-2310_0.8.2.patch
>
>
> The goal is to be able to specify which cluster brokers can serve as a 
> controller and which cannot. This way it will be possible to "reserve" 
> particular, not overloaded with partitions and other operations, broker as 
> controller.
> Proposed to add config _controller.eligibility_ defaulted to true (for 
> backward compatibility, since now any broker can become a controller)
> Patch will be available for trunk, 0.8.2 and 0.8.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-29 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14980712#comment-14980712
 ] 

Aditya Auradkar commented on KAFKA-2502:


[~gwenshap] - Published a patch. Please take a look.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-28 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14979155#comment-14979155
 ] 

Aditya Auradkar commented on KAFKA-2502:


[~gwenshap][~ijuma] - Sorry for the delay; I've been dealing with some internal 
stuff. I'll submit something by tomorrow.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2699) Add test to validate times in RequestMetrics

2015-10-27 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2699:
--

 Summary: Add test to validate times in RequestMetrics
 Key: KAFKA-2699
 URL: https://issues.apache.org/jira/browse/KAFKA-2699
 Project: Kafka
  Issue Type: Test
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


No tests exist to validate the reported times in RequestMetrics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2664) Adding a new metric with several pre-existing metrics is very expensive

2015-10-20 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2664:
--

Assignee: Aditya Auradkar  (was: Onur Karaman)

> Adding a new metric with several pre-existing metrics is very expensive
> ---
>
> Key: KAFKA-2664
> URL: https://issues.apache.org/jira/browse/KAFKA-2664
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.1
>
>
> I know the summary sounds expected, but we recently ran into a socket server 
> request queue backup that I suspect was caused by a combination of improperly 
> implemented applications that reconnect with a different (random) client-id 
> each time; and the fact that for quotas we now register a new quota 
> metric-set for each client-id.
> So here is what happened: a broker went down and a handful of other brokers 
> starting seeing queue times go up significantly. This caused the request 
> queue to backup, which caused socket timeouts and a further deluge of 
> reconnects. The only way we could get out of this was to fire-wall the broker 
> and downgrade to a version without quotas (or I think it would have worked to 
> just restart the broker).
> My guess is that there were a ton of pre-existing client-id metrics. I don’t 
> know for sure but I’m basing that on the fact that there were several new 
> unique client-ids showing up in the public access logs and request local 
> times for fetches started going up inexplicably. (It would have been useful 
> to have a metric for the number of metrics.) So it turns out that in the 
> above scenario (with say 50k pre-existing client-ids), the avg local time for 
> fetch can go up to the order of 50-100ms (at least with tests on a linux box) 
> largely due to the time taken to create new metrics; and that’s because we 
> use a copy-on-write map underneath. If you have enough (say, hundreds) of 
> clients re-connecting at the same time with new client-id's, that can cause 
> the request queues to start backing up and the overall queuing system to 
> become unstable; and the line starts to spill out of the building.
> I think this is a fairly new scenario with quotas - i.e., I don’t think the 
> past per-X metrics (per-topic for e.g.,) creation rate would ever come this 
> close.
> To be clear, the clients are clearly doing the wrong thing but I think the 
> broker can and should protect itself adequately against such rogue scenarios.
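
A simplified illustration of the cost described above (a toy map, not Kafka's 
actual copy-on-write map or metrics registration code): with a copy-on-write 
map, every new client-id metric pays for a copy of all pre-existing entries, so 
registration time grows with the number of client-ids already seen.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class CopyOnWriteCostSketch {
    /** Minimal copy-on-write map: every put copies the whole underlying map. */
    static final class CowMap<K, V> {
        private volatile Map<K, V> current = Collections.emptyMap();
        synchronized void put(K key, V value) {
            Map<K, V> copy = new HashMap<>(current);   // O(existing entries) per put
            copy.put(key, value);
            current = copy;
        }
        V get(K key) { return current.get(key); }      // reads stay cheap and lock-free
    }

    public static void main(String[] args) {
        CowMap<String, Object> metrics = new CowMap<>();
        // Simulate 50k pre-existing client-id metric registrations.
        for (int i = 0; i < 50_000; i++) {
            metrics.put("client-" + i, new Object());
        }
        // Each new (rogue) client-id now pays the cost of copying all 50k entries.
        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            metrics.put("rogue-client-" + i, new Object());
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("100 new client-id registrations took ~" + elapsedMs + " ms");
    }
}
{code}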



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963909#comment-14963909
 ] 

Aditya Auradkar commented on KAFKA-2502:


Thanks Gwen and Ismael. I'll send a patch for review later this week.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963605#comment-14963605
 ] 

Aditya Auradkar commented on KAFKA-2502:


[~ijuma] - I assume I need to submit changes to the 0.9 site here: 
http://kafka.apache.org/documentation.html

I'll add the following changes:
1. Add newly introduced configs to the "Configuration" section
2. Add a section on quota design to the "Design" section
3. Add a piece on setting quotas dynamically via ConfigCommand in "Basic Kafka 
Operations"
4. In "Monitoring" add suggested metrics to monitor.

Sound ok?

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14960132#comment-14960132
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~junrao] - Submitted a pull request. Take a look please.

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reopened KAFKA-2419:


> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959588#comment-14959588
 ] 

Aditya Auradkar commented on KAFKA-2419:


I agree it would be nice to not need a tick thread, but it seems we can avoid 
this altogether on clients, assuming we don't need sensors that can be GCed.

You are right that we can check for expiry when creating new sensors or even 
when calling record() (though that might slow down record() quite a bit). But 
we can have a situation where a chunk of sensors gets recorded during a short 
window and then no new metrics arrive for a long time. Those sensor objects 
continue to occupy memory that could otherwise be freed; this can be 
significant if they hold several samples. 

[~junrao] - There isn't a strong reason (or use case) to support per-sensor 
expiration right now. I added it because it didn't seem to add any complexity 
to the implementation and could be useful down the road.
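
For readers following along, here is a minimal sketch of the kind of tick-thread 
expiry being discussed (toy classes, not the actual o.a.k.common.metrics code): 
a background task periodically removes sensors that have not been recorded 
within their configured inactivity period, which frees their memory even if no 
new sensor is ever created afterwards.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SensorExpirySketch {
    /** Toy sensor: tracks when it was last recorded and its per-sensor inactivity allowance. */
    static final class Sensor {
        volatile long lastRecordMs = System.currentTimeMillis();
        final long inactiveExpiryMs;
        Sensor(long inactiveExpiryMs) { this.inactiveExpiryMs = inactiveExpiryMs; }
        void record(double value) { lastRecordMs = System.currentTimeMillis(); }
        boolean hasExpired(long nowMs) { return nowMs - lastRecordMs > inactiveExpiryMs; }
    }

    private final Map<String, Sensor> sensors = new ConcurrentHashMap<>();
    private final ScheduledExecutorService purger = Executors.newSingleThreadScheduledExecutor();

    SensorExpirySketch(long purgeIntervalMs) {
        // The "tick thread": it only needs to run occasionally, since expiry
        // does not have to be precise.
        purger.scheduleAtFixedRate(this::removeExpiredSensors,
                purgeIntervalMs, purgeIntervalMs, TimeUnit.MILLISECONDS);
    }

    Sensor sensor(String name, long inactiveExpiryMs) {
        return sensors.computeIfAbsent(name, n -> new Sensor(inactiveExpiryMs));
    }

    void removeExpiredSensors() {
        long now = System.currentTimeMillis();
        // Drops sensors that stopped being recorded long ago, even if no new
        // sensors are created afterwards (the case described above).
        sensors.entrySet().removeIf(e -> e.getValue().hasExpired(now));
    }
}
{code}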

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959484#comment-14959484
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~junrao] - That is indeed the current implementation i.e. sensors are expired 
whenever the expiry task runs because it is fine to not be super precise. 

My comment was a bit different. Currently each sensor has the ability to 
specify the "inactivity period" i.e. time after which it is eligible for GC. 
Are you saying we should have a single such value on the base metrics instance?

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959409#comment-14959409
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~junrao] - I concur that option 2 is simpler. One minor comment is that it may 
be better to leave the sensor objects as they are. Instead of a boolean, they 
are configured with a numeric expiration delay. By default, they won't ever 
expire.. basically the same as not being enabled for time based GC.

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959409#comment-14959409
 ] 

Aditya Auradkar edited comment on KAFKA-2419 at 10/15/15 6:52 PM:
--

[~junrao] - I concur that option 2 is simpler. One minor comment is that it may 
be better to leave the sensor objects as they are. Instead of a boolean, they 
are configured with a numeric expiration delay. By default, they won't ever 
expire.. basically the same as not being enabled for time based GC.

I can submit a patch for this today.


was (Author: aauradkar):
[~junrao] - I concur that option 2 is simpler. One minor comment is that it may 
be better to leave the sensor objects as they are. Instead of a boolean, they 
are configured with a numeric expiration delay. By default, they won't ever 
expire.. basically the same as not being enabled for time based GC.

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-10-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958962#comment-14958962
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~ijuma] - We create only a single metrics instance for the KafkaServer. We do 
create a separate Sensor per consumer and producer.

As for the ExpireSensor task, let me fix that. My patch added it to the thread 
pool executor, but I nuked that while refactoring. I'll submit a fix today.

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.9.0.0
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2527) System Test for Quotas in Ducktape

2015-10-13 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14955720#comment-14955720
 ] 

Aditya Auradkar commented on KAFKA-2527:


[~gwenshap] - Thanks!

> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>  Labels: quota
> Fix For: 0.9.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2536) topics tool should allow users to alter topic configuration

2015-10-13 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14955375#comment-14955375
 ] 

Aditya Auradkar commented on KAFKA-2536:


[~gwenshap] - Thanks for reporting. Do we plan to keep this functionality in 
kafka-topic for future releases? In the future it will have to change to use 
the AlterConfig command to the brokers.

> topics tool should allow users to alter topic configuration
> ---
>
> Key: KAFKA-2536
> URL: https://issues.apache.org/jira/browse/KAFKA-2536
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> When we added dynamic config, we added a kafka-config tool (which can be used 
> to maintain configs for non-topic entities), and remove the capability from 
> kafka-topic tool.
> Removing the capability from kafka-topic is:
> 1. Breaks backward compatibility in our most essential tools. This has 
> significant impact on usability.
> 2. Kinda confusing that --create --config works but --alter --config does 
> not. 
> I suggest fixing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2606) Remove kafka.utils.Time in favour of o.a.kafka.common.utils.Time

2015-10-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2606.

Resolution: Duplicate

Duplicate of: https://issues.apache.org/jira/browse/KAFKA-2247

Ismael - I copied the piece about the scheduler from this ticket onto 2247. 
Thanks!

> Remove kafka.utils.Time in favour of o.a.kafka.common.utils.Time
> 
>
> Key: KAFKA-2606
> URL: https://issues.apache.org/jira/browse/KAFKA-2606
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Priority: Minor
>  Labels: newbie
>
> They duplicate each other at the moment and some server classes actually need 
> an instance of both types, which is annoying.
> It's worth noting that `kafka.utils.MockTime` includes a `scheduler` that is 
> used by some tests while `o.a.kafka.common.utils.Time` does not. We either 
> need to add this functionality or change the tests not to need it anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2247) Merge kafka.utils.Time and kafka.common.utils.Time

2015-10-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2247:
---
Description: 
We currently have 2 different versions of Time in clients and core. These need 
to be merged.

It's worth noting that `kafka.utils.MockTime` includes a `scheduler` that is 
used by some tests while `o.a.kafka.common.utils.Time` does not. We either need 
to add this functionality or change the tests not to need it anymore.

  was:We currently have 2 different versions of Time in clients and core. These 
need to be merged


> Merge kafka.utils.Time and kafka.common.utils.Time
> --
>
> Key: KAFKA-2247
> URL: https://issues.apache.org/jira/browse/KAFKA-2247
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Minor
>
> We currently have 2 different versions of Time in clients and core. These 
> need to be merged.
> It's worth noting that `kafka.utils.MockTime` includes a `scheduler` that is 
> used by some tests while `o.a.kafka.common.utils.Time` does not. We either 
> need to add this functionality or change the tests not to need it anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2606) Remove kafka.utils.Time in favour of o.a.kafka.common.utils.Time

2015-10-02 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14941984#comment-14941984
 ] 

Aditya Auradkar commented on KAFKA-2606:


[~ijuma] This is a duplicate of https://issues.apache.org/jira/browse/KAFKA-2247
If you plan to work on this, I'll close the other one. Otherwise, I can close 
this.

> Remove kafka.utils.Time in favour of o.a.kafka.common.utils.Time
> 
>
> Key: KAFKA-2606
> URL: https://issues.apache.org/jira/browse/KAFKA-2606
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Priority: Minor
>  Labels: newbie
>
> They duplicate each other at the moment and some server classes actually need 
> an instance of both types, which is annoying.
> It's worth noting that `kafka.utils.MockTime` includes a `scheduler` that is 
> used by some tests while `o.a.kafka.common.utils.Time` does not. We either 
> need to add this functionality or change the tests not to need it anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2015-09-25 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908460#comment-14908460
 ] 

Aditya Auradkar commented on KAFKA-1215:


[~allenxwang] - bump. 

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Jun Rao
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2567) throttle-time shouldn't be NaN

2015-09-22 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903101#comment-14903101
 ] 

Aditya Auradkar commented on KAFKA-2567:


[~junrao] - I'll fix this in my next patch.

> throttle-time shouldn't be NaN
> --
>
> Key: KAFKA-2567
> URL: https://issues.apache.org/jira/browse/KAFKA-2567
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Aditya Auradkar
>Priority: Minor
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, if throttling never happens, we get the NaN for throttle-time. It 
> seems it's better to default to 0.
> "kafka.server:client-id=eventsimgroup200343,type=Fetch" : { "byte-rate": 0.0, 
> "throttle-time": NaN }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2567) throttle-time shouldn't be NaN

2015-09-22 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2567:
--

Assignee: Aditya Auradkar

> throttle-time shouldn't be NaN
> --
>
> Key: KAFKA-2567
> URL: https://issues.apache.org/jira/browse/KAFKA-2567
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Aditya Auradkar
>Priority: Minor
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, if throttling never happens, we get NaN for throttle-time. It 
> seems better to default to 0.
> "kafka.server:client-id=eventsimgroup200343,type=Fetch" : { "byte-rate": 0.0, 
> "throttle-time": NaN }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2567) throttle-time shouldn't be NaN

2015-09-22 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2567:
---
Labels: quotas  (was: )

> throttle-time shouldn't be NaN
> --
>
> Key: KAFKA-2567
> URL: https://issues.apache.org/jira/browse/KAFKA-2567
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Aditya Auradkar
>Priority: Minor
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Currently, if throttling never happens, we get NaN for throttle-time. It 
> seems better to default to 0.
> "kafka.server:client-id=eventsimgroup200343,type=Fetch" : { "byte-rate": 0.0, 
> "throttle-time": NaN }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1599) Change preferred replica election admin command to handle large clusters

2015-09-21 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901142#comment-14901142
 ] 

Aditya Auradkar commented on KAFKA-1599:


[~anigam] - Perhaps you can write up your proposal here? Based on what the 
committers say, you can write a KIP if required.

> Change preferred replica election admin command to handle large clusters
> 
>
> Key: KAFKA-1599
> URL: https://issues.apache.org/jira/browse/KAFKA-1599
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Todd Palino
>Assignee: Abhishek Nigam
>  Labels: newbie++
>
> We ran into a problem with a cluster that has 70k partitions where we could 
> not trigger a preferred replica election for all topics and partitions using 
> the admin tool. Upon investigation, it was determined that this was because 
> the JSON object that was being written to the admin znode to tell the 
> controller to start the election was 1.8 MB in size. As the default Zookeeper 
> data size limit is 1MB, and it is non-trivial to change, we should come up 
> with a better way to represent the list of topics and partitions for this 
> admin command.
> I have several thoughts on this so far:
> 1) Trigger the command for all topics and partitions with a JSON object that 
> does not include an explicit list of them (i.e. a flag that says "all 
> partitions")
> 2) Use a more compact JSON representation. Currently, the JSON contains a 
> 'partitions' key which holds a list of dictionaries that each have a 'topic' 
> and 'partition' key, and there must be one list item for each partition. This 
> results in a lot of repetition of key names that is unneeded. Changing this 
> to a format like this would be much more compact:
> {'topics': {'topicName1': [0, 1, 2, 3], 'topicName2': [0,1]}, 'version': 1}
> 3) Use a representation other than JSON. Strings are inefficient. A binary 
> format would be the most compact. This does put a greater burden on tools and 
> scripts that do not use the inbuilt libraries, but it is not too high.
> 4) Use a representation that involves multiple znodes. A structured tree in 
> the admin command would probably provide the most complete solution. However, 
> we would need to make sure to not exceed the data size limit with a wide tree 
> (the list of children for any single znode cannot exceed the ZK data size of 
> 1MB)
> Obviously, there could be a combination of #1 with a change in the 
> representation, which would likely be appropriate as well.
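To make option 2 concrete, a small sketch of the compact structure before serialization
(illustrative only): partition lists are grouped per topic instead of repeating key names
for every partition.

    // Option 2's shape: one entry per topic, partitions as a plain list.
    val compact: Map[String, Any] = Map(
      "version" -> 1,
      "topics"  -> Map(
        "topicName1" -> Seq(0, 1, 2, 3),
        "topicName2" -> Seq(0, 1)
      )
    )

Relative to the current list of {'topic': ..., 'partition': ...} dictionaries, this drops
the two repeated key names for every partition, which adds up quickly at 70k partitions.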



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2015-09-15 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14745803#comment-14745803
 ] 

Aditya Auradkar commented on KAFKA-1215:


[~allenxwang] One of the committers can grant you write access once you 
provide your Apache Confluence ID. Please let me know if you need any help with 
the KIP, reviews, etc. Thanks!

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Jun Rao
> Fix For: 0.10.0.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default, the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good, I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2443) Expose windowSize on Rate

2015-09-14 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2443:
---
Summary: Expose windowSize on Rate  (was: Expose windowSize on Measurable)

> Expose windowSize on Rate
> -
>
> Key: KAFKA-2443
> URL: https://issues.apache.org/jira/browse/KAFKA-2443
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> Currently, we don't have a means to measure the size of the metric window 
> since the final sample can be incomplete.
> Expose windowSize(now) on Measurable
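A rough sketch of what such a windowSize(now) could report (parameter names are
assumptions; this is not the actual Rate code): the effective window is shorter than
samples * sample length whenever the newest sample is still being filled.

    // Illustrative only: cap the elapsed time at the configured full window.
    def effectiveWindowMs(nowMs: Long,
                          oldestSampleStartMs: Long,
                          configuredSamples: Int,
                          sampleLengthMs: Long): Long = {
      val fullWindowMs = configuredSamples.toLong * sampleLengthMs
      math.min(fullWindowMs, nowMs - oldestSampleStartMs)
    }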



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2443) Expose windowSize on Rate

2015-09-14 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2443:
---
Description: 
Currently, we don't have a means to measure the size of the metric window since 
the final sample can be incomplete.

Expose windowSize(now) on Rate


  was:
Currently, we don't have a means to measure the size of the metric window since 
the final sample can be incomplete.

Expose windowSize(now) on Measurable


> Expose windowSize on Rate
> -
>
> Key: KAFKA-2443
> URL: https://issues.apache.org/jira/browse/KAFKA-2443
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> Currently, we don't have a means to measure the size of the metric window 
> since the final sample can be incomplete.
> Expose windowSize(now) on Rate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2528) Quota Performance Evaluation

2015-09-10 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739899#comment-14739899
 ] 

Aditya Auradkar commented on KAFKA-2528:


One possible explanation for the difference is that we append to the log when 
the produce request is received. For example, in your experiment you have 12 
mirror makers, each sending a batch of data. When a batch is recorded, the 
clients are throttled until the observed rate is back within the quota. After 
receiving a response, each of them immediately sends another large batch to the 
brokers. Because the quota is so low and a single request can be much larger, 
there is a small absolute overshoot in this example, roughly the maximum size 
of one received request. I think if you measure over a period of time from the 
client's perspective, the actual throughput will be very close to the 1MB quota.
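For reference, a back-of-the-envelope sketch of the delay computation (illustrative only;
not the broker code): the delay is just long enough that the average rate over the metric
window comes back down to the quota, so a single oversized request can still overshoot
briefly.

    // Sketch only: how long to delay so the windowed average returns to the quota.
    def throttleTimeMs(observedBytesPerSec: Double,
                       quotaBytesPerSec: Double,
                       windowMs: Long): Long =
      if (observedBytesPerSec <= quotaBytesPerSec) 0L
      else (((observedBytesPerSec - quotaBytesPerSec) / quotaBytesPerSec) * windowMs).toLong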



> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluation.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2528) Quota Performance Evaluation

2015-09-10 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739438#comment-14739438
 ] 

Aditya Auradkar commented on KAFKA-2528:


I'm not quite sure why the actual rate is higher in this particular case; it 
seems a lot closer in the other tests Dong has posted. The difference is likely 
due to some measurement issue, perhaps a test issue. It should be 
straightforward to reproduce this.

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluation.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-09-10 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739324#comment-14739324
 ] 

Aditya Auradkar commented on KAFKA-2419:


Thanks [~ijuma], I'll take a look today. If I can indeed use your patch, when do 
you expect your commit to go through?

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.3
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-10 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739003#comment-14739003
 ] 

Aditya Auradkar commented on KAFKA-2500:


[~hachikuji] - Are you planning on supporting this in the new consumer for the 
first release? This is quite useful for a few teams within LinkedIn.

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.8.3
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.8.3
>
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71
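A hypothetical usage sketch of the feature being asked for (highWatermark below stands in
for whatever accessor this ticket ends up exposing; it is not an existing consumer API):
stop copying a partition once the snapshot has caught up to the high watermark.

    // Copy records into the snapshot until the partition's high watermark is reached.
    def snapshotPartition(records: Iterator[(Long, Array[Byte])],
                          highWatermark: Long,
                          write: Array[Byte] => Unit): Unit =
      records.takeWhile { case (offset, _) => offset < highWatermark }
             .foreach { case (_, value) => write(value) }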



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1215) Rack-Aware replica assignment option

2015-09-09 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14737918#comment-14737918
 ] 

Aditya Auradkar commented on KAFKA-1215:


[~allenxwang] Hi Allen. Thanks for the patch. Can you create a KIP to discuss 
the changes being proposed (since this patch adds configs and ZK structures)? 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

We are hoping to leverage this patch within LinkedIn as well.

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Jun Rao
> Fix For: 0.9.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default, the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good, I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-09-09 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14737756#comment-14737756
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~junrao][~jjkoshy] We have a few possible ways to solve this problem. Let me know 
which approach you prefer.

1. Generic, time-based
- In this approach, we can mark any sensor as "transient". After a certain 
period of inactivity (i.e. no record() call for x minutes), we remove the 
sensor and its associated metrics and unregister it from all MetricReporters. 
Here, the notion of time-based retention is built into the root Metrics object 
itself, i.e. it periodically iterates through sensors and removes transient 
sensors that are deemed inactive (a rough sketch follows after this list). Is 
this a useful addition to the metrics library, or does it only make sense for 
quotas?

2. Time-based, but keep the time computation in the quota code
- In this approach, the ClientQuotaManager keeps track of the last record 
time and triggers deleteSensor requests to the Metrics object. The metrics 
library only needs to understand creation and deletion of sensors.

3. Explicit deletion
- (First suggested by Joel in KAFKA-2084.) Only delete sensors explicitly. For 
quotas, this involves keeping track of client disconnections. This is tricky to 
do correctly because connections/disconnections are handled in the Selector per 
SocketChannel, and we would need to start tracking the number of connections 
per unique clientId.
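A minimal sketch of approach 1 (illustrative only; names are assumptions, not the eventual
implementation): a periodic sweep finds sensors that have not recorded anything within the
retention period and hands them back for removal.

    import java.util.concurrent.ConcurrentHashMap
    import scala.collection.JavaConverters._

    // Tracks the last record() time per sensor; a scheduled task asks for the
    // expired ones and removes them from the Metrics registry and all reporters.
    class ExpiringSensors(retentionMs: Long) {
      private val lastRecorded = new ConcurrentHashMap[String, Long]()

      def record(sensorName: String, nowMs: Long): Unit =
        lastRecorded.put(sensorName, nowMs)

      def expired(nowMs: Long): Seq[String] =
        lastRecorded.asScala.collect {
          case (name, last) if nowMs - last > retentionMs => name
        }.toSeq
    }

Approach 2 would look much the same, except the lastRecorded bookkeeping would live in the
ClientQuotaManager rather than in the metrics library.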

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.3
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-09-09 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14737497#comment-14737497
 ] 

Aditya Auradkar commented on KAFKA-2419:


[~junrao] I'm working on this now. I'll submit a patch within a few days.

> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.3
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2209) Change client quotas dynamically using DynamicConfigManager

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2209:
---
Labels: quotas  (was: )

> Change client quotas dynamically using DynamicConfigManager
> ---
>
> Key: KAFKA-2209
> URL: https://issues.apache.org/jira/browse/KAFKA-2209
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2443) Expose windowSize on Measurable

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2443:
---
Labels: quotas  (was: )

> Expose windowSize on Measurable
> ---
>
> Key: KAFKA-2443
> URL: https://issues.apache.org/jira/browse/KAFKA-2443
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> Currently, we don't have a means to measure the size of the metric window 
> since the final sample can be incomplete.
> Expose windowSize(now) on Measurable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2502) Quotas documentation for 0.8.3

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2502:
---
Labels: quotas  (was: )

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Complete quotas documentation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2502) Quotas documentation for 0.8.3

2015-09-02 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2502:
--

 Summary: Quotas documentation for 0.8.3
 Key: KAFKA-2502
 URL: https://issues.apache.org/jira/browse/KAFKA-2502
 Project: Kafka
  Issue Type: Task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
Priority: Blocker
 Fix For: 0.8.3


Complete quotas documentation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2136) Client side protocol changes to return quota delays

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2136.

Resolution: Fixed

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.
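A sketch of the client-side bookkeeping implied by the last sentence (field and metric
names here are assumptions, not the actual producer/consumer metric names): record the
broker-reported throttle time from each response so it can be exposed as average and
maximum metrics.

    // Sketch only: aggregate broker-reported throttle times on the client.
    class ThrottleTimeTracker {
      private var maxMs = 0L
      private var totalMs = 0L
      private var responses = 0L

      def onResponse(throttleTimeMs: Long): Unit = {
        maxMs = math.max(maxMs, throttleTimeMs)
        totalMs += throttleTimeMs
        responses += 1
      }

      def maxThrottleMs: Long = maxMs
      def avgThrottleMs: Double = if (responses == 0) 0.0 else totalMs.toDouble / responses
    }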



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2015-08-21 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707492#comment-14707492
 ] 

Aditya Auradkar commented on KAFKA-2442:


[~jjkoshy] - Can you take a look?

> QuotasTest should not fail when cpu is busy
> ---
>
> Key: KAFKA-2442
> URL: https://issues.apache.org/jira/browse/KAFKA-2442
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Aditya Auradkar
> Fix For: 0.8.3
>
>
> We observed that testThrottledProducerConsumer in QuotasTest may fail or 
> succeed randomly. It appears that the test may fail when the system is slow. 
> We can add a timer in the integration test to avoid random failure.
> See an example failure at 
> https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
> https://github.com/apache/kafka/pull/142.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2015-08-21 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2442:
--

Assignee: Aditya Auradkar  (was: Dong Lin)

> QuotasTest should not fail when cpu is busy
> ---
>
> Key: KAFKA-2442
> URL: https://issues.apache.org/jira/browse/KAFKA-2442
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Aditya Auradkar
> Fix For: 0.8.3
>
>
> We observed that testThrottledProducerConsumer in QuotasTest may fail or 
> succeed randomly. It appears that the test may fail when the system is slow. 
> We can add a timer in the integration test to avoid random failure.
> See an example failure at 
> https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
> https://github.com/apache/kafka/pull/142.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2446) KAFKA-2205 causes existing Topic config changes to be lost

2015-08-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704037#comment-14704037
 ] 

Aditya Auradkar commented on KAFKA-2446:


[~junrao][~jjkoshy] - Can one of you review this quickly?
https://github.com/apache/kafka/pull/152/



> KAFKA-2205 causes existing Topic config changes to be lost
> --
>
> Key: KAFKA-2446
> URL: https://issues.apache.org/jira/browse/KAFKA-2446
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> The path was changed from "/config/topics/" to "/config/topic". This causes 
> existing config overrides not to be read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2293) IllegalFormatConversionException in Partition.scala

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2293.

Resolution: Fixed

Committed to trunk

> IllegalFormatConversionException in Partition.scala
> ---
>
> Key: KAFKA-2293
> URL: https://issues.apache.org/jira/browse/KAFKA-2293
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2293.patch
>
>
> ERROR [KafkaApis] [kafka-request-handler-9] [kafka-server] [] [KafkaApi-306] 
> error when handling request Name: 
> java.util.IllegalFormatConversionException: d != 
> kafka.server.LogOffsetMetadata
> at 
> java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4302)
> at 
> java.util.Formatter$FormatSpecifier.printInteger(Formatter.java:2793)
> at java.util.Formatter$FormatSpecifier.print(Formatter.java:2747)
> at java.util.Formatter.format(Formatter.java:2520)
> at java.util.Formatter.format(Formatter.java:2455)
> at java.lang.String.format(String.java:2925)
> at 
> scala.collection.immutable.StringLike$class.format(StringLike.scala:266)
> at scala.collection.immutable.StringOps.format(StringOps.scala:31)
> at 
> kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:253)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:788)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:788)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:433)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2097) Implement request delays for quota violations

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2097.

Resolution: Fixed

Resolved in KAFKA-2084

> Implement request delays for quota violations
> -
>
> Key: KAFKA-2097
> URL: https://issues.apache.org/jira/browse/KAFKA-2097
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> As defined in the KIP, implement delays on a per-request basis for both 
> producer and consumer. This involves either modifying the existing purgatory 
> or adding a new delay queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2085) Return delay time in QuotaViolationException

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2085.

Resolution: Fixed

Resolved in KAFKA-2084

> Return delay time in QuotaViolationException
> 
>
> Key: KAFKA-2085
> URL: https://issues.apache.org/jira/browse/KAFKA-2085
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> As described in KIP-13, we need to be able to return a delay in 
> QuotaViolationException. Compute delay in Sensor and return in the thrown 
> exception.
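Illustrative shape of the change (the field name is an assumption): the exception carries
the delay so the caller can defer the response instead of recomputing it.

    // Sketch only: a violation that knows how long the response should be delayed.
    class QuotaViolationException(val throttleTimeMs: Long, message: String)
      extends RuntimeException(message)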



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2097) Implement request delays for quota violations

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2097 started by Aditya Auradkar.
--
> Implement request delays for quota violations
> -
>
> Key: KAFKA-2097
> URL: https://issues.apache.org/jira/browse/KAFKA-2097
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> As defined in the KIP, implement delays on a per-request basis for both 
> producer and consumer. This involves either modifying the existing purgatory 
> or adding a new delay queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703701#comment-14703701
 ] 

Aditya Auradkar commented on KAFKA-2084:


[~guozhang] Will do

> byte rate metrics per client ID (producer and consumer)
> ---
>
> Key: KAFKA-2084
> URL: https://issues.apache.org/jira/browse/KAFKA-2084
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
> KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
> KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
> KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
> KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
> KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
> KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
> KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
> KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch, 
> KAFKA-2084_2015-08-10_13:48:50.patch, KAFKA-2084_2015-08-10_21:57:48.patch, 
> KAFKA-2084_2015-08-12_12:02:33.patch, KAFKA-2084_2015-08-12_12:04:51.patch, 
> KAFKA-2084_2015-08-12_12:08:17.patch, KAFKA-2084_2015-08-12_21:24:07.patch, 
> KAFKA-2084_2015-08-13_19:08:27.patch, KAFKA-2084_2015-08-13_19:19:16.patch, 
> KAFKA-2084_2015-08-14_17:43:00.patch
>
>
> We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
> basis. This is necessary for quotas.
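A rough sketch of per-client byte-rate bookkeeping (illustrative only; the actual work
builds on the metrics library's sensors rather than a hand-rolled window like this):

    import scala.collection.mutable

    // Sketch only: one rolling counter per clientId, read back as bytes/second.
    class ClientByteRate(windowMs: Long) {
      private case class Window(var startMs: Long, var bytes: Long)
      private val windows = mutable.Map.empty[String, Window]

      def record(clientId: String, bytes: Long, nowMs: Long): Double = {
        val w = windows.getOrElseUpdate(clientId, Window(nowMs, 0L))
        if (nowMs - w.startMs > windowMs) { w.startMs = nowMs; w.bytes = 0L }
        w.bytes += bytes
        w.bytes * 1000.0 / math.max(1L, nowMs - w.startMs)
      }
    }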



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2446) KAFKA-2205 causes existing Topic config changes to be lost

2015-08-19 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2446:
--

 Summary: KAFKA-2205 causes existing Topic config changes to be lost
 Key: KAFKA-2446
 URL: https://issues.apache.org/jira/browse/KAFKA-2446
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


The path was changed from "/config/topics/" to "/config/topic". This causes 
existing config overrides not to be read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2444) Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED

2015-08-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703192#comment-14703192
 ] 

Aditya Auradkar commented on KAFKA-2444:


[~gwenshap] will do

> Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
> --
>
> Key: KAFKA-2444
> URL: https://issues.apache.org/jira/browse/KAFKA-2444
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> This test has been failing on Jenkins builds several times in the last few 
> days. For example: https://builds.apache.org/job/Kafka-trunk/591/console
> kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
> junit.framework.AssertionFailedError: Should have been throttled
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2444) Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED

2015-08-19 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2444:
--

Assignee: Aditya Auradkar

> Fail test: kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
> --
>
> Key: KAFKA-2444
> URL: https://issues.apache.org/jira/browse/KAFKA-2444
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> This test has been failing on Jenkins builds several times in the last few 
> days. For example: https://builds.apache.org/job/Kafka-trunk/591/console
> kafka.api.QuotasTest > testThrottledProducerConsumer FAILED
> junit.framework.AssertionFailedError: Should have been throttled
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.api.QuotasTest.testThrottledProducerConsumer(QuotasTest.scala:136)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2443) Expose windowSize on Measurable

2015-08-18 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2443:
--

 Summary: Expose windowSize on Measurable
 Key: KAFKA-2443
 URL: https://issues.apache.org/jira/browse/KAFKA-2443
 Project: Kafka
  Issue Type: Task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


Currently, we don't have a means to measure the size of the metric window since 
the final sample can be incomplete.

Expose windowSize(now) on Measurable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2420) Merge the Throttle time computation for Quotas and Throttler

2015-08-10 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2420:
--

 Summary: Merge the Throttle time computation for Quotas and 
Throttler
 Key: KAFKA-2420
 URL: https://issues.apache.org/jira/browse/KAFKA-2420
 Project: Kafka
  Issue Type: Improvement
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


Our quota implementation computes throttle time separately from 
Throttler.scala. Unify the calculation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-08-10 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2419:
---
Description: 
Currently, metrics cannot be removed once registered. 
Implement a feature to remove certain sensors after a certain period of 
inactivity (perhaps configurable).

  was:Implement a feature to remove certain sensors after a certain period of 
inactivity (perhaps configurable).


> Allow certain Sensors to be garbage collected after inactivity
> --
>
> Key: KAFKA-2419
> URL: https://issues.apache.org/jira/browse/KAFKA-2419
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> Currently, metrics cannot be removed once registered. 
> Implement a feature to remove certain sensors after a certain period of 
> inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2419) Allow certain Sensors to be garbage collected after inactivity

2015-08-10 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2419:
--

 Summary: Allow certain Sensors to be garbage collected after 
inactivity
 Key: KAFKA-2419
 URL: https://issues.apache.org/jira/browse/KAFKA-2419
 Project: Kafka
  Issue Type: New Feature
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


Implement a feature to remove certain sensors after a certain period of 
inactivity (perhaps configurable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-08-10 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680409#comment-14680409
 ] 

Aditya Auradkar commented on KAFKA-2084:


[~junrao][~jjkoshy] I think I've addressed all of your comments. Can you guys 
take another look?

> byte rate metrics per client ID (producer and consumer)
> ---
>
> Key: KAFKA-2084
> URL: https://issues.apache.org/jira/browse/KAFKA-2084
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
> KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
> KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
> KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
> KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
> KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
> KAFKA-2084_2015-06-04_16:31:22.patch, KAFKA-2084_2015-06-12_10:39:35.patch, 
> KAFKA-2084_2015-06-29_17:53:44.patch, KAFKA-2084_2015-08-04_18:50:51.patch, 
> KAFKA-2084_2015-08-04_19:07:46.patch, KAFKA-2084_2015-08-07_11:27:51.patch
>
>
> We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
> basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1229) Reload broker config without a restart

2015-08-05 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14659071#comment-14659071
 ] 

Aditya Auradkar commented on KAFKA-1229:


Hey [~vamsi360],

We recently had a pretty detailed discussion on this: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration
At that time, we decided not to support reloading broker config via SIGHUP.

Aditya

> Reload broker config without a restart
> --
>
> Key: KAFKA-1229
> URL: https://issues.apache.org/jira/browse/KAFKA-1229
> Project: Kafka
>  Issue Type: Wish
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Carlo Cabanilla
>Priority: Minor
>
> In order to minimize client disruption, ideally you'd be able to reload 
> broker config without having to restart the server. On *nix system the 
> convention is to have the process reread its configuration if it receives a 
> SIGHUP signal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-08-04 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654505#comment-14654505
 ] 

Aditya Auradkar commented on KAFKA-2205:


Thanks Jun.

1. Filed https://issues.apache.org/jira/browse/KAFKA-2404. I'll submit a patch 
soon.
2. Documented the changes.
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch, KAFKA-2205_2015-07-24_18:11:34.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration
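The exact command-line syntax comes out of the KIP-21 work; the invocation below is a
hypothetical example modeled on the describe command quoted in KAFKA-2404, and the final
flag names may differ.

    # Hypothetical usage of the proposed ConfigCommand (flag names are assumptions):
    bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type client \
      --entity-name client1 --alter --add-config 'producer_byte_rate=1048576'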



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2404) Delete config znode when config values are empty

2015-08-04 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2404:
--

 Summary: Delete config znode when config values are empty
 Key: KAFKA-2404
 URL: https://issues.apache.org/jira/browse/KAFKA-2404
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


Jun's comment from KAFKA-2205:

"Currently, if I add client config and then remove it, the clientid still shows 
up during describe, but with empty config values. We probably should delete the 
path when there is no overwritten values. Could you do that in a follow up 
patch?
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type client --describe 
Configs for client:client1 are"




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-07-28 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645166#comment-14645166
 ] 

Aditya Auradkar commented on KAFKA-2255:


This was brought up during a mailing list discussion. I'll work on it.

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> The Samza team noticed that the documentation for the 
> max.in.flight.requests.per.connection property for the Java-based producer is 
> missing from the 0.8.2 documentation. I checked the code, and it looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-07-28 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2255:
--

Assignee: Aditya Auradkar

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> The Samza team noticed that the documentation for the 
> max.in.flight.requests.per.connection property for the Java-based producer is 
> missing from the 0.8.2 documentation. I checked the code, and it looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-24 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641313#comment-14641313
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] One more time :). Addressed your comments; ready to commit, I think.

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch, KAFKA-2205_2015-07-24_18:11:34.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631772#comment-14631772
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] Another patch ready!

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-14 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14626814#comment-14626814
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] Thanks! I addressed your remaining comments. Please take a look.

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2332) Add quota metrics to old producer and consumer

2015-07-13 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2332:
--

 Summary: Add quota metrics to old producer and consumer
 Key: KAFKA-2332
 URL: https://issues.apache.org/jira/browse/KAFKA-2332
 Project: Kafka
  Issue Type: Improvement
Reporter: Aditya Auradkar
Assignee: Dong Lin


Quota metrics have only been added to the new producer and consumer. It may be 
beneficial to add these to the existing consumer and old producer also for 
clients using the older versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2332) Add quota metrics to old producer and consumer

2015-07-13 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2332:
---
Issue Type: Sub-task  (was: Improvement)
Parent: KAFKA-2083

> Add quota metrics to old producer and consumer
> --
>
> Key: KAFKA-2332
> URL: https://issues.apache.org/jira/browse/KAFKA-2332
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Dong Lin
>  Labels: quotas
>
> Quota metrics have only been added to the new producer and consumer. It may 
> be beneficial to add these to the existing consumer and old producer also for 
> clients using the older versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-07 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617849#comment-14617849
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] - Addressed your comments. Can you take a look again? Thanks!

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-01 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611311#comment-14611311
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] - Addressed your comments. Can you take another look?

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-06-22 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14596965#comment-14596965
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] - Can you review this patch?

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2293) IllegalFormatConversionException in Partition.scala

2015-06-22 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14596298#comment-14596298
 ] 

Aditya Auradkar commented on KAFKA-2293:


[~junrao] Can you take a look at this minor fix?

> IllegalFormatConversionException in Partition.scala
> ---
>
> Key: KAFKA-2293
> URL: https://issues.apache.org/jira/browse/KAFKA-2293
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2293.patch
>
>
> ERROR [KafkaApis] [kafka-request-handler-9] [kafka-server] [] [KafkaApi-306] 
> error when handling request Name: 
> java.util.IllegalFormatConversionException: d != 
> kafka.server.LogOffsetMetadata
> at 
> java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4302)
> at 
> java.util.Formatter$FormatSpecifier.printInteger(Formatter.java:2793)
> at java.util.Formatter$FormatSpecifier.print(Formatter.java:2747)
> at java.util.Formatter.format(Formatter.java:2520)
> at java.util.Formatter.format(Formatter.java:2455)
> at java.lang.String.format(String.java:2925)
> at 
> scala.collection.immutable.StringLike$class.format(StringLike.scala:266)
> at scala.collection.immutable.StringOps.format(StringOps.scala:31)
> at 
> kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:253)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:788)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:788)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:433)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2293) IllegalFormatConversionException in Partition.scala

2015-06-22 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2293:
--

 Summary: IllegalFormatConversionException in Partition.scala
 Key: KAFKA-2293
 URL: https://issues.apache.org/jira/browse/KAFKA-2293
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


ERROR [KafkaApis] [kafka-request-handler-9] [kafka-server] [] [KafkaApi-306] 
error when handling request Name: 
java.util.IllegalFormatConversionException: d != kafka.server.LogOffsetMetadata
at 
java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4302)
at java.util.Formatter$FormatSpecifier.printInteger(Formatter.java:2793)
at java.util.Formatter$FormatSpecifier.print(Formatter.java:2747)
at java.util.Formatter.format(Formatter.java:2520)
at java.util.Formatter.format(Formatter.java:2455)
at java.lang.String.format(String.java:2925)
at 
scala.collection.immutable.StringLike$class.format(StringLike.scala:266)
at scala.collection.immutable.StringOps.format(StringOps.scala:31)
at 
kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:253)
at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:791)
at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:788)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at 
kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:788)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:433)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-20 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594702#comment-14594702
 ] 

Aditya Auradkar commented on KAFKA-2238:


[~junrao] - KafkaConfig seems to be created in Kafka.scala right before we pass 
it to the KafkaMetricsReporter using the raw properties object. I guess we can't 
pass in KafkaConfig directly because the same class is used in the Scala 
clients as well. The only real benefit of adding these properties to 
KafkaConfig is that they get documented and show up on the release wikis.

[~gwenshap] - Took a look at KAFKA-2249. For metrics, are you referring to the 
config for "metricReporterClasses"? If so, isn't that only for the new 
metrics package?

> KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
> ---
>
> Key: KAFKA-2238
> URL: https://issues.apache.org/jira/browse/KAFKA-2238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2238.patch
>
>
> None of the metrics config values are included in KafkaConfig, so they 
> cannot be configured on the brokers. This is because the 
> KafkaMetricsReporter is passed a properties object generated by calling 
> toProps on KafkaConfig:
> KafkaMetricsReporter.startReporters(new 
> VerifiableProperties(serverConfig.toProps))
> However, KafkaConfig never writes these values into the properties object, so 
> they aren't configurable and the defaults always apply.
> Add the following metrics configs to KafkaConfig:
> kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
> kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2287) Add metric to track the number of clientId throttled

2015-06-19 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2287:
--

 Summary: Add metric to track the number of clientId throttled
 Key: KAFKA-2287
 URL: https://issues.apache.org/jira/browse/KAFKA-2287
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-11 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582679#comment-14582679
 ] 

Aditya Auradkar commented on KAFKA-2238:


[~junrao][~nehanarkhede] Can one of you review this?

> KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
> ---
>
> Key: KAFKA-2238
> URL: https://issues.apache.org/jira/browse/KAFKA-2238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2238.patch
>
>
> The metrics config values are not included in KafkaConfig and consequently 
> cannot be configured on the brokers. This is because the 
> KafkaMetricsReporter is passed a properties object generated by calling 
> toProps on KafkaConfig:
> KafkaMetricsReporter.startReporters(new 
> VerifiableProperties(serverConfig.toProps))
> However, KafkaConfig never writes these values into the properties object, and 
> hence they aren't configurable; the defaults always apply.
> Add the following configs to KafkaConfig:
> kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
> kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2205:
---
Labels: quotas  (was: )

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2241) AbstractFetcherThread.shutdown() should not block on ReadableByteChannel.read(buffer)

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2241:
---
Labels: quotas  (was: )

> AbstractFetcherThread.shutdown() should not block on 
> ReadableByteChannel.read(buffer)
> -
>
> Key: KAFKA-2241
> URL: https://issues.apache.org/jira/browse/KAFKA-2241
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>  Labels: quotas
> Attachments: KAFKA-2241.patch, KAFKA-2241_2015-06-03_15:30:35.patch, 
> client.java, server.java
>
>
> This is likely a bug in Java. It affects Kafka, and here is the patch to 
> fix it.
> Here is the description of the bug. Per the description of SocketChannel in the 
> Java 7 documentation, if another thread interrupts the current thread while a read 
> operation is in progress, the channel should be closed and a 
> ClosedByInterruptException thrown. However, we find that interrupting the thread 
> will not unblock the channel immediately. Instead, it waits for a response or 
> a socket timeout before throwing an exception.
> This causes a problem in the following scenario. Suppose 
> console_consumer_1 is reading from a topic and, due to quota delay or 
> whatever reason, it blocks on channel.read(buffer). At this moment, another 
> console_consumer_2 joins and triggers a rebalance at console_consumer_1. But 
> consumer_1 will block waiting on the channel.read before it can release 
> partition ownership, causing consumer_2 to fail after a number of failed 
> attempts to obtain partition ownership.
> In other words, AbstractFetcherThread.shutdown() is not guaranteed to 
> shut down due to this bug.
> The problem is confirmed with Java 1.7 and Java 1.6. To check it yourself, 
> you can use the attached server.java and client.java -- start the server 
> before the client and see if the client unblocks after interruption.
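
For reference, a rough Scala analogue of the kind of harness described above (this 
is not the attached server.java/client.java, just an illustrative sketch assuming 
Scala 2.12+ for the Runnable lambdas): a reader thread blocks in channel.read on a 
socket that never delivers data, the main thread interrupts it, and you can observe 
whether the read unblocks promptly or stays stuck.

import java.net.{InetSocketAddress, ServerSocket}
import java.nio.ByteBuffer
import java.nio.channels.SocketChannel

// Illustrative reproduction sketch; not the attached code.
object InterruptReadSketch {
  def main(args: Array[String]): Unit = {
    // "Server" that accepts the connection but never writes anything back.
    val server = new ServerSocket(0)
    new Thread(() => { server.accept(); () }).start()

    val channel = SocketChannel.open(new InetSocketAddress("localhost", server.getLocalPort))
    val start = System.currentTimeMillis()
    val reader = new Thread(() => {
      try {
        val n = channel.read(ByteBuffer.allocate(1024)) // blocks: no data ever arrives
        println(s"read returned $n")
      } catch {
        case e: Throwable =>
          println(s"read unblocked after ${System.currentTimeMillis() - start} ms with $e")
      }
    })
    reader.start()

    Thread.sleep(1000)
    reader.interrupt() // per the SocketChannel contract this should close the channel
                       // and make the read throw ClosedByInterruptException promptly
    reader.join(5000)
    println(s"reader still blocked 5s after interrupt: ${reader.isAlive}")
  }
}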



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2084:
---
Labels: quotas  (was: )

> byte rate metrics per client ID (producer and consumer)
> ---
>
> Key: KAFKA-2084
> URL: https://issues.apache.org/jira/browse/KAFKA-2084
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
> KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
> KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
> KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
> KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
> KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
> KAFKA-2084_2015-06-04_16:31:22.patch
>
>
> We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
> basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2171) System Test for Quotas

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2171:
---
Labels: quotas  (was: )

> System Test for Quotas
> --
>
> Key: KAFKA-2171
> URL: https://issues.apache.org/jira/browse/KAFKA-2171
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>  Labels: quotas
> Attachments: KAFKA-2171.patch, KAFKA-2171.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Initial setup and configuration:
> In all scenarios, we create the following entities and topic:
> - 1 kafka broker
> - 1 topic with replication factor = 1, ackNum = -1, and partition = 6
> - 1 producer performance
> - 2 console consumers
> - 3 jmx tools, one for each of the producer and consumers
> - we consider two rates to be approximately the same if they differ by at most 
> 10% (a sketch of this check follows the quoted description).
> Scenario 1: validate the effectiveness of default producer and consumer quota
> 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
> 2) Produce 2000 messages of 2 bytes each (with clientId = 
> producer_performance)
> 3) Wait until producer stops
> 4) Two consumers consume from the topic (with clientId = group1 and group2 
> respectively )
> 5) verify that actual rate is within 10% of expected rate (quota)
> Scenario 2: validate the effectiveness of producer and consumer quota override
> 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
> Override quota of producer_performance and group1 to be 15000 Bytes/sec
> 2) Produce 2000 messages of 2 bytes each (with clientId = 
> producer_performance)
> 3) Wait until producer stops
> 4) Two consumers consume from the topic (with clientId = group1 and group2 
> respectively )
> 5) verify that actual rate is within 10% of expected rate (quota)
> Scenario 3: validate the effectiveness of quota sharing
> 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
> Override quota of producer_performance and group1 to be 15000 Bytes/sec
> 2) Produce 2000 messages of 2 bytes each (with clientId = 
> producer_performance)
> 3) Wait until producer stops
> 4) Two consumers consume from the topic (with clientId = group1 for both 
> consumers)
> 5) verify that actual rate is within 10% of expected rate (quota)
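
A minimal sketch of the 10% tolerance check referenced in each scenario (names and 
numbers here are illustrative; the actual test harness is separate):

// Illustrative helper: two rates are "approximately the same" if they differ
// by at most 10% of the expected rate (the quota).
object RateCheckSketch {
  def withinTolerance(observed: Double, expected: Double, tolerance: Double = 0.10): Boolean =
    math.abs(observed - expected) <= tolerance * expected

  def main(args: Array[String]): Unit = {
    val quota = 15000.0                      // Bytes/sec, e.g. an overridden quota
    println(withinTolerance(14100.0, quota)) // true: 6% below the quota
    println(withinTolerance(11000.0, quota)) // false: ~27% below the quota
  }
}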



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2136:
---
Labels: quotas  (was: )

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.
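
As a rough sketch of the shape of the change (stand-in types only, not the real 
request/response or metrics classes in the clients module): the versioned response 
carries a throttle_time_ms field, and the client records it in a metric so the 
delay is visible.

// Stand-in sketch; the actual Fetch/ProduceResponse and metrics classes differ.
object ThrottleTimeSketch {
  final case class ProduceResponseV1(errorCode: Short, baseOffset: Long, throttleTimeMs: Int)

  // Trivial stand-in for an average-delay metric.
  final class DelayMetric {
    private var totalMs = 0L
    private var count = 0L
    def record(ms: Int): Unit = { totalMs += ms; count += 1 }
    def avgMs: Double = if (count == 0) 0.0 else totalMs.toDouble / count
  }

  def main(args: Array[String]): Unit = {
    val delay = new DelayMetric
    val responses = Seq(
      ProduceResponseV1(errorCode = 0, baseOffset = 100L, throttleTimeMs = 0),
      ProduceResponseV1(errorCode = 0, baseOffset = 101L, throttleTimeMs = 250)
    )
    responses.foreach(r => delay.record(r.throttleTimeMs))
    println(f"average produce throttle time: ${delay.avgMs}%.1f ms") // 125.0 ms
  }
}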



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-06-08 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577509#comment-14577509
 ] 

Aditya Auradkar edited comment on KAFKA-1367 at 6/8/15 5:40 PM:


[~jjkoshy] [~junrao] Regarding KAFKA-2225: even if we leave the ISR in the 
TopicMetadataRequest, how do the consumers detect which of the replicas in the ISR 
to fetch from? The consumers need to know which "zone" each of the 
brokers lives in, and their own, in order to fetch from the closest replica (which 
mitigates the bandwidth issues described in 2225).

Couple of options:
1. Return it in BrokerMetadataRequest (KIP-24)
2. Piggyback it along with the ISR field in TMR. i.e. isr : {0: "zone1", 1: 
"zone2"}

If we choose to do (2), then the TMR will evolve anyway.


was (Author: aauradkar):
[~jjkoshy] [~junrao] KAFKA-2225, even if we leave the ISR in the 
TopicMetadataRequest, how do the consumers detect which of the replicas in ISR 
to fetch from right? The consumers need to know which "zone" each of the 
brokers live in and their own in order to fetch from the closest replica (which 
mitigates with the bandwidth issues described in 2225).

> Broker topic metadata not kept in sync with ZooKeeper
> -
>
> Key: KAFKA-1367
> URL: https://issues.apache.org/jira/browse/KAFKA-1367
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Ryan Berdeen
>Assignee: Ashish K Singh
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1367.txt
>
>
> When a broker is restarted, the topic metadata responses from the brokers 
> will be incorrect (different from ZooKeeper) until a preferred replica leader 
> election.
> In the metadata, it looks like leaders are correctly removed from the ISR 
> when a broker disappears, but followers are not. Then, when a broker 
> reappears, the ISR is never updated.
> I used a variation of the Vagrant setup created by Joe Stein to reproduce 
> this with latest from the 0.8.1 branch: 
> https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-06-08 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577509#comment-14577509
 ] 

Aditya Auradkar commented on KAFKA-1367:


[~jjkoshy] [~junrao] Regarding KAFKA-2225: even if we leave the ISR in the 
TopicMetadataRequest, how do the consumers detect which of the replicas in the ISR 
to fetch from? The consumers need to know which "zone" each of the 
brokers lives in, and their own, in order to fetch from the closest replica (which 
mitigates the bandwidth issues described in 2225).
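
To make the point concrete, a small sketch of the selection logic being described 
(all names are illustrative; no such client logic existed at the time): given a 
broker-to-zone mapping made available to the client, however it is delivered, a 
consumer can prefer an in-sync replica in its own zone and fall back to the leader 
otherwise.

// Illustrative only: zone-aware replica choice, assuming the client somehow
// learns each broker's zone alongside the ISR.
object ClosestReplicaSketch {
  def pickReplica(isr: Seq[Int], leader: Int, brokerZones: Map[Int, String], myZone: String): Int =
    isr.find(b => brokerZones.get(b).contains(myZone)).getOrElse(leader)

  def main(args: Array[String]): Unit = {
    val brokerZones = Map(0 -> "zone1", 1 -> "zone2", 2 -> "zone2")
    val isr = Seq(0, 1, 2)

    println(pickReplica(isr, 0, brokerZones, "zone2")) // 1: first in-sync replica in the consumer's zone
    println(pickReplica(isr, 0, brokerZones, "zone3")) // 0: no same-zone replica, fall back to the leader
  }
}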

> Broker topic metadata not kept in sync with ZooKeeper
> -
>
> Key: KAFKA-1367
> URL: https://issues.apache.org/jira/browse/KAFKA-1367
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Ryan Berdeen
>Assignee: Ashish K Singh
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1367.txt
>
>
> When a broker is restarted, the topic metadata responses from the brokers 
> will be incorrect (different from ZooKeeper) until a preferred replica leader 
> election.
> In the metadata, it looks like leaders are correctly removed from the ISR 
> when a broker disappears, but followers are not. Then, when a broker 
> reappears, the ISR is never updated.
> I used a variation of the Vagrant setup created by Joe Stein to reproduce 
> this with latest from the 0.8.1 branch: 
> https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2247) Merge kafka.utils.Time and kafka.common.utils.Time

2015-06-03 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2247:
--

 Summary: Merge kafka.utils.Time and kafka.common.utils.Time
 Key: KAFKA-2247
 URL: https://issues.apache.org/jira/browse/KAFKA-2247
 Project: Kafka
  Issue Type: Improvement
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
Priority: Minor


We currently have 2 different versions of Time in clients and core. These need 
to be merged



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2238:
---
Description: 
The metrics config values are not included in KafkaConfig and consequently 
cannot be configured on the brokers. This is because the KafkaMetricsReporter 
is passed a properties object generated by calling toProps on KafkaConfig:
KafkaMetricsReporter.startReporters(new 
VerifiableProperties(serverConfig.toProps))

However, KafkaConfig never writes these values into the properties object, and 
hence they aren't configurable; the defaults always apply.

Add the following configs to KafkaConfig:
kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir

  was:
All metrics config values are not included in KafkaConfig and consequently do 
not show up in the generated documentation.

Add the following metrics to KafkaConfig
kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir

These metrics are read from 


> KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
> ---
>
> Key: KAFKA-2238
> URL: https://issues.apache.org/jira/browse/KAFKA-2238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2238.patch
>
>
> The metrics config values are not included in KafkaConfig and consequently 
> cannot be configured on the brokers. This is because the 
> KafkaMetricsReporter is passed a properties object generated by calling 
> toProps on KafkaConfig:
> KafkaMetricsReporter.startReporters(new 
> VerifiableProperties(serverConfig.toProps))
> However, KafkaConfig never writes these values into the properties object, and 
> hence they aren't configurable; the defaults always apply.
> Add the following configs to KafkaConfig:
> kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
> kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2238:
---
Summary: KafkaMetricsConfig cannot be configured in broker (KafkaConfig)  
(was: KafkaMetricsConfig not documented in KafkaConfig)

> KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
> ---
>
> Key: KAFKA-2238
> URL: https://issues.apache.org/jira/browse/KAFKA-2238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2238.patch
>
>
> All metrics config values are not included in KafkaConfig and consequently do 
> not show up in the generated documentation.
> Add the following metrics to KafkaConfig
> kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
> kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir
> These metrics are read from 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2238) KafkaMetricsConfig not documented in KafkaConfig

2015-06-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2238:
---
Description: 
All metrics config values are not included in KafkaConfig and consequently do 
not show up in the generated documentation.

Add the following metrics to KafkaConfig
kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir

These metrics are read from 

  was:
All metrics config values are not included in KafkaConfig and consequently do 
not show up in the generated documentation.

Add the following metrics to KafkaConfig
kafka.metrics.reporters
kafka.metrics.polling.interval.secs



> KafkaMetricsConfig not documented in KafkaConfig
> 
>
> Key: KAFKA-2238
> URL: https://issues.apache.org/jira/browse/KAFKA-2238
> Project: Kafka
>  Issue Type: Bug
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2238.patch
>
>
> All metrics config values are not included in KafkaConfig and consequently do 
> not show up in the generated documentation.
> Add the following metrics to KafkaConfig
> kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
> kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir
> These metrics are read from 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2238) KafkaMetricsConfig not documented in KafkaConfig

2015-06-02 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2238:
--

 Summary: KafkaMetricsConfig not documented in KafkaConfig
 Key: KAFKA-2238
 URL: https://issues.apache.org/jira/browse/KAFKA-2238
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


All metrics config values are not included in KafkaConfig and consequently do 
not show up in the generated documentation.

Add the following metrics to KafkaConfig
kafka.metrics.reporters
kafka.metrics.polling.interval.secs




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2209) Change client quotas dynamically using DynamicConfigManager

2015-05-20 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2209:
--

 Summary: Change client quotas dynamically using 
DynamicConfigManager
 Key: KAFKA-2209
 URL: https://issues.apache.org/jira/browse/KAFKA-2209
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

