[jira] [Commented] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632276#comment-14632276
 ] 

Ashish K Singh commented on KAFKA-2347:
---

Thanks for the info, [~becket_qin]. That turned out to be a breeze :). I hope I 
got it right; can you take a look?

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Ashish K Singh
> Attachments: KAFKA-2347.patch
>
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632275#comment-14632275
 ] 

Ashish K Singh commented on KAFKA-2347:
---

Created reviewboard https://reviews.apache.org/r/36593/
 against branch trunk

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Ashish K Singh
> Attachments: KAFKA-2347.patch
>
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2347:
--
Status: Patch Available  (was: Open)

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Ashish K Singh
> Attachments: KAFKA-2347.patch
>
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2347:
--
Attachment: KAFKA-2347.patch

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Ashish K Singh
> Attachments: KAFKA-2347.patch
>
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36593: Patch for KAFKA-2347

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36593/
---

Review request for kafka.


Bugs: KAFKA-2347
https://issues.apache.org/jira/browse/KAFKA-2347


Repository: kafka


Description
---

KAFKA-2347: Add setConsumerRebalanceListener method to 
ZookeeperConsuemrConnector java api


Diffs
-

  core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java 
ca74ca8abf03478b6de4ec1f82fbac379e7603f1 

Diff: https://reviews.apache.org/r/36593/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Assigned] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh reassigned KAFKA-2347:
-

Assignee: Ashish K Singh

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Ashish K Singh
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632270#comment-14632270
 ] 

Ashish K Singh commented on KAFKA-2301:
---

[~ewencp], [~junrao] this depends on KAFKA-313. It will be nice if you guys can 
take a look at that as well. Thanks!

> Deprecate ConsumerOffsetChecker
> ---
>
> Key: KAFKA-2301
> URL: https://issues.apache.org/jira/browse/KAFKA-2301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch, 
> KAFKA-2301_2015-07-02_09:04:35.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632269#comment-14632269
 ] 

Jiangjie Qin commented on KAFKA-2347:
-

[~ashishujjain] Sure, actually it was my bad when writing KAFKA-1650. The 
function is already there everywhere, but I forgot to add the interface to Java 
API. Thanks.

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632268#comment-14632268
 ] 

Ashish K Singh commented on KAFKA-2275:
---

[~guozhang] if you agree with the approach of extending the existing partitionsFor 
API instead of creating a new ListTopics() API, then we should update the JIRA 
title accordingly. Thoughts?
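
For illustration, here is roughly how the extended API could be used, assuming the 
signature proposed in review 36590 (Map<String, List<PartitionInfo>> 
partitionsFor(String... topics), with an empty topic list meaning "all topics"). 
This is an editor's sketch, not code from the patch:

{code}
import java.util.{List => JList, Map => JMap}
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.PartitionInfo

// Assumes the *proposed* extension from review 36590, not the released API:
//   Map<String, List<PartitionInfo>> partitionsFor(String... topics)
// where calling it with no topics returns metadata for every topic in the cluster.
def printTopicMetadata(consumer: KafkaConsumer[Array[Byte], Array[Byte]]): Unit = {
  val byTopic: JMap[String, JList[PartitionInfo]] = consumer.partitionsFor()
  for ((topic, partitions) <- byTopic.asScala)
    println(s"$topic: ${partitions.size} partitions")
}
{code}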

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275_2015-07-17_21:39:27.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632267#comment-14632267
 ] 

Ashish K Singh commented on KAFKA-2275:
---

[~onurkaraman] let me know if it looks fine. Thanks!

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275_2015-07-17_21:39:27.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632265#comment-14632265
 ] 

Ashish K Singh commented on KAFKA-2275:
---

Updated reviewboard https://reviews.apache.org/r/36590/
 against branch trunk

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275_2015-07-17_21:39:27.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632264#comment-14632264
 ] 

Ewen Cheslack-Postava commented on KAFKA-2301:
--

[~junrao] This looks like a trivial addition of a warning, assigned to you as 
reviewer just to make sure it gets noticed...

> Deprecate ConsumerOffsetChecker
> ---
>
> Key: KAFKA-2301
> URL: https://issues.apache.org/jira/browse/KAFKA-2301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch, 
> KAFKA-2301_2015-07-02_09:04:35.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2275:
--
Attachment: KAFKA-2275_2015-07-17_21:39:27.patch

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275_2015-07-17_21:39:27.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36548: Patch for KAFKA-2336

2015-07-17 Thread Jiangjie Qin


> On July 17, 2015, 4:26 a.m., Jiangjie Qin wrote:
> > Looks good to me.
> 
> Jiangjie Qin wrote:
> Actually do we need to talk to Zookeeper every time? Can we read the data 
> from topic metadata cache directly?
> 
> Gwen Shapira wrote:
> Good point, Jiangjie - looks like partitionFor is called on every 
> ConsumerMetadataRequest handling, so some kind of caching will be nice.
> 
> Grant Henke wrote:
> We only talk to Zookeeper once at instance creation via the call to 
> `getOffsetsTopicPartitionCount` and setting the val 
> `offsetsTopicPartitionCount`. The static value is used from then on for every 
> call to `partitionFor`.
> 
> Jiangjie Qin wrote:
> Ah, I see, my bad. Then this patch does not seem to completely solve the 
> issue, though. Say the offsets topic does not exist yet. What if two brokers 
> have different offsets-topic partition-count configurations? After they start 
> up, and before the offsets topic is created in ZooKeeper, they will have 
> different values for offsetsTopicPartitionCount. Will that cause problems 
> silently?
> 
> Grant Henke wrote:
> I think that sort of issue exists for many of Kafka's configurations, and 
> exists for this configuration without this patch too. I do not aim to solve 
> non-uniform configuration in this patch.

I agree that a configuration mismatch could cause issues for other configurations 
as well, but having existing problems does not mean we should add one more. 
Besides, wouldn't it be simpler to just read from the topic metadata cache?


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36548/#review92020
---
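
For context, an editor's sketch of the lookup being discussed: the group-to-partition 
mapping uses the partition count captured once at startup, so two brokers that 
captured different counts (for example, from mismatched offsets.topic.num.partitions 
settings before the topic exists) can silently map the same group to different 
offsets-topic partitions. Names and details below are assumptions, not the actual 
OffsetManager code:

```
// Editor's sketch (assumption): how a consumer group maps to an offsets-topic partition.
def partitionFor(group: String, offsetsTopicPartitionCount: Int): Int =
  (group.hashCode & 0x7fffffff) % offsetsTopicPartitionCount

// If broker A captured 50 partitions at startup and broker B captured 100,
// the same group can be routed to different partitions:
partitionFor("my-group", 50)   // partition chosen by broker A
partitionFor("my-group", 100)  // partition chosen by broker B -- possibly different
```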


On July 16, 2015, 6:04 p.m., Grant Henke wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36548/
> ---
> 
> (Updated July 16, 2015, 6:04 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2336
> https://issues.apache.org/jira/browse/KAFKA-2336
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fix Scala style
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 47b6ce93da320a565435b4a7916a0c4371143b8a 
> 
> Diff: https://reviews.apache.org/r/36548/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Grant Henke
> 
>



Re: Review Request 36590: Patch for KAFKA-2275

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36590/
---

(Updated July 18, 2015, 4:39 a.m.)


Review request for kafka.


Bugs: KAFKA-2275
https://issues.apache.org/jira/browse/KAFKA-2275


Repository: kafka


Description (updated)
---

Return metadata for all topics if empty list is passed to partitionsFor


KAFKA-2275: Add a "Map<String, List<PartitionInfo>> partitionsFor(String... 
topics)" API to the new consumer


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
252b759c0801f392e3526b0f31503b4b8fbf1c8a 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
bea3d737c51be77d5b5293cdd944d33b905422ba 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
c14eed1e95f2e682a235159a366046f00d1d90d6 
  core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
3eb5f95731a3f06f662b334ab2b3d0ad7fa9e1ca 

Diff: https://reviews.apache.org/r/36590/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Updated] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2301:
-
Reviewer: Jun Rao

> Deprecate ConsumerOffsetChecker
> ---
>
> Key: KAFKA-2301
> URL: https://issues.apache.org/jira/browse/KAFKA-2301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch, 
> KAFKA-2301_2015-07-02_09:04:35.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-26 Add Copycat connector framework for data import/export

2015-07-17 Thread Ewen Cheslack-Postava
If I'm counting correctly, this passes with 7 binding and 1 non-binding
+1s. I'll update the wiki and post an initial patch in the coming days!

Thanks everyone for the feedback and for voting!

-Ewen

On Fri, Jul 17, 2015 at 3:22 PM, Joel Koshy  wrote:

> +1
>
> Thanks,
>
> Joel
>
> On Tue, Jul 14, 2015 at 2:09 PM, Ewen Cheslack-Postava
>  wrote:
> > Hi all,
> >
> > Let's start a vote on KIP-26: Add Copycat connector framework for data
> > import/export
> >
> > For reference, here's the wiki:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767
> > And the mailing list thread (split across two months):
> >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201506.mbox/%3CCAE1jLMOEJjnorFK5CtR3g-n%3Dm_AkrFsYeccsB4QimTRfGBrAGQ%40mail.gmail.com%3E
> >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201507.mbox/%3CCAHwHRrUeNh%2BnCHwCTUCrcipHM3Po0ECUysO%2B%3DX3nwUeOGrcgdw%40mail.gmail.com%3E
> >
> > Just to clarify since this is a bit different from the KIPs voted on so
> > far, the KIP just covers including Copycat in Kafka (rather than having
> it
> > as a separate project). While the KIP aimed to be clear about the exact
> > scope, the details require further discussion. The aim is to include some
> > connectors as well, at a minimum for demonstration purposes, but the
> > expectation is that connector development will, by necessity, be
> federated.
> >
> > I'll kick it off with a +1 (non-binding).
> >
> > --
> > Thanks,
> > Ewen
>



-- 
Thanks,
Ewen


[jira] [Comment Edited] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-17 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632259#comment-14632259
 ] 

Yasuhiro Matsuda edited comment on KAFKA-2260 at 7/18/15 4:16 AM:
--

Here is the outline of the variant Jay mentioned.

- A broker holds a fixed-size array of offsets for each partition. Array 
indexes are hashes of keys. In a sense, an array element works as a 
sub-partition. Sub-partitions do not hold data (messages); all they have are 
the high water marks.
- The broker maintains a high water mark for each sub-partition. A sub-partition 
high water mark is updated when a message whose key belongs to the 
sub-partition is appended to the log. 
- An application maintains the high water mark of each partition (not 
sub-partition!) as it consumes messages. It doesn't need to know anything about 
sub-partitions in a broker.

A produce request is processed as follows.
1. The producer sends the known high water mark of the partition with a message.
2. The broker compares the high water mark in the produce request with the high 
water mark of the sub-partition corresponding to the message key.
3. If the former is greater than or equal to the latter, the broker accepts the 
produce request. (Note that this is not an equality test!)
4. Otherwise, the broker rejects the request.

A nice thing about this is that it is easy to increase the concurrency without 
re-partitioning, and its overhead is predictable.

When changing the number of sub-partitions, the broker doesn't have to 
recompute sub-partition high water marks. It can initialize all array elements 
with the partition's high water mark.



was (Author: yasuhiro.matsuda):
Here is the outline of the variant Jay mentioned.

- A broker holds a fixed size array of offsets for each partition. Array 
indexes are hash of keys. In a sense, an array element works as a 
sub-partition. Sub-partitions do not hold data (messages). All they have are 
the high water marks.
- The broker maintains high water marks for each sub-partition. A sub-partition 
high water mark is updated when a message whose key belongs to the 
sub-partition is appended to the log. 
- An application maintains the high water mark of each partition (not 
sub-partition!) as it consumes messages. It doesn't need to know anything about 
sub-partitions in a broker.

A produce request is processed as follows.
1. The producer sends the known high water mark of the partition with a message.
2. The broker compares the high water mark in the produce request and the high 
water mark of the sub-partition corresponding the message key.
3. If the former is greater than the latter, the broker accepts the produce 
request. (Note that this is not equality test!)
4. Otherwise, the broker rejects the request.

A nice thing about this is that it is easy to increase the concurrency without 
re-partitioning, and its overhead is predictable.

When changing the number of sub-partitions, the broker doesn't have to 
recompute sub-partition high water marks. It can initialize all array elements 
with the partition's high water mark.
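
A compact sketch of the acceptance rule outlined above (using the >= comparison 
from the edited version); the array size, hash, and names here are illustrative 
assumptions, not actual Kafka code:

{code}
// Editor's sketch of the sub-partition high-water-mark check described above.
class SubPartitionCas(numSubPartitions: Int) {
  // One high water mark per sub-partition; no message data is stored here.
  private val subPartitionHw = Array.fill(numSubPartitions)(0L)

  private def subPartitionOf(key: Array[Byte]): Int =
    (java.util.Arrays.hashCode(key) & 0x7fffffff) % numSubPartitions

  // producerHw: the partition-level high water mark the producer last observed.
  // nextOffset: the offset the broker is about to assign to the message.
  def tryAppend(key: Array[Byte], producerHw: Long, nextOffset: Long): Boolean = {
    val sub = subPartitionOf(key)
    if (producerHw >= subPartitionHw(sub)) {   // >= , not an equality test
      subPartitionHw(sub) = nextOffset + 1     // advance this sub-partition's high water mark
      true                                     // accept the produce request
    } else {
      false                                    // reject: the sub-partition moved past what the producer saw
    }
  }
}
{code}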


> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.

[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-17 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632259#comment-14632259
 ] 

Yasuhiro Matsuda commented on KAFKA-2260:
-

Here is the outline of the variant Jay mentioned.

- A broker holds a fixed-size array of offsets for each partition. Array 
indexes are hashes of keys. In a sense, an array element works as a 
sub-partition. Sub-partitions do not hold data (messages); all they have are 
the high water marks.
- The broker maintains a high water mark for each sub-partition. A sub-partition 
high water mark is updated when a message whose key belongs to the 
sub-partition is appended to the log. 
- An application maintains the high water mark of each partition (not 
sub-partition!) as it consumes messages. It doesn't need to know anything about 
sub-partitions in a broker.

A produce request is processed as follows.
1. The producer sends the known high water mark of the partition with a message.
2. The broker compares the high water mark in the produce request with the high 
water mark of the sub-partition corresponding to the message key.
3. If the former is greater than the latter, the broker accepts the produce 
request. (Note that this is not an equality test!)
4. Otherwise, the broker rejects the request.

A nice thing about this is that it is easy to increase the concurrency without 
re-partitioning, and its overhead is predictable.

When changing the number of sub-partitions, the broker doesn't have to 
recompute sub-partition high water marks. It can initialize all array elements 
with the partition's high water mark.


> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> changed, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.

[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-07-18_00:37:31.patch

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632244#comment-14632244
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-17 Thread Edward Ribeiro

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/
---

(Updated July 18, 2015, 3:37 a.m.)


Review request for kafka.


Bugs: KAFKA-2338
https://issues.apache.org/jira/browse/KAFKA-2338


Repository: kafka


Description
---

KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
update broker and consumer settings


Diffs (updated)
-

  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 

Diff: https://reviews.apache.org/r/36578/diff/


Testing
---


Thanks,

Edward Ribeiro



Re: Review Request 32841: Patch for KAFKA-2041

2015-07-17 Thread bantony

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32841/
---

(Updated July 18, 2015, 2:46 a.m.)


Review request for kafka.


Bugs: KAFKA-2041
https://issues.apache.org/jira/browse/KAFKA-2041


Repository: kafka


Description
---

KAFKA-2041


Diffs (updated)
-

  
log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
 628ff53 
  log4j-appender/src/main/java/org/apache/kafka/log4jappender/Keyer.java 
PRE-CREATION 
  
log4j-appender/src/test/java/org/apache/kafka/log4jappender/KafkaLog4jAppenderTest.java
 71bdd94 
  log4j-appender/src/test/java/org/apache/kafka/log4jappender/MockKeyer.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/32841/diff/


Testing
---


Thanks,

benoyantony



[jira] [Updated] (KAFKA-2041) Add ability to specify a KeyClass for KafkaLog4jAppender

2015-07-17 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated KAFKA-2041:

Attachment: kafka-2041-004.patch

Attaching the patch ported to Java.
Also added a test case to check whether the key is properly derived.
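
For illustration, the kind of pluggable key derivation being described might look 
like the following; the trait name and method are hypothetical, not necessarily 
what the Keyer class in the attached patch looks like:

{code}
// Hypothetical shape of a pluggable key deriver for the log4j appender (editor's sketch).
trait MessageKeyer {
  def deriveKey(message: String): String
}

// Example: use the first whitespace-separated token of the log line as the message key,
// so related log lines are routed to the same partition instead of a random one.
class FirstTokenKeyer extends MessageKeyer {
  override def deriveKey(message: String): String =
    message.trim.split("\\s+").headOption.getOrElse("")
}
{code}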

> Add ability to specify a KeyClass for KafkaLog4jAppender
> 
>
> Key: KAFKA-2041
> URL: https://issues.apache.org/jira/browse/KAFKA-2041
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Benoy Antony
>Assignee: Jun Rao
> Attachments: KAFKA-2041.patch, kafka-2041-001.patch, 
> kafka-2041-002.patch, kafka-2041-003.patch, kafka-2041-004.patch
>
>
> KafkaLog4jAppender is the Log4j Appender to publish messages to Kafka. 
> Since there is no key or explicit partition number, the messages are sent to 
> random partitions. 
> In some cases, it is possible to derive a key from the message itself. 
> So it may be beneficial to enable KafkaLog4jAppender to accept KeyClass which 
> will provide a key for a given message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632227#comment-14632227
 ] 

Ashish K Singh commented on KAFKA-2347:
---

[~becket_qin] I might have some cycles to work on this, mind if I take a crack 
at it?

> Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java 
> api.
> ---
>
> Key: KAFKA-2347
> URL: https://issues.apache.org/jira/browse/KAFKA-2347
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>
> The setConsumerRebalanceListener() method is in scala API but not in java 
> api. Needs to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632225#comment-14632225
 ] 

Ewen Cheslack-Postava commented on KAFKA-2260:
--

[~bkirwi] That's how I interpreted it initially as well, but with the right 
scheme it actually just works if you use the high watermark for the entire 
partition. I'll leave it to [~ymatsuda] to give the complete explanation of his 
very nice extension of this idea.

I'd suggest we should shift this discussion to the KIP mailing list thread. 
That has better visibility than a JIRA thread, so we'll probably get a more 
thorough and diverse discussion there. There are already some questions from 
others being addressed in that thread.

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> changed, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)
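
A minimal sketch of the check described in the proposal quoted above, assuming -1 
as the "no expectation" sigil (editor's sketch, not the attached patch):

{code}
// Editor's sketch: broker-side acceptance rule for a produce request carrying an expected offset.
// expectedOffset comes from the produced message; nextOffset is what the broker would assign.
def acceptProduce(expectedOffset: Long, nextOffset: Long): Boolean =
  expectedOffset == -1L ||       // -1 keeps the current behaviour: no expectation attached
  expectedOffset == nextOffset   // otherwise any mismatch fails the write (a "CAS failure")
{code}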



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632224#comment-14632224
 ] 

Ashish K Singh commented on KAFKA-2275:
---

[~onurkaraman] my bad, will update the patch.

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-17 Thread Edward Ribeiro


> On July 18, 2015, 12:10 a.m., Ashish Singh wrote:
> > core/src/main/scala/kafka/admin/TopicCommand.scala, lines 56-62
> > 
> >
> > Nit: What changed here? It's always a good idea to keep non-functional 
> > changes very minimal. If there are a lot of them, they can be addressed in 
> > a separate patch. Makes it easier to review functional changes.

You're right. Gonna fix this one and the unrelated changes below.


- Edward


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92145
---


On July 17, 2015, 7:32 p.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> ---
> 
> (Updated July 17, 2015, 7:32 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
> https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



Re: Review Request 36578: Patch for KAFKA-2338

2015-07-17 Thread Edward Ribeiro


> On July 18, 2015, 12:10 a.m., Ashish Singh wrote:
> > core/src/main/scala/kafka/admin/TopicCommand.scala, line 87
> > 
> >
> > I guess it is unnecessary to parse "0" as an int. You can have 
> > maxMessageSize initialized to 0 and update it if getProperty returns a non-null value.

Okay, but doing so, it would be something like:

```
var maxMessageSize = 0
if (configs.getProperty(LogConfig.MaxMessageBytesProp) != null)
  maxMessageSize = Integer.parseInt(configs.getProperty(LogConfig.MaxMessageBytesProp))
```

The second approach still has to parseInt() a String, it uses a ``var``, and takes 
more lines (three or four, if I create a second variable to hold the result of 
getProperty()). I don't see a particular advantage in doing this, to be honest, 
but I am fine with either way.


- Edward


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92145
---
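
For comparison, an Option-based variant avoids both the var and the sentinel parse. 
This is an editor's sketch, with the property name passed in rather than referencing 
LogConfig directly:

```
import java.util.Properties

// Editor's sketch: derive maxMessageSize without a var and without parsing a default "0".
// maxMessageBytesProp stands in for LogConfig.MaxMessageBytesProp from the review.
def maxMessageSize(configs: Properties, maxMessageBytesProp: String): Int =
  Option(configs.getProperty(maxMessageBytesProp)) // None when the property is unset
    .map(_.toInt)                                  // parse only when a value is present
    .getOrElse(0)                                  // fall back to 0 otherwise
```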


On July 17, 2015, 7:32 p.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> ---
> 
> (Updated July 17, 2015, 7:32 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
> https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



[jira] [Created] (KAFKA-2347) Add setConsumerRebalanceListener method to ZookeeperConsuemrConnector java api.

2015-07-17 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2347:
---

 Summary: Add setConsumerRebalanceListener method to 
ZookeeperConsuemrConnector java api.
 Key: KAFKA-2347
 URL: https://issues.apache.org/jira/browse/KAFKA-2347
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin


The setConsumerRebalanceListener() method is in the Scala API but not in the Java 
API. We need to add it back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632213#comment-14632213
 ] 

Onur Karaman commented on KAFKA-2275:
-

Just to make sure we're on the same page: 
https://reviews.apache.org/r/36590/diff/1/ does not address the original 
concern of the JIRA. It provides the PartitionInfo related only to the topics 
passed into partitionsFor.

I think the intent of listTopics(), and what I poorly tried to convey earlier, 
was that we need a way to get a global view of all topics and 
TopicPartitions in the cluster -- even if we don't know any of the topics in the 
cluster.

This is useful if we want to do explicit group management (partition-based 
subscriptions) but don't know upfront all the TopicPartitions we want to 
consume. The API would let us periodically check the global view of 
TopicPartitions so we can adjust the partitions we subscribe to.
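
For illustration, that usage pattern -- periodically fetching the global topic view 
and adjusting an explicit, partition-based subscription -- might look roughly like 
this. The metadata fetch is a stand-in for the proposed listTopics()/all-topics call, 
and the partition-based subscribe(TopicPartition...) overload is assumed from the new 
consumer of that era; an editor's sketch, not code from the patch:

{code}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Editor's sketch of explicit group management with a periodic global-topic check.
// fetchAllTopicMetadata stands in for the proposed API (topic name -> partition count).
def refreshAssignment(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                      fetchAllTopicMetadata: () => Map[String, Int],
                      current: Set[TopicPartition]): Set[TopicPartition] = {
  val desired: Set[TopicPartition] = for {
    (topic, partitionCount) <- fetchAllTopicMetadata().toSet
    partition <- 0 until partitionCount
  } yield new TopicPartition(topic, partition)

  val added = desired -- current
  if (added.nonEmpty)
    consumer.subscribe(added.toSeq: _*) // assumed partition-based subscribe overload of that era
  desired
}
{code}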

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Guozhang Wang
+1

On Fri, Jul 17, 2015 at 10:06 AM, Gwen Shapira 
wrote:

> +1 (binding)
>
> On Fri, Jul 17, 2015 at 3:26 AM, Ismael Juma  wrote:
> > Hi all,
> >
> > I would like to start a vote on dropping support for Scala 2.9 for the
> next
> > release. People seemed to be in favour of the idea in previous
> discussions:
> >
> > * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
> > * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72
> >
> > Summary of why we should drop Scala 2.9:
> >
> > * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are
> > not binary compatible).
> > * Code has been committed to trunk that doesn't build with Scala 2.9
> weeks
> > ago and no-one seems to have noticed or cared (well, I filed
> > https://issues.apache.org/jira/browse/KAFKA-2325). Can we really
> support a
> > version if we don't test it?
> > * New clients library is written in Java and won't be affected. It also
> has
> > received a lot of work and it's much improved since the last release.
> > * It was released 4 years ago, it has been unsupported for a long time
> and
> > most projects have dropped support for it (for example, we use a
> different
> > version of ScalaTest for Scala 2.9)
> > * Scala 2.10 introduced Futures and a few useful features like String
> > interpolation and value classes.
> > * Doesn't work with Java 8 (
> https://issues.apache.org/jira/browse/KAFKA-2203
> > ).
> >
> > The reason not to drop it is to maintain compatibility for people stuck
> in
> > 2.9 who also want to upgrade both client and broker to the next Kafka
> > release.
> >
> > The vote will run for 72 hours.
> >
> > +1 (non-binding) from me.
> >
> > Best,
> > Ismael
>



-- 
-- Guozhang


[jira] [Commented] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632151#comment-14632151
 ] 

Ashish K Singh commented on KAFKA-2301:
---

[~nehanarkhede] pinging for review.

> Deprecate ConsumerOffsetChecker
> ---
>
> Key: KAFKA-2301
> URL: https://issues.apache.org/jira/browse/KAFKA-2301
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch, 
> KAFKA-2301_2015-07-02_09:04:35.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2299) kafka-patch-review tool does not correctly capture testing done

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632154#comment-14632154
 ] 

Ashish K Singh commented on KAFKA-2299:
---

[~nehanarkhede] pinging for review.

> kafka-patch-review tool does not correctly capture testing done
> ---
>
> Key: KAFKA-2299
> URL: https://issues.apache.org/jira/browse/KAFKA-2299
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Attachments: KAFKA-2299.patch
>
>
> kafka-patch-review tool does not correctly capture testing done when 
> specified with -t or --testing-done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-313) Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632153#comment-14632153
 ] 

Ashish K Singh commented on KAFKA-313:
--

[~nehanarkhede] pinging for review.

> Add JSON/CSV output and looping options to ConsumerGroupCommand
> ---
>
> Key: KAFKA-313
> URL: https://issues.apache.org/jira/browse/KAFKA-313
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dave DeMaagd
>Assignee: Ashish K Singh
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-313-2012032200.diff, KAFKA-313.1.patch, 
> KAFKA-313.patch, KAFKA-313_2015-02-23_18:11:32.patch, 
> KAFKA-313_2015-06-24_11:14:24.patch
>
>
> Adds:
> * '--loop N' - causes the program to loop forever, sleeping for up to N 
> seconds between loops (loop time minus collection time, unless that's less 
> than 0, at which point it will just run again immediately)
> * '--asjson' - display as a JSON string instead of the more human readable 
> output format.
> Neither of the above  depend on each other (you can loop in the human 
> readable output, or do a single shot execution with JSON output).  Existing 
> behavior/output maintained if neither of the above are used.  Diff Attached.
> Impacted files:
> core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92145
---



core/src/main/scala/kafka/admin/TopicCommand.scala (lines 56 - 62)


Nit: What changed here? It's always a good idea to keep non-functional 
changes very minimal. If there are a lot of them, they can be addressed in a 
separate patch. Makes it easier to review functional changes.



core/src/main/scala/kafka/admin/TopicCommand.scala (line 87)


I guess it is unnecessary to parse "0" as an int. You can have maxMessageSize 
initialized to 0 and update it if getProperty returns a non-null value.



core/src/main/scala/kafka/admin/TopicCommand.scala (line 119)


Non-func change?



core/src/main/scala/kafka/admin/TopicCommand.scala (line 129)


Non-func change?



core/src/main/scala/kafka/admin/TopicCommand.scala (line 145)


Non-func change?


- Ashish Singh


On July 17, 2015, 7:32 p.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> ---
> 
> (Updated July 17, 2015, 7:32 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
> https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> f84306143c43049e3aa44e42beaffe7eb2783163 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632137#comment-14632137
 ] 

Ashish K Singh commented on KAFKA-2275:
---

Created reviewboard https://reviews.apache.org/r/36590/
 against branch trunk

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parse the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36590: Patch for KAFKA-2275

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36590/
---

Review request for kafka.


Bugs: KAFKA-2275
https://issues.apache.org/jira/browse/KAFKA-2275


Repository: kafka


Description
---

KAFKA-2275: Add a "Map<String, List<PartitionInfo>> partitionsFor(String... 
topics)" API to the new consumer


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
252b759c0801f392e3526b0f31503b4b8fbf1c8a 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
bea3d737c51be77d5b5293cdd944d33b905422ba 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
c14eed1e95f2e682a235159a366046f00d1d90d6 
  core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
3eb5f95731a3f06f662b334ab2b3d0ad7fa9e1ca 

Diff: https://reviews.apache.org/r/36590/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Updated] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2275:
--
Status: Patch Available  (was: Open)

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parsing the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2275:
--
Attachment: KAFKA-2275.patch

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done at the Kafka side, while there 
> are some use cases where consumers want regex subscriptions but not 
> Kafka-side partition assignment, rather with their own specific partition 
> assignment. With ListTopics() they can periodically check for topic list 
> changes and specifically subscribe to the partitions of the new topics.
> For implementation, it involves sending a TopicMetadataRequest to a random 
> broker and parsing the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-17 Thread Ben Kirwin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632120#comment-14632120
 ] 

Ben Kirwin commented on KAFKA-2260:
---

Interesting -- the idea is that we track the max offset per *hash* of the key, 
instead of the key itself? I guess that if you use an array of length 1, this 
reduces to the current proposal. :) It would be interesting to calculate how 
frequently different keys would conflict, given a good hash function.

It seems like, for this to work, you'd need to add an additional method to the 
API to get the current offset for the hash of a given key?
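
To make that concrete, here is the kind of bookkeeping I think this implies -- a
rough sketch only, with hypothetical names, nothing taken from the Kafka code.
One "latest offset" slot per hash bucket of the key; with a single bucket it
degenerates to the current per-partition proposal:

{code}
// Hypothetical sketch: per-hash-bucket "latest offset" tracking.
class HashedOffsetIndex(numBuckets: Int) {
  private val latest = Array.fill(numBuckets)(-1L)

  private def bucket(key: Array[Byte]): Int =
    (java.util.Arrays.hashCode(key) & Int.MaxValue) % numBuckets

  // The extra API mentioned above: the last offset recorded for this key's hash.
  def currentOffset(key: Array[Byte]): Long = latest(bucket(key))

  // Accept the write only if the expected offset matches the bucket's latest.
  def tryAppend(key: Array[Byte], expectedOffset: Long, assignedOffset: Long): Boolean = {
    val b = bucket(key)
    if (expectedOffset == latest(b)) { latest(b) = assignedOffset; true }
    else false
  }
}

object HashedOffsetIndexDemo {
  def main(args: Array[String]): Unit = {
    val idx = new HashedOffsetIndex(numBuckets = 8)
    val key = "user-42".getBytes("UTF-8")
    println(idx.tryAppend(key, expectedOffset = -1L, assignedOffset = 10L)) // true
    println(idx.tryAppend(key, expectedOffset = -1L, assignedOffset = 11L)) // false: stale expectation
    println(idx.tryAppend(key, expectedOffset = 10L, assignedOffset = 11L)) // true
  }
}
{code}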

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-17 Thread Ben Kirwin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632113#comment-14632113
 ] 

Ben Kirwin commented on KAFKA-2260:
---

Ah, sorry! Let me try again.

Suppose you try and send a batch of messages to a partition, but get some 
network error -- it's possible that they were published successfully, or that 
they were lost before they made it to the broker. With the CAS, the simple 
thing to do is to resend the same batch with the same expected offsets. If the 
messages were published correctly last time, you'll get a check mismatch error; 
and if they weren't, they'll be appended correctly.

If the series of messages that the producer wants to send is fixed, the same 
mechanism would work even through producer restarts. If the set of messages 
isn't fixed -- the producer might have a completely different set of messages 
to send after restarting -- then what it means to be exactly-once becomes a lot 
more domain-dependent; you might want to write exactly one group of messages 
for each input message, or rpc request, or five-minute interval -- but that 
requires coordination between a bunch of different moving parts, and I don't 
think there's one coordination mechanism that handles all cases. (This 
'expected offset' thing is enough for some, but certainly not all of them...)
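
For concreteness, a sketch of the retry loop I have in mind -- hypothetical
types, not the real producer API. After a network error the producer resends the
same batch with the same expected offset, so at most one attempt can land:

{code}
// Hypothetical sketch of resending a batch with an unchanged expected offset.
object ExpectedOffsetRetry {
  sealed trait ProduceResult
  case object Ok extends ProduceResult            // this attempt appended the batch
  case object CheckMismatch extends ProduceResult // offsets moved on: an earlier attempt landed
  case object NetworkError extends ProduceResult  // unknown outcome: safe to retry as-is

  def sendWithRetry(send: (Seq[String], Long) => ProduceResult,
                    batch: Seq[String],
                    expectedOffset: Long): Unit = {
    var done = false
    while (!done) {
      send(batch, expectedOffset) match {
        case Ok | CheckMismatch => done = true // either way, the batch is in the log exactly once
        case NetworkError       => ()          // retry with the SAME expected offset
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // Fake broker: the first attempt appends the batch but the response is "lost".
    var appended = false
    var firstCall = true
    val fakeSend: (Seq[String], Long) => ProduceResult = (_, _) => {
      if (firstCall) { firstCall = false; appended = true; NetworkError }
      else if (appended) CheckMismatch
      else Ok
    }
    sendWithRetry(fakeSend, Seq("m1", "m2"), expectedOffset = 100L)
    println(s"batch appended exactly once: $appended")
  }
}
{code}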

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: 

Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Joel Koshy
+1

On Fri, Jul 17, 2015 at 3:26 AM, Ismael Juma  wrote:
> Hi all,
>
> I would like to start a vote on dropping support for Scala 2.9 for the next
> release. People seemed to be in favour of the idea in previous discussions:
>
> * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
> * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72
>
> Summary of why we should drop Scala 2.9:
>
> * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are
> not binary compatible).
> * Code has been committed to trunk that doesn't build with Scala 2.9 weeks
> ago and no-one seems to have noticed or cared (well, I filed
> https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a
> version if we don't test it?
> * New clients library is written in Java and won't be affected. It also has
> received a lot of work and it's much improved since the last release.
> * It was released 4 years ago, it has been unsupported for a long time and
> most projects have dropped support for it (for example, we use a different
> version of ScalaTest for Scala 2.9)
> * Scala 2.10 introduced Futures and a few useful features like String
> interpolation and value classes.
> * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203
> ).
>
> The reason not to drop it is to maintain compatibility for people stuck in
> 2.9 who also want to upgrade both client and broker to the next Kafka
> release.
>
> The vote will run for 72 hours.
>
> +1 (non-binding) from me.
>
> Best,
> Ismael


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-17 Thread Ben Kirwin
Hi all,

So, perhaps it's worth adding a couple specific examples of where this
feature is useful, to make this a bit more concrete:

- Suppose I'm using Kafka as a commit log for a partitioned KV store,
like Samza or Pistachio (?) do. We bootstrap the process state by
reading from that partition, and log all state updates to that
partition when we're running. Now imagine that one of my processes
locks up -- GC or similar -- and the system transitions that partition
over to another node. When the GC is finished, the old 'owner' of that
partition might still be trying to write to the commit log at the same time
as the new one is. A process might detect this by noticing that the
offset of the published message is bigger than it thought the upcoming
offset was, which implies someone else has been writing to the log...
but by then it's too late, and the commit log is already corrupt. With
a 'conditional produce', one of those processes will have its publish
request refused -- so we've avoided corrupting the state.

- Envision some copycat-like system, where we have some sharded
postgres setup and we're tailing each shard into its own partition.
Normally, it's fairly easy to avoid duplicates here: we can track
which offset in the WAL corresponds to which offset in Kafka, and we
know how many messages we've written to Kafka already, so the state is
very simple. However, it is possible that for a moment -- due to
rebalancing or operator error or some other thing -- two different
nodes are tailing the same postgres shard at once! Normally this would
introduce duplicate messages, but by specifying the expected offset,
we can avoid this.

So perhaps it's better to say that this is useful when a single
producer is *expected*, but multiple producers are *possible*? (In the
same way that the high-level consumer normally has 1 consumer in a
group reading from a partition, but there are small windows where more
than one might be reading at the same time.) This is also the spirit
of the 'runtime cost' comment -- in the common case, where there is
little to no contention, there's no performance overhead either. I
mentioned this a little in the Motivation section -- maybe I should
flesh that out a little bit?

For me, the motivation to work this up was that I kept running into
cases, like the above, where the existing API was almost-but-not-quite
enough to give the guarantees I was looking for -- and the extension
needed to handle those cases too was pretty small and natural-feeling.
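
To pin down the semantics, here is a toy model of the broker-side check -- not
the actual ByteBufferMessageSet code, just an illustration of the behaviour the
KIP proposes: offsets are assigned exactly as today, but if a message carries an
expected offset other than -1 and it doesn't match the offset about to be
assigned, the whole produce request fails:

{code}
// Toy model only; the real check would live in offset assignment on the broker.
object ConditionalAppendModel {
  final case class Message(payload: String, expectedOffset: Long = -1L)

  // Right(assigned offsets) on success, Left(error) on a check mismatch.
  def tryAppend(nextOffset: Long, batch: Seq[Message]): Either[String, Seq[Long]] = {
    val assigned = Seq.newBuilder[Long]
    var offset = nextOffset
    var i = 0
    while (i < batch.length) {
      val m = batch(i)
      if (m.expectedOffset != -1L && m.expectedOffset != offset)
        return Left(s"check mismatch: expected ${m.expectedOffset}, next is $offset")
      assigned += offset
      offset += 1
      i += 1
    }
    Right(assigned.result())
  }

  def main(args: Array[String]): Unit = {
    println(tryAppend(42L, Seq(Message("a", 42L), Message("b")))) // Right(List(42, 43))
    println(tryAppend(42L, Seq(Message("stale", 41L))))           // Left(check mismatch ...)
  }
}
{code}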

On Fri, Jul 17, 2015 at 4:49 PM, Ashish Singh  wrote:
> Good concept. I have a question though.
>
> Say there are two producers A and B. Both producers are producing to the same
> partition.
> - A sends a message with expected offset, x1
> - Broker accepts it and sends an Ack
> - B sends a message with expected offset, x1
> - Broker rejects it, sends a nack
> - B sends the message again with expected offset, x1+1
> - Broker accepts it and sends an Ack
> I guess this is what this KIP suggests, right? If yes, then how does this
> ensure that the same message will not be written twice when two producers are
> producing to the same partition? A producer, on receiving a nack, will try again
> with the next offset and will keep doing so till the message is accepted. Am I
> missing something?
>
> Also, you have mentioned on the KIP, "it imposes little to no runtime cost in
> memory or time"; I think that is not true for time. With this approach
> producers' performance will degrade in proportion to the number of producers
> writing to the same partition. Please correct me if I am missing something.
>
>
> On Fri, Jul 17, 2015 at 11:32 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
>
>> If we have 2 producers producing to a partition, they can be out of order,
>> then how does one producer know what offset to expect as it does not
>> interact with other producer?
>>
>> Can you give an example flow that explains how it works with single
>> producer and with multiple producers?
>>
>>
>> Thanks,
>>
>> Mayuresh
>>
>> On Fri, Jul 17, 2015 at 10:28 AM, Flavio Junqueira <
>> fpjunque...@yahoo.com.invalid> wrote:
>>
>> > I like this feature, it reminds me of conditional updates in zookeeper.
>> > I'm not sure if it'd be best to have some mechanism for fencing rather
>> than
>> > a conditional write like you're proposing. The reason I'm saying this is
>> > that the conditional write applies to requests individually, while it
>> > sounds like you want to make sure that there is a single client writing
>> so
>> > over multiple requests.
>> >
>> > -Flavio
>> >
>> > > On 17 Jul 2015, at 07:30, Ben Kirwin  wrote:
>> > >
>> > > Hi there,
>> > >
>> > > I just added a KIP for a 'conditional publish' operation: a simple
>> > > CAS-like mechanism for the Kafka producer. The wiki page is here:
>> > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
>> > >
>> > > And there's some previous discussion on the ticket and the users list:
>> > >
>> > > https://is

Re: [VOTE] KIP-26 Add Copycat connector framework for data import/export

2015-07-17 Thread Joel Koshy
+1

Thanks,

Joel

On Tue, Jul 14, 2015 at 2:09 PM, Ewen Cheslack-Postava
 wrote:
> Hi all,
>
> Let's start a vote on KIP-26: Add Copycat connector framework for data
> import/export
>
> For reference, here's the wiki:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767
> And the mailing list thread (split across two months):
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201506.mbox/%3CCAE1jLMOEJjnorFK5CtR3g-n%3Dm_AkrFsYeccsB4QimTRfGBrAGQ%40mail.gmail.com%3E
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201507.mbox/%3CCAHwHRrUeNh%2BnCHwCTUCrcipHM3Po0ECUysO%2B%3DX3nwUeOGrcgdw%40mail.gmail.com%3E
>
> Just to clarify since this is a bit different from the KIPs voted on so
> far, the KIP just covers including Copycat in Kafka (rather than having it
> as a separate project). While the KIP aimed to be clear about the exact
> scope, the details require further discussion. The aim is to include some
> connectors as well, at a minimum for demonstration purposes, but the
> expectation is that connector development will, by necessity, be federated.
>
> I'll kick it off with a +1 (non-binding).
>
> --
> Thanks,
> Ewen


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-17 Thread Ashish Singh
Good concept. I have a question though.

Say there are two producers A and B. Both producers are producing to the same
partition.
- A sends a message with expected offset, x1
- Broker accepts it and sends an Ack
- B sends a message with expected offset, x1
- Broker rejects it, sends a nack
- B sends the message again with expected offset, x1+1
- Broker accepts it and sends an Ack
I guess this is what this KIP suggests, right? If yes, then how does this
ensure that the same message will not be written twice when two producers are
producing to the same partition? A producer, on receiving a nack, will try again
with the next offset and will keep doing so till the message is accepted. Am I
missing something?

Also, you have mentioned on the KIP, "it imposes little to no runtime cost in
memory or time"; I think that is not true for time. With this approach
producers' performance will degrade in proportion to the number of producers
writing to the same partition. Please correct me if I am missing something.


On Fri, Jul 17, 2015 at 11:32 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> If we have 2 producers producing to a partition, they can be out of order,
> then how does one producer know what offset to expect as it does not
> interact with other producer?
>
> Can you give an example flow that explains how it works with single
> producer and with multiple producers?
>
>
> Thanks,
>
> Mayuresh
>
> On Fri, Jul 17, 2015 at 10:28 AM, Flavio Junqueira <
> fpjunque...@yahoo.com.invalid> wrote:
>
> > I like this feature, it reminds me of conditional updates in zookeeper.
> > I'm not sure if it'd be best to have some mechanism for fencing rather
> than
> > a conditional write like you're proposing. The reason I'm saying this is
> > that the conditional write applies to requests individually, while it
> > sounds like you want to make sure that there is a single client writing
> so
> > over multiple requests.
> >
> > -Flavio
> >
> > > On 17 Jul 2015, at 07:30, Ben Kirwin  wrote:
> > >
> > > Hi there,
> > >
> > > I just added a KIP for a 'conditional publish' operation: a simple
> > > CAS-like mechanism for the Kafka producer. The wiki page is here:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
> > >
> > > And there's some previous discussion on the ticket and the users list:
> > >
> > > https://issues.apache.org/jira/browse/KAFKA-2260
> > >
> > >
> >
> https://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAAeOB6ccyAA13YNPqVQv2o-mT5r=c9v7a+55sf2wp93qg7+...@mail.gmail.com%3E
> > >
> > > As always, comments and suggestions are very welcome.
> > >
> > > Thanks,
> > > Ben
> >
> >
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 

Regards,
Ashish


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631868#comment-14631868
 ] 

Ismael Juma commented on KAFKA-1595:


That's fine Gwen, it's not blocking anything. I asked whether you were 
interested because you participated in the ticket, but no pressure, of course. 
:) Thanks!

> Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
> -
>
> Key: KAFKA-1595
> URL: https://issues.apache.org/jira/browse/KAFKA-1595
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Jagbir
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.8.3
>
>
> The following issue is created as a follow up suggested by Jun Rao
> in a kafka news group message with the Subject
> "Blocking Recursive parsing from 
> kafka.consumer.TopicCount$.constructTopicCount"
> SUMMARY:
> An issue was detected in a typical cluster of 3 kafka instances backed
> by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
> java version 1.7.0_65). On consumer end, when consumers get recycled,
> there is a troubling JSON parsing recursion which takes a busy lock and
> blocks consumers thread pool.
> In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
> a global lock (0xd3a7e1d0) during the rebalance, and fires an
> expensive JSON parsing, while keeping the other consumers from shutting
> down, see, e.g,
> at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
> The deep recursive JSON parsing should be deprecated in favor
> of a better JSON parser, see, e.g,
> http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
> DETAILS:
> The first dump is for a recursive blocking thread holding the lock for 
> 0xd3a7e1d0
> and the subsequent dump is for a waiting thread.
> (Please grep for 0xd3a7e1d0 to see the locked object.)
>
> -8<-
> "Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor"
> prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
> java.lang.Thread.State: RUNNABLE
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.sc

[jira] [Commented] (KAFKA-824) java.lang.NullPointerException in commitOffsets

2015-07-17 Thread Nick Zalabak (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631822#comment-14631822
 ] 

Nick Zalabak commented on KAFKA-824:


Can someone confirm that upgrading to zkclient-0.5 did fix the NPE?

> java.lang.NullPointerException in commitOffsets 
> 
>
> Key: KAFKA-824
> URL: https://issues.apache.org/jira/browse/KAFKA-824
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.7.2, 0.8.2.0
>Reporter: Yonghui Zhao
>  Labels: newbie
> Attachments: ZkClient.0.3.txt, ZkClient.0.4.txt, screenshot-1.jpg
>
>
> Neha Narkhede
> "Yes, I have. Unfortunately, I never quite around to fixing it. My guess is
> that it is caused due to a race condition between the rebalance thread and
> the offset commit thread when a rebalance is triggered or the client is
> being shutdown. Do you mind filing a bug ?"
> 2013/03/25 12:08:32.020 WARN [ZookeeperConsumerConnector] [] 
> 0_lu-ml-test10.bj-1364184411339-7c88f710 exception during commitOffsets
> java.lang.NullPointerException
> at org.I0Itec.zkclient.ZkConnection.writeData(ZkConnection.java:111)
> at org.I0Itec.zkclient.ZkClient$10.call(ZkClient.java:813)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
> at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:809)
> at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:777)
> at kafka.utils.ZkUtils$.updatePersistentPath(ZkUtils.scala:103)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2$$anonfun$apply$4.apply(ZookeeperConsumerConnector.scala:251)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2$$anonfun$apply$4.apply(ZookeeperConsumerConnector.scala:248)
> at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:549)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> at 
> scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:570)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2.apply(ZookeeperConsumerConnector.scala:248)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2.apply(ZookeeperConsumerConnector.scala:246)
> at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> at kafka.utils.Pool$$anon$1.foreach(Pool.scala:53)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> at kafka.utils.Pool.foreach(Pool.scala:24)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:246)
> at 
> kafka.consumer.ZookeeperConsumerConnector.autoCommit(ZookeeperConsumerConnector.scala:232)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:126)
> at kafka.utils.Utils$$anon$2.run(Utils.scala:58)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631785#comment-14631785
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Hi [~ewencp], as this is my first official Kafka patch (yay!), it may have some 
misunderstandings or plain errors (I was particularly in doubt about the 
logging of a useful message by the broker's replica fetcher). Please, whenever 
you have time, see if it's okay. Thanks!

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)
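
As an illustration of point 1, a rough sketch of the kind of warning intended --
hypothetical code, not the attached patch; the default below is the 0.8.x broker
default for message.max.bytes as I understand it:

{code}
// Hypothetical sketch: warn loudly when a topic's max.message.bytes exceeds the default.
object MaxMessageBytesWarning {
  val DefaultMaxMessageBytes = 1000012L // assumed 0.8.x broker default

  def maybeWarn(configs: java.util.Properties): Unit = {
    val v = configs.getProperty("max.message.bytes")
    if (v != null && v.trim.toLong > DefaultMaxMessageBytes) {
      println("*** WARNING ***")
      println(s"max.message.bytes=${v.trim} is larger than the broker default.")
      println("Make sure the brokers' replica fetch size and the consumers' fetch size")
      println("are at least as large, or replication and consumption may stall.")
    }
  }

  def main(args: Array[String]): Unit = {
    val props = new java.util.Properties()
    props.setProperty("max.message.bytes", "5000000")
    maybeWarn(props)
  }
}
{code}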



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338.patch

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631784#comment-14631784
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Created reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36578: Patch for KAFKA-2338

2015-07-17 Thread Edward Ribeiro

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/
---

Review request for kafka.


Bugs: KAFKA-2338
https://issues.apache.org/jira/browse/KAFKA-2338


Repository: kafka


Description
---

KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
update broker and consumer settings


Diffs
-

  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
f84306143c43049e3aa44e42beaffe7eb2783163 

Diff: https://reviews.apache.org/r/36578/diff/


Testing
---


Thanks,

Edward Ribeiro



Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Jakob Homan
+1 (binding)

On 17 July 2015 at 10:06, Gwen Shapira  wrote:
> +1 (binding)
>
> On Fri, Jul 17, 2015 at 3:26 AM, Ismael Juma  wrote:
>> Hi all,
>>
>> I would like to start a vote on dropping support for Scala 2.9 for the next
>> release. People seemed to be in favour of the idea in previous discussions:
>>
>> * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
>> * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72
>>
>> Summary of why we should drop Scala 2.9:
>>
>> * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are
>> not binary compatible).
>> * Code has been committed to trunk that doesn't build with Scala 2.9 weeks
>> ago and no-one seems to have noticed or cared (well, I filed
>> https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a
>> version if we don't test it?
>> * New clients library is written in Java and won't be affected. It also has
>> received a lot of work and it's much improved since the last release.
>> * It was released 4 years ago, it has been unsupported for a long time and
>> most projects have dropped support for it (for example, we use a different
>> version of ScalaTest for Scala 2.9)
>> * Scala 2.10 introduced Futures and a few useful features like String
>> interpolation and value classes.
>> * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203
>> ).
>>
>> The reason not to drop it is to maintain compatibility for people stuck in
>> 2.9 who also want to upgrade both client and broker to the next Kafka
>> release.
>>
>> The vote will run for 72 hours.
>>
>> +1 (non-binding) from me.
>>
>> Best,
>> Ismael


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631772#comment-14631772
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] Another patch ready!

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Official Kafka Gitter Room?

2015-07-17 Thread Ashish Singh
+1, will be helpful.

On Fri, Jul 17, 2015 at 11:28 AM, Grant Henke  wrote:

> With more of Kafka's development moving to Github is there any interest in
> creating a Gitter chat room?
>
> I think it could be useful to have a place to chat that is associated with
> the Kafka repo. Note that we do currently have an IRC channel, but from my
> experience it's a ghost town.
>
> I am also open to alternatives but mention Gitter because:
>
>- It's directly associated with the repository
>- Strong Github integration
>- Very accessible (No application install required)
>- Nearly everyone already has a Github account
>- ... I am sure there is more.
>
> --
> Grant Henke
> Solutions Consultant | Cloudera
> ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>



-- 

Regards,
Ashish


Merge improvements back into Kafka Metrics?

2015-07-17 Thread Felix GV
Hi,

We've been using the new Kafka Metrics within Voldemort for a little while
now, and we have made some improvements to the library that you might like
to copy back into Kafka proper. You can view the changes that went in after
we forked here:

https://github.com/tehuti-io/tehuti/commits/master

The most critical ones are probably these two:

   - A pretty simple yet nasty bug in Percentile that otherwise made
   Histograms pretty much useless:
   
https://github.com/tehuti-io/tehuti/commit/913dcc0dcc79e2ce87a4c3e52a1affe2aaae9948
   - A few improvements to SampledStat (unfortunately littered across
   several commits) were made to prevent spurious values from being measured
   out of a disproportionately small time window (either initially, or because
   all windows expired in the case of an infrequently used stat); a rough
   sketch of the idea follows after the link:
   
https://github.com/tehuti-io/tehuti/blob/master/src/main/java/io/tehuti/metrics/stats/SampledStat.java
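
That second fix, as I understand it, boils down to never dividing by an elapsed
time shorter than a configured minimum window, so a freshly created or long-idle
stat can't report a wildly inflated rate from a tiny sample. Purely illustrative
code, not the actual Tehuti or Kafka implementation:

{code}
// Hypothetical illustration of flooring the measurement window.
object MinWindowRate {
  def rate(totalObserved: Double, elapsedMs: Long, minWindowMs: Long): Double =
    totalObserved / math.max(elapsedMs, minWindowMs) * 1000.0 // per-second rate

  def main(args: Array[String]): Unit = {
    // 5 events observed 10 ms after the stat was created:
    println(rate(5.0, 10, 30000))    // ~0.17/s with a 30 s floor, not 500/s
    println(rate(5.0, 60000, 30000)) // ~0.083/s once a full minute has elapsed
  }
}
{code}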

There were other minor changes here and there, to make the APIs more usable
(IMHO) though that may be a matter of personal taste more than correctness.

If you're interested in the above changes, I could put together a patch and
file a JIRA. Or someone else can do it if they prefer.

On an unrelated note, if you do merge the changes back into Kafka, it would
be nice if you considered releasing kafka-metrics as a standalone artifact.
Voldemort could depend on kafka-metrics rather than tehuti if it was fixed
properly, but it would be a stretch for Voldemort to depend on all of Kafka
(or even Kafka clients...). The fork was just to iterate quicker at the
time we needed this, but it would be nice to bring it back together.

Let me know if I can help in any way.

--
*Felix GV*
Senior Software Engineer
Data Infrastructure
LinkedIn

f...@linkedin.com
linkedin.com/in/felixgv


Official Kafka Gitter Room?

2015-07-17 Thread Grant Henke
With more of Kafka's development moving to Github is there any interest in
creating a Gitter chat room?

I think it could be useful to have a place to chat that is associated with
the Kafka repo. Note that we do currently have an IRC channel, but from my
experience its a ghost town.

I am also open to alternatives but mention Gitter because:

   - It's directly associated with the repository
   - Strong Github integration
   - Very accessible (No application install required)
   - Nearly everyone already has a Github account
   - ... I am sure there is more.

-- 
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-17 Thread Mayuresh Gharat
If we have 2 producers producing to a partition, they can be out of order,
then how does one producer know what offset to expect as it does not
interact with other producer?

Can you give an example flow that explains how it works with single
producer and with multiple producers?


Thanks,

Mayuresh

On Fri, Jul 17, 2015 at 10:28 AM, Flavio Junqueira <
fpjunque...@yahoo.com.invalid> wrote:

> I like this feature, it reminds me of conditional updates in zookeeper.
> I'm not sure if it'd be best to have some mechanism for fencing rather than
> a conditional write like you're proposing. The reason I'm saying this is
> that the conditional write applies to requests individually, while it
> sounds like you want to make sure that there is a single client writing so
> over multiple requests.
>
> -Flavio
>
> > On 17 Jul 2015, at 07:30, Ben Kirwin  wrote:
> >
> > Hi there,
> >
> > I just added a KIP for a 'conditional publish' operation: a simple
> > CAS-like mechanism for the Kafka producer. The wiki page is here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
> >
> > And there's some previous discussion on the ticket and the users list:
> >
> > https://issues.apache.org/jira/browse/KAFKA-2260
> >
> >
> https://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAAeOB6ccyAA13YNPqVQv2o-mT5r=c9v7a+55sf2wp93qg7+...@mail.gmail.com%3E
> >
> > As always, comments and suggestions are very welcome.
> >
> > Thanks,
> > Ben
>
>


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Build failed in Jenkins: Kafka-trunk #549

2015-07-17 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2345;  Attempt to delete a topic already marked for deletion 
throws ZkNodeExistsException; patched by Ashish Singh; reviewed by Sriharsha 
Chintalapani and Ismael Juma

[cshapi] Adding a file missed while committing KAFKA-2345

--
Started by an SCM change
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 84636272422b6379d57d4c5ef68b156edc1c67f8 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 84636272422b6379d57d4c5ef68b156edc1c67f8
 > git rev-list 15cba9f00dc606dd49e428f4ac8ccae0c0b8b37d # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson1285061421548932889.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 21.035 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson6954480513441923318.sh
+ ./gradlew -PscalaVersion=2.10.1 test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:rat
Rat report: build/rat/rat-report.html
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:checkstyleMain UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:checkstyleTest UP-TO-DATE
:clients:test UP-TO-DATE
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:compileTestJava UP-TO-DATE
:contrib:processTestResources UP-TO-DATE
:contrib:testClasses UP-TO-DATE
:contrib:test UP-TO-DATE
:clients:jar UP-TO-DATE
:log4j-appender:compileJava UP-TO-DATE
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes UP-TO-DATE
:log4j-appender:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:230:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

Re: Review Request 34554: Patch for KAFKA-2205

2015-07-17 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 17, 2015, 6:20 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

KAFKA-2205: Summary of changes
1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
handle multiple types of entities.
2. Changed format of the notification znode as described in KIP-21
3. Replaced TopicConfigManager with DynamicConfigManager.
4. Added new testcases. Existing testcases all pass
5. Added ConfigCommand to handle all config changes. Eventually this will make 
calls to the broker once the new APIs are built; for now it speaks to ZK 
directly.
6. Addressed all of Jun's comments.


Diffs
-

  bin/kafka-configs.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
2b4e028f8a60fcae40c42cfabcc357e70e7ff9a6 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/cluster/Partition.scala 
2649090b6cbf8d442649f19fd7113a30d62bca91 
  core/src/main/scala/kafka/controller/KafkaController.scala 
b4fc755641b9bbe8a6bf9c221a9ffaec0b94d6e8 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
bb6b5c8764522e7947bb08998256ce1deb717c84 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
c89d00b5976ffa34cafdae261239934b1b917bfe 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
166814c2959a429e20f400d1c0e523090ce37d91 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
dcd69881445c29765f66a7d21d2d18437f4df428 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
8a871cfaf6a534acd1def06a5ac95b5c985b024c 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new testcases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631699#comment-14631699
 ] 

Aditya A Auradkar commented on KAFKA-2205:
--

Updated reviewboard https://reviews.apache.org/r/34554/diff/
 against branch origin/trunk

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631700#comment-14631700
 ] 

Gwen Shapira commented on KAFKA-1595:
-

[~ijuma] - I'll be happy to review, but can't commit to when I'll get around to 
it - so I hope it's not blocking anything you are working on.

> Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
> -
>
> Key: KAFKA-1595
> URL: https://issues.apache.org/jira/browse/KAFKA-1595
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Jagbir
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.8.3
>
>
> The following issue is created as a follow up suggested by Jun Rao
> in a kafka news group message with the Subject
> "Blocking Recursive parsing from 
> kafka.consumer.TopicCount$.constructTopicCount"
> SUMMARY:
> An issue was detected in a typical cluster of 3 kafka instances backed
> by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
> java version 1.7.0_65). On consumer end, when consumers get recycled,
> there is a troubling JSON parsing recursion which takes a busy lock and
> blocks consumers thread pool.
> In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
> a global lock (0xd3a7e1d0) during the rebalance, and fires an
> expensive JSON parsing, while keeping the other consumers from shutting
> down, see, e.g,
> at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
> The deep recursive JSON parsing should be deprecated in favor
> of a better JSON parser, see, e.g,
> http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
> DETAILS:
> The first dump is for a recursive blocking thread holding the lock for 
> 0xd3a7e1d0
> and the subsequent dump is for a waiting thread.
> (Please grep for 0xd3a7e1d0 to see the locked object.)
>
> -8<-
> "Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor"
> prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
> java.lang.Thread.State: RUNNABLE
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> s
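
For context, the fix direction proposed above is to replace the recursive
scala.util.parsing combinator parser with an iterative JSON library. A minimal
sketch of what that could look like for the consumer registration JSON, assuming
jackson-databind on the classpath and using hypothetical helper names (this is
not the actual patch):

    import com.fasterxml.jackson.databind.ObjectMapper
    import scala.collection.JavaConverters._

    object TopicCountJson {
      private val mapper = new ObjectMapper()

      // Parses a consumer registration value such as
      //   {"version":1,"subscription":{"topicA":2,"topicB":1}}
      // into a topic -> stream-count map, without the deep combinator recursion
      // of scala.util.parsing.json.
      def parseSubscription(json: String): Map[String, Int] = {
        val subscription = mapper.readTree(json).get("subscription")
        subscription.fields.asScala
          .map(entry => entry.getKey -> entry.getValue.asInt)
          .toMap
      }
    }

The parse still happens under the rebalance lock, but it is cheap and does not
recurse through parser combinators, so it no longer starves the shutdown path
shown in the thread dumps above.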

[jira] [Updated] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2205:
-
Attachment: KAFKA-2205_2015-07-17_11:18:31.patch

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch, 
> KAFKA-2205_2015-07-17_11:18:31.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34554: Patch for KAFKA-2205

2015-07-17 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 17, 2015, 6:18 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

Some fixes


KAFKA-2205


KAFKA-2205


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Minor changes


Diffs (updated)
-

  bin/kafka-configs.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
2b4e028f8a60fcae40c42cfabcc357e70e7ff9a6 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/cluster/Partition.scala 
2649090b6cbf8d442649f19fd7113a30d62bca91 
  core/src/main/scala/kafka/controller/KafkaController.scala 
b4fc755641b9bbe8a6bf9c221a9ffaec0b94d6e8 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
bb6b5c8764522e7947bb08998256ce1deb717c84 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
c89d00b5976ffa34cafdae261239934b1b917bfe 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
166814c2959a429e20f400d1c0e523090ce37d91 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
dcd69881445c29765f66a7d21d2d18437f4df428 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
8a871cfaf6a534acd1def06a5ac95b5c985b024c 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new testcases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631694#comment-14631694
 ] 

Aditya A Auradkar commented on KAFKA-2205:
--

Updated reviewboard https://reviews.apache.org/r/34554/diff/
 against branch origin/trunk

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34554: Patch for KAFKA-2205

2015-07-17 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 17, 2015, 6:14 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

KAFKA-2205: Summary of changes
1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
handle multiple types of entities.
2. Changed format of the notification znode as described in KIP-21
3. Replaced TopicConfigManager with DynamicConfigManager.
4. Added new testcases. Existing testcases all pass
5. Added ConfigCommand to handle all config changes. Eventually this will make 
calls to the broker once the new APIs are built; for now it speaks to ZK 
directly
6. Addressed all of Jun's comments.
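
As an illustrative aside, the generalization in (1) and (2) above boils down to
dispatching change notifications by entity type. A rough sketch with
hypothetical names (not the DynamicConfigManager in this patch):

    import java.util.Properties

    trait ConfigChangeHandler {
      def processConfigChanges(entityName: String, configs: Properties): Unit
    }

    class ConfigChangeDispatcher(handlers: Map[String, ConfigChangeHandler]) {
      // entityType is currently "topics" or "clients"; a new entity type can be
      // supported later just by registering another handler.
      def processNotification(entityType: String, entityName: String,
                              configs: Properties): Unit =
        handlers.get(entityType) match {
          case Some(handler) => handler.processConfigChanges(entityName, configs)
          case None =>
            throw new IllegalArgumentException(s"Unknown config entity type: $entityType")
        }
    }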


Diffs
-

  bin/kafka-configs.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/cluster/Partition.scala 
2649090b6cbf8d442649f19fd7113a30d62bca91 
  core/src/main/scala/kafka/controller/KafkaController.scala 
b4fc755641b9bbe8a6bf9c221a9ffaec0b94d6e8 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
bb6b5c8764522e7947bb08998256ce1deb717c84 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
c89d00b5976ffa34cafdae261239934b1b917bfe 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
166814c2959a429e20f400d1c0e523090ce37d91 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
dcd69881445c29765f66a7d21d2d18437f4df428 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
8a871cfaf6a534acd1def06a5ac95b5c985b024c 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new testcases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-17 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2205:
-
Attachment: KAFKA-2205_2015-07-17_11:14:26.patch

> Generalize TopicConfigManager to handle multiple entity configs
> ---
>
> Key: KAFKA-2205
> URL: https://issues.apache.org/jira/browse/KAFKA-2205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
> Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch, 
> KAFKA-2205_2015-07-07_19:12:15.patch, KAFKA-2205_2015-07-14_10:33:47.patch, 
> KAFKA-2205_2015-07-14_10:36:36.patch, KAFKA-2205_2015-07-17_11:14:26.patch
>
>
> Acceptance Criteria:
> - TopicConfigManager should be generalized to handle Topic and Client configs 
> (and any type of config in the future). As described in KIP-21
> - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34554: Patch for KAFKA-2205

2015-07-17 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 17, 2015, 6:14 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

Some fixes


KAFKA-2205


KAFKA-2205


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Addressing Jun's comments


Diffs (updated)
-

  bin/kafka-configs.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/cluster/Partition.scala 
2649090b6cbf8d442649f19fd7113a30d62bca91 
  core/src/main/scala/kafka/controller/KafkaController.scala 
b4fc755641b9bbe8a6bf9c221a9ffaec0b94d6e8 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
bb6b5c8764522e7947bb08998256ce1deb717c84 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
c89d00b5976ffa34cafdae261239934b1b917bfe 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
166814c2959a429e20f400d1c0e523090ce37d91 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
dcd69881445c29765f66a7d21d2d18437f4df428 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
8a871cfaf6a534acd1def06a5ac95b5c985b024c 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new testcases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster


Thanks,

Aditya Auradkar



Re: Review Request 34554: Patch for KAFKA-2205

2015-07-17 Thread Aditya Auradkar


> On July 17, 2015, 4:43 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/ConfigHandler.scala, lines 64-65
> > 
> >
> > Could we just use Pool?

Nice.. didn't know about that util.
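
For anyone else who had not seen it: the Pool in question is kafka.utils.Pool, a
thin wrapper around a ConcurrentHashMap. A rough usage sketch, with the
constructor and method names assumed from trunk at the time rather than quoted
from the patch:

    import java.util.Properties
    import kafka.utils.Pool

    object PoolSketch {
      // Keeps per-client overrides without hand-rolled synchronization.
      val overridesByClientId = new Pool[String, Properties]()

      def main(args: Array[String]): Unit = {
        overridesByClientId.put("clientA", new Properties())
        val clientAOverrides = overridesByClientId.get("clientA") // null if absent
        println(clientAOverrides)
      }
    }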


- Aditya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/#review92008
---


On July 14, 2015, 5:37 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34554/
> ---
> 
> (Updated July 14, 2015, 5:37 p.m.)
> 
> 
> Review request for kafka, Joel Koshy and Jun Rao.
> 
> 
> Bugs: KAFKA-2205
> https://issues.apache.org/jira/browse/KAFKA-2205
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2205: Summary of changes
> 1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
> handle multiple types of entities.
> 2. Changed format of the notification znode as described in KIP-21
> 3. Replaced TopicConfigManager with DynamicConfigManager.
> 4. Added new testcases. Existing testcases all pass
> 5. Added ConfigCommand to handle all config changes. Eventually this will 
> make calls to the broker once the new APIs are built; for now it speaks to ZK 
> directly
> 6. Addressed all of Jun's comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> 2649090b6cbf8d442649f19fd7113a30d62bca91 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> b4fc755641b9bbe8a6bf9c221a9ffaec0b94d6e8 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> bb6b5c8764522e7947bb08998256ce1deb717c84 
>   core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
> 64ecb499f24bc801d48f86e1612d927cc08e006d 
>   core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> c89d00b5976ffa34cafdae261239934b1b917bfe 
>   core/src/main/scala/kafka/server/TopicConfigManager.scala 
> 01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
>   core/src/main/scala/kafka/utils/ZkUtils.scala 
> 166814c2959a429e20f400d1c0e523090ce37d91 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
>   core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
> dcd69881445c29765f66a7d21d2d18437f4df428 
>   core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
> 8a871cfaf6a534acd1def06a5ac95b5c985b024c 
> 
> Diff: https://reviews.apache.org/r/34554/diff/
> 
> 
> Testing
> ---
> 
> 1. Added new testcases for new code.
> 2. Verified that both topic and client configs can be changed dynamically by 
> starting a local cluster
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Re: Build failed in Jenkins: KafkaPreCommit #156

2015-07-17 Thread Ismael Juma
It's a bit too easy for this to happen with the patch workflow. The GitHub one,
via the merge script, fixes this nicely. There are a couple of improvements
that need to be made, and then I will start a vote on switching to it.
Hopefully early next week.

Ismael
On 17 Jul 2015 18:55, "Gwen Shapira"  wrote:

> my bad, I missed a file while committing.
>
> I did a "trivial" commit with the missing file and I think the build
> looks ok now.
>
>
> On Fri, Jul 17, 2015 at 10:42 AM, Gwen Shapira  wrote:
> > Ick. It seemed to work locally. I'm checking what went wrong.
> >
> > Let me know if you want a revert.
> >
> > On Fri, Jul 17, 2015 at 10:39 AM, Apache Jenkins Server
> >  wrote:
> >> See 
> >>
> >> Changes:
> >>
> >> [cshapi] KAFKA-2345;  Attempt to delete a topic already marked for
> deletion throws ZkNodeExistsException; patched by Ashish Singh; reviewed by
> Sriharsha Chintalapani and Ismael Juma
> >>
> >> --
> >> Started by an SCM change
> >> Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in
> workspace 
> >>  > git rev-parse --is-inside-work-tree # timeout=10
> >> Fetching changes from the remote Git repository
> >>  > git config remote.origin.url
> https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
> >> Fetching upstream changes from
> https://git-wip-us.apache.org/repos/asf/kafka.git
> >>  > git --version # timeout=10
> >>  > git fetch --tags --progress
> https://git-wip-us.apache.org/repos/asf/kafka.git
> +refs/heads/*:refs/remotes/origin/*
> >>  > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
> >>  > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
> >> Checking out Revision a5b11886df8c7aad0548efd2c7c3dbc579232f03
> (refs/remotes/origin/trunk)
> >>  > git config core.sparsecheckout # timeout=10
> >>  > git checkout -f a5b11886df8c7aad0548efd2c7c3dbc579232f03
> >>  > git rev-list 15cba9f00dc606dd49e428f4ac8ccae0c0b8b37d # timeout=10
> >> Setting
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> >> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson3311779494119504084.sh
> >> +
> /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
> >> To honour the JVM settings for this build a new JVM will be forked.
> Please consider using the daemon:
> http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
> >> Building project 'core' with Scala version 2.10.5
> >> :downloadWrapper UP-TO-DATE
> >>
> >> BUILD SUCCESSFUL
> >>
> >> Total time: 20.182 secs
> >> Setting
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> >> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson5074359631024072349.sh
> >> + ./gradlew -PscalaVersion=2.10.1 test
> >> To honour the JVM settings for this build a new JVM will be forked.
> Please consider using the daemon:
> http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
> >> Building project 'core' with Scala version 2.10.1
> >> :compileJava UP-TO-DATE
> >> :processResources UP-TO-DATE
> >> :classes UP-TO-DATE
> >> :rat
> >> Rat report: build/rat/rat-report.html
> >> :compileTestJava UP-TO-DATE
> >> :processTestResources UP-TO-DATE
> >> :testClasses UP-TO-DATE
> >> :test UP-TO-DATE
> >> :clients:compileJava UP-TO-DATE
> >> :clients:processResources UP-TO-DATE
> >> :clients:classes UP-TO-DATE
> >> :clients:checkstyleMain UP-TO-DATE
> >> :clients:compileTestJava UP-TO-DATE
> >> :clients:processTestResources UP-TO-DATE
> >> :clients:testClasses UP-TO-DATE
> >> :clients:checkstyleTest UP-TO-DATE
> >> :clients:test UP-TO-DATE
> >> :contrib:compileJava UP-TO-DATE
> >> :contrib:processResources UP-TO-DATE
> >> :contrib:classes UP-TO-DATE
> >> :contrib:compileTestJava UP-TO-DATE
> >> :contrib:processTestResources UP-TO-DATE
> >> :contrib:testClasses UP-TO-DATE
> >> :contrib:test UP-TO-DATE
> >> :clients:jar UP-TO-DATE
> >> :log4j-appender:compileJava UP-TO-DATE
> >> :log4j-appender:processResources UP-TO-DATE
> >> :log4j-appender:classes UP-TO-DATE
> >> :log4j-appender:jar UP-TO-DATE
> >> :core:compileJava UP-TO-DATE
> >> :core:compileScala<
> https://builds.apache.org/job/KafkaPreCommit/ws/core/src/main/scala/kafka/admin/AdminUtils.scala>:169:
> not found: type TopicAlreadyMarkedForDeletionException
> >>   case e1: ZkNodeExistsException => throw new
> TopicAlreadyMarkedForDeletionException(
> >>   ^
> >> one error found
> >>  FAILED
> >>
> >> FAILURE: Build failed with an exception.
> >>
> >> * What went wrong:
> >> Execution failed for task ':core:compileScala'.
> >>> Compilation failed
> >>
> >> * Try:
> >> Run with --stacktrace option to get the stack trace. Run with --info or
> --debug option to get more log output.
> >>
> >> BUILD FAILED
> >>
> >> Total time: 45.069 secs
> >> Build step 'Execute s

Build failed in Jenkins: KafkaPreCommit #157

2015-07-17 Thread Apache Jenkins Server
See 

Changes:

[cshapi] Adding a file missed while committing KAFKA-2345

--
[...truncated 797 lines...]
kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.jav

Re: Build failed in Jenkins: KafkaPreCommit #156

2015-07-17 Thread Gwen Shapira
my bad, I missed a file while committing.

I did a "trivial" commit with the missing file and I think the build
looks ok now.


On Fri, Jul 17, 2015 at 10:42 AM, Gwen Shapira  wrote:
> Ick. It seemed to work locally. I'm checking what went wrong.
>
> Let me know if you want a revert.
>
> On Fri, Jul 17, 2015 at 10:39 AM, Apache Jenkins Server
>  wrote:
>> See 
>>
>> Changes:
>>
>> [cshapi] KAFKA-2345;  Attempt to delete a topic already marked for deletion 
>> throws ZkNodeExistsException; patched by Ashish Singh; reviewed by Sriharsha 
>> Chintalapani and Ismael Juma
>>
>> --
>> Started by an SCM change
>> Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in workspace 
>> 
>>  > git rev-parse --is-inside-work-tree # timeout=10
>> Fetching changes from the remote Git repository
>>  > git config remote.origin.url 
>> https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
>> Fetching upstream changes from 
>> https://git-wip-us.apache.org/repos/asf/kafka.git
>>  > git --version # timeout=10
>>  > git fetch --tags --progress 
>> https://git-wip-us.apache.org/repos/asf/kafka.git 
>> +refs/heads/*:refs/remotes/origin/*
>>  > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
>>  > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
>> Checking out Revision a5b11886df8c7aad0548efd2c7c3dbc579232f03 
>> (refs/remotes/origin/trunk)
>>  > git config core.sparsecheckout # timeout=10
>>  > git checkout -f a5b11886df8c7aad0548efd2c7c3dbc579232f03
>>  > git rev-list 15cba9f00dc606dd49e428f4ac8ccae0c0b8b37d # timeout=10
>> Setting 
>> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
>> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson3311779494119504084.sh
>> + 
>> /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
>> To honour the JVM settings for this build a new JVM will be forked. Please 
>> consider using the daemon: 
>> http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
>> Building project 'core' with Scala version 2.10.5
>> :downloadWrapper UP-TO-DATE
>>
>> BUILD SUCCESSFUL
>>
>> Total time: 20.182 secs
>> Setting 
>> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
>> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson5074359631024072349.sh
>> + ./gradlew -PscalaVersion=2.10.1 test
>> To honour the JVM settings for this build a new JVM will be forked. Please 
>> consider using the daemon: 
>> http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
>> Building project 'core' with Scala version 2.10.1
>> :compileJava UP-TO-DATE
>> :processResources UP-TO-DATE
>> :classes UP-TO-DATE
>> :rat
>> Rat report: build/rat/rat-report.html
>> :compileTestJava UP-TO-DATE
>> :processTestResources UP-TO-DATE
>> :testClasses UP-TO-DATE
>> :test UP-TO-DATE
>> :clients:compileJava UP-TO-DATE
>> :clients:processResources UP-TO-DATE
>> :clients:classes UP-TO-DATE
>> :clients:checkstyleMain UP-TO-DATE
>> :clients:compileTestJava UP-TO-DATE
>> :clients:processTestResources UP-TO-DATE
>> :clients:testClasses UP-TO-DATE
>> :clients:checkstyleTest UP-TO-DATE
>> :clients:test UP-TO-DATE
>> :contrib:compileJava UP-TO-DATE
>> :contrib:processResources UP-TO-DATE
>> :contrib:classes UP-TO-DATE
>> :contrib:compileTestJava UP-TO-DATE
>> :contrib:processTestResources UP-TO-DATE
>> :contrib:testClasses UP-TO-DATE
>> :contrib:test UP-TO-DATE
>> :clients:jar UP-TO-DATE
>> :log4j-appender:compileJava UP-TO-DATE
>> :log4j-appender:processResources UP-TO-DATE
>> :log4j-appender:classes UP-TO-DATE
>> :log4j-appender:jar UP-TO-DATE
>> :core:compileJava UP-TO-DATE
>> :core:compileScala:169:
>>  not found: type TopicAlreadyMarkedForDeletionException
>>   case e1: ZkNodeExistsException => throw new 
>> TopicAlreadyMarkedForDeletionException(
>>   ^
>> one error found
>>  FAILED
>>
>> FAILURE: Build failed with an exception.
>>
>> * What went wrong:
>> Execution failed for task ':core:compileScala'.
>>> Compilation failed
>>
>> * Try:
>> Run with --stacktrace option to get the stack trace. Run with --info or 
>> --debug option to get more log output.
>>
>> BUILD FAILED
>>
>> Total time: 45.069 secs
>> Build step 'Execute shell' marked build as failure
>> Setting 
>> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1


Re: Build failed in Jenkins: KafkaPreCommit #156

2015-07-17 Thread Gwen Shapira
Ick. It seemed to work locally. I'm checking what went wrong.

Let me know if you want a revert.

On Fri, Jul 17, 2015 at 10:39 AM, Apache Jenkins Server
 wrote:
> See 
>
> Changes:
>
> [cshapi] KAFKA-2345;  Attempt to delete a topic already marked for deletion 
> throws ZkNodeExistsException; patched by Ashish Singh; reviewed by Sriharsha 
> Chintalapani and Ismael Juma
>
> --
> Started by an SCM change
> Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in workspace 
> 
>  > git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>  > git config remote.origin.url 
> https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
> Fetching upstream changes from 
> https://git-wip-us.apache.org/repos/asf/kafka.git
>  > git --version # timeout=10
>  > git fetch --tags --progress 
> https://git-wip-us.apache.org/repos/asf/kafka.git 
> +refs/heads/*:refs/remotes/origin/*
>  > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
>  > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
> Checking out Revision a5b11886df8c7aad0548efd2c7c3dbc579232f03 
> (refs/remotes/origin/trunk)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f a5b11886df8c7aad0548efd2c7c3dbc579232f03
>  > git rev-list 15cba9f00dc606dd49e428f4ac8ccae0c0b8b37d # timeout=10
> Setting 
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson3311779494119504084.sh
> + 
> /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.5
> :downloadWrapper UP-TO-DATE
>
> BUILD SUCCESSFUL
>
> Total time: 20.182 secs
> Setting 
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson5074359631024072349.sh
> + ./gradlew -PscalaVersion=2.10.1 test
> To honour the JVM settings for this build a new JVM will be forked. Please 
> consider using the daemon: 
> http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.1
> :compileJava UP-TO-DATE
> :processResources UP-TO-DATE
> :classes UP-TO-DATE
> :rat
> Rat report: build/rat/rat-report.html
> :compileTestJava UP-TO-DATE
> :processTestResources UP-TO-DATE
> :testClasses UP-TO-DATE
> :test UP-TO-DATE
> :clients:compileJava UP-TO-DATE
> :clients:processResources UP-TO-DATE
> :clients:classes UP-TO-DATE
> :clients:checkstyleMain UP-TO-DATE
> :clients:compileTestJava UP-TO-DATE
> :clients:processTestResources UP-TO-DATE
> :clients:testClasses UP-TO-DATE
> :clients:checkstyleTest UP-TO-DATE
> :clients:test UP-TO-DATE
> :contrib:compileJava UP-TO-DATE
> :contrib:processResources UP-TO-DATE
> :contrib:classes UP-TO-DATE
> :contrib:compileTestJava UP-TO-DATE
> :contrib:processTestResources UP-TO-DATE
> :contrib:testClasses UP-TO-DATE
> :contrib:test UP-TO-DATE
> :clients:jar UP-TO-DATE
> :log4j-appender:compileJava UP-TO-DATE
> :log4j-appender:processResources UP-TO-DATE
> :log4j-appender:classes UP-TO-DATE
> :log4j-appender:jar UP-TO-DATE
> :core:compileJava UP-TO-DATE
> :core:compileScala:169:
>  not found: type TopicAlreadyMarkedForDeletionException
>   case e1: ZkNodeExistsException => throw new 
> TopicAlreadyMarkedForDeletionException(
>   ^
> one error found
>  FAILED
>
> FAILURE: Build failed with an exception.
>
> * What went wrong:
> Execution failed for task ':core:compileScala'.
>> Compilation failed
>
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
>
> BUILD FAILED
>
> Total time: 45.069 secs
> Build step 'Execute shell' marked build as failure
> Setting 
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1


Re: Review Request 36548: Patch for KAFKA-2336

2015-07-17 Thread Grant Henke


> On July 17, 2015, 4:26 a.m., Jiangjie Qin wrote:
> > Looks good to me.
> 
> Jiangjie Qin wrote:
> Actually do we need to talk to Zookeeper every time? Can we read the data 
> from topic metadata cache directly?
> 
> Gwen Shapira wrote:
> Good point, Jiangjie - looks like partitionFor is called on every 
> ConsumerMetadataRequest handling, so some kind of caching will be nice.
> 
> Grant Henke wrote:
> We only talk to Zookeeper once at instance creation via the call to 
> `getOffsetsTopicPartitionCount` and setting the val 
> `offsetsTopicPartitionCount`. The static value is used from then on for every 
> call to `partitionFor`.
> 
> Jiangjie Qin wrote:
> Ah, I see, my bad. Then this patch doesn't seem to completely solve the issue, 
> though. Let's say the offsets topic does not exist yet. What if two brokers have 
> different offsets topic partition count configurations? After they start up and 
> before the offsets topic gets created in zookeeper, they will have different 
> values for offsetsTopicPartitionCount. Will that cause problems silently?

I think that sort of issue exists for many of Kafka's configurations, and 
exists for this configuration without this patch too. I do not aim to solve 
non-uniform configuration in this patch.
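
To make the caching concrete, a rough sketch of the pattern being described
(hypothetical names, not the exact patch):

    // The offsets topic partition count is resolved once, when the manager is
    // constructed; every partitionFor call reuses that cached value instead of
    // going back to ZooKeeper on each ConsumerMetadataRequest.
    class OffsetsTopicPartitioner(lookupPartitionCountFromZk: () => Option[Int],
                                  defaultNumPartitions: Int) {

      // Single ZooKeeper round-trip; falls back to the broker config if the
      // offsets topic does not exist yet (the case discussed above).
      private val offsetsTopicPartitionCount: Int =
        lookupPartitionCountFromZk().getOrElse(defaultNumPartitions)

      def partitionFor(consumerGroup: String): Int =
        (consumerGroup.hashCode & Int.MaxValue) % offsetsTopicPartitionCount
    }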


- Grant


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36548/#review92020
---


On July 16, 2015, 6:04 p.m., Grant Henke wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36548/
> ---
> 
> (Updated July 16, 2015, 6:04 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2336
> https://issues.apache.org/jira/browse/KAFKA-2336
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fix Scala style
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 47b6ce93da320a565435b4a7916a0c4371143b8a 
> 
> Diff: https://reviews.apache.org/r/36548/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Grant Henke
> 
>



Build failed in Jenkins: KafkaPreCommit #156

2015-07-17 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2345;  Attempt to delete a topic already marked for deletion 
throws ZkNodeExistsException; patched by Ashish Singh; reviewed by Sriharsha 
Chintalapani and Ismael Juma

--
Started by an SCM change
Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a5b11886df8c7aad0548efd2c7c3dbc579232f03 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a5b11886df8c7aad0548efd2c7c3dbc579232f03
 > git rev-list 15cba9f00dc606dd49e428f4ac8ccae0c0b8b37d # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe /tmp/hudson3311779494119504084.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 20.182 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe /tmp/hudson5074359631024072349.sh
+ ./gradlew -PscalaVersion=2.10.1 test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:rat
Rat report: build/rat/rat-report.html
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:checkstyleMain UP-TO-DATE
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:clients:checkstyleTest UP-TO-DATE
:clients:test UP-TO-DATE
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:compileTestJava UP-TO-DATE
:contrib:processTestResources UP-TO-DATE
:contrib:testClasses UP-TO-DATE
:contrib:test UP-TO-DATE
:clients:jar UP-TO-DATE
:log4j-appender:compileJava UP-TO-DATE
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes UP-TO-DATE
:log4j-appender:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala:169:
 not found: type TopicAlreadyMarkedForDeletionException
  case e1: ZkNodeExistsException => throw new 
TopicAlreadyMarkedForDeletionException(
  ^
one error found
 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:compileScala'.
> Compilation failed

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 45.069 secs
Build step 'Execute shell' marked build as failure
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1


[jira] [Updated] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2345:

   Resolution: Fixed
 Reviewer: Gwen Shapira
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

> Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> --
>
> Key: KAFKA-2345
> URL: https://issues.apache.org/jira/browse/KAFKA-2345
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch
>
>
> Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
> A user does not necessarily have to know about involvement of zk in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631645#comment-14631645
 ] 

Gwen Shapira commented on KAFKA-2345:
-

Thanks for the patch [~singhashish] and for the reviews [~harsha_ch] and 
[~ijuma].
Pushed to trunk.

> Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> --
>
> Key: KAFKA-2345
> URL: https://issues.apache.org/jira/browse/KAFKA-2345
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch
>
>
> Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
> A user does not necessarily have to know about involvement of zk in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36548: Patch for KAFKA-2336

2015-07-17 Thread Jiangjie Qin


> On July 17, 2015, 4:26 a.m., Jiangjie Qin wrote:
> > Looks good to me.
> 
> Jiangjie Qin wrote:
> Actually do we need to talk to Zookeeper every time? Can we read the data 
> from topic metadata cache directly?
> 
> Gwen Shapira wrote:
> Good point, Jiangjie - looks like partitionFor is called on every 
> ConsumerMetadataRequest handling, so some kind of caching will be nice.
> 
> Grant Henke wrote:
> We only talk to Zookeeper once at instance creation via the call to 
> `getOffsetsTopicPartitionCount` and setting the val 
> `offsetsTopicPartitionCount`. The static value is used from then on for every 
> call to `partitionFor`.

Ah, I see, my bad. Then this patch doesn't seem to completely solve the issue, 
though. Let's say the offsets topic does not exist yet. What if two brokers have 
different offsets topic partition count configurations? After they start up and 
before the offsets topic gets created in zookeeper, they will have different 
values for offsetsTopicPartitionCount. Will that cause problems silently?


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36548/#review92020
---


On July 16, 2015, 6:04 p.m., Grant Henke wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36548/
> ---
> 
> (Updated July 16, 2015, 6:04 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2336
> https://issues.apache.org/jira/browse/KAFKA-2336
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fix Scala style
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 47b6ce93da320a565435b4a7916a0c4371143b8a 
> 
> Diff: https://reviews.apache.org/r/36548/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Grant Henke
> 
>



Re: Review Request 36565: Patch for KAFKA-2345

2015-07-17 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36565/#review92106
---

Ship it!


- Gwen Shapira


On July 17, 2015, 5:21 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36565/
> ---
> 
> (Updated July 17, 2015, 5:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2345
> https://issues.apache.org/jira/browse/KAFKA-2345
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2345: Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   
> core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/36565/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-17 Thread Flavio Junqueira
I like this feature; it reminds me of conditional updates in zookeeper. I'm not 
sure whether it'd be better to have some mechanism for fencing rather than a 
conditional write like the one you're proposing. The reason I'm saying this is 
that the conditional write applies to each request individually, while it sounds 
like you want to make sure that there is a single client writing across multiple 
requests.
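
For readers new to the thread, the conditional write being discussed is
essentially a compare-and-set on the partition's log end offset. A toy sketch of
the broker-side check (not the KIP's actual API):

    class ConditionalLog {
      private val messages = scala.collection.mutable.ArrayBuffer.empty[String]
      private var logEndOffset: Long = 0L

      // Accept an append only if the producer's expected offset matches the
      // current log end offset; otherwise reject so the producer can re-read
      // and retry.
      def tryAppend(message: String, expectedOffset: Long): Boolean = synchronized {
        if (expectedOffset != logEndOffset) {
          false // another writer got in first
        } else {
          messages += message
          logEndOffset += 1
          true
        }
      }
    }

Fencing, by contrast, would pin a single writer across many such requests rather
than checking each request individually.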

-Flavio

> On 17 Jul 2015, at 07:30, Ben Kirwin  wrote:
> 
> Hi there,
> 
> I just added a KIP for a 'conditional publish' operation: a simple
> CAS-like mechanism for the Kafka producer. The wiki page is here:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
> 
> And there's some previous discussion on the ticket and the users list:
> 
> https://issues.apache.org/jira/browse/KAFKA-2260
> 
> https://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAAeOB6ccyAA13YNPqVQv2o-mT5r=c9v7a+55sf2wp93qg7+...@mail.gmail.com%3E
> 
> As always, comments and suggestions are very welcome.
> 
> Thanks,
> Ben



[jira] [Updated] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2345:
--
Attachment: KAFKA-2345_2015-07-17_10:20:55.patch

> Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> --
>
> Key: KAFKA-2345
> URL: https://issues.apache.org/jira/browse/KAFKA-2345
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch
>
>
> Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
> A user does not necessarily have to know about involvement of zk in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631618#comment-14631618
 ] 

Ashish K Singh commented on KAFKA-2345:
---

Updated reviewboard https://reviews.apache.org/r/36565/
 against branch trunk

> Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> --
>
> Key: KAFKA-2345
> URL: https://issues.apache.org/jira/browse/KAFKA-2345
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch
>
>
> Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
> A user does not necessarily have to know about involvement of zk in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2344) kafka-merge-pr should support reviewers in commit message

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631614#comment-14631614
 ] 

Gwen Shapira commented on KAFKA-2344:
-

Good with me [~ijuma].

The JIRA issue was my oversight, I'm not sure we need to change it at this 
point. I'll open a new JIRA if we decide to keep both tools.

> kafka-merge-pr should support reviewers in commit message
> -
>
> Key: KAFKA-2344
> URL: https://issues.apache.org/jira/browse/KAFKA-2344
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>Priority: Minor
>
> Two suggestions for the new pr-merge tool:
> * The tool doesn't allow to credit reviewers while committing. I thought the 
> review credits were a nice habit of the Kafka community and I hate losing it. 
> OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
> tool can ask about committers?
> * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
> if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36565: Patch for KAFKA-2345

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36565/#review92105
---



core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
(line 21)


Won't hurt, but as you mentioned, putting in a message is good practice. Will 
update.


- Ashish Singh


On July 17, 2015, 5:21 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36565/
> ---
> 
> (Updated July 17, 2015, 5:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2345
> https://issues.apache.org/jira/browse/KAFKA-2345
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2345: Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   
> core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/36565/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



Re: Review Request 36565: Patch for KAFKA-2345

2015-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36565/
---

(Updated July 17, 2015, 5:21 p.m.)


Review request for kafka.


Bugs: KAFKA-2345
https://issues.apache.org/jira/browse/KAFKA-2345


Repository: kafka


Description
---

KAFKA-2345: Attempt to delete a topic already marked for deletion throws 
ZkNodeExistsException
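
For context, the change amounts to wrapping the ZooKeeper-level error in a
domain-level exception. A sketch of the assumed shape (see the diff below for
the real code):

    import org.I0Itec.zkclient.exception.ZkNodeExistsException

    // New exception type so callers do not need to know ZooKeeper is involved.
    class TopicAlreadyMarkedForDeletionException(message: String)
      extends RuntimeException(message)

    object DeleteTopicSketch {
      // createDeletePath is a hypothetical stand-in for the ZK call that creates
      // /admin/delete_topics/<topic>; the real change lives in AdminUtils.scala.
      def markTopicForDeletion(topic: String, createDeletePath: String => Unit): Unit =
        try {
          createDeletePath(topic)
        } catch {
          case _: ZkNodeExistsException =>
            throw new TopicAlreadyMarkedForDeletionException(
              s"topic $topic is already marked for deletion")
        }
    }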


Diffs (updated)
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/36565/diff/


Testing
---


Thanks,

Ashish Singh



Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Gwen Shapira
+1 (binding)

On Fri, Jul 17, 2015 at 3:26 AM, Ismael Juma  wrote:
> Hi all,
>
> I would like to start a vote on dropping support for Scala 2.9 for the next
> release. People seemed to be in favour of the idea in previous discussions:
>
> * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
> * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72
>
> Summary of why we should drop Scala 2.9:
>
> * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are
> not binary compatible).
> * Code that doesn't build with Scala 2.9 was committed to trunk weeks
> ago and no-one seems to have noticed or cared (well, I filed
> https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a
> version if we don't test it?
> * New clients library is written in Java and won't be affected. It also has
> received a lot of work and it's much improved since the last release.
> * It was released 4 years ago, it has been unsupported for a long time and
> most projects have dropped support for it (for example, we use a different
> version of ScalaTest for Scala 2.9)
> * Scala 2.10 introduced Futures and a few useful features like String
> interpolation and value classes.
> * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203
> ).
>
> The reason not to drop it is to maintain compatibility for people stuck in
> 2.9 who also want to upgrade both client and broker to the next Kafka
> release.
>
> The vote will run for 72 hours.
>
> +1 (non-binding) from me.
>
> Best,
> Ismael


[jira] [Commented] (KAFKA-2344) kafka-merge-pr should support reviewers in commit message

2015-07-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631564#comment-14631564
 ] 

Ismael Juma commented on KAFKA-2344:


I renamed the JIRA title, I hope you don't mind.

If improvements are needed on how the JIRA credentials are configured, a 
separate issue should be filed. I think it would be nice if it also 
supported the same mechanism as the `kafka-patch-review.py` tool for 
consistency (if we want to keep both tools, that is). It would be a nice way 
for someone else to improve the script too. ;)

> kafka-merge-pr should support reviewers in commit message
> -
>
> Key: KAFKA-2344
> URL: https://issues.apache.org/jira/browse/KAFKA-2344
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>Priority: Minor
>
> Two suggestions for the new pr-merge tool:
> * The tool doesn't allow to credit reviewers while committing. I thought the 
> review credits were a nice habit of the Kafka community and I hate losing it. 
> OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
> tool can ask about committers?
> * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
> if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2344) kafka-merge-pr should support reviewers in commit message

2015-07-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2344:
---
Summary: kafka-merge-pr should support reviewers in commit message  (was: 
Improvements for the pr-merge tool)

> kafka-merge-pr should support reviewers in commit message
> -
>
> Key: KAFKA-2344
> URL: https://issues.apache.org/jira/browse/KAFKA-2344
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>Priority: Minor
>
> Two suggestions for the new pr-merge tool:
> * The tool doesn't allow to credit reviewers while committing. I thought the 
> review credits were a nice habit of the Kafka community and I hate losing it. 
> OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
> tool can ask about committers?
> * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
> if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2344) Improvements for the pr-merge tool

2015-07-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2344:
--

Assignee: Ismael Juma

> Improvements for the pr-merge tool
> --
>
> Key: KAFKA-2344
> URL: https://issues.apache.org/jira/browse/KAFKA-2344
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>Priority: Minor
>
> Two suggestions for the new pr-merge tool:
> * The tool doesn't allow to credit reviewers while committing. I thought the 
> review credits were a nice habit of the Kafka community and I hate losing it. 
> OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
> tool can ask about committers?
> * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
> if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2328) merge-kafka-pr.py script should not leave user in a detached branch

2015-07-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2328:
---
Status: Patch Available  (was: Open)

Looks like a definite bug; the PR fixes it. [~gwenshap] or [~guozhang], are either 
of you willing to review and merge?

> merge-kafka-pr.py script should not leave user in a detached branch
> ---
>
> Key: KAFKA-2328
> URL: https://issues.apache.org/jira/browse/KAFKA-2328
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> [~gwenshap] asked:
> "If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
> proceed), I'm left on a detached branch. Any chance the script can put me 
> back in the original branch? or in trunk?"
> Reference 
> https://issues.apache.org/jira/browse/KAFKA-2187?focusedCommentId=14621243&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14621243



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2328) merge-kafka-pr.py script should not leave user in a detached branch

2015-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631543#comment-14631543
 ] 

ASF GitHub Bot commented on KAFKA-2328:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/84

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch

The right command to get the branch name is `git rev-parse --abbrev-ref 
HEAD` instead of `git rev-parse HEAD`. The latter gives the commit hash, which 
leaves the user on a detached branch when we check it out. Seems like a bug we 
inherited from the Spark script.
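
For illustration, a minimal standalone Python sketch of the difference between 
the two commands (not the script's actual code), runnable from inside any git 
checkout:

{code}
import subprocess

def run_git(*args):
    return subprocess.check_output(["git"] + list(args)).decode().strip()

branch_name = run_git("rev-parse", "--abbrev-ref", "HEAD")  # e.g. "trunk"
commit_hash = run_git("rev-parse", "HEAD")                  # e.g. "ae201dd5..."

# Checking out branch_name returns to the original branch, while checking out
# commit_hash leaves git in a detached state even though both point at the
# same commit.
print(branch_name, commit_hash)
{code}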

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2328-merge-script-no-detached-branch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/84.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #84


commit ae201dd5ef934443fe11b98294f17b7ddd9d6d72
Author: Ismael Juma 
Date:   2015-07-17T16:29:16Z

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch




> merge-kafka-pr.py script should not leave user in a detached branch
> ---
>
> Key: KAFKA-2328
> URL: https://issues.apache.org/jira/browse/KAFKA-2328
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> [~gwenshap] asked:
> "If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
> proceed), I'm left on a detached branch. Any chance the script can put me 
> back in the original branch? or in trunk?"
> Reference 
> https://issues.apache.org/jira/browse/KAFKA-2187?focusedCommentId=14621243&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14621243



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2328; merge-kafka-pr.py script should no...

2015-07-17 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/84

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch

The right command to get the branch name is `git rev-parse --abbrev-ref 
HEAD` instead of `git rev-parse HEAD`. The latter gives the commit hash, which 
leaves the user on a detached branch when we check it out. Seems like a bug we 
inherited from the Spark script.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2328-merge-script-no-detached-branch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/84.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #84


commit ae201dd5ef934443fe11b98294f17b7ddd9d6d72
Author: Ismael Juma 
Date:   2015-07-17T16:29:16Z

KAFKA-2328; merge-kafka-pr.py script should not leave user in a detached 
branch




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-1901) Move Kafka version to be generated in code by build (instead of in manifest)

2015-07-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1901:
--
Status: In Progress  (was: Patch Available)

> Move Kafka version to be generated in code by build (instead of in manifest)
> 
>
> Key: KAFKA-1901
> URL: https://issues.apache.org/jira/browse/KAFKA-1901
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jason Rosenberg
>Assignee: Manikumar Reddy
> Attachments: KAFKA-1901.patch, KAFKA-1901_2015-06-26_13:16:29.patch, 
> KAFKA-1901_2015-07-10_16:42:53.patch, KAFKA-1901_2015-07-14_17:59:56.patch
>
>
> With 0.8.2 (rc2), I've started seeing this warning in the logs of apps 
> deployed to our staging (both server and client):
> {code}
> 2015-01-23 00:55:25,273  WARN [async-message-sender-0] common.AppInfo$ - 
> Can't read Kafka version from MANIFEST.MF. Possible cause: 
> java.lang.NullPointerException
> {code}
> The issue is that in our deployment, apps are deployed as single 'shaded' 
> jars (e.g. using the maven shade plugin).  This means the MANIFEST.MF file 
> won't have a kafka version.  Instead, suggest the kafka build generate the 
> proper version in code, as part of the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35867: Patch for KAFKA-1901

2015-07-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35867/#review92078
---



clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java (line 56)


"given `prefix` string and further qualifies the associated AppInfo mbean 
with the attribute `id`."

Actually, sorry to bring this up again (we discussed it briefly in the 
first diff), but I think it would be better to register the app info explicitly 
in the consumer/producer/broker and remove this argument. It is convenient to 
have it instantiated as part of JmxReporter, but it is weird to have to pass in 
an `id` that will be used only for the app-info mbean.



clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java (line 
148)


For consistency with our regular metrics registered via KafkaMetrics, maybe 
we should make this `app-info`.



clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java (line 
247)


(for consistency) `getCommitId`
Also, `getCommitFingerprint` (fingerprint is one word)

Also, how about these as alternates:
`getCommit` and `getCommitHash`



clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java (line 
254)


Can we also log the commit id and fingerprint?


- Joel Koshy


On July 14, 2015, 12:32 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/35867/
> ---
> 
> (Updated July 14, 2015, 12:32 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1901
> https://issues.apache.org/jira/browse/KAFKA-1901
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addresing Joel's comments
> 
> 
> Diffs
> -
> 
>   build.gradle d86f1a8b25197d53f11e16c54a6854487e175649 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> b4e8f7f0dceefaf94a3495f39f5783cce5ceb25f 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 03b8dd23df63a8d8a117f02eabcce4a2d48c44f7 
>   clients/src/main/java/org/apache/kafka/common/metrics/JmxReporter.java 
> 6b9590c418aedd2727544c5dd23c017b4b72467a 
>   clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/metrics/JmxReporterTest.java 
> 07b1b60d3a9cb1a399a2fe95b87229f64f539f3b 
>   clients/src/test/java/org/apache/kafka/common/metrics/MetricsTest.java 
> 544e120594de78c43581a980b1e4087b4fb98ccb 
>   core/src/main/scala/kafka/common/AppInfo.scala 
> d642ca555f83c41451d4fcaa5c01a1f86eff0a1c 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/main/scala/kafka/server/KafkaServerStartable.scala 
> 1c1b75b4137a8b233b61739018e9cebcc3a34343 
> 
> Diff: https://reviews.apache.org/r/35867/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Neha Narkhede
+1 (binding)

On Fri, Jul 17, 2015 at 8:33 AM, Brock Noland  wrote:

> +1 (non-binding)
>
> On Friday, July 17, 2015, Grant Henke  wrote:
>
> > +1 (non-binding)
> >
> > On Fri, Jul 17, 2015 at 9:44 AM, Ashish Singh  > > wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Friday, July 17, 2015, Stevo Slavić  >
> > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Fri, Jul 17, 2015 at 12:26 PM, Ismael Juma  > 
> > > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start a vote on dropping support for Scala 2.9 for
> > the
> > > > next
> > > > > release. People seemed to be in favour of the idea in previous
> > > > discussions:
> > > > >
> > > > > * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
> > > > > * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72
> > > > >
> > > > > Summary of why we should drop Scala 2.9:
> > > > >
> > > > > * Doubles the number of builds required from 2 to 4 (2.9.1 and
> 2.9.2
> > > are
> > > > > not binary compatible).
> > > > > * Code has been committed to trunk that doesn't build with Scala
> 2.9
> > > > weeks
> > > > > ago and no-one seems to have noticed or cared (well, I filed
> > > > > https://issues.apache.org/jira/browse/KAFKA-2325). Can we really
> > > > support a
> > > > > version if we don't test it?
> > > > > * New clients library is written in Java and won't be affected. It
> > also
> > > > has
> > > > > received a lot of work and it's much improved since the last
> release.
> > > > > * It was released 4 years ago, it has been unsupported for a long
> > time
> > > > and
> > > > > most projects have dropped support for it (for example, we use a
> > > > different
> > > > > version of ScalaTest for Scala 2.9)
> > > > > * Scala 2.10 introduced Futures and a few useful features like
> String
> > > > > interpolation and value classes.
> > > > > * Doesn't work with Java 8 (
> > > > > https://issues.apache.org/jira/browse/KAFKA-2203
> > > > > ).
> > > > >
> > > > > The reason not to drop it is to maintain compatibility for people
> > stuck
> > > > in
> > > > > 2.9 who also want to upgrade both client and broker to the next
> Kafka
> > > > > release.
> > > > >
> > > > > The vote will run for 72 hours.
> > > > >
> > > > > +1 (non-binding) from me.
> > > > >
> > > > > Best,
> > > > > Ismael
> > > > >
> > > >
> > >
> > >
> > > --
> > > Ashish 🎤h
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Solutions Consultant | Cloudera
> > ghe...@cloudera.com  | twitter.com/gchenke |
> > linkedin.com/in/granthenke
> >
>



-- 
Thanks,
Neha


[jira] [Updated] (KAFKA-2337) Verify that metric names will not collide when creating new topics

2015-07-17 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2337:
---
Attachment: KAFKA-2337_2015-07-17_11:17:30.patch

> Verify that metric names will not collide when creating new topics
> --
>
> Key: KAFKA-2337
> URL: https://issues.apache.org/jira/browse/KAFKA-2337
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: KAFKA-2337.patch, KAFKA-2337_2015-07-17_11:17:30.patch
>
>
> When creating a new topic, convert the proposed topic name to the name that 
> will be used in metrics and validate that there are no collisions with 
> existing names.
> See this discussion for context: http://s.apache.org/snW
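
For context, a rough sketch of the kind of check being proposed, assuming that 
'.' and '_' map to the same character when metric names are derived from topic 
names (the helper names below are made up for the example; the actual change 
is in core/src/main/scala/kafka/common/Topic.scala per the attached review):

{code}
def to_metric_name(topic):
    # Period and underscore end up indistinguishable in derived metric names.
    return topic.replace(".", "_")

def has_collision(proposed_topic, existing_topics):
    proposed = to_metric_name(proposed_topic)
    return any(to_metric_name(t) == proposed for t in existing_topics)

assert has_collision("foo.bar", ["foo_bar", "baz"])
assert not has_collision("foo.bar", ["foobar"])
{code}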



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2337) Verify that metric names will not collide when creating new topics

2015-07-17 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631525#comment-14631525
 ] 

Grant Henke commented on KAFKA-2337:


Updated reviewboard https://reviews.apache.org/r/36570/diff/
 against branch origin/trunk

> Verify that metric names will not collide when creating new topics
> --
>
> Key: KAFKA-2337
> URL: https://issues.apache.org/jira/browse/KAFKA-2337
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: KAFKA-2337.patch, KAFKA-2337_2015-07-17_11:17:30.patch
>
>
> When creating a new topic, convert the proposed topic name to the name that 
> will be used in metrics and validate that there are no collisions with 
> existing names.
> See this discussion for context: http://s.apache.org/snW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36570: Patch for KAFKA-2337

2015-07-17 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36570/
---

(Updated July 17, 2015, 4:17 p.m.)


Review request for kafka.


Bugs: KAFKA-2337
https://issues.apache.org/jira/browse/KAFKA-2337


Repository: kafka


Description
---

KAFKA-2337: Verify that metric names will not collide when creating new topics


Diffs (updated)
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
a90aa8787ff21b963765a547980154363c1c93c6 
  core/src/main/scala/kafka/common/Topic.scala 
32595d6fe432141119db26d3b5ebe229aac40805 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
  core/src/test/scala/unit/kafka/common/TopicTest.scala 
79532c89c41572ba953c4dc3319a05354927e961 

Diff: https://reviews.apache.org/r/36570/diff/


Testing
---


Thanks,

Grant Henke



Re: Review Request 36570: Patch for KAFKA-2337

2015-07-17 Thread Grant Henke


> On July 17, 2015, 4:01 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/common/Topic.scala, line 64
> > 
> >
> > *Maybe* this method name could be renamed to 'collide' to make it more 
> > like a verb instead of a noun.
> 
> Grant Henke wrote:
> I had a hard time naming this. Collide, to me, sounded like an action 
> that should return a result. Like I was taking two topics and smashing them 
> together in order to get something new.
> 
> Edward Ribeiro wrote:
> Yeah, totally agree with you. Maybe ''hasCollision'' could be better. 
> Makes it explicit it is a boolean method. wdyt?

Makes sense to me.


- Grant


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36570/#review92089
---


On July 17, 2015, 3:27 p.m., Grant Henke wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36570/
> ---
> 
> (Updated July 17, 2015, 3:27 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2337
> https://issues.apache.org/jira/browse/KAFKA-2337
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2337: Verify that metric names will not collide when creating new topics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/common/Topic.scala 
> 32595d6fe432141119db26d3b5ebe229aac40805 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
>   core/src/test/scala/unit/kafka/common/TopicTest.scala 
> 79532c89c41572ba953c4dc3319a05354927e961 
> 
> Diff: https://reviews.apache.org/r/36570/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Grant Henke
> 
>



Re: Review Request 36570: Patch for KAFKA-2337

2015-07-17 Thread Edward Ribeiro


> On July 17, 2015, 4:01 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/common/Topic.scala, line 64
> > 
> >
> > *Maybe* this method name could be renamed to 'collide' to make it more 
> > like a verb instead of a noun.
> 
> Grant Henke wrote:
> I had a hard time naming this. Collide, to me, sounded like an action 
> that should return a result. Like I was taking two topics and smashing them 
> together in order to get something new.

Yeah, totally agree with you. Maybe ''hasCollision'' could be better. Makes it 
explicit it is a boolean method. wdyt?


- Edward


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36570/#review92089
---


On July 17, 2015, 3:27 p.m., Grant Henke wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36570/
> ---
> 
> (Updated July 17, 2015, 3:27 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2337
> https://issues.apache.org/jira/browse/KAFKA-2337
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2337: Verify that metric names will not collide when creating new topics
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> a90aa8787ff21b963765a547980154363c1c93c6 
>   core/src/main/scala/kafka/common/Topic.scala 
> 32595d6fe432141119db26d3b5ebe229aac40805 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
>   core/src/test/scala/unit/kafka/common/TopicTest.scala 
> 79532c89c41572ba953c4dc3319a05354927e961 
> 
> Diff: https://reviews.apache.org/r/36570/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Grant Henke
> 
>


