[jira] [Assigned] (KAFKA-2685) "alter topic" on non-existent topic exits without error

2015-10-22 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro reassigned KAFKA-2685:
-

Assignee: Edward Ribeiro

> "alter topic" on non-existent topic exits without error
> ---
>
> Key: KAFKA-2685
> URL: https://issues.apache.org/jira/browse/KAFKA-2685
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Edward Ribeiro
>
> When running:
> kafka-topics --zookeeper localhost:2181 --alter --topic test --config 
> unclean.leader.election.enable=false
> and topic "test" does not exist, the command simply returns with no error 
> message.
> We expect to see an error when trying to modify non-existent topics, so users 
> will have a chance to catch and correct typos.
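A minimal sketch of the fail-fast behavior this report asks for. The class and method names below are illustrative only (the real change would live in Kafka's topic command tooling); the point is simply that an alter on an unknown topic should surface an error instead of returning silently.

```java
import java.util.Set;

public class AlterTopicGuard {
    // Returns an error message instead of silently succeeding when the topic
    // is unknown, so typos surface immediately.
    public static String alterTopic(Set<String> existingTopics, String topic) {
        if (!existingTopics.contains(topic)) {
            return "Error: topic '" + topic + "' does not exist";
        }
        return "Updated config for topic '" + topic + "'";
    }
}
```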



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-10-15 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Assignee: (was: Edward Ribeiro)

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Priority: Critical
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch, 
> KAFKA-2338_2015-09-02_19:27:17.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)
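Point 1 above can be sketched as a simple threshold check. This is not Kafka's actual code; the default value and all names are assumptions for illustration, and the real warning would be wired into kafka-topics.sh's config-alteration path.

```java
public class MessageSizeWarning {
    // Assumed broker default for message.max.bytes in this era (illustrative).
    static final int DEFAULT_MAX_MESSAGE_BYTES = 1_000_012;

    // Returns a warning string when a topic-level override exceeds the broker
    // default, reminding the user of the related broker/consumer settings.
    public static String check(int topicMaxMessageBytes) {
        if (topicMaxMessageBytes > DEFAULT_MAX_MESSAGE_BYTES) {
            return "WARNING: max.message.bytes=" + topicMaxMessageBytes
                + " exceeds the broker default; also raise replica.fetch.max.bytes"
                + " on brokers and the consumers' fetch size.";
        }
        return "";
    }
}
```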





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-10-15 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Assignee: Ben Stopford

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ben Stopford
>Priority: Critical
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch, 
> KAFKA-2338_2015-09-02_19:27:17.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)





[jira] [Commented] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-10-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14940026#comment-14940026
 ] 

Edward Ribeiro commented on KAFKA-2601:
---

Hi [~GabrielNicolasAvellaneda], 

LGTM, but I am a newbie too (I started about three months ago, but I have been 
on and off due to work commitments, so I am not consistently participating 
**yet**). So I will pass the review torch to [~gwenshap]; she is a Kafka 
committer and the coolest person you will interact with. :) They are all very 
busy, so be patient. ;)

P.S.: Send a message to the dev mailing list asking to be added as a 
contributor, so you can assign issues to yourself.

Welcome!


> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}
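The fix being asked for is the standard pattern of catching the parser's exception at the entry point and printing a one-line error plus a usage hint instead of a stack trace. A self-contained sketch (RuntimeException stands in for joptsimple's OptionException here; all names are illustrative):

```java
public class FriendlyCli {
    // Stand-in for an option parser that rejects unknown flags.
    static String parse(String arg) {
        if (!arg.equals("--broker-list")) {
            throw new RuntimeException("'" + arg.replace("--", "") + "' is not a recognized option");
        }
        return "ok";
    }

    // Entry point catches the parse failure and prints a friendly message.
    public static String run(String arg) {
        try {
            return parse(arg);
        } catch (RuntimeException e) {
            return "Error: " + e.getMessage() + "\nRun with --help for usage.";
        }
    }
}
```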





[jira] [Updated] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-10-01 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2601:
--
Reviewer:   (was: Edward Ribeiro)

> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}





[jira] [Updated] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-09-30 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2601:
--
Assignee: (was: Edward Ribeiro)

> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}





[jira] [Commented] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-09-30 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14939213#comment-14939213
 ] 

Edward Ribeiro commented on KAFKA-2601:
---

Excuse me, Gabriel, I jumped on this issue too soon without asking whether you 
are working on it or want to take a stab at it. If not, let me know. ;)

> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Assignee: Edward Ribeiro
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}





[jira] [Comment Edited] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-09-30 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14939213#comment-14939213
 ] 

Edward Ribeiro edited comment on KAFKA-2601 at 10/1/15 3:17 AM:


Excuse me, Gabriel, I jumped too soon on this issue without asking if you are 
working or want to take a stab at it? If not, then, please, let me know. ;)


was (Author: eribeiro):
Excuse me, Gabriel, I jumped too soon on this issue without asking if you are 
working or want to take a stab at it? If not, let me know. ;)

> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Assignee: Edward Ribeiro
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}





[jira] [Assigned] (KAFKA-2601) ConsoleProducer tool shows stacktrace on invalid command parameters

2015-09-30 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro reassigned KAFKA-2601:
-

Assignee: Edward Ribeiro

> ConsoleProducer tool shows stacktrace on invalid command parameters
> ---
>
> Key: KAFKA-2601
> URL: https://issues.apache.org/jira/browse/KAFKA-2601
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
>Reporter: Gabriel Nicolas Avellaneda
>Assignee: Edward Ribeiro
>Priority: Minor
>  Labels: easyfix
>
> Running kafka-console-producer tool with an invalid argument shows a full 
> stack trace instead of a user-friendly error:
> {code}
> vagrant@vagrant-ubuntu-trusty-64:/vagrant/kafka/kafka_2.9.1-0.8.2.1$ 
> bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic another-topic
> Exception in thread "main" joptsimple.UnrecognizedOptionException: 
> 'zookeeper' is not a recognized option
> at 
> joptsimple.OptionException.unrecognizedOption(OptionException.java:93)
> at 
> joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:402)
> at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:55)
> at joptsimple.OptionParser.parse(OptionParser.java:392)
> at 
> kafka.tools.ConsoleProducer$ProducerConfig.<init>(ConsoleProducer.scala:216)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> {code}





[jira] [Updated] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-09-29 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2599:
--
Status: Patch Available  (was: Open)

> Metadata#getClusterForCurrentTopics can throw NPE even with null checking
> -
>
> Key: KAFKA-2599
> URL: https://issues.apache.org/jira/browse/KAFKA-2599
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Assignee: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.1.2, 0.9.0.0
>
>
> While working on another issue I have just seen the following:
> {code}
> private Cluster getClusterForCurrentTopics(Cluster cluster) {
> Collection<PartitionInfo> partitionInfos = new ArrayList<>();
> if (cluster != null) {
> for (String topic : this.topics) {
> partitionInfos.addAll(cluster.partitionsForTopic(topic));
> }
> }
> return new Cluster(cluster.nodes(), partitionInfos);
> }
> {code}
> Well, there's a null check for cluster, but if cluster is null it will throw 
> NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
> quick fix and changed {{MetadataTest}} to reproduce this error.





[jira] [Created] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-09-29 Thread Edward Ribeiro (JIRA)
Edward Ribeiro created KAFKA-2599:
-

 Summary: Metadata#getClusterForCurrentTopics can throw NPE even 
with null checking
 Key: KAFKA-2599
 URL: https://issues.apache.org/jira/browse/KAFKA-2599
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.1.2, 0.9.0.0


While working on another issue I have just seen the following:

{code}
private Cluster getClusterForCurrentTopics(Cluster cluster) {
Collection<PartitionInfo> partitionInfos = new ArrayList<>();
if (cluster != null) {
for (String topic : this.topics) {
partitionInfos.addAll(cluster.partitionsForTopic(topic));
}
}
return new Cluster(cluster.nodes(), partitionInfos);
}
{code}

Well, there's a null check for cluster, but if cluster is null it will throw 
NPE. So, I put together a quick fix and changed {{MetadataTest}} to reproduce 
this error.
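The shape of a null-safe rewrite can be sketched with simplified stand-ins for Kafka's Cluster and PartitionInfo types (the real classes carry more state; everything here is illustrative). The key point is that every use of `cluster`, including the constructor call on the last line of the original, must sit inside the null check:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class NullSafeSketch {
    // Simplified stand-in for org.apache.kafka.common.Cluster.
    static class Cluster {
        final List<String> nodes;
        final Collection<String> partitions;
        Cluster(List<String> nodes, Collection<String> partitions) {
            this.nodes = nodes;
            this.partitions = partitions;
        }
    }

    static Cluster clusterForTopics(Cluster cluster, List<String> topics) {
        if (cluster == null) {
            // Previously this case fell through to cluster.nodes() and threw an NPE.
            return new Cluster(Collections.emptyList(), Collections.emptyList());
        }
        // Simplified: collect topic names where the real code gathers PartitionInfo.
        Collection<String> partitionInfos = new ArrayList<>(topics);
        return new Cluster(cluster.nodes, partitionInfos);
    }
}
```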





[jira] [Updated] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-29 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2578:
--
Status: Patch Available  (was: Open)

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.
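A minimal sketch of the fix described above: the new fields get synchronized accessors so they are guarded by the same monitor as the rest of the metadata state. The field name mirrors the description; the surrounding class is illustrative, not Kafka's actual Metadata.

```java
public class MetadataSketch {
    private boolean needMetadataForAllTopics;

    // Synchronized getter: reads see writes made under the same lock.
    public synchronized boolean needMetadataForAllTopics() {
        return needMetadataForAllTopics;
    }

    // Synchronized setter, matching the locking of the other metadata state.
    public synchronized void needMetadataForAllTopics(boolean needed) {
        this.needMetadataForAllTopics = needed;
    }
}
```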





[jira] [Updated] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-09-29 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2599:
--
Description: 
While working on another issue I have just seen the following:

{code}
private Cluster getClusterForCurrentTopics(Cluster cluster) {
Collection<PartitionInfo> partitionInfos = new ArrayList<>();
if (cluster != null) {
for (String topic : this.topics) {
partitionInfos.addAll(cluster.partitionsForTopic(topic));
}
}
return new Cluster(cluster.nodes(), partitionInfos);
}
{code}

Well, there's a null check for cluster, but if cluster is null it will throw 
NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
quick fix and changed {{MetadataTest}} to reproduce this error.

  was:
While working on another issue I have just seen the following:

{code}
private Cluster getClusterForCurrentTopics(Cluster cluster) {
Collection<PartitionInfo> partitionInfos = new ArrayList<>();
if (cluster != null) {
for (String topic : this.topics) {
partitionInfos.addAll(cluster.partitionsForTopic(topic));
}
}
return new Cluster(cluster.nodes(), partitionInfos);
}
{code}

Well, there's a null check for cluster, but if cluster is null it will throw 
NPE. So, I put together a quick fix and changed {{MetadataTest}} to reproduce 
this error.


> Metadata#getClusterForCurrentTopics can throw NPE even with null checking
> -
>
> Key: KAFKA-2599
> URL: https://issues.apache.org/jira/browse/KAFKA-2599
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Assignee: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.1.2, 0.9.0.0
>
>
> While working on another issue I have just seen the following:
> {code}
> private Cluster getClusterForCurrentTopics(Cluster cluster) {
> Collection<PartitionInfo> partitionInfos = new ArrayList<>();
> if (cluster != null) {
> for (String topic : this.topics) {
> partitionInfos.addAll(cluster.partitionsForTopic(topic));
> }
> }
> return new Cluster(cluster.nodes(), partitionInfos);
> }
> {code}
> Well, there's a null check for cluster, but if cluster is null it will throw 
> NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
> quick fix and changed {{MetadataTest}} to reproduce this error.





[jira] [Comment Edited] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-29 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934290#comment-14934290
 ] 

Edward Ribeiro edited comment on KAFKA-2578 at 9/29/15 9:36 PM:


Thanks!

[~jasong35], Hi, Jason! When I pushed to the GitHub repo I forgot to prefix the 
commit with the "KAFKA-XXX" number, and even after doing a {{commit --amend}} 
the asfbot didn't pick it up to show here on the JIRA. :( I tried closing and 
reopening the PR, but with no success either.

Nevertheless, the PR is at:

https://github.com/apache/kafka/pull/263


was (Author: eribeiro):
Thanks!

[~jasong35], Hi, Jason! When I pushed to the github repo I forgot to prefix it 
with the "KAFKA-XXX" number, and even after doing a {{commit --amend}} the 
asfbot didn't pick it up to show here on the JIRA. Nevertheless, the PR is at:

https://github.com/apache/kafka/pull/263

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.





[jira] [Commented] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-28 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934290#comment-14934290
 ] 

Edward Ribeiro commented on KAFKA-2578:
---

Thanks, mate! :)

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.





[jira] [Assigned] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-28 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro reassigned KAFKA-2578:
-

Assignee: Edward Ribeiro  (was: Jason Gustafson)

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.





[jira] [Commented] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-28 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934269#comment-14934269
 ] 

Edward Ribeiro commented on KAFKA-2578:
---

Hey, [~hachikuji], this patch is so trivial... would you mind if I took a stab 
at it?

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.





[jira] [Commented] (KAFKA-2563) Version number major.major.minor.fix in kafka-merge-pr.py

2015-09-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901212#comment-14901212
 ] 

Edward Ribeiro commented on KAFKA-2563:
---

This is a duplicate of KAFKA-2548. ;)



> Version number major.major.minor.fix in kafka-merge-pr.py
> -
>
> Key: KAFKA-2563
> URL: https://issues.apache.org/jira/browse/KAFKA-2563
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Saw the following error while using the script to resolve a re-opened ticket 
> a couple of times:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> Actually it is not related to re-opened tickets, but to the new four-part 
> version number that the code fails to cope with:
> {code}
> (major, minor, patch) = v.split(".")
> {code}
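The unpack fails because the script assumes exactly three dot-separated components, while a version like "0.8.2.1" has four. Analogous defensive logic, sketched in Java for illustration only (the real fix belongs in kafka-merge-pr.py):

```java
public class VersionParse {
    // Parses "major.minor.patch" with an optional fourth "fix" component,
    // instead of failing on four-part versions like "0.8.2.1".
    public static int[] parse(String v) {
        String[] parts = v.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        int patch = Integer.parseInt(parts[2]);
        int fix = parts.length > 3 ? Integer.parseInt(parts[3]) : 0;
        return new int[] {major, minor, patch, fix};
    }
}
```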





[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-09-14 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744252#comment-14744252
 ] 

Edward Ribeiro commented on KAFKA-2515:
---

Ewen was able to simulate the bug with a simple two-broker setup, adjusting 
message.max.bytes and replica.fetch.max.bytes. He then created a topic with 
replication factor 2 and used the console producer to send data of different 
sizes to test the output. Unfortunately, KAFKA-2338 has been on hold for the 
past two months, so I am back to square one. :( I am going to try to reproduce 
it here following Ewen's strategy, but let's keep in touch.

Regards,

> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.
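The minimal behavior requested above can be sketched as a size check in the fetch path: instead of silently making no progress, log an error when a message exceeds what the fetch size allows. All names here are illustrative, not the new consumer's actual API.

```java
public class OversizeCheck {
    // Returns an error string (to be logged) when a message cannot fit in a
    // single fetch, which would otherwise cause the consumer to stall silently.
    public static String onFetch(int messageSize, int maxFetchBytes) {
        if (messageSize > maxFetchBytes) {
            return "ERROR: message of " + messageSize + " bytes exceeds fetch size "
                + maxFetchBytes + "; increase the consumer's max fetch bytes";
        }
        return "ok";
    }
}
```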





[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-09-14 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744102#comment-14744102
 ] 

Edward Ribeiro commented on KAFKA-2515:
---

[~onurkaraman], isn't this issue related to KAFKA-2338? I mean, I would have to 
modify the consumer to address it here, whereas there I can limit myself to 
logging on the server side. It would be cool to exchange ideas about this 
oversized-message problem, so I suggest linking those two issues.

Cheers!


> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-09-02 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728149#comment-14728149
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Edward Ribeiro
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch, 
> KAFKA-2338_2015-09-02_19:27:17.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-09-02 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-09-02_19:27:17.patch

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Edward Ribeiro
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch, 
> KAFKA-2338_2015-09-02_19:27:17.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726069#comment-14726069
 ] 

Edward Ribeiro commented on KAFKA-2499:
---

Hi [~benstopford], I have a tiny bit of previous experience with synthetic data 
generation. If you are not going to work on this, I can provide some additional 
code if you assign this issue to me. Or I can provide you with some classes for 
generating those random values. Up to you. :)

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 
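The compression effect described above is easy to demonstrate; a quick Python sketch (illustrative, not the perf tool itself) comparing an all-zero payload with a random one:

```python
import os
import zlib

SIZE = 10_000

# An empty/constant payload collapses under deflate, so throughput numbers
# measured with it are unrealistically fast for compressed topics; random
# bytes barely compress and are closer to worst-case real traffic.
constant_compressed = zlib.compress(bytes(SIZE))
random_compressed = zlib.compress(os.urandom(SIZE))

print(len(constant_compressed))  # tiny compared to SIZE
print(len(random_compressed))    # roughly SIZE
```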



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725882#comment-14725882
 ] 

Edward Ribeiro commented on KAFKA-2498:
---

Hi [~gundun], I am afraid you lost me here. If you are having trouble 
building Apache Kafka from source, then I suggest you post to 
dev@kafka.apache.org so that we can resolve your doubts there. Only open a JIRA 
if you have identified a bug and, even so, only after confirming it with the 
community on the mailing list. The Kafka pages below have many resources to 
help you get started:

See the last section at: 
https://cwiki.apache.org/confluence/display/KAFKA/Index  

And this link
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup

But all in all, don't open a JIRA to ask questions. ;) 

Cheers!



> need build steps/instruction while building apache kafka from source github 
> branch 0.8.2
> 
>
> Key: KAFKA-2498
> URL: https://issues.apache.org/jira/browse/KAFKA-2498
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.0
> Environment: I am working rhel7.1 machine
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have followed the steps from the GitHub repo https://github.com/apache/kafka
> cd source-code
> gradle
> ./gradlew jar (success)
> ./gradlew srcJar (success)
> ./gradlew test (one test case failed)
> So, please provide the build steps or confirm that the above steps are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2015-09-01 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726134#comment-14726134
 ] 

Edward Ribeiro commented on KAFKA-2499:
---

okay, thanks! 

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Edward Ribeiro
>  Labels: newbie
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) Ensuring registered broker host:port is unique

2015-08-25 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Summary: Ensuring registered broker host:port is unique  (was: ensuring 
registered broker host:port is unique)

 Ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2473) Not able to find release version with patch jira-1591

2015-08-25 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14712009#comment-14712009
 ] 

Edward Ribeiro edited comment on KAFKA-2473 at 8/25/15 9:30 PM:


Hi Amar, please post this kind of question to the mailing list 
(us...@kafka.apache.org) before opening a JIRA. Open a ticket only after 
confirming this is a bug.

Best regards!


was (Author: eribeiro):
Hi Amar, please post this kind of question to the mailing list 
(us...@kafka.apache.org) before opening a JIRA. It is preferable to open a 
ticket only after confirming this is a bug.

Best regards!

 Not able to find release version with patch jira-1591
 -

 Key: KAFKA-2473
 URL: https://issues.apache.org/jira/browse/KAFKA-2473
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Amar

 Hi,
 We are seeing a lot of unnecessary logging in server.log. As per the JIRA 
 below, it got fixed in KAFKA-1591, but I couldn't find the matching release. 
 Can you please advise? 
 https://issues.apache.org/jira/browse/KAFKA-1591 
 Thanks
 Amar



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-08-24 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709685#comment-14709685
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Edward Ribeiro
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-08-24_14:32:38.patch

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Edward Ribeiro
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Status: Patch Available  (was: In Progress)

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Edward Ribeiro
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-08-24 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709707#comment-14709707
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Hi, [~gwenshap], thanks for the kind words. :) I am sorry for not being able to 
give this patch the necessary love :( (largely because of my inexperience with 
the code base, I guess). I hope I can dig more into max message size problems 
soon, though. I have just rebased the patch and it now compiles successfully 
against the latest trunk.

Oh, one thing that caught my attention is that a chunk of code (below) was 
removed from TopicCommand, specifically from the alterTopic() method, in the 
context of KAFKA-2198 (a7e0ac). That seems to indicate that topic configuration 
can no longer be altered, right?

{code}
  val configs = AdminUtils.fetchTopicConfig(zkClient, topic)
  if (opts.options.has(opts.configOpt) || opts.options.has(opts.deleteConfigOpt)) {
    val configsToBeAdded = parseTopicConfigsToBeAdded(opts)
    val configsToBeDeleted = parseTopicConfigsToBeDeleted(opts)
    // compile the final set of configs
    configs.putAll(configsToBeAdded)
    configsToBeDeleted.foreach(config => configs.remove(config))
    AdminUtils.changeTopicConfig(zkClient, topic, configs)
    println("Updated config for topic \"%s\".".format(topic))
  }
{code}

Sorry if my doubt is naive. And feel free to merge this patch, but please take 
a look to check that I am doing it right. :) 

Thanks!
Edward

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
Assignee: Edward Ribeiro
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch, KAFKA-2338_2015-08-24_14:32:38.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709767#comment-14709767
 ] 

Edward Ribeiro commented on KAFKA-1811:
---

Created reviewboard https://reviews.apache.org/r/37723/diff/
 against branch origin/trunk

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811-2.patch, KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Status: Patch Available  (was: Open)

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811-2.patch, KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Attachment: KAFKA-1811.patch

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811-2.patch, KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Attachment: (was: KAFKA-1811-2.patch)

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709794#comment-14709794
 ] 

Edward Ribeiro commented on KAFKA-1811:
---

Hi [~gwenshap] and [~nehanarkhede], I have just uploaded a first cut addressing 
this issue. I looked for a ready-made ZK lock recipe, but couldn't find one in 
either ZkClient or Kafka, so I rolled my own. Hopefully, once Curator is 
integrated into Kafka we will have the chance to replace it with a superior 
implementation. :) If [~fpj] could take a look at my naive ZKLock, I would be 
really glad. ;) As this patch touches a critical code path, I had to adjust it 
because some unit tests were failing (mainly the time-sensitive ones), but I 
didn't have the opportunity to run the whole test suite, so any feedback on 
this is welcome. Please, let me know if this patch is really worthwhile. 

Thanks!
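For illustration, the uniqueness check amounts to an atomic "create if absent" on a host:port key, simulated below with an in-memory dict in Python (the actual patch does this against ZooKeeper with the ZKLock mentioned above; the class and method names here are hypothetical):

```python
class BrokerRegistry:
    """Toy model of broker registration that rejects duplicate host:port pairs."""

    def __init__(self):
        self._owner_by_endpoint = {}  # "host:port" -> broker id

    def register(self, broker_id, host, port):
        endpoint = "%s:%d" % (host, port)
        owner = self._owner_by_endpoint.get(endpoint)
        if owner is not None and owner != broker_id:
            # Mirrors the desired behavior: fail registration loudly instead
            # of letting two brokers share one advertised endpoint.
            raise RuntimeError("endpoint %s already registered by broker %d"
                               % (endpoint, owner))
        self._owner_by_endpoint[endpoint] = broker_id

registry = BrokerRegistry()
registry.register(1, "kafka01", 9092)
registry.register(1, "kafka01", 9092)  # re-registration by the same broker is fine
```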

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14709794#comment-14709794
 ] 

Edward Ribeiro edited comment on KAFKA-1811 at 8/24/15 6:42 PM:


Hi [~gwenshap] and [~nehanarkhede], I have just uploaded a first cut addressing 
this issue. I looked for a ready-made ZK lock recipe, but couldn't find one in 
either ZkClient or Kafka, so I rolled my own. Hopefully, once Curator is 
integrated into Kafka we will have the chance to replace it with a superior 
implementation. :) If [~fpj] could take a look at my naive ZKLock, I would be 
really glad. ;) As this patch touches a critical code path, I had to adjust it 
because some unit tests were failing (mainly the time-sensitive ones), but I 
didn't have the opportunity to run the whole test suite, so any feedback on 
this is welcome. Please, let me know if this patch is really worthwhile. 

update: this is my last use of the old review process; I am going to switch to 
GitHub next. :)

Thanks!


was (Author: eribeiro):
Hi [~gwenshap] and [~nehanarkhede], I have just uploaded a first cut addressing 
this issue. I looked for a ready-made ZK lock recipe, but couldn't find one in 
either ZkClient or Kafka, so I rolled my own. Hopefully, once Curator is 
integrated into Kafka we will have the chance to replace it with a superior 
implementation. :) If [~fpj] could take a look at my naive ZKLock, I would be 
really glad. ;) As this patch touches a critical code path, I had to adjust it 
because some unit tests were failing (mainly the time-sensitive ones), but I 
didn't have the opportunity to run the whole test suite, so any feedback on 
this is welcome. Please, let me know if this patch is really worthwhile. 

Thanks!

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-24 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Comment: was deleted

(was: A tentative (git) patch cut for addressing this issue.)

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1811 started by Edward Ribeiro.
-
 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-1811:
--
Attachment: KAFKA-1811-2.patch

A tentative (git) patch cut for addressing this issue.

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA-1811-2.patch, KAFKA_1811.patch


 Currently, we expect each of the registered broker to have a unique host:port 
 pair. However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705911#comment-14705911
 ] 

Edward Ribeiro commented on KAFKA-873:
--

Sorry, I didn't follow here. Guava is not currently bundled with Kafka, so there 
are no clients relying on Kafka to pull in Guava, right?

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705909#comment-14705909
 ] 

Edward Ribeiro commented on KAFKA-873:
--

Sorry, I didn't follow here. Guava is not currently bundled with Kafka, so there 
are no clients relying on Kafka to pull in Guava, right?

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2015-08-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705442#comment-14705442
 ] 

Edward Ribeiro commented on KAFKA-873:
--

[~granthenke], [~fpj], [~ijuma] Please, count on me to help with whatever you 
need to make this port happen. :) I am working on a ticket that could be 
simplified a lot just by using Curator.

 Consider replacing zkclient with curator (with zkclient-bridge)
 ---

 Key: KAFKA-873
 URL: https://issues.apache.org/jira/browse/KAFKA-873
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Scott Clasen
Assignee: Grant Henke

 If zkclient was replaced with curator and curator-x-zkclient-bridge it would 
 be initially a drop-in replacement
 https://github.com/Netflix/curator/wiki/ZKClient-Bridge
 With the addition of a few more props to ZkConfig, and a bit of code this 
 would open up the possibility of using ACLs in zookeeper (which arent 
 supported directly by zkclient), as well as integrating with netflix 
 exhibitor for those of us using that.
 Looks like KafkaZookeeperClient needs some love anyhow...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2418) Typo on official KAFKA documentation

2015-08-12 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14693984#comment-14693984
 ] 

Edward Ribeiro commented on KAFKA-2418:
---

Hi folks, if anyone has some free time, could you please review this and push 
it to trunk? :) 

I didn't set a reviewer yet because I know committers and contributors are 
super busy coding/reviewing more important stuff. 

Cheers!

/cc [~junrao] [~gwenshap] ?

 Typo on official KAFKA documentation
 

 Key: KAFKA-2418
 URL: https://issues.apache.org/jira/browse/KAFKA-2418
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.8.0, 0.8.1, 0.8.2.0
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Trivial
 Attachments: KAFKA-2418.patch


 I have just seen the typo below at http://kafka.apache.org/documentation.html. 
 Near the end of the document there is a reference to JMZ instead of JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1811) ensuring registered broker host:port is unique

2015-08-10 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro reassigned KAFKA-1811:
-

Assignee: Edward Ribeiro

 ensuring registered broker host:port is unique
 --

 Key: KAFKA-1811
 URL: https://issues.apache.org/jira/browse/KAFKA-1811
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
Assignee: Edward Ribeiro
  Labels: newbie
 Attachments: KAFKA_1811.patch


 Currently, we expect each registered broker to have a unique host:port pair. 
 However, we don't enforce that, which causes various weird problems. It 
 would be useful to ensure this during broker registration.
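For illustration only, the uniqueness check described above can be sketched as follows. `BrokerRegistry` and its methods are made-up names for this sketch, not Kafka's actual registration path (which goes through ZooKeeper):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of enforcing unique host:port pairs at broker registration time.
// BrokerRegistry is a hypothetical name; real Kafka registers brokers in ZooKeeper.
public class BrokerRegistry {
    // Maps "host:port" to the broker id that first claimed it.
    private final Map<String, Integer> endpoints = new HashMap<>();

    /** Registers a broker, rejecting a host:port already claimed by another broker id. */
    public void register(int brokerId, String host, int port) {
        String key = host + ":" + port;
        Integer owner = endpoints.putIfAbsent(key, brokerId);
        if (owner != null && owner != brokerId) {
            throw new IllegalStateException(
                "host:port " + key + " is already registered by broker " + owner);
        }
    }
}
```

Re-registration of the same broker id with the same host:port stays legal in this sketch; only a second broker id claiming an existing endpoint is rejected.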





[jira] [Updated] (KAFKA-2418) Typo on official KAFKA documentation

2015-08-10 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2418:
--
Affects Version/s: 0.8.0
   0.8.1
   0.8.2.0

 Typo on official KAFKA documentation
 

 Key: KAFKA-2418
 URL: https://issues.apache.org/jira/browse/KAFKA-2418
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.8.0, 0.8.1, 0.8.2.0
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Trivial
 Attachments: KAFKA-2418.patch


 I have just seen the typo below at http://kafka.apache.org/documentation.html. 
 At the end of the document there's a reference to JMZ instead of JMX.





[jira] [Updated] (KAFKA-2418) Typo on official KAFKA documentation

2015-08-10 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2418:
--
Attachment: KAFKA-2418.patch

Attaching a patch for the website documentation.

 Typo on official KAFKA documentation
 

 Key: KAFKA-2418
 URL: https://issues.apache.org/jira/browse/KAFKA-2418
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Trivial
 Attachments: KAFKA-2418.patch


 I have just seen the typo below at http://kafka.apache.org/documentation.html. 
 At the end of the document there's a reference to JMZ instead of JMX.





[jira] [Updated] (KAFKA-2418) Typo on official KAFKA documentation

2015-08-10 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2418:
--
Status: Patch Available  (was: Open)

 Typo on official KAFKA documentation
 

 Key: KAFKA-2418
 URL: https://issues.apache.org/jira/browse/KAFKA-2418
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.8.2.0, 0.8.1, 0.8.0
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Trivial
 Attachments: KAFKA-2418.patch


 I have just seen the typo below at http://kafka.apache.org/documentation.html. 
 At the end of the document there's a reference to JMZ instead of JMX.





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-23 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639011#comment-14639011
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Hi [~ijuma], no problem, mate. If it looks good to you, could you assign it 
to yourself and commit it, please? Cheers. :)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch, 
 KAFKA-2355_2015-07-22_22:23:13.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 
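For illustration only, the behaviour the unit test validates can be modelled in miniature like this; the class below is a made-up stand-in written for this sketch, and only the exception name comes from the issue itself:

```java
import java.util.HashSet;
import java.util.Set;

// Miniature model of the delete-topic bookkeeping the unit test exercises:
// marking a topic that is already queued for deletion must fail loudly.
public class DeleteTopicModel {
    public static class TopicAlreadyMarkedForDeletionException extends RuntimeException {
        public TopicAlreadyMarkedForDeletionException(String msg) { super(msg); }
    }

    private final Set<String> markedForDeletion = new HashSet<>();

    public void markForDeletion(String topic) {
        // Set.add() returns false when the topic was already present.
        if (!markedForDeletion.add(topic)) {
            throw new TopicAlreadyMarkedForDeletionException(
                "topic " + topic + " is already marked for deletion");
        }
    }
}
```

A test then marks a topic once (which succeeds) and a second time (which must throw).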





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-23 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639095#comment-14639095
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Oh, thanks [~gwenshap]! Sorry for any inconvenience or missteps.

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Assignee: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch, 
 KAFKA-2355_2015-07-22_22:23:13.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-22 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638036#comment-14638036
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Updated reviewboard https://reviews.apache.org/r/36670/diff/
 against branch origin/trunk

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch, 
 KAFKA-2355_2015-07-22_22:23:13.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-22 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Attachment: KAFKA-2355_2015-07-22_22:23:13.patch

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch, 
 KAFKA-2355_2015-07-22_22:23:13.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-22 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Attachment: KAFKA-2355_2015-07-22_21:37:51.patch

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-22 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14637984#comment-14637984
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Updated reviewboard https://reviews.apache.org/r/36670/diff/
 against branch origin/trunk

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-22 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14637996#comment-14637996
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Hi [~ijuma] and [~granthenke], I have updated the patch as requested in the 
latest review. Let me know if something is missing or I misinterpreted 
anything, please. :)

ps: could one of you assign this little ticket to me? I asked yesterday for 
inclusion in the collaborators' list so that I could assign issues to myself, 
but it hasn't been granted yet. :-P Cheers!

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch, KAFKA-2355_2015-07-22_21:37:51.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-07-21_13:21:19.patch

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)
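Point 1 above boils down to a comparison against the broker-side default. A minimal sketch follows; the class, method, and constant names are made up for illustration, and the default value used is an assumption (message.max.bytes has varied across Kafka releases and is configurable per broker):

```java
// Sketch of the kafka-topics.sh warning from point 1: flag a per-topic
// max.message.bytes that exceeds the broker-side default, since replica
// fetchers and consumers must be resized to match. All names are illustrative.
public class MaxMessageBytesCheck {
    // Assumed broker default for message.max.bytes; the real value is per-broker config.
    static final int DEFAULT_MAX_MESSAGE_BYTES = 1000000;

    /** Returns a prominent warning when the topic override exceeds the broker default. */
    public static String warnIfOversized(String topic, int topicMaxMessageBytes) {
        if (topicMaxMessageBytes <= DEFAULT_MAX_MESSAGE_BYTES) {
            return "";
        }
        return "*** WARNING: max.message.bytes=" + topicMaxMessageBytes
            + " for topic " + topic + " exceeds the broker default "
            + DEFAULT_MAX_MESSAGE_BYTES
            + "; also raise replica.fetch.max.bytes on brokers and "
            + "fetch.message.max.bytes on consumers, or replication and "
            + "consumption of large messages may fail. ***";
    }
}
```

The point of returning the warning as a string is that the tool can print it unconditionally and prominently in its normal output, which is what the issue asks for.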





[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635208#comment-14635208
 ] 

Edward Ribeiro commented on KAFKA-251:
--

[~ijuma] Hi, if it is still relevant and no one is working on it, could you 
assign it to me, please?

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.
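One MBean per [ topic, partition, groupid ] means encoding those three keys in the JMX ObjectName, so a monitoring system can address and graph each lag value separately instead of parsing a string attribute. A sketch using only the standard javax.management API; the domain and key names below are illustrative, not the ones Kafka actually uses:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Sketch of per-(topic, partition, groupid) MBean naming: every key a
// monitoring system needs for graphing goes into the ObjectName, so each
// lag value can be a separate numeric attribute rather than one big string.
// The domain and key names are illustrative, not Kafka's actual metric names.
public class LagMBeanNames {
    public static ObjectName nameFor(String groupId, String topic, int partition)
            throws MalformedObjectNameException {
        return new ObjectName("example.kafka.consumer:type=PartitionLag"
            + ",groupid=" + groupId + ",topic=" + topic + ",partition=" + partition);
    }
}
```

With names shaped like this, a tool can query `example.kafka.consumer:type=PartitionLag,*` and read one numeric lag gauge per partition per consumer group.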





[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635224#comment-14635224
 ] 

Edward Ribeiro commented on KAFKA-251:
--

I cannot. Afaik, it depends on the project, and some (many?) restrict the 
assign operation to committers and core members, which I understand perfectly 
and comply with.

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635216#comment-14635216
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Oh, ignore the previous message, [~gwenshap], I will be reworking the patch to 
fix it.

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635330#comment-14635330
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Created] (KAFKA-2355) Creating a unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)
Edward Ribeiro created KAFKA-2355:
-

 Summary: Creating a unit test to validate the deletion of a 
partition marked as deleted
 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor


Trying to delete a partition marked as deleted throws 
{{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Summary: Add an unit test to validate the deletion of a partition marked as 
deleted  (was: Creating a unit test to validate the deletion of a partition 
marked as deleted)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Issue Type: Test  (was: Sub-task)
Parent: (was: KAFKA-2345)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636116#comment-14636116
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Created reviewboard https://reviews.apache.org/r/36670/diff/
 against branch origin/trunk

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Attachment: KAFKA-2355.patch

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Affects Version/s: 0.8.2.1
   Status: Patch Available  (was: Open)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636119#comment-14636119
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Hi [~singhashish] and [~gwenshap]. I hope you don't mind, but I saw that 
KAFKA-2345 was lacking a unit test to validate the new exception it 
introduced. As that issue was already closed, I created a new ticket and 
added the unit test. Please let me know what you think. Cheers! :)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 





[jira] [Commented] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635864#comment-14635864
 ] 

Edward Ribeiro commented on KAFKA-2354:
---

Hi [~Skandragon], unfortunately I was unable to reproduce this issue on both 
trunk and 0.8.2.1. Could you provide more details on your reproduction steps, 
please? In the meantime, more experienced Kafka devs may have better luck 
reproducing it.

 setting log.dirs property makes tools fail if there is a comma
 --

 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff

 If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
 kafka-topics --describe --zookeeper localhost/kafka
 Error: Could not find or load main class .u1.kafka,
 The broker will start, however.  If the tools are run from a machine without 
 multiple entries in log.dirs, it works.
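The error message above suggests the raw comma-separated value leaked onto the java command line rather than being a config-parsing problem; whatever the root cause, the config side of a comma-separated log.dirs value is a plain split-and-trim. The helper below is written for illustration and is not Kafka's actual parser:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative parser for a comma-separated log.dirs value. The reported
// failure ("Could not find or load main class .u1.kafka,") hints that the
// tools' shell script mishandled the raw value; parsing the property itself
// is just splitting on commas and trimming.
public class LogDirsParser {
    public static List<String> parse(String logDirs) {
        return Arrays.stream(logDirs.split(","))
                     .map(String::trim)
                     .filter(s -> !s.isEmpty())
                     .collect(Collectors.toList());
    }
}
```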





[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14634296#comment-14634296
 ] 

Edward Ribeiro commented on KAFKA-2350:
---

Nit: wouldn't the inverse of ``pause`` be ``resume``?

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.
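The proposed semantics, where poll() keeps running (so no rebalance is triggered) but skips fetching paused topics, can be modelled with a toy consumer. Everything below is an illustrative stand-in, not the KafkaConsumer API; note also that the API that eventually shipped uses pause/resume over TopicPartition collections rather than topic names:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy in-memory consumer illustrating the proposed pause semantics:
// poll() still runs, but records for paused topics are skipped, not lost.
public class PausableToyConsumer {
    private final Map<String, Deque<String>> pending = new HashMap<>();
    private final Set<String> paused = new HashSet<>();

    public void enqueue(String topic, String record) {
        pending.computeIfAbsent(topic, t -> new ArrayDeque<>()).add(record);
    }

    public void pause(String... topics) { for (String t : topics) paused.add(t); }
    public void unpause(String... topics) { for (String t : topics) paused.remove(t); }

    /** Drains one record from each unpaused topic with pending data. */
    public List<String> poll() {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Deque<String>> e : pending.entrySet()) {
            if (!paused.contains(e.getKey()) && !e.getValue().isEmpty()) {
                out.add(e.getValue().poll());
            }
        }
        return out;
    }
}
```

Pausing a topic leaves its queued records in place, so they are delivered by the first poll() after the topic is unpaused; that is the join use case described above.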





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-20 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14634465#comment-14634465
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Hi [~gwenshap], whenever you have free cycles, could you please review this 
puppy? :) Still getting acquainted with the code base, so I hope it's not far 
off from what's required. 

Thanks!

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338.patch

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631784#comment-14631784
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Created reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631785#comment-14631785
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Hi [~ewencp], as this is my first official Kafka patch (yay!) it may have some 
misunderstandings or plain errors (I was particularly in doubt about the 
logging of a useful message by the broker's replica fetcher). Please, whenever 
you have time, see if it's okay. Thanks!

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14632244#comment-14632244
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-17 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-07-18_00:37:31.patch

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)





[jira] [Commented] (KAFKA-2335) Javadoc for Consumer says that it's thread-safe

2015-07-15 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629030#comment-14629030
 ] 

Edward Ribeiro commented on KAFKA-2335:
---

Thanks [~hachikuji]! Going to tackle it asap. :)

 Javadoc for Consumer says that it's thread-safe
 ---

 Key: KAFKA-2335
 URL: https://issues.apache.org/jira/browse/KAFKA-2335
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Ismael Juma
Assignee: Jason Gustafson
 Fix For: 0.8.3


 This looks like it was left there by mistake:
 {quote}
  * The consumer is thread safe but generally will be used only from within a 
 single thread. The consumer client has no threads of it's own, all work is 
 done in the caller's thread when calls are made on the various methods 
 exposed.
 {quote}
 A few paragraphs below it says:
 {quote}
 The Kafka consumer is NOT thread-safe. All network I/O happens in the thread 
 of the application making the call. It is the responsibility of the user to 
 ensure that multi-threaded access is properly synchronized. Un-synchronized 
 access will result in {@link ConcurrentModificationException}.
 {quote}
 This matches what the code does, so the former quoted section should probably 
 be deleted.
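
The NOT-thread-safe behavior described in the second quote can be enforced with a light-weight ownership check on every public method. The sketch below imitates that pattern under the assumption of a simple CAS-based guard; it is an illustration, not the actual KafkaConsumer source.

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative imitation of a single-threaded-access guard; not the
// actual KafkaConsumer implementation.
public class SingleThreadGuard {
    private static final long NO_OWNER = -1L;
    private final AtomicLong ownerThreadId = new AtomicLong(NO_OWNER);

    // Called on entry to every public method: fail fast if the client is
    // already in use rather than silently corrupt internal state.
    public void acquire() {
        long caller = Thread.currentThread().getId();
        if (!ownerThreadId.compareAndSet(NO_OWNER, caller))
            throw new ConcurrentModificationException(
                "client is not safe for multi-threaded access");
    }

    public void release() {
        ownerThreadId.set(NO_OWNER);
    }

    public void poll() {
        acquire();
        try {
            // network I/O would happen here, in the caller's thread
        } finally {
            release();
        }
    }

    public static void main(String[] args) {
        SingleThreadGuard g = new SingleThreadGuard();
        g.poll(); // single-threaded use passes the guard
        System.out.println("single-threaded call completed");
    }
}
```

Throwing ConcurrentModificationException (rather than blocking) matches the documented contract: synchronization is the caller's responsibility, and misuse is surfaced immediately.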





[jira] [Commented] (KAFKA-2335) Javadoc for Consumer says that it's thread-safe

2015-07-15 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628383#comment-14628383
 ] 

Edward Ribeiro commented on KAFKA-2335:
---

LOL, no need to apologize. :) I have just joined the project, so I'm still 
testing the waters. More issues to come (extra help appreciated if you can 
point me to low-hanging-fruit issues). ;) 

As for the patch, I have taken a look and it seems alright. LGTM. 

Cheers!

 Javadoc for Consumer says that it's thread-safe
 ---

 Key: KAFKA-2335
 URL: https://issues.apache.org/jira/browse/KAFKA-2335
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Ismael Juma
Assignee: Jason Gustafson
 Fix For: 0.8.3


 This looks like it was left there by mistake:
 {quote}
  * The consumer is thread safe but generally will be used only from within a 
 single thread. The consumer client has no threads of it's own, all work is 
 done in the caller's thread when calls are made on the various methods 
 exposed.
 {quote}
 A few paragraphs below it says:
 {quote}
 The Kafka consumer is NOT thread-safe. All network I/O happens in the thread 
 of the application making the call. It is the responsibility of the user to 
 ensure that multi-threaded access is properly synchronized. Un-synchronized 
 access will result in {@link ConcurrentModificationException}.
 {quote}
 This matches what the code does, so the former quoted section should probably 
 be deleted.





[jira] [Commented] (KAFKA-2335) Javadoc for Consumer says that it's thread-safe

2015-07-15 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628365#comment-14628365
 ] 

Edward Ribeiro commented on KAFKA-2335:
---

I was going to work on this, but go ahead. :)

 Javadoc for Consumer says that it's thread-safe
 ---

 Key: KAFKA-2335
 URL: https://issues.apache.org/jira/browse/KAFKA-2335
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Jason Gustafson

 This looks like it was left there by mistake:
 {quote}
  * The consumer is thread safe but generally will be used only from within a 
 single thread. The consumer client has no threads of it's own, all work is 
 done in the caller's thread when calls are made on the various methods 
 exposed.
 {quote}
 A few paragraphs below it says:
 {quote}
 The Kafka consumer is NOT thread-safe. All network I/O happens in the thread 
 of the application making the call. It is the responsibility of the user to 
 ensure that multi-threaded access is properly synchronized. Un-synchronized 
 access will result in {@link ConcurrentModificationException}.
 {quote}
 This matches what the code does, so the former quoted section should probably 
 be deleted.





[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-07-14 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627292#comment-14627292
 ] 

Edward Ribeiro commented on KAFKA-2210:
---

Hi [~parth.brahmbhatt], left some minor comments on the review board. See if it 
makes sense. Cheers!

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch


 This is the first subtask for Kafka-1688. As Part of this jira we intend to 
 agree on all the public entities, configs and changes to existing kafka 
 classes to allow pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 





[jira] [Commented] (KAFKA-2327) broker doesn't start if config defines advertised.host but not advertised.port

2015-07-09 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621176#comment-14621176
 ] 

Edward Ribeiro commented on KAFKA-2327:
---

Left some minor review comments, hope you don't mind. :)

 broker doesn't start if config defines advertised.host but not advertised.port
 --

 Key: KAFKA-2327
 URL: https://issues.apache.org/jira/browse/KAFKA-2327
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Geoffrey Anderson
Assignee: Geoffrey Anderson
Priority: Minor

 To reproduce locally, in server.properties, define advertised.host and 
 port, but not advertised.port 
 port=9092
 advertised.host.name=localhost
 Then start zookeeper and try to start kafka. The result is an error like so:
 [2015-07-09 11:29:20,760] FATAL  (kafka.Kafka$)
 kafka.common.KafkaException: Unable to parse PLAINTEXT://localhost:null to a broker endpoint
    at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
    at kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
    at kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:309)
    at kafka.server.KafkaConfig.getAdvertisedListeners(KafkaConfig.scala:728)
    at kafka.server.KafkaConfig.&lt;init&gt;(KafkaConfig.scala:668)
    at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:541)
    at kafka.Kafka$.main(Kafka.scala:58)
    at kafka.Kafka.main(Kafka.scala)
 Looks like this was changed in 5c9040745466945a04ea0315de583ccdab0614ac
 the cause seems to be in KafkaConfig.scala in the getAdvertisedListeners 
 method, and I believe the fix is (starting at line 727)
 {code}
 ...
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null ||
     getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
     getString(KafkaConfig.AdvertisedHostNameProp) + ":" +
     getInt(KafkaConfig.AdvertisedPortProp))
 ...
 {code}
 which should become
 {code}
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null ||
     getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
     advertisedHostName + ":" + advertisedPort)
 {code}
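
The failure happens because the raw advertised.port property is null, so string concatenation yields the unparsable PLAINTEXT://localhost:null; the proposed fix routes through derived values that fall back to host.name/port when the advertised variants are unset. A simplified standalone illustration of that fallback (invented for illustration, not Kafka's actual code):

```java
// Simplified illustration of the advertised-listener fallback described
// above; not the actual KafkaConfig code.
public class AdvertisedEndpoint {
    // Fall back to the plain listener values when the advertised ones are unset,
    // so a null never reaches the endpoint string.
    public static String endpointString(String host, int port,
                                        String advertisedHost, Integer advertisedPort) {
        String effectiveHost = (advertisedHost != null) ? advertisedHost : host;
        int effectivePort = (advertisedPort != null) ? advertisedPort : port;
        return "PLAINTEXT://" + effectiveHost + ":" + effectivePort;
    }

    public static void main(String[] args) {
        // advertised.host.name set, advertised.port missing: port falls back to 9092
        System.out.println(endpointString("0.0.0.0", 9092, "localhost", null));
    }
}
```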





[jira] [Commented] (KAFKA-1227) Code dump of new producer

2014-02-04 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890594#comment-13890594
 ] 

Edward Ribeiro commented on KAFKA-1227:
---

Two review board entries were created for the same issue. Which one is valid? :)

 Code dump of new producer
 -

 Key: KAFKA-1227
 URL: https://issues.apache.org/jira/browse/KAFKA-1227
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jay Kreps
 Attachments: KAFKA-1227.patch, KAFKA-1227.patch, KAFKA-1227.patch


 The plan is to take a dump of the producer code as is and then do a series 
 of post-commit reviews to get it into shape. This bug tracks just the code 
 dump.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (KAFKA-1227) Code dump of new producer

2014-01-27 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882758#comment-13882758
 ] 

Edward Ribeiro commented on KAFKA-1227:
---

Hello folks,

I've just started to look into your new API design and would like to register a 
few observations, from an API design perspective, for now. I hope you enjoy my 
suggestions and, please, let me know what you think about them. Excuse me in 
advance for the long message. Well, let's start:

It is a good practice to replace the implementation-specific parameter type 
(List) with a more general (abstract or interface) type, so

{code}
Cluster(java.util.List<Node> nodes, java.util.List<PartitionInfo> partitions)
{code}

becomes

{code}
Cluster(java.util.Collection<Node> nodes, java.util.Collection<PartitionInfo> partitions)
{code}

This makes it possible to pass a Set or a List, for example. The same goes for 

{code}
bootstrap(java.util.List<java.net.InetSocketAddress> addresses)
{code}

that becomes

{code}
bootstrap(java.util.Collection<java.net.InetSocketAddress> addresses)
{code}

This can seem futile, but I have seen parts of the ZooKeeper API that need a 
fix but are effectively frozen because the API published a concrete class. 
:( 

Also, methods that return a collection should return a more generic collection 
type, so that a future swap (say, changing the List to a Set) doesn't become 
too difficult. Therefore,

{code}
java.util.List<PartitionInfo> partitionsFor(java.lang.String topic)
{code}

becomes

{code}
java.util.Collection<PartitionInfo> partitionsFor(java.lang.String topic)
{code}
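
The widening from List to Collection parameters can be seen in a toy example; the class and method below are stand-ins invented for illustration, not the real Cluster API:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;

// Toy stand-in showing that a Collection parameter accepts a List, a Set,
// or any other collection, while a List parameter would not.
public class CollectionParamDemo {
    public static int countNodes(Collection<String> nodes) {
        return nodes.size();
    }

    public static void main(String[] args) {
        System.out.println(countNodes(Arrays.asList("broker1", "broker2")));       // a List works
        System.out.println(countNodes(new HashSet<>(Arrays.asList("broker1"))));   // so does a Set
    }
}
```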

I have also looked into empty() method. Hey, it returns a new object each time 
it's called! See

{code}
/**
 * Create an empty cluster instance with no nodes and no topic-partitions.
 */
public static Cluster empty() {
    return new Cluster(new ArrayList<Node>(0), new ArrayList<PartitionInfo>(0));
}
{code}

There's no need to do this. You can create an EMPTY_CLUSTER field as below and 
then return it each time the method is called. See
{code}
private static final Cluster EMPTY_CLUSTER =
    new Cluster(Collections.<Node>emptyList(), Collections.<PartitionInfo>emptyList());

...

/**
 * Create an empty cluster instance with no nodes and no topic-partitions.
 */
public static Cluster empty() {
    return EMPTY_CLUSTER;
}
{code}

This option avoids creating unnecessary objects and makes identity comparison 
easy on the client side, as in if (myCluster == Cluster.empty()). ;)
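
The shared empty-instance idea can be shown end to end with a simplified stand-in class (again invented for illustration, not the real Cluster):

```java
import java.util.Collection;
import java.util.Collections;

// Simplified stand-in demonstrating the shared empty-instance pattern
// suggested above; not the real Cluster class.
public class EmptyInstanceDemo {
    private final Collection<String> nodes;

    EmptyInstanceDemo(Collection<String> nodes) {
        this.nodes = nodes;
    }

    // A single shared instance, created once when the class is loaded.
    private static final EmptyInstanceDemo EMPTY =
        new EmptyInstanceDemo(Collections.<String>emptyList());

    // Always returns the same object, so callers may compare by identity.
    public static EmptyInstanceDemo empty() {
        return EMPTY;
    }

    public boolean isEmpty() {
        return nodes.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(empty() == empty()); // same instance on every call
    }
}
```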

I also see the need to add an 'isEmpty()' method so that users can check 
whether the Cluster is empty. If isEmpty is added, the declaration of 
EMPTY_CLUSTER becomes

{code}
private static final Cluster EMPTY_CLUSTER =
    new Cluster(Collections.<Node>emptyList(), Collections.<PartitionInfo>emptyList()) {
        @Override
        public boolean isEmpty() {
            return true;
        }
    };
{code}

As I said, this was just a first glance over the code. I may have further 
suggestions, more algorithmic or again from an API design perspective, but 
that's all for now. I hope you like it.

Cheers,
Edward Ribeiro

 Code dump of new producer
 -

 Key: KAFKA-1227
 URL: https://issues.apache.org/jira/browse/KAFKA-1227
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jay Kreps
 Attachments: KAFKA-1227.patch


 The plan is to take a dump of the producer code as is and then do a series 
 of post-commit reviews to get it into shape. This bug tracks just the code 
 dump.


