Re: Review Request 36590: Patch for KAFKA-2275

2015-07-21 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36590/#review92557
---


Hey guys, I realized that most of this code has changed, as the design to 
address this itself changed a bit. The new diff is available at 
https://reviews.apache.org/r/36681/. My apologies for the inconvenience; I 
should have vetted the design details before posting a patch.

- Ashish Singh


On July 20, 2015, 5:44 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36590/
> ---
> 
> (Updated July 20, 2015, 5:44 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2275
> https://issues.apache.org/jira/browse/KAFKA-2275
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Add logic to get all topics when needMetadataForAllTopics is set on metadata
> 
> 
> Return metadata for all topics if empty list is passed to partitionsFor
> 
> 
> KAFKA-2275: Add a "Map> partitionsFor(String... 
> topics)" API to the new consumer
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/Metadata.java 
> 0387f2602c93a62cd333f1b3c569ca6b66b5b779 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 48fe7961e2215372d8033ece4af739ea06c6457b 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> 252b759c0801f392e3526b0f31503b4b8fbf1c8a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> bea3d737c51be77d5b5293cdd944d33b905422ba 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> c14eed1e95f2e682a235159a366046f00d1d90d6 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
>  9517d9d0cd480d5ba1d12f1fde7963e60528d2f8 
>   clients/src/main/java/org/apache/kafka/common/Cluster.java 
> 60594a7dce90130911a626ea80cf80d815aeb46e 
>   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
> 3eb5f95731a3f06f662b334ab2b3d0ad7fa9e1ca 
> 
> Diff: https://reviews.apache.org/r/36590/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



[jira] [Updated] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-21 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2275:
--
Attachment: KAFKA-2275.patch

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275.patch, 
> KAFKA-2275_2015-07-17_21:39:27.patch, KAFKA-2275_2015-07-20_10:44:19.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done on the Kafka side, but there 
> are some use cases where consumers want regex subscriptions without 
> Kafka-side partition assignment, preferring their own specific partition 
> assignment instead. With ListTopics() they can periodically check for topic 
> list changes and specifically subscribe to the partitions of the new topics.
> For the implementation, it involves sending a TopicMetadataRequest to a 
> random broker and parsing the response.
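A minimal sketch of that polling pattern in plain Java, with hypothetical names (TopicWatcher is illustrative, not part of the consumer API; the real topic list would come from the proposed ListTopics() call):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper for the use case above: the application periodically
// fetches the full topic list and diffs it against the previous snapshot to
// find newly created topics, which it can then assign partitions from itself.
public class TopicWatcher {
    private Set<String> knownTopics = new HashSet<>();

    // Returns the topics that appeared since the last call and updates the snapshot.
    public Set<String> newTopics(Set<String> currentTopics) {
        Set<String> added = new HashSet<>(currentTopics);
        added.removeAll(knownTopics);
        knownTopics = new HashSet<>(currentTopics);
        return added;
    }

    public static void main(String[] args) {
        TopicWatcher watcher = new TopicWatcher();
        System.out.println(watcher.newTopics(new HashSet<>(Arrays.asList("logs", "metrics"))));
        // A later poll sees one new topic appear.
        System.out.println(watcher.newTopics(new HashSet<>(Arrays.asList("logs", "metrics", "events"))));
    }
}
```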



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2275) Add a ListTopics() API to the new consumer

2015-07-21 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636378#comment-14636378
 ] 

Ashish K Singh commented on KAFKA-2275:
---

Created reviewboard https://reviews.apache.org/r/36681/
 against branch trunk

> Add a ListTopics() API to the new consumer
> --
>
> Key: KAFKA-2275
> URL: https://issues.apache.org/jira/browse/KAFKA-2275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Ashish K Singh
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-2275.patch, KAFKA-2275.patch, 
> KAFKA-2275_2015-07-17_21:39:27.patch, KAFKA-2275_2015-07-20_10:44:19.patch
>
>
> With regex subscription like
> {code}
> consumer.subscribe("topic*")
> {code}
> The partition assignment is automatically done on the Kafka side, but there 
> are some use cases where consumers want regex subscriptions without 
> Kafka-side partition assignment, preferring their own specific partition 
> assignment instead. With ListTopics() they can periodically check for topic 
> list changes and specifically subscribe to the partitions of the new topics.
> For the implementation, it involves sending a TopicMetadataRequest to a 
> random broker and parsing the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36681: Patch for KAFKA-2275

2015-07-21 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36681/
---

Review request for kafka.


Bugs: KAFKA-2275
https://issues.apache.org/jira/browse/KAFKA-2275


Repository: kafka


Description
---

KAFKA-2275: Add a ListTopics() API to the new consumer


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
ed4c0d98596cc294757f35df8c8cbc8e36ff42de 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
48fe7961e2215372d8033ece4af739ea06c6457b 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
252b759c0801f392e3526b0f31503b4b8fbf1c8a 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
bea3d737c51be77d5b5293cdd944d33b905422ba 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
c14eed1e95f2e682a235159a366046f00d1d90d6 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 9517d9d0cd480d5ba1d12f1fde7963e60528d2f8 
  core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
3eb5f95731a3f06f662b334ab2b3d0ad7fa9e1ca 

Diff: https://reviews.apache.org/r/36681/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Commented] (KAFKA-2092) New partitioning for better load balancing

2015-07-21 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636353#comment-14636353
 ] 

Jason Gustafson commented on KAFKA-2092:


[~azaroth], I'm actually not sure that Samza is the better place. I thought 
that it would have a better chance of integration since Samza already exposes a 
processing layer, which seems to be needed to use this partitioner. I think 
perhaps what we should do is open this discussion to the Kafka dev list. I 
think we're missing some guidance on what partitioners should be included in 
Kafka. As far as I know, Kafka has only provided primitive partitioning 
strategies up to now, but as Kafka starts to provide more core facilities for 
stream processing, perhaps it should start to include more.
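For readers unfamiliar with the scheme, a minimal sketch of the "power of both choices" idea behind Partial Key Grouping (the class and the two stand-in hash functions are illustrative, not taken from the attached patches):

```java
// Illustrative sketch of Partial Key Grouping: each key gets two candidate
// partitions from two hash functions, and each message is routed to whichever
// candidate has received less load so far. Real implementations would use two
// independent, high-quality hashes (e.g. murmur with two seeds).
public class PartialKeyGrouper {
    private final long[] load;  // messages sent to each partition so far

    public PartialKeyGrouper(int numPartitions) {
        this.load = new long[numPartitions];
    }

    public int partition(String key) {
        int h1 = Math.floorMod(key.hashCode(), load.length);
        int h2 = Math.floorMod(31 * key.hashCode() + 17, load.length);
        // Pick the less-loaded of the two candidates.
        int choice = load[h1] <= load[h2] ? h1 : h2;
        load[choice]++;
        return choice;
    }

    public static void main(String[] args) {
        PartialKeyGrouper grouper = new PartialKeyGrouper(4);
        // A skewed hot key no longer hammers one partition: its traffic is
        // split across its two candidates.
        for (int i = 0; i < 8; i++)
            System.out.print(grouper.partition("hot-key") + " ");
        System.out.println();
    }
}
```

This is the sense in which it balances better than plain hashing (a hot key is split two ways) while needing far less downstream state than round robin (each key still lands on at most two partitions).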

> New partitioning for better load balancing
> --
>
> Key: KAFKA-2092
> URL: https://issues.apache.org/jira/browse/KAFKA-2092
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Gianmarco De Francisci Morales
>Assignee: Jun Rao
> Attachments: KAFKA-2092-v1.patch, KAFKA-2092-v2.patch
>
>
> We have recently studied the problem of load balancing in distributed stream 
> processing systems such as Samza [1].
> In particular, we focused on what happens when the key distribution of the 
> stream is skewed when using key grouping.
> We developed a new stream partitioning scheme (which we call Partial Key 
> Grouping). It achieves better load balancing than hashing while being more 
> scalable than round robin in terms of memory.
> In the paper we show a number of mining algorithms that are easy to implement 
> with partial key grouping, and whose performance can benefit from it. We 
> think that it might also be useful for a larger class of algorithms.
> PKG has already been integrated in Storm [2], and I would like to be able to 
> use it in Samza as well. As far as I understand, Kafka producers are the ones 
> that decide how to partition the stream (or Kafka topic).
> I do not have experience with Kafka, however partial key grouping is very 
> easy to implement: it requires just a few lines of code in Java when 
> implemented as a custom grouping in Storm [3].
> I believe it should be very easy to integrate.
> For all these reasons, I believe it will be a nice addition to Kafka/Samza. 
> If the community thinks it's a good idea, I will be happy to offer support in 
> the porting.
> References:
> [1] 
> https://melmeric.files.wordpress.com/2014/11/the-power-of-both-choices-practical-load-balancing-for-distributed-stream-processing-engines.pdf
> [2] https://issues.apache.org/jira/browse/STORM-632
> [3] https://github.com/gdfm/partial-key-grouping



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36670: Patch for KAFKA-2355

2015-07-21 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36670/#review92551
---

Ship it!


Thanks for the patch Edward.

- Ashish Singh


On July 22, 2015, 1:46 a.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36670/
> ---
> 
> (Updated July 22, 2015, 1:46 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2355
> https://issues.apache.org/jira/browse/KAFKA-2355
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2355 Add an unit test to validate the deletion of a partition marked as 
> deleted
> 
> 
> Diffs
> -
> 
>   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
> fa8ce259a2832ab86f9dda8c1d409b2c42d43ae9 
> 
> Diff: https://reviews.apache.org/r/36670/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin


> On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
> > Thanks for looking into that. Exception handling was the most challenging 
> > part of rewriting SocketServer, so I'm glad to see more eyes on this 
> > implementation.
> > 
> > I have a concern regarding the right way to handle an unexpected exceptions.

Hi Gwen, thanks for the quick review. I replied to your comments below. Mind 
taking another look?


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


On July 22, 2015, 5:02 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36664/
> ---
> 
> (Updated July 22, 2015, 5:02 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2353
> https://issues.apache.org/jira/browse/KAFKA-2353
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Gwen's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36664/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Comment Edited] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636319#comment-14636319
 ] 

Jiangjie Qin edited comment on KAFKA-2350 at 7/22/15 5:27 AM:
--

[~jkreps][~hachikuji], I actually was not proposing to reuse
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
So I was thinking we follow the current convention, which is:
1. If you are subscribing/unsubscribing to a partition explicitly, you are on 
your own.
2. If you are subscribing/unsubscribing to a topic, we use the consumer 
coordinator for partition assignment.

I assume the only use case we are trying to address is when the user is using 
the consumer coordinator and wants to temporarily stop consuming from a topic 
without triggering a consumer rebalance.

If so, to unsubscribe from a topic we can do something like the following:
{code}
...
for (TopicPartition tp : consumer.assignedTopicPartitions.get(topic)) {
    unsubscribe(tp);
}
{code}
To resubscribe, we can do something similar but call subscribe(tp) instead.

Regarding [~jkreps]'s concern: currently, subscribing/unsubscribing to a 
partition results in an error if the user is using the consumer coordinator to 
assign partitions. In this approach, it becomes an error only if the user 
subscribes/unsubscribes to a partition that was not assigned by the consumer 
coordinator. So the subscription set must be a subset of the 
assignedTopicPartition set.

This approach might need to expose an assignedTopicPartitions() interface, 
but I can see that being useful in quite a few use cases.


was (Author: becket_qin):
[~jkreps][~hachikuji], I actually was not proposing to reuse
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
So I was thinking we follow the current convention, which is:
1. If you are subscribing/unsubscribing to a partition explicitly, you are on 
your own.
2. If you are subscribing/unsubscribing to a topic, we use the consumer 
coordinator for partition assignment.

I assume the only use case we are trying to address is when the user is using 
the consumer coordinator and wants to temporarily stop consuming from a topic 
without triggering a consumer rebalance.

If so, to unsubscribe from a topic we can do something like the following:
{code}
...
for (TopicPartition tp : consumer.assignedTopicPartitions.get(topic)) {
    unsubscribe(tp);
}
{code}
To resubscribe, we can do something similar but call subscribe(tp) instead.

This approach might need to expose an assignedTopicPartitions() interface, 
but I can see that being useful in quite a few use cases.

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636319#comment-14636319
 ] 

Jiangjie Qin commented on KAFKA-2350:
-

[~jkreps][~hachikuji], I actually was not proposing to reuse
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
So I was thinking we follow the current convention, which is:
1. If you are subscribing/unsubscribing to a partition explicitly, you are on 
your own.
2. If you are subscribing/unsubscribing to a topic, we use the consumer 
coordinator for partition assignment.

I assume the only use case we are trying to address is when the user is using 
the consumer coordinator and wants to temporarily stop consuming from a topic 
without triggering a consumer rebalance.

If so, to unsubscribe from a topic we can do something like the following:
{code}
...
for (TopicPartition tp : consumer.assignedTopicPartitions.get(topic)) {
    unsubscribe(tp);
}
{code}
To resubscribe, we can do something similar but call subscribe(tp) instead.

This approach might need to expose an assignedTopicPartitions() interface, 
but I can see that being useful in quite a few use cases.
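A minimal sketch of that subset bookkeeping in plain Java, with hypothetical names (onAssignment/subscribe/unsubscribe here are stand-ins for the proposal, not the actual consumer API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the invariant discussed above: the coordinator-assigned partitions
// form the superset, and the set we actively fetch from must stay a subset of
// it. "Pausing" a topic just shrinks the active set without touching the
// assignment, so no rebalance is triggered.
public class SubscriptionState {
    private final Set<String> assigned = new HashSet<>(); // set by the coordinator
    private final Set<String> active = new HashSet<>();   // partitions we fetch from

    public void onAssignment(Set<String> partitions) {
        assigned.clear();
        assigned.addAll(partitions);
        active.clear();
        active.addAll(partitions);  // newly assigned partitions start active
    }

    public void unsubscribe(String partition) {
        if (!assigned.contains(partition))
            throw new IllegalStateException(partition + " was not assigned by the coordinator");
        active.remove(partition);
    }

    public void subscribe(String partition) {
        if (!assigned.contains(partition))
            throw new IllegalStateException(partition + " was not assigned by the coordinator");
        active.add(partition);
    }

    public Set<String> fetchablePartitions() {
        return new HashSet<>(active);
    }

    public static void main(String[] args) {
        SubscriptionState state = new SubscriptionState();
        state.onAssignment(new HashSet<>(Arrays.asList("t-0", "t-1")));
        state.unsubscribe("t-0");                        // pause t-0, no rebalance
        System.out.println(state.fetchablePartitions()); // only t-1 is fetched
        state.subscribe("t-0");                          // resume
        System.out.println(state.fetchablePartitions());
    }
}
```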

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2353:

Attachment: KAFKA-2353_2015-07-21_22:02:24.patch

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch, KAFKA-2353_2015-07-21_22:02:24.patch
>
>
> We have seen an increasing number of sockets in CLOSE_WAIT status in our 
> production environment in the last couple of days. From the thread dump it 
> seems one of the Processor threads has died, but the acceptor was still 
> putting many new connections into its new connection queue.
> The Processor thread died because we are not catching all the exceptions in 
> the Processor thread. For example, in our case it seems to be an exception 
> thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636235#comment-14636235
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

Updated reviewboard https://reviews.apache.org/r/36664/diff/
 against branch origin/trunk

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch, KAFKA-2353_2015-07-21_22:02:24.patch
>
>
> We have seen an increasing number of sockets in CLOSE_WAIT status in our 
> production environment in the last couple of days. From the thread dump it 
> seems one of the Processor threads has died, but the acceptor was still 
> putting many new connections into its new connection queue.
> The Processor thread died because we are not catching all the exceptions in 
> the Processor thread. For example, in our case it seems to be an exception 
> thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/
---

(Updated July 22, 2015, 5:02 a.m.)


Review request for kafka.


Bugs: KAFKA-2353
https://issues.apache.org/jira/browse/KAFKA-2353


Repository: kafka


Description (updated)
---

Addressed Gwen's comments


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36664/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin


> On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
> > core/src/main/scala/kafka/network/SocketServer.scala, line 465
> > 
> >
> > Turns out that catching Throwable is a really bad idea: 
> > https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/

Ah... Didn't know that before. I explicitly listed the exceptions.


> On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
> > core/src/main/scala/kafka/network/SocketServer.scala, line 400
> > 
> >
> > So in case of unexpected exception, we log an error and keep running?
> > 
> > Isn't it better to kill the processor, since we don't know what's the 
> > state of the system? If the acceptor keeps placing messages in the queue 
> > for a dead processor, isn't it a separate issue?

This part I'm not quite sure about. I am not very experienced with error 
handling in such cases, so please correct me if I missed something.
Here is what I thought.

The way it currently works is that the acceptor will:
1. accept a new connection request and create a new socket channel
2. choose a processor and put the socket channel into the processor's new 
connection queue

The processor will just take the socket channels from the queue and register 
them with the selector.
If the processor gets an uncaught exception while running, there are several 
possibilities.
Case 1: The exception came from one socket channel.
Case 2: The exception was associated with a bad request.
In case 1, ideally we should just disconnect that socket channel without 
affecting the other socket channels.
In case 2, I think we should log the error and skip the message, assuming the 
client will retry sending data if no response is received for a given period 
of time.

I am not sure letting the processor exit is a good idea, because that would 
let a badly behaving client take down the entire cluster - it might kill the 
processors one by one. Compared with that, I lean towards keeping the 
processor running and serving the other normal TCP connections if possible, 
while logging the error so the monitoring system can detect it and see if 
human intervention is needed.

Also, I don't know how to prevent the thread from exiting without catching 
all the throwables.
According to this blog 
http://www.tzavellas.com/techblog/2010/09/20/catching-throwable-in-scala/
I guess I can rethrow all the ControlThrowables, but intercept the rest?
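A plain-Java sketch of the keep-the-processor-alive policy described in this thread (the real SocketServer is Scala, and all names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of per-connection error handling: each new connection is configured
// inside its own try/catch, so a failure tied to one connection is counted
// (in the real server: logged, and the socket closed) and that connection is
// dropped, while the processor keeps serving the remaining connections
// instead of dying and leaking everything queued behind it.
public class Processor {
    interface Connection { void configure() throws Exception; }

    private final Queue<Connection> newConnections = new ArrayDeque<>();
    private int configured = 0;
    private int dropped = 0;

    public void accept(Connection c) { newConnections.add(c); }

    public void configureNewConnections() {
        Connection c;
        while ((c = newConnections.poll()) != null) {
            try {
                c.configure();
                configured++;
            } catch (Exception e) {
                // Close and forget just this connection; the processor survives.
                dropped++;
            }
        }
    }

    public int configured() { return configured; }
    public int dropped() { return dropped; }

    public static void main(String[] args) {
        Processor p = new Processor();
        p.accept(() -> {});                                         // healthy
        p.accept(() -> { throw new RuntimeException("bad client"); }); // misbehaving
        p.accept(() -> {});                                         // healthy
        p.configureNewConnections();
        System.out.println(p.configured() + " configured, " + p.dropped() + " dropped");
    }
}
```

Note this catches Exception rather than Throwable, which sidesteps the Scala ControlThrowable issue raised in the review.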


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


On July 21, 2015, 11:03 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36664/
> ---
> 
> (Updated July 21, 2015, 11:03 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2353
> https://issues.apache.org/jira/browse/KAFKA-2353
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Catch exception in kafka.network.Processor to avoid socket leak and exiting 
> unexpectedly.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36664/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Issue Type: Test  (was: Sub-task)
Parent: (was: KAFKA-2345)

> Add an unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2355.patch
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test 
> to validate this behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Affects Version/s: 0.8.2.1
   Status: Patch Available  (was: Open)

> Add an unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2355.patch
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test 
> to validate this behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-21 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92526
---



core/src/main/scala/kafka/server/AbstractFetcherThread.scala (line 166)


Well, 2 things. First, I only took a quick look after I couldn't reproduce 
the issue, so I could have missed an error code that can be returned and which 
*would* allow us to issue a warning here.

Second, the other warning is still useful. If users start to see stalls in 
their consumer, they may start looking at consumer logs, but eventually it will 
make sense to look at the brokers as well. And for the replication failure, 
they're likely to look at all the brokers for the given partition. Providing 
help *anywhere* is much better than the current situation of just silently 
stalling.


- Ewen Cheslack-Postava


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> ---
> 
> (Updated July 21, 2015, 4:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
> https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



[jira] [Resolved] (KAFKA-863) System Test - update 0.7 version of kafka-run-class.sh for Migration Tool test cases

2015-07-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-863.
-
   Resolution: Won't Fix
 Assignee: Ewen Cheslack-Postava
Fix Version/s: 0.8.3

Actually, this looks like it was resolved in KAFKA-1645 since the migration 
tool system tests were removed. In any case, those tests have been deprecated 
by KIP-25 and will be removed soon.

> System Test - update 0.7 version of kafka-run-class.sh for Migration Tool 
> test cases
> 
>
> Key: KAFKA-863
> URL: https://issues.apache.org/jira/browse/KAFKA-863
> Project: Kafka
>  Issue Type: Task
>Reporter: John Fung
>Assignee: Ewen Cheslack-Postava
>  Labels: kafka-0.8, replication-testing
> Fix For: 0.8.3
>
> Attachments: kafka-863-v1.patch
>
>
> The 0.7 version is located at: 
> system_test/migration_tool_testsuite/0.7/bin/kafka-run-class.sh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636119#comment-14636119
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Hi [~singhashish] and [~gwenshap]. I hope you don't get annoyed, but I saw that 
KAFKA-2345 was lacking a unit test to validate the new exception it throws. As 
that issue was already closed, I created a new ticket and added the unit test. 
Please let me know what you think. Cheers! :)

> Add an unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2355.patch
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test 
> to validate this behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636116#comment-14636116
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Created reviewboard https://reviews.apache.org/r/36670/diff/
 against branch origin/trunk

> Add an unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2355.patch
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test 
> to validate this behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Attachment: KAFKA-2355.patch

> Add an unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2355.patch
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test 
> to validate this behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36670: Patch for KAFKA-2355

2015-07-21 Thread Edward Ribeiro

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36670/
---

Review request for kafka.


Bugs: KAFKA-2355
https://issues.apache.org/jira/browse/KAFKA-2355


Repository: kafka


Description
---

KAFKA-2355 Add a unit test to validate the deletion of a partition marked as 
deleted


Diffs
-

  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
fa8ce259a2832ab86f9dda8c1d409b2c42d43ae9 

Diff: https://reviews.apache.org/r/36670/diff/


Testing
---


Thanks,

Edward Ribeiro



[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636105#comment-14636105
 ] 

Jason Gustafson commented on KAFKA-2350:


Hey [~becket_qin], thanks for the suggestion. I think my only concern is that 
this would make the API more confusing. It would give two meanings to 
subscribe(partition) which depend on whether automatic assignment is used. I 
agree with you about minimizing the complexity of the consumer API, but I'd 
probably rather have the explicit methods if we think the use case is valid.

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.
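The described poll() behaviour is easy to model: fetches are simply filtered through a paused set, with no rebalance involved at any point. A minimal, self-contained sketch of the proposed semantics (toy class and field names; only the pause(String...)/unpause(String...) shape comes from the ticket, and the real KafkaConsumer would track this internally):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy model of the proposed pause/unpause semantics. Hypothetical names;
// not the actual KafkaConsumer implementation.
public class PausableFetcher {
    private final Set<String> subscribed;
    private final Set<String> paused = new HashSet<>();

    public PausableFetcher(String... topics) {
        this.subscribed = new HashSet<>(Arrays.asList(topics));
    }

    public void pause(String... topics)   { paused.addAll(Arrays.asList(topics)); }
    public void unpause(String... topics) { paused.removeAll(Arrays.asList(topics)); }

    // poll() would initiate fetches only for topics that are not paused.
    public Set<String> fetchableTopics() {
        Set<String> result = new HashSet<>(subscribed);
        result.removeAll(paused);
        return result;
    }

    public static void main(String[] args) {
        PausableFetcher fetcher = new PausableFetcher("logs", "metrics");
        fetcher.pause("metrics");
        System.out.println(fetcher.fetchableTopics()); // [logs]
        fetcher.unpause("metrics");                    // fetching resumes
    }
}
```

Because the subscription set is never touched, pausing never triggers a rebalance, which is the whole motivation of the ticket.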



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add a unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Summary: Add a unit test to validate the deletion of a partition marked as 
deleted  (was: Creating a unit test to validate the deletion of a partition 
marked as deleted)

> Add a unit test to validate the deletion of a partition marked as deleted
> --
>
> Key: KAFKA-2355
> URL: https://issues.apache.org/jira/browse/KAFKA-2355
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Edward Ribeiro
>Priority: Minor
> Fix For: 0.8.3
>
>
> Trying to delete a partition marked as deleted throws 
> {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
> validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2355) Creating a unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)
Edward Ribeiro created KAFKA-2355:
-

 Summary: Creating a unit test to validate the deletion of a 
partition marked as deleted
 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor


Trying to delete a partition marked as deleted throws 
{{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636098#comment-14636098
 ] 

Jay Kreps commented on KAFKA-2350:
--

+1 on resume instead of unpause though it doesn't match subscribe/unsubscribe.

The original motivation for this was to be able to subscribe at the topic level 
but be able to say that while you're still subscribed to a given partition you 
can't take more data for that partition at this particular moment. Generalizing 
that to allow pausing a whole topic makes sense too.

[~becket_qin] I think your idea, having unsubscribe(partition) take the same 
effect as pause(partition) when you are subscribed at the topic level, would be 
intuitive, but the logic of how that would work might be a bit complex. If 
someone is smart enough to work out the details, that could be more elegant than 
a new API. The challenge is that partition-level subscribe/unsubscribe is 
currently an error if you are subscribed at the topic level, and the details of 
that control whether group management etc. is used too.

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka High level consumer rebalancing

2015-07-21 Thread Pranay Agarwal
Thanks Mayuresh,

Can I at least control the rebalance of consumers? Currently, consumers die
after a specific partition has no more messages, and a rebalance of consumers
is triggered, which causes more consumers to die as they get assigned to
empty partitions (because ZooKeeper treats empty partitions no differently).

Is there a way I can control this so that there is no rebalancing of consumers
when some consumer dies?

Thanks
-Pranay

On Tue, Jul 21, 2015 at 11:25 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Not sure if you can do that with High level consumer.
>
> Thanks,
>
> Mayuresh
>
> On Tue, Jul 21, 2015 at 10:53 AM, Pranay Agarwal  >
> wrote:
>
> > Any ideas?
> >
> > On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal <
> agarwalpran...@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > Is there any way I can force ZooKeeper/Kafka to rebalance new consumers
> > > only for a subset of the total number of partitions? I have a situation
> > > where, out of 120 partitions, 60 have already been consumed, but ZooKeeper
> > > also assigns these empty/inactive partitions for the re-balancing. I want
> > > my resources to be used only for the partitions which still have some
> > > messages left to read.
> > >
> > > Thanks
> > > -Pranay
> > >
> >
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>


[jira] [Comment Edited] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636074#comment-14636074
 ] 

Jiangjie Qin edited comment on KAFKA-2350 at 7/22/15 12:44 AM:
---

I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both the assigned partition validation set and the subscription 
set. If Kafka-based partition assignment is not used, the assigned partition 
validation set will be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger a rebalance, 
but the topics subscribed to have to be in the assigned partition set unless it 
is null.

Every time we call poll, we only use the subscription set.

In this way, user can simply use
{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interfaces to the already somewhat complicated API.
2. We get validation for manual subscription.


was (Author: becket_qin):
I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both the assigned partition set and the subscription set. If 
Kafka-based partition assignment is not used, the assigned partition set will 
be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger a rebalance, 
but the topics subscribed to have to be in the assigned partition set unless it 
is null.

In this way, user can simply use
{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interfaces to the already somewhat complicated API.
2. We get validation for manual subscription.

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636074#comment-14636074
 ] 

Jiangjie Qin commented on KAFKA-2350:
-

I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both the assigned partition set and the subscription set. If 
Kafka-based partition assignment is not used, the assigned partition set will 
be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger a rebalance, 
but the topics subscribed to have to be in the assigned partition set unless it 
is null.

In this way, user can simply use
{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interfaces to the already somewhat complicated API.
2. We get validation for manual subscription.
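The two-set idea can be sketched as follows, under stated assumptions (toy types and names, not the actual consumer internals; partitions are plain strings for brevity): partition-level subscribe/unsubscribe touches only the subscription set, is validated against the assigned set when one exists, and never triggers a rebalance.

```java
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

// Toy model of the proposal: `assigned` is the coordinator-owned
// validation set (empty Optional when Kafka-based assignment is not
// used); poll() would read only the subscription set.
public class SubscriptionState {
    private final Optional<Set<String>> assigned;
    private final Set<String> subscription = new HashSet<>();

    public SubscriptionState(Optional<Set<String>> assigned) {
        this.assigned = assigned;
    }

    // Partition-level subscribe: validated, but no rebalance is triggered.
    public void subscribe(String partition) {
        if (assigned.isPresent() && !assigned.get().contains(partition)) {
            throw new IllegalArgumentException(
                "partition " + partition + " is outside the assigned set");
        }
        subscription.add(partition);
    }

    // Unsubscribing a partition acts as the "pause" in this scheme.
    public void unsubscribe(String partition) {
        subscription.remove(partition);
    }

    public Set<String> pollable() {
        return new HashSet<>(subscription);
    }

    public static void main(String[] args) {
        Set<String> owned = new HashSet<>();
        owned.add("logs-0");
        owned.add("logs-1");
        SubscriptionState state = new SubscriptionState(Optional.of(owned));
        state.subscribe("logs-0");
        state.subscribe("logs-1");
        state.unsubscribe("logs-1");          // "pause" logs-1
        System.out.println(state.pollable()); // [logs-0]
    }
}
```

The validation in subscribe() is the "assigned partition validation set" benefit mentioned above: a manual subscription outside the assigned set fails loudly instead of silently fetching nothing.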

> Add KafkaConsumer pause capability
> --
>
> Key: KAFKA-2350
> URL: https://issues.apache.org/jira/browse/KAFKA-2350
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are some use cases in stream processing where it is helpful to be able 
> to pause consumption of a topic. For example, when joining two topics, you 
> may need to delay processing of one topic while you wait for the consumer of 
> the other topic to catch up. The new consumer currently doesn't provide a 
> nice way to do this. If you skip poll() or if you unsubscribe, then a 
> rebalance will be triggered and your partitions will be reassigned.
> One way to achieve this would be to add two new methods to KafkaConsumer:
> {code}
> void pause(String... topics);
> void unpause(String... topics);
> {code}
> When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
> new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Edward Ribeiro
+1 (non binding)

On Tue, Jul 21, 2015 at 7:36 PM, Jay Kreps  wrote:

> +1
>
> -Jay
>
> On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > I would like to start a vote on switching to GitHub pull requests for new
> > contributions. To be precise, the vote is on whether we should:
> >
> > * Update the documentation to tell users to use pull requests instead of
> > patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
> > * Use pull requests for new contributions
> >
> > In a previous discussion[1], everyone that participated was in favour. It's
> > also worth reading the "Contributing Code Changes" wiki page[2] (if you
> > haven't already) to understand the flow.
> >
> > A number of pull requests have been merged in the last few weeks to test
> > this flow and I believe it's working well enough. As usual, there is always
> > room for improvement, and I expect us to tweak things as time goes on.
> >
> > The main downside of using GitHub pull requests is that we don't have write
> > access to https://github.com/apache/kafka. That means that we rely on
> > commit hooks to close integrated pull requests (the merge script takes care
> > of formatting the message so that this happens) and the PR creator or
> > Apache Infra to close pull requests that are not integrated.
> >
> > Regarding existing contributions, I think it's up to the contributor to
> > decide whether they want to resubmit it as a pull request or not. I expect
> > that there will be a transition period where the old and new way will
> > co-exist. But that can be discussed separately.
> >
> > The vote will run for 72 hours.
> >
> > +1 (non-binding) from me.
> >
> > Best,
> > Ismael
> >
> > [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
> > [2]
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> >
>


Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 14, 2015, 11:09 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/PermissionType.scala, line 40
> > 
> >
> > Same comment of Operation.scala also applies here. In addition, the 
> > return is redundant, right?

Done


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review91675
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated July 22, 2015, 12:08 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 18f5b5b895af1469876b2223841fd90a2dd255e0 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
> 04a02e08a54139ee1a298c5354731bae009efef3 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 14, 2015, 11:12 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Resource.scala, line 25
> > 
> >
> > In KafkaPrincipal you split like:
> > 
> > val arr: Array[String] = str.split(Separator, 2) //only split in two 
> > parts
> > 
> > But here you just call split without a second parameter. Choose one way 
> > and uniform the use. :)

done.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review91676
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated July 22, 2015, 12:08 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 18f5b5b895af1469876b2223841fd90a2dd255e0 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
> 04a02e08a54139ee1a298c5354731bae009efef3 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:30 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 71
> > 
> >
> > Disclaimer: I am not claiming that you should change the code commented 
> > here.
> > 
> > Okay, for getting rid of the dreaded 
> > ``collection.mutable.HashSet[Acl]()``, you have two options, afaik:
> > 
> > 1. use ``(for (i <- list) yield i).toSet``. In the current code it 
> > would be something like:
> > 
> > ```
> > val acls = (for (item <- aclSet) yield {
> > val principals: List[KafkaPrincipal] = 
> > item(PrincipalKey).asInstanceOf[List[String]].map(principal => 
> > KafkaPrincipal.fromString(principal))
> > val permissionType: PermissionType = 
> > PermissionType.fromString(item(PermissionTypeKey).asInstanceOf[String])
> > val operations: List[Operation] = 
> > item(OperationKey).asInstanceOf[List[String]].map(operation => 
> > Operation.fromString(operation))
> > val hosts: List[String] = item(HostsKey).asInstanceOf[List[String]]
> > 
> > new Acl(principals.toSet, permissionType, hosts.toSet, 
> > operations.toSet)
> > }).toSet
> > ```
> > 
> > The surrounding parentheses around the ``for`` comprehension are 
> > important as ``yield`` would return the same Collection type from aclSet (a 
> > List in this case).
> > 
> > 
> > 2. To use a (private) helper recursive function like, for example:
> > 
> > ```
> > private def listToSet(list: List[Map[String, Any]]): Set[Acl] = {
> > list match {
> >case item::tail => {
> >  
> >  // L#72 - L#75 processing over `item`
> >  
> >  Set(new Acl(...)) ++ listToSet(tail)
> >}
> >case Nil => Set.empty[Acl]
> > }
> > }
> > ```
> > 
> > can call it from ``fromJson`` on ``aclSet`` instead of doing a 
> > ``foreach``.
> > 
> > 
> > In fact, most of lines  L#72 - L#75 are composed of Lists that 
> > eventually get converted to set (principals, hosts, operations and acls), 
> > so you could "generify" the helper function above, so that you could pass a 
> > 'conversion' function, but here I am wary of the complexity of the code 
> > starting to outweigh the benefits (?) of not using mutable data 
> > structures... Nevertheless, it would look like:
> > 
> > ```
> > def listToSet[T,K](list: List[T], f: T => K): Set[K] = {
> > list match {
> > case head::tail => Set(f(head)) ++ listToSet(tail, f)
> > case Nil => Set.empty[K]
> >  }
> > }
> > ```

I haven't changed it for now; I don't really think the .toSet will be that bad.
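For comparison, the for/yield + .toSet shape under discussion has a direct analogue in Java streams: build the final immutable set in one expression instead of filling a mutable set and converting it at the end. A sketch with toy stand-ins for the real Acl and JSON types (hypothetical field names, not the actual Kafka classes):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Toy stand-ins for the real Acl/JSON types; only the "one expression,
// no mutable accumulator" shape is the point here.
public class AclParser {
    record Acl(Set<String> principals, Set<String> hosts) {}

    static Set<Acl> fromList(List<Map<String, List<String>>> items) {
        return items.stream()
                .map(item -> new Acl(Set.copyOf(item.get("principals")),
                                     Set.copyOf(item.get("hosts"))))
                .collect(Collectors.toUnmodifiableSet());
    }

    public static void main(String[] args) {
        Set<Acl> acls = fromList(List.of(
                Map.of("principals", List.of("alice", "bob"), "hosts", List.of("host1")),
                Map.of("principals", List.of("carol"), "hosts", List.of("host2"))));
        System.out.println(acls.size()); // 2
    }
}
```

As in the Scala discussion, the trade-off is readability versus avoiding a transient mutable collection; either way the result exposed to callers is immutable.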


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92345
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated July 22, 2015, 12:08 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 18f5b5b895af1469876b2223841fd90a2dd255e0 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> dbe170f87331f43e2dc30165080d2cb7dfe5f

Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 12:49 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 86
> > 
> >
> > It's better to return None here, no?

Can't return None or nil where a Map[String, Any] is expected :-(.


> On July 21, 2015, 12:49 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 104
> > 
> >
> > Please, put a space between ``if`` and ``(``.

done.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92341
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated July 22, 2015, 12:08 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 18f5b5b895af1469876b2223841fd90a2dd255e0 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
> 04a02e08a54139ee1a298c5354731bae009efef3 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 14, 2015, 10:59 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Operation.scala, line 35
> > 
> >
> > Scala's match is a powerful mechanism but using it to decode as below 
> > seems boilerplate-ish. Why not use something like:
> > 
> > def fromString(operation: String): Operation = {
> >   val op = 
> > values().filter(_.name.equalsIgnoreCase(operation)).headOption
> >   op match {
 >  case Some(x) => x
> >   }
> > }
> > 
> > or even:
> > 
> > def fromString(operation: String): Operation = {
> >   val Some(op) = 
> > values().filter(_.name.equalsIgnoreCase(operation)).headOption
> >   op
> > }

I'm new to the whole Scala thing, so I still find some of these options too 
cryptic to read, but I am going to assume you know the idioms better than I do. 
Changed it to the second option you proposed.
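For reference, the lookup-by-name-or-fail pattern behind both suggestions is language-agnostic; note that the `val Some(op) = ...` binding throws a bare MatchError on unknown input, so failing with a descriptive error is usually preferable. A sketch in Java, with a toy enum standing in for the real Operation type:

```java
import java.util.Arrays;

// Toy enum standing in for kafka.security.auth.Operation; only the
// case-insensitive lookup-or-fail shape is the point.
public class OperationLookup {
    enum Operation { READ, WRITE, DESCRIBE }

    static Operation fromString(String name) {
        return Arrays.stream(Operation.values())
                .filter(op -> op.name().equalsIgnoreCase(name))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException(
                        "Unknown operation: " + name));
    }

    public static void main(String[] args) {
        System.out.println(fromString("read")); // READ
    }
}
```

The explicit exception message makes a bad ACL entry immediately diagnosable, instead of surfacing as an unexplained pattern-match failure.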


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review91671
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated July 22, 2015, 12:08 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 18f5b5b895af1469876b2223841fd90a2dd255e0 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
> 04a02e08a54139ee1a298c5354731bae009efef3 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 14, 2015, 11:26 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 110
> > 
> >
> > As the values are fixed, you could have written toMap() as below, so that 
> > we save ourselves from creating a mutable Map just to convert it to an 
> > immutable Map at the end:
> > 
> >  def toMap(): Map[String, Any] = {
> >    Map(Acl.PrincipalKey -> principals.map(principal => principal.toString),
> >        Acl.PermissionTypeKey -> permissionType.name,
> >        Acl.OperationKey -> operations.map(operation => operation.name),
> >        Acl.HostsKey -> hosts)
> >  }

Done.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review91679
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 14, 2015, 11:01 p.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 24
> > 
> >
> > nit: what about switching lines 23 and 24 and then using WildCardHost 
> > as a replacement for the second literal parameter of new KafkaPrincipal?

I actually like it the way it is right now.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review91672
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:37 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/PermissionType.scala, line 47
> > 
> >
> > The ``return`` here is redundant. In fact, lines L#46 - L#48 could be 
> > rewritten as either:
> > 
> > ```
> > def values() : List[PermissionType] = {
> >List(Allow, Deny)
> > }
> > ```
> > 
> > or just
> > 
> > ```
> > def values = List(Allow, Deny)
> > ```
> > 
> > please, check with the committers what they prefer. In any case the 
> > return is unnecessary. ;)

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92347
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:40 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 136
> > 
> >
> > The return is redundant here.

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92349
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:49 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala, line 55
> > 
> >
> > the parenthesis after the ``!`` is not required.

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92352
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:47 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Acl.scala, line 86
> > 
> >
> > Or maybe a Map.empty[String, Any]

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92351
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:50 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala, line 28
> > 
> >
> > Please, put a space between ``if`` and ``(``.

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92353
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:59 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 540
> > 
> >
> > Please, put a space between ``if`` and ``(``.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92356
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:43 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 624
> > 
> >
> > Lines L#620 and L#621 could be merged (with an &&) into a single 
> > if-condition. No need for nested if-conditions here. ;-)

fixed.
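The collapse suggested above can be sketched as follows; the condition and handler names here are hypothetical placeholders, not the actual identifiers from KafkaApis.scala:

```scala
// Hypothetical conditions standing in for the two checks at L#620-621.
val topicExists = true
val authorized  = true

// Before: two nested if-conditions guarding one action.
if (topicExists) {
  if (authorized) {
    println("handling request")
  }
}

// After: a single merged condition with &&, same behavior, less nesting.
if (topicExists && authorized) {
  println("handling request")
}
```

Since && short-circuits, the merged form still evaluates the second condition only when the first holds, so the rewrite is behavior-preserving even if the second check has side effects or depends on the first.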


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92350
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:55 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/Operation.scala, line 43
> > 
> >
> > The ``return`` here is redundant.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92354
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 2:02 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 139
> > 
> >
> > Put a space between ``if`` and ``(``.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92357
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 1:57 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 151
> > 
> >
> > Please, put a space between ``if`` and ``(`` here.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92355
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 2:09 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 166
> > 
> >
> > Please, put a space between ``if`` and ``(``.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92358
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 2:15 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/PermissionType.scala, line 21
> > 
> >
> > Just kidding, please remove it.

removed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92361
---





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On July 21, 2015, 2:13 a.m., Edward Ribeiro wrote:
> > core/src/main/scala/kafka/security/auth/PermissionType.scala, line 21
> > 
> >
> > the semicolon is trying to escape here, catch it! :)

Indeed it was, removed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92359
---





[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-07-21 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-07-21_17:08:21.patch

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
> agree on all the public entities, configs, and changes to existing Kafka 
> classes needed to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


> On June 1, 2015, 1:11 a.m., Jun Rao wrote:
> > Thanks for that patch. A few comments below.
> > 
> > Also, two common types of users are consumers and publishers. Currently, if 
> > you want to allow a user to consume from topic t in consumer group g, you 
> > have to grant (1) read permission on topic t; (2) read permission on group 
> > g; (3) describe permission on topic t; (4) describe permission on group g; 
> > (5) create permission on offset topic; (6) describe permission on offset 
> > topic. Similarly, if you want to allow a user to publish to a topic t, you 
> > need to grant (1) write permission to topic t; (2) create permission on the 
> > cluster; (3) describe permission on topic t. These are quite a few 
> > individual permissions for the admin to remember and set. I am wondering if 
> > we can grant permissions to these two types of users in a simpler way, at 
> > least through the CLI. For example, for a consumer, based on the topics and 
> > the consumer group, it would be nice if the CLI could grant the necessary 
> > permissions automatically, instead of requiring the admin to set 
> > each individual permission.

I will handle this as part of the CLI PR.
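The role expansion Jun describes could be sketched as follows. This is a hypothetical illustration of mapping a "consumer" or "producer" role onto the individual permissions listed above; the class name and string encoding are invented for the example and are not the patch's Acl/Operation types.

```java
import java.util.HashSet;
import java.util.Set;

public class AclExpansion {
    static final String OFFSET_TOPIC = "__consumer_offsets";

    // Each permission is encoded as "OPERATION on ResourceType:name".
    // A consumer of topic t in group g needs the six permissions Jun lists.
    static Set<String> consumerPermissions(String topic, String group) {
        Set<String> perms = new HashSet<>();
        perms.add("READ on Topic:" + topic);
        perms.add("DESCRIBE on Topic:" + topic);
        perms.add("READ on ConsumerGroup:" + group);
        perms.add("DESCRIBE on ConsumerGroup:" + group);
        perms.add("CREATE on Topic:" + OFFSET_TOPIC);
        perms.add("DESCRIBE on Topic:" + OFFSET_TOPIC);
        return perms;
    }

    // A producer to topic t needs the three permissions Jun lists.
    static Set<String> producerPermissions(String topic) {
        Set<String> perms = new HashSet<>();
        perms.add("WRITE on Topic:" + topic);
        perms.add("DESCRIBE on Topic:" + topic);
        perms.add("CREATE on Cluster:kafka-cluster");
        return perms;
    }
}
```

A CLI could then accept a role plus topic/group and set all of these in one command, which is the simplification Jun is asking for.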


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review85791
---





[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-07-21 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636051#comment-14636051
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this JIRA we intend to 
> agree on all the public entities, configs, and changes to existing Kafka 
> classes needed to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 





Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated July 22, 2015, 12:08 a.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to a trait instead of an enum, following the convention 
for defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Diffs (updated)
-

  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
18f5b5b895af1469876b2223841fd90a2dd255e0 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
04a02e08a54139ee1a298c5354731bae009efef3 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636040#comment-14636040
 ] 

Gwen Shapira commented on KAFKA-2353:
-

I left comments in RB :)

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch
>
>
> We have seen an increasing number of sockets in CLOSE_WAIT status in our 
> production environment over the past couple of days. From the thread dump it 
> seems one of the Processor threads has died, but the acceptor was still 
> putting many new connections into its new-connection queue.
> The Processor thread died because we are not catching all the exceptions in 
> the Processor thread. For example, in our case it appears to have been an 
> exception thrown in configureNewConnections().
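The failure mode described above, one uncaught exception killing the processor thread while the acceptor keeps queuing connections, suggests a loop that survives per-iteration errors. A minimal Java sketch; the names are hypothetical and this is not the actual (Scala) SocketServer code:

```java
public class ProcessorLoopSketch {
    interface Step { void accept(int i) throws Exception; }

    /**
     * Runs `iterations` steps; a failed step is reported to onError instead
     * of killing the loop. Returns the number of steps that completed.
     */
    static int runLoop(int iterations, Step step,
                       java.util.function.Consumer<Exception> onError) {
        int completed = 0;
        for (int i = 0; i < iterations; i++) {
            try {
                step.accept(i);
                completed++;
            } catch (Exception e) {
                // Close the failed connection here and keep the thread alive,
                // instead of leaving sockets in CLOSE_WAIT behind a dead processor.
                onError.accept(e);
            }
        }
        return completed;
    }
}
```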





[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636020#comment-14636020
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

[~gwenshap] Can you help take a look at this patch when you get some time, since 
you made a lot of changes to the socket server in KAFKA-1928? :) Thanks.

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch
>
>
> We see an increasing number of sockets in CLOSE_WAIT status in our production 
> environment in recent couple of days. From the thread dump it seems one of 
> the Processor thread has died but the acceptor was still putting many new 
> connections its new connection queue.
> The cause of dead Processor thread was due to we are not catching all the 
> exceptions in the Processor thread. For example, in our case it seems to be 
> an exception thrown in configureNewConnections().





Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


Thanks for looking into that. Exception handling was the most challenging part 
of rewriting SocketServer, so I'm glad to see more eyes on this implementation.

I have a concern regarding the right way to handle unexpected exceptions.


core/src/main/scala/kafka/network/SocketServer.scala (line 400)


So in the case of an unexpected exception, we log an error and keep running?

Isn't it better to kill the processor, since we don't know what state the 
system is in? If the acceptor keeps placing messages in the queue for a dead 
processor, isn't that a separate issue?



core/src/main/scala/kafka/network/SocketServer.scala (line 461)


Turns out that catching Throwable is a really bad idea: 
https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/


- Gwen Shapira


On July 21, 2015, 11:03 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36664/
> ---
> 
> (Updated July 21, 2015, 11:03 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2353
> https://issues.apache.org/jira/browse/KAFKA-2353
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Catch exception in kafka.network.Processor to avoid socket leak and exiting 
> unexpectedly.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36664/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635987#comment-14635987
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

Created reviewboard https://reviews.apache.org/r/36664/diff/
 against branch origin/trunk

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch
>
>
> We see an increasing number of sockets in CLOSE_WAIT status in our production 
> environment in the recent couple of days. From the thread dump it seems one of 
> the Processor threads has died but the acceptor was still putting many new 
> connections into its new connection queue.
> The dead Processor thread was caused by not catching all the exceptions in the 
> Processor thread. For example, in our case it seems to be an exception thrown 
> in configureNewConnections().
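The fix the ticket asks for — catch per connection and close only the offending socket — can be sketched as follows. This is a hypothetical, simplified model (the `Connection` interface stands in for the real `SocketChannel` handling), not Kafka's actual SocketServer code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: each new connection is configured inside its own try/catch, so one
// bad connection is closed and the processor thread survives instead of dying
// and leaking sockets in CLOSE_WAIT.
public class ProcessorSketch {
    interface Connection {
        void configure() throws Exception; // e.g. configureBlocking, register
        void close();
    }

    static int configureNewConnections(Queue<Connection> newConnections) {
        int configured = 0;
        while (!newConnections.isEmpty()) {
            Connection c = newConnections.poll();
            try {
                c.configure();
                configured++;
            } catch (Exception e) {
                c.close(); // close only the offending socket and carry on
            }
        }
        return configured;
    }

    public static void main(String[] args) {
        Queue<Connection> q = new ArrayDeque<>();
        q.add(new Connection() {
            public void configure() { /* configures fine */ }
            public void close() { }
        });
        q.add(new Connection() {
            public void configure() throws Exception { throw new Exception("bad peer"); }
            public void close() { System.out.println("closed bad connection"); }
        });
        q.add(new Connection() {
            public void configure() { /* still gets configured */ }
            public void close() { }
        });
        System.out.println("configured=" + configureNewConnections(q));
    }
}
```

With the catch around each connection, the queued connections after the failure are still drained, so the acceptor's queue cannot back up.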



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2353:

Status: Patch Available  (was: Open)

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch
>
>
> We see an increasing number of sockets in CLOSE_WAIT status in our production 
> environment in the recent couple of days. From the thread dump it seems one of 
> the Processor threads has died but the acceptor was still putting many new 
> connections into its new connection queue.
> The dead Processor thread was caused by not catching all the exceptions in the 
> Processor thread. For example, in our case it seems to be an exception thrown 
> in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2353:

Attachment: KAFKA-2353.patch

> SocketServer.Processor should catch exception and close the socket properly 
> in configureNewConnections.
> ---
>
> Key: KAFKA-2353
> URL: https://issues.apache.org/jira/browse/KAFKA-2353
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2353.patch
>
>
> We see an increasing number of sockets in CLOSE_WAIT status in our production 
> environment in the recent couple of days. From the thread dump it seems one of 
> the Processor threads has died but the acceptor was still putting many new 
> connections into its new connection queue.
> The dead Processor thread was caused by not catching all the exceptions in the 
> Processor thread. For example, in our case it seems to be an exception thrown 
> in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/
---

Review request for kafka.


Bugs: KAFKA-2353
https://issues.apache.org/jira/browse/KAFKA-2353


Repository: kafka


Description
---

Catch exception in kafka.network.Processor to avoid socket leak and exiting 
unexpectedly.


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36664/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Jay Kreps
+1

-Jay

On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma  wrote:

> Hi all,
>
> I would like to start a vote on switching to GitHub pull requests for new
> contributions. To be precise, the vote is on whether we should:
>
> * Update the documentation to tell users to use pull requests instead of
> patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
> * Use pull requests for new contributions
>
> In a previous discussion[1], everyone that participated was in favour. It's
> also worth reading the "Contributing Code Changes" wiki page[2] (if you
> haven't already) to understand the flow.
>
> A number of pull requests have been merged in the last few weeks to test
> this flow and I believe it's working well enough. As usual, there is always
> room for improvement and I expect us to tweak things as time goes on.
>
> The main downside of using GitHub pull requests is that we don't have write
> access to https://github.com/apache/kafka. That means that we rely on
> commit hooks to close integrated pull requests (the merge script takes care
> of formatting the message so that this happens) and the PR creator or
> Apache Infra to close pull requests that are not integrated.
>
> Regarding existing contributions, I think it's up to the contributor to
> decide whether they want to resubmit it as a pull request or not. I expect
> that there will be a transition period where the old and new way will
> co-exist. But that can be discussed separately.
>
> The vote will run for 72 hours.
>
> +1 (non-binding) from me.
>
> Best,
> Ismael
>
> [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
> [2]
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
>


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92488
---

Ship it!


Latest patch looks good to me.

- Jiangjie Qin


On July 21, 2015, 9:58 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 9:58 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Resolved] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Graff resolved KAFKA-2354.
--
Resolution: Not A Problem

> setting log.dirs property makes tools fail if there is a comma
> --
>
> Key: KAFKA-2354
> URL: https://issues.apache.org/jira/browse/KAFKA-2354
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
> Environment: centos
>Reporter: Michael Graff
>
> If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
> kafka-topics --describe --zookeeper localhost/kafka
> Error: Could not find or load main class .u1.kafka,
> The broker will start, however.  If the tools are run from a machine without 
> multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635915#comment-14635915
 ] 

Michael Graff commented on KAFKA-2354:
--

Closing, as this now appears to be a local error in our wrappers.

> setting log.dirs property makes tools fail if there is a comma
> --
>
> Key: KAFKA-2354
> URL: https://issues.apache.org/jira/browse/KAFKA-2354
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
> Environment: centos
>Reporter: Michael Graff
>
> If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
> kafka-topics --describe --zookeeper localhost/kafka
> Error: Could not find or load main class .u1.kafka,
> The broker will start, however.  If the tools are run from a machine without 
> multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Attachment: KAFKA-2351_2015-07-21_14:58:13.patch

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch, KAFKA-2351_2015-07-21_14:58:13.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete(); the latch is then not counted 
> down and the broker is not able to shut down.
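The guarantee the ticket asks for — shutdownComplete() must run no matter what run() throws — is the classic try/finally-around-a-latch pattern. A minimal sketch, with `doAccept()` as a hypothetical stand-in for the real accept loop, not Kafka's actual Acceptor code:

```java
import java.util.concurrent.CountDownLatch;

// Sketch: count the latch down in a finally block so an exception in the run
// loop can never leave awaitShutdown() blocked forever.
public class AcceptorSketch {
    private final CountDownLatch shutdownLatch = new CountDownLatch(1);
    private volatile boolean alive = true;

    public void run() {
        try {
            while (alive) {
                // accept() on the server channel; may throw while the
                // selector/channel is being torn down during shutdown
                doAccept();
            }
        } catch (Exception e) {
            // log and fall through; shutdown must still complete
        } finally {
            shutdownLatch.countDown(); // the shutdownComplete() step
        }
    }

    void doAccept() throws Exception {
        alive = false; // stand-in for real accept work
        throw new Exception("selector closed during shutdown");
    }

    public void awaitShutdown() throws InterruptedException {
        shutdownLatch.await(); // broker shutdown blocks here
    }

    public static void main(String[] args) throws InterruptedException {
        AcceptorSketch a = new AcceptorSketch();
        a.run();
        a.awaitShutdown(); // returns because finally counted the latch down
        System.out.println("shutdown complete");
    }
}
```

Without the finally, the exception thrown from `doAccept()` would skip the countDown and `awaitShutdown()` would hang, which is exactly the stuck-broker symptom described.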



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635885#comment-14635885
 ] 

Mayuresh Gharat commented on KAFKA-2351:


Updated reviewboard https://reviews.apache.org/r/36652/diff/
 against branch origin/trunk

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch, KAFKA-2351_2015-07-21_14:58:13.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete(); the latch is then not counted 
> down and the broker is not able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/
---

(Updated July 21, 2015, 9:58 p.m.)


Review request for kafka.


Bugs: KAFKA-2351
https://issues.apache.org/jira/browse/KAFKA-2351


Repository: kafka


Description
---

Added a try-catch to catch any exceptions thrown by the nioSelector


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36652/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Commented] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635864#comment-14635864
 ] 

Edward Ribeiro commented on KAFKA-2354:
---

Hi [~Skandragon], unfortunately, I was unable to reproduce this issue on both 
trunk and 0.8.2.1. Could you provide more details of your reproduction steps, 
please? In the meantime, more experienced Kafka devs may be luckier/smarter than 
me at reproducing it.

> setting log.dirs property makes tools fail if there is a comma
> --
>
> Key: KAFKA-2354
> URL: https://issues.apache.org/jira/browse/KAFKA-2354
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
> Environment: centos
>Reporter: Michael Graff
>
> If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
> kafka-topics --describe --zookeeper localhost/kafka
> Error: Could not find or load main class .u1.kafka,
> The broker will start, however.  If the tools are run from a machine without 
> multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Graff updated KAFKA-2354:
-
Description: 
If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.

  was:
If one sets log.dirs=/path1/kafka,/path2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.


> setting log.dirs property makes tools fail if there is a comma
> --
>
> Key: KAFKA-2354
> URL: https://issues.apache.org/jira/browse/KAFKA-2354
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
> Environment: centos
>Reporter: Michael Graff
>
> If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
> kafka-topics --describe --zookeeper localhost/kafka
> Error: Could not find or load main class .u1.kafka,
> The broker will start, however.  If the tools are run from a machine without 
> multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)
Michael Graff created KAFKA-2354:


 Summary: setting log.dirs property makes tools fail if there is a 
comma
 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff


If one sets log.dirs=/path1/kafka,/path2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Resolved this.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 12:59 PM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Yes.
>
> Thanks,
>
> Mayuresh
>
> On Tue, Jul 21, 2015 at 12:27 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
>> Did you setup your jira.ini?
>>
>> On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat <
>> gharatmayures...@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I had to clean up existing kafka repo on my linux box and start with a
>> > fresh one.
>> >
>> > I followed the instructions here :
>> >
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
>> >
>> > I am trying to upload a patch and I am getting these errors :
>> >
>> > Configuring reviewboard url to https://reviews.apache.org
>> > Updating your remote branches to pull the latest changes
>> > Verifying JIRA connection configurations
>> >
>> >
>> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
>> > InsecurePlatformWarning: A true SSLContext object is not available. This
>> > prevents urllib3 from configuring SSL appropriately and may cause
>> certain
>> > SSL connections to fail. For more information, see
>> >
>> >
>> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
>> > .
>> >   InsecurePlatformWarning
>> >
>> >
>> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
>> > SecurityWarning: Certificate has no `subjectAltName`, falling back to
>> check
>> > for a `commonName` for now. This feature is being removed by major
>> browsers
>> > and deprecated by RFC 2818. (See
>> > https://github.com/shazow/urllib3/issues/497 for details.)
>> >   SecurityWarning
>> > Failed to login to the JIRA instance 
>> > HTTP 403: "CAPTCHA_CHALLENGE; login-url=
>> > https://issues.apache.org/jira/login.jsp";
>> > https://issues.apache.org/jira/rest/auth/1/session
>> >
>> >
>> >
>> > Any help here will be appreciated.
>> >
>> > --
>> > -Regards,
>> > Mayuresh R. Gharat
>> > (862) 250-7125
>> >
>>
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Grant Henke


> On July 21, 2015, 8:26 p.m., Grant Henke wrote:
> > core/src/main/scala/kafka/network/SocketServer.scala, line 266
> > 
> >
> > What errors were seen that should be caught here? Can we catch a more 
> > specific exception and provide a better message?
> 
> Mayuresh Gharat wrote:
> The nioSelector can throw different exceptions: IOException, 
> ClosedSelectorException, IllegalArgumentException. We can have a different 
> catch for each of them. But we thought that the log will tell us what 
> exception was thrown when we pass it to error().

I assumed you were adding this because you saw a specific error. I wasn't sure 
what error you saw and whether more context could be given to the user. Perhaps 
the error you saw is fairly common during shutdown and should be ignored rather 
than logged at the error level. But all others should be handled as you do here.
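Grant's suggestion — treat the exception expected during shutdown as benign and only log the rest at error level — could look roughly like this. `SelectLoopSketch` and its `classify` helper are illustrative names, not Kafka code:

```java
import java.nio.channels.ClosedSelectorException;

// Sketch: distinguish the exception expected during shutdown (here
// ClosedSelectorException) from genuinely unexpected failures, instead of
// funnelling everything through a single error() call.
public class SelectLoopSketch {
    private volatile boolean shuttingDown = false;

    String classify(Exception e) {
        if (e instanceof ClosedSelectorException && shuttingDown)
            return "ignored: selector closed during shutdown";
        return "error: unexpected failure in select loop: " + e;
    }

    public static void main(String[] args) {
        SelectLoopSketch s = new SelectLoopSketch();
        s.shuttingDown = true;
        System.out.println(s.classify(new ClosedSelectorException()));
        System.out.println(s.classify(new IllegalArgumentException("bad timeout")));
    }
}
```

The benefit is that routine shutdown noise stays out of the error log, so an error-level entry always signals something actionable.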


- Grant


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 8:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat


> On July 21, 2015, 8:26 p.m., Grant Henke wrote:
> > core/src/main/scala/kafka/network/SocketServer.scala, line 266
> > 
> >
> > What errors were seen that should be caught here? Can we catch a more 
> > specific exception and provide a better message?

The nioSelector can throw different exceptions: IOException, 
ClosedSelectorException, IllegalArgumentException. We can have a different catch 
for each of them. But we thought that the log will tell us what exception was 
thrown when we pass it to error().


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 8:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---



core/src/main/scala/kafka/network/SocketServer.scala (line 266)


What errors were seen that should be caught here? Can we catch a more 
specific exception and provide a better message?


- Grant Henke


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 8:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat


On July 21, 2015, 8:18 p.m., Mayuresh Gharat wrote:
> > T

Yes, got it. I thought that we should be catching all exceptions and exit. But 
doing the above will catch the exception and exit when it's shutting down, and 
that's the only thing that this ticket considers.


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92465
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 8:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92465
---


Thanks for the patch, some comments.


core/src/main/scala/kafka/network/SocketServer.scala (lines 234 - 235)


We probably want to keep the try-catch inside the while loop.



core/src/main/scala/kafka/network/SocketServer.scala (line 235)


Open source Kafka convention is to put the bracket on the same line.



- Jiangjie Qin
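The placement point above can be seen in miniature: with the try/catch outside the loop, the first failing item ends the whole loop, while inside the loop the remaining items are still processed. A self-contained sketch (generic work items, not Kafka code):

```java
import java.util.List;

// Sketch contrasting the two try/catch placements in a processing loop.
public class CatchPlacementSketch {
    static int catchOutside(List<Runnable> work) {
        int done = 0;
        try {
            for (Runnable r : work) { r.run(); done++; }
        } catch (RuntimeException e) { /* loop is over after first failure */ }
        return done;
    }

    static int catchInside(List<Runnable> work) {
        int done = 0;
        for (Runnable r : work) {
            try { r.run(); done++; }
            catch (RuntimeException e) { /* skip this item, keep going */ }
        }
        return done;
    }

    public static void main(String[] args) {
        Runnable ok = () -> { };
        Runnable bad = () -> { throw new RuntimeException("boom"); };
        List<Runnable> work = List.of(ok, bad, ok);
        System.out.println("outside=" + catchOutside(work)); // first failure stops the loop
        System.out.println("inside=" + catchInside(work));   // later items still run
    }
}
```

For a long-lived network thread, the inside placement is what keeps one bad event from killing the thread.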


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36652/
> ---
> 
> (Updated July 21, 2015, 8:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2351
> https://issues.apache.org/jira/browse/KAFKA-2351
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added a try-catch to catch any exceptions thrown by the nioSelector
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> 91319fa010b140cca632e5fa8050509bd2295fc9 
> 
> Diff: https://reviews.apache.org/r/36652/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Commented] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635734#comment-14635734
 ] 

Mayuresh Gharat commented on KAFKA-2351:


Created reviewboard https://reviews.apache.org/r/36652/diff/
 against branch origin/trunk

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete(); the latch is then not counted 
> down and the broker is not able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Attachment: KAFKA-2351.patch

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete(); the latch is then not counted 
> down and the broker is not able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Status: Patch Available  (was: Open)

> Brokers are having a problem shutting down correctly
> 
>
> Key: KAFKA-2351
> URL: https://issues.apache.org/jira/browse/KAFKA-2351
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Attachments: KAFKA-2351.patch
>
>
> During shutdown, run() in Acceptor might throw an exception that is not 
> caught, so it never reaches shutdownComplete(); the latch is then not counted 
> down and the broker is not able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/
---

Review request for kafka.


Bugs: KAFKA-2351
https://issues.apache.org/jira/browse/KAFKA-2351


Repository: kafka


Description
---

Added a try-catch to catch any exceptions thrown by the nioSelector


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36652/diff/


Testing
---


Thanks,

Mayuresh Gharat



Re: Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Yes.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 12:27 PM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:

> Did you setup your jira.ini?
>
> On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
>
> > Hi,
> >
> > I had to clean up existing kafka repo on my linux box and start with a
> > fresh one.
> >
> > I followed the instructions here :
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
> >
> > I am trying to upload a patch and I am getting these errors :
> >
> > Configuring reviewboard url to https://reviews.apache.org
> > Updating your remote branches to pull the latest changes
> > Verifying JIRA connection configurations
> >
> >
> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
> > InsecurePlatformWarning: A true SSLContext object is not available. This
> > prevents urllib3 from configuring SSL appropriately and may cause certain
> > SSL connections to fail. For more information, see
> >
> >
> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
> > .
> >   InsecurePlatformWarning
> >
> >
> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
> > SecurityWarning: Certificate has no `subjectAltName`, falling back to
> check
> > for a `commonName` for now. This feature is being removed by major
> browsers
> > and deprecated by RFC 2818. (See
> > https://github.com/shazow/urllib3/issues/497 for details.)
> >   SecurityWarning
> > Failed to login to the JIRA instance 
> > HTTP 403: "CAPTCHA_CHALLENGE; login-url=
> > https://issues.apache.org/jira/login.jsp";
> > https://issues.apache.org/jira/rest/auth/1/session
> >
> >
> >
> > Any help here will be appreciated.
> >
> > --
> > -Regards,
> > Mayuresh R. Gharat
> > (862) 250-7125
> >
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Parth Brahmbhatt
+1 (non-binding)

Thanks
Parth

On 7/21/15, 10:24 AM, "Gwen Shapira"  wrote:

>+1 (binding) on using PRs.
>
>It sounds like we need additional discussion on how the transition
>will happen. Maybe move that to a separate thread, to keep the vote
>easy to follow.
>
>On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma  wrote:
>> Hi all,
>>
>> I would like to start a vote on switching to GitHub pull requests for new
>> contributions. To be precise, the vote is on whether we should:
>>
>> * Update the documentation to tell users to use pull requests instead of
>> patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
>> * Use pull requests for new contributions
>>
>> In a previous discussion[1], everyone that participated was in favour. It's
>> also worth reading the "Contributing Code Changes" wiki page[2] (if you
>> haven't already) to understand the flow.
>>
>> A number of pull requests have been merged in the last few weeks to test
>> this flow and I believe it's working well enough. As usual, there is always
>> room for improvement and I expect us to tweak things as time goes on.
>>
>> The main downside of using GitHub pull requests is that we don't have write
>> access to https://github.com/apache/kafka. That means that we rely on
>> commit hooks to close integrated pull requests (the merge script takes care
>> of formatting the message so that this happens) and the PR creator or
>> Apache Infra to close pull requests that are not integrated.
>>
>> Regarding existing contributions, I think it's up to the contributor to
>> decide whether they want to resubmit it as a pull request or not. I expect
>> that there will be a transition period where the old and new way will
>> co-exist. But that can be discussed separately.
>>
>> The vote will run for 72 hours.
>>
>> +1 (non-binding) from me.
>>
>> Best,
>> Ismael
>>
>> [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
>> [2]
>> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
>



Re: Submitting a patch (Jira errors)

2015-07-21 Thread Aditya Auradkar
Did you setup your jira.ini?

On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Hi,
>
> I had to clean up existing kafka repo on my linux box and start with a
> fresh one.
>
> I followed the instructions here :
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
>
> I am trying to upload a patch and I am getting these errors :
>
> Configuring reviewboard url to https://reviews.apache.org
> Updating your remote branches to pull the latest changes
> Verifying JIRA connection configurations
>
> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
> InsecurePlatformWarning: A true SSLContext object is not available. This
> prevents urllib3 from configuring SSL appropriately and may cause certain
> SSL connections to fail. For more information, see
>
> https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
> .
>   InsecurePlatformWarning
>
> /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
> SecurityWarning: Certificate has no `subjectAltName`, falling back to check
> for a `commonName` for now. This feature is being removed by major browsers
> and deprecated by RFC 2818. (See
> https://github.com/shazow/urllib3/issues/497 for details.)
>   SecurityWarning
> Failed to login to the JIRA instance 
> HTTP 403: "CAPTCHA_CHALLENGE; login-url=
> https://issues.apache.org/jira/login.jsp";
> https://issues.apache.org/jira/rest/auth/1/session
>
>
>
> Any help here will be appreciated.
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>


[jira] [Created] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2353:
---

 Summary: SocketServer.Processor should catch exception and close 
the socket properly in configureNewConnections.
 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


We have seen an increasing number of sockets in CLOSE_WAIT status in our 
production environment over the last couple of days. From the thread dump it 
seems one of the Processor threads has died, but the acceptor was still putting 
many new connections into its new connection queue.

The Processor thread died because we are not catching all the exceptions in 
the Processor thread. In our case, for example, it seems to have been an 
exception thrown in configureNewConnections().
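The "catch and close properly" pattern the summary asks for can be sketched like this — a Python stand-in for illustration only, not the actual SocketServer.scala code:

```python
# Sketch of the pattern this ticket asks for (illustration only): drain
# the new-connection queue, and if configuring any single connection
# fails, close that socket and keep going, instead of letting the
# exception kill the whole Processor thread.
def configure_new_connections(new_connections, register):
    failures = 0
    while new_connections:
        sock = new_connections.pop()
        try:
            register(sock)  # e.g. registering with the selector; may raise
        except Exception:
            sock.close()    # without this, the peer is left in CLOSE_WAIT
            failures += 1
    return failures
```

The key point is that the try/except sits inside the drain loop, so one bad connection is closed and the thread moves on to the next.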





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-2299) kafka-patch-review tool does not correctly capture testing done

2015-07-21 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh closed KAFKA-2299.
-

> kafka-patch-review tool does not correctly capture testing done
> ---
>
> Key: KAFKA-2299
> URL: https://issues.apache.org/jira/browse/KAFKA-2299
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Attachments: KAFKA-2299.patch
>
>
> kafka-patch-review tool does not correctly capture testing done when 
> specified with -t or --testing-done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2299) kafka-patch-review tool does not correctly capture testing done

2015-07-21 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2299:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Moving to GitHub PRs, so this may not be useful anymore.

> kafka-patch-review tool does not correctly capture testing done
> ---
>
> Key: KAFKA-2299
> URL: https://issues.apache.org/jira/browse/KAFKA-2299
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Attachments: KAFKA-2299.patch
>
>
> kafka-patch-review tool does not correctly capture testing done when 
> specified with -t or --testing-done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Hi,

I had to clean up existing kafka repo on my linux box and start with a
fresh one.

I followed the instructions here:

https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review

I am trying to upload a patch and I am getting these errors:

Configuring reviewboard url to https://reviews.apache.org
Updating your remote branches to pull the latest changes
Verifying JIRA connection configurations
/usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
InsecurePlatformWarning: A true SSLContext object is not available. This
prevents urllib3 from configuring SSL appropriately and may cause certain
SSL connections to fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
.
  InsecurePlatformWarning
/usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
SecurityWarning: Certificate has no `subjectAltName`, falling back to check
for a `commonName` for now. This feature is being removed by major browsers
and deprecated by RFC 2818. (See
https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
Failed to login to the JIRA instance 
HTTP 403: "CAPTCHA_CHALLENGE; login-url=
https://issues.apache.org/jira/login.jsp";
https://issues.apache.org/jira/rest/auth/1/session



Any help here will be appreciated.

-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Commented] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635596#comment-14635596
 ] 

Edward Ribeiro commented on KAFKA-2345:
---

[~singhashish], [~gwenshap], sorry, late to the party (already committed to 
trunk), but I think this patch could benefit from a unit test in 
{{DeleteTopicTest}}, right? I scribbled one below. Cheers!

{code}
  @Test
  def testDeleteTopicMarkedAsDeleted() {
val topicAndPartition = TopicAndPartition("test", 0)
val topic = topicAndPartition.topic
val servers = createTestTopicAndCluster(topic)
// start topic deletion
AdminUtils.deleteTopic(zkClient, topic)
try {
  // try to delete topic marked as deleted
  AdminUtils.deleteTopic(zkClient, topic)
  fail("Expected TopicAlreadyMarkedForDeletionException")
}
catch {
  case e: TopicAlreadyMarkedForDeletionException => // expected exception
}

TestUtils.verifyTopicDeletion(zkClient, topic, 1, servers)
servers.foreach(_.shutdown())
  }
{code}

> Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> --
>
> Key: KAFKA-2345
> URL: https://issues.apache.org/jira/browse/KAFKA-2345
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch
>
>
> Throwing a TopicAlreadyMarkedForDeletionException will make much more sense. 
> A user does not necessarily have to know about involvement of zk in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka High level consumer rebalancing

2015-07-21 Thread Mayuresh Gharat
Not sure if you can do that with the high-level consumer.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 10:53 AM, Pranay Agarwal 
wrote:

> Any ideas?
>
> On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal 
> wrote:
>
> > Hi all,
> >
> > Is there any way I can force ZooKeeper/Kafka to rebalance new consumers
> > only for a subset of the total number of partitions? I have a situation
> > where, out of 120 partitions, 60 have already been consumed, but ZooKeeper
> > also assigns these empty/inactive partitions during re-balancing. I want
> > my resources to be used only for the partitions which still have some
> > messages left to read.
> >
> > Thanks
> > -Pranay
> >
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-21 Thread Edward Ribeiro


> On July 21, 2015, 6:57 a.m., Ewen Cheslack-Postava wrote:
> > core/src/main/scala/kafka/admin/TopicCommand.scala, line 90
> > 
> >
> > This format call isn't working because it's being called on the second 
> > string due to the split across lines and +. Needs parens added or switched 
> > back to a single string.

Sorry, probably due to last-minute IntelliJ 'auto-formatting'. Fixed.
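For anyone hitting the same precedence gotcha, here is a minimal illustration (in Python, where a trailing format call binds the same way as in the Scala snippet under review; the strings are made up):

```python
# When a string literal is split across lines with +, a trailing
# .format(...) binds only to the last fragment, so placeholders in the
# first fragment are never filled in.
args = (1, 2)

# Broken: .format applies only to "{}"; the first {} survives verbatim.
broken = "value is {}, limit is " + "{}".format(*args)

# Fixed: parenthesize the concatenation so .format sees both fragments.
fixed = ("value is {}, limit is " + "{}").format(*args)

print(broken)  # value is {}, limit is 1
print(fixed)   # value is 1, limit is 2
```

The Scala fix is the same shape: add parentheses around the concatenated fragments before calling format, or switch back to a single string.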


> On July 21, 2015, 6:57 a.m., Ewen Cheslack-Postava wrote:
> > core/src/main/scala/kafka/server/AbstractFetcherThread.scala, line 166
> > 
> >
> > My quick test indicates this doesn't work -- I don't think this is the 
> > code that would be returned anyway since I don't think max message size is 
> > checked for fetch requests, fetch.message.max.bytes for consumers and  
> > replica.fetch.max.bytes for brokers are both for aggregate data size. The 
> > problem occurs when the total aggregate size permitted per request is 
> > smaller than a single message. I think in that case we're just returning 0 
> > messages, but not an error code. After enough time the result is that the 
> > ISR shrinks (which is what my test showed in the broker logs).
> > 
> > I think to properly log this we might need to log it on the leader node 
> > not on the fetcher -- probably somewhere in ReplicaManager. If you're not 
> > familiar with this code, you'll want to start in KafkaApis.scala in 
> > handleOffsetRequest. However, I'm not sure what this means about issuing a 
> > useful warning to consumers since they wouldn't have easy access to broker 
> > logs. On the other hand, a stalled consumer is less of a problem than the 
> > ISR being forced down to a single replica.
> > 
> > For reference, I tested this with a simple config with two brokers, 
> > adjusting message.max.bytes and replica.fetch.max.bytes. Then I created a 
> > topic with replication factor 2 and used console producer to send data of 
> > different sizes to test the output.

Thanks, Ewen! I removed the call, then, and will spend some quality time with 
KafkaApis and ReplicaManager. But, as you said, since this wouldn't give a 
useful warning message to consumers, is it worth digging into it in this 
ticket? wdyt?


- Edward


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92374
---


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36578/
> ---
> 
> (Updated July 21, 2015, 4:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2338
> https://issues.apache.org/jira/browse/KAFKA-2338
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
> update broker and consumer settings
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
> 
> Diff: https://reviews.apache.org/r/36578/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Edward Ribeiro
> 
>



Re: Kafka High level consumer rebalancing

2015-07-21 Thread Pranay Agarwal
Any ideas?

On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal 
wrote:

> Hi all,
>
> Is there any way I can force Zookeeper/Kafka to rebalance new consumers
> only for subset of total number of partitions. I have a situation where out
> of 120 partitions 60 have been already consumed, but the zookeeper also
> assigns these empty/inactive partitions as well for the re-balancing, I
> want my resources to be used only for the partitions which still have some
> messages left to read.
>
> Thanks
> -Pranay
>


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-21 Thread Ben Kirwin
That's a fair point. I've added some imagined job logic to the KIP, so
we can make sure the proposal stays in sync with the usages we're
discussing. (The logic is just a quick sketch for now -- I expect I'll
need to elaborate it as we get into more detail, or to address other
concerns...)
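To make one option in the discussion below concrete — the broker returning its high watermark with an OffsetMismatch error — here is a toy sketch (Python; all names are invented, not Kafka APIs) of how a producer could resend only the uncommitted suffix of a batch:

```python
# Toy sketch of the suffix-resend idea.  Assumes contiguous offsets,
# so (per the caveat in the thread) it would not hold for a compacted
# topic, where there can be holes in the offset sequence.
def resend_after_mismatch(batch, base_offset, high_watermark):
    """batch was sent starting at base_offset; high_watermark is what the
    broker reported with the OffsetMismatch error."""
    committed = high_watermark - base_offset
    if committed <= 0:
        return batch              # nothing was committed; resend everything
    return batch[committed:]      # resend only what was never committed
```

This only works because replication commits a prefix of the request: everything before the high watermark is durable, everything after it is not.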

On Tue, Jul 21, 2015 at 11:45 AM, Jun Rao  wrote:
> For 1, yes, when there is a transient leader change, it's guaranteed that a
> prefix of the messages in a request will be committed. However, it seems
> that the client needs to know what subset of messages are committed in
> order to resume the sending. Then the question is how.
>
> As Flavio indicated, for the use cases that you listed, it would be useful
> to figure out the exact logic by using this feature. For example, in the
> partition K/V store example, when we fail over to a new writer to the
> commit log, the zombie writer can publish new messages to the log after the
> new writer takes over, but before it publishes any message. We probably
> need to outline how this case can be handled properly.
>
> Thanks,
>
> Jun
>
> On Mon, Jul 20, 2015 at 12:05 PM, Ben Kirwin  wrote:
>
>> Hi Jun,
>>
>> Thanks for the close reading! Responses inline.
>>
>> > Thanks for the write-up. The single producer use case you mentioned makes
>> > sense. It would be useful to include that in the KIP wiki.
>>
>> Great -- I'll make sure that the wiki is clear about this.
>>
>> > 1. What happens when the leader of the partition changes in the middle
>> of a
>> > produce request? In this case, the producer client is not sure whether
>> the
>> > request succeeds or not. If there is only a single message in the
>> request,
>> > the producer can just resend the request. If it sees an OffsetMismatch
>> > error, it knows that the previous send actually succeeded and can proceed
>> > with the next write. This is nice since it not only allows the producer
>> to
>> > proceed during transient failures in the broker, it also avoids
>> duplicates
>> > during producer resend. One caveat is when there are multiple messages in
>> > the same partition in a produce request. The issue is that in our current
>> > replication protocol, it's possible for some, but not all messages in the
>> > request to be committed. This makes resend a bit harder to deal with
>> since
>> > on receiving an OffsetMismatch error, it's not clear which messages have
>> > been committed. One possibility is to expect that compression is enabled,
>> > in which case multiple messages are compressed into a single message. I
>> was
>> > thinking that another possibility is for the broker to return the current
>> > high watermark when sending an OffsetMismatch error. Based on this info,
>> > the producer can resend the subset of messages that have not been
>> > committed. However, this may not work in a compacted topic since there
>> can
>> > be holes in the offset.
>>
>> This is an excellent question. It's my understanding that at least a
>> *prefix* of messages will be committed (right?) -- which seems to be
>> enough for many cases. I'll try and come up with a more concrete
>> answer here.
>>
>> > 2. Is this feature only intended to be used with ack = all? The client
>> > doesn't get the offset with ack = 0. With ack = 1, it's possible for a
>> > previously acked message to be lost during leader transition, which will
>> > make the client logic more complicated.
>>
>> It's true that acks = 0 doesn't seem to be particularly useful; in all
>> the cases I've come across, the client eventually wants to know about
>> the mismatch error. However, it seems like there are some cases where
>> acks = 1 would be fine -- eg. in a bulk load of a fixed dataset,
>> losing messages during a leader transition just means you need to
>> rewind / restart the load, which is not especially catastrophic. For
>> many other interesting cases, acks = all is probably preferable.
>>
>> > 3. How does the producer client know the offset to send the first
>> message?
>> > Do we need to expose an API in producer to get the current high
>> watermark?
>>
>> You're right, it might be irritating to have to go through the
>> consumer API just for this. There are some cases where the offsets are
>> already available -- like the commit-log-for-KV-store example -- but
>> in general, being able to get the offsets from the producer interface
>> does sound convenient.
>>
>> > We plan to have a KIP discussion meeting tomorrow at 11am PST. Perhaps
>> you
>> > can describe this KIP a bit then?
>>
>> Sure, happy to join.
>>
>> > Thanks,
>> >
>> > Jun
>> >
>> >
>> >
>> > On Sat, Jul 18, 2015 at 10:37 AM, Ben Kirwin  wrote:
>> >
>> >> Just wanted to flag a little discussion that happened on the ticket:
>> >>
>> >>
>> https://issues.apache.org/jira/browse/KAFKA-2260?focusedCommentId=14632259&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14632259
>> >>
>> >> In particular, Yasuhiro Matsuda proposed an interesting variant on
>> >> this that performs the

[jira] [Updated] (KAFKA-2342) KafkaConsumer rebalance with in-flight fetch can cause invalid position

2015-07-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2342:
---
Summary: KafkaConsumer rebalance with in-flight fetch can cause invalid 
position  (was: transient unit test failure in 
testConsumptionWithBrokerFailures)

> KafkaConsumer rebalance with in-flight fetch can cause invalid position
> ---
>
> Key: KAFKA-2342
> URL: https://issues.apache.org/jira/browse/KAFKA-2342
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Jason Gustafson
>
> If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
> up updating the fetch position of a partition to an offset which is no longer 
> valid. The consequence is that we may end up either returning to the user 
> messages with an unexpected position or we may fail to give back the right 
> offset in position(). 
> Additionally, this bug causes transient test failures in 
> ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
> exception:
> kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2342) transient unit test failure in testConsumptionWithBrokerFailures

2015-07-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2342:
---
Description: 
If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end up 
updating the fetch position of a partition to an offset which is no longer 
valid. The consequence is that we may end up either returning to the user 
messages with an unexpected position or we may fail to give back the right 
offset in position(). 

Additionally, this bug causes transient test failures in 
ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
exception:

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)


  was:
Saw the following transient unit test failure.

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



> transient unit test failure in testConsumptionWithBrokerFailures
> 
>
> Key: KAFKA-2342
> URL: https://issues.apache.org/jira/browse/KAFKA-2342
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Jason Gustafson
>
> If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
> up updating the fetch position of a partition to an offset which is no longer 
> valid. The consequence is that we may end up either returning to the user 
> messages with an unexpected position or we may fail to give back the right 
> offset in position(). 
> Additionally, this bug causes transient test failures in 
> ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
> exception:
> kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Gwen Shapira
+1 (binding) on using PRs.

It sounds like we need additional discussion on how the transition
will happen. Maybe move that to a separate thread, to keep the vote
easy to follow.

On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma  wrote:
> Hi all,
>
> I would like to start a vote on switching to GitHub pull requests for new
> contributions. To be precise, the vote is on whether we should:
>
> * Update the documentation to tell users to use pull requests instead of
> patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
> * Use pull requests for new contributions
>
> In a previous discussion[1], everyone that participated was in favour. It's
> also worth reading the "Contributing Code Changes" wiki page[2] (if you
> haven't already) to understand the flow.
>
> A number of pull requests have been merged in the last few weeks to test
> this flow and I believe it's working well enough. As usual, there is always
> room for improvement and I expect us to tweak things as time goes on.
>
> The main downside of using GitHub pull requests is that we don't have write
> access to https://github.com/apache/kafka. That means that we rely on
> commit hooks to close integrated pull requests (the merge script takes care
> of formatting the message so that this happens) and the PR creator or
> Apache Infra to close pull requests that are not integrated.
>
> Regarding existing contributions, I think it's up to the contributor to
> decide whether they want to resubmit it as a pull request or not. I expect
> that there will be a transition period where the old and new way will
> co-exist. But that can be discussed separately.
>
> The vote will run for 72 hours.
>
> +1 (non-binding) from me.
>
> Best,
> Ismael
>
> [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
> [2]
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Guozhang Wang
+1

On Tue, Jul 21, 2015 at 9:31 AM, Grant Henke  wrote:

> +1 non-binding
>
> On Tue, Jul 21, 2015 at 11:19 AM, Neha Narkhede  wrote:
>
> > +1 (binding)
> >
> > Agree with Ismael. We may not want to rush to push the PR right away.
> > Having said that, if it works well with say, 10 patches, I'd consider
> that
> > enough to require the new JIRAs to submit patches using the PRs instead.
> >
> > Thanks,
> > Neha
> >
> > On Tue, Jul 21, 2015 at 8:19 AM, Sriharsha Chintalapani <
> > harsh...@fastmail.fm> wrote:
> >
> > > +1. I think phasing out is a good idea, but rather than x months we
> > > should move to GitHub PRs for any new JIRAs that are not already in
> > > Review Board. For the JIRAs that are in Review Board we can continue
> > > to use that until they are merged in.
> > >
> > > -Harsha
> > >
> > >
> > > On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com)
> > wrote:
> > >
> > > +1 non-binding.
> > >
> > > A suggestion: we should try to phase out the old system of reviews
> > > gradually, instead of forcing it overnight. Maybe a time-bound switch?
> > > We can say that in x months from now we will completely move to PRs?
> > >
> > > On Tuesday, July 21, 2015, Ismael Juma  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I would like to start a vote on switching to GitHub pull requests for
> > new
> > > > contributions. To be precise, the vote is on whether we should:
> > > >
> > > > * Update the documentation to tell users to use pull requests instead
> > of
> > > > patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
> > > > * Use pull requests for new contributions
> > > >
> > > > In a previous discussion[1], everyone that participated was in
> favour.
> > > It's
> > > > also worth reading the "Contributing Code Changes" wiki page[2] (if
> you
> > > > haven't already) to understand the flow.
> > > >
> > > > A number of pull requests have been merged in the last few weeks to
> > test
> > > > this flow and I believe it's working well enough. As usual, there is
> > > always
> > > > room for improvement and I expect us to tweak things as time goes on.
> > > >
> > > > The main downside of using GitHub pull requests is that we don't have
> > > write
> > > > access to https://github.com/apache/kafka. That means that we rely
> on
> > > > commit hooks to close integrated pull requests (the merge script
> takes
> > > care
> > > > of formatting the message so that this happens) and the PR creator or
> > > > Apache Infra to close pull requests that are not integrated.
> > > >
> > > > Regarding existing contributions, I think it's up to the contributor
> to
> > > > decide whether they want to resubmit it as a pull request or not. I
> > > expect
> > > > that there will be a transition period where the old and new way will
> > > > co-exist. But that can be discussed separately.
> > > >
> > > > The vote will run for 72 hours.
> > > >
> > > > +1 (non-binding) from me.
> > > >
> > > > Best,
> > > > Ismael
> > > >
> > > > [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
> > > > [2]
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> > > >
> > >
> > >
> > > --
> > > Ashish Singh
> > >
> >
> >
> >
> > --
> > Thanks,
> > Neha
> >
>
>
>
> --
> Grant Henke
> Solutions Consultant | Cloudera
> ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-21 Thread Yasuhiro Matsuda
In the KV store usage, all instances are writers, aren't they? There is no
leader or master, thus there is no failover. The offset-based CAS ensures
an update is based on the latest value and doesn't care who is writing the
new value.

I think the idea of the offset-based CAS is great. I think it works very
well with Event Sourcing. It may be a bit weak for ensuring a single
writer, though.
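To pin down the semantics being discussed, here is a toy model of offset-based CAS (Python; FakePartitionLog and MismatchError are invented for illustration — this is not the Kafka API, just the conditional-publish behavior):

```python
# Toy model: publish succeeds only if the expected offset matches the
# partition's current log-end offset, enabling a read-modify-write loop.
class MismatchError(Exception):
    def __init__(self, current_offset):
        self.current_offset = current_offset

class FakePartitionLog:
    def __init__(self):
        self.messages = []

    def publish(self, msg, expected_offset):
        if expected_offset != len(self.messages):
            raise MismatchError(len(self.messages))
        self.messages.append(msg)
        return len(self.messages) - 1  # offset the message landed at

def cas_update(log, compute):
    """Read-modify-write loop: on a lost race, re-read and retry."""
    while True:
        offset, state = len(log.messages), list(log.messages)
        try:
            return log.publish(compute(state), expected_offset=offset)
        except MismatchError:
            continue  # another writer won; re-read the latest state
```

As noted above, any instance can run this loop: correctness comes from the offset check, not from knowing who the writer is.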


On Tue, Jul 21, 2015 at 8:45 AM, Jun Rao  wrote:

> For 1, yes, when there is a transient leader change, it's guaranteed that a
> prefix of the messages in a request will be committed. However, it seems
> that the client needs to know what subset of messages are committed in
> order to resume the sending. Then the question is how.
>
> As Flavio indicated, for the use cases that you listed, it would be useful
> to figure out the exact logic by using this feature. For example, in the
> partition K/V store example, when we fail over to a new writer to the
> commit log, the zombie writer can publish new messages to the log after the
> new writer takes over, but before it publishes any message. We probably
> need to outline how this case can be handled properly.
>
> Thanks,
>
> Jun
>
> On Mon, Jul 20, 2015 at 12:05 PM, Ben Kirwin  wrote:
>
> > Hi Jun,
> >
> > Thanks for the close reading! Responses inline.
> >
> > > Thanks for the write-up. The single producer use case you mentioned
> makes
> > > sense. It would be useful to include that in the KIP wiki.
> >
> > Great -- I'll make sure that the wiki is clear about this.
> >
> > > 1. What happens when the leader of the partition changes in the middle
> > of a
> > > produce request? In this case, the producer client is not sure whether
> > the
> > > request succeeds or not. If there is only a single message in the
> > request,
> > > the producer can just resend the request. If it sees an OffsetMismatch
> > > error, it knows that the previous send actually succeeded and can
> proceed
> > > with the next write. This is nice since it not only allows the producer
> > to
> > > proceed during transient failures in the broker, it also avoids
> > duplicates
> > > during producer resend. One caveat is when there are multiple messages
> in
> > > the same partition in a produce request. The issue is that in our
> current
> > > replication protocol, it's possible for some, but not all messages in
> the
> > > request to be committed. This makes resend a bit harder to deal with
> > since
> > > on receiving an OffsetMismatch error, it's not clear which messages
> have
> > > been committed. One possibility is to expect that compression is
> enabled,
> > > in which case multiple messages are compressed into a single message. I
> > was
> > > thinking that another possibility is for the broker to return the
> current
> > > high watermark when sending an OffsetMismatch error. Based on this
> info,
> > > the producer can resend the subset of messages that have not been
> > > committed. However, this may not work in a compacted topic since there
> > can
> > > be holes in the offset.
> >
> > This is an excellent question. It's my understanding that at least a
> > *prefix* of messages will be committed (right?) -- which seems to be
> > enough for many cases. I'll try and come up with a more concrete
> > answer here.
> >
> > > 2. Is this feature only intended to be used with ack = all? The client
> > > doesn't get the offset with ack = 0. With ack = 1, it's possible for a
> > > previously acked message to be lost during leader transition, which
> will
> > > make the client logic more complicated.
> >
> > It's true that acks = 0 doesn't seem to be particularly useful; in all
> > the cases I've come across, the client eventually wants to know about
> > the mismatch error. However, it seems like there are some cases where
> > acks = 1 would be fine -- eg. in a bulk load of a fixed dataset,
> > losing messages during a leader transition just means you need to
> > rewind / restart the load, which is not especially catastrophic. For
> > many other interesting cases, acks = all is probably preferable.
> >
> > > 3. How does the producer client know the offset to send the first
> > message?
> > > Do we need to expose an API in producer to get the current high
> > watermark?
> >
> > You're right, it might be irritating to have to go through the
> > consumer API just for this. There are some cases where the offsets are
> > already available -- like the commit-log-for-KV-store example -- but
> > in general, being able to get the offsets from the producer interface
> > does sound convenient.
> >
> > > We plan to have a KIP discussion meeting tomorrow at 11am PST. Perhaps
> > you
> > > can describe this KIP a bit then?
> >
> > Sure, happy to join.
> >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Sat, Jul 18, 2015 at 10:37 AM, Ben Kirwin  wrote:
> > >
> > >> Just wanted to flag a little discussion that happened on the ticket:
> > >>
> > >>
> >
> https://issues.apache.org/jira/browse/KAFKA-2260?focusedComm

Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Grant Henke
+1 non-binding

On Tue, Jul 21, 2015 at 11:19 AM, Neha Narkhede  wrote:

> +1 (binding)
>
> Agree with Ismael. We may not want to rush to push the PR right away.
> Having said that, if it works well with say, 10 patches, I'd consider that
> enough to require the new JIRAs to submit patches using the PRs instead.
>
> Thanks,
> Neha
>
> On Tue, Jul 21, 2015 at 8:19 AM, Sriharsha Chintalapani <
> harsh...@fastmail.fm> wrote:
>
> > +1. I think phasing out is a good idea, but rather than x months we should
> > move to GitHub PRs for any new JIRAs that are not already in Review
> > Board. For the JIRAs that are in Review Board we can continue to use
> > that until they are merged in.
> >
> > -Harsha
> >
> >
> > On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com)
> wrote:
> >
> > +1 non-binding.
> >
> > A suggestion: we should try to phase out the old system of reviews gradually,
> > instead of forcing it overnight. Maybe a time-bound switch? We can say
> > that in x months from now we will completely move to PRs?
> >
> > On Tuesday, July 21, 2015, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > I would like to start a vote on switching to GitHub pull requests for
> new
> > > contributions. To be precise, the vote is on whether we should:
> > >
> > > * Update the documentation to tell users to use pull requests instead
> of
> > > patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
> > > * Use pull requests for new contributions
> > >
> > > In a previous discussion[1], everyone that participated was in favour.
> > It's
> > > also worth reading the "Contributing Code Changes" wiki page[2] (if you
> > > haven't already) to understand the flow.
> > >
> > > A number of pull requests have been merged in the last few weeks to
> test
> > > this flow and I believe it's working well enough. As usual, there is
> > always
> > > room for improvement and I expect us to tweak things as time goes on.
> > >
> > > The main downside of using GitHub pull requests is that we don't have
> > write
> > > access to https://github.com/apache/kafka. That means that we rely on
> > > commit hooks to close integrated pull requests (the merge script takes
> > care
> > > of formatting the message so that this happens) and the PR creator or
> > > Apache Infra to close pull requests that are not integrated.
> > >
> > > Regarding existing contributions, I think it's up to the contributor to
> > > decide whether they want to resubmit it as a pull request or not. I
> > expect
> > > that there will be a transition period where the old and new way will
> > > co-exist. But that can be discussed separately.
> > >
> > > The vote will run for 72 hours.
> > >
> > > +1 (non-binding) from me.
> > >
> > > Best,
> > > Ismael
> > >
> > > [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
> > > [2]
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> > >
> >
> >
> > --
> > Ashish Singh
> >
>
>
>
> --
> Thanks,
> Neha
>



-- 
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: Review Request 36565: Patch for KAFKA-2345

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36565/#review92422
---

Ship it!


Ship It!

- Grant Henke


On July 17, 2015, 5:21 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36565/
> ---
> 
> (Updated July 17, 2015, 5:21 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2345
> https://issues.apache.org/jira/browse/KAFKA-2345
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2345: Attempt to delete a topic already marked for deletion throws 
> ZkNodeExistsException
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> f06edf41c732a7b794e496d0048b0ce6f897e72b 
>   
> core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/36565/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-07-21_13:21:19.patch

> Warn users if they change max.message.bytes that they also need to update 
> broker and consumer settings
> --
>
> Key: KAFKA-2338
> URL: https://issues.apache.org/jira/browse/KAFKA-2338
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
> KAFKA-2338_2015-07-21_13:21:19.patch
>
>
> We already have KAFKA-1756 filed to more completely address this issue, but 
> it is waiting for some other major changes to configs to completely protect 
> users from this problem.
> This JIRA should address the low hanging fruit to at least warn users of the 
> potential problems. Currently the only warning is in our documentation.
> 1. Generate a warning in the kafka-topics.sh tool when they change this 
> setting on a topic to be larger than the default. This needs to be very 
> obvious in the output.
> 2. Currently, the broker's replica fetcher isn't logging any useful error 
> messages when replication can't succeed because a message size is too large. 
> Logging an error here would allow users that get into a bad state to find out 
> why it is happening more easily. (Consumers should already be logging a 
> useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
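For reference, the settings interplay described in the JIRA above might look like the following — a hedged sketch assuming Kafka 0.8.x property names (`max.message.bytes`, `replica.fetch.max.bytes`, `fetch.message.max.bytes`), with purely illustrative values:

```properties
# Topic-level override, e.g. set via kafka-topics.sh --alter --config
# (the setting the JIRA proposes warning about when it exceeds the default):
max.message.bytes=2097152

# Broker side (server.properties): the replica fetcher must be allowed to
# fetch messages at least this large, or replication of large messages fails:
replica.fetch.max.bytes=2097152

# Consumer side (old consumer config): must also be raised, or consumption
# of messages larger than this limit will fail:
fetch.message.max.bytes=2097152
```

The point of the JIRA is that only the first setting is changed by the topic tool today, with no warning that the other two must be kept in sync.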

