[jira] [Commented] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-02 Thread naresh gundu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726992#comment-14726992
 ] 

naresh gundu commented on KAFKA-2498:
-

okay 



> need build steps/instruction while building apache kafka from source github 
> branch 0.8.2
> 
>
> Key: KAFKA-2498
> URL: https://issues.apache.org/jira/browse/KAFKA-2498
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.0
> Environment: I am working on a RHEL 7.1 machine
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have followed the steps from the GitHub repository https://github.com/apache/kafka:
> cd source-code
> gradle
> ./gradlew jar (success)
> ./gradlew srcJar (success)
> ./gradlew test (one test case failed)
> So, please provide me the correct steps or confirm that the above steps are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-02 Thread naresh gundu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

naresh gundu closed KAFKA-2498.
---

invalid ticket

> need build steps/instruction while building apache kafka from source github 
> branch 0.8.2
> 
>
> Key: KAFKA-2498
> URL: https://issues.apache.org/jira/browse/KAFKA-2498
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.0
> Environment: I am working on a RHEL 7.1 machine
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have followed the steps from the GitHub repository https://github.com/apache/kafka:
> cd source-code
> gradle
> ./gradlew jar (success)
> ./gradlew srcJar (success)
> ./gradlew test (one test case failed)
> So, please provide me the correct steps or confirm that the above steps are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2498) need build steps/instruction while building apache kafka from source github branch 0.8.2

2015-09-02 Thread naresh gundu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

naresh gundu resolved KAFKA-2498.
-
Resolution: Auto Closed

> need build steps/instruction while building apache kafka from source github 
> branch 0.8.2
> 
>
> Key: KAFKA-2498
> URL: https://issues.apache.org/jira/browse/KAFKA-2498
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.0
> Environment: I am working on a RHEL 7.1 machine
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have followed the steps from the GitHub repository https://github.com/apache/kafka:
> cd source-code
> gradle
> ./gradlew jar (success)
> ./gradlew srcJar (success)
> ./gradlew test (one test case failed)
> So, please provide me the correct steps or confirm that the above steps are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1893) Allow regex subscriptions in the new consumer

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1893:
---
Priority: Blocker  (was: Critical)

> Allow regex subscriptions in the new consumer
> -
>
> Key: KAFKA-1893
> URL: https://issues.apache.org/jira/browse/KAFKA-1893
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Ashish K Singh
>Priority: Blocker
> Fix For: 0.8.3
>
>
> The consumer needs to handle subscribing to regular expressions. Presumably 
> this would be done as a new api,
> {code}
>   void subscribe(java.util.regex.Pattern pattern);
> {code}
> Some questions/thoughts to work out:
>  - It should not be possible to mix pattern subscription with partition 
> subscription.
>  - Is it allowable to mix this with normal topic subscriptions? Logically 
> this is okay but a bit complex to implement.
>  - We need to ensure we regularly update the metadata and recheck our regexes 
> against the metadata to update subscriptions for new topics that are created 
> or old topics that are deleted.
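
A minimal usage sketch of the proposed API, assuming the single-argument subscribe(Pattern) shown above ends up on the new KafkaConsumer; the broker address, group id and pattern are placeholders, and the surrounding setup follows the new consumer's documented usage rather than anything fixed by this ticket:

{code}
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PatternSubscribeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "pattern-demo");               // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // Subscribe to every topic whose name matches the pattern; per the notes above,
        // the subscription would have to be re-evaluated as topics are created or deleted.
        consumer.subscribe(Pattern.compile("metrics\\..*"));

        ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
        for (ConsumerRecord<byte[], byte[]> record : records)
            System.out.println(record.topic() + "-" + record.partition() + "@" + record.offset());
        consumer.close();
    }
}
{code}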



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2120) Add a request timeout to NetworkClient

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2120:
---
Priority: Blocker  (was: Major)

> Add a request timeout to NetworkClient
> --
>
> Key: KAFKA-2120
> URL: https://issues.apache.org/jira/browse/KAFKA-2120
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2120.patch, KAFKA-2120_2015-07-27_15:31:19.patch, 
> KAFKA-2120_2015-07-29_15:57:02.patch, KAFKA-2120_2015-08-10_19:55:18.patch, 
> KAFKA-2120_2015-08-12_10:59:09.patch
>
>
> Currently NetworkClient does not have a timeout setting for requests. So if 
> no response is received for a request due to reasons such as broker is down, 
> the request will never be completed.
> Request timeout will also be used as implicit timeout for some methods such 
> as KafkaProducer.flush() and kafkaProducer.close().
> KIP-19 is created for this public interface change.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
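
A hedged sketch of how a producer might use the new timeout once KIP-19 lands, assuming it is exposed as a client configuration named request.timeout.ms (the exact key is defined by the KIP, not this ticket); broker address and topic are placeholders:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RequestTimeoutSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Assumed config key per KIP-19: give up on a request if no response arrives
        // within 30 seconds instead of waiting forever when a broker is down.
        props.put("request.timeout.ms", "30000");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        producer.flush();   // per the ticket, flush() and close() are implicitly bounded by the request timeout
        producer.close();
    }
}
{code}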



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2211) KafkaAuthorizer: Add simpleACLAuthorizer implementation.

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2211:
---
Priority: Blocker  (was: Major)

> KafkaAuthorizer: Add simpleACLAuthorizer implementation.
> 
>
> Key: KAFKA-2211
> URL: https://issues.apache.org/jira/browse/KAFKA-2211
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2211.patch
>
>
> Subtask-2 for Kafka-1688. 
> Please see KIP-11 to get details on out of box SimpleACLAuthorizer 
> implementation 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2136:
---
Priority: Blocker  (was: Major)

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.
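
As a rough illustration of the "client side metrics" part, a sketch that dumps any registered producer metric whose name mentions throttling; the actual metric names are defined by KIP-13 and the patches above, not guessed here:

{code}
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ThrottleMetricsSketch {
    // Print every producer metric whose name mentions "throttle"; once the protocol
    // change lands, the new delay metrics would be visible through this registry.
    static void printThrottleMetrics(KafkaProducer<?, ?> producer) {
        Map<MetricName, ? extends Metric> metrics = producer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
            if (entry.getKey().name().contains("throttle"))
                System.out.println(entry.getKey().name() + " = " + entry.getValue().value());
        }
    }
}
{code}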



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2136) Client side protocol changes to return quota delays

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar resolved KAFKA-2136.

Resolution: Fixed

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
> KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
> KAFKA-2136_2015-06-09_10:10:25.patch, KAFKA-2136_2015-06-30_19:43:55.patch, 
> KAFKA-2136_2015-07-13_13:34:03.patch, KAFKA-2136_2015-08-18_13:19:57.patch, 
> KAFKA-2136_2015-08-18_13:24:00.patch, KAFKA-2136_2015-08-21_16:29:17.patch, 
> KAFKA-2136_2015-08-24_10:33:10.patch, KAFKA-2136_2015-08-25_11:29:52.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2500:
--

Assignee: Jason Gustafson  (was: Neha Narkhede)

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.3
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.8.3
>
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71
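
Purely as an illustration of the snapshot use case, a sketch against the old consumer API that assumes the proposed maxEndOffset() accessor on MessageAndMetadata; that accessor is hypothetical (it is what the submitted patch describes, not a shipped API):

{code}
import java.io.IOException;
import java.io.OutputStream;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

public class SnapshotSketch {
    // Copy messages to a backup stream and stop once the partition's high watermark
    // is reached. maxEndOffset() is the hypothetical accessor proposed in this ticket.
    static void snapshot(KafkaStream<byte[], byte[]> stream, OutputStream backup) throws IOException {
        for (MessageAndMetadata<byte[], byte[]> msg : stream) {
            backup.write(msg.message());
            if (msg.offset() >= msg.maxEndOffset() - 1)   // hypothetical accessor
                break;   // end of the partition reached: the snapshot is complete
        }
    }
}
{code}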



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2503) Metrics should be better documented

2015-09-02 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2503:
---

 Summary: Metrics should be better documented
 Key: KAFKA-2503
 URL: https://issues.apache.org/jira/browse/KAFKA-2503
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Priority: Minor


metric.reporters configuration is missing from our docs.

In addition some explanation about the metric reporters and a pointer to list 
of available metrics will be helpful.

Once we move away from Yammer, we will also need to document how to write a 
reporter for KafkaMetrics.
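
For context, a minimal sketch of what a custom reporter for KafkaMetrics could look like, based on the MetricsReporter interface in the clients jar (signatures should be double-checked against the release being documented). It would be enabled by listing the class name in the metric.reporters configuration mentioned above.

{code}
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

public class LoggingMetricsReporter implements MetricsReporter {
    @Override
    public void configure(Map<String, ?> configs) { }      // inherited from Configurable

    @Override
    public void init(List<KafkaMetric> metrics) {
        // called once with all metrics that exist when the reporter is registered
        for (KafkaMetric metric : metrics)
            System.out.println("registered metric: " + metric.metricName());
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // called whenever a metric is added or updated
    }

    @Override
    public void close() { }
}
{code}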



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Exclude conflicting zookeeper version from 'co...

2015-09-02 Thread shtratos
Github user shtratos closed the pull request at:

https://github.com/apache/kafka/pull/162


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Replace "it's" with "its" where appropriate

2015-09-02 Thread magnusr
GitHub user magnusr opened a pull request:

https://github.com/apache/kafka/pull/186

Replace "it's" with "its" where appropriate

No Jira ticket created, as the Contributing Code Changes doc says it's not 
necessary for javadoc typo fixes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/magnusr/kafka feature/its

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/186.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #186


commit 1c4668a81135bdce9bf77634269aa5d872a42546
Author: Magnus Reftel 
Date:   2015-09-02T07:15:57Z

Replace "it's" with "its" where appropriate




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Replace "it's" with "its" where appropriate

2015-09-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/186


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-09-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2500:
---
Issue Type: Sub-task  (was: Improvement)
Parent: KAFKA-2387

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.8.3
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.8.3
>
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2502) Quotas documentation for 0.8.3

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2502:
---
Labels: quotas  (was: )

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Complete quotas documentation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2502) Quotas documentation for 0.8.3

2015-09-02 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2502:
--

 Summary: Quotas documentation for 0.8.3
 Key: KAFKA-2502
 URL: https://issues.apache.org/jira/browse/KAFKA-2502
 Project: Kafka
  Issue Type: Task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
Priority: Blocker
 Fix For: 0.8.3


Complete quotas documentation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2209) Change client quotas dynamically using DynamicConfigManager

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2209:
---
Labels: quotas  (was: )

> Change client quotas dynamically using DynamicConfigManager
> ---
>
> Key: KAFKA-2209
> URL: https://issues.apache.org/jira/browse/KAFKA-2209
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2394) Use RollingFileAppender by default in log4j.properties

2015-09-02 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727725#comment-14727725
 ] 

Jason Gustafson commented on KAFKA-2394:


[~jinxing6...@126.com] You should be able to assign tasks to yourself now.

> Use RollingFileAppender by default in log4j.properties
> --
>
> Key: KAFKA-2394
> URL: https://issues.apache.org/jira/browse/KAFKA-2394
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Minor
>  Labels: newbie
>
> The default log4j.properties bundled with Kafka uses ConsoleAppender and 
> DailyRollingFileAppender, which offer no protection to users from spammy 
> logging. In extreme cases (such as when issues like KAFKA-1461 are 
> encountered), the logs can exhaust the local disk space. This could be a 
> problem for Kafka adoption since new users are less likely to adjust the 
> logging properties themselves, and are more likely to have configuration 
> problems which result in log spam. 
> To fix this, we can use RollingFileAppender, which offers two settings for 
> controlling the maximum space that log files will use.
> maxBackupIndex: how many backup files to retain
> maxFileSize: the max size of each log file
> One question is whether this change is a compatibility concern? The backup 
> strategy and filenames used by RollingFileAppender are different from those 
> used by DailyRollingFileAppender, so any tools which depend on the old format 
> will break. If we think this is a serious problem, one solution would be to 
> provide two versions of log4j.properties and add a flag to enable the new 
> one. Another solution would be to include the RollingFileAppender 
> configuration in the default log4j.properties, but commented out.
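
For reference, a sketch of what the commented-out alternative suggested above could look like in log4j.properties; the appender name and size limits are illustrative, and the log4j 1.x property names are MaxFileSize and MaxBackupIndex:

{code}
# Commented-out RollingFileAppender alternative; sizes are examples only.
#log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
#log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
#log4j.appender.kafkaAppender.MaxFileSize=100MB
#log4j.appender.kafkaAppender.MaxBackupIndex=10
#log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
#log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
{code}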



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2443) Expose windowSize on Measurable

2015-09-02 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2443:
---
Labels: quotas  (was: )

> Expose windowSize on Measurable
> ---
>
> Key: KAFKA-2443
> URL: https://issues.apache.org/jira/browse/KAFKA-2443
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> Currently, we dont have a means to measure the size of the metric window 
> since the final sample can be incomplete.
> Expose windowSize(now) on Measurable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2211) KafkaAuthorizer: Add simpleACLAuthorizer implementation.

2015-09-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727618#comment-14727618
 ] 

Ismael Juma commented on KAFKA-2211:


I was trying to set this to "In Progress", but it seems I am not able to do 
it. :(

> KafkaAuthorizer: Add simpleACLAuthorizer implementation.
> 
>
> Key: KAFKA-2211
> URL: https://issues.apache.org/jira/browse/KAFKA-2211
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2211.patch
>
>
> Subtask-2 for Kafka-1688. 
> Please see KIP-11 to get details on out of box SimpleACLAuthorizer 
> implementation 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2210:
---
Priority: Blocker  (was: Major)

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch
>
>
> This is the first subtask for Kafka-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2452) enable new consumer in mirror maker

2015-09-02 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727733#comment-14727733
 ] 

Jiangjie Qin commented on KAFKA-2452:
-

I want to wait for KAFKA-2389 to be checked in before submitting a patch for this 
ticket, so we don't need to change code because of new consumer API changes.

> enable new consumer in mirror maker
> ---
>
> Key: KAFKA-2452
> URL: https://issues.apache.org/jira/browse/KAFKA-2452
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in mirror maker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: KAFKA-2364 migrate docs from SVN to git

2015-09-02 Thread Manikumar Reddy
Jun/Gwen/Guozhang,
   Need your help to complete this.

  (1) Copy latest docs to kafka repo:
https://github.com/apache/kafka/pull/171

  (2) svn site repo -> git site repo migration : need committer help to
create a branch "asf-site".

   new git site repo :
https://git-wip-us.apache.org/repos/asf/kafka-site.git

Kumar

On Wed, Aug 26, 2015 at 7:43 PM, Manikumar Reddy 
wrote:

> Hi Guozhang,
>
>   Our plan is to follow Gwen's suggested approach and migrate the existing
> svn site repo to a new git repo.
>
>   (1) Gwen's suggestion will help us maintain the latest docs in the Kafka repo
> itself.  We periodically need to copy these latest docs to the site repo. I
> will submit a patch for this.
>
>   (2)  The svn repo -> git repo migration will help us integrate the site repo
> with git tooling/GitHub. It will make the site repo and its changes easier to
> maintain.  So we have created a new git repo for the docs and need committer help
> to create a branch "asf-site".
>
>new git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
>
>   Hope this clears the confusion.
>
> Kumar
> I thought Gwen's suggestion was to use a separate folder in the same repo
> for docs instead of a separate branch; Gwen can correct me if I am wrong?
>
> Guozhang
>
> On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy 
> wrote:
>
> > Hi,
> >
> >Infra team created git repo for kafka site docs.
> >
> >Gwen/Guozhang,
> >Need your help to create a branch "asf-site" and copy the existing
> > svn contents to that branch.
> >
> > git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
> >
> >
> >
> https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630
> >
> > Kumar
> >
> > On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma  wrote:
> >
> > > My preference would be to do `2` because it reduces the number of tools
> > we
> > > need to know. If we want to clone the repo for the generated site, we
> can
> > > use the same tools as we do for the code repo and we can watch for
> > changes
> > > on GitHub, etc.
> > >
> > > Ismael
> > >
> > > On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy  >
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > Can we finalize the  approach? So that we can proceed further.
> > > >
> > > > 1. Gwen's suggestion + existing svn repo
> > > > 2. Gwen's suggestion + new git repo for docs
> > > >
> > > > kumar
> > > >
> > > > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy <
> > ku...@nmsworks.co.in>
> > > > wrote:
> > > >
> > > > >   Also can we migrate svn repo to git repo? This will help us to
> fix
> > > > > occasional  doc changes/bug fixes through github PR.
> > > > >
> > > > > On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > >> Gwen: I remembered it wrong. We would not need another round of
> > > voting.
> > > > >>
> > > > >> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira 
> > > > wrote:
> > > > >>
> > > > >> > Looking back at this thread, the +1 mention "same repo", so I'm
> > not
> > > > >> sure a
> > > > >> > new vote is required.
> > > > >> >
> > > > >> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang <
> > wangg...@gmail.com>
> > > > >> wrote:
> > > > >> >
> > > > >> > > So I think we have two different approaches here. The original
> > > > >> proposal
> > > > >> > > from Aseem is to move website from SVN to a separate Git repo,
> > and
> > > > >> hence
> > > > >> > > have separate commits on code / doc changes. For that we have
> > > > >> accumulated
> > > > >> > > enough binding +1s to move on; Gwen's proposal is to move
> > website
> > > > into
> > > > >> > the
> > > > >> > > same repo under a different folder. If people feel they prefer
> > > this
> > > > >> over
> > > > >> > > the previous approach I would like to call for another round
> of
> > > > >> voting.
> > > > >> > >
> > > > >> > > Guozhang
> > > > >> > >
> > > > >> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish <
> > paliwalash...@gmail.com
> > > >
> > > > >> > wrote:
> > > > >> > >
> > > > >> > > > +1 to what Gwen has suggested. This is what we follow in
> > Flume.
> > > > >> > > >
> > > > >> > > > All the latest doc changes are in git, once ready you move
> > > changes
> > > > >> to
> > > > >> > > > svn to update website.
> > > > >> > > > The only catch is, when you need to update specific changes
> to
> > > > >> website
> > > > >> > > > outside release cycle, need to be a bit careful :)
> > > > >> > > >
> > > > >> > > > On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira <
> > > g...@confluent.io>
> > > > >> > wrote:
> > > > >> > > > > Yeah, so the way this works in few other projects I worked
> > on
> > > > is:
> > > > >> > > > >
> > > > >> > > > > * The code repo has a /docs directory with the latest
> > revision
> > > > of
> > > > >> the
> > > > >> > > > docs
> > > > >> > > > > (not multiple 

Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97466
---

Ship it!


Parth, thanks a lot for the latest patch. Overall, it looks pretty good. I only 
have a few minor comments below. Also, do you want to include the changes 
recommended by Ismael?


core/src/main/scala/kafka/common/ErrorMapping.scala (lines 54 - 56)


Could you add the missing error codes 23-28 in the comment?



core/src/main/scala/kafka/server/KafkaApis.scala (line 679)


Unused val?



core/src/main/scala/kafka/server/KafkaServer.scala (line 187)


space after if


- Jun Rao


On Sept. 1, 2015, 10:36 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 1, 2015, 10:36 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> e17e390c507eca0eba28a2763c0e35d66077d1f2 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> d547a01cf7098f216a3775e1e1901c5794e1b24c 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 17db4fa3c3a146f03a35dd7671ad1b06d122bb59 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 3da666f73227fc7ef7093e3790546344065f6825 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Updated] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-09-02 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-1387:

Fix Version/s: 0.8.3

> Kafka getting stuck creating ephemeral node it has already created when two 
> zookeeper sessions are established in a very short period of time
> -
>
> Key: KAFKA-1387
> URL: https://issues.apache.org/jira/browse/KAFKA-1387
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>Assignee: Flavio Junqueira
>Priority: Blocker
>  Labels: newbie, patch, zkclient-problems
> Fix For: 0.8.3
>
> Attachments: KAFKA-1387.patch, kafka-1387.patch
>
>
> Kafka broker re-registers itself in zookeeper every time handleNewSession() 
> callback is invoked.
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
>  
> Now imagine the following sequence of events.
> 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
> the zkClient, but not invoked yet.
> 2) Zookeeper session reestablishes again, queueing callback second time.
> 3) First callback is invoked, creating /broker/[id] ephemeral path.
> 4) Second callback is invoked and it tries to create /broker/[id] path using 
> createEphemeralPathExpectConflictHandleZKBug() function. But the path is 
> already exists, so createEphemeralPathExpectConflictHandleZKBug() is getting 
> stuck in the infinite loop.
> Seems like the controller election code has the same issue.
> I'm able to reproduce this issue on the 0.8.1 branch from github using the 
> following configs.
> # zookeeper
> tickTime=10
> dataDir=/tmp/zk/
> clientPort=2101
> maxClientCnxns=0
> # kafka
> broker.id=1
> log.dir=/tmp/kafka
> zookeeper.connect=localhost:2101
> zookeeper.connection.timeout.ms=100
> zookeeper.sessiontimeout.ms=100
> Just start kafka and zookeeper and then pause zookeeper several times using 
> Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2211) KafkaAuthorizer: Add simpleACLAuthorizer implementation.

2015-09-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2211:
---
Status: Open  (was: Patch Available)

[~parth.brahmbhatt] said that he would create a new PR for this issue once 
KAFKA-2210 is done.

> KafkaAuthorizer: Add simpleACLAuthorizer implementation.
> 
>
> Key: KAFKA-2211
> URL: https://issues.apache.org/jira/browse/KAFKA-2211
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2211.patch
>
>
> Subtask-2 for Kafka-1688. 
> Please see KIP-11 to get details on out of box SimpleACLAuthorizer 
> implementation 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2492) Upgrade zkclient dependency to 0.6

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727657#comment-14727657
 ] 

ASF GitHub Bot commented on KAFKA-2492:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/184


> Upgrade zkclient dependency to 0.6
> --
>
> Key: KAFKA-2492
> URL: https://issues.apache.org/jira/browse/KAFKA-2492
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Stevo Slavic
>Priority: Trivial
> Fix For: 0.8.3
>
>
> If zkclient does not get replaced with curator (via KAFKA-873) sooner please 
> consider upgrading zkclient dependency to recently released 0.6.
> zkclient 0.6 has few important changes included like:
> - 
> [fix|https://github.com/sgroschupf/zkclient/commit/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe]
>  to fail retryUntilConnected actions with clear exception in case client gets 
> closed
> - [upgraded zookeeper dependency from 3.4.6 to 
> 3.4.3|https://github.com/sgroschupf/zkclient/commit/8975c1790f7f36cc5d4feea077df337fb1ddabdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2492) Upgrade zkclient dependency to 0.6

2015-09-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2492:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 184
[https://github.com/apache/kafka/pull/184]

> Upgrade zkclient dependency to 0.6
> --
>
> Key: KAFKA-2492
> URL: https://issues.apache.org/jira/browse/KAFKA-2492
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Stevo Slavic
>Priority: Trivial
> Fix For: 0.8.3
>
>
> If zkclient does not get replaced with curator (via KAFKA-873) sooner please 
> consider upgrading zkclient dependency to recently released 0.6.
> zkclient 0.6 has few important changes included like:
> - 
> [fix|https://github.com/sgroschupf/zkclient/commit/0630c9c6e67ab49a51e80bfd939e4a0d01a69dfe]
>  to fail retryUntilConnected actions with clear exception in case client gets 
> closed
> - [upgraded zookeeper dependency from 3.4.6 to 
> 3.4.3|https://github.com/sgroschupf/zkclient/commit/8975c1790f7f36cc5d4feea077df337fb1ddabdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2492; Upgraded zkclient dependency from ...

2015-09-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/184


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2504) Stop logging WARN when client disconnects

2015-09-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727886#comment-14727886
 ] 

Ismael Juma commented on KAFKA-2504:


Probably a similar one was fixed, this one has been there since January 
according to git:

{code}
catch (IOException e) {
    String desc = channel.socketDescription();
    if (e instanceof EOFException || e instanceof ConnectException)
        log.debug("Connection {} disconnected", desc);
    else
        log.warn("Error in I/O with connection to {}", desc, e);
    close(channel);
    this.disconnected.add(channel.id());
}
{code}

> Stop logging WARN when client disconnects
> -
>
> Key: KAFKA-2504
> URL: https://issues.apache.org/jira/browse/KAFKA-2504
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
> Fix For: 0.8.3
>
>
> I thought we fixed this one, but it came back. This can fill the logs and is 
> fairly useless. It should be logged at DEBUG level:
> {code}
> [2015-09-02 12:05:59,743] WARN Error in I/O with connection to /10.191.0.36 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:111)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:296)
>   at kafka.network.Processor.run(SocketServer.scala:405)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2437) Controller does not handle zk node deletion correctly.

2015-09-02 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728248#comment-14728248
 ] 

Jiangjie Qin commented on KAFKA-2437:
-

Debugged with [~jjkoshy] and found the following root cause.

zkClient determines whether to fire handleDataChange() or handleDataDeleted() 
in the following way. When it receives an event from zookeeper, it tries to read the 
data from the watched path. If the path no longer exists, 
handleDataDeleted() is fired. Otherwise, handleDataChange() is fired.

When the /controller path is deleted, the zkClient watcher receives a zk event, but 
before zkClient reads the data from the watched path, the path can be created again by 
another broker. In this case, only handleDataChange() fires, i.e. the broker 
misses the node deletion event. If the broker that missed the node deletion event 
happens to be the old controller, it will not resign and the cluster will end up 
with more than one controller.
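
To make the race easier to see, an illustrative listener using zkClient's IZkDataListener interface (this is a sketch, not the controller's actual listener code):

{code}
import org.I0Itec.zkclient.IZkDataListener;

// zkClient decides which callback to fire by re-reading the watched path AFTER the
// zookeeper event arrives. If /controller is deleted and immediately re-created by
// another broker, only handleDataChange() fires and the deletion is never observed.
public class ControllerPathListener implements IZkDataListener {
    @Override
    public void handleDataChange(String dataPath, Object data) throws Exception {
        // fired when the path exists at read time, even if it was deleted and
        // re-created in between
    }

    @Override
    public void handleDataDeleted(String dataPath) throws Exception {
        // fired only if the path is still absent when zkClient re-reads it; the old
        // controller relies on this callback to resign, which is why the missed
        // deletion leaves two controllers
    }
}
{code}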

> Controller does not handle zk node deletion correctly.
> --
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We see this issue occasionally. The symptom is that when the /controller path gets 
> deleted, the old controller does not resign, so we end up having more than one 
> controller in the cluster (although requests from the controller with the old 
> epoch will not be accepted). After checking the zookeeper watchers using wchp, 
> it looks like the zookeeper session that created the /controller path does not have 
> a watcher on /controller. That causes the old controller not to resign. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2114) Unable to change min.insync.replicas default

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2114:
---
Fix Version/s: (was: 0.8.2.2)

> Unable to change min.insync.replicas default
> 
>
> Key: KAFKA-2114
> URL: https://issues.apache.org/jira/browse/KAFKA-2114
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bryan Baugher
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-2114.patch
>
>
> Following the comment here[1] I was unable to change the min.insync.replicas 
> default value. I tested this by setting up a 3 node cluster, wrote to a topic 
> with a replication factor of 3, using request.required.acks=-1 and setting 
> min.insync.replicas=2 on the broker's server.properties. I then shutdown 2 
> brokers but I was still able to write successfully. Only after running the 
> alter topic command setting min.insync.replicas=2 on the topic did I see 
> write failures.
> [1] - 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201504.mbox/%3CCANZ-JHF71yqKE6%2BKKhWe2EGUJv6R3bTpoJnYck3u1-M35sobgg%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1883) NullPointerException in RequestSendThread

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1883:
---
Fix Version/s: (was: 0.8.2.2)

> NullPointerException in RequestSendThread
> -
>
> Key: KAFKA-1883
> URL: https://issues.apache.org/jira/browse/KAFKA-1883
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: jaikiran pai
>Assignee: jaikiran pai
> Fix For: 0.8.3
>
> Attachments: KAFKA-1883.patch
>
>
> I often see the following exception while running some tests
> (ProducerFailureHandlingTest.testNoResponse is one such instance):
> {code}
> [2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
> Controller 0 fails to send a request to broker
> id:1,host:localhost,port:56729 (kafka.controller.RequestSendThread:103)
> java.lang.NullPointerException
> at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.
> scala:150)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> {code}
> Looking at the code in question, I can see that the NPE can be triggered
> when "receive" is null, which can happen if "isRunning" is false
> (i.e. a shutdown has been requested).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1668) TopicCommand doesn't warn if --topic argument doesn't match any topics

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1668:
---
Fix Version/s: (was: 0.8.2.2)

> TopicCommand doesn't warn if --topic argument doesn't match any topics
> --
>
> Key: KAFKA-1668
> URL: https://issues.apache.org/jira/browse/KAFKA-1668
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ryan Berdeen
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1668.patch
>
>
> Running {{kafka-topics.sh --alter}} with an invalid {{--topic}} argument 
> produces no output and exits with 0, indicating success.
> {code}
> $ bin/kafka-topics.sh --topic does-not-exist --alter --config invalid=xxx 
> --zookeeper zkhost:2181
> $ echo $?
> 0
> {code}
> An invalid topic name or a regular expression that matches 0 topics should at 
> least print a warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1758) corrupt recovery file prevents startup

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1758:
---
Fix Version/s: (was: 0.8.2.2)

> corrupt recovery file prevents startup
> --
>
> Key: KAFKA-1758
> URL: https://issues.apache.org/jira/browse/KAFKA-1758
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Rosenberg
>Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1758.patch, KAFKA-1758_2015-05-09_12:29:20.patch
>
>
> Hi,
> We recently had a kafka node go down suddenly. When it came back up, it 
> apparently had a corrupt recovery file, and refused to startup:
> {code}
> 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
> KafkaServer
> java.lang.NumberFormatException: For input string: 
> "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
> at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> at kafka.log.LogManager.(LogManager.scala:57)
> at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> {code}
> And the app is under a monitor (so it was repeatedly restarting and failing 
> with this error for several minutes before we got to it)…
> We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
> then restarted cleanly (but of course re-synced all it’s data from replicas, 
> so we had no data loss).
> Anyway, I’m wondering if that’s the expected behavior? Or should it not 
> declare it corrupt and then proceed automatically to an unclean restart?
> Should this NumberFormatException be handled a bit more gracefully?
> We saved the corrupt file if it’s worth inspecting (although I doubt it will 
> be useful!)….
> The corrupt files appeared to be all zeroes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2235) LogCleaner offset map overflow

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2235:
---
Fix Version/s: (was: 0.8.2.2)

> LogCleaner offset map overflow
> --
>
> Key: KAFKA-2235
> URL: https://issues.apache.org/jira/browse/KAFKA-2235
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 0.8.1, 0.8.2.0
>Reporter: Ivan Simoneko
>Assignee: Ivan Simoneko
> Fix For: 0.8.3
>
> Attachments: KAFKA-2235_v1.patch, KAFKA-2235_v2.patch
>
>
> We've seen log cleaning generating an error for a topic with lots of small 
> messages. It seems that offset map overflow is possible if a log segment 
> contains more unique keys than there are empty slots in the offsetMap. Checking 
> baseOffset and map utilization before processing a segment seems to be not 
> enough, because it doesn't take the segment size (the number of unique messages 
> in the segment) into account.
> I suggest estimating the upper bound of keys in a segment as the number of 
> messages in the segment and comparing it with the number of available slots in 
> the map (keeping the desired load factor in mind). This should work whenever an 
> empty map is capable of holding all the keys of a single segment. If even a 
> single segment cannot fit into an empty map, the cleanup process will still 
> fail. Probably there should be a limit on the log segment entry count?
> Here is the stack trace for this error:
> 2015-05-19 16:52:48,758 ERROR [kafka-log-cleaner-thread-0] 
> kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: Attempt to add a new 
> entry to a full offset map.
>at scala.Predef$.require(Predef.scala:233)
>at kafka.log.SkimpyOffsetMap.put(OffsetMap.scala:79)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:543)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:538)
>at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
>at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>at kafka.message.MessageSet.foreach(MessageSet.scala:67)
>at 
> kafka.log.Cleaner.kafka$log$Cleaner$$buildOffsetMapForSegment(LogCleaner.scala:538)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:515)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:512)
>at scala.collection.immutable.Stream.foreach(Stream.scala:547)
>at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:512)
>at kafka.log.Cleaner.clean(LogCleaner.scala:307)
>at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:221)
>at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:199)
>at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1836) metadata.fetch.timeout.ms set to zero blocks forever

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1836:
---
Fix Version/s: (was: 0.8.2.2)

> metadata.fetch.timeout.ms set to zero blocks forever
> 
>
> Key: KAFKA-1836
> URL: https://issues.apache.org/jira/browse/KAFKA-1836
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0
>Reporter: Paul Pearcy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1836-new-patch.patch, KAFKA-1836.patch
>
>
> You can easily work around this by setting the timeout value to 1ms, but 0ms 
> should mean 0ms or at least have the behavior documented. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2096) Enable keepalive socket option for broker to prevent socket leak

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2096:
---
Fix Version/s: (was: 0.8.2.2)

> Enable keepalive socket option for broker to prevent socket leak
> 
>
> Key: KAFKA-2096
> URL: https://issues.apache.org/jira/browse/KAFKA-2096
> Project: Kafka
>  Issue Type: Improvement
>  Components: network
>Affects Versions: 0.8.2.1
>Reporter: Allen Wang
>Assignee: Allen Wang
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: patch.diff
>
>
> We run a Kafka 0.8.2.1 cluster in AWS with a large number of producers (> 
> 1). Also, the number of producer instances scales up and down significantly 
> on a daily basis.
> The issue we found is that after 10 days, the open file descriptor count will 
> approach the limit of 32K. An investigation of these open file descriptors 
> shows that a significant portion of them are from client instances that were 
> terminated during scaling down. Somehow they still show as "ESTABLISHED" in 
> netstat. We suspect that the AWS firewall between the client and broker 
> causes this issue.
> We attempted to use the "keepalive" socket option to reduce this socket leak on 
> the broker, and it appears to be working. Specifically, we added this line to 
> kafka.network.Acceptor.accept():
>   socketChannel.socket().setKeepAlive(true)
> During our experiment with this change, we confirmed that netstat entries whose 
> client instance had been terminated were probed as configured in the 
> operating system. After the configured number of probes, the OS determined that 
> the peer was no longer alive and removed the entry, possibly after an error 
> in Kafka reading from the channel and closing it. Also, our 
> experiment shows that after a few days, the instance was able to keep a 
> stable low point of open file descriptor count, compared with other instances 
> where the low point keeps increasing day to day.
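
For illustration only (this is not the actual kafka.network.Acceptor code), a minimal NIO accept loop showing where the one-line change above sits:

{code}
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class KeepAliveAcceptSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(9092));   // illustrative port
        while (true) {
            SocketChannel socketChannel = server.accept();
            socketChannel.configureBlocking(false);
            // The proposed change: enable TCP keepalive so the OS probes and eventually
            // closes connections whose peer silently disappeared behind a firewall.
            socketChannel.socket().setKeepAlive(true);
            // hand the channel off to a processor thread here...
        }
    }
}
{code}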



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1057) Trim whitespaces from user specified configs

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1057:
---
Fix Version/s: (was: 0.8.2.2)

> Trim whitespaces from user specified configs
> 
>
> Key: KAFKA-1057
> URL: https://issues.apache.org/jira/browse/KAFKA-1057
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Neha Narkhede
>Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1057.patch, KAFKA-1057_2014-10-04_20:15:32.patch
>
>
> Whitespace in configs is a common problem that leads to config errors. It 
> would be nice if Kafka could trim whitespace from configs automatically
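
A minimal sketch of the kind of trimming being asked for, applied when loading a
properties file; the helper name is illustrative, not existing Kafka code:

    import java.io.FileInputStream
    import java.util.Properties
    import scala.collection.JavaConverters._

    object TrimmedConfigLoaderSketch {
      // Load a properties file and strip leading/trailing whitespace from every
      // key and value before the values reach the config parser.
      def loadTrimmed(path: String): Map[String, String] = {
        val props = new Properties()
        val in = new FileInputStream(path)
        try props.load(in) finally in.close()
        props.asScala.map { case (k, v) => k.trim -> v.trim }.toMap
      }
    }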



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-09-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1840:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

The code has been checked in as part of KAFKA-1997

> Add a simple message handler in Mirror Maker
> 
>
> Key: KAFKA-1840
> URL: https://issues.apache.org/jira/browse/KAFKA-1840
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1840.patch, KAFKA-1840_2015-01-20_11:36:14.patch, 
> KAFKA-1840_2015-01-30_18:25:00.patch, KAFKA-1840_2015-02-01_00:16:53.patch
>
>
> Currently mirror maker simply mirrors all the messages it consumes from the 
> source cluster to the target cluster. It would be useful to allow users to do 
> some simple processing, such as filtering/reformatting, in mirror maker. We 
> can allow users to wire in a message handler to handle messages. The default 
> handler could just do nothing.
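
A rough sketch of the idea; the record type, trait, and names below are
hypothetical and for illustration only, and the interface actually committed
under KAFKA-1997 may differ:

    // Hypothetical record and handler types for illustration only.
    case class MirroredRecord(topic: String, key: Array[Byte], value: Array[Byte])

    trait MessageHandler {
      // Return the records to produce to the target cluster for one consumed record:
      // an empty Seq filters the record out, a modified record reformats it.
      def handle(record: MirroredRecord): Seq[MirroredRecord]
    }

    // The default handler mentioned above: pass everything through untouched.
    object IdentityHandler extends MessageHandler {
      override def handle(record: MirroredRecord): Seq[MirroredRecord] = Seq(record)
    }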



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-09-02_17:32:06.patch

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch, KAFKA-2210_2015-09-02_14:50:29.patch, 
> KAFKA-2210_2015-09-02_17:32:06.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 
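
As a rough illustration of what "pluggable authorizer" means here, a simplified
sketch with hypothetical types and signatures; the authoritative definitions are
in KIP-11 and the attached patches (Acl, Resource, Operation, Authorizer):

    // Simplified, hypothetical shapes for illustration only.
    case class Principal(principalType: String, name: String)
    case class Resource(resourceType: String, name: String)

    sealed trait Operation
    case object Read extends Operation
    case object Write extends Operation
    case object Create extends Operation

    trait Authorizer {
      // Called once with broker configs so an implementation can initialize itself.
      def configure(configs: Map[String, String]): Unit
      // The single question the broker asks before serving a request.
      def authorize(principal: Principal, operation: Operation, resource: Resource): Boolean
    }

    // A trivially permissive implementation, useful only to show how a plug-in slots in.
    class AllowAllAuthorizer extends Authorizer {
      override def configure(configs: Map[String, String]): Unit = ()
      override def authorize(p: Principal, op: Operation, r: Resource): Boolean = true
    }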



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728293#comment-14728293
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch, KAFKA-2210_2015-09-02_14:50:29.patch, 
> KAFKA-2210_2015-09-02_17:32:06.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated Sept. 3, 2015, 12:32 a.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Now addressing Ismael's comments. Case sensitive checks.


Addressing Jun's comments.


Merge remote-tracking branch 'origin/trunk' into az

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala
core/src/main/scala/kafka/server/KafkaServer.scala

Deleting KafkaConfigDefTest


Addressing comments from Ismael.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az


Consolidating KafkaPrincipal.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az

Conflicts:

clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java

clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
core/src/main/scala/kafka/server/KafkaApis.scala

Making Acl structure take only one principal, operation and host.


Merge remote-tracking branch 'origin/trunk' into az


Reverting unintended new line change.


Addressing comments from Jun.


Merge remote-tracking branch 'origin/trunk' into az


Various tweaks that make the code more readable

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala

Fixing compilation errors after cherry-picking.


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
641afa1b2474150fa1002e9fedca13ff55175a7e 
  
clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java 
b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
  
clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
 PRE-CREATION 
  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
d547a01cf7098f216a3775e1e1901c5794e1b24c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
756cf775cadbcaf01df7f691d8d01d9ff75db291 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
3da666f73227fc7ef7093e3790546344065f6825 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[GitHub] kafka pull request: KAFKA-2437: Fix ZookeeperLeaderElector to hand...

2015-09-02 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/189

KAFKA-2437: Fix ZookeeperLeaderElector to handle node deletion correctly.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2437

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #189


commit 11d9fd6595932553e138a3c3094322ebd9170d6c
Author: Jiangjie Qin 
Date:   2015-09-03T00:41:26Z

KAFKA-2437: Fix ZookeeperLeaderElector to handle node deletion correctly.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Updated testing readme

2015-09-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/187


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2502) Quotas documentation for 0.8.3

2015-09-02 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-2502:
--
Description: 
Complete quotas documentation

Also, 
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol 
needs to be updated with protocol changes introduced in KAFKA-2136

  was:Complete quotas documentation


> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.8.3
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2477) Replicas spuriously deleting all segments in partition

2015-09-02 Thread Håkon Hitland (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727933#comment-14727933
 ] 

Håkon Hitland commented on KAFKA-2477:
--

I don't think enabling trace logging would be practical in our production 
environment, unfortunately.

We see the error regularly in production, but I haven't been able to reproduce 
it locally.

> Replicas spuriously deleting all segments in partition
> --
>
> Key: KAFKA-2477
> URL: https://issues.apache.org/jira/browse/KAFKA-2477
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Håkon Hitland
> Attachments: kafka_log.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2491; update ErrorMapping with new consu...

2015-09-02 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/188

KAFKA-2491; update ErrorMapping with new consumer errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2491

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #188


commit b6728fd9343d598c37c583cf4c1e2f04c9099367
Author: Jason Gustafson 
Date:   2015-09-02T20:06:34Z

KAFKA-2491; update ErrorMapping with new consumer errors




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-824) java.lang.NullPointerException in commitOffsets

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-824.
---
   Resolution: Fixed
Fix Version/s: 0.8.3

[~parth.brahmbhatt], thanks for confirming this. Resolving this jira.

> java.lang.NullPointerException in commitOffsets 
> 
>
> Key: KAFKA-824
> URL: https://issues.apache.org/jira/browse/KAFKA-824
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.7.2, 0.8.2.0
>Reporter: Yonghui Zhao
>Assignee: Parth Brahmbhatt
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: ZkClient.0.3.txt, ZkClient.0.4.txt, screenshot-1.jpg
>
>
> Neha Narkhede
> "Yes, I have. Unfortunately, I never quite around to fixing it. My guess is
> that it is caused due to a race condition between the rebalance thread and
> the offset commit thread when a rebalance is triggered or the client is
> being shutdown. Do you mind filing a bug ?"
> 2013/03/25 12:08:32.020 WARN [ZookeeperConsumerConnector] [] 
> 0_lu-ml-test10.bj-1364184411339-7c88f710 exception during commitOffsets
> java.lang.NullPointerException
> at org.I0Itec.zkclient.ZkConnection.writeData(ZkConnection.java:111)
> at org.I0Itec.zkclient.ZkClient$10.call(ZkClient.java:813)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
> at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:809)
> at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:777)
> at kafka.utils.ZkUtils$.updatePersistentPath(ZkUtils.scala:103)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2$$anonfun$apply$4.apply(ZookeeperConsumerConnector.scala:251)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2$$anonfun$apply$4.apply(ZookeeperConsumerConnector.scala:248)
> at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:549)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> at 
> scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:570)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2.apply(ZookeeperConsumerConnector.scala:248)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$commitOffsets$2.apply(ZookeeperConsumerConnector.scala:246)
> at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> at kafka.utils.Pool$$anon$1.foreach(Pool.scala:53)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> at kafka.utils.Pool.foreach(Pool.scala:24)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:246)
> at 
> kafka.consumer.ZookeeperConsumerConnector.autoCommit(ZookeeperConsumerConnector.scala:232)
> at 
> kafka.consumer.ZookeeperConsumerConnector$$anonfun$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:126)
> at kafka.utils.Utils$$anon$2.run(Utils.scala:58)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
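
A minimal sketch of one way to close the suspected race, using hypothetical
names; the real code lives in kafka.consumer.ZookeeperConsumerConnector and may
take a different approach:

    // Serialize offset commits against shutdown/rebalance so a commit never runs
    // through a ZooKeeper connection that has already been torn down (the
    // suspected cause of the NPE above).
    class OffsetCommitterSketch(writeToZk: (String, String) => Unit) {
      private val lock = new Object
      @volatile private var closed = false

      def commitOffsets(offsets: Map[String, String]): Unit = lock.synchronized {
        if (!closed)
          offsets.foreach { case (path, value) => writeToZk(path, value) }
        // else: the connection is gone; skip the commit instead of failing with an NPE
      }

      def shutdown(): Unit = lock.synchronized {
        closed = true // after this point the auto-commit thread becomes a no-op
      }
    }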



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2209) Change client quotas dynamically using DynamicConfigManager

2015-09-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728228#comment-14728228
 ] 

Jun Rao commented on KAFKA-2209:


With the dynamic configuration, do we plan to remove the 
QuotaBytesPerSecondOverrides config in the broker?

> Change client quotas dynamically using DynamicConfigManager
> ---
>
> Key: KAFKA-2209
> URL: https://issues.apache.org/jira/browse/KAFKA-2209
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>  Labels: quotas
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2437) Controller does not handle zk node deletion correctly.

2015-09-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2437:

Summary: Controller does not handle zk node deletion correctly.  (was: 
Controller did not handle zk node deletion correctly.)

> Controller does not handle zk node deletion correctly.
> --
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We see this issue occasionally. The symptom is that when the /controller path 
> gets deleted, the old controller does not resign, so we end up having more 
> than one controller in the cluster (although requests from the controller with 
> the old epoch will not be accepted). After checking the ZooKeeper watchers 
> using wchp, it looks like the ZooKeeper session that created the /controller 
> path does not have a watcher on /controller. That causes the old controller 
> not to resign.
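
A minimal sketch of the kind of handling the fix adds, built on the ZkClient
listener interface; the callbacks (amILeader/resign/elect) are hypothetical
placeholders, and the actual fix lives in kafka.server.ZookeeperLeaderElector:

    import org.I0Itec.zkclient.{IZkDataListener, ZkClient}

    class LeaderChangeListenerSketch(zkClient: ZkClient,
                                     amILeader: () => Boolean,
                                     resign: () => Unit,
                                     elect: () => Unit) extends IZkDataListener {

      def register(path: String): Unit = zkClient.subscribeDataChanges(path, this)

      override def handleDataChange(dataPath: String, data: AnyRef): Unit = {
        // Another broker wrote the path; nothing to do in this sketch.
      }

      override def handleDataDeleted(dataPath: String): Unit = {
        // The key point: on deletion the old leader must resign and re-enter the
        // election instead of assuming it still holds leadership with a stale watch.
        if (amILeader()) resign()
        elect()
      }
    }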



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2437) Controller did not handle zk node deletion correctly.

2015-09-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2437:

Summary: Controller did not handle zk node deletion correctly.  (was: 
Controller lost /controller zookeeper watcher.)

> Controller did not handle zk node deletion correctly.
> -
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We see this issue occasionally. The symptom is that when the /controller path 
> gets deleted, the old controller does not resign, so we end up having more 
> than one controller in the cluster (although requests from the controller with 
> the old epoch will not be accepted). After checking the ZooKeeper watchers 
> using wchp, it looks like the ZooKeeper session that created the /controller 
> path does not have a watcher on /controller. That causes the old controller 
> not to resign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated Sept. 3, 2015, 12:36 a.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Now addressing Ismael's comments. Case sensitive checks.


Addressing Jun's comments.


Merge remote-tracking branch 'origin/trunk' into az

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala
core/src/main/scala/kafka/server/KafkaServer.scala

Deleting KafkaConfigDefTest


Addressing comments from Ismael.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az


Consolidating KafkaPrincipal.


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az

Conflicts:

clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java

clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
core/src/main/scala/kafka/server/KafkaApis.scala

Making Acl structure take only one principal, operation and host.


Merge remote-tracking branch 'origin/trunk' into az


Reverting unintended new line change.


Addressing comments from Jun.


Merge remote-tracking branch 'origin/trunk' into az


Various tweaks that make the code more readable

Conflicts:
core/src/main/scala/kafka/server/KafkaApis.scala

Fixing compilation errors after cherry-picking.


Removing FIXME.


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
 35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
641afa1b2474150fa1002e9fedca13ff55175a7e 
  
clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java 
b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
  
clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
 PRE-CREATION 
  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
d547a01cf7098f216a3775e1e1901c5794e1b24c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
756cf775cadbcaf01df7f691d8d01d9ff75db291 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
3da666f73227fc7ef7093e3790546344065f6825 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Parth Brahmbhatt


> On Sept. 2, 2015, 4:36 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/common/ErrorMapping.scala, lines 55-57
> > 
> >
> > Could you add the missing error codes 23-28 in the comment?

Copied the error constants from Errors.java and added them in the comments. I am 
not sure if you already have a jira to actually fix this. If you do, can you 
assign it to me? If you don't have a jira yet, can I create one and assign it to 
myself?


> On Sept. 2, 2015, 4:36 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/KafkaApis.scala, line 680
> > 
> >
> > Unused val?

It is an unused call, given that we agreed not to check authorization for 
offset topic creation.


> On Sept. 2, 2015, 4:36 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/server/KafkaServer.scala, line 187
> > 
> >
> > space after if

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97466
---


On Sept. 3, 2015, 12:36 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 3, 2015, 12:36 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Reverting unintended new line change.
> 
> 
> Addressing comments from Jun.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Various tweaks that make the code more readable
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Fixing compilation errors after cherry-picking.
> 
> 
> Removing FIXME.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 641afa1b2474150fa1002e9fedca13ff55175a7e 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> d547a01cf7098f216a3775e1e1901c5794e1b24c 
>   

Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Parth Brahmbhatt


> On Sept. 2, 2015, 1:11 p.m., Ismael Juma wrote:
> > Hi Parth, I finally had a bit of time to look into this in more detail and 
> > I pushed a commit with my suggestions to make the `KafkaApis` code a bit 
> > more readable:
> > 
> > https://github.com/ijuma/kafka/commit/7737a9feb0c6d8cb4be3fe22992f8dc10b657154
> > 
> > As you said, it's difficult to do much better given the lack of common 
> > interfaces.
> > 
> > Please incorporate the changes if you agree. Also, note that I added a 
> > FIXME in one case where we don't seem to use the data produced by the 
> > `partition` call.

Cherry-picked. The FIXME does not need any change: if you look at 
https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L195
 it uses the unauthorized partitions in constructing the response, whereas the 
authorized part gets used at 
https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L214
 and 
https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L255
 and the actual callback finally gets called at 
https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L273.
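
For readers following along, the pattern under discussion is roughly the
following; this is a standalone sketch with made-up names, not the KafkaApis
code:

    object AuthorizationSplitSketch {
      case class PartitionResult(partition: Int, error: Option[String])

      // Split the requested partitions by authorization, turn the unauthorized ones
      // directly into error entries, process only the authorized ones, then answer
      // everything through a single response callback.
      def handleRequest(requested: Seq[Int], authorized: Int => Boolean)
                       (sendResponseCallback: Seq[PartitionResult] => Unit): Unit = {
        val (allowed, denied) = requested.partition(authorized)
        val deniedResults  = denied.map(p => PartitionResult(p, Some("NotAuthorized")))
        val allowedResults = allowed.map(p => PartitionResult(p, None)) // real processing would go here
        sendResponseCallback(allowedResults ++ deniedResults)
      }
    }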


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97432
---


On Sept. 3, 2015, 12:36 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 3, 2015, 12:36 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Reverting unintended new line change.
> 
> 
> Addressing comments from Jun.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Various tweaks that make the code more readable
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Fixing compilation errors after cherry-picking.
> 
> 
> Removing FIXME.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 641afa1b2474150fa1002e9fedca13ff55175a7e 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 

[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2210:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~parth.brahmbhatt], thanks a lot for the patch. +1 and committed to trunk.

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch, KAFKA-2210_2015-09-02_14:50:29.patch, 
> KAFKA-2210_2015-09-02_17:32:06.patch, KAFKA-2210_2015-09-02_17:36:47.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2437) Controller does not handle zk node deletion correctly.

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728351#comment-14728351
 ] 

ASF GitHub Bot commented on KAFKA-2437:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/189


> Controller does not handle zk node deletion correctly.
> --
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We see this issue occasionally. The symptom is that when the /controller path 
> gets deleted, the old controller does not resign, so we end up having more 
> than one controller in the cluster (although requests from the controller with 
> the old epoch will not be accepted). After checking the ZooKeeper watchers 
> using wchp, it looks like the ZooKeeper session that created the /controller 
> path does not have a watcher on /controller. That causes the old controller 
> not to resign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2437: Fix ZookeeperLeaderElector to hand...

2015-09-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/189


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2437) Controller does not handle zk node deletion correctly.

2015-09-02 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy resolved KAFKA-2437.
---
   Resolution: Fixed
Fix Version/s: 0.8.3

> Controller does not handle zk node deletion correctly.
> --
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
>
> We see this issue occasionally. The symptom is that when the /controller path 
> gets deleted, the old controller does not resign, so we end up having more 
> than one controller in the cluster (although requests from the controller with 
> the old epoch will not be accepted). After checking the ZooKeeper watchers 
> using wchp, it looks like the ZooKeeper session that created the /controller 
> path does not have a watcher on /controller. That causes the old controller 
> not to resign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1937) Mirror maker needs to clear the unacked offset map after rebalance.

2015-09-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1937:

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Not applicable any more.

> Mirror maker needs to clear the unacked offset map after rebalance.
> ---
>
> Key: KAFKA-1937
> URL: https://issues.apache.org/jira/browse/KAFKA-1937
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1937.patch
>
>
> The offset map needs to be cleared during rebalance to avoid committing 
> offsets for partitions that are no longer owned by the consumer.
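
A small sketch of what clearing on rebalance looks like; all names below are
hypothetical and for illustration only, not the mirror maker code:

    import java.util.concurrent.ConcurrentHashMap

    object UnackedOffsetsSketch {
      // Offsets that were produced but not yet acked, keyed by (topic, partition).
      private val unacked = new ConcurrentHashMap[(String, Int), Long]()

      def record(topic: String, partition: Int, offset: Long): Unit =
        unacked.put((topic, partition), offset)

      // Wipe the map when a rebalance starts so we never commit an offset for a
      // partition this consumer no longer owns.
      def onRebalanceStart(): Unit =
        unacked.clear()
    }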



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97582
---

Ship it!


Thanks Parth, LGTM.

- Ismael Juma


On Sept. 3, 2015, 12:36 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 3, 2015, 12:36 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Reverting unintended new line change.
> 
> 
> Addressing comments from Jun.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Various tweaks that make the code more readable
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Fixing compilation errors after cherry-picking.
> 
> 
> Removing FIXME.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 641afa1b2474150fa1002e9fedca13ff55175a7e 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> d547a01cf7098f216a3775e1e1901c5794e1b24c 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 756cf775cadbcaf01df7f691d8d01d9ff75db291 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 3da666f73227fc7ef7093e3790546344065f6825 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Commented] (KAFKA-2437) Controller does not handle zk node deletion correctly.

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728309#comment-14728309
 ] 

ASF GitHub Bot commented on KAFKA-2437:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/189

KAFKA-2437: Fix ZookeeperLeaderElector to handle node deletion correctly.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2437

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #189


commit 11d9fd6595932553e138a3c3094322ebd9170d6c
Author: Jiangjie Qin 
Date:   2015-09-03T00:41:26Z

KAFKA-2437: Fix ZookeeperLeaderElector to handle node deletion correctly.




> Controller does not handle zk node deletion correctly.
> --
>
> Key: KAFKA-2437
> URL: https://issues.apache.org/jira/browse/KAFKA-2437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We see this issue occasionally. The symptom is that when the /controller path 
> gets deleted, the old controller does not resign, so we end up having more 
> than one controller in the cluster (although requests from the controller with 
> the old epoch will not be accepted). After checking the ZooKeeper watchers 
> using wchp, it looks like the ZooKeeper session that created the /controller 
> path does not have a watcher on /controller. That causes the old controller 
> not to resign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728299#comment-14728299
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch, KAFKA-2210_2015-09-02_14:50:29.patch, 
> KAFKA-2210_2015-09-02_17:32:06.patch, KAFKA-2210_2015-09-02_17:36:47.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Parth Brahmbhatt


> On Sept. 2, 2015, 11:13 p.m., Jun Rao wrote:
> > Hmm, did you attach the right patch? It doesn't seem to apply to trunk and 
> > my last round of minor comments didn't seem to get addressed. Also, one 
> > more suggestion below.

Can you take a look now? I am not sure why the patch looked weird the first 
time around. I have cherry-picked Ismael's change, but I had to fix some 
compilation errors, so you will see a couple more commits on top of his commit.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97554
---


On Sept. 3, 2015, 12:36 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 3, 2015, 12:36 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Reverting unintended new line change.
> 
> 
> Addressing comments from Jun.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Various tweaks that make the code more readable
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Fixing compilation errors after cherry-picking.
> 
> 
> Removing FIXME.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 641afa1b2474150fa1002e9fedca13ff55175a7e 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> a3a8df0545c3f9390e0e04b8d2fab0134f5fd019 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> d547a01cf7098f216a3775e1e1901c5794e1b24c 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 756cf775cadbcaf01df7f691d8d01d9ff75db291 
>   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 3da666f73227fc7ef7093e3790546344065f6825 
> 
> Diff: https://reviews.apache.org/r/34492/diff/
> 
> 
> Testing
> 

[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-09-02 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-09-02_17:36:47.patch

> KafkaAuthorizer: Add all public entities, config changes and changes to 
> KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
> --
>
> Key: KAFKA-2210
> URL: https://issues.apache.org/jira/browse/KAFKA-2210
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
> KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
> KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
> KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch, 
> KAFKA-2210_2015-08-10_18:31:54.patch, KAFKA-2210_2015-08-20_11:27:18.patch, 
> KAFKA-2210_2015-08-25_17:59:22.patch, KAFKA-2210_2015-08-26_14:29:02.patch, 
> KAFKA-2210_2015-09-01_15:36:02.patch, KAFKA-2210_2015-09-02_14:50:29.patch, 
> KAFKA-2210_2015-09-02_17:32:06.patch, KAFKA-2210_2015-09-02_17:36:47.patch
>
>
> This is the first subtask for KAFKA-1688. As part of this jira we intend to 
> agree on all the public entities, configs and changes to existing Kafka 
> classes to allow a pluggable authorizer implementation.
> Please see KIP-11 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
>  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-09-02 Thread Ismael Juma


> On Sept. 2, 2015, 1:11 p.m., Ismael Juma wrote:
> > Hi Parth, I finally had a bit of time to look into this in more detail and 
> > I pushed a commit with my suggestions to make the `KafkaApis` code a bit 
> > more readable:
> > 
> > https://github.com/ijuma/kafka/commit/7737a9feb0c6d8cb4be3fe22992f8dc10b657154
> > 
> > As you said, it's difficult to do much better given the lack of common 
> > interfaces.
> > 
> > Please incorporate the changes if you agree. Also, note that I added a 
> > FIXME in one case where we don't seem to use the data produced by the 
> > `partition` call.
> 
> Parth Brahmbhatt wrote:
> Cherry-picked. The FIXME does not need any change: if you look at 
> https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L195
>  it uses the unauthorized partitions in constructing the response, whereas the 
> authorized part gets used at 
> https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L214
>  and 
> https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L255
>  and the actual callback finally gets called at 
> https://github.com/Parth-Brahmbhatt/kafka/blob/az/core/src/main/scala/kafka/server/KafkaApis.scala#L273.

Thanks for cherry-picking the changes. Regarding the FIXME, indeed my bad 
(tricked by IntelliJ's unused field syntax highlighting). I realised my mistake 
once you posted the updated patch a few minutes ago. :)


- Ismael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review97432
---


On Sept. 3, 2015, 12:36 a.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34492/
> ---
> 
> (Updated Sept. 3, 2015, 12:36 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2210
> https://issues.apache.org/jira/browse/KAFKA-2210
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing review comments from Jun.
> 
> 
> Adding CREATE check for offset topic only if the topic does not exist already.
> 
> 
> Addressing some more comments.
> 
> 
> Removing acl.json file
> 
> 
> Moving PermissionType to trait instead of enum. Following the convention for 
> defining constants.
> 
> 
> Adding authorizer.config.path back.
> 
> 
> Addressing more comments from Jun.
> 
> 
> Addressing more comments.
> 
> 
> Now addressing Ismael's comments. Case sensitive checks.
> 
> 
> Addressing Jun's comments.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
>   core/src/main/scala/kafka/server/KafkaServer.scala
> 
> Deleting KafkaConfigDefTest
> 
> 
> Addressing comments from Ismael.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> 
> Consolidating KafkaPrincipal.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into az
> 
> Conflicts:
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Making Acl structure take only one principal, operation and host.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Reverting unintended new line change.
> 
> 
> Addressing comments from Jun.
> 
> 
> Merge remote-tracking branch 'origin/trunk' into az
> 
> 
> Various tweaks that make the code more readable
> 
> Conflicts:
>   core/src/main/scala/kafka/server/KafkaApis.scala
> 
> Fixing compilation errors after cherry-picking.
> 
> 
> Removing FIXME.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
>  35d41685dd178bbdf77b2476e03ad51fc4adcbb6 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 641afa1b2474150fa1002e9fedca13ff55175a7e 
>   
> clients/src/main/java/org/apache/kafka/common/security/auth/KafkaPrincipal.java
>  b640ea0f4bdb694fc5524ef594aa125cc1ba4cf3 
>   
> clients/src/test/java/org/apache/kafka/common/security/auth/KafkaPrincipalTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/api/OffsetRequest.scala 
> f418868046f7c99aefdccd9956541a0cb72b1500 
>   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> c75c68589681b2c9d6eba2b440ce5e58cddf6370 
>   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
>   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
>   

[jira] [Commented] (KAFKA-1543) Changing replication factor

2015-09-02 Thread Alexander Pakulov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728340#comment-14728340
 ] 

Alexander Pakulov commented on KAFKA-1543:
--

I didn't check for this scenario, I'll keep you posted.

> Changing replication factor
> ---
>
> Key: KAFKA-1543
> URL: https://issues.apache.org/jira/browse/KAFKA-1543
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alexey Ozeritskiy
>Assignee: Alexander Pakulov
> Attachments: can-change-replication.patch
>
>
> It is difficult to change replication factor by manual editing json config.
> I propose to add a key to kafka-reassign-partitions.sh command to 
> automatically create json config.
> Example of usage
> {code}
> kafka-reassign-partitions.sh --zookeeper zk --replicas new-replication-factor 
> --topics-to-move-json-file topics-file --broker-list 1,2,3,4 --generate > 
> output
> {code}
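
A minimal sketch of what the proposed --replicas option could do internally: spread each partition's replicas round-robin over the given broker list and print the usual reassignment JSON. The assignment strategy shown here is an assumption for illustration; the JSON shape matches what kafka-reassign-partitions.sh already consumes.

{code}
object ReplicationFactorSketch {
  // Sketch only: round-robin assignment of `replicas` brokers per partition.
  def assignment(partitions: Int, replicas: Int, brokers: Seq[Int]): Map[Int, Seq[Int]] =
    (0 until partitions).map { p =>
      p -> (0 until replicas).map(r => brokers((p + r) % brokers.size))
    }.toMap

  def toJson(topic: String, assignment: Map[Int, Seq[Int]]): String = {
    val entries = assignment.toSeq.sortBy(_._1).map { case (p, rs) =>
      s"""{"topic":"$topic","partition":$p,"replicas":[${rs.mkString(",")}]}"""
    }
    s"""{"version":1,"partitions":[${entries.mkString(",")}]}"""
  }

  def main(args: Array[String]): Unit =
    println(toJson("test", assignment(partitions = 4, replicas = 3, brokers = Seq(1, 2, 3, 4))))
}
{code}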



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2505) Add trace/debug description modes to the new Request/Response API

2015-09-02 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2505:
-

 Summary: Add trace/debug description modes to the new 
Request/Response API
 Key: KAFKA-2505
 URL: https://issues.apache.org/jira/browse/KAFKA-2505
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish K Singh
Assignee: Ashish K Singh


It was pointed out on KAFKA-2461 that trace/debug description modes are 
required to the new Request/Response API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #613

2015-09-02 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2397) leave group request

2015-09-02 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728410#comment-14728410
 ] 

Ewen Cheslack-Postava commented on KAFKA-2397:
--

[~hachikuji] My primary objection to that plan is that it might lead to us 
maintaining more complicated code if we support leave group via two mechanisms 
instead of one, and it's also more stuff the user has to understand.

On the other hand, I can see a case for supporting both. An explicit leave group 
via a message is great for forcing the coordinator to trigger a rebalance ASAP. 
An implicit leave group via TCP disconnect is a nice way to allow fast reconnect 
after a network hiccup without affecting membership or requiring another round, 
while still letting the broker boot the consumer from the group without waiting 
for the full session interval (and the client can also take this into account, 
stopping consumption after a heartbeat interval during which it cannot connect 
rather than waiting for a full session timeout). But since I'm not really clear 
on when we'd see such network hiccups that wouldn't be masked by TCP anyway, I'm 
not sure this is worth the more complicated model.

It does sound like it's probably complicated -- or at least a lot of code 
changes -- to make the lower level connection management and higher level 
protocol stuff coordinate. Since this issue actually slows things down for me 
on a daily basis now, I think the explicit leave group would make sense to get 
committed.

> leave group request
> ---
>
> Key: KAFKA-2397
> URL: https://issues.apache.org/jira/browse/KAFKA-2397
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
> Fix For: 0.8.3
>
>
> Let's say every consumer in a group has session timeout s. Currently, if a 
> consumer leaves the group, the worst case time to stabilize the group is 2s 
> (s to detect the consumer failure + s for the rebalance window). If a 
> consumer instead can declare they are leaving the group, the worst case time 
> to stabilize the group would just be the s associated with the rebalance 
> window.
> This is a low priority optimization!
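
To make the arithmetic in the description concrete, here is a tiny worked example assuming a 30 second session timeout (the number is only an illustration):

{code}
object LeaveGroupTiming {
  def main(args: Array[String]): Unit = {
    val sessionTimeoutMs = 30000 // illustrative value of s
    // Today: up to s to detect the missed heartbeats plus up to s for the rebalance window.
    val worstCaseToday = 2 * sessionTimeoutMs
    // With an explicit leave group request: only the rebalance window remains.
    val worstCaseWithLeaveGroup = sessionTimeoutMs
    println(s"without leave group: $worstCaseToday ms, with leave group: $worstCaseWithLeaveGroup ms")
  }
}
{code}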



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Kafka 1686. Implement SASL/Kerberos.

2015-09-02 Thread harshach
Github user harshach closed the pull request at:

https://github.com/apache/kafka/pull/190


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728472#comment-14728472
 ] 

ASF GitHub Bot commented on KAFKA-1686:
---

GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/191

KAFKA-1686: Implement SASL/Kerberos.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka KAFKA-1686-V1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #191


commit 82737e5bb71f67271d90c059dede74935f8a5e56
Author: Sriharsha Chintalapani 
Date:   2015-08-31T23:07:15Z

KAFKA-1686. Implement SASL/Kerberos.

commit a3417d7f2c558c0082799b117a3c62c706ad519d
Author: Sriharsha Chintalapani 
Date:   2015-09-03T03:31:34Z

KAFKA-1686. Implement SASL/Kerberos.

commit 8f718ce6b03a9c86712dc8f960af2b739b8ed510
Author: Sriharsha Chintalapani 
Date:   2015-09-03T04:10:40Z

KAFKA-1686. Implement SASL/Kerberos.




> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.
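
For context, the challenge/response cycle the description refers to is the standard javax.security.sasl loop; a rough server-side sketch is below. The transport hooks are placeholders only, standing in for the proposed SASLRequest/SASLResponse messages.

{code}
import javax.security.sasl.SaslServer

object SaslHandshakeSketch {
  // Placeholder transport hooks -- in Kafka these bytes would travel in the
  // proposed SASLRequest/SASLResponse messages.
  def receiveTokenFromClient(): Array[Byte] = Array.emptyByteArray
  def sendChallengeToClient(challenge: Array[Byte]): Unit = ()

  // Drive the SaslServer until the mechanism declares the exchange complete.
  def authenticate(saslServer: SaslServer): Unit = {
    while (!saslServer.isComplete) {
      val response = receiveTokenFromClient()
      val challenge = saslServer.evaluateResponse(response) // throws SaslException on failure
      if (challenge != null) sendChallengeToClient(challenge)
    }
    // After completion, wrap()/unwrap() can provide integrity or confidentiality,
    // mirroring the SSLEngine-style interface mentioned above.
  }
}
{code}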



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1686: Implement SASL/Kerberos.

2015-09-02 Thread harshach
GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/191

KAFKA-1686: Implement SASL/Kerberos.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka KAFKA-1686-V1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #191


commit 82737e5bb71f67271d90c059dede74935f8a5e56
Author: Sriharsha Chintalapani 
Date:   2015-08-31T23:07:15Z

KAFKA-1686. Implement SASL/Kerberos.

commit a3417d7f2c558c0082799b117a3c62c706ad519d
Author: Sriharsha Chintalapani 
Date:   2015-09-03T03:31:34Z

KAFKA-1686. Implement SASL/Kerberos.

commit 8f718ce6b03a9c86712dc8f960af2b739b8ed510
Author: Sriharsha Chintalapani 
Date:   2015-09-03T04:10:40Z

KAFKA-1686. Implement SASL/Kerberos.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2506) Refactor KafkaConsumer.partitionsFor(topic) to get metadata of topic without modifying state of Metadata

2015-09-02 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2506:
-

 Summary: Refactor KafkaConsumer.partitionsFor(topic) to get 
metadata of topic without modifying state of Metadata
 Key: KAFKA-2506
 URL: https://issues.apache.org/jira/browse/KAFKA-2506
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish K Singh
Assignee: Ashish K Singh


While working on KAFKA-1893, we realized that it will be good to refactor 
KafkaConsumer.partitionsFor(topic) to get metadata of topic without modifying 
state of Metadata. It can follow an approach similar to listTopics().
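
For reference, the two calls being compared, from the user's point of view (the refactor itself is about the internal Metadata handling, not this surface):

{code}
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer

object MetadataLookupSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
    props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
    val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
    try {
      val partitions = consumer.partitionsFor("test") // per-topic metadata; the call under discussion
      val allTopics  = consumer.listTopics()          // cluster-wide metadata; the model to follow
      println(s"test has ${partitions.size} partitions, cluster has ${allTopics.size} topics")
    } finally consumer.close()
  }
}
{code}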



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2501) getting one test failed when building apache kafka

2015-09-02 Thread naresh gundu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

naresh gundu updated KAFKA-2501:

   Environment: Ubuntu 15.04 ppc64 le  (was: Ubuntu 15.04)
Remaining Estimate: 48h  (was: 1m)
 Original Estimate: 48h  (was: 1m)

> getting one test failed when building apache kafka
> --
>
> Key: KAFKA-2501
> URL: https://issues.apache.org/jira/browse/KAFKA-2501
> Project: Kafka
>  Issue Type: Test
>  Components: build
>Affects Versions: 0.8.2.0
> Environment: Ubuntu 15.04 ppc64 le
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I have run steps from github https://github.com/apache/kafka
> cd source-code
> gradle
> ./gradlew jar
> ./gradlew srcJar
> ./gradlew test 
> error :
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
> org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:73)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:77)
> at 
> org.apache.kafka.common.record.MemoryRecords.(MemoryRecords.java:43)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
> at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
> ... 6 more
> Caused by:
> java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-u
> nknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: cannot open 
> shared object file: No such file or directory (Possible ca
> use: endianness mismatch)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
> at java.lang.Runtime.load0(Runtime.java:795)
> at java.lang.System.load(System.java:1062)
> at 
> org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
> at org.xerial.snappy.Snappy.(Snappy.java:47)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:90)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:83)
> ... 11 more
> 267 tests completed, 1 failed
> :clients:test FAILED
> please help me fix the failure test case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2461) request logger no longer logs extra information in debug mode

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728509#comment-14728509
 ] 

ASF GitHub Bot commented on KAFKA-2461:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/169


> request logger no longer logs extra information in debug mode
> -
>
> Key: KAFKA-2461
> URL: https://issues.apache.org/jira/browse/KAFKA-2461
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Currently request logging calls are identical for trace and debug:
> {code}
> if(requestLogger.isTraceEnabled)
> requestLogger.trace("Completed request:%s from connection 
> %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
> .format(requestDesc, connectionId, totalTime, 
> requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, 
> responseSendTime))
>   else if(requestLogger.isDebugEnabled)
> requestLogger.debug("Completed request:%s from connection 
> %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>   .format(requestDesc, connectionId, totalTime, requestQueueTime, 
> apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> {code}
> I think in the past (3 refactoring steps ago), we used to print more 
> information about specific topics and partitions in debug mode.
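
A minimal sketch of the direction being suggested: keep the one-line summary at DEBUG and reserve the per-topic/partition expansion for TRACE. The requestDesc(details) helper is an assumption here; the actual change is in the linked pull request.

{code}
import org.slf4j.LoggerFactory

object RequestLoggingSketch {
  private val requestLogger = LoggerFactory.getLogger("kafka.request.logger")

  // Sketch only: requestDesc(details) is an assumed helper that expands
  // topic/partition level information when details = true.
  def logCompletedRequest(requestDesc: Boolean => String, connectionId: String, totalTime: Long): Unit = {
    val template = "Completed request:%s from connection %s;totalTime:%d"
    if (requestLogger.isTraceEnabled)
      requestLogger.trace(template.format(requestDesc(true), connectionId, totalTime))  // full detail
    else if (requestLogger.isDebugEnabled)
      requestLogger.debug(template.format(requestDesc(false), connectionId, totalTime)) // summary only
  }
}
{code}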



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2461) request logger no longer logs extra information in debug mode

2015-09-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2461:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 169
[https://github.com/apache/kafka/pull/169]

> request logger no longer logs extra information in debug mode
> -
>
> Key: KAFKA-2461
> URL: https://issues.apache.org/jira/browse/KAFKA-2461
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
>Priority: Blocker
> Fix For: 0.8.3
>
>
> Currently request logging calls are identical for trace and debug:
> {code}
> if(requestLogger.isTraceEnabled)
> requestLogger.trace("Completed request:%s from connection 
> %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
> .format(requestDesc, connectionId, totalTime, 
> requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, 
> responseSendTime))
>   else if(requestLogger.isDebugEnabled)
> requestLogger.debug("Completed request:%s from connection 
> %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
>   .format(requestDesc, connectionId, totalTime, requestQueueTime, 
> apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
> {code}
> I think in the past (3 refactoring steps ago), we used to print more 
> information about specific topics and partitions in debug mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2461: request logger no longer logs extr...

2015-09-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/169


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2505) Add trace/debug description modes to the new Request/Response API

2015-09-02 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2505:
--
Description: It was pointed out on KAFKA-2461 that trace/debug description 
modes are required for the new Request/Response API.  (was: It was pointed out 
on KAFKA-2461 that trace/debug description modes are required to the new 
Request/Response API.)

> Add trace/debug description modes to the new Request/Response API
> -
>
> Key: KAFKA-2505
> URL: https://issues.apache.org/jira/browse/KAFKA-2505
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> It was pointed out on KAFKA-2461 that trace/debug description modes are 
> required for the new Request/Response API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Kafka 1686. Implement SASL/Kerberos.

2015-09-02 Thread harshach
GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/190

Kafka 1686. Implement SASL/Kerberos.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka KAFKA-1686

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #190


commit 85a36d1c669e9c397083927caa22f8722989ccfe
Author: Sriharsha Chintalapani 
Date:   2015-08-31T23:07:15Z

KAFKA-1686. Implement SASL/Kerberos.

commit 5258682b0ca0cb765fd8bab33f57001ce5e72bec
Author: Sriharsha Chintalapani 
Date:   2015-09-03T03:31:34Z

KAFKA-1686. Implement SASL/Kerberos.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2501) getting one test failed when building apache kafka

2015-09-02 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728519#comment-14728519
 ] 

Ewen Cheslack-Postava commented on KAFKA-2501:
--

Can you find the snappy-java jar on your classpath and verify it actually 
contains the necessary file? It looks like the version of snappy-java from 
Central does contain a ppc64le library:

org/xerial/snappy/native/Linux/ppc64le/libsnappyjava.so

and the /tmp filename is probably required since the library has to be 
extracted to the filesystem before it's loaded. Perhaps something else cleaned 
up the file before it was loaded?

Also, is this the only test that's failing, and is it failing repeatedly? There 
should be more tests that require loading that same jar.
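
One quick way to check (a throwaway sketch, assuming snappy-java is on the test classpath) is to look the resource up directly:

{code}
object SnappyNativeCheck {
  def main(args: Array[String]): Unit = {
    // The resource path below is the one bundled in the snappy-java jar for ppc64le.
    val path = "/org/xerial/snappy/native/Linux/ppc64le/libsnappyjava.so"
    val stream = classOf[org.xerial.snappy.Snappy].getResourceAsStream(path)
    println(if (stream != null) s"$path found on classpath" else s"$path missing from classpath")
    if (stream != null) stream.close()
  }
}
{code}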

> getting one test failed when building apache kafka
> --
>
> Key: KAFKA-2501
> URL: https://issues.apache.org/jira/browse/KAFKA-2501
> Project: Kafka
>  Issue Type: Test
>  Components: build
>Affects Versions: 0.8.2.0
> Environment: Ubuntu 15.04 ppc64 le
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I have run steps from github https://github.com/apache/kafka
> cd source-code
> gradle
> ./gradlew jar
> ./gradlew srcJar
> ./gradlew test 
> error :
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
> org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:73)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:77)
> at 
> org.apache.kafka.common.record.MemoryRecords.(MemoryRecords.java:43)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
> at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
> ... 6 more
> Caused by:
> java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-u
> nknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: cannot open 
> shared object file: No such file or directory (Possible ca
> use: endianness mismatch)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
> at java.lang.Runtime.load0(Runtime.java:795)
> at java.lang.System.load(System.java:1062)
> at 
> org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
> at org.xerial.snappy.Snappy.(Snappy.java:47)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:90)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:83)
> ... 11 more
> 267 tests completed, 1 failed
> :clients:test FAILED
> please help me fix the failure test case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2501) getting one test failed when building apache kafka

2015-09-02 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728491#comment-14728491
 ] 

Ewen Cheslack-Postava commented on KAFKA-2501:
--

This looks like there's something wrong with the Snappy dependency, not with 
Kafka itself. What platform are you running this on? It also looks like it's 
trying to load the snappy library from a pretty unusual location -- normally 
libs shouldn't be under /tmp.

> getting one test failed when building apache kafka
> --
>
> Key: KAFKA-2501
> URL: https://issues.apache.org/jira/browse/KAFKA-2501
> Project: Kafka
>  Issue Type: Test
>  Components: build
>Affects Versions: 0.8.2.0
> Environment: Ubuntu 15.04
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> I have run steps from github https://github.com/apache/kafka
> cd source-code
> gradle
> ./gradlew jar
> ./gradlew srcJar
> ./gradlew test 
> error :
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
> org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:73)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:77)
> at 
> org.apache.kafka.common.record.MemoryRecords.(MemoryRecords.java:43)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
> at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
> ... 6 more
> Caused by:
> java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-u
> nknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: cannot open 
> shared object file: No such file or directory (Possible ca
> use: endianness mismatch)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
> at java.lang.Runtime.load0(Runtime.java:795)
> at java.lang.System.load(System.java:1062)
> at 
> org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
> at org.xerial.snappy.Snappy.(Snappy.java:47)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:90)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:83)
> ... 11 more
> 267 tests completed, 1 failed
> :clients:test FAILED
> please help me fix the failure test case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2501) getting one test failed when building apache kafka

2015-09-02 Thread naresh gundu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728500#comment-14728500
 ] 

naresh gundu commented on KAFKA-2501:
-

I am working on rhel7.1 and Ubuntu 15.04 ppc64 le.

> getting one test failed when building apache kafka
> --
>
> Key: KAFKA-2501
> URL: https://issues.apache.org/jira/browse/KAFKA-2501
> Project: Kafka
>  Issue Type: Test
>  Components: build
>Affects Versions: 0.8.2.0
> Environment: Ubuntu 15.04
>Reporter: naresh gundu
>Priority: Critical
> Fix For: 0.8.2.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> I have run steps from github https://github.com/apache/kafka
> cd source-code
> gradle
> ./gradlew jar
> ./gradlew srcJar
> ./gradlew test 
> error :
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
> org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:73)
> at 
> org.apache.kafka.common.record.Compressor.(Compressor.java:77)
> at 
> org.apache.kafka.common.record.MemoryRecords.(MemoryRecords.java:43)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
> at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
> at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
> ... 6 more
> Caused by:
> java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-u
> nknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: cannot open 
> shared object file: No such file or directory (Possible ca
> use: endianness mismatch)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method)
> at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
> at java.lang.Runtime.load0(Runtime.java:795)
> at java.lang.System.load(System.java:1062)
> at 
> org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
> at org.xerial.snappy.Snappy.(Snappy.java:47)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:90)
> at 
> org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:83)
> ... 11 more
> 267 tests completed, 1 failed
> :clients:test FAILED
> please help me fix the failure test case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2506) Refactor KafkaConsumer.partitionsFor(topic) to get metadata of topic without modifying state of Metadata

2015-09-02 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728522#comment-14728522
 ] 

Ewen Cheslack-Postava commented on KAFKA-2506:
--

[~singhashish] While reviewing https://issues.apache.org/jira/browse/KAFKA-2464 
I noticed that it did just that. Do you want to take a look at the patch to see 
if it covers the change you wanted and then close this if it does?

> Refactor KafkaConsumer.partitionsFor(topic) to get metadata of topic without 
> modifying state of Metadata
> 
>
> Key: KAFKA-2506
> URL: https://issues.apache.org/jira/browse/KAFKA-2506
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> While working on KAFKA-1893, we realized that it will be good to refactor 
> KafkaConsumer.partitionsFor(topic) to get metadata of topic without modifying 
> state of Metadata. It can follow an approach similar to listTopics().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-09-02 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728503#comment-14728503
 ] 

Ashish K Singh commented on KAFKA-2447:
---

[~gwenshap] I think it will be nice to have this in 0.8.3. Would you agree?

> Add capability to KafkaLog4jAppender to be able to use SSL
> --
>
> Key: KAFKA-2447
> URL: https://issues.apache.org/jira/browse/KAFKA-2447
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
> able to use SSL.
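
Roughly, the appender would just need to pass the producer's SSL settings through. A sketch of the producer-side properties involved follows; the property names are the ones used by the new producer's SSL support, and whether the appender exposes them under these exact names is up to the patch.

{code}
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object SslProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9093")                           // placeholder SSL listener
    props.put("security.protocol", "SSL")
    props.put("ssl.truststore.location", "/path/to/client.truststore.jks")   // placeholder path
    props.put("ssl.truststore.password", "changeit")                         // placeholder password
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    try producer.send(new ProducerRecord[String, String]("log4j-events", "hello over SSL"))
    finally producer.close()
  }
}
{code}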



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-09-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728507#comment-14728507
 ] 

Gwen Shapira commented on KAFKA-2447:
-

Agree. But as nice-to-have, not a blocker on release, IMO.

I'm still horrified that people who care about security enough to want to 
encrypt Log4J are perfectly happy to put their keystore password in a log4j 
configuration file. WTF.
It does seem to be fairly standard though, so I guess we can do it :)

There are no unit tests here. Did you validate that this works?



> Add capability to KafkaLog4jAppender to be able to use SSL
> --
>
> Key: KAFKA-2447
> URL: https://issues.apache.org/jira/browse/KAFKA-2447
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
> able to use SSL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2411) remove usage of BlockingChannel in the broker

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727861#comment-14727861
 ] 

ASF GitHub Bot commented on KAFKA-2411:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/151


> remove usage of BlockingChannel in the broker
> -
>
> Key: KAFKA-2411
> URL: https://issues.apache.org/jira/browse/KAFKA-2411
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.8.3
>
>
> In KAFKA-1690, we are adding the SSL support at Selector. However, there are 
> still a few places where we use BlockingChannel for inter-broker 
> communication. We need to replace those usage with Selector/NetworkClient to 
> enable inter-broker communication over SSL. Specially, BlockingChannel is 
> currently used in the following places.
> 1. ControllerChannelManager: for the controller to propagate metadata to the 
> brokers.
> 2. KafkaServer: for the broker to send controlled shutdown request to the 
> controller.
> 3. -AbstractFetcherThread: for the follower to fetch data from the leader 
> (through SimpleConsumer)- moved to KAFKA-2440



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-09-02 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727964#comment-14727964
 ] 

Jason Gustafson commented on KAFKA-1929:


[~jholoman] Are you still working on this? If not, perhaps I can take it?

> Convert core kafka module to use the errors in org.apache.kafka.common.errors
> -
>
> Key: KAFKA-1929
> URL: https://issues.apache.org/jira/browse/KAFKA-1929
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Grant Henke
> Attachments: KAFKA-1929.patch
>
>
> With the introduction of the common package there are now a lot of errors 
> duplicated in both the common package and in the server. We should refactor 
> the server code (but not the scala clients) to switch over to the exceptions 
> in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2212) KafkaAuthorizer: Add CLI for Acl management.

2015-09-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727975#comment-14727975
 ] 

Jun Rao commented on KAFKA-2212:


One thing that I realized is that not all operations make sense on all 
resources. For example, it doesn't make sense to enable WRITE for a consumer 
group. We probably want to error out in the CLI if a user does that.

> KafkaAuthorizer: Add CLI for Acl management. 
> -
>
> Key: KAFKA-2212
> URL: https://issues.apache.org/jira/browse/KAFKA-2212
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2212.patch
>
>
> This is subtask-3 for Kafka-1688.
> Please see KIP-11 for details on CLI for Authorizer. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Updated testing readme

2015-09-02 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/187

Updated testing readme

Minor update to point to testing tutorial, and install the correct version 
of vagrant-hostmanager

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka minor-testing-readme-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/187.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #187


commit 15ac78f36ed284a24113d7babfd5b11c92ca4b60
Author: Geoff Anderson 
Date:   2015-09-02T18:29:09Z

Updated testing readme




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2397) leave group request

2015-09-02 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727809#comment-14727809
 ] 

Jason Gustafson commented on KAFKA-2397:


Bumping this issue. One nice thing about the current patch is its simplicity 
(should be similar with the un-heartbeat approach). I wonder if it would be a 
bad thing to support explicit group departure with this patch and implicit 
departure with TCP disconnect? Then we could let this patch go through and 
consider the TCP disconnect in another JIRA.

> leave group request
> ---
>
> Key: KAFKA-2397
> URL: https://issues.apache.org/jira/browse/KAFKA-2397
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
> Fix For: 0.8.3
>
>
> Let's say every consumer in a group has session timeout s. Currently, if a 
> consumer leaves the group, the worst case time to stabilize the group is 2s 
> (s to detect the consumer failure + s for the rebalance window). If a 
> consumer instead can declare they are leaving the group, the worst case time 
> to stabilize the group would just be the s associated with the rebalance 
> window.
> This is a low priority optimization!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2504) Stop logging WARN when client disconnects

2015-09-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2504:

Fix Version/s: 0.8.3

> Stop logging WARN when client disconnects
> -
>
> Key: KAFKA-2504
> URL: https://issues.apache.org/jira/browse/KAFKA-2504
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
> Fix For: 0.8.3
>
>
> I thought we fixed this one, but it came back. This can fill logs and is 
> fairly useless. It should be logged at DEBUG level:
> {code}
> [2015-09-02 12:05:59,743] WARN Error in I/O with connection to /10.191.0.36 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:111)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:296)
>   at kafka.network.Processor.run(SocketServer.scala:405)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #612

2015-09-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] TRIVIAL: Updated testing readme

[junrao] KAFKA-2411; remove usage of blocking channel

--
[...truncated 1667 lines...]
kafka.coordinator.ConsumerCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.integration.TopicMetadataTest > testTopicMetadataRequest PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.api.ConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.api.ConsumerTest > testSeek PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.api.ConsumerTest > testPositionAndCommit PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.api.ConsumerTest > testUnsubscribeTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.api.ConsumerTest > testListTopics PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.api.ConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED


[jira] [Created] (KAFKA-2504) Stop logging WARN when client disconnects

2015-09-02 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2504:
---

 Summary: Stop logging WARN when client disconnects
 Key: KAFKA-2504
 URL: https://issues.apache.org/jira/browse/KAFKA-2504
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


I thought we fixed this one, but it came back. This can fill logs and is 
fairly useless. It should be logged at DEBUG level:

{code}
[2015-09-02 12:05:59,743] WARN Error in I/O with connection to /10.191.0.36 
(org.apache.kafka.common.network.Selector)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at 
org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:111)
at 
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at 
org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at 
org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.poll(Selector.java:296)
at kafka.network.Processor.run(SocketServer.scala:405)
at java.lang.Thread.run(Thread.java:745)
{code}
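
The change being asked for is essentially a log-level decision in the network layer; a sketch of the intent only, not the actual Selector code:

{code}
import java.io.IOException
import org.slf4j.LoggerFactory

object DisconnectLoggingSketch {
  private val log = LoggerFactory.getLogger(classOf[org.apache.kafka.common.network.Selector])

  // Sketch of the intent: demote expected client disconnects to DEBUG,
  // keep WARN for I/O errors that are actually unexpected.
  def logIoError(remoteAddress: String, e: IOException): Unit =
    if (e.getMessage != null && e.getMessage.contains("Connection reset"))
      log.debug(s"Connection to $remoteAddress was closed by the peer", e)
    else
      log.warn(s"Unexpected I/O error on connection to $remoteAddress", e)
}
{code}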





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #611

2015-09-02 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2453) enable new consumer in EndToEndLatency

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727823#comment-14727823
 ] 

ASF GitHub Bot commented on KAFKA-2453:
---

Github user benstopford closed the pull request at:

https://github.com/apache/kafka/pull/158


> enable new consumer in EndToEndLatency
> --
>
> Key: KAFKA-2453
> URL: https://issues.apache.org/jira/browse/KAFKA-2453
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jun Rao
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.8.3
>
>
> We need to add an option to enable the new consumer in EndToEndLatency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2211) KafkaAuthorizer: Add simpleACLAuthorizer implementation.

2015-09-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727971#comment-14727971
 ] 

Jun Rao commented on KAFKA-2211:


A couple of other things:
1. Since the authorizer uses KafkaPrincipal for comparison, in SocketServer, 
when creating the session object, we should create a KafkaPrincipal instead of 
using KafkaChannel.principal(). Otherwise, it won't match the KafkaPrincipal 
used in the authorizer. The type in KafkaPrincipal should always be USER and the 
name should be KafkaChannel.principal().getName().

2. We should add some unit tests to verify that a client response gets the 
correct unauthorized error code from the broker if the needed ACL is not set. 
Ideally we want to cover all types of requests and have some mix of authorized 
and unauthorized topics. This can be done either in this jira or in KAFKA-2212.
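
Regarding point 1, a sketch of the wrapping being suggested. The Session constructor used here is an assumption (a stand-in for the real kafka.network.RequestChannel.Session); the point is only that the authorizer should always see a KafkaPrincipal of type USER built from the channel principal's name.

{code}
import java.security.Principal
import org.apache.kafka.common.security.auth.KafkaPrincipal

object SessionPrincipalSketch {
  // Stand-in for the real Session class -- signature assumed for illustration.
  case class Session(principal: KafkaPrincipal, host: String)

  // Wrap whatever principal the channel reports in a KafkaPrincipal of type USER,
  // so it compares equal to the principals the authorizer was configured with.
  def sessionFor(channelPrincipal: Principal, host: String): Session =
    Session(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, channelPrincipal.getName), host)
}
{code}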

> KafkaAuthorizer: Add simpleACLAuthorizer implementation.
> 
>
> Key: KAFKA-2211
> URL: https://issues.apache.org/jira/browse/KAFKA-2211
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-2211.patch
>
>
> Subtask-2 for Kafka-1688. 
> Please see KIP-11 to get details on out of box SimpleACLAuthorizer 
> implementation 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka New Consumer Performance Test ThroughPut Degradation

2015-09-02 Thread Poorna Chandra Tejashvi Reddy
Hi,

I have checked out the latest code out of https://github.com/apache/kafka based
on commit id e582447adb4708731aff74aa294e7ce2b30b0a41. Looks like the
performance test on the new-consumer is broken.

bin/kafka-consumer-perf-test.sh --zookeeper zkip:2181 --broker-list
brokerIp:9092 --topic test --messages 5 --new-consumer

The test does not return any response. Is this expected, and is there a
better way to test the new-consumer?


Thanks,

-Poorna


On Thu, Aug 27, 2015 at 2:25 PM, Poorna Chandra Tejashvi Reddy <
pctre...@gmail.com> wrote:

> Hi,
>
> We have built the latest kafka from https://github.com/apache/kafka based
> on this commit id 436b7ddc386eb688ba0f12836710f5e4bcaa06c8 .
> We ran the performance test on a 3 node kafka cluster. There is a huge
> throughput degradation using the new-consumer compared to the regular
> consumer. Below are the numbers that explain the same.
>
> bin/kafka-consumer-perf-test.sh --zookeeper zkIp:2181 --broker-list
> brokerIp:9092 --topics test --messages 500 : gives a throughput of 693 K
>
> bin/kafka-consumer-perf-test.sh --zookeeper zkIp:2181 --broker-list
> brokerIp:9092 --topics test --messages 500 --new-consumer : gives a
> throughput of  51k
>
> The whole setup is based on EC2, with Kafka brokers running on r3.2xlarge.
>
> Are you guys aware of this performance degradation, and do you have a JIRA
> for this that can be used to track the resolution?
>
>
> Thanks,
>
> -Poorna
>


[jira] [Commented] (KAFKA-2491) Update ErrorMapping with New Consumer Errors

2015-09-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727935#comment-14727935
 ] 

ASF GitHub Bot commented on KAFKA-2491:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/188

KAFKA-2491; update ErrorMapping with new consumer errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2491

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #188


commit b6728fd9343d598c37c583cf4c1e2f04c9099367
Author: Jason Gustafson 
Date:   2015-09-02T20:06:34Z

KAFKA-2491; update ErrorMapping with new consumer errors




> Update ErrorMapping with New Consumer Errors
> 
>
> Key: KAFKA-2491
> URL: https://issues.apache.org/jira/browse/KAFKA-2491
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Minor
> Fix For: 0.8.3
>
>
> Some errors used by the new consumer have not been added to ErrorMapping. 
> Until this class is removed, it should probably be kept consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2453: Enable new consumer in EndToEndLat...

2015-09-02 Thread benstopford
GitHub user benstopford reopened a pull request:

https://github.com/apache/kafka/pull/158

KAFKA-2453: Enable new consumer in EndToEndLatency 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2453b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #158


commit 71b11fac33a67ef9f0ac8ac09bcbb5305a56047f
Author: Ben Stopford 
Date:   2015-08-21T16:44:48Z

KAFKA-2453: migrated EndToEndLatencyTest to new consumer API. Added feature 
for configuring message size. Added inline assertion.

commit 43d6a0678fd37d7f382d5f89c898571ae4e3cfbb
Author: Ben Stopford 
Date:   2015-08-21T16:45:19Z

KAFKA-2453: small change which prevents the ConsoleConsumer from throwing 
an exception when the Finalizer thread tries to close it.

commit cac85029c7d802da29d68073545c118804bd41cb
Author: Ben Stopford 
Date:   2015-08-21T17:08:29Z

KAFKA-2453: Added additional arguments to call to EndToEndLatency from 
Performance tests

commit 119a6fa545bcaf586c9cb110f0e13c1cfee1f56c
Author: Ben Stopford 
Date:   2015-09-01T10:20:14Z

KAFKA-2453: Rebased to trunk

KAFKA-2453: removed whitespace

KAFKA-2453: Formatting only

commit 954f076701dd8961e3f835b08a022fc31ae74943
Author: Ben Stopford 
Date:   2015-09-01T15:30:17Z

KAFKA-2453: Incorporate changes from KAFKA-2486

Previous version used an optional busy loop to get better performance by 
avoiding sleeps inside the API. These turned out to be a bug fixed in 
KAFKA-2486 so the optional busy loop has been removed.

commit 4b96fab3037e6ed2ee00a0ff370168000fddcd09
Author: Ben Stopford 
Date:   2015-09-01T18:17:29Z

KAFKA-2453: removed sleep which I believe is no longer needed now that we have 
consumer.seekToEnd()

commit 3e52bca1f0f711b7abdee680253f808bb871057e
Author: Ben Stopford 
Date:   2015-09-01T22:04:17Z

KAFKA-2453: Producer acks can be a string

commit 29ce7cd9c95980f391bf3315099d285c120ace6e
Author: Ben Stopford 
Date:   2015-09-02T13:13:06Z

KAFKA-2453: Feedback from Gwen + fix to seek problem

- Fixed issue with seekToEnd evaluating lazily (i.e. when poll is called) 
meaning messages can be missed in slower environments (discovered when I ran 
this on EC2). Detailed in comments.
- Removed redundant retry backoff override (this was an artifact of 
KAFKA-2486)
- Forced producer acks to be 1 or all (i.e. synchronous)
- Reduced poll to a reasonable timeout to avoid hangs in erroneous 
situations
- Added check for results being non zero
- Added check that there is only a single message returned

commit d2d7378c7fdaf0a095141ece1dff8c52aa72a9ac
Author: Ben Stopford 
Date:   2015-09-02T14:09:33Z

KAFKA-2453: downgrade duplicate message exception to warning

commit a6544d19cb8d1da58a67f878d8204c10a3c42c1a
Author: Ben Stopford 
Date:   2015-09-02T16:05:12Z

KAFKA-2453: Added in support for ssl properties file. Ismael's changes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

