[jira] [Updated] (KAFKA-3724) Replace inter.broker.protocol.version with KIP-35 based version detection

2016-05-17 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3724:

Issue Type: Improvement  (was: Bug)

> Replace inter.broker.protocol.version with KIP-35 based version detection
> -
>
> Key: KAFKA-3724
> URL: https://issues.apache.org/jira/browse/KAFKA-3724
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>
> The inter.broker.protocol.version configuration was nice for a few releases, 
> but it is getting to be a pain: it has to be maintained, people have to 
> remember to update it, it breaks on trunk upgrades, etc.
> Since we have KIP-35, the controller can actually check for specific versions 
> supported by brokers for specific API requests. Why not use that and get rid 
> of the configuration? 
> We already maintain the protocol versions anyway, upgrades will be smoother, 
> and we'll support upgrades between any two patches.
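
For illustration, a minimal sketch of the negotiation this would enable, assuming per-broker [min, max] supported version ranges as returned by KIP-35's ApiVersionsRequest (names and shapes here are illustrative, not the actual controller code):

{code}
import java.util.Arrays;
import java.util.List;

// Hedged sketch: pick the newest API version that every broker in the
// cluster supports, given each broker's [min, max] range for one API key.
public class VersionNegotiationSketch {
    static short pickCommonVersion(List<short[]> brokerRanges) {
        short lo = 0, hi = Short.MAX_VALUE;
        for (short[] r : brokerRanges) {
            if (r[0] > lo) lo = r[0]; // highest minimum across brokers
            if (r[1] < hi) hi = r[1]; // lowest maximum across brokers
        }
        if (lo > hi) throw new IllegalStateException("no common version");
        return hi; // newest version every broker understands
    }

    public static void main(String[] args) {
        List<short[]> ranges = Arrays.asList(
                new short[]{0, 2},  // broker already upgraded
                new short[]{0, 1}); // broker still on the old release
        System.out.println(pickCommonVersion(ranges)); // prints 1
    }
}
{code}

With something like this, the controller always talks to each broker at a version both sides support, which is the "smoother upgrades" argument above.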



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #637

2016-05-17 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3721; Put UpdateMetadataRequest V2 in 0.10.0-IV1

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (Ubuntu ubuntu ubuntu-us golang-ppa) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 2bd7b64506a2a7ecef562f5b7db8a34e28d4e957 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2bd7b64506a2a7ecef562f5b7db8a34e28d4e957
 > git rev-list 9a44d938da4badec25e30733409cf7299e945bf6 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson9166066494572062005.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 23.972 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1861604005878744136.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 16.151 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66


[VOTE] 0.10.0.0 RC6

2016-05-17 Thread Gwen Shapira
Hello Kafka users, developers and client-developers,

This is the seventh (!) candidate for release of Apache Kafka
0.10.0.0. This is a major release that includes: (1) New message
format including timestamps (2) client interceptor API (3) Kafka
Streams.

This RC was rolled out to fix an issue with our packaging that caused
dependencies to leak in ways that broke our licensing, and an issue
with protocol versions that broke upgrades for LinkedIn and others who
may run from trunk. Thanks to Ewen, Ismael, Becket and Jun for finding
and fixing these issues.

Release notes for the 0.10.0.0 release:
http://home.apache.org/~gwenshap/0.10.0.0-rc6/RELEASE_NOTES.html

Let's try to vote within the 72h release vote window and get this baby
out already!

*** Please download, test and vote by Friday, May 20, 23:59 PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~gwenshap/0.10.0.0-rc6/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* java-doc
http://home.apache.org/~gwenshap/0.10.0.0-rc6/javadoc/

* tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=065899a3bc330618e420673acf9504d123b800f3

* Documentation:
http://kafka.apache.org/0100/documentation.html

* Protocol:
http://kafka.apache.org/0100/protocol.html

/**

Thanks,

Gwen


Jenkins build is back to normal : kafka-trunk-jdk7 #1301

2016-05-17 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-0.10.0-jdk7 #98

2016-05-17 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3721; Put UpdateMetadataRequest V2 in 0.10.0-IV1

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.0^{commit} # timeout=10
Checking out Revision e02a0dd6afa1a51bde4502ad4e733031bb13f6c3 
(refs/remotes/origin/0.10.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e02a0dd6afa1a51bde4502ad4e733031bb13f6c3
 > git rev-list 050ca1f0304ad35dfb727fc8f67813b39df16c22 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson8921282098107500299.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 18.827 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson6065083154180428757.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
org.gradle.api.internal.changedetection.state.FileCollectionSnapshotImpl cannot 
be cast to 
org.gradle.api.internal.changedetection.state.OutputFilesCollectionSnapshotter$OutputFilesSnapshot

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 20.329 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3721:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1400
[https://github.com/apache/kafka/pull/1400]

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.
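
For readers following along, a hedged sketch of the intermediate-version scheme this ticket touches; the real implementation is the ApiVersion object in Kafka's Scala core, and the request-version numbers below are illustrative only:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: each inter.broker.protocol.version string maps to the
// UpdateMetadataRequest version a broker may send, so a wire-format change
// needs its own IV ("intermediate version") entry rather than reusing one.
public class InterBrokerVersionMapSketch {
    static final Map<String, Short> UPDATE_METADATA_VERSION = new HashMap<>();
    static {
        UPDATE_METADATA_VERSION.put("0.9.0", (short) 1);
        UPDATE_METADATA_VERSION.put("0.10.0-IV0", (short) 1); // before the fix, V2 reused this entry
        UPDATE_METADATA_VERSION.put("0.10.0-IV1", (short) 2); // after KAFKA-3721, V2 has its own entry
    }
}
{code}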



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288178#comment-15288178
 ] 

ASF GitHub Bot commented on KAFKA-3721:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1400


> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3721: Put UpdateMetadataRequest V2 in 0....

2016-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1400


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] 0.10.0.0 RC5

2016-05-17 Thread Gwen Shapira
Holding off for another blocker:
https://github.com/apache/kafka/pull/1400

On Tue, May 17, 2016 at 12:48 PM, Gwen Shapira  wrote:
> Yeah, this means another RC.
>
> Lets try to roll one out later today.
>
> On Tue, May 17, 2016 at 12:43 PM, Ewen Cheslack-Postava
>  wrote:
>> FYI, there's a blocker issue with this RC due to Apache licensing
>> restrictions. One of Connect's dependencies transitively includes the
>> findbugs annotations jar, which is used for static analysis. Luckily it
>> doesn't affect functionality and looks like it can easily be filtered out.
>> We also discovered that some jars that had been explicitly filtered in core
>> had accidentally been pulled in as streams dependencies (junit, jline,
>> netty). These issues are being addressed here:
>> https://github.com/apache/kafka/pull/1396
>>
>> -Ewen
>>
>> On Mon, May 16, 2016 at 3:28 PM, Gwen Shapira  wrote:
>>
>>> Hello Kafka users, developers and client-developers,
>>>
>>> This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0.
>>> This is a major release that includes: (1) New message format
>>> including timestamps (2) client interceptor API (3) Kafka Streams.
>>> Since this is a major release, we will give people more time to try it
>>> out and give feedback.
>>>
>>> Release notes for the 0.10.0.0 release:
>>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/RELEASE_NOTES.html
>>>
>>> Special thanks to Liquan Pei and Tom Crayford for testing the 
>>> previous release candidate and reporting back issues.
>>>
>>> Note that this is the sixth RC. I hope we are done with blockers,
>>> because I'm tired of RCs :)
>>>
>>> *** Please download, test and vote by Friday, May 20, 4pm PT
>>>
>>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>> http://kafka.apache.org/KEYS
>>>
>>> * Release artifacts to be voted upon (source and binary):
>>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/
>>>
>>> * Maven artifacts to be voted upon:
>>> https://repository.apache.org/content/groups/staging/org/apache/kafka
>>>
>>> * scala-doc
>>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/scaladoc
>>>
>>> * java-doc
>>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
>>>
>>> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=f68d6218478960b6cc6a01a18542825cc2fe8b80
>>>
>>> * Documentation:
>>> http://kafka.apache.org/0100/documentation.html
>>>
>>> * Protocol:
>>> http://kafka.apache.org/0100/protocol.html
>>>
>>> /**
>>>
>>> Thanks,
>>>
>>> Gwen
>>>
>>
>>
>>
>> --
>> Thanks,
>> Ewen


Build failed in Jenkins: kafka-0.10.0-jdk7 #97

2016-05-17 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Exclude jline, netty and findbugs annotations

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.0^{commit} # timeout=10
Checking out Revision 050ca1f0304ad35dfb727fc8f67813b39df16c22 
(refs/remotes/origin/0.10.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 050ca1f0304ad35dfb727fc8f67813b39df16c22
 > git rev-list 7c6ee8d5ecb2c6aca3bb92296f0e2c8b2ec0de6e # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson3470404907177461049.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 24.246 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson1729879237404581201.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
org.gradle.api.internal.changedetection.state.FileCollectionSnapshotImpl cannot 
be cast to 
org.gradle.api.internal.changedetection.state.OutputFilesCollectionSnapshotter$OutputFilesSnapshot

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 26.306 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #636

2016-05-17 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Exclude jline, netty and findbugs annotations

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9a44d938da4badec25e30733409cf7299e945bf6 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9a44d938da4badec25e30733409cf7299e945bf6
 > git rev-list ab8fc86382fc9f5a8a0f696a91c66806d51c05a5 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3441469110735875014.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 13.206 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8583861030896346411.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file '/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/build.gradle': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
org.gradle.api.internal.changedetection.state.FileCollectionSnapshotImpl cannot 
be cast to 
org.gradle.api.internal.changedetection.state.OutputFilesCollectionSnapshotter$OutputFilesSnapshot

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.269 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66


[GitHub] kafka pull request: MINOR: Exclude jline, netty and findbugs annot...

2016-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1396


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1300

2016-05-17 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3393; Updated the docs to reflect the deprecation of

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ab8fc86382fc9f5a8a0f696a91c66806d51c05a5 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ab8fc86382fc9f5a8a0f696a91c66806d51c05a5
 > git rev-list 53fd22a76613b309b7941a5b0c64f17523b39202 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7642771478030088865.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 28.929 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1662968156118239649.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:263: warning: [deprecation] TIMEOUT_CONFIG in ProducerConfig has been deprecated
        this.requestTimeoutMs = config.getInt(ProducerConfig.TIMEOUT_CONFIG);
                                              ^
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:79: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
        org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
        ^
:36: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
        commitTimestamp: Long = org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
                                ^

[jira] [Issue Comment Deleted] (KAFKA-3723) Cannot change size of schema cache for JSON converter

2016-05-17 Thread Christian Posta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Posta updated KAFKA-3723:
---
Comment: was deleted

(was: Pull request on github:

https://github.com/apache/kafka/pull/1401)

> Cannot change size of schema cache for JSON converter
> -
>
> Key: KAFKA-3723
> URL: https://issues.apache.org/jira/browse/KAFKA-3723
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Christian Posta
>Assignee: Ewen Cheslack-Postava
>
> Using the worker config value.converter.schemas.cache.size, we should be 
> able to change the size of the cache. However, because of an incorrect 
> integer cast, we cannot change it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3723) Cannot change size of schema cache for JSON converter

2016-05-17 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288059#comment-15288059
 ] 

Christian Posta commented on KAFKA-3723:


Pull request on github:

https://github.com/apache/kafka/pull/1401

> Cannot change size of schema cache for JSON converter
> -
>
> Key: KAFKA-3723
> URL: https://issues.apache.org/jira/browse/KAFKA-3723
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Christian Posta
>Assignee: Ewen Cheslack-Postava
>
> Using the worker config value.converter.schemas.cache.size, we should be 
> able to change the size of the cache. However, because of an incorrect 
> integer cast, we cannot change it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3723) Cannot change size of schema cache for JSON converter

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288058#comment-15288058
 ] 

ASF GitHub Bot commented on KAFKA-3723:
---

GitHub user christian-posta opened a pull request:

https://github.com/apache/kafka/pull/1401

KAFKA-3723 Cannot change size of schema cache for JSON converter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/christian-posta/kafka 
ceposta-connect-class-cast-error

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1401.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1401


commit add0c6f839f5a9e53e8140f0f93b47f655170411
Author: Christian Posta 
Date:   2016-05-18T01:38:18Z

KAFKA-3723 Cannot change size of schema cache for JSON converter




> Cannot change size of schema cache for JSON converter
> -
>
> Key: KAFKA-3723
> URL: https://issues.apache.org/jira/browse/KAFKA-3723
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Christian Posta
>Assignee: Ewen Cheslack-Postava
>
> Using the worker config value.converter.schemas.cache.size, we should be 
> able to change the size of the cache. However, because of an incorrect 
> integer cast, we cannot change it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3055) JsonConverter mangles schema during serialization (fromConnectData)

2016-05-17 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288056#comment-15288056
 ] 

Christian Posta commented on KAFKA-3055:


the correct config for the worker to change the cache size is 

value.converter.schemas.cache.size

Need to have this fixed before you can use it though: 
https://issues.apache.org/jira/browse/KAFKA-3723

> JsonConverter mangles schema during serialization (fromConnectData)
> ---
>
> Key: KAFKA-3055
> URL: https://issues.apache.org/jira/browse/KAFKA-3055
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Kishore Senji
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1, 0.10.0.0
>
>
> Test case is here: 
> https://github.com/ksenji/kafka-connect-test/tree/master/src/test/java/org/apache/kafka/connect/json
> If Caching is disabled, it behaves correctly and JsonConverterWithNoCacheTest 
> runs successfully. Otherwise the test JsonConverterTest fails.
> The reason is that the JsonConverter has a bug where it mangles the schema as 
> it assigns all String fields with the same name (and similar for all Int32 
> fields)
> This is how the schema & payload gets serialized for the Person Struct (with 
> caching disabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"firstName"},{"type":"string","optional":false,"field":"lastName"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"age"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> where as when caching is enabled the same Struct gets serialized as (with 
> caching enabled) :
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"weightInKgs"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> As we can see all String fields became "email" and all int32 fields became 
> "weightInKgs". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3723) Cannot change size of schema cache for JSON converter

2016-05-17 Thread Christian Posta (JIRA)
Christian Posta created KAFKA-3723:
--

 Summary: Cannot change size of schema cache for JSON converter
 Key: KAFKA-3723
 URL: https://issues.apache.org/jira/browse/KAFKA-3723
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Christian Posta
Assignee: Ewen Cheslack-Postava


Using the worker config value.converter.schemas.cache.size, we should be able 
to change the size of the cache. However, because of an incorrect integer cast, 
we cannot change it.
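
A hypothetical illustration of the failure mode (not the JsonConverter source): worker properties typically arrive as strings, so a direct integer cast of the configured value fails and the default cache size stays in effect.

{code}
import java.util.HashMap;
import java.util.Map;

public class CacheSizeCastSketch {
    public static void main(String[] args) {
        Map<String, Object> configs = new HashMap<>();
        configs.put("schemas.cache.size", "64"); // value as read from a properties file

        Object raw = configs.get("schemas.cache.size");
        try {
            int size = (Integer) raw; // incorrect cast: raw is a String here
            System.out.println("cache size = " + size);
        } catch (ClassCastException e) {
            System.out.println("cast failed, default stays in effect: " + e);
        }
    }
}
{code}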



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288052#comment-15288052
 ] 

Gwen Shapira commented on KAFKA-3721:
-

I'm ok with whatever [~ijuma] recommends.

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3721:

Status: Patch Available  (was: Open)

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288045#comment-15288045
 ] 

Jiangjie Qin commented on KAFKA-3721:
-

[~gwenshap] [~ijuma] I just submitted a PR. The current patch is using 
0.10.0-IV0. Please let me know if you reach agreement on changing it to 0.10.0.

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288042#comment-15288042
 ] 

ASF GitHub Bot commented on KAFKA-3721:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1400

KAFKA-3721: Put UpdateMetadataRequest V2 in 0.10.0-IV1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3721

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1400.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1400


commit 123bd518c8ad3dbfba8a1c9a26d3bf659c61a514
Author: Jiangjie Qin 
Date:   2016-05-18T01:25:11Z

Put UpdateMetadataRequest V2 in 0.10.0-IV1




> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3721: Put UpdateMetadataRequest V2 in 0....

2016-05-17 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1400

KAFKA-3721: Put UpdateMetadataRequest V2 in 0.10.0-IV1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3721

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1400.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1400


commit 123bd518c8ad3dbfba8a1c9a26d3bf659c61a514
Author: Jiangjie Qin 
Date:   2016-05-18T01:25:11Z

Put UpdateMetadataRequest V2 in 0.10.0-IV1




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3722) PlaintextChannelBuilder should not use ChannelBuilders.createPrincipalBuilder(configs) for creating instance of PrincipalBuilder

2016-05-17 Thread Mayuresh Gharat (JIRA)
Mayuresh Gharat created KAFKA-3722:
--

 Summary: PlaintextChannelBuilder should not use 
ChannelBuilders.createPrincipalBuilder(configs) for creating instance of 
PrincipalBuilder
 Key: KAFKA-3722
 URL: https://issues.apache.org/jira/browse/KAFKA-3722
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat


Consider this scenario:
1) We have a Kafka broker running on PlainText and SSL ports simultaneously.

2) We try to plug in a custom principal builder using the config 
"principal.builder.class" for requests coming over the SSL port.

3) ChannelBuilders.createPrincipalBuilder(configs) first checks whether a 
config "principal.builder.class" is specified in the passed-in configs and 
tries to use it even when it is building the instance of PrincipalBuilder for 
the PlainText port, when that custom principal class is only meant for the SSL 
port.

IMO, having a DefaultPrincipalBuilder for the PlainText port should be fine.
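
A minimal sketch of the proposed behavior, with hypothetical names (this is not the actual patch): the plaintext channel builder ignores principal.builder.class, which is meant for the SSL listener, and always uses the default.

{code}
import java.util.Collections;
import java.util.Map;

public class PrincipalBuilderSelectionSketch {
    interface PrincipalBuilder { String buildPrincipal(); }

    static class DefaultPrincipalBuilder implements PrincipalBuilder {
        public String buildPrincipal() { return "ANONYMOUS"; }
    }

    static PrincipalBuilder forPlaintext(Map<String, ?> configs) {
        // Proposed: do NOT consult configs.get("principal.builder.class") here;
        // a custom builder configured for the SSL port must not leak into the
        // plaintext listener.
        return new DefaultPrincipalBuilder();
    }

    public static void main(String[] args) {
        System.out.println(forPlaintext(Collections.emptyMap()).buildPrincipal());
    }
}
{code}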




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288035#comment-15288035
 ] 

Jiangjie Qin commented on KAFKA-3721:
-

I am fine either way. My understanding of the original intention was to just 
let "0.10.0" point to the latest IV object so it is always ready for 
release. 
KIP-35 would be useful, although arguably there are some cases where people 
want to specify the version instead of doing auto-negotiation.

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288024#comment-15288024
 ] 

Ismael Juma commented on KAFKA-3721:


[~becket_qin], are you intending to provide a PR today?

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288022#comment-15288022
 ] 

Ismael Juma commented on KAFKA-3721:


Btw, I think it's fine to name it KAFKA_0_10_0 now. My point was related to 
Becket's question: "In the future do we always create a new Object and name it 
without "IV" suffix even if there is no change compared with the previous 
ApiVersion?".

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288017#comment-15288017
 ] 

Gwen Shapira commented on KAFKA-3721:
-

I guess we had different plans :)

At this stage, I don't want to quibble over what we move it to. We just need to 
make sure our upgrade steps reflect reality and system upgrade tests don't 
break :)

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288007#comment-15288007
 ] 

Ismael Juma commented on KAFKA-3721:


As I understand it, Jun's idea was for it to survive the release.

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288003#comment-15288003
 ] 

Gwen Shapira commented on KAFKA-3721:
-

I think the IV was intended as "intermediate version" and was not supposed to 
survive the release? We will need to update the docs to match though.

In the future, we need to see if we can make good use of KIP-35 to get rid of 
the configuration completely :)

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3393) Update site docs and javadoc based on max.block.ms changes

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288001#comment-15288001
 ] 

ASF GitHub Bot commented on KAFKA-3393:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1060


> Update site docs and javadoc based on max.block.ms changes
> --
>
> Key: KAFKA-3393
> URL: https://issues.apache.org/jira/browse/KAFKA-3393
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Mayuresh Gharat
> Fix For: 0.10.1.0
>
>
> KAFKA-2120 deprecated block.on.buffer.full in favor of max.block.ms. This 
> change alters the behavior of the KafkaProducer. Users may not be expecting 
> that change when upgrading from the 0.8.x clients. We should:
> - Update the KafkaProducer javadoc
> - Update the ProducerConfig docs and the generated site docs
> - Add an entry to the 0.9 upgrade notes (if appropriate) 
> Related discussion can be seen here: https://github.com/apache/kafka/pull/1058
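
For context, a hedged sketch of the producer configuration the updated docs describe (the timeout value is illustrative, not a recommendation):

{code}
import java.util.Properties;

public class MaxBlockMsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        // block.on.buffer.full is deprecated; max.block.ms now bounds how
        // long send() may block.
        props.put("max.block.ms", "60000");
    }
}
{code}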



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3393) Update site docs and javadoc based on max.block.ms changes

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3393.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1060
[https://github.com/apache/kafka/pull/1060]

> Update site docs and javadoc based on max.block.ms changes
> --
>
> Key: KAFKA-3393
> URL: https://issues.apache.org/jira/browse/KAFKA-3393
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Mayuresh Gharat
> Fix For: 0.10.1.0
>
>
> KAFKA-2120 deprecated block.on.buffer.full in favor of max.block.ms. This 
> change alters the behavior of the KafkaProducer. Users may not be expecting 
> that change when upgrading from the 0.8.x clients. We should:
> - Update the KafkaProducer javadoc
> - Update the ProducerConfig docs and the generated site docs
> - Add an entry to the 0.9 upgrade notes (if appropriate) 
> Related discussion can be seen here: https://github.com/apache/kafka/pull/1058



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3393 : Updated the docs to reflect the d...

2016-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1060


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287999#comment-15287999
 ] 

Jiangjie Qin commented on KAFKA-3721:
-

Hi Gwen, currently "0.10.0" always points to the latest internal 
ApiVersion. Are you suggesting creating a new ApiVersion object KAFKA_0_10_0 
instead of KAFKA_0_10_0_IV1? That sounds reasonable to me in this case. In the 
future, do we always create a new object and name it without the "IV" suffix 
even if there is no change compared with the previous ApiVersion?

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287989#comment-15287989
 ] 

Gwen Shapira commented on KAFKA-3721:
-

Since we are now releasing - can we modify the version to 0.10.0?

It will be more in line with what we had in the past.

> UpdateMetadataRequest V2 should be in API version 0.10.0-IV1
> 
>
> Key: KAFKA-3721
> URL: https://issues.apache.org/jira/browse/KAFKA-3721
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not 
> introduce a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes 
> problems for people who are running off trunk and using message format 
> 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3721) UpdateMetadataRequest V2 should be in API version 0.10.0-IV1

2016-05-17 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3721:
---

 Summary: UpdateMetadataRequest V2 should be in API version 
0.10.0-IV1
 Key: KAFKA-3721
 URL: https://issues.apache.org/jira/browse/KAFKA-3721
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
Priority: Blocker
 Fix For: 0.10.0.0


When we introduced UpdateMetadataRequest V2 in KAFKA-1215, we did not introduce 
a new API version 0.10.0-IV1 but reused 0.10.0-IV0. This causes problems for 
people who are running off trunk and using message format 0.10.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka-site pull request: MINOR: Replace Scala style guide PDF link...

2016-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/13


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: MINOR: Replace Scala style guide PDF link...

2016-05-17 Thread ijuma
Github user ijuma commented on the pull request:

https://github.com/apache/kafka-site/pull/13#issuecomment-219895048
  
Thanks, LGTM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3720) Remove BufferExhaustException from doSend() in KafkaProducer

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287937#comment-15287937
 ] 

Ismael Juma commented on KAFKA-3720:


And probably deprecate `BufferExhaustedException` since it's not used by 
anything.
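
A deprecation would be a one-line change; a rough sketch only (the real class 
lives in org.apache.kafka.clients.producer and extends KafkaException, 
simplified here so the snippet compiles on its own):

{code}
// Sketch only: the actual BufferExhaustedException extends KafkaException;
// RuntimeException is used here to keep the example self-contained.
@Deprecated
public class BufferExhaustedException extends RuntimeException {
    public BufferExhaustedException(String message) {
        super(message);
    }
}
{code}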

> Remove BufferExhaustException from doSend() in KafkaProducer
> 
>
> Key: KAFKA-3720
> URL: https://issues.apache.org/jira/browse/KAFKA-3720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> KafkaProducer no longer throws BufferExhaustedException. We should remove it 
> from the catch clause. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3720) Remove BufferExhaustException from doSend() in KafkaProducer

2016-05-17 Thread Mayuresh Gharat (JIRA)
Mayuresh Gharat created KAFKA-3720:
--

 Summary: Remove BufferExhaustException from doSend() in 
KafkaProducer
 Key: KAFKA-3720
 URL: https://issues.apache.org/jira/browse/KAFKA-3720
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat


KafkaProducer no longer throws BufferExhaustedException. We should remove it from 
the catch clause. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3554) Generate actual data with specific compression ratio and add multi-thread support in the ProducerPerformance tool.

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287878#comment-15287878
 ] 

ASF GitHub Bot commented on KAFKA-3554:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1399

KAFKA-3554 Improve ProducerPerformance test

1. Added multiple thread support.
2. Added value-bound to make compressed data more realistic.
3. Print out the producer metrics.
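
The value-bound idea can be sketched as below; this is a minimal illustration 
of bounding each byte's value, and the names are not necessarily those used in 
the patch:

{code}
import java.util.Random;

// A narrow bound (e.g. 4) yields repetitive, highly compressible payloads;
// a wide bound (e.g. 127) yields payloads that barely compress.
public class BoundedPayloadGenerator {
    private final Random random = new Random();

    byte[] generate(int recordSize, int valueBound) {
        byte[] payload = new byte[recordSize];
        for (int i = 0; i < payload.length; i++)
            payload[i] = (byte) random.nextInt(valueBound); // each byte in [0, valueBound)
        return payload;
    }

    public static void main(String[] args) {
        BoundedPayloadGenerator generator = new BoundedPayloadGenerator();
        System.out.println(generator.generate(1024, 4).length + " bytes generated");
    }
}
{code}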

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3554

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1399


commit c6a96a95673e89a60a19959100982ab151ffb73c
Author: Jiangjie Qin 
Date:   2016-05-17T23:10:57Z

KAFKA-3554 ProducerPerformance test improvements.

commit 71fd4c8e92d3d9e695c8c0fcfab838de61f4ffc4
Author: Jiangjie Qin 
Date:   2016-05-17T23:12:40Z

remove change in the server property file




> Generate actual data with specific compression ratio and add multi-thread 
> support in the ProducerPerformance tool.
> --
>
> Key: KAFKA-3554
> URL: https://issues.apache.org/jira/browse/KAFKA-3554
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> Currently the ProducerPerformance always generates the payload with the same 
> bytes. This does not work well for testing compressed data because the 
> payload is extremely compressible no matter how big it is.
> We can make some changes to make it more useful for compressed messages. 
> Currently I am generating a payload containing integers from a given range. 
> By adjusting the range of the integers, we can get different compression 
> ratios. 
> API-wise, we can either let the user specify the integer range or the expected 
> compression ratio (we will do some probing to get the corresponding range for 
> the users).
> Besides that, in many cases it is useful to have multiple producer threads 
> when the producer threads themselves are the bottleneck. Admittedly, people 
> can run multiple ProducerPerformance instances to achieve a similar result, 
> but it is still different from the real case when people actually use the 
> producer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3554 Improve ProducerPerformance test

2016-05-17 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1399

KAFKA-3554 Improve ProducerPerformance test

1. Added multiple thread support.
2. Added value-bound to make compressed data more realistic.
3. Print out the producer metrics.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3554

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1399


commit c6a96a95673e89a60a19959100982ab151ffb73c
Author: Jiangjie Qin 
Date:   2016-05-17T23:10:57Z

KAFKA-3554 ProducerPerformance test improvements.

commit 71fd4c8e92d3d9e695c8c0fcfab838de61f4ffc4
Author: Jiangjie Qin 
Date:   2016-05-17T23:12:40Z

remove change in the server property file




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3396) Unauthorized topics are returned to the user

2016-05-17 Thread Edoardo Comar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edoardo Comar reassigned KAFKA-3396:


Assignee: Edoardo Comar

> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>Reporter: Grant Henke
>Assignee: Edoardo Comar
>
> Kafka's clients and protocol expose unauthorized topics to the end user. 
> This is often considered a security hole. To some, the topic name is 
> considered sensitive information. Those who do not consider the name 
> sensitive still consider it extra information that allows a user to try to 
> circumvent security.  Instead, if a user does not have access to the topic, 
> the servers should act as if the topic does not exist. 
> To solve this, some of the changes could include:
>   - The broker should not return a TOPIC_AUTHORIZATION(29) error for 
> requests (metadata, produce, fetch, etc.) that include a topic that the user 
> does not have DESCRIBE access to.
>   - A user should not receive a TopicAuthorizationException when they do 
> not have DESCRIBE access to a topic or the cluster.
>  - The client should not maintain and expose a list of unauthorized 
> topics in org.apache.kafka.common.Cluster. 
> Other changes may be required that are not listed here. Further analysis is 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3396) Unauthorized topics are returned to the user

2016-05-17 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287873#comment-15287873
 ] 

Edoardo Comar commented on KAFKA-3396:
--

Hi [~granthenke], we're still working on it; 
we're working to get the unit tests to pass (and have added a few others).

We had some surprises running `SaslSslEndToEndAuthorizationTest`:
if we change the consumers from 
```consumers.head.assign(List(tp).asJava)```
to
```consumers.head.subscribe(List(topic).asJava)```
the code paths are different, and even the original tests may not pass unless we 
set
```this.serverConfig.setProperty(KafkaConfig.MinInSyncReplicasProp, "1")```
so that the value is `"1"`,
which is the case also for the original code. Not sure if we're missing 
something.


> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>Reporter: Grant Henke
>
> Kafka's clients and protocol expose unauthorized topics to the end user. 
> This is often considered a security hole. To some, the topic name is 
> considered sensitive information. Those who do not consider the name 
> sensitive still consider it extra information that allows a user to try to 
> circumvent security.  Instead, if a user does not have access to the topic, 
> the servers should act as if the topic does not exist. 
> To solve this, some of the changes could include:
>   - The broker should not return a TOPIC_AUTHORIZATION(29) error for 
> requests (metadata, produce, fetch, etc.) that include a topic that the user 
> does not have DESCRIBE access to.
>   - A user should not receive a TopicAuthorizationException when they do 
> not have DESCRIBE access to a topic or the cluster.
>  - The client should not maintain and expose a list of unauthorized 
> topics in org.apache.kafka.common.Cluster. 
> Other changes may be required that are not listed here. Further analysis is 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3443) Support regex topics in addSource() and stream()

2016-05-17 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287764#comment-15287764
 ] 

Bill Bejeck commented on KAFKA-3443:


Status update - still actively working on this and I'm really close to issuing 
a PR in the next couple of days.

> Support regex topics in addSource() and stream()
> 
>
> Key: KAFKA-3443
> URL: https://issues.apache.org/jira/browse/KAFKA-3443
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: api
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams only supports specific topics when creating source 
> streams; we could leverage the consumer's regex subscription to allow regex 
> topics as well.
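
The consumer-side API this would build on already exists; a minimal sketch, 
with broker address, group id and pattern as placeholders:

{code}
import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RegexSubscriptionDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "regex-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Pattern subscription needs a rebalance listener because the set of
        // matching topics (and hence the assignment) can change over time.
        consumer.subscribe(Pattern.compile("orders-.*"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
        });
        consumer.close();
    }
}
{code}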



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3717:
---
Reviewer: Gwen Shapira
  Status: Patch Available  (was: In Progress)

> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287604#comment-15287604
 ] 

ASF GitHub Bot commented on KAFKA-3717:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1398

KAFKA-3717; Support building aggregate javadoc for all project modules

The task is called `javadocAll` and the generated html will be under 
`/build/docs/javadoc/`.

I disabled javadoc for `tools` and `log4j-appender` as they are not public 
API.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3717-aggregate-javadoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1398


commit bbf1626e53eb179b604c85aacca8b6e731bf
Author: Ismael Juma 
Date:   2016-05-17T21:16:35Z

Support aggregate javadoc task via `javadocAll`




> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3717; Support building aggregate javadoc...

2016-05-17 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1398

KAFKA-3717; Support building aggregate javadoc for all project modules

The task is called `javadocAll` and the generated html will be under 
`/build/docs/javadoc/`.

I disabled javadoc for `tools` and `log4j-appender` as they are not public 
API.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3717-aggregate-javadoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1398


commit bbf1626e53eb179b604c85aacca8b6e731bf
Author: Ismael Juma 
Date:   2016-05-17T21:16:35Z

Support aggregate javadoc task via `javadocAll`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #634

2016-05-17 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3719) Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too narrow

2016-05-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287422#comment-15287422
 ] 

Gwen Shapira commented on KAFKA-3719:
-

AFAIK, underscores are not legal in hostnames. See 
https://tools.ietf.org/html/rfc952

" A "name" (Net, Host, Gateway, or Domain name) is a text string up
   to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
   sign (-), and period (.).  Note that periods are only allowed when
   they serve to delimit components of "domain style names". (See
   RFC-921, "Domain Name System Implementation Schedule", for
   background).  No blank or space characters are permitted as part of a
   name. No distinction is made between upper and lower case.  The first
   character must be an alpha character.  The last character must not be
   a minus sign or period.  A host which serves as a GATEWAY should have
   "-GATEWAY" or "-GW" as part of its name.  Hosts which do not serve as
   Internet gateways should not use "-GATEWAY" and "-GW" as part of
   their names. A host which is a TAC should have "-TAC" as the last
   part of its host name, if it is a DoD host.  Single character names
   or nicknames are not allowed."
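
The symptom is easy to demonstrate with an illustrative approximation of the 
pattern (not the exact HOST_PORT_PATTERN from Utils):

{code}
import java.util.regex.Pattern;

public class HostPortDemo {
    // Approximation: letters, digits, dots and hyphens only, per RFC 952.
    private static final Pattern HOST_PORT = Pattern.compile("^([0-9a-zA-Z\\-.]+):([0-9]+)$");

    public static void main(String[] args) {
        for (String address : new String[]{"broker-1:9092", "ci_worker_3:9092"})
            // The underscore host fails to match, which is the reported symptom.
            System.out.println(address + " matches? " + HOST_PORT.matcher(address).matches());
    }
}
{code}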

> Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too 
> narrow
> -
>
> Key: KAFKA-3719
> URL: https://issues.apache.org/jira/browse/KAFKA-3719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balazs Kossovics
>Priority: Trivial
>
> In our continuous integration environment the Kafka brokers run on hosts 
> containing underscores in their names. The current regex incorrectly splits 
> these names into host and port parts.
> I could submit a pull request if someone confirms that this is indeed a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.10.0.0 RC5

2016-05-17 Thread Gwen Shapira
Yeah, this means another RC.

Let's try to roll one out later today.

On Tue, May 17, 2016 at 12:43 PM, Ewen Cheslack-Postava
 wrote:
> FYI, there's a blocker issue with this RC due to Apache licensing
> restrictions. One of Connect's dependencies transitively includes the
> findbugs annotations jar, which is used for static analysis. Luckily it
> doesn't affect functionality and looks like it can easily be filtered out.
> We also discovered that some jars that had been explicitly filtered in core
> had accidentally been pulled in as streams dependencies (junit, jline,
> netty). These issues are being addressed here:
> https://github.com/apache/kafka/pull/1396
>
> -Ewen
>
> On Mon, May 16, 2016 at 3:28 PM, Gwen Shapira  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0.
>> This is a major release that includes: (1) New message format
>> including timestamps (2) client interceptor API (3) Kafka Streams.
>> Since this is a major release, we will give people more time to try it
>> out and give feedback.
>>
>> Release notes for the 0.10.0.0 release:
>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/RELEASE_NOTES.html
>>
>> Special thanks to Liquan Pei and Tom Crayford for testing the
>> previous release candidate and reporting back issues.
>>
>> Note that this is the sixth RC. I hope we are done with blockers,
>> because I'm tired of RCs :)
>>
>> *** Please download, test and vote by Friday, May 20, 4pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka
>>
>> * scala-doc
>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/scaladoc
>>
>> * java-doc
>> http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
>>
>> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
>>
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=f68d6218478960b6cc6a01a18542825cc2fe8b80
>>
>> * Documentation:
>> http://kafka.apache.org/0100/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/0100/protocol.html
>>
>> /**
>>
>> Thanks,
>>
>> Gwen
>>
>
>
>
> --
> Thanks,
> Ewen


Re: [VOTE] 0.10.0.0 RC5

2016-05-17 Thread Ewen Cheslack-Postava
FYI, there's a blocker issue with this RC due to Apache licensing
restrictions. One of Connect's dependencies transitively includes the
findbugs annotations jar, which is used for static analysis. Luckily it
doesn't affect functionality and looks like it can easily be filtered out.
We also discovered that some jars that had been explicitly filtered in core
had accidentally been pulled in as streams dependencies (junit, jline,
netty). These issues are being addressed here:
https://github.com/apache/kafka/pull/1396

-Ewen

On Mon, May 16, 2016 at 3:28 PM, Gwen Shapira  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0.
> This is a major release that includes: (1) New message format
> including timestamps (2) client interceptor API (3) Kafka Streams.
> Since this is a major release, we will give people more time to try it
> out and give feedback.
>
> Release notes for the 0.10.0.0 release:
> http://home.apache.org/~gwenshap/0.10.0.0-rc5/RELEASE_NOTES.html
>
> Special thanks to Liquan Pei and Tom Crayford for testing the
> previous release candidate and reporting back issues.
>
> Note that this is the sixth RC. I hope we are done with blockers,
> because I'm tired of RCs :)
>
> *** Please download, test and vote by Friday, May 20, 4pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/0.10.0.0-rc5/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka
>
> * scala-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc5/scaladoc
>
> * java-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
>
> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=f68d6218478960b6cc6a01a18542825cc2fe8b80
>
> * Documentation:
> http://kafka.apache.org/0100/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> /**
>
> Thanks,
>
> Gwen
>



-- 
Thanks,
Ewen


[jira] [Comment Edited] (KAFKA-3704) Improve mechanism for compression stream block size selection in KafkaProducer

2016-05-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286887#comment-15286887
 ] 

Guozhang Wang edited comment on KAFKA-3704 at 5/17/16 6:52 PM:
---

Thanks for the summary [~ijuma].

I think 2) solves the problem "cleanly" except for GZIP, while 3) still 
introduces extra memory out of controlled buffer pool, one block for each 
partition. 1) introduces a new config but does not necessarily control the 
total extra memory allocated out of buffer pool.

Personally I feel 3) is worth doing: originally I was concerned it would 
complicate the code quite a lot, but after checking it once again I feel it may 
not be that much worse compared with 2).


was (Author: guozhang):
Thanks for the summary [~ijuma].

I think 2) solves the problem "cleanly" except for GZIP, while 3) still 
introduces extra memory out of controlled buffer pool, one block for each 
partition. 1) introduces a new config but does not necessarily control the 
total extra memory allocated out of buffer pool.

Personally I fell 3) is worth doing: originally I'm concerned it complicates 
the code by quite a lot, but after checking it once again I feel it may not be 
that worse compared with 2).

> Improve mechanism for compression stream block size selection in KafkaProducer
> --
>
> Key: KAFKA-3704
> URL: https://issues.apache.org/jira/browse/KAFKA-3704
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> As discovered in https://issues.apache.org/jira/browse/KAFKA-3565, the 
> current default block size (1K) used in Snappy and GZIP may cause a 
> sub-optimal compression ratio for Snappy, and hence reduce throughput. 
> Because we no longer recompress data in the broker, it also impacts what gets 
> stored on disk.
> A solution might be to use the default block size, which is 64K in LZ4, 32K 
> in Snappy and 0.5K in GZIP. The downside is that this solution will require 
> more memory allocated outside of the buffer pool and hence users may need to 
> bump up their JVM heap size, especially for MirrorMakers. Using Snappy as an 
> example, it's an additional 2x32k per batch (as Snappy uses two buffers) and 
> one would expect at least one batch per partition. However, the number of 
> batches per partition can be much higher if the broker is slow to acknowledge 
> producer requests (depending on `buffer.memory`, `batch.size`, message size, 
> etc.).
> Given the above, there are a few things that could be done (potentially more 
> than one):
> 1) A configuration for the producer compression stream buffer size.
> 2) Allocate buffers from the buffer pool and pass them to the compression 
> library. This is possible with Snappy and we could adapt our LZ4 code. It's 
> not possible with GZIP, but it uses a very small buffer by default.
> 3) Close the existing `RecordBatch.records` when we create a new 
> `RecordBatch` for the `TopicPartition` instead of doing it during 
> `RecordAccumulator.drain`. This would mean that we would only retain 
> resources for one `RecordBatch` per partition, which would improve the worst 
> case scenario significantly.
> Note that we decided that this change was too risky for 0.10.0.0 and reverted 
> the original attempt.
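
To make option 1) concrete: snappy-java already exposes the block size through 
its constructor, so the knob itself is a one-liner. A sketch, assuming the 
snappy-java dependency on the classpath (this is not the producer's actual 
wiring):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.xerial.snappy.SnappyOutputStream;

public class BlockSizeDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        int blockSize = 32 * 1024; // snappy-java's own default; two such buffers are held per stream
        try (SnappyOutputStream out = new SnappyOutputStream(sink, blockSize)) {
            out.write(new byte[64 * 1024]); // highly compressible input
        }
        System.out.println("compressed size: " + sink.size());
    }
}
{code}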



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3704) Improve mechanism for compression stream block size selection in KafkaProducer

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287273#comment-15287273
 ] 

Ismael Juma commented on KAFKA-3704:


I agree [~guozhang].

> Improve mechanism for compression stream block size selection in KafkaProducer
> --
>
> Key: KAFKA-3704
> URL: https://issues.apache.org/jira/browse/KAFKA-3704
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> As discovered in https://issues.apache.org/jira/browse/KAFKA-3565, the 
> current default block size (1K) used in Snappy and GZIP may cause a 
> sub-optimal compression ratio for Snappy, and hence reduce throughput. 
> Because we no longer recompress data in the broker, it also impacts what gets 
> stored on disk.
> A solution might be to use the default block size, which is 64K in LZ4, 32K 
> in Snappy and 0.5K in GZIP. The downside is that this solution will require 
> more memory allocated outside of the buffer pool and hence users may need to 
> bump up their JVM heap size, especially for MirrorMakers. Using Snappy as an 
> example, it's an additional 2x32k per batch (as Snappy uses two buffers) and 
> one would expect at least one batch per partition. However, the number of 
> batches per partition can be much higher if the broker is slow to acknowledge 
> producer requests (depending on `buffer.memory`, `batch.size`, message size, 
> etc.).
> Given the above, there are a few things that could be done (potentially more 
> than one):
> 1) A configuration for the producer compression stream buffer size.
> 2) Allocate buffers from the buffer pool and pass them to the compression 
> library. This is possible with Snappy and we could adapt our LZ4 code. It's 
> not possible with GZIP, but it uses a very small buffer by default.
> 3) Close the existing `RecordBatch.records` when we create a new 
> `RecordBatch` for the `TopicPartition` instead of doing it during 
> `RecordAccumulator.drain`. This would mean that we would only retain 
> resources for one `RecordBatch` per partition, which would improve the worst 
> case scenario significantly.
> Note that we decided that this change was too risky for 0.10.0.0 and reverted 
> the original attempt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-58 - Make Log Compaction Point Configurable

2016-05-17 Thread Gwen Shapira
And Spark's implementation is another good reason to allow compaction lag.

I'm convinced :)

We need to decide:

1) Do we need just a .ms config, or anything else? Consumer lag is
measured (and monitored) in messages, so if we need this feature to
work in tandem with consumer lag monitoring, I think we need
.messages too.

2) Does this new configuration allow us to get rid of the cleaner.ratio config?
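
For concreteness, a sketch of what the two knobs could look like as topic-level
configs; both key names below are hypothetical, since KIP-58 had not settled
them at this point:

import java.util.Properties;

public class CompactionLagSketch {
    public static void main(String[] args) {
        Properties topicConfig = new Properties();
        // Time-based lag: messages younger than this stay in the uncompacted head.
        topicConfig.put("min.compaction.lag.ms", String.valueOf(24 * 60 * 60 * 1000L));
        // Count-based lag: pairs naturally with consumer lag monitored in messages.
        topicConfig.put("min.compaction.lag.messages", "1000000");
        System.out.println(topicConfig);
    }
}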

Gwen


On Tue, May 17, 2016 at 9:43 AM, Eric Wasserman
 wrote:
> James,
>
> Your pictures do an excellent job of illustrating my point.
>
> My mention of the additional "10's of minutes to hours" refers to how far 
> after the original target checkpoint (T1 in your diagram) one may need to go 
> to get to a checkpoint where all partitions of all topics are in the 
> uncompacted region of their respective logs. In terms of your diagram: the T3 
> transaction could have been written 10's of minutes to hours after T1, as that 
> was how much time it took all readers to get to T1.
>
>> You would not have to start over from the beginning in order to read to T3.
>
> While I agree this is technically true, in practice it could be very onerous 
> to actually do it. For example, we use the Kafka consumer that is part of the 
> Spark Streaming library to read table topics. It accepts a range of offsets 
> to read for each partition. Say we originally target ranges from offset 0 to 
> the offset of T1 for each topic+partition. There really is no way to have the 
> library arrive at T1 and then "keep going" to T3. What is worse, given Spark's 
> design, if you lost a worker during your calculations you would be in a 
> rather sticky position. Spark achieves resiliency not by data redundancy but 
> by keeping track of how to reproduce the transformations leading to a state. 
> In the face of a lost worker, Spark would try to re-read that portion of the 
> data on the lost worker from Kafka. However, in the interim compaction may 
> have moved past the reproducible checkpoint (T3) rendering the data 
> inconsistent. At best the entire calculation would need to start over 
> targeting some later transaction checkpoint.
>
> Needless to say with the proposed feature everything is quite simple. As long 
> as we set the compaction lag large enough we can be assured that T1 will 
> remain in the uncompacted region and thereby be reproducible. Thus reading 
> from 0 to the offsets in T1 will be sufficient for the duration of the 
> calculation.
>
> Eric
>
>


[GitHub] kafka pull request: KAFKA-3716: Validate all timestamps are not ne...

2016-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1393


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3716) Check against negative timestamps

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287195#comment-15287195
 ] 

ASF GitHub Bot commented on KAFKA-3716:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1393


> Check against negative timestamps
> -
>
> Key: KAFKA-3716
> URL: https://issues.apache.org/jira/browse/KAFKA-3716
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture, user-experience
> Fix For: 0.10.1.0
>
>
> Although currently we do not enforce any semantic meaning on the {{Long}} 
> typed timestamps, we are actually assuming them to be non-negative when 
> storing the timestamp in the windowed store. For example, in 
> {{RocksDBWindowStore}} we store the timestamp as part of the key and rely 
> on RocksDB's default lexicographic byte array comparator; hence, negative 
> long values stored in RocksDB will mess up the range search ordering.
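
A minimal sketch of why the default comparator breaks down for negative longs 
(illustrative only; the actual store key serializes more than the timestamp):

{code}
import java.nio.ByteBuffer;

public class TimestampOrderingDemo {
    static byte[] bytes(long ts) {
        return ByteBuffer.allocate(8).putLong(ts).array();
    }

    public static void main(String[] args) {
        byte[] neg = bytes(-1L); // leading byte 0xFF
        byte[] pos = bytes(1L);  // leading byte 0x00
        // RocksDB's default comparator compares bytes as unsigned values,
        // so -1L sorts *after* 1L and time-range scans return wrong ranges.
        int cmp = Integer.compare(Byte.toUnsignedInt(neg[0]), Byte.toUnsignedInt(pos[0]));
        System.out.println("-1L sorts after 1L? " + (cmp > 0)); // prints true
    }
}
{code}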



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3716) Check against negative timestamps

2016-05-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3716.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1393
[https://github.com/apache/kafka/pull/1393]

> Check against negative timestamps
> -
>
> Key: KAFKA-3716
> URL: https://issues.apache.org/jira/browse/KAFKA-3716
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture, user-experience
> Fix For: 0.10.1.0
>
>
> Although currently we do not enforce any semantic meaning on the {{Long}} 
> typed timestamps, we are actually assuming them to be non-negative when 
> storing the timestamp in the windowed store. For example, in 
> {{RocksDBWindowStore}} we store the timestamp as part of the key and rely 
> on RocksDB's default lexicographic byte array comparator; hence, negative 
> long values stored in RocksDB will mess up the range search ordering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-17 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian closed KAFKA-3219.
--

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through automatic 
> topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}
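
The arithmetic behind the failure is simple: the per-partition log directory is 
named "topic-partition", and a single path segment on common Linux filesystems 
is capped at 255 bytes, so a 254-char topic name overflows once the suffix is 
added. A sketch of the check:

{code}
public class TopicNameLengthDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 254; i++)
            sb.append('d');
        String logDirName = sb + "-0"; // the on-disk directory for partition 0
        // 256 > 255, so creating the log directory fails on e.g. ext4.
        System.out.println(logDirName.length() + " chars vs. a 255-byte segment limit");
    }
}
{code}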



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-58 - Make Log Compaction Point Configurable

2016-05-17 Thread Eric Wasserman
James,

Your pictures do an excellent job of illustrating my point. 

My mention of the additional "10's of minutes to hours" refers to how far after 
the original target checkpoint (T1 in your diagram) one may need to go to get to 
a checkpoint where all partitions of all topics are in the uncompacted region 
of their respective logs. In terms of your diagram: the T3 transaction could 
have been written 10's of minutes to hours after T1, as that was how much time 
it took all readers to get to T1.

> You would not have to start over from the beginning in order to read to T3.

While I agree this is technically true, in practice it could be very onerous to 
actually do it. For example, we use the Kafka consumer that is part of the 
Spark Streaming library to read table topics. It accepts a range of offsets to 
read for each partition. Say we originally target ranges from offset 0 to the 
offset of T1 for each topic+partition. There really is no way to have the 
library arrive at T1 and then "keep going" to T3. What is worse, given Spark's 
design, if you lost a worker during your calculations you would be in a rather 
sticky position. Spark achieves resiliency not by data redundancy but by 
keeping track of how to reproduce the transformations leading to a state. In 
the face of a lost worker, Spark would try to re-read that portion of the data 
on the lost worker from Kafka. However, in the interim compaction may have 
moved past the reproducible checkpoint (T3) rendering the data inconsistent. At 
best the entire calculation would need to start over targeting some later 
transaction checkpoint.

Needless to say with the proposed feature everything is quite simple. As long 
as we set the compaction lag large enough we can be assured that T1 will remain 
in the uncompacted region and thereby be reproducible. Thus reading from 0 to 
the offsets in T1 will be sufficient for the duration of the calculation.

Eric




[jira] [Assigned] (KAFKA-3554) Generate actual data with specific compression ratio and add multi-thread support in the ProducerPerformance tool.

2016-05-17 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-3554:
---

Assignee: Jiangjie Qin  (was: Dong Lin)

> Generate actual data with specific compression ratio and add multi-thread 
> support in the ProducerPerformance tool.
> --
>
> Key: KAFKA-3554
> URL: https://issues.apache.org/jira/browse/KAFKA-3554
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> Currently the ProducerPerformance always generates the payload with the same 
> bytes. This does not work well for testing compressed data because the 
> payload is extremely compressible no matter how big it is.
> We can make some changes to make it more useful for compressed messages. 
> Currently I am generating a payload containing integers from a given range. 
> By adjusting the range of the integers, we can get different compression 
> ratios. 
> API-wise, we can either let the user specify the integer range or the expected 
> compression ratio (we will do some probing to get the corresponding range for 
> the users).
> Besides that, in many cases it is useful to have multiple producer threads 
> when the producer threads themselves are the bottleneck. Admittedly, people 
> can run multiple ProducerPerformance instances to achieve a similar result, 
> but it is still different from the real case when people actually use the 
> producer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3719) Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too narrow

2016-05-17 Thread Balazs Kossovics (JIRA)
Balazs Kossovics created KAFKA-3719:
---

 Summary: Pattern regex 
org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too narrow
 Key: KAFKA-3719
 URL: https://issues.apache.org/jira/browse/KAFKA-3719
 Project: Kafka
  Issue Type: Bug
Reporter: Balazs Kossovics
Priority: Trivial


In our continuous integration environment the Kafka brokers run on hosts 
containing underscores in their names. The current regex incorrectly splits 
these names into host and port parts.

I could submit a pull request if someone confirms that this is indeed a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Bump system test ducktape dependency to...

2016-05-17 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/1397

MINOR: Bump system test ducktape dependency to 0.5.1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granders/kafka minor-increment-ducktape

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1397.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1397






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: MINOR: Replace Scala style guide PDF link...

2016-05-17 Thread RyanCoonan
Github user RyanCoonan commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/13#discussion_r63553295
  
--- Diff: coding-guide.html ---
@@ -19,7 +19,7 @@
 
 
 Scala
-We are following the style guide given <a href="http://davetron5000.github.com/scala-style/ScalaStyleGuide.pdf ">here</a> 
(though not perfectly). Below are some specifics worth noting:
+We are following the style guide given <a href="http://docs.scala-lang.org/style/ ">here</a> (though not perfectly). 
Below are some specifics worth noting:
--- End diff --

Removed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: MINOR: Replace Scala style guide PDF link...

2016-05-17 Thread ijuma
Github user ijuma commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/13#discussion_r63552659
  
--- Diff: coding-guide.html ---
@@ -19,7 +19,7 @@
 
 
 Scala
-We are following the style guide given <a href="http://davetron5000.github.com/scala-style/ScalaStyleGuide.pdf ">here</a> 
(though not perfectly). Below are some specifics worth noting:
+We are following the style guide given <a href="http://docs.scala-lang.org/style/ ">here</a> (though not perfectly). 
Below are some specifics worth noting:
--- End diff --

Please remove it. :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: MINOR: Fix typos and formatting in coding...

2016-05-17 Thread RyanCoonan
Github user RyanCoonan closed the pull request at:

https://github.com/apache/kafka-site/pull/14


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request: MINOR: Replace Scala style guide PDF link...

2016-05-17 Thread RyanCoonan
Github user RyanCoonan commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/13#discussion_r63551060
  
--- Diff: coding-guide.html ---
@@ -19,7 +19,7 @@
 
 
 Scala
-We are following the style guide given <a href="http://davetron5000.github.com/scala-style/ScalaStyleGuide.pdf ">here</a> 
(though not perfectly). Below are some specifics worth noting:
+We are following the style guide given <a href="http://docs.scala-lang.org/style/ ">here</a> (though not perfectly). 
Below are some specifics worth noting:
--- End diff --

Yes and no. Yes because it was already there, and I wasn't sure if there 
was some reason for it. No because I didn't choose for it to be there. I am 
somewhat removed from HTML-land, so I didn't know if this was some convention or 
something.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Work started] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3717 started by Ismael Juma.
--
> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286897#comment-15286897
 ] 

Ismael Juma commented on KAFKA-3717:


I had a chat with Grant and he said it was OK for me to pick this up.

> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3717) Support building aggregate javadoc for all project modules

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3717:
--

Assignee: Ismael Juma  (was: Grant Henke)

> Support building aggregate javadoc for all project modules
> --
>
> Key: KAFKA-3717
> URL: https://issues.apache.org/jira/browse/KAFKA-3717
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>
> If you run "./gradlew javadoc", you will only get JavaDoc for the High Level 
> Consumer. All the new clients are missing.
> See here: http://home.apache.org/~gwenshap/0.10.0.0-rc5/javadoc/
> I suggest fixing in 0.10.0 branch and in trunk, not rolling a new release 
> candidate, but updating our docs site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3704) Improve mechanism for compression stream block size selection in KafkaProducer

2016-05-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286887#comment-15286887
 ] 

Guozhang Wang commented on KAFKA-3704:
--

Thanks for the summary [~ijuma].

I think 2) solves the problem "cleanly" except for GZIP, while 3) still 
introduces extra memory out of controlled buffer pool, one block for each 
partition. 1) introduces a new config but does not necessarily control the 
total extra memory allocated out of buffer pool.

Personally I fell 3) is worth doing: originally I'm concerned it complicates 
the code by quite a lot, but after checking it once again I feel it may not be 
that worse compared with 2).

> Improve mechanism for compression stream block size selection in KafkaProducer
> --
>
> Key: KAFKA-3704
> URL: https://issues.apache.org/jira/browse/KAFKA-3704
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> As discovered in https://issues.apache.org/jira/browse/KAFKA-3565, the 
> current default block size (1K) used in Snappy and GZIP may cause a 
> sub-optimal compression ratio for Snappy, and hence reduce throughput. 
> Because we no longer recompress data in the broker, it also impacts what gets 
> stored on disk.
> A solution might be to use the default block size, which is 64K in LZ4, 32K 
> in Snappy and 0.5K in GZIP. The downside is that this solution will require 
> more memory allocated outside of the buffer pool and hence users may need to 
> bump up their JVM heap size, especially for MirrorMakers. Using Snappy as an 
> example, it's an additional 2x32k per batch (as Snappy uses two buffers) and 
> one would expect at least one batch per partition. However, the number of 
> batches per partition can be much higher if the broker is slow to acknowledge 
> producer requests (depending on `buffer.memory`, `batch.size`, message size, 
> etc.).
> Given the above, there are a few things that could be done (potentially more 
> than one):
> 1) A configuration for the producer compression stream buffer size.
> 2) Allocate buffers from the buffer pool and pass them to the compression 
> library. This is possible with Snappy and we could adapt our LZ4 code. It's 
> not possible with GZIP, but it uses a very small buffer by default.
> 3) Close the existing `RecordBatch.records` when we create a new 
> `RecordBatch` for the `TopicPartition` instead of doing it during 
> `RecordAccumulator.drain`. This would mean that we would only retain 
> resources for one `RecordBatch` per partition, which would improve the worst 
> case scenario significantly.
> Note that we decided that this change was too risky for 0.10.0.0 and reverted 
> the original attempt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Perf producer/consumers for compacted topics

2016-05-17 Thread Tom Crayford
Hi there,

As noted in the 0.10.0.0-RC4 release thread, we (Heroku Kafka) have been
doing extensive benchmarking of Kafka. In our case this is to help give
customers a good idea of the performance of our various configurations. For
this we orchestrate the Kafka `producer-perf.sh` and `consumer-perf.sh`
across multiple machines, which was relatively easy to do and very
successful (recently leading to a doc change and a good lesson about 0.10).

However, we're finding one thing missing from the current producer/consumer
perf tests, which is that there's no good perf testing on compacted topics.
Some folk will undoubtedly use compacted topics, so it would be extremely
helpful (I think) for the community to have benchmarks that test
performance on compacted topics. We're interested in working on this and
contributing it upstream, but are pretty unsure what such a test should
look like. One straw proposal is to adapt the existing producer/consumer
perf tests to work on a compacted topic, likely with an additional flag on
the producer that lets you choose how wide a key range to emit, whether it
should emit deletes (and how often to do so), and so on. Is there anything
more we could or should do there?
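
A straw-man producer loop for that flag set could look like the following;
keyRange and deleteRatio are invented names for the proposed options, not
existing tool flags:

import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompactedPerfSketch {
    public static void main(String[] args) {
        int keyRange = 10000;        // how many distinct keys to cycle through
        double deleteRatio = 0.05;   // fraction of sends that are tombstones
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Random random = new Random();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (long i = 0; i < 1000000; i++) {
                String key = Long.toString(i % keyRange); // bounded key range drives compaction
                String value = random.nextDouble() < deleteRatio ? null : ("v" + i); // null = delete marker
                producer.send(new ProducerRecord<>("compacted-perf", key, value));
            }
        }
    }
}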

We're happy writing the code here, and want to continue contributing back,
I'd just love a hand thinking about what perf tests for compacted topics
should look like.

Thanks

Tom Crayford
Heroku Kafka


[GitHub] kafka pull request: MINOR: Exclude `jline` and `netty` dependencie...

2016-05-17 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1396

MINOR: Exclude `jline` and `netty` dependencies in the `streams` project

These dependencies are unnecessary and they are acquired
transitively via the zkclient dependency. The approach
taken is copied from what we do in the `core` project.

Ewen did the hard work in figuring out why we have unexpected
additional dependencies since 0.9.x.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
exclude-jline-netty-deps-in-streams

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1396


commit 482b6c0504f62a8cbac99c095a34bb7737f4b252
Author: Ismael Juma 
Date:   2016-05-17T14:56:20Z

Exclude `jline` and `netty` dependencies in the `streams` project

These dependencies are unnecessary and they are acquired
transitively via the zkclient dependency. The approach
taken is copied from what we do in the `core` project.

Ewen did the hard work in figuring out why we have unexpected
additional dependencies since 0.9.x.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3418:
---
Fix Version/s: (was: 0.10.1.0)

> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3384) bin scripts may not be portable/POSIX compliant

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3384:
---
Fix Version/s: (was: 0.10.1.0)

> bin scripts may not be portable/POSIX compliant
> ---
>
> Key: KAFKA-3384
> URL: https://issues.apache.org/jira/browse/KAFKA-3384
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.0.0
>
>
> We may be using some important tools in a non-POSIX compliant and 
> non-portable way. In particular, we've discovered that we can sometimes 
> trigger this error:
> /usr/bin/kafka-server-stop: line 22: kill: SIGTERM: invalid signal 
> specification
> which looks like it is caused by invoking a command like {{kill -SIGTERM 
> }}. (This is a lightly modified version of {{kafka-server-stop.sh}}, but 
> nothing of relevance has been affected.)
> Googling seems to suggest that passing the signal in that way is not 
> compliant -- it's a shell extension. We're using {{/bin/sh}}, but that may 
> be aliased to other more liberal shells on some platforms. To be honest, I'm 
> not sure exactly what the requirements for triggering this are, since running 
> the command directly on the same host via an interactive shell still works, 
> but we are definitely limiting portability using the current approach.
> There are a couple of possible solutions:
> 1. Standardize on bash. This lets us be more permissive about the shell 
> features that we use. We're already using /bin/bash in the majority of scripts 
> anyway. 
> Might help us avoid a bunch of assumptions people make when bash is aliased 
> to sh: https://wiki.ubuntu.com/DashAsBinSh
> 2. Try to clean up scripts as we discover incompatibilities. The immediate 
> fix for this issue seems to be to use {{kill -s TERM}} instead of {{kill 
> -SIGTERM}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3451:
---
Fix Version/s: (was: 0.10.1.0)

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable. This may 
> not be perfect, and enhancements should be tracked under the KAFKA-1722 
> umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3439) Document possible exception thrown in public APIs

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3439:
---
Fix Version/s: (was: 0.10.1.0)

> Document possible exception thrown in public APIs
> -
>
> Key: KAFKA-3439
> URL: https://issues.apache.org/jira/browse/KAFKA-3439
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Eno Thereska
>  Labels: api, docs
> Fix For: 0.10.0.0
>
>
> Candidate interfaces include all the ones in "kstream", "processor" and 
> "state" packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3426) Improve protocol type errors when invalid sizes are received

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3426:
---
Fix Version/s: (was: 0.10.1.0)

> Improve protocol type errors when invalid sizes are received
> 
>
> Key: KAFKA-3426
> URL: https://issues.apache.org/jira/browse/KAFKA-3426
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We currently don't perform much validation on the size value read by the 
> protocol types. This means that we end up throwing exceptions like 
> `BufferUnderflowException`, `NegativeArraySizeException`, etc. `Schema.read` 
> catches these exceptions and adds some useful information like:
> {code}
> throw new SchemaException("Error reading field '" + fields[i].name + "': " +
>     (e.getMessage() == null ? e.getClass().getName() : e.getMessage()));
> {code}
> We could do even better by throwing a `SchemaException` with a more 
> user-friendly message.
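
For illustration, validating the decoded size against the bytes actually available lets us fail with a readable message before any allocation is attempted; the exception class below is a local stand-in, not Kafka's own:

{code}
import java.nio.ByteBuffer;

public class SizeValidation {
    /** Local stand-in for the protocol's SchemaException. */
    static class SchemaException extends RuntimeException {
        SchemaException(String message) { super(message); }
    }

    /** Reads a length-prefixed byte array, failing fast on bogus sizes. */
    static byte[] readBytes(ByteBuffer buffer) {
        int size = buffer.getInt();
        if (size < 0)
            throw new SchemaException("Bytes size " + size + " cannot be negative");
        if (size > buffer.remaining())
            throw new SchemaException("Bytes size " + size + " is larger than the "
                    + buffer.remaining() + " bytes remaining in the buffer");
        byte[] bytes = new byte[size];
        buffer.get(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(1024); // the prefix claims 1024 bytes, but no payload follows
        buf.flip();
        readBytes(buf);   // throws SchemaException with a readable message
    }
}
{code}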



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3434:
---
Fix Version/s: (was: 0.10.1.0)

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.
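
A sketch of the convenience shape being asked for, written against the 0.10 constructor; the sentinel values (-1 for timestamp, checksum, and serialized sizes) are assumptions for illustration, not necessarily what the patch will use:

{code}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.record.TimestampType;

public class LegacyRecords {
    /** Old-style (topic, partition, offset, key, value) construction. */
    static <K, V> ConsumerRecord<K, V> legacyRecord(String topic, int partition,
                                                    long offset, K key, V value) {
        return new ConsumerRecord<>(topic, partition, offset,
                -1L, TimestampType.NO_TIMESTAMP_TYPE, // no timestamp recorded
                -1L,                                  // unknown checksum
                -1, -1,                               // unknown serialized sizes
                key, value);
    }

    public static void main(String[] args) {
        System.out.println(legacyRecord("my-topic", 0, 42L, "k", "v"));
    }
}
{code}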



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3382) Add system test for ReplicationVerificationTool

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3382:
---
Fix Version/s: (was: 0.10.1.0)

> Add system test for ReplicationVerificationTool
> ---
>
> Key: KAFKA-3382
> URL: https://issues.apache.org/jira/browse/KAFKA-3382
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3381) Add system test for SimpleConsumerShell

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3381:
---
Fix Version/s: (was: 0.10.1.0)

> Add system test for SimpleConsumerShell
> ---
>
> Key: KAFKA-3381
> URL: https://issues.apache.org/jira/browse/KAFKA-3381
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1720) [Renaming / Comments] Delayed Operations

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1720:
---
Fix Version/s: (was: 0.10.1.0)
   0.9.0.0

> [Renaming / Comments] Delayed Operations
> 
>
> Key: KAFKA-1720
> URL: https://issues.apache.org/jira/browse/KAFKA-1720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1720.patch, KAFKA-1720_2014-10-31_17:21:46.patch, 
> KAFKA-1720_2014-12-03_13:34:13.patch
>
>
> Now that KAFKA-1583 is checked in, we had better rename the delayed requests 
> to delayed operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1483) Split Brain about Leader Partitions

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1483:
---
Fix Version/s: (was: 0.10.1.0)
   0.8.2.0

> Split Brain about Leader Partitions
> ---
>
> Key: KAFKA-1483
> URL: https://issues.apache.org/jira/browse/KAFKA-1483
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1483.patch, KAFKA-1483_2014-07-16_11:07:44.patch
>
>
> Today in the server there are two places storing the leader partition info:
> 1) leaderPartitions list in the ReplicaManager.
> 2) leaderBrokerIdOpt in the Partition.
> 1) is used as the ground truth to decide if the server is the current leader 
> for serving requests; 2) is used as the ground truth for reporting leader 
> count metrics, etc., and for the background ISR-shrinking thread to decide 
> which partitions to check. There is a risk that these two ground-truth caches 
> become inconsistent, and we had better make only one of them the ground truth.
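
One shape the fix could take, sketched with invented types purely to illustrate the idea: keep the partition map as the sole ground truth and derive the leader set on demand, so there is no second cache to drift.

{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class LeaderView {
    /** Minimal stand-in for the broker's per-partition state. */
    static class Partition {
        volatile Integer leaderBrokerId; // null while leadership is unknown
        Partition(Integer leaderBrokerId) { this.leaderBrokerId = leaderBrokerId; }
        boolean isLeader(int localBrokerId) {
            Integer leader = leaderBrokerId;
            return leader != null && leader == localBrokerId;
        }
    }

    final int localBrokerId;
    final Map<String, Partition> allPartitions = new ConcurrentHashMap<>();

    LeaderView(int localBrokerId) { this.localBrokerId = localBrokerId; }

    /** Derived on demand rather than cached: one ground truth, nothing to drift. */
    Set<String> leaderPartitions() {
        return allPartitions.entrySet().stream()
                .filter(e -> e.getValue().isLeader(localBrokerId))
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }
}
{code}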



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1536) Change the status of the JIRA to "Patch Available" in the kafka-review-tool

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1536:
---
Fix Version/s: (was: 0.10.1.0)

> Change the status of the JIRA to "Patch Available" in the kafka-review-tool
> ---
>
> Key: KAFKA-1536
> URL: https://issues.apache.org/jira/browse/KAFKA-1536
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Manikumar Reddy
> Attachments: KAFKA-1536.patch, KAFKA-1536.patch, 
> KAFKA-1536_2014-08-06_19:44:49.patch
>
>
> When using the kafka-review-tool to upload a patch to a certain JIRA, the 
> status remains "OPEN". This makes searching for JIRAs that need review a bit 
> hard. It would be better to make the tool also change the status of the JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3435:
---
Fix Version/s: (was: 0.10.1.0)

> Remove `Unstable` annotation from new Java Consumer
> ---
>
> Key: KAFKA-3435
> URL: https://issues.apache.org/jira/browse/KAFKA-3435
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> As part of the vote for "KIP-45 - Standardize all client sequence interaction 
> on j.u.Collection", the underlying assumption is that we won't break things 
> going forward. We should remove the `Unstable` annotation to make that clear.
> cc [~hachikuji]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1445) New Producer should send all partitions that have non-empty batches when one of them is ready

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1445:
---
Fix Version/s: (was: 0.10.1.0)
   0.8.2.0

> New Producer should send all partitions that have non-empty batches when one 
> of them is ready
> 
>
> Key: KAFKA-1445
> URL: https://issues.apache.org/jira/browse/KAFKA-1445
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1445.patch, KAFKA-1445.patch, 
> KAFKA-1445_2014-05-13_11:25:13.patch, KAFKA-1445_2014-05-14_16:24:25.patch, 
> KAFKA-1445_2014-05-14_16:28:06.patch, KAFKA-1445_2014-05-15_15:15:37.patch, 
> KAFKA-1445_2014-05-15_15:19:10.patch
>
>
> One difference between the new producer and the old producer is that in the 
> new producer the linger time is per partition, instead of global. Therefore, 
> when traffic is low, the sender will likely expire partitions one by one 
> and send lots of small requests, each containing only a few partitions with 
> little data, resulting in a largely increased request rate.
> One solution would be to let the sender select all partitions that have 
> non-empty batches when one of them is ready.
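
A toy model of the proposed selection rule, with all names invented: once any batch has lingered long enough, drain every partition that has data instead of expiring and sending them one by one.

{code}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DrainAll {
    static class Batch {
        final long createdMs;
        Batch(long createdMs) { this.createdMs = createdMs; }
    }

    static List<String> partitionsToSend(Map<String, Deque<Batch>> batches,
                                         long lingerMs, long nowMs) {
        boolean anyReady = batches.values().stream().anyMatch(
                q -> !q.isEmpty() && nowMs - q.peekFirst().createdMs >= lingerMs);
        List<String> result = new ArrayList<>();
        if (!anyReady)
            return result;
        // One ready batch makes the request worthwhile: piggy-back every
        // partition with pending data onto the same request.
        for (Map.Entry<String, Deque<Batch>> e : batches.entrySet())
            if (!e.getValue().isEmpty())
                result.add(e.getKey());
        return result;
    }

    public static void main(String[] args) {
        Map<String, Deque<Batch>> batches = new HashMap<>();
        batches.put("t-0", new ArrayDeque<>(Arrays.asList(new Batch(0))));
        batches.put("t-1", new ArrayDeque<>(Arrays.asList(new Batch(90))));
        // t-0 has exceeded linger; t-1 piggy-backs, so both are printed.
        System.out.println(partitionsToSend(batches, 100, 120));
    }
}
{code}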



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3488) commitAsync() fails if metadata update creates new SASL/SSL connection

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3488:
---
Fix Version/s: (was: 0.10.1.0)

> commitAsync() fails if metadata update creates new SASL/SSL connection
> --
>
> Key: KAFKA-3488
> URL: https://issues.apache.org/jira/browse/KAFKA-3488
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Sasl/SslConsumerTest.testSimpleConsumption() fails intermittently with a 
> failure in {{commitAsync()}}. The exception stack trace shows:
> {quote}
> kafka.api.SaslPlaintextConsumerTest.testSimpleConsumption FAILED
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.api.BaseConsumerTest.awaitCommitCallback(BaseConsumerTest.scala:340)
>   at 
> kafka.api.BaseConsumerTest.testSimpleConsumption(BaseConsumerTest.scala:85)
> {quote}
> I have recreated this with some additional trace. The tests run with a very 
> small metadata expiry interval, triggering metadata updates quite often. If a 
> metadata request immediately following a {{commitAsync()}} call creates a new 
> SSL/SASL connection, {{ConsumerNetworkClient.poll}} returns to process the 
> connection handshake packets. Since {{ConsumerNetworkClient.poll}} discards 
> all unsent packets before returning from poll, this can result in the failure 
> of the commit - the callback is invoked with {{SendFailedException}}.
> I understand that {{ConsumerNetworkClient.poll()}} discards unsent packets 
> rather than buffer them to keep the code simple. And perhaps it is ok to fail 
> {{commitAsync}} occasionally since the callback does indicate that the caller 
> should retry. But it feels like an unnecessary limitation that requires error 
> handling in client applications when there are no real failures and makes it 
> much harder to test reliably. Since special handling to fix issues like 
> KAFKA-3412 and KAFKA-2672 adds more complexity to the code anyway, and 
> failures that affect only SSL/SASL are much harder to debug, it may be 
> worth considering improving this behaviour.
> I will see if I can submit a PR for the specific issue I was seeing with the 
> impact of handshakes on {{commitAsync()}}, but I will be interested in views 
> on improving the logic in {{ConsumerNetworkClient}}.
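
Until the client behaviour improves, the error handling the report describes looks roughly like this, assuming the failure surfaces as a retriable exception (a sketch only; a real application would also bound the retries):

{code}
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RetriableException;

public class CommitWithRetry {
    static void commitWithRetry(final KafkaConsumer<?, ?> consumer) {
        consumer.commitAsync(new OffsetCommitCallback() {
            @Override
            public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets,
                                   Exception exception) {
                if (exception instanceof RetriableException) {
                    // e.g. the send was dropped while poll() serviced an
                    // SSL/SASL handshake; just issue the commit again.
                    consumer.commitAsync(this);
                } else if (exception != null) {
                    // Non-retriable failures should surface to the caller.
                    throw new IllegalStateException("Commit failed", exception);
                }
            }
        });
    }
}
{code}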



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3464) Connect security system tests

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3464:
---
Fix Version/s: (was: 0.10.1.0)

> Connect security system tests
> -
>
> Key: KAFKA-3464
> URL: https://issues.apache.org/jira/browse/KAFKA-3464
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.0.0
>
>
> We need to validate that Connect can actually work with security enabled. 
> System tests can easily cover this since they will be a small modification of 
> existing connect tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3483) Restructure ducktape tests to simplify running subsets of tests

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3483:
---
Fix Version/s: (was: 0.10.1.0)

> Restructure ducktape tests to simplify running subsets of tests
> ---
>
> Key: KAFKA-3483
> URL: https://issues.apache.org/jira/browse/KAFKA-3483
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> Provides a convenient way of running ducktape tests for a single component 
> (core, connect, streams, etc.). It also separates tests from benchmarks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3490) Multiple version support for ducktape performance tests

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3490:
---
Fix Version/s: (was: 0.10.1.0)

> Multiple version support for ducktape performance tests
> ---
>
> Key: KAFKA-3490
> URL: https://issues.apache.org/jira/browse/KAFKA-3490
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> To verify the performance impact of changes, it is very handy to be able to 
> run ducktape performance tests across multiple Kafka versions. Luckily 
> [~geoffra] has done most of the work for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3495) `NetworkClient.blockingSendAndReceive` should rely on requestTimeout

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3495:
---
Fix Version/s: (was: 0.10.1.0)

> `NetworkClient.blockingSendAndReceive` should rely on requestTimeout
> 
>
> Key: KAFKA-3495
> URL: https://issues.apache.org/jira/browse/KAFKA-3495
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> `NetworkClient.blockingSendAndReceive` method should rely on requestTimeout 
> instead of having its own timeout logic. Currently, we sometimes get an 
> exception via the requestTimeout logic and other times we get it via the 
> `blockingSendAndReceive` timeout logic.
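
The suggestion, sketched with invented stand-in types: block by polling until the client itself completes or expires the request, so the request timeout becomes the only timeout in play.

{code}
import java.util.List;

public class SingleTimeout {
    /** Invented stand-ins for the client plumbing, for illustration only. */
    interface Response {
        boolean matches(long correlationId);
    }

    interface Client {
        void send(long correlationId, byte[] payload);
        /** Returns completed responses; expires in-flight requests itself. */
        List<Response> poll(long pollTimeoutMs);
    }

    /** No local deadline: the client's request timeout is the single authority. */
    static Response blockingSendAndReceive(Client client, long correlationId,
                                           byte[] payload) {
        client.send(correlationId, payload);
        while (true) {
            for (Response response : client.poll(Long.MAX_VALUE))
                if (response.matches(correlationId))
                    return response; // success, or a timeout raised by the client
        }
    }
}
{code}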



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3505) Set curRecord in punctuate() functions

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3505:
---
Fix Version/s: (was: 0.10.1.0)

> Set curRecord in punctuate() functions
> --
>
> Key: KAFKA-3505
> URL: https://issues.apache.org/jira/browse/KAFKA-3505
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.0.0
>
>
> The punctuate() function in a processor or transformer needs to be handled a 
> bit differently from process(), since it can generate new records to pass 
> through the topology from anywhere in the topology, whereas in the latter 
> case a record is always polled from Kafka and passed in via the source 
> processors.
> Today, because we do not set the curRecord correctly, calls to timestamp() / 
> topic() / etc. actually trigger a KafkaStreamsException.
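
For reference, the shape of a punctuate() that emits records with no polled record behind them, which is exactly the case needing the record-context bookkeeping (a sketch against the 0.10 Processor API; the names are made up):

{code}
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class CountEmitter implements Processor<String, Long> {
    private ProcessorContext context;
    private long count = 0;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        context.schedule(60_000); // request punctuate() every minute of stream time
    }

    @Override
    public void process(String key, Long value) {
        count += value; // records polled from Kafka flow through here
    }

    @Override
    public void punctuate(long timestamp) {
        // No "current record" exists here: the record below is created by the
        // processor itself, so the framework must set up the record context
        // before calls like context.timestamp() can work.
        context.forward("count", count);
        count = 0;
    }

    @Override
    public void close() {}
}
{code}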



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3506) Kafka Connect Task Restart API

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3506:
---
Fix Version/s: (was: 0.10.1.0)

> Kafka Connect Task Restart API
> --
>
> Key: KAFKA-3506
> URL: https://issues.apache.org/jira/browse/KAFKA-3506
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> This covers the connector and task restart APIs as documented on KIP-52: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs.
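
Assuming the endpoint paths proposed in KIP-52, a restart is a plain POST against the worker's REST interface; the connector name and port below are illustrative:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class RestartConnector {
    public static void main(String[] args) throws Exception {
        // KIP-52 proposes:
        //   POST /connectors/{name}/restart
        //   POST /connectors/{name}/tasks/{taskId}/restart
        URL url = new URL("http://localhost:8083/connectors/my-connector/restart");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        int code = conn.getResponseCode(); // expect a 2xx status on success
        System.out.println("Restart returned HTTP " + code);
        conn.disconnect();
    }
}
{code}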



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3520) System tests of config validate and list connectors REST APIs

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3520:
---
Fix Version/s: (was: 0.10.1.0)

> System tests of config validate and list connectors REST APIs
> -
>
> Key: KAFKA-3520
> URL: https://issues.apache.org/jira/browse/KAFKA-3520
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Liquan Pei
>Assignee: Liquan Pei
>  Labels: test
> Fix For: 0.10.0.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3508:
---
Fix Version/s: (was: 0.10.1.0)

> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[jira] [Updated] (KAFKA-3307) Add ApiVersion request/response and server side handling.

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3307:
---
Fix Version/s: (was: 0.10.1.0)

> Add ApiVersion request/response and server side handling.
> -
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3219:
---
Fix Version/s: (was: 0.10.1.0)

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.0.0
>
>
> It seems the broker doesn't like topic names of 254 chars or more when they 
> are created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.<init>(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}
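
The likely mechanics, sketched for illustration: the partition's log directory is named <topic>-<partition>, and most Linux filesystems cap a single path component at 255 bytes, so a 254-character topic already overflows for partition 0.

{code}
public class TopicNameLimit {
    // Most Linux filesystems cap a single path component at 255 bytes.
    static final int MAX_COMPONENT = 255;

    /** Length of the log directory name derived for a partition. */
    static int logDirNameLength(String topic, int partition) {
        return (topic + "-" + partition).length();
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 254; i++)
            sb.append('d');
        int len = logDirNameLength(sb.toString(), 0);
        // Prints "256 > 255: true" -- the directory name is already too long,
        // which is the kind of failure the stack trace above runs into.
        System.out.println(len + " > " + MAX_COMPONENT + ": " + (len > MAX_COMPONENT));
    }
}
{code}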



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2930) Update references to ZooKeeper in the docs

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2930:
---
Fix Version/s: (was: 0.10.1.0)

> Update references to ZooKeeper in the docs
> --
>
> Key: KAFKA-2930
> URL: https://issues.apache.org/jira/browse/KAFKA-2930
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.10.0.0
>
>
> Information about ZooKeeper in the ops doc is stale: it refers to branch 3.3, 
> while Kafka is already using branch 3.4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2910) Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2910:
---
Fix Version/s: (was: 0.10.1.0)

> Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl
> 
>
> Key: KAFKA-2910
> URL: https://issues.apache.org/jira/browse/KAFKA-2910
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> {code}
> java.lang.SecurityException: zkEnableSecureAcls is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:265)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:143)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:66)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$IntegrationTestHarness$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:58)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$EndToEndAuthorizationTest$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.EndToEndAuthorizationTest$class.setUp(EndToEndAuthorizationTest.scala:141)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.setUp(SslEndToEndAuthorizationTest.scala:24)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Updated] (KAFKA-2844) Use different keyTab for client and server in SASL tests

2016-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2844:
---
Fix Version/s: (was: 0.10.1.0)

> Use different keyTab for client and server in SASL tests
> 
>
> Key: KAFKA-2844
> URL: https://issues.apache.org/jira/browse/KAFKA-2844
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We currently use the same keyTab, which could hide problems in the 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

