[jira] [Updated] (KAFKA-3645) NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a secure env

2016-06-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3645:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1301
[https://github.com/apache/kafka/pull/1301]

> NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a 
> secure env
> --
>
> Key: KAFKA-3645
> URL: https://issues.apache.org/jira/browse/KAFKA-3645
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> The host and port entries under /brokers/ids/ get filled in only for the 
> PLAINTEXT security protocol. For other protocols the host is null and the 
> actual endpoint is under "endpoints". This causes an NPE when running the 
> consumer group and offset checker scripts in a kerberized environment.
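> A minimal sketch of the defensive lookup this calls for (not the actual 
> patch; endpointFor and the pre-parsed registration Map are hypothetical):
> {code}
> // Resolve (host, port) from a parsed /brokers/ids/<id> registration map.
> // Entries under "endpoints" look like "SASL_PLAINTEXT://broker1:9093".
> def endpointFor(reg: Map[String, Any], protocol: String): Option[(String, Int)] = {
>   val direct = for {
>     host <- reg.get("host").collect { case h: String => h } // skips a null host
>     port <- reg.get("port").collect { case p: Int => p }
>   } yield (host, port)
>   direct.orElse {
>     reg.get("endpoints").collect { case eps: Seq[_] => eps }.flatMap {
>       _.collectFirst { case uri: String if uri.startsWith(protocol + "://") =>
>         val Array(host, port) = uri.stripPrefix(protocol + "://").split(":")
>         (host, port.toInt)
>       }
>     }
>   }
> }
> {code}
> On a SASL- or SSL-only cluster only the endpoints branch can succeed, which is 
> why code that reads "host" unconditionally hits the NPE above.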



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1301: KAFKA-3645: Fix ConsumerGroupCommand and ConsumerO...

2016-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1301


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3645) NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a secure env

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315398#comment-15315398
 ] 

ASF GitHub Bot commented on KAFKA-3645:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1301


> NPE in ConsumerGroupCommand and ConsumerOffsetChecker when running in a 
> secure env
> --
>
> Key: KAFKA-3645
> URL: https://issues.apache.org/jira/browse/KAFKA-3645
> Project: Kafka
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> The host and port entries under /brokers/ids/ get filled in only for the 
> PLAINTEXT security protocol. For other protocols the host is null and the 
> actual endpoint is under "endpoints". This causes an NPE when running the 
> consumer group and offset checker scripts in a kerberized environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #676

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3645; Fix ConsumerGroupCommand and ConsumerOffsetChecker to

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ff300c9d4f45e4a355db11258965c3a3a6f6bbf7 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ff300c9d4f45e4a355db11258965c3a3a6f6bbf7
 > git rev-list 49ddc897b8feda9c4786d5bcd03814b91ede7124 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7052546416676945397.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 17.9 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6177183151571764050.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 
-Dorg.gradle.project.testLoggingEvents=started,passed,skipped,failed clean 
jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 239
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
correspo

Build failed in Jenkins: kafka-trunk-jdk7 #1339

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3645; Fix ConsumerGroupCommand and ConsumerOffsetChecker to

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ff300c9d4f45e4a355db11258965c3a3a6f6bbf7 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ff300c9d4f45e4a355db11258965c3a3a6f6bbf7
 > git rev-list 49ddc897b8feda9c4786d5bcd03814b91ede7124 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3263838737558450073.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 20.792 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3140610573703035577.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 
-Dorg.gradle.project.testLoggingEvents=started,passed,skipped,failed clean 
jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 239
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 21.384 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request #1453: KAFKA-3561: Auto create through topic for KStream ...

2016-06-04 Thread dguy
Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/1453


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3561) Auto create through topic for KStream aggregation and join

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315545#comment-15315545
 ] 

ASF GitHub Bot commented on KAFKA-3561:
---

Github user dguy closed the pull request at:

https://github.com/apache/kafka/pull/1453


> Auto create through topic for KStream aggregation and join
> --
>
> Key: KAFKA-3561
> URL: https://issues.apache.org/jira/browse/KAFKA-3561
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: api
> Fix For: 0.10.1.0
>
>
> For KStream.join / aggregateByKey operations that require the streams to be 
> partitioned on the record key, today users must repartition themselves 
> via the "through" call:
> {code}
> stream1 = builder.stream("topic1");
> stream2 = builder.stream("topic2");
> stream3 = stream1.map(/* set the right key for join */).through("topic3");
> stream4 = stream2.map(/* set the right key for join */).through("topic4");
> stream3.join(stream4, ..)
> {code}
> This pattern can actually be handled by the Streams DSL itself instead of 
> requiring users to spell it out themselves: users can just set the right key 
> (see KAFKA-3430) and then call join, which will be translated by adding an 
> internal topic for repartitioning, as sketched below.
> A separate problem is that today, if users do not call "through" after setting 
> a new key, the aggregation result will not be correct: the aggregation is based 
> on key B while the source partitions are partitioned by key A, so each 
> task only gets a partial aggregation for each key. This is not 
> validated in the DSL today. We should do both the auto-translation and the 
> validation.
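> A sketch of the proposed translation (the internal topic names below are 
> illustrative; the DSL would generate them):
> {code}
> // What the user would write once the key is set correctly (see KAFKA-3430):
> stream1.map(/* set the right key for join */)
>        .join(stream2.map(/* set the right key for join */), ..)
> // What the DSL would effectively execute, inserting internal repartition
> // topics before the join:
> stream1.map(..).through("<appId>-repartition-1")
>        .join(stream2.map(..).through("<appId>-repartition-2"), ..)
> {code}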



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3768) Replace all pattern match on boolean value by if/else block.

2016-06-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3768.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1445
[https://github.com/apache/kafka/pull/1445]

> Replace all pattern match on boolean value by if/else block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Priority: Minor
> Fix For: 0.10.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> {code:title=Comparasion.scala|borderStyle=solid}
> class Comparasion {
>   def method1(flag: Boolean): String = {
>     flag match {
>       case true => "TRUE"
>       case false => "FALSE"
>     }
>   }
>   def method2(flag: Boolean): String = {
>     if (flag) {
>       "TRUE"
>     } else {
>       "FALSE"
>     }
>   }
> }
> {code}
> Byte code comparison between method1 and method2:
> javap -c Comparasion
> {code:title=Comparasion.class|borderStyle=solid}
> Compiled from "Comparasion.scala"
> public class Comparasion {
>   public java.lang.String method1(boolean);
>     Code:
>        0: iload_1
>        1: istore_2
>        2: iconst_1
>        3: iload_2
>        4: if_icmpne     13
>        7: ldc           #9  // String TRUE
>        9: astore_3
>       10: goto          21
>       13: iconst_0
>       14: iload_2
>       15: if_icmpne     23
>       18: ldc           #11 // String FALSE
>       20: astore_3
>       21: aload_3
>       22: areturn
>       23: new           #13 // class scala/MatchError
>       26: dup
>       27: iload_2
>       28: invokestatic  #19 // Method scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
>       31: invokespecial #23 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
>       34: athrow
>   public java.lang.String method2(boolean);
>     Code:
>        0: iload_1
>        1: ifeq          9
>        4: ldc           #9  // String TRUE
>        6: goto          11
>        9: ldc           #11 // String FALSE
>       11: areturn
>   public Comparasion();
>     Code:
>        0: aload_0
>        1: invokespecial #33 // Method java/lang/Object."<init>":()V
>        4: return
> }
> {code}
> method1 has 23 lines of bytecode while method2 has only 6. A pattern match 
> on a boolean is more expensive than an if/else block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1445: KAFKA-3768: Replace all pattern match on boolean v...

2016-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1445


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3768) Replace all pattern match on boolean value by if/else block.

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315625#comment-15315625
 ] 

ASF GitHub Bot commented on KAFKA-3768:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1445


> Replace all pattern match on boolean value by if/else block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Priority: Minor
> Fix For: 0.10.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> {code:title=Comparasion.scala|borderStyle=solid}
> class Comparasion {
>   def method1(flag: Boolean): String = {
>     flag match {
>       case true => "TRUE"
>       case false => "FALSE"
>     }
>   }
>   def method2(flag: Boolean): String = {
>     if (flag) {
>       "TRUE"
>     } else {
>       "FALSE"
>     }
>   }
> }
> {code}
> Byte code comparison between method1 and method2:
> javap -c Comparasion
> {code:title=Comparasion.class|borderStyle=solid}
> Compiled from "Comparasion.scala"
> public class Comparasion {
>   public java.lang.String method1(boolean);
>     Code:
>        0: iload_1
>        1: istore_2
>        2: iconst_1
>        3: iload_2
>        4: if_icmpne     13
>        7: ldc           #9  // String TRUE
>        9: astore_3
>       10: goto          21
>       13: iconst_0
>       14: iload_2
>       15: if_icmpne     23
>       18: ldc           #11 // String FALSE
>       20: astore_3
>       21: aload_3
>       22: areturn
>       23: new           #13 // class scala/MatchError
>       26: dup
>       27: iload_2
>       28: invokestatic  #19 // Method scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
>       31: invokespecial #23 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
>       34: athrow
>   public java.lang.String method2(boolean);
>     Code:
>        0: iload_1
>        1: ifeq          9
>        4: ldc           #9  // String TRUE
>        6: goto          11
>        9: ldc           #11 // String FALSE
>       11: areturn
>   public Comparasion();
>     Code:
>        0: aload_0
>        1: invokespecial #33 // Method java/lang/Object."<init>":()V
>        4: return
> }
> {code}
> method1 has 23 lines of bytecode while method2 has only 6. A pattern match 
> on a boolean is more expensive than an if/else block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3768) Replace all pattern match on boolean value by if/else block.

2016-06-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3768:
---
Assignee: Satendra Kumar

> Replace all pattern match on boolean value by if/else block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Assignee: Satendra Kumar
>Priority: Minor
> Fix For: 0.10.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> {code:title=Comparasion.scala|borderStyle=solid}
> class Comparasion {
>   def method1(flag: Boolean): String = {
>     flag match {
>       case true => "TRUE"
>       case false => "FALSE"
>     }
>   }
>   def method2(flag: Boolean): String = {
>     if (flag) {
>       "TRUE"
>     } else {
>       "FALSE"
>     }
>   }
> }
> {code}
> Byte code comparison between method1 and method2:
> javap -c Comparasion
> {code:title=Comparasion.class|borderStyle=solid}
> Compiled from "Comparasion.scala"
> public class Comparasion {
>   public java.lang.String method1(boolean);
>     Code:
>        0: iload_1
>        1: istore_2
>        2: iconst_1
>        3: iload_2
>        4: if_icmpne     13
>        7: ldc           #9  // String TRUE
>        9: astore_3
>       10: goto          21
>       13: iconst_0
>       14: iload_2
>       15: if_icmpne     23
>       18: ldc           #11 // String FALSE
>       20: astore_3
>       21: aload_3
>       22: areturn
>       23: new           #13 // class scala/MatchError
>       26: dup
>       27: iload_2
>       28: invokestatic  #19 // Method scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
>       31: invokespecial #23 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
>       34: athrow
>   public java.lang.String method2(boolean);
>     Code:
>        0: iload_1
>        1: ifeq          9
>        4: ldc           #9  // String TRUE
>        6: goto          11
>        9: ldc           #11 // String FALSE
>       11: areturn
>   public Comparasion();
>     Code:
>        0: aload_0
>        1: invokespecial #33 // Method java/lang/Object."<init>":()V
>        4: return
> }
> {code}
> method1 has 23 lines of bytecode while method2 has only 6. A pattern match 
> on a boolean is more expensive than an if/else block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1340

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3768; Replace all pattern match on boolean value by if/else 
block.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ab356060665b3b6502c7d531366b26e1e0f48f9c 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ab356060665b3b6502c7d531366b26e1e0f48f9c
 > git rev-list ff300c9d4f45e4a355db11258965c3a3a6f6bbf7 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5812625058912828257.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 26.286 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson658450314620942759.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 
-Dorg.gradle.project.testLoggingEvents=started,passed,skipped,failed clean 
jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 239
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 18.241 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2016-06-04 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315665#comment-15315665
 ] 

Ewen Cheslack-Postava commented on KAFKA-1995:
--

[~johnlon] JMS supports acknowledged messages. Message IDs and correlation IDs 
could potentially be used as offsets. Given some other guarantees, offsets 
don't necessarily need to be comparable and the source system doesn't need to 
be able to seek to arbitrary offsets. In the case of JMS, I think you could do 
something like use the message ID as the offset and if a Connect offset commit 
occurs but you don't have a chance to ack that block of messages, upon resuming 
you could consume messages, ignoring anything until you see the ID from the 
last commit, then start producing data. This minimizes the number of duplicates 
you might see in the destination system as long as you don't ack messages until 
Connect offsets have been committed. Alternatively, opt to ignore offsets and 
offset commit entirely and rely only on commitRecord 
(https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java#L84)
 to trigger acks. Upon faults you can still see duplicates, but since JMS has 
fine-grained acking and effectively tracks offsets for you, you may not even 
need Connect's offset tracking support. That support is included for systems 
that have no other place to store offsets (e.g. if you're reading off a 
database's commit log file), but it isn't required that every connector utilize 
it.

Strictly speaking you can also create a connector that doesn't use these 
features, you just won't have any guarantees about data delivery in the case of 
faults.
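
For what it's worth, the commitRecord approach can be sketched roughly like 
this (Scala; the class, the pending map, and the "jms.message.id" offset key 
are hypothetical illustration, not a real connector):
{code}
import java.util.{List => JList, Map => JMap}
import java.util.concurrent.ConcurrentHashMap
import javax.jms.Message
import org.apache.kafka.connect.source.{SourceRecord, SourceTask}

class JmsSourceTask extends SourceTask {
  // Filled in by poll(): JMS message ID -> message awaiting acknowledgement.
  private val pending = new ConcurrentHashMap[String, Message]()

  // Connect invokes this once the record has been handed to the producer, so
  // acking here bounds duplicates to the records in flight at failure time.
  override def commitRecord(record: SourceRecord): Unit = {
    val msgId = record.sourceOffset().get("jms.message.id").asInstanceOf[String]
    // With a CLIENT_ACKNOWLEDGE session, acknowledge() also acks earlier
    // messages delivered on the same session.
    Option(pending.remove(msgId)).foreach(_.acknowledge())
  }

  // The JMS receive loop and lifecycle are elided from this sketch.
  override def poll(): JList[SourceRecord] = null
  override def start(props: JMap[String, String]): Unit = ()
  override def stop(): Unit = ()
  override def version(): String = "sketch"
}
{code}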

> JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
> hit Kafka)
> 
>
> Key: KAFKA-1995
> URL: https://issues.apache.org/jira/browse/KAFKA-1995
> Project: Kafka
>  Issue Type: Wish
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rekha Joshi
>
> Kafka is a great alternative to JMS, providing high performance and throughput 
> as a scalable, distributed pub-sub/commit-log service.
> However, there will always be traditional systems running on JMS.
> Rather than rewriting them, it would be great if we just had an inbuilt 
> JMSAdaptor/JMSProxy/JMSBridge by which a client can speak JMS but hit Kafka 
> behind the scenes.
> Something like Chukwa's o.a.h.chukwa.datacollection.adaptor.jms.JMSAdaptor, 
> which receives messages off a JMS queue and transforms them into Chukwa chunks?
> I have come across folks talking about this need in the past as well. Is it 
> considered and/or part of the roadmap?
> http://grokbase.com/t/kafka/users/131cst8xpv/stomp-binding-for-kafka
> http://grokbase.com/t/kafka/users/148dm4247q/consuming-messages-from-kafka-and-pushing-on-to-a-jms-queue
> http://grokbase.com/t/kafka/users/143hjepbn2/request-kafka-zookeeper-jms-details
> Looking for input on the correct way to approach this so as to retain all the 
> good features of Kafka while not rewriting the entire application. Possible?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1467: KAFKA-3789: Upgrade Snappy to fix snappy decompres...

2016-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1467


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315668#comment-15315668
 ] 

ASF GitHub Bot commented on KAFKA-3789:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1467


> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where the MAGIC HEADER was being parsed 
> incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> messages from being appended to the log when they were written 
> using the Snappy C bindings in clients like librdkafka or ruby-kafka and read 
> using snappy-java in the broker.
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to reproduce the issue regularly with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved.
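> (For anyone verifying the bump locally, a minimal sanity check against the 
> upgraded jar; note this only exercises the Java round trip, not the 
> cross-client framing bug, which needs librdkafka-produced data:)
> {code}
> import org.xerial.snappy.Snappy
> val payload = "hello kafka".getBytes("UTF-8")
> // A failure here would point to a classpath/version problem rather than
> // the header bug itself.
> assert(java.util.Arrays.equals(payload, Snappy.uncompress(Snappy.compress(payload))))
> {code}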



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3789) Upgrade Snappy to fix snappy decompression errors

2016-06-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3789:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1467
[https://github.com/apache/kafka/pull/1467]

> Upgrade Snappy to fix snappy decompression errors
> -
>
> Key: KAFKA-3789
> URL: https://issues.apache.org/jira/browse/KAFKA-3789
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> snappy-java recently fixed a bug where the MAGIC HEADER was being parsed 
> incorrectly: https://github.com/xerial/snappy-java/issues/142
> This issue caused "unknown broker exceptions" in the clients and prevented 
> messages from being appended to the log when they were written 
> using the Snappy C bindings in clients like librdkafka or ruby-kafka and read 
> using snappy-java in the broker.
> The related librdkafka issue is here: 
> https://github.com/edenhill/librdkafka/issues/645
> I am able to reproduce the issue regularly with librdkafka in 0.10, and after 
> upgrading snappy-java to 1.1.2.6 the issue is resolved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1341

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3789; Upgrade Snappy to fix snappy decompression errors

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 27cb6686fd678a1625fe3bb114e7ff0afb4f6448 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 27cb6686fd678a1625fe3bb114e7ff0afb4f6448
 > git rev-list ab356060665b3b6502c7d531366b26e1e0f48f9c # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson6060565937411742162.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 15.248 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4374509850867844789.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 
-Dorg.gradle.project.testLoggingEvents=started,passed,skipped,failed clean 
jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 239
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2.6/snappy-java-1.1.2.6.pom
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2.6/snappy-java-1.1.2.6.jar
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.209 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #677

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3768; Replace all pattern match on boolean value by if/else 
block.

--
[...truncated 658 lines...]
kafka.api.SslConsumerTest > testCommitSpecifiedOffsets STARTED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors STARTED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer STARTED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PA

Build failed in Jenkins: kafka-trunk-jdk8 #678

2016-06-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3789; Upgrade Snappy to fix snappy decompression errors

--
[...truncated 6482 lines...]
kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign STARTED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists STARTED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware STARTED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion STARTED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
STARTED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists STARTED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists STARTED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover STARTED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower STARTED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic STARTED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic STARTED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion STARTED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic STARTED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks 
STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks 
PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas 
PASSED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned 
STARTED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware STARTED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware PASSED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion STARTED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers STARTED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack STARTED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex STARTED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment STARTED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks STARTED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup S

[jira] [Assigned] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2016-06-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-724:
-

Assignee: Rekha Joshi

> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Pablo Barrera
>Assignee: Rekha Joshi
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if socket.send.buffer or 
> the other buffer sizes are set to -1, skip the socket().setXXXBufferSize() call.
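> A minimal sketch of the proposed behavior (the configureSocket helper is 
> hypothetical):
> {code}
> import java.net.Socket
> 
> // -1 means "don't touch it": leaving set*BufferSize uncalled keeps the
> // operating system's automatic buffer sizing in effect.
> def configureSocket(socket: Socket, sendBufferSize: Int, receiveBufferSize: Int): Unit = {
>   if (sendBufferSize != -1)
>     socket.setSendBufferSize(sendBufferSize)
>   if (receiveBufferSize != -1)
>     socket.setReceiveBufferSize(receiveBufferSize)
> }
> {code}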



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #50: KAFKA-724: auto socket buffer set

2016-06-04 Thread rekhajoshm
Github user rekhajoshm closed the pull request at:

https://github.com/apache/kafka/pull/50


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315694#comment-15315694
 ] 

ASF GitHub Bot commented on KAFKA-724:
--

Github user rekhajoshm closed the pull request at:

https://github.com/apache/kafka/pull/50


> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Pablo Barrera
>Assignee: Rekha Joshi
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if socket.send.buffer or 
> the other buffer sizes are set to -1, skip the socket().setXXXBufferSize() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-0.10.0-jdk7 #116

2016-06-04 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1469: KAFKA-724; Send, receive buffer size set if not -1

2016-06-04 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/1469

KAFKA-724; Send, receive buffer size set if not -1

PR #50 closed; fixed after rebasing on trunk.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-724-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1469.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1469


commit c9a66992b1095616f87c5748f210b973ebc7eb01
Author: Rekha Joshi 
Date:   2016-05-26T17:48:37Z

Merge pull request #2 from apache/trunk

Apache Kafka trunk pull

commit 8d7fb005cb132440e7768a5b74257d2598642e0f
Author: Rekha Joshi 
Date:   2016-05-30T21:37:43Z

Merge pull request #3 from apache/trunk

Apache Kafka trunk pull

commit fbef9a8fb1411282fbadec46955691c3e7ba2578
Author: Rekha Joshi 
Date:   2016-06-04T23:58:02Z

Merge pull request #4 from apache/trunk

Apache Kafka trunk pull

commit feafe9714788896b00b0854eaf23c8ce5b8892ba
Author: Joshi 
Date:   2016-06-05T01:18:32Z

KAFKA-724; Send, receive buffer size set if not -1




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2016-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315696#comment-15315696
 ] 

ASF GitHub Bot commented on KAFKA-724:
--

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/1469

KAFKA-724; Send, receive buffer size set if not -1

PR #50 closed; fixed after rebasing on trunk.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-724-rebased

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1469.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1469


commit c9a66992b1095616f87c5748f210b973ebc7eb01
Author: Rekha Joshi 
Date:   2016-05-26T17:48:37Z

Merge pull request #2 from apache/trunk

Apache Kafka trunk pull

commit 8d7fb005cb132440e7768a5b74257d2598642e0f
Author: Rekha Joshi 
Date:   2016-05-30T21:37:43Z

Merge pull request #3 from apache/trunk

Apache Kafka trunk pull

commit fbef9a8fb1411282fbadec46955691c3e7ba2578
Author: Rekha Joshi 
Date:   2016-06-04T23:58:02Z

Merge pull request #4 from apache/trunk

Apache Kafka trunk pull

commit feafe9714788896b00b0854eaf23c8ce5b8892ba
Author: Joshi 
Date:   2016-06-05T01:18:32Z

KAFKA-724; Send, receive buffer size set if not -1




> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Pablo Barrera
>Assignee: Rekha Joshi
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if socket.send.buffer or 
> the other buffer sizes are set to -1, skip the socket().setXXXBufferSize() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3786) Avoid unused property from parent configs causing WARN entries

2016-06-04 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315698#comment-15315698
 ] 

Ewen Cheslack-Postava commented on KAFKA-3786:
--

[~guozhang] Exactly which warnings are you referring to? The code is supposed 
to be trying to track which properties are used -- whether by the parent 
AbstractConfig or anything that gets passed its originals. It does that by 
returning a {{RecordingMap}} when you get the original configs, and then 
{{logUnused}} should not be invoked until you've passed {{originals}} or some 
derivative to all child configurable classes (like serializers).

I think we may not handle the case of both a parent and child using 
{{ConfigDef}}/{{AbstractConfig}}. I think for the child {{AbstractConfig}}, we 
may not compute unused keys properly since it has no idea it is being passed a 
{{RecordingMap}}. Is that the case you're intending to solve here?
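
To illustrate the suspected gap (a rough sketch; {{ChildConfig}} and 
{{child.knob}} are made up for the example):
{code}
import org.apache.kafka.common.config.{AbstractConfig, ConfigDef}
import org.apache.kafka.streams.StreamsConfig

// A made-up child config that defines one key of its own.
class ChildConfig(originals: java.util.Map[String, AnyRef]) extends AbstractConfig(
  new ConfigDef().define("child.knob", ConfigDef.Type.STRING, "default",
    ConfigDef.Importance.LOW, "doc"), originals)

val props = new java.util.Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put("child.knob", "x")

val parent = new StreamsConfig(props)
// originals() hands back a RecordingMap, but the child AbstractConfig copies
// the entries at construction without get()-ing them through the parent's map,
// so from the parent's point of view "child.knob" was never used...
val child = new ChildConfig(parent.originals())
// ...and this still reports it as unused, producing the WARN entries.
parent.logUnused()
{code}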

> Avoid unused property from parent configs causing WARN entries
> --
>
> Key: KAFKA-3786
> URL: https://issues.apache.org/jira/browse/KAFKA-3786
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> Currently the {{AbstractConfig}} constructor accepts the passed property 
> map as well as the {{ConfigDef}}, and maintains the original map together with 
> the parsed values. Because of this, with hierarchical config passing 
> as in {{StreamsConfig}}, the underlying configs take all the key-value 
> pairs when constructed and hence cause WARNING log output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3791) Broken tools -- need better way to get offsets and other info

2016-06-04 Thread Greg Zoller (JIRA)
Greg Zoller created KAFKA-3791:
--

 Summary: Broken tools -- need better way to get offsets and other 
info
 Key: KAFKA-3791
 URL: https://issues.apache.org/jira/browse/KAFKA-3791
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.10.0.0
Reporter: Greg Zoller


Whenever I run included tools like kafka-consumer-offset-checker.sh I get 
deprecation warnings, and the tool doesn't work for me (offsets are not 
returned). These need to be fixed. The class suggested in the deprecation 
warning is not clearly documented in the docs.

In general it would be nice to streamline and simplify the tool scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-63: Unify store and downstream caching in streams

2016-06-04 Thread Eno Thereska
Hi Jay,

We can make it global instead of per-processor, sounds good.
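
Concretely it would be one number in the application config, along these lines
(Scala; assuming "props" is the application's streams config properties; the
name follows this thread and the final semantics are TBD in the KIP):

    // One global budget shared by all caching operators and tasks, so a
    // rebalance that changes the task count no longer changes the footprint.
    props.put("cache.max.bytes.buffering", (100 * 1024 * 1024).toString) // 100 MB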

Thanks
Eno


> On 3 Jun 2016, at 23:15, Jay Kreps  wrote:
> 
> Hey Eno,
> 
> Should the config be the global memory use rather than the per-processor?
> That is, let’s say I know I have a fixed 1GB heap because that is what I
> set for Java, and want to use 100MB for caching; it seems like right now
> I’d have to do some math that depends on my knowing a bit about how caching
> works to figure out how to set that parameter so I don't run out of memory.
> Does it also depend on the number of partitions assigned (and hence the
> number of tasks)? If so, that makes it even harder to set, since that changes
> each time rebalancing happens, making it pretty hard to set safely.
> 
> You could theoretically argue for either bottom up (you know how much cache
> you need per processor as you have it and you want to get exactly that) or
> top down (you know how much memory you have to spare but can't be bothered
> to work out what that amounts to per-processor). I think our experience has
> been that 99% of people never change the default and if it runs out of
> memory they really struggle to fix it and kind of blame us, so I think top
> down and a global config might be better. :-)
> 
> Example: https://issues.apache.org/jira/browse/KAFKA-3775
> 
> -Jay
> 
> On Fri, Jun 3, 2016 at 2:39 PM, Eno Thereska  wrote:
> 
>> Hi Gwen,
>> 
>> Yes. As an example, if cache.max.bytes.buffering is set to X, and if users
>> have A aggregation operators and T KTable.to() operators, then X*(A + T)
>> total bytes will be allocated for caching.
>> 
>> Eno
>> 
>>> On 3 Jun 2016, at 21:37, Gwen Shapira  wrote:
>>> 
>>> Just to clarify: "cache.max.bytes.buffering" is per processor?
>>> 
>>> 
>>> On Thu, Jun 2, 2016 at 11:30 AM, Eno Thereska 
>> wrote:
 Hi there,
 
 I have created KIP-63: Unify store and downstream caching in streams
 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams
 
 
 Feedback is appreciated.
 
 Thank you
 Eno
>> 
>>