Build failed in Jenkins: kafka-trunk-jdk7 #1325

2016-05-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3158; ConsumerGroupCommand should tell whether group is actually

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 0aff450961a8dd14cc7820ee8d1c9eea855439b0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0aff450961a8dd14cc7820ee8d1c9eea855439b0
 > git rev-list a4802962c9e87f9cc81e1820fc88c71bd70b # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5786445133360717020.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 21.976 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5052510291139218047.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 231
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 22.488 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #661

2016-05-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3158; ConsumerGroupCommand should tell whether group is actually

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 0aff450961a8dd14cc7820ee8d1c9eea855439b0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0aff450961a8dd14cc7820ee8d1c9eea855439b0
 > git rev-list a4802962c9e87f9cc81e1820fc88c71bd70b # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6287836024021454217.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.499 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8503948150167890294.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 231
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

[jira] [Resolved] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-05-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3158.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1429
[https://github.com/apache/kafka/pull/1429]

> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Ishita Mandhan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.
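
For illustration, a minimal sketch of how the command could branch on the group
state returned by the describe-group call (the state names and the helper are
assumptions for illustration, not the code from the pull request):

{code:title=GroupStateMessage.scala|borderStyle=solid}
object GroupStateMessage {
  // Sketch only: map the coordinator's reported group state to a distinct message.
  def describe(groupId: String, state: String): String = state match {
    case "Dead"               => s"Consumer group `$groupId` does not exist."
    case "PreparingRebalance" => s"Consumer group `$groupId` is rebalancing."
    case "AwaitingSync"       => s"Consumer group `$groupId` is completing a rebalance."
    case "Stable"             => s"Consumer group `$groupId` is active."
    case other                => s"Consumer group `$groupId` is in state `$other`."
  }
}
{code}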



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3158: ConsumerGroupCommand should tell w...

2016-05-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1429


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3158) ConsumerGroupCommand should tell whether group is actually dead

2016-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305681#comment-15305681
 ] 

ASF GitHub Bot commented on KAFKA-3158:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1429


> ConsumerGroupCommand should tell whether group is actually dead
> ---
>
> Key: KAFKA-3158
> URL: https://issues.apache.org/jira/browse/KAFKA-3158
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Ishita Mandhan
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> Currently the consumer group script reports the following when a group is 
> dead or rebalancing:
> {code}
> Consumer group `foo` does not exist or is rebalancing.
> {code}
> But it's annoying not to know which is actually the case. Since the group 
> state is exposed in the DescribeGroupRequest, we should be able to give 
> different messages for each case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3768) Replace all pattern match on boolean value by if/elase block.

2016-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305523#comment-15305523
 ] 

ASF GitHub Bot commented on KAFKA-3768:
---

GitHub user satendrakumar06 opened a pull request:

https://github.com/apache/kafka/pull/1444

Replace all pattern match on boolean value by if/else block.

Replaced all pattern match on boolean value by if/else block.

[KAFKA-3768](https://issues.apache.org/jira/browse/KAFKA-3768)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/satendrakumar06/kafka KAFKA-3768

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1444.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1444


commit 212327cba2387dbc0549e1a2e0c5387f430a7fc2
Author: Satendra kumar 
Date:   2016-05-28T18:03:58Z

Replace all pattern match on boolean value by if/elase block.

KAFKA-3768




> Replace all pattern match on boolean value by if/elase block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> {code:title=Comparasion.scala|borderStyle=solid}
> class Comparasion {
> def method1(flag: Boolean): String = {
>   flag match {
>  case true => "TRUE"
>  case false => "FALSE"
>}
> }
>   def method2(flag: Boolean): String = {
>   if(flag) {
>"TRUE"
>  }else {
>"FALSE"
>  }
>   }
> }
> {code}
> Byte code comparison between method1 and method2:
> scala>javap -cp Comparasion
> {code:title=Comparasion.class|borderStyle=solid}
> Compiled from "<console>"
> public class Comparasion {
>   public java.lang.String method1(boolean);
> Code:
>0: iload_1
>1: istore_2
>2: iconst_1
>3: iload_2
>4: if_icmpne 13
>7: ldc   #9  // String TRUE
>9: astore_3
>   10: goto  21
>   13: iconst_0
>   14: iload_2
>   15: if_icmpne 23
>   18: ldc   #11 // String FALSE
>   20: astore_3
>   21: aload_3
>   22: areturn
>   23: new   #13 // class scala/MatchError
>   26: dup
>   27: iload_2
>   28: invokestatic  #19 // Method 
> scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
>   31: invokespecial #23 // Method 
> scala/MatchError."<init>":(Ljava/lang/Object;)V
>   34: athrow
>   public java.lang.String method2(boolean);
> Code:
>0: iload_1
>1: ifeq  9
>4: ldc   #9  // String TRUE
>6: goto  11
>9: ldc   #11 // String FALSE
>   11: areturn
>   public Comparasion();
> Code:
>0: aload_0
>1: invokespecial #33 // Method 
> java/lang/Object."<init>":()V
>4: return
> }
> {code}
> method1 has 23 lines of bytecode while method2 has only 6. The pattern match 
> is more expensive than the if/else block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Replace all pattern match on boolean value by ...

2016-05-28 Thread satendrakumar06
GitHub user satendrakumar06 opened a pull request:

https://github.com/apache/kafka/pull/1444

Replace all pattern match on boolean value by if/else block.

Replaced all pattern match on boolean value by if/else block.

[KAFKA-3768](https://issues.apache.org/jira/browse/KAFKA-3768)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/satendrakumar06/kafka KAFKA-3768

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1444.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1444


commit 212327cba2387dbc0549e1a2e0c5387f430a7fc2
Author: Satendra kumar 
Date:   2016-05-28T18:03:58Z

Replace all pattern match on boolean value by if/elase block.

KAFKA-3768




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (KAFKA-3758) KStream job fails to recover after Kafka broker stopped

2016-05-28 Thread Greg Fodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Fodor reopened KAFKA-3758:
---

> KStream job fails to recover after Kafka broker stopped
> ---
>
> Key: KAFKA-3758
> URL: https://issues.apache.org/jira/browse/KAFKA-3758
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
> Attachments: muon.log.1.gz
>
>
> We've been doing some testing of a fairly complex KStreams job and under load 
> it seems the job fails to rebalance + recover if we shut down one of the 
> kafka brokers. The test we were running had a 3-node kafka cluster where each 
> topic had at least a replication factor of 2, and we terminated one of the 
> nodes.
> Attached is the full log, the root exception seems to be contention on the 
> lock on the state directory. The job continues to try to recover but throws 
> errors relating to locks over and over. Restarting the job itself resolves 
> the problem.
>  1702 org.apache.kafka.streams.errors.ProcessorStateException: Error while 
> creating the state manager
>  1703 at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
>  1704 at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
>  1705 at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
>  1706 at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
>  1707 at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
>  1708 at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
>  1709 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
>  1710 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
>  1711 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
>  1712 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1713 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1714 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
>  1715 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1716 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1717 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
>  1718 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
>  1719 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>  1720 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>  1721 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  1722 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1723 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1724 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>  1725 at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>  1726 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>  1727 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>  1728 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>  1729 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  1730 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
>  1731 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
>  1732 at 
> org.apache.kafka.clients.c

[jira] [Commented] (KAFKA-3758) KStream job fails to recover after Kafka broker stopped

2016-05-28 Thread Greg Fodor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305503#comment-15305503
 ] 

Greg Fodor commented on KAFKA-3758:
---

Oh, actually, I'm not so sure. This was not during an unclean shutdown, but 
during a broker rebalance.

> KStream job fails to recover after Kafka broker stopped
> ---
>
> Key: KAFKA-3758
> URL: https://issues.apache.org/jira/browse/KAFKA-3758
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
> Attachments: muon.log.1.gz
>
>
> We've been doing some testing of a fairly complex KStreams job and under load 
> it seems the job fails to rebalance + recover if we shut down one of the 
> kafka brokers. The test we were running had a 3-node kafka cluster where each 
> topic had at least a replication factor of 2, and we terminated one of the 
> nodes.
> Attached is the full log, the root exception seems to be contention on the 
> lock on the state directory. The job continues to try to recover but throws 
> errors relating to locks over and over. Restarting the job itself resolves 
> the problem.
>  1702 org.apache.kafka.streams.errors.ProcessorStateException: Error while 
> creating the state manager
>  1703 at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
>  1704 at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
>  1705 at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
>  1706 at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
>  1707 at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
>  1708 at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
>  1709 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
>  1710 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
>  1711 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
>  1712 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1713 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1714 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
>  1715 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1716 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1717 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
>  1718 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
>  1719 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>  1720 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>  1721 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  1722 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1723 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1724 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>  1725 at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>  1726 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>  1727 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>  1728 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>  1729 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  1730 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
>  1731 at 
> org.apac

[jira] [Resolved] (KAFKA-3758) KStream job fails to recover after Kafka broker stopped

2016-05-28 Thread Greg Fodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Fodor resolved KAFKA-3758.
---
Resolution: Duplicate

> KStream job fails to recover after Kafka broker stopped
> ---
>
> Key: KAFKA-3758
> URL: https://issues.apache.org/jira/browse/KAFKA-3758
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
> Attachments: muon.log.1.gz
>
>
> We've been doing some testing of a fairly complex KStreams job and under load 
> it seems the job fails to rebalance + recover if we shut down one of the 
> kafka brokers. The test we were running had a 3-node kafka cluster where each 
> topic had at least a replication factor of 2, and we terminated one of the 
> nodes.
> Attached is the full log, the root exception seems to be contention on the 
> lock on the state directory. The job continues to try to recover but throws 
> errors relating to locks over and over. Restarting the job itself resolves 
> the problem.
>  1702 org.apache.kafka.streams.errors.ProcessorStateException: Error while 
> creating the state manager
>  1703 at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
>  1704 at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
>  1705 at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
>  1706 at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
>  1707 at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
>  1708 at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
>  1709 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
>  1710 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
>  1711 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
>  1712 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1713 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1714 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
>  1715 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1716 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1717 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
>  1718 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
>  1719 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>  1720 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>  1721 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  1722 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1723 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1724 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>  1725 at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>  1726 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>  1727 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>  1728 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>  1729 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  1730 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
>  1731 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
>  1732 at 
> 

[jira] [Commented] (KAFKA-3758) KStream job fails to recover after Kafka broker stopped

2016-05-28 Thread Greg Fodor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305501#comment-15305501
 ] 

Greg Fodor commented on KAFKA-3758:
---

Ah yes this looks like the same issue, thanks!

> KStream job fails to recover after Kafka broker stopped
> ---
>
> Key: KAFKA-3758
> URL: https://issues.apache.org/jira/browse/KAFKA-3758
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Guozhang Wang
> Attachments: muon.log.1.gz
>
>
> We've been doing some testing of a fairly complex KStreams job and under load 
> it seems the job fails to rebalance + recover if we shut down one of the 
> kafka brokers. The test we were running had a 3-node kafka cluster where each 
> topic had at least a replication factor of 2, and we terminated one of the 
> nodes.
> Attached is the full log, the root exception seems to be contention on the 
> lock on the state directory. The job continues to try to recover but throws 
> errors relating to locks over and over. Restarting the job itself resolves 
> the problem.
>  1702 org.apache.kafka.streams.errors.ProcessorStateException: Error while 
> creating the state manager
>  1703 at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
>  1704 at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
>  1705 at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
>  1706 at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
>  1707 at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
>  1708 at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
>  1709 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
>  1710 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
>  1711 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
>  1712 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1713 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1714 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
>  1715 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1716 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1717 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
>  1718 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
>  1719 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
>  1720 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
>  1721 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  1722 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  1723 at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  1724 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
>  1725 at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
>  1726 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>  1727 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>  1728 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>  1729 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  1730 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
>  1731 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.e

[jira] [Updated] (KAFKA-3768) Replace all pattern match on boolean value by if/elase block.

2016-05-28 Thread Satendra Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satendra Kumar updated KAFKA-3768:
--
Description: 
Scala recommends using an if/else block instead of a pattern match on boolean values.

For example:
{code:title=Comparasion.scala|borderStyle=solid}
class Comparasion {

def method1(flag: Boolean): String = {
  flag match {
 case true => "TRUE"
 case false => "FALSE"
   }
}

  def method2(flag: Boolean): String = {
  if(flag) {
   "TRUE"
 }else {
   "FALSE"
 }
  }

}
{code}
Byte code comparison between method1 and method2:
scala>javap -cp Comparasion
{code:title=Comparasion.class|borderStyle=solid}
Compiled from "<console>"
public class Comparasion {
  public java.lang.String method1(boolean);
Code:
   0: iload_1
   1: istore_2
   2: iconst_1
   3: iload_2
   4: if_icmpne 13
   7: ldc   #9  // String TRUE
   9: astore_3
  10: goto  21
  13: iconst_0
  14: iload_2
  15: if_icmpne 23
  18: ldc   #11 // String FALSE
  20: astore_3
  21: aload_3
  22: areturn
  23: new   #13 // class scala/MatchError
  26: dup
  27: iload_2
  28: invokestatic  #19 // Method 
scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
  31: invokespecial #23 // Method 
scala/MatchError."<init>":(Ljava/lang/Object;)V
  34: athrow

  public java.lang.String method2(boolean);
Code:
   0: iload_1
   1: ifeq  9
   4: ldc   #9  // String TRUE
   6: goto  11
   9: ldc   #11 // String FALSE
  11: areturn

  public Comparasion();
Code:
   0: aload_0
   1: invokespecial #33 // Method 
java/lang/Object."<init>":()V
   4: return
}
{code}

method1 has 23 lines of bytecode while method2 has only 6. The pattern match is 
more expensive than the if/else block.


  was:
Scala recommends using an if/else block instead of a pattern match on boolean values.

For example:

class Comparasion {

def method1(flag: Boolean): String = {
  flag match {
 case true => "TRUE"
 case false => "FALSE"
   }
}

  def method2(flag: Boolean): String = {
  if(flag) {
   "TRUE"
 }else {
   "FALSE"
 }
  }

}

Byte code comparison between method1 and method2:
scala>javap -cp Comparasion
Compiled from "<console>"
public class Comparasion {
  public java.lang.String method1(boolean);
Code:
   0: iload_1
   1: istore_2
   2: iconst_1
   3: iload_2
   4: if_icmpne 13
   7: ldc   #9  // String TRUE
   9: astore_3
  10: goto  21
  13: iconst_0
  14: iload_2
  15: if_icmpne 23
  18: ldc   #11 // String FALSE
  20: astore_3
  21: aload_3
  22: areturn
  23: new   #13 // class scala/MatchError
  26: dup
  27: iload_2
  28: invokestatic  #19 // Method 
scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
  31: invokespecial #23 // Method 
scala/MatchError."<init>":(Ljava/lang/Object;)V
  34: athrow

  public java.lang.String method2(boolean);
Code:
   0: iload_1
   1: ifeq  9
   4: ldc   #9  // String TRUE
   6: goto  11
   9: ldc   #11 // String FALSE
  11: areturn

  public Comparasion();
Code:
   0: aload_0
   1: invokespecial #33 // Method 
java/lang/Object."<init>":()V
   4: return
}
method1 has 23 lines of bytecode while method2 has only 6. The pattern match is 
more expensive than the if/else block.



> Replace all pattern match on boolean value by if/elase block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> {code:title=Comparasion.scala|borderStyle=solid}
> class Comparasion {
> def method1(flag: Boolean): String = {
>   flag match {
>  case true => "TRUE"
>  case false => "FALSE"
>}
> }
>   def method2(flag: Boolean): String = {
>   if(flag) {
>"TRUE"
>  }else {
>"FALSE"
>  }

[jira] [Updated] (KAFKA-3768) Replace all pattern match on boolean value by if/elase block.

2016-05-28 Thread Satendra Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satendra Kumar updated KAFKA-3768:
--
Issue Type: Improvement  (was: Bug)

> Replace all pattern match on boolean value by if/elase block.
> -
>
> Key: KAFKA-3768
> URL: https://issues.apache.org/jira/browse/KAFKA-3768
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satendra Kumar
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Scala recommends using an if/else block instead of a pattern match on boolean 
> values.
> For example:
> class Comparasion {
> def method1(flag: Boolean): String = {
>   flag match {
>  case true => "TRUE"
>  case false => "FALSE"
>}
> }
>   def method2(flag: Boolean): String = {
>   if(flag) {
>"TRUE"
>  }else {
>"FALSE"
>  }
>   }
> }
> Byte code comparison between method1 and method2:
> scala>javap -cp Comparasion
> Compiled from "<console>"
> public class Comparasion {
>   public java.lang.String method1(boolean);
> Code:
>0: iload_1
>1: istore_2
>2: iconst_1
>3: iload_2
>4: if_icmpne 13
>7: ldc   #9  // String TRUE
>9: astore_3
>   10: goto  21
>   13: iconst_0
>   14: iload_2
>   15: if_icmpne 23
>   18: ldc   #11 // String FALSE
>   20: astore_3
>   21: aload_3
>   22: areturn
>   23: new   #13 // class scala/MatchError
>   26: dup
>   27: iload_2
>   28: invokestatic  #19 // Method 
> scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
>   31: invokespecial #23 // Method 
> scala/MatchError."<init>":(Ljava/lang/Object;)V
>   34: athrow
>   public java.lang.String method2(boolean);
> Code:
>0: iload_1
>1: ifeq  9
>4: ldc   #9  // String TRUE
>6: goto  11
>9: ldc   #11 // String FALSE
>   11: areturn
>   public Comparasion();
> Code:
>0: aload_0
>1: invokespecial #33 // Method 
> java/lang/Object."<init>":()V
>4: return
> }
> method1 has 23 lines of bytecode while method2 has only 6. The pattern match 
> is more expensive than the if/else block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3768) Replace all pattern match on boolean value by if/elase block.

2016-05-28 Thread Satendra Kumar (JIRA)
Satendra Kumar created KAFKA-3768:
-

 Summary: Replace all pattern match on boolean value by if/elase 
block.
 Key: KAFKA-3768
 URL: https://issues.apache.org/jira/browse/KAFKA-3768
 Project: Kafka
  Issue Type: Bug
Reporter: Satendra Kumar
Priority: Minor


Scala recommends using an if/else block instead of a pattern match on boolean values.

For example:

class Comparasion {

def method1(flag: Boolean): String = {
  flag match {
 case true => "TRUE"
 case false => "FALSE"
   }
}

  def method2(flag: Boolean): String = {
  if(flag) {
   "TRUE"
 }else {
   "FALSE"
 }
  }

}

Byte code comparison between method1 and method2:
scala>javap -cp Comparasion
Compiled from "<console>"
public class Comparasion {
  public java.lang.String method1(boolean);
Code:
   0: iload_1
   1: istore_2
   2: iconst_1
   3: iload_2
   4: if_icmpne 13
   7: ldc   #9  // String TRUE
   9: astore_3
  10: goto  21
  13: iconst_0
  14: iload_2
  15: if_icmpne 23
  18: ldc   #11 // String FALSE
  20: astore_3
  21: aload_3
  22: areturn
  23: new   #13 // class scala/MatchError
  26: dup
  27: iload_2
  28: invokestatic  #19 // Method 
scala/runtime/BoxesRunTime.boxToBoolean:(Z)Ljava/lang/Boolean;
  31: invokespecial #23 // Method 
scala/MatchError."<init>":(Ljava/lang/Object;)V
  34: athrow

  public java.lang.String method2(boolean);
Code:
   0: iload_1
   1: ifeq  9
   4: ldc   #9  // String TRUE
   6: goto  11
   9: ldc   #11 // String FALSE
  11: areturn

  public Comparasion();
Code:
   0: aload_0
   1: invokespecial #33 // Method 
java/lang/Object."<init>":()V
   4: return
}
method1 has 23 lines of bytecode while method2 has only 6. The pattern match is 
more expensive than the if/else block.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Modifier 'public' is redundant for interface m...

2016-05-28 Thread philipealves
Github user philipealves closed the pull request at:

https://github.com/apache/kafka/pull/1387


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #660

2016-05-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3767; Add missing license to connect-test.properties

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (Ubuntu ubuntu ubuntu-us golang-ppa) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a4802962c9e87f9cc81e1820fc88c71bd70b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a4802962c9e87f9cc81e1820fc88c71bd70b
 > git rev-list 7b7c4a7bb0fd25ddca4e4bdde9e605b3d5a1ba70 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8312668399140830222.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 22.651 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5529050932279224262.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 231
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.185 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK1_8_0_66_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_66


Build failed in Jenkins: kafka-trunk-jdk7 #1324

2016-05-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3767; Add missing license to connect-test.properties

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a4802962c9e87f9cc81e1820fc88c71bd70b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a4802962c9e87f9cc81e1820fc88c71bd70b
 > git rev-list 7b7c4a7bb0fd25ddca4e4bdde9e605b3d5a1ba70 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8366553670614927641.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 20.085 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson831381903638083.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 231
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 20.054 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-3767) Failed Kafka Connect's unit test with Unknown license.

2016-05-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3767:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1443
[https://github.com/apache/kafka/pull/1443]

> Failed Kafka Connect's unit test with Unknown license.
> --
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
> Fix For: 0.10.1.0
>
>
> Kafka Connect's unit test failed with an unknown license, as shown below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.
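
For reference, the fix presumably amounts to adding the standard ASF license
header as comments at the top of connect-test.properties, along these lines
(a sketch of the usual header, not the actual patch):

{code:title=connect-test.properties|borderStyle=solid}
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{code}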



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3767) Failed Kafka Connect's unit test with Unknown license.

2016-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305286#comment-15305286
 ] 

ASF GitHub Bot commented on KAFKA-3767:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1443


> Failed Kafka Connect's unit test with Unknown license.
> --
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with an unknown license, as shown below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3767: Failed Kafka Connect's unit test w...

2016-05-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1443


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-62: Allow consumer to send heartbeats from a background thread

2016-05-28 Thread Onur Karaman
Thanks for the KIP writeup, Jason.

Before anything else, I just wanted to point out that it's worth mentioning
the "heartbeat.interval.ms" consumer config in the KIP for completeness.
Today this config only starts to kick in if poll is called frequently
enough. A separate heartbeat thread should make this config behave more
like what people would expect: a separate thread sending heartbeats at the
configured interval.

With this KIP, the relevant configs become:
"max.poll.records" - already exists
"session.timeout.ms" - already exists
"heartbeat.interval.ms" - already exists
"process.timeout.ms" - new

After reading the KIP several times, I think it would be helpful to be more
explicit in the desired outcome. Is it trying to make faster
best/average/worst case rebalance times? Is it trying to make the clients
need less configuration tuning?

Also it seems that brokers probably still want to enforce minimum and
maximum rebalance timeouts just as with the minimum and maximum session
timeouts so DelayedJoins don't stay in purgatory indefinitely. So we'd add
new "group.min.rebalance.timeout.ms" and "group.max.rebalance.timeout.ms"
broker configs which again might need to be brought up in the KIP. Let's
say we add these bounds. A side-effect of having broker-side bounds on
rebalance timeouts in combination with Java clients that makes process
timeouts the same as rebalance timeouts is that the broker effectively
dictates the max processing time allowed between poll calls. This gotcha
exists right now with today's broker-side bounds on session timeouts. So
I'm not really convinced that the proposal gets rid of this complication
mentioned in the KIP.

I think the main question to ask is: does the KIP actually make a
difference?

It looks like this KIP improves rebalance times specifically when the
client currently has processing times large enough to force larger session
timeouts and heartbeat intervals to not be honored. Separating session
timeouts from processing time means clients can keep their "
session.timeout.ms" low so the coordinator can quickly detect process
failure, and honoring a low "heartbeat.interval.ms" on the separate
heartbeat thread means clients will be quickly notified of group membership
and subscription changes - all without placing difficult expectations on
processing time. But even so, rebalancing through the calling thread means
the slowest processing client in the group will still be the rate limiting
step when looking at rebalance times.
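
To make that coupling concrete, below is a minimal sketch of today's poll
loop, assuming a local broker, a placeholder topic name, and a trivial
process() method. The point it illustrates is that heartbeats are only sent
from inside poll(), so the time spent in process() directly limits how
quickly the coordinator hears from the consumer; a background heartbeat
thread removes that dependency.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "example-group");             // assumed group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));  // assumed topic
            while (true) {
                // Without a background heartbeat thread, this poll() is the only
                // place heartbeats (and rebalance participation) happen; if the
                // loop below outlasts session.timeout.ms, the member is evicted.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for application-specific processing
    }
}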

From a usability perspective, the burden still seems like it will be tuning
the processing time to keep the "progress liveness" happy during rebalances
while still having reasonable upper bounds on rebalance times. It still
looks like users have to do almost the exact same tricks as today when the
group membership changes due to slow processing times even though all the
consumers are alive and the topics haven't changed:
1. Increase the rebalance timeout to give more time for record processing
(the difference compared to today is that we bump the rebalance timeout
instead of session timeout).
2. Reduce the number of records handled on each iteration with
max.poll.records.

This burden goes away if you loosen the liveness property by having a
required rebalance time and optional processing time where rebalance
happens in the background thread as stated in the KIP.

On Thu, May 26, 2016 at 12:40 PM, Jason Gustafson 
wrote:

> Hey Grant,
>
> Thanks for the feedback. I'm definitely open to including heartbeat() in
> this KIP. One thing we should be clear about is what the behavior of
> heartbeat() should be when the group begins rebalancing. I think there are
> basically two options:
>
> 1. heartbeat() simply keeps heartbeating even if the group has started
> rebalancing.
> 2. heartbeat() completes the rebalance itself.
>
> With the first option, when processing takes longer than the rebalance
> timeout, the member will fall out of the group which will cause an offset
> commit failure when it finally finishes. However, if processing finishes
> before the rebalance completes, then offsets can still be committed. On the
> other hand, if heartbeat() completes the rebalance itself, then you'll
> definitely see the offset commit failure for any records being processed.
> So the first option is sort of biased toward processing completion while
> the latter is biased toward rebalance completion.
>
> I'm definitely not a fan of the second option since it takes away the choice to
> finish processing before rejoining. However, I do see some benefit in the
> first option if the user wants to keep rebalance time low and doesn't mind
> being kicked out of the group if processing takes longer during a
> rebalance. This may be a reasonable tradeoff since consumer groups are
> presumed to be stable most of the time. A better option in that case might
> be to expose the rebalance timeout to the user directly since it would
> allow the user to use an essentially unbounded proce

[jira] [Updated] (KAFKA-3767) Failed Kafka Connect's unit test with Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru updated KAFKA-3767:
---
Status: Patch Available  (was: In Progress)

> Failed Kafka Connect's unit test with Unknown license.
> --
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3767) Failed Kafka Connect's unit test with Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3767 started by Sasaki Toru.
--
> Failed Kafka Connect's unit test with Unknown license.
> --
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3767) Failed Kafka Connect's unit test with Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru updated KAFKA-3767:
---
Summary: Failed Kafka Connect's unit test with Unknown license.  (was: 
Failed Kafka Connect's unit test because of Unknown license.)

> Failed Kafka Connect's unit test with Unknown license.
> --
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3767) Failed Kafka Connect's unit test because of Unknown license.

2016-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305250#comment-15305250
 ] 

ASF GitHub Bot commented on KAFKA-3767:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/1443

KAFKA-3767: Failed Kafka Connect's unit test because of Unknown license.

This addresses https://issues.apache.org/jira/browse/KAFKA-3767.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka test_failure_no_license

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1443.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1443


commit ac07df53facab1f8277f744594fc1b5875148595
Author: Sasaki Toru 
Date:   2016-05-28T09:01:36Z

describes license to connect-test.properties
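
For context, the Gradle ":rat" task (Apache Rat) flags any file that lacks a
recognizable license header, which is why the stray test resource fails the
build. The commit above presumably adds the standard ASF header to
connect-test.properties as "#" comments; a reconstruction of that header
(not copied from the actual patch) looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.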




> Failed Kafka Connect's unit test because of Unknown license.
> 
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3767: Failed Kafka Connect's unit test b...

2016-05-28 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/1443

KAFKA-3767: Failed Kafka Connect's unit test because of Unknown license.

This addresses https://issues.apache.org/jira/browse/KAFKA-3767.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka test_failure_no_license

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1443.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1443


commit ac07df53facab1f8277f744594fc1b5875148595
Author: Sasaki Toru 
Date:   2016-05-28T09:01:36Z

describes license to connect-test.properties




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3767) Failed Kafka Connect's unit test because of Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru updated KAFKA-3767:
---
Description: 
Kafka Connect's unit test failed with Unknown license, as below.

{quote}
$ ./gradlew test
(snip)
:rat
Unknown license: 
/home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
:rat FAILED

FAILURE: Build failed with an exception.

* Where:
Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63

* What went wrong:
Execution failed for task ':rat'.
> Found 1 files with unknown licenses.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{quote}

I think this is because connect-test.properties doesn't have a license header.

  was:
Kafka Connect's unit test failed with Unknown license, as below.

{quote}
$ ./gradlew cleanTest test
(snip)
:rat
Unknown license: 
/home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
:rat FAILED

FAILURE: Build failed with an exception.

* Where:
Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63

* What went wrong:
Execution failed for task ':rat'.
> Found 1 files with unknown licenses.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{quote}

I think this is because connect-test.properties doesn't have a license header.


> Failed Kafka Connect's unit test because of Unknown license.
> 
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3767) Failed Kafka Connect's unit test because of Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)
Sasaki Toru created KAFKA-3767:
--

 Summary: Failed Kafka Connect's unit test because of Unknown 
license.
 Key: KAFKA-3767
 URL: https://issues.apache.org/jira/browse/KAFKA-3767
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, unit tests
Reporter: Sasaki Toru
Assignee: Ewen Cheslack-Postava


Kafka Connect's unit test failed with Unknown license, as below.

{quote}
$ ./gradlew cleanTest test
(snip)
:rat
Unknown license: 
/home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
:rat FAILED

FAILURE: Build failed with an exception.

* Where:
Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63

* What went wrong:
Execution failed for task ':rat'.
> Found 1 files with unknown licenses.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED
{quote}

I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3767) Failed Kafka Connect's unit test because of Unknown license.

2016-05-28 Thread Sasaki Toru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sasaki Toru reassigned KAFKA-3767:
--

Assignee: Sasaki Toru  (was: Ewen Cheslack-Postava)

> Failed Kafka Connect's unit test because of Unknown license.
> 
>
> Key: KAFKA-3767
> URL: https://issues.apache.org/jira/browse/KAFKA-3767
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>
> Kafka Connect's unit test failed with Unknown license, as below.
> {quote}
> $ ./gradlew cleanTest test
> (snip)
> :rat
> Unknown license: 
> /home/kafka/code/kafka/connect/json/src/test/resources/connect-test.properties
> :rat FAILED
> FAILURE: Build failed with an exception.
> * Where:
> Script '/home/kafka/code/kafka/gradle/rat.gradle' line: 63
> * What went wrong:
> Execution failed for task ':rat'.
> > Found 1 files with unknown licenses.
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> {quote}
> I think this is because connect-test.properties doesn't have a license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)