[jira] [Created] (KAFKA-3226) Replicas collections should use List instead of Set in order to maintain order

2016-02-10 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3226:
--

 Summary: Replicas collections should use List instead of Set in 
order to maintain order
 Key: KAFKA-3226
 URL: https://issues.apache.org/jira/browse/KAFKA-3226
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.9.0.1


Found an issue where LeaderAndIsrRequest and UpdateMetadataRequest store the 
replicas in a Set. This potentially changes the order of the replicas list, 
which is important because the first replica is the "preferred" replica. 

The question is, do these requests need to go through a 
deprecation/compatibility cycle, or are they considered internal messages?
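
For illustration (a minimal sketch, not the actual request classes), copying
an ordered replica assignment into a HashSet discards the ordering:

{code}
import java.util.*;

public class ReplicaOrderDemo {
    public static void main(String[] args) {
        // The controller's intended assignment: broker 3 is the preferred
        // (first) replica.
        List<Integer> replicas = Arrays.asList(3, 1, 2);

        // A HashSet keeps no insertion order...
        Set<Integer> asSet = new HashSet<>(replicas);

        // ...so iteration may no longer start with the preferred replica.
        System.out.println("List order: " + replicas);  // [3, 1, 2]
        System.out.println("Set order:  " + asSet);     // e.g. [1, 2, 3]
    }
}
{code}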



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3219: Fix long topic name validation

2016-02-10 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/898

KAFKA-3219: Fix long topic name validation

This fixes an issue with long topic names by accounting, during topic
validation, for the '-' and the partition id that are appended to the log
folder created for each topic partition.
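
A minimal sketch of the idea (names and constants below are illustrative, not
the patch itself): the log folder for partition N is named "<topic>-<N>", so
the validation has to leave room for that suffix within the filesystem's
255-character file name limit.

{code}
// Illustrative only; the real limits live in Kafka's topic validation code.
static final int MAX_FILENAME_LENGTH = 255;  // typical single-name limit

static void validateTopicNameLength(String topic, int maxPartitionId) {
    // "-" plus the longest partition id counts against the limit.
    int suffixLength = 1 + Integer.toString(maxPartitionId).length();
    if (topic.length() + suffixLength > MAX_FILENAME_LENGTH)
        throw new IllegalArgumentException("Topic name \"" + topic
                + "\" leaves no room for the \"-<partition>\" suffix");
}
{code}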

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3219

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/898.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #898


commit 413f1f657b810032c94c8d59908e4bc5e39c26fe
Author: Vahid Hashemian 
Date:   2016-02-10T16:37:35Z

KAFKA-3219: Fix long topic name validation

This fixes an issue with long topic names by accounting, during topic
validation, for the '-' and the partition id that are appended to the log
folder created for each topic partition.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: add retry to state dir locking

2016-02-10 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/899

MINOR: add retry to state dir locking

There is a possibility that the state directory locking fails when another
stream thread is taking a long time to close all of its tasks. Simple retries
should alleviate the problem.
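
A rough sketch of the retry idea (method name, attempt count, and backoff are
illustrative): keep trying the lock for a while, since the previous owner may
just be slow to release it.

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

// Illustrative retry loop; the actual change is in the Streams state
// directory code.
static FileLock lockStateDirWithRetries(FileChannel channel, int maxAttempts,
                                        long backoffMs)
        throws IOException, InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            FileLock lock = channel.tryLock();  // null if another process holds it
            if (lock != null)
                return lock;
        } catch (OverlappingFileLockException e) {
            // another thread in this JVM (e.g. a stream thread still closing
            // its tasks) holds the lock; fall through and retry
        }
        Thread.sleep(backoffMs);
    }
    throw new IOException("Failed to lock state directory after " + maxAttempts + " attempts");
}
{code}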

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka minor2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/899.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #899


commit c211cae875bd41d35b897f54d33fe3827f18c326
Author: Yasuhiro Matsuda 
Date:   2016-02-10T17:43:56Z

MINOR: add retry to state dir locking




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3219) Long topic names mess up broker topic state

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141117#comment-15141117
 ] 

ASF GitHub Bot commented on KAFKA-3219:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/898

KAFKA-3219: Fix long topic name validation

This fixes an issue with long topic names by accounting, during topic
validation, for the '-' and the partition id that are appended to the log
folder created for each topic partition.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3219

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/898.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #898


commit 413f1f657b810032c94c8d59908e4bc5e39c26fe
Author: Vahid Hashemian 
Date:   2016-02-10T16:37:35Z

KAFKA-3219: Fix long topic name validation

This fixes an issue with long topic names by accounting, during topic
validation, for the '-' and the partition id that are appended to the log
folder created for each topic partition.




> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}

[GitHub] kafka pull request: KAFKA-3226: Replicas collections should use Li...

2016-02-10 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/900

KAFKA-3226: Replicas collections should use List instead of Set in or…

…der to maintain order

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka replicas-list

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/900.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #900






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3226) Replicas collections should use List instead of Set in order to maintain order

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141460#comment-15141460
 ] 

ASF GitHub Bot commented on KAFKA-3226:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/900

KAFKA-3226: Replicas collections should use List instead of Set in or…

…der to maintain order

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka replicas-list

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/900.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #900






> Replicas collections should use List instead of Set in order to maintain order
> --
>
> Key: KAFKA-3226
> URL: https://issues.apache.org/jira/browse/KAFKA-3226
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Found an issue where LeaderAndIsrRequest and UpdateMetadataRequest store the 
> replicas in a Set. This potentially changes the order of the replicas list, 
> which is important because the first replica is the "preferred" replica. 
> The question is, do these requests need to go through a 
> deprecation/compatibility cycle, or are they considered internal messages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Remove multi-byte charactor in docs

2016-02-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/897


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3219) Long topic names mess up broker topic state

2016-02-10 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3219:
---
Status: Patch Available  (was: Open)

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3226) Replicas collections should use List instead of Set in order to maintain order

2016-02-10 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3226:
---
Status: Patch Available  (was: Open)

> Replicas collections should use List instead of Set in order to maintain order
> --
>
> Key: KAFKA-3226
> URL: https://issues.apache.org/jira/browse/KAFKA-3226
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Found an issue where LeaderAndIsrRequest and UpdateMetadataRequest store the 
> replicas in a Set. This potentially changes the order of the replicas list, 
> which is important because the first replica is the "preferred" replica. 
> The question is, do these requests need to go through a 
> deprecation/compatibility cycle, or are they considered internal messages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3225) Method commit() of class SourceTask never invoked

2016-02-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141544#comment-15141544
 ] 

Gwen Shapira commented on KAFKA-3225:
-

Good catch. This is a bit embarrassing... we didn't have a connector that 
committed its own offsets until now (rather than relying on Kafka Connect's 
commit).

Are you planning on supplying a patch for this? If so, I can assign you the 
JIRA.

> Method commit() of class SourceTask never invoked
> -
>
> Key: KAFKA-3225
> URL: https://issues.apache.org/jira/browse/KAFKA-3225
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
> Environment: Windows 8.1
>Reporter: Krzysztof Dębski
>Assignee: Ewen Cheslack-Postava
>
> In the class org.apache.kafka.connect.source.SourceTask there is the 
> following method:
> {code}/**
>  * <p>
>  * Commit the offsets, up to the offsets that have been returned by 
> {@link #poll()}. This
>  * method should block until the commit is complete.
>  * </p>
>  * <p>
>  * SourceTasks are not required to implement this functionality; Kafka 
> Connect will record offsets
>  * automatically. This hook is provided for systems that also need to 
> store offsets internally
>  * in their own system.
>  * </p>
>  */
> public void commit() throws InterruptedException {
>     // This space intentionally left blank.
> }{code}
> I have created my own task which inherits from SourceTask and overrides 
> commit(). In spite of offsets being recorded automatically by Kafka, the 
> commit() method is never invoked.
> I downloaded the Kafka sources and imported them into IntelliJ IDEA.
> Then I ran the "Find Usages" command; IDEA found only one usage, in the 
> comments of the stop() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1034

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Remove multi-byte charactor in docs

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 1faab034b10a15beac2b90f8f2fe1c65a6b40765 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1faab034b10a15beac2b90f8f2fe1c65a6b40765
 > git rev-list d5b43b19bb06e9cdc606312c8bcf87ed267daf44 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2713832858479952985.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 15.835 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson882956365495366887.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.872 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Jenkins build is back to normal : kafka-trunk-jdk8 #361

2016-02-10 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-3153: KStream,Type and Serialization

2016-02-10 Thread ymatsuda
Github user ymatsuda closed the pull request at:

https://github.com/apache/kafka/pull/794


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3153) Serializer/Deserializer Registration and Type inference

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141696#comment-15141696
 ] 

ASF GitHub Bot commented on KAFKA-3153:
---

Github user ymatsuda closed the pull request at:

https://github.com/apache/kafka/pull/794


> Serializer/Deserializer Registration and Type inference
> ---
>
> Key: KAFKA-3153
> URL: https://issues.apache.org/jira/browse/KAFKA-3153
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> This changes the way serializers/deserializers are selected by the framework. 
> The new scheme requires the app dev to register serializers/deserializers for 
> types using the API. The framework infers the type of data from the topology 
> and uses the appropriate serializer/deserializer. This is best effort: type 
> inference is not always possible due to Java's type erasure. If a type cannot 
> be determined, user code can supply more information.
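>
> As a generic illustration of the erasure limitation (not the Streams API 
> itself), both lists below share one runtime class, so a framework cannot 
> recover the element type from the object alone:
> {code}
> import java.util.*;
>
> public class ErasureDemo {
>     public static void main(String[] args) {
>         List<String> strings = new ArrayList<>();
>         List<Integer> ints = new ArrayList<>();
>         // The String/Integer type parameters are erased at runtime, so a
>         // framework inspecting these objects cannot tell which
>         // serializer/deserializer to pick without extra information.
>         System.out.println(strings.getClass() == ints.getClass());  // true
>     }
> }
> {code}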



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: poll even when all partitions are paus...

2016-02-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/893


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


KafkaConsumer.poll(long) pauses indefinitely

2016-02-10 Thread William Grim
Hi,

I'm noticing that when we have a thread that comes up and calls
KafkaConsumer.poll(long), it pauses for much longer than our 1s timeout.
It looks like this probably happens when Kafka gets down to calling
ConsumerNetworkClient.epollWait after calling
ConsumerNetworkClient.poll(Future), because that path seems to pass
Long.MAX_VALUE as the timeout to the low-level libs.

So my question is: is this a known bug?  I couldn't find anything about
it, but it seems that if I start generating random traffic to that topic,
the poll returns sooner, though still not as quickly as expected.
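
For reference, the call pattern in question looks like this (a minimal sketch
of the expected contract, with placeholder settings, not our production code):

{code}
import java.util.*;
import org.apache.kafka.clients.consumer.*;

public class PollTimeoutDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("group.id", "demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("demo-topic"));

        long start = System.currentTimeMillis();
        // Expectation: returns within roughly 1s, empty if nothing arrived.
        ConsumerRecords<String, String> records = consumer.poll(1000);
        System.out.println("poll(1000) returned " + records.count()
                + " records after " + (System.currentTimeMillis() - start) + " ms");
    }
}
{code}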

Thanks!

-- 




William Grim
Sr. Software Engineer
m: 914 418 4115 | e: wg...@signal.co | signal.co


[jira] [Updated] (KAFKA-2590) KIP-28: Kafka Streams Checklist

2016-02-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2590:
-
Summary: KIP-28: Kafka Streams Checklist  (was: Kafka Streams Checklist)

> KIP-28: Kafka Streams Checklist
> ---
>
> Key: KAFKA-2590
> URL: https://issues.apache.org/jira/browse/KAFKA-2590
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Guozhang Wang
>
> This is an umbrella story for the processor client and Kafka Streams feature 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3226) Replicas collections should use List instead of Set in order to maintain order

2016-02-10 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3226:
---
   Resolution: Not A Problem
Fix Version/s: (was: 0.9.0.1)
   Status: Resolved  (was: Patch Available)

> Replicas collections should use List instead of Set in order to maintain order
> --
>
> Key: KAFKA-3226
> URL: https://issues.apache.org/jira/browse/KAFKA-3226
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Found an issue where LeaderAndIsrRequest and UpdateMetadataRequest store the 
> replicas in a Set. This potentially changes the order of the replicas list, 
> which is important because the first replica is the "preferred" replica. 
> The question is, do these requests need to go through a 
> deprecation/compatibility cycle, or are they considered internal messages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: catch an exception in rebalance and sto...

2016-02-10 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/901

MINOR: catch an exception in rebalance and stop the stream thread
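
The PR carries no description; the title suggests the general pattern below
(a generic sketch, not the actual patch): trap failures raised inside the
rebalance callback and shut the stream thread down instead of letting the
failure vanish inside the consumer.

{code}
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Generic illustration of the pattern named in the title; all members here
// are hypothetical stand-ins for stream-thread internals.
class RebalanceGuard implements ConsumerRebalanceListener {
    volatile Exception rebalanceException;  // kept so the thread can report it

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        try {
            setUpTasks(partitions);   // hypothetical task setup that may throw
        } catch (Exception e) {
            rebalanceException = e;   // remember the failure...
            stopThread();             // ...and stop the stream thread cleanly
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

    private void setUpTasks(Collection<TopicPartition> partitions) { }
    private void stopThread() { }  // placeholder: signal the thread to shut down
}
{code}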



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka minor3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #901


commit 0343edc184721aaa4cf62b4e47859337be86c615
Author: Yasuhiro Matsuda 
Date:   2016-02-10T22:21:04Z

MINOR: catch an exception in rebalance and stop the stream thread

commit 77087a1912a3fc6622a7bbbf63e61fb5fea80bd8
Author: Yasuhiro Matsuda 
Date:   2016-02-10T22:21:09Z

Merge branch 'trunk' of github.com:apache/kafka into minor3

commit 282b889f2c2808138c3a1d2a5231092d4216e35a
Author: Yasuhiro Matsuda 
Date:   2016-02-10T22:25:13Z

msg




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3226) Replicas collections should use List instead of Set in order to maintain order

2016-02-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141858#comment-15141858
 ] 

ASF GitHub Bot commented on KAFKA-3226:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/900


> Replicas collections should use List instead of Set in order to maintain order
> --
>
> Key: KAFKA-3226
> URL: https://issues.apache.org/jira/browse/KAFKA-3226
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Found an issue where LeaderAndIsrRequest and UpdateMetadataRequest store the 
> replicas in a Set. This potentially changes the order of the replicas list, 
> which is important because the first replica is the "preferred" replica. 
> The question is, do these requests need to go through a 
> deprecation/compatibility cycle, or are they considered internal messages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3226: Replicas collections should use Li...

2016-02-10 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/900


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-02-10 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142015#comment-15142015
 ] 

Mark Grover commented on KAFKA-3052:


Hi all, this seems like a very good change, thanks for working on this.

However, this changes the public signature of the class between Apache Kafka 
0.9.0.0 and Apache Kafka 0.9.0.1. For example, there's some code in [Spark 
Streaming|https://github.com/apache/spark/blob/master/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaTestUtils.scala#L111]
 which will fail to compile when the Kafka version is bumped from 0.9.0.0 to 
0.9.0.1 (currently Spark is still on 0.8* but we will hopefully add support for 
0.9* soon).

I realize there is a companion object (KafkaConfig) and its fromProps method 
should be used instead of the constructor, but if so, do you think it makes 
sense to make the class constructor private? If so, I can file a JIRA. I 
realize Spark is doing the wrong thing by using the constructor, but I wonder 
how much code is out there that's going to stop compiling when projects bump 
from 0.9.0.0 to 0.9.0.1. Thoughts?
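
For comparison, the factory-based call that avoids the constructor entirely
(a sketch; KafkaConfig is Kafka's Scala class, called here from Java):

{code}
import java.util.Properties;
import kafka.server.KafkaConfig;

public class ConfigFactoryDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // placeholder
        props.put("broker.id", "0");

        // The companion object's factory insulates callers from
        // constructor-signature changes across releases...
        KafkaConfig config = KafkaConfig.fromProps(props);
        // ...whereas "new KafkaConfig(props)" couples the caller to one
        // particular constructor signature.
        System.out.println(config.brokerId());
    }
}
{code}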



> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: connect hangs on startup failure

2016-02-10 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/902

MINOR: connect hangs on startup failure



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-connect-startup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/902.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #902


commit 18943122c33c26635cc4a9b29736ccc1c05d55ff
Author: Jason Gustafson 
Date:   2016-02-11T00:28:51Z

MINOR: connect hangs on startup failure




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-02-10 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142015#comment-15142015
 ] 

Mark Grover edited comment on KAFKA-3052 at 2/11/16 12:24 AM:
--

Hi all, this seems like a very good change, thanks for working on this.

However, this changes the public signature of the KafkaConfig class between 
Apache Kafka 0.9.0.0 and Apache Kafka 0.9.0.1. For example, there's some code 
in [Spark 
Streaming|https://github.com/apache/spark/blob/master/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaTestUtils.scala#L111]
 which will fail to compile when the Kafka version is bumped from 0.9.0.0 to 
0.9.0.1 (currently Spark is still on 0.8* but we will hopefully add support for 
0.9* soon).

I realize there is a companion object (KafkaConfig) and its fromProps method 
should be used instead of the constructor. Then, do you think it makes sense to 
make the class constructor private? If so, I can file a JIRA. I realize Spark 
is doing the wrong thing by using the constructor, but I wonder how much code 
is out there that's going to stop compiling when projects bump from 0.9.0.0 to 
0.9.0.1. Thoughts?




was (Author: mgrover):
Hi all, this seems like a very good change, thanks for working on this.

However, this changes the public signature of the class between Apache Kafka 
0.9.0.0 and Apache Kafka 0.9.0.1. For example, there's some code in [Spark 
Streaming|https://github.com/apache/spark/blob/master/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaTestUtils.scala#L111]
 which will fail to compile when the Kafka version is bumped from 0.9.0.0 to 
0.9.0.1 (currently Spark is still on 0.8* but we will hopefully add support for 
0.9* soon).

I realize there is a companion object (KafkaConfig) and its fromProps method 
should be used instead of the constructor, but if so, do you think it makes 
sense to make the class constructor private? If so, I can file a JIRA. I 
realize Spark is doing the wrong thing by using the constructor, but I wonder 
how much code is out there that's going to stop compiling when projects bump 
from 0.9.0.0 to 0.9.0.1. Thoughts?



> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-02-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142034#comment-15142034
 ] 

Ismael Juma commented on KAFKA-3052:


Hi Mark, we also added a secondary constructor to maintain compatibility:

https://github.com/apache/kafka/commit/5b5f002fceee7c1b16298b0eaf176ca49d98c025

Have you verified that Spark breaks when compiled against the 0.9.0 branch?

> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-02-10 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142041#comment-15142041
 ] 

Mark Grover commented on KAFKA-3052:


Ah, thanks Ismael. My Kafka checkout was from the small window between when 
this got committed and when the new secondary constructor was introduced. It 
looks like the constructor will make it into the 0.9.0.1 release as well. So, 
all is good, thanks.

> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1037

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: add setUncaughtExceptionHandler to KafkaStreams

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 67a7ea9d6744645dd4e08b6a78dd69704a4982b3 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 67a7ea9d6744645dd4e08b6a78dd69704a4982b3
 > git rev-list 5092e7f8347d17d1b6e509424cbebf2406d8d4ba # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson815424194330899912.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 36.912 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7181600275371437734.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 52.4 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: MINOR: add setUncaughtExceptionHandler to Kafk...

2016-02-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/894


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #364

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: add setUncaughtExceptionHandler to KafkaStreams

--
[...truncated 7086 lines...]
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 3 mins 12.47 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 665629 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
  

[jira] [Created] (KAFKA-3225) Method commit() of class SourceTask never invoked

2016-02-10 Thread Krzysztof Dębski (JIRA)
Krzysztof Dębski created KAFKA-3225:
---

 Summary: Method commit() of class SourceTask never invoked
 Key: KAFKA-3225
 URL: https://issues.apache.org/jira/browse/KAFKA-3225
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Affects Versions: 0.9.0.0
 Environment: Windows 8.1
Reporter: Krzysztof Dębski
Assignee: Ewen Cheslack-Postava


In the class org.apache.kafka.connect.source.SourceTask there is the following 
method:

{code}/**
 * <p>
 * Commit the offsets, up to the offsets that have been returned by {@link 
#poll()}. This
 * method should block until the commit is complete.
 * </p>
 * <p>
 * SourceTasks are not required to implement this functionality; Kafka 
Connect will record offsets
 * automatically. This hook is provided for systems that also need to store 
offsets internally
 * in their own system.
 * </p>
 */
public void commit() throws InterruptedException {
    // This space intentionally left blank.
}{code}

I have created my own task which inherits from SourceTask and overrides 
commit(). In spite of offsets being recorded automatically by Kafka, the 
commit() method is never invoked.
I downloaded the Kafka sources and imported them into IntelliJ IDEA.
Then I ran the "Find Usages" command; IDEA found only one usage, in the 
comments of the stop() method.
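
For context, here is roughly what the overriding task looks like (a sketch of
the reported setup, not the actual connector):

{code}
import java.util.*;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class MySourceTask extends SourceTask {
    @Override public String version() { return "0.1"; }
    @Override public void start(Map<String, String> props) { }
    @Override public List<SourceRecord> poll() throws InterruptedException {
        return Collections.emptyList();
    }
    @Override public void stop() { }

    @Override
    public void commit() throws InterruptedException {
        // Expected to run after Kafka Connect records offsets so the source
        // system can store them too; per this report, it never fires.
        System.out.println("commit() invoked");
    }
}
{code}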



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Remove multi-byte charactor in docs

2016-02-10 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/897

MINOR: Remove multi-byte charactor in docs

There are multi-byte characters in quickstart.html and security.html.
This PR fixes them.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka remove_multi_byte_character

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/897.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #897


commit 6be07f206c5618c07eb2cfc0cbf50246f43337a3
Author: Sasaki Toru 
Date:   2016-02-10T08:50:48Z

Remove multi-byte charactor in docs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1035

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] HOTFIX: poll even when all partitions are paused. handle concurrent

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu yahoo-not-h2) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c1f8f689af43f5ce5a95dad86537db4615449694 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c1f8f689af43f5ce5a95dad86537db4615449694
 > git rev-list 1faab034b10a15beac2b90f8f2fe1c65a6b40765 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8629590206623087727.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 26.466 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3758740885105039076.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 32.57 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-10 Thread Jay Kreps
I think this makes a lot of sense; it won't be hard to implement and it
doesn't create too much in the way of new interfaces.

-Jay

On Tue, Feb 9, 2016 at 8:13 AM, Bill Warshaw  wrote:

> Hello,
>
> I just submitted KIP-47 for adding a new log deletion policy based on a
> minimum timestamp of messages to retain.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-47+-+Add+timestamp-based+log+deletion+policy
>
> I'm open to any comments or suggestions.
>
> Thanks,
> Bill Warshaw
>
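
For a concrete picture of what the KIP proposes, such a policy might be set
per topic roughly as below (the property name is made up for illustration;
the KIP page defines the real configuration):

{code}
# Hypothetical illustration only; see the KIP page for the actual config name.
# Intent: segments whose messages all predate the given epoch-millis timestamp
# become eligible for deletion.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
  --config log.retention.min.timestamp=1455000000000
{code}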


Build failed in Jenkins: kafka-trunk-jdk8 #362

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[cshapi] HOTFIX: poll even when all partitions are paused. handle concurrent

--
[...truncated 6784 lines...]
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 1 mins 17.085 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry '...' to cache fileHashes.bin (...).
> Corrupted FreeListBlock 665629 found in cache '...'

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry '...' to cache 
fileHashes.bin (...).
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 

[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-02-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142096#comment-15142096
 ] 

Ismael Juma commented on KAFKA-3052:


Great, Mark. Thank you for checking; we do appreciate it!

> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because SimpleAclAuthorizer.configure() contains the following 
> statement, which triggers the logging of all broker properties:
> val kafkaConfig = KafkaConfig.fromProps(props)
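
For illustration, a minimal and hypothetical Scala sketch of the kind of
fix this points at: give fromProps a flag so that only the broker's own
startup path dumps the configuration. The parameter name doLog below is an
assumption, not necessarily the actual patch's API.

import java.util.Properties
import scala.collection.JavaConverters._

object ConfigSketch {
  // When doLog is false the properties are not printed, so a plugin such
  // as an authorizer can build a config object without re-logging the
  // broker properties a second time.
  def fromProps(props: Properties, doLog: Boolean = true): Properties = {
    if (doLog)
      props.stringPropertyNames.asScala.foreach { k =>
        println(k + " = " + props.getProperty(k))
      }
    props
  }
}

Under that sketch, SimpleAclAuthorizer.configure() would call the quiet
variant: val kafkaConfig = ConfigSketch.fromProps(props, doLog = false).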





[GitHub] kafka pull request: MINOR: connect hangs on startup failure

2016-02-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/902




Build failed in Jenkins: kafka-trunk-jdk7 #1036

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Connect hangs on startup failure

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5092e7f8347d17d1b6e509424cbebf2406d8d4ba 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5092e7f8347d17d1b6e509424cbebf2406d8d4ba
 > git rev-list c1f8f689af43f5ce5a95dad86537db4615449694 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2994374708428076603.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 19.981 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson80967004432956480.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.192 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #363

2016-02-10 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Connect hangs on startup failure

--
[...truncated 6699 lines...]
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 2 mins 23.59 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry '...' to cache fileHashes.bin (...).
> Corrupted FreeListBlock 665623 found in cache '...'

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry '...' to cache 
fileHashes.bin (...).
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at