[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-02-04 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133716#comment-15133716
 ] 

Elias Levy commented on KAFKA-2729:
---

Had the same issue happen here while testing a 5 node Kafka cluster with a 3 
node ZK ensemble on Kubernetes on AWS.  After running for a while, broker 2 
started showing the "Cached zkVersion [29] not equal to that in zookeeper, skip 
updating ISR" error message for all the partitions it leads.  For those 
partitions it is the only in-sync replica.  That has caused the Samza jobs I was 
running to stop.
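
For context on what that log line means, here is a simplified sketch of the optimistic-concurrency 
pattern behind it (illustrative Scala only, not the actual kafka.cluster.Partition code): the broker 
caches the zkVersion of the partition state znode and only shrinks or expands the ISR through a 
conditional write against that version, so once the cached version goes stale every update attempt is 
skipped, which matches the stuck state described above.

{code}
// Illustrative sketch only -- not the actual kafka.cluster.Partition implementation.
case class PartitionState(isr: Set[Int], zkVersion: Int)

def tryUpdateIsr(cached: PartitionState,
                 newIsr: Set[Int],
                 zkVersionInZookeeper: Int,
                 conditionalWrite: (Set[Int], Int) => Option[Int]): PartitionState = {
  if (cached.zkVersion != zkVersionInZookeeper) {
    // The situation in the log above: the write is skipped and, because nothing
    // refreshes the cached version, every later attempt is skipped as well.
    println(s"Cached zkVersion [${cached.zkVersion}] not equal to that in zookeeper, skip updating ISR")
    cached
  } else {
    conditionalWrite(newIsr, cached.zkVersion) match {
      case Some(newVersion) => PartitionState(newIsr, newVersion)
      case None             => cached // lost the race; retry on the next check
    }
  }
}
{code}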

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This appeared for all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue. Possibly it's related to 
> https://issues.apache.org/jira/browse/KAFKA-1382, however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-02-04 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133716#comment-15133716
 ] 

Elias Levy edited comment on KAFKA-2729 at 2/5/16 5:55 AM:
---

Had the same issue happen here while testing a 5 node Kafka cluster with a 3 
node ZK ensemble on Kubernetes on AWS.  After running for a while, broker 2 
started showing the "Cached zkVersion [29] not equal to that in zookeeper, skip 
updating ISR" error message for all the partitions it leads.  For those 
partitions it is the only in-sync replica.  That has caused the Samza jobs I was 
running to stop.

I should note that I am running 0.9.0.0.


was (Author: elevy):
Had the same issue happen here while testing a 5 node Kafka cluster with a 3 
node ZK ensemble on Kubernetes on AWS.  After running for a while, broker 2 
started showing the "Cached zkVersion [29] not equal to that in zookeeper, skip 
updating ISR" error message for all the partitions it leads.  For those 
partitions it is the only in-sync replica.  That has caused the Samza jobs I was 
running to stop.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This appeared for all of the topics on the affected brokers. Both brokers only recovered 
> after a restart. Our own investigation yielded nothing; I was hoping you 
> could shed some light on this issue. Possibly it's related to 
> https://issues.apache.org/jira/browse/KAFKA-1382, however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2923) Improve 0.9.0 Upgrade Documents

2016-02-04 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133681#comment-15133681
 ] 

Grant Henke commented on KAFKA-2923:


I think KAFKA-3164 covered point 3. For point 2, there appear to be pretty good 
docs upstream now:

- http://kafka.apache.org/documentation.html#design_quotasenforcement
- http://kafka.apache.org/documentation.html#brokerconfigs

Is anything else needed [~guozhang]?
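
For reference, the defaults in question for point 2 are broker-side quota settings along these 
lines (values are illustrative, in bytes/sec; the brokerconfigs page above is authoritative for 
the exact names and defaults):

{code}
# server.properties (0.9.x) -- illustrative values only
quota.producer.default=1048576
quota.consumer.default=2097152
{code}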

> Improve 0.9.0 Upgrade Documents 
> 
>
> Key: KAFKA-2923
> URL: https://issues.apache.org/jira/browse/KAFKA-2923
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Grant Henke
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> A couple of places we can improve the upgrade docs:
> 1) Explanation about replica.lag.time.max.ms and how it relates to the old 
> configs.
> 2) Default quota configs.
> 3) Client-server compatibility: old clients working with new servers and new 
> clients working with old servers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2923) Improve 0.9.0 Upgrade Documents

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2923:
--

Assignee: Grant Henke

> Improve 0.9.0 Upgrade Documents 
> 
>
> Key: KAFKA-2923
> URL: https://issues.apache.org/jira/browse/KAFKA-2923
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Grant Henke
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> A couple of places we can improve the upgrade docs:
> 1) Explanation about replica.lag.time.max.ms and how it relates to the old 
> configs.
> 2) Default quota configs.
> 3) Client-server compatibility: old clients working with new servers and new 
> clients working with old servers?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2589) Documentation bug: the default value for the "rebalance.backoff.ms" property is not specified correctly

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2589:
---
Status: Patch Available  (was: Open)

> Documentation bug: the default value for the "rebalance.backoff.ms" property 
> is not specified correctly
> ---
>
> Key: KAFKA-2589
> URL: https://issues.apache.org/jira/browse/KAFKA-2589
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.2.1
> Environment: any
>Reporter: Bogdan Dimitriu
>Assignee: Grant Henke
>
> The documentation for 0.8.2.1 consumer properties specifies:
> | rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance 
> |
> According to the source code though the default value is obtained this way:
> {code}
> val rebalanceBackoffMs = props.getInt("rebalance.backoff.ms", zkSyncTimeMs)
> {code}
> which is referenced from here:
> {code}
> val zkSyncTimeMs = props.getInt("zookeeper.sync.time.ms", 2000)
> {code}
> So by default it is 2000 as specified in the documentation, UNLESS 
> {{zookeeper.sync.time.ms}} is manually set to something different.
> This may create confusion with recommendations such as:
> {quote}
> In this case, make sure that rebalance.max.retries * rebalance.backoff.ms > 
> zookeeper.session.timeout.ms
> {quote}
> from here: 
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog?
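
To make the defaulting chain above concrete, an illustrative consumer configuration (not taken 
from the ticket): with only {{zookeeper.sync.time.ms}} set, {{rebalance.backoff.ms}} silently 
inherits that value rather than 2000.

{code}
# consumer properties -- illustrative
zookeeper.sync.time.ms=6000
# rebalance.backoff.ms is left unset, so per the defaulting code quoted above its
# effective value becomes 6000, not the 2000 listed in the documentation table.
{code}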



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2589) Documentation bug: the default value for the "rebalance.backoff.ms" property is not specified correctly

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133674#comment-15133674
 ] 

ASF GitHub Bot commented on KAFKA-2589:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/876

KAFKA-2589: the default value for the "rebalance.backoff.ms" property…

… is not specified correctly

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka rebalance-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/876.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #876


commit 746a3cc88b05cc26e727b66ae808148066a07f1f
Author: Grant Henke 
Date:   2016-02-05T05:09:09Z

KAFKA-2589: the default value for the "rebalance.backoff.ms" property is 
not specified correctly




> Documentation bug: the default value for the "rebalance.backoff.ms" property 
> is not specified correctly
> ---
>
> Key: KAFKA-2589
> URL: https://issues.apache.org/jira/browse/KAFKA-2589
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.2.1
> Environment: any
>Reporter: Bogdan Dimitriu
>Assignee: Grant Henke
>
> The documentation for 0.8.2.1 consumer properties specifies:
> | rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance 
> |
> According to the source code though the default value is obtained this way:
> {code}
> val rebalanceBackoffMs = props.getInt("rebalance.backoff.ms", zkSyncTimeMs)
> {code}
> which is referenced from here:
> {code}
> val zkSyncTimeMs = props.getInt("zookeeper.sync.time.ms", 2000)
> {code}
> So by default it is 2000 as specified in the documentation, UNLESS 
> {{zookeeper.sync.time.ms}} is manually set to something different.
> This may create confusion with recommendations such as:
> {quote}
> In this case, make sure that rebalance.max.retries * rebalance.backoff.ms > 
> zookeeper.session.timeout.ms
> {quote}
> from here: 
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2589: the default value for the "rebalan...

2016-02-04 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/876

KAFKA-2589: the default value for the "rebalance.backoff.ms" property…

… is not specified correctly

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka rebalance-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/876.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #876


commit 746a3cc88b05cc26e727b66ae808148066a07f1f
Author: Grant Henke 
Date:   2016-02-05T05:09:09Z

KAFKA-2589: the default value for the "rebalance.backoff.ms" property is 
not specified correctly




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2589) Documentation bug: the default value for the "rebalance.backoff.ms" property is not specified correctly

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2589:
--

Assignee: Grant Henke

> Documentation bug: the default value for the "rebalance.backoff.ms" property 
> is not specified correctly
> ---
>
> Key: KAFKA-2589
> URL: https://issues.apache.org/jira/browse/KAFKA-2589
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.2.1
> Environment: any
>Reporter: Bogdan Dimitriu
>Assignee: Grant Henke
>
> The documentation for 0.8.2.1 consumer properties specifies:
> | rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance 
> |
> According to the source code though the default value is obtained this way:
> {code}
> val rebalanceBackoffMs = props.getInt("rebalance.backoff.ms", zkSyncTimeMs)
> {code}
> which is referenced from here:
> {code}
> val zkSyncTimeMs = props.getInt("zookeeper.sync.time.ms", 2000)
> {code}
> So by default it is 2000 as specified in the documentation, UNLESS 
> {{zookeeper.sync.time.ms}} is manually set to something different.
> This may create confusion with recommendations such as:
> {quote}
> In this case, make sure that rebalance.max.retries * rebalance.backoff.ms > 
> zookeeper.session.timeout.ms
> {quote}
> from here: 
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3037) Number of alive brokers not known after single node cluster startup

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3037:
---
Status: Patch Available  (was: Open)

> Number of alive brokers not known after single node cluster startup
> ---
>
> Key: KAFKA-3037
> URL: https://issues.apache.org/jira/browse/KAFKA-3037
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Grant Henke
>Priority: Minor
>
> A single-broker cluster is not aware of itself being alive. This can cause 
> failures in logic which relies on the number of alive brokers being known - e.g. 
> the consumer offsets topic creation logic depends on the number of alive 
> brokers being known.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3037: Test number of alive brokers known...

2016-02-04 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/875

KAFKA-3037: Test number of alive brokers known after single node clus…

…ter startup

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka self-aware

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #875


commit 21ec8efa3a1d218e1bd578a5498d25c8e3d95ee8
Author: Grant Henke 
Date:   2016-02-05T04:52:27Z

KAFKA-3037: Test number of alive brokers known after single node cluster 
startup




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3037) Number of alive brokers not known after single node cluster startup

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133657#comment-15133657
 ] 

ASF GitHub Bot commented on KAFKA-3037:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/875

KAFKA-3037: Test number of alive brokers known after single node clus…

…ter startup

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka self-aware

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/875.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #875


commit 21ec8efa3a1d218e1bd578a5498d25c8e3d95ee8
Author: Grant Henke 
Date:   2016-02-05T04:52:27Z

KAFKA-3037: Test number of alive brokers known after single node cluster 
startup




> Number of alive brokers not known after single node cluster startup
> ---
>
> Key: KAFKA-3037
> URL: https://issues.apache.org/jira/browse/KAFKA-3037
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Grant Henke
>Priority: Minor
>
> A single-broker cluster is not aware of itself being alive. This can cause 
> failures in logic which relies on the number of alive brokers being known - e.g. 
> the consumer offsets topic creation logic depends on the number of alive 
> brokers being known.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3037) Number of alive brokers not known after single node cluster startup

2016-02-04 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133656#comment-15133656
 ] 

Grant Henke commented on KAFKA-3037:


[~sslavic] I think this is resolved. I wrote a small unit test that fails in 
0.8.2 but passes in trunk and 0.9. I will open a PR with the test upstream; let 
me know if it doesn't cover the scenario you expect. 
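
The shape of the check is roughly the following; this is a self-contained sketch of the behaviour 
under test, not the actual Kafka code or the PR:

{code}
// Sketch only: on startup a broker should register itself, so the "alive brokers"
// view is non-empty; otherwise logic such as consumer-offsets topic creation,
// which consults that count, fails on a single-node cluster.
class MetadataCacheSketch {
  private var alive = Set.empty[Int]
  def registerBroker(id: Int): Unit = alive += id
  def aliveBrokerCount: Int = alive.size
}

val cache = new MetadataCacheSketch
cache.registerBroker(0)                 // the lone broker registers itself on startup
assert(cache.aliveBrokerCount == 1)     // the 0.8.2 behaviour was effectively this count staying 0
{code}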

> Number of alive brokers not known after single node cluster startup
> ---
>
> Key: KAFKA-3037
> URL: https://issues.apache.org/jira/browse/KAFKA-3037
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Grant Henke
>Priority: Minor
>
> A single-broker cluster is not aware of itself being alive. This can cause 
> failures in logic which relies on the number of alive brokers being known - e.g. 
> the consumer offsets topic creation logic depends on the number of alive 
> brokers being known.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3206) No reconciliation after partitioning

2016-02-04 Thread TAO XIAO (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133580#comment-15133580
 ] 

TAO XIAO commented on KAFKA-3206:
-

This is most likely due to unclean.leader.election.enable being set to true. It 
basically means any out-of-sync node can take over leadership and receive 
traffic.  If you have unclean.leader.election.enable turned off, the node you 
brought up in step 5 will not become a leader, and therefore no more messages can be 
sent to this partition.
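
For reference, the setting in question is a broker config (it can also be overridden per topic); 
the 0.9.0 broker-level default is true, which is what allows the out-of-sync node described above 
to take over leadership:

{code}
# server.properties
unclean.leader.election.enable=false
{code}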

> No reconciliation after partitioning
> 
>
> Key: KAFKA-3206
> URL: https://issues.apache.org/jira/browse/KAFKA-3206
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Arkadiusz Firus
>
> A Kafka topic partition can end up in an inconsistent state where two nodes see 
> different partition contents.
> Steps to reproduce the problem:
> 1. Create a topic with one partition and replication factor 2
> 2. Stop one of the Kafka nodes where that partition resides
> 3. Send the following messages to that topic:
> ABC
> BCD
> CDE
> 4. Stop the other Kafka node
> At this point none of the nodes should be running
> 5. Start the first Kafka node - this node has no records
> 6. Send the following records to the topic:
> 123
> 234
> 345
> 7. Start the other Kafka node
> The reconciliation should happen here but it does not.
> 8. When you read the topic you will see
> 123
> 234
> 345
> 9. When you stop the first node and read the topic you will see
> ABC
> BCD
> CDE 
> This means that the partition is in an inconsistent state.
> If you have any questions please feel free to e-mail me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1021

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3211: Handle WorkerTask stop before start correctly

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision dc662776cde8e980a3f978041adaf961edf0fe7d 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f dc662776cde8e980a3f978041adaf961edf0fe7d
 > git rev-list db8d6f02c092c42f2402b7e2587c1b28d330bf83 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson697217756391884583.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 28.403 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7402540622494928529.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 36.048 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Jenkins build is back to normal : kafka_0.9.0_jdk7 #113

2016-02-04 Thread Apache Jenkins Server
See 



[jira] [Resolved] (KAFKA-3211) Handle Connect WorkerTask shutdown before startup correctly

2016-02-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3211.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 874
[https://github.com/apache/kafka/pull/874]

> Handle Connect WorkerTask shutdown before startup correctly
> ---
>
> Key: KAFKA-3211
> URL: https://issues.apache.org/jira/browse/KAFKA-3211
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> This is a follow-up from KAFKA-3092. The shortcut exit from awaitShutdown() 
> in WorkerTask can lead to an inconsistent state if the task has not actually 
> begun execution yet. Although the caller will believe the task has completed 
> shutdown, nothing stops it from starting afterwards. This could happen if 
> there is a long delay between the time a task is submitted to an Executor and 
> the time it begins execution.
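
A minimal sketch of the race and one way to guard it (plain Scala with java.util.concurrent, not 
the actual org.apache.kafka.connect WorkerTask code): the idea is that a stop requested before the 
task ever runs must both prevent the body from starting and still release the shutdown latch.

{code}
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicBoolean

// Sketch only -- not the actual WorkerTask implementation.
class TaskSketch(body: () => Unit) {
  private val stopping = new AtomicBoolean(false)
  private val shutdownLatch = new CountDownLatch(1)

  // May run long after being submitted to an Executor.
  def run(): Unit = {
    try {
      if (!stopping.get()) body()   // if stop() already happened, never start the real work
    } finally {
      shutdownLatch.countDown()     // always released, so awaitShutdown() can't report a false shutdown
    }
  }

  def stop(): Unit = stopping.set(true)

  // A real implementation also has to handle the case where run() is never invoked at all.
  def awaitShutdown(): Unit = shutdownLatch.await()
}
{code}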



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3211) Handle Connect WorkerTask shutdown before startup correctly

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133500#comment-15133500
 ] 

ASF GitHub Bot commented on KAFKA-3211:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/874


> Handle Connect WorkerTask shutdown before startup correctly
> ---
>
> Key: KAFKA-3211
> URL: https://issues.apache.org/jira/browse/KAFKA-3211
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> This is a follow-up from KAFKA-3092. The shortcut exit from awaitShutdown() 
> in WorkerTask can lead to an inconsistent state if the task has not actually 
> begun execution yet. Although the caller will believe the task has completed 
> shutdown, nothing stops it from starting afterwards. This could happen if 
> there is a long delay between the time a task is submitted to an Executor and 
> the time it begins execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3211: handle WorkerTask stop before star...

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/874


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3211) Handle Connect WorkerTask shutdown before startup correctly

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133461#comment-15133461
 ] 

ASF GitHub Bot commented on KAFKA-3211:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/874

KAFKA-3211: handle WorkerTask stop before start correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3211

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #874


commit e5a08f53f4a8ca665aae1ea8cb81d11e9ace90ca
Author: Jason Gustafson 
Date:   2016-02-05T01:16:04Z

KAFKA-3211: handle WorkerTask stop before start correctly




> Handle Connect WorkerTask shutdown before startup correctly
> ---
>
> Key: KAFKA-3211
> URL: https://issues.apache.org/jira/browse/KAFKA-3211
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> This is a follow-up from KAFKA-3092. The shortcut exit from awaitShutdown() 
> in WorkerTask can lead to an inconsistent state if the task has not actually 
> begun execution yet. Although the caller will believe the task has completed 
> shutdown, nothing stops it from starting afterwards. This could happen if 
> there is a long delay between the time a task is submitted to an Executor and 
> the time it begins execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3211: handle WorkerTask stop before star...

2016-02-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/874

KAFKA-3211: handle WorkerTask stop before start correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3211

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/874.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #874


commit e5a08f53f4a8ca665aae1ea8cb81d11e9ace90ca
Author: Jason Gustafson 
Date:   2016-02-05T01:16:04Z

KAFKA-3211: handle WorkerTask stop before start correctly




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3214:
--

 Summary: Add consumer system tests for compressed topics
 Key: KAFKA-3214
 URL: https://issues.apache.org/jira/browse/KAFKA-3214
 Project: Kafka
  Issue Type: Test
  Components: consumer
Reporter: Jason Gustafson
Assignee: Jason Gustafson


As far as I can tell, we don't have any ducktape tests which verify correctness 
when compression is enabled. If we did, we might have caught KAFKA-3179 earlier.
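
For reference, exercising that path in a test mostly comes down to producing with compression 
enabled (illustrative producer config):

{code}
# producer config -- illustrative
compression.type=gzip    # or snappy / lz4
{code}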



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #347

2016-02-04 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-04 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133419#comment-15133419
 ] 

Anna Povzner commented on KAFKA-3188:
-

Cool! I took the compatibility test. I can take one more later if needed. 

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between the 0.10.0 broker and 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1020

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] KAFKA-3179; Fix seek on compressed messages

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu yahoo-not-h2) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision db8d6f02c092c42f2402b7e2587c1b28d330bf83 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f db8d6f02c092c42f2402b7e2587c1b28d330bf83
 > git rev-list 7802a90ed98ea5b9a2b2dcf2e04db1a50e34a2f8 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4322071509984109609.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 13.023 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1147662473454094598.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 16.228 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Assigned] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-04 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner reassigned KAFKA-3188:
---

Assignee: Anna Povzner

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between the 0.10.0 broker and 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133411#comment-15133411
 ] 

Jiangjie Qin commented on KAFKA-3188:
-

[~apovzner] Yes, please! I am actually busy with a couple of other tickets, so 
I may not be able to get to the system tests for a couple of days.

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between the 0.10.0 broker and 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3179) Kafka consumer delivers message whose offset is earlier than sought offset.

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133380#comment-15133380
 ] 

ASF GitHub Bot commented on KAFKA-3179:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/842


> Kafka consumer delivers message whose offset is earlier than sought offset.
> ---
>
> Key: KAFKA-3179
> URL: https://issues.apache.org/jira/browse/KAFKA-3179
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This problem is reproducible by seeking to the middle of a compressed message 
> set. Because KafkaConsumer does not filter out the messages earlier than the 
> sought offset within the compressed message set, the message returned to the user 
> will always be the first message in the compressed message set instead of the 
> message the user sought to.
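
For clients that cannot pick up the fix right away, a defensive client-side workaround is to drop 
anything returned below the sought offset after a seek. A sketch against the 0.9 Java consumer API 
used from Scala; the topic, partition, and target names are placeholders:

{code}
import java.util.Collections
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import scala.collection.JavaConverters._

// Sketch of a client-side guard for the behaviour described above: after seeking into the
// middle of a compressed message set, skip any records whose offset is below the one requested.
def seekAndRead(consumer: KafkaConsumer[String, String], topic: String, partition: Int, target: Long) = {
  val tp = new TopicPartition(topic, partition)
  consumer.assign(Collections.singletonList(tp))
  consumer.seek(tp, target)
  consumer.poll(1000).asScala.filter(_.offset() >= target)   // drop records before the sought offset
}
{code}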



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3179) Kafka consumer delivers message whose offset is earlier than sought offset.

2016-02-04 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-3179:
--
   Resolution: Fixed
Fix Version/s: (was: 0.9.0.1)
   0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 842
[https://github.com/apache/kafka/pull/842]

> Kafka consumer delivers message whose offset is earlier than sought offset.
> ---
>
> Key: KAFKA-3179
> URL: https://issues.apache.org/jira/browse/KAFKA-3179
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This problem is reproducible by seeking to the middle of a compressed message 
> set. Because KafkaConsumer does not filter out the messages earlier than the 
> sought offset within the compressed message set, the message returned to the user 
> will always be the first message in the compressed message set instead of the 
> message the user sought to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3179 Fix seek on compressed messages

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/842


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1019

2016-02-04 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-04 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1519#comment-1519
 ] 

Anna Povzner commented on KAFKA-3188:
-

[~becket_qin] Thanks! Do you mind if I take one of these three tickets? I can 
do either this one (Compatibility test) or the rolling upgrade. Let me know. 

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between the 0.10.0 broker and 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133305#comment-15133305
 ] 

Jiangjie Qin edited comment on KAFKA-3197 at 2/4/16 11:30 PM:
--

Thanks [~fpj].

Currently the error handling and retry logic is in the response handler. We 
wait for a request to fail, re-enqueue the failed batches from that request into the 
accumulator, and then resend. The part that makes me feel uncomfortable is that we 
have to look into the in-flight requests if we want to resend a potentially 
failed batch to a new leader.


was (Author: becket_qin):
Thanks [~fpj].

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 has migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have a re-order.
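
For reference, the configuration under discussion is a producer set up for strict ordering 
(illustrative values); the scenario above shows that even with these settings, a timed-out request 
retried against a new leader can still land after later batches:

{code}
# producer config -- illustrative
max.in.flight.requests.per.connection=1
retries=3
acks=all
{code}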



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka_0.9.0_jdk7 #112

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Removed unnecessary Vagrantfile hack

[me] MINOR: log connect reconfiguration error only if there was an error

--
[...truncated 2846 lines...]
kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED
:clients:checkstyleMain
FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'checkstyleMain' during 
up-to-date check.  See stacktrace for details.
> Could not read entry 
> '

[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133305#comment-15133305
 ] 

Jiangjie Qin commented on KAFKA-3197:
-

Thanks [~fpj].

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 has migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have a re-order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3213) [CONNECT] It looks like we are not backing off properly when reconfiguring tasks

2016-02-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3213:

Description: 
Looking at logs of attempt to reconfigure connector while leader is restarting, 
I see:

{code}

[2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,803] DEBUG Sending POST with input 
[{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
 to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
{code}

Note that it looks like we are retrying every 1ms, while I'd expect a retry 
every 250ms.
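
For comparison, the expected behaviour is a plain fixed backoff between forwarding attempts, along 
these lines (a sketch in Scala, not the actual DistributedHerder code; 250 ms is the expectation 
stated above):

{code}
// Sketch of the expected retry-with-backoff behaviour -- not the actual Connect code.
def forwardWithBackoff(attempt: () => Boolean, backoffMs: Long = 250L, maxRetries: Int = 10): Boolean = {
  var tries = 0
  while (tries < maxRetries) {
    if (attempt()) return true
    tries += 1
    Thread.sleep(backoffMs)   // the log above suggests this wait is effectively being skipped
  }
  false
}
{code}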


  was:
Looking at logs of attempt to reconfigure connector while leader is restarting, 
I see:

[code]

[2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,803] DEBUG Sending POST with input 
[{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
 to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[code]
Note that it looks like we are retrying every 1ms, while I'd expect a retry 
every 250ms.



> [CONNECT] It looks like we are not backing off properly when reconfiguring 
> tasks
> 
>

[jira] [Updated] (KAFKA-3213) [CONNECT] It looks like we are not backing off properly when reconfiguring tasks

2016-02-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3213:

Description: 
Looking at logs of attempt to reconfigure connector while leader is restarting, 
I see:

[code]

[2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,803] DEBUG Sending POST with input 
[{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
 to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
{code}
Note that it looks like we are retrying every 1ms, while I'd expect a retry 
every 250ms.


  was:
Looking at logs of attempt to reconfigure connector while leader is restarting, 
I see:


[2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,803] DEBUG Sending POST with input 
[{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
 to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused


Note that it looks like we are retrying every 1ms, while I'd expect a retry 
every 250ms.



> [CONNECT] It looks like we are not backing off properly when reconfiguring 
> tasks
> 
>
>   

[jira] [Created] (KAFKA-3213) [CONNECT] It looks like we are not backing off properly when reconfiguring tasks

2016-02-04 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3213:
---

 Summary: [CONNECT] It looks like we are not backing off properly 
when reconfiguring tasks
 Key: KAFKA-3213
 URL: https://issues.apache.org/jira/browse/KAFKA-3213
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Gwen Shapira
Assignee: Ewen Cheslack-Postava


Looking at logs of attempt to reconfigure connector while leader is restarting, 
I see:


[2016-01-29 20:31:01,799] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,802] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,802] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused
[2016-01-29 20:31:01,803] DEBUG Sending POST with input 
[{"tables":"bar","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"},{"tables":"foo","table.poll.interval.ms":"1000","incrementing.column.name":"id","connection.url":"jdbc:mysql://worker1:3306/testdb?user=root","name":"test-mysql-jdbc","tasks.max":"3","task.class":"io.confluent.connect.jdbc.JdbcSourceTask","poll.interval.ms":"1000","topic.prefix":"test-","connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing"}]
 to http://worker2:8083/connectors/test-mysql-jdbc/tasks 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2016-01-29 20:31:01,803] ERROR IO error forwarding REST request:  
(org.apache.kafka.connect.runtime.rest.RestServer)
java.net.ConnectException: Connection refused
[2016-01-29 20:31:01,804] ERROR Request to leader to reconfigure connector 
tasks failed (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: IO Error 
trying to forward REST request: Connection refused


Note that it looks like we are retrying every 1ms, while I'd expect a retry 
every 250ms.
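
A minimal Java sketch of the fixed 250 ms backoff the report expects between 
forwarding attempts. This only illustrates the retry policy; it is not the 
actual DistributedHerder code, and the class and method names are hypothetical.

{code}
public final class ReconfigureRetry {
    private static final long RECONFIGURE_BACKOFF_MS = 250L; // expected retry interval

    // Retries the forwarding call with a fixed pause instead of a tight ~1 ms loop.
    public static void forwardWithBackoff(Runnable forwardRequest, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                forwardRequest.run();        // e.g. POST the task configs to the leader
                return;                      // success: stop retrying
            } catch (RuntimeException e) {   // e.g. connection refused while the leader restarts
                if (attempt == maxAttempts)
                    throw e;
                Thread.sleep(RECONFIGURE_BACKOFF_MS); // back off before the next attempt
            }
        }
    }
}
{code}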




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133265#comment-15133265
 ] 

Flavio Junqueira commented on KAFKA-3197:
-

[~becket_qin] thanks for the clarification. 

bq. If the leader moves, that does not necessarily mean the request to the old 
leader failed. We can always send the unacknowledged message to the new leader, 
but that would probably introduce duplicates almost every time the leader moves.

I agree that duplicates are inconvenient, but in this scenario we aren't 
promising no duplicates, so I'd rather treat the duplicates separately.

bq. Currently after batches leave the record accumulator, we only track them in 
requests.

The record accumulator point is a good one and I'm not super familiar with that 
part of the code, so I don't have any concrete suggestion right now, but I'll 
have a closer look. However, 

bq. So while the idea of resending unacknowledged messages to both the old and 
new leader is natural and makes sense, it seems much more complicated and error 
prone based on our current implementation and does not buy us much.

True, from your description, it sounds like the change isn't trivial. But let 
me ask you this: don't we ever have to retransmit messages after a leader 
change? If we do, then the code path for retransmitting on a different 
connection must be there. I'm not super familiar with that part of the code, so 
I don't have any concrete suggestion right now, but I can have a look to see if 
I'm able to help out.
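
For reference, a minimal Java sketch of the producer settings this scenario 
involves; the broker addresses and topic name are hypothetical. With retries 
enabled, max.in.flight.requests.per.connection=1 is what is expected to preserve 
ordering; the bug is that a leader move lets newer batches reach the new leader 
before the timed-out batch is retried.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SingleInFlightProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "brokerA:9092,brokerB:9092");   // hypothetical brokers
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("retries", "3");                                // lost requests are retried
        props.put("max.in.flight.requests.per.connection", "1");  // one unacknowledged request per broker
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "message 0"));
        }
    }
}
{code}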



> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 has migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so the messages are reordered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: more info in error msg

2016-02-04 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/873

MINOR: more info in error msg

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #873






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133215#comment-15133215
 ] 

ASF GitHub Bot commented on KAFKA-3207:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/865


> StateStore seems to be writing state to one topic but restoring from another
> 
>
> Key: KAFKA-3207
> URL: https://issues.apache.org/jira/browse/KAFKA-3207
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.1.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.1.0
>
>
> The state store (I am using the in-memory state store) writes to a topic called 
> [store-name] but restores from [job-id]-[store-name]-changelog.  You can see 
> in StoreChangeLogger that it writes to a topic which is the [store-name] 
> passed through from the store supplier factory, but restores from the above 
> topic name. My topology is:
>   TopologyBuilder builder = new TopologyBuilder();
>   SerializerAdapter commonKeyAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   SerializerAdapter gamePlayAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
> kafkaStreamConfig.getGamePlayTopic());
>   Duration activityInterval = 
> kafkaStreamConfig.getActivityInterval();
>   if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
> activityInterval.toMinutes() != 0)
>   {
>   throw new SystemFaultException(
>   "The game activity interval must be a multiple 
> of 5 minutes and divide into 24 hours current value [" +
>   activityInterval.toMinutes() + "]");
>   }
>   builder.addProcessor("PROCESS", new 
> GameActivitySupplier(kafkaStreamConfig.getStoreName(),
>
> kafkaStreamConfig.getGameActivitySendPeriod(),
>
> activityInterval,
>
> kafkaStreamConfig.getRemoveOldestTime(),
>
> kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");
>   SerializerAdapter storeValueAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addStateStore(
>   
> Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
> commonKeyAdapter).withValues(
>   storeValueAdapter, 
> storeValueAdapter).inMemory().build(), "PROCESS");
>   builder.addSink("SINK", 
> kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
>   new 
> SerializerAdapter(JDKBinarySerializer.INSTANCE), 
> "PROCESS");



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3207.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 865
[https://github.com/apache/kafka/pull/865]

> StateStore seems to be writing state to one topic but restoring from another
> 
>
> Key: KAFKA-3207
> URL: https://issues.apache.org/jira/browse/KAFKA-3207
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.1.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.1.0
>
>
> The state store (I am using the in-memory state store) writes to a topic called 
> [store-name] but restores from [job-id]-[store-name]-changelog.  You can see 
> in StoreChangeLogger that it writes to a topic which is the [store-name] 
> passed through from the store supplier factory, but restores from the above 
> topic name. My topology is:
>   TopologyBuilder builder = new TopologyBuilder();
>   SerializerAdapter commonKeyAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   SerializerAdapter gamePlayAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
> kafkaStreamConfig.getGamePlayTopic());
>   Duration activityInterval = 
> kafkaStreamConfig.getActivityInterval();
>   if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
> activityInterval.toMinutes() != 0)
>   {
>   throw new SystemFaultException(
>   "The game activity interval must be a multiple 
> of 5 minutes and divide into 24 hours current value [" +
>   activityInterval.toMinutes() + "]");
>   }
>   builder.addProcessor("PROCESS", new 
> GameActivitySupplier(kafkaStreamConfig.getStoreName(),
>
> kafkaStreamConfig.getGameActivitySendPeriod(),
>
> activityInterval,
>
> kafkaStreamConfig.getRemoveOldestTime(),
>
> kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");
>   SerializerAdapter storeValueAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addStateStore(
>   
> Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
> commonKeyAdapter).withValues(
>   storeValueAdapter, 
> storeValueAdapter).inMemory().build(), "PROCESS");
>   builder.addSink("SINK", 
> kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
>   new 
> SerializerAdapter(JDKBinarySerializer.INSTANCE), 
> "PROCESS");



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3207: Fix StateChangeLogger to use the r...

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/865


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: KTable.count() to only take a selector ...

2016-02-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/872

MINOR: KTable.count() to only take a selector for key



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KCount

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #872


commit 913a617240eadb853ca8196df97e9578eca8f729
Author: Guozhang Wang 
Date:   2016-02-04T22:37:31Z

first version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: log connect reconfiguration error only ...

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/871


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: log connect reconfiguration error only ...

2016-02-04 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/871

MINOR: log connect reconfiguration error only if there was an error



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka fix-cc-log

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #871


commit f2cdc0bd71f4528df912e45fc62a65146661
Author: Gwen Shapira 
Date:   2016-02-04T22:30:52Z

MINOR: log connect reconfiguration error only if there was an error




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #346

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Removed unnecessary Vagrantfile hack

[wangguoz] HOTFIX: fix partition ordering in assignment

--
[...truncated 82 lines...]
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:305:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/network/BlockingChannel.scala:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
11 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:395:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:293:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more informat

[GitHub] kafka pull request: KAFKA-3192: Add unwindowed aggregations for KS...

2016-02-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/870

KAFKA-3192: Add unwindowed aggregations for KStream



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3192

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #870


commit bf4c4cb3dbb5b4066d9c3e0ada5b7ffd98eb129a
Author: Guozhang Wang 
Date:   2016-01-14T20:27:58Z

add internal source topic for tracking

commit 1485dff08a76c6ff685b0fe72226ce3b629b1d3c
Author: Guozhang Wang 
Date:   2016-01-14T22:32:08Z

minor fix for this.interSourceTopics

commit 60cafd0885c41f93e408f8d89880187ddec789a1
Author: Guozhang Wang 
Date:   2016-01-15T01:09:00Z

add KStream windowed aggregation

commit 983a626008d987828deabe45d75e26e909032843
Author: Guozhang Wang 
Date:   2016-01-15T01:34:56Z

merge from apache trunk

commit 57051720de4238feb4dc3c505053096042a87d9c
Author: Guozhang Wang 
Date:   2016-01-15T21:38:53Z

v1

commit 4a49205fcab3a05ed1fd05a34c7a9a92794b992d
Author: Guozhang Wang 
Date:   2016-01-15T22:07:17Z

minor fix on HoppingWindows

commit 9b4127e91c3a551fb655155d9b8e0df50132d0b7
Author: Guozhang Wang 
Date:   2016-01-15T22:43:14Z

fix HoppingWindows

commit 9649fe5c8a9b2e900e7746ae7b8745bb65694583
Author: Guozhang Wang 
Date:   2016-01-16T19:00:54Z

add retainDuplicate option in RocksDBWindowStore

commit 8a9ea02ac3f9962416defa79d16069431063eac0
Author: Guozhang Wang 
Date:   2016-01-16T19:06:12Z

minor fixes

commit 4123528cf4695b05235789ebfca3a63e8a832ffa
Author: Guozhang Wang 
Date:   2016-01-18T17:55:02Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3104

commit 46e8c8d285c0afae6da9ec7437d082060599f3f1
Author: Guozhang Wang 
Date:   2016-01-18T19:15:47Z

add wordcount and pipe jobs

commit 582d3ac24bfe08edb1c567461971cd35c1f75a00
Author: Guozhang Wang 
Date:   2016-01-18T21:53:21Z

merge from trunk

commit 5a002fadfcf760627274ddaa016deeaed5a3199f
Author: Guozhang Wang 
Date:   2016-01-19T00:06:34Z

1. WallClockTimestampExtractor as default; 2. remove windowMS config; 3. 
override state dir with jobId prefix;

commit 7425673e523c42806b29a364564a747443712a53
Author: Guozhang Wang 
Date:   2016-01-19T01:26:11Z

Add PageViewJob

commit ca04ba8d18674c521ad67872562a7671cb0e2c0d
Author: Guozhang Wang 
Date:   2016-01-19T06:23:05Z

minor changes on topic names

commit 563cc546b3a0dd16d586d2df33c37d2c5a5bfb18
Author: Guozhang Wang 
Date:   2016-01-19T21:30:11Z

change config importance levels

commit 4218904505363e61bb4c6b60dc5b13badfd39697
Author: Guozhang Wang 
Date:   2016-01-21T00:11:34Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 26fb5f3f5a8c9b304c5b1e61778c6bc1d9d5fccb
Author: Guozhang Wang 
Date:   2016-01-21T06:43:04Z

demo examples v1

commit 6d92a55d770e058183daabb7aaef7675335fbbad
Author: Guozhang Wang 
Date:   2016-01-22T00:41:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 929e405058eb61d38510120f3f3ed50cd0cfaf47
Author: Guozhang Wang 
Date:   2016-01-22T01:02:04Z

add RollingRocksDBStore

commit 324eb584b97ed3c228347d108d697d2f5133ea99
Author: Guozhang Wang 
Date:   2016-01-22T01:23:32Z

modify MeteredWindowStore

commit 7ba2d90fe1de1ca776cea23ff1c2e8f8b3a6c3f2
Author: Guozhang Wang 
Date:   2016-01-22T01:35:10Z

remove getter

commit a4d78bac9d84dfd1c7dab4ae465b9115ddc451b3
Author: Guozhang Wang 
Date:   2016-01-22T01:36:51Z

remove RollingRocksDB

commit d0e8198ac6a25315d7ab8d21894acf0077f88fde
Author: Guozhang Wang 
Date:   2016-01-22T17:24:32Z

adding cache layer on RocksDB

commit 257b53d3b6df967f8a015a06c9e178d4219d0f8c
Author: Guozhang Wang 
Date:   2016-01-22T23:15:08Z

dummy

commit 25fd73107c577ac2e4b32300d4fe132ad7ff7312
Author: Guozhang Wang 
Date:   2016-01-22T23:21:29Z

merge from trunk

commit e57abf3c117d4ea7398c6157983d75443194ce9f
Author: Guozhang Wang 
Date:   2016-01-22T23:29:59Z

merge from demo example changes again

commit a0caee14d4a7b77045d9813a1072143db0fc8fa1
Author: Guozhang Wang 
Date:   2016-01-22T23:50:19Z

resolve conflicts

commit 5501e6fef16fdef55cb2136d8c3f8035d394d79f
Author: Guozhang Wang 
Date:   2016-01-26T17:12:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3060

commit d73b594147d1b4e25a61497550f498b9f14822a0
Author: Guozhang Wang 
Date:   2016-01-26T19:04:11Z

move logging to rocksDBStore




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is ena

[jira] [Commented] (KAFKA-3192) Add implicit unlimited windowed aggregation for KStream

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133179#comment-15133179
 ] 

ASF GitHub Bot commented on KAFKA-3192:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/870

KAFKA-3192: Add unwindowed aggregations for KStream



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3192

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #870


commit bf4c4cb3dbb5b4066d9c3e0ada5b7ffd98eb129a
Author: Guozhang Wang 
Date:   2016-01-14T20:27:58Z

add internal source topic for tracking

commit 1485dff08a76c6ff685b0fe72226ce3b629b1d3c
Author: Guozhang Wang 
Date:   2016-01-14T22:32:08Z

minor fix for this.interSourceTopics

commit 60cafd0885c41f93e408f8d89880187ddec789a1
Author: Guozhang Wang 
Date:   2016-01-15T01:09:00Z

add KStream windowed aggregation

commit 983a626008d987828deabe45d75e26e909032843
Author: Guozhang Wang 
Date:   2016-01-15T01:34:56Z

merge from apache trunk

commit 57051720de4238feb4dc3c505053096042a87d9c
Author: Guozhang Wang 
Date:   2016-01-15T21:38:53Z

v1

commit 4a49205fcab3a05ed1fd05a34c7a9a92794b992d
Author: Guozhang Wang 
Date:   2016-01-15T22:07:17Z

minor fix on HoppingWindows

commit 9b4127e91c3a551fb655155d9b8e0df50132d0b7
Author: Guozhang Wang 
Date:   2016-01-15T22:43:14Z

fix HoppingWindows

commit 9649fe5c8a9b2e900e7746ae7b8745bb65694583
Author: Guozhang Wang 
Date:   2016-01-16T19:00:54Z

add retainDuplicate option in RocksDBWindowStore

commit 8a9ea02ac3f9962416defa79d16069431063eac0
Author: Guozhang Wang 
Date:   2016-01-16T19:06:12Z

minor fixes

commit 4123528cf4695b05235789ebfca3a63e8a832ffa
Author: Guozhang Wang 
Date:   2016-01-18T17:55:02Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3104

commit 46e8c8d285c0afae6da9ec7437d082060599f3f1
Author: Guozhang Wang 
Date:   2016-01-18T19:15:47Z

add wordcount and pipe jobs

commit 582d3ac24bfe08edb1c567461971cd35c1f75a00
Author: Guozhang Wang 
Date:   2016-01-18T21:53:21Z

merge from trunk

commit 5a002fadfcf760627274ddaa016deeaed5a3199f
Author: Guozhang Wang 
Date:   2016-01-19T00:06:34Z

1. WallClockTimestampExtractor as default; 2. remove windowMS config; 3. 
override state dir with jobId prefix;

commit 7425673e523c42806b29a364564a747443712a53
Author: Guozhang Wang 
Date:   2016-01-19T01:26:11Z

Add PageViewJob

commit ca04ba8d18674c521ad67872562a7671cb0e2c0d
Author: Guozhang Wang 
Date:   2016-01-19T06:23:05Z

minor changes on topic names

commit 563cc546b3a0dd16d586d2df33c37d2c5a5bfb18
Author: Guozhang Wang 
Date:   2016-01-19T21:30:11Z

change config importance levels

commit 4218904505363e61bb4c6b60dc5b13badfd39697
Author: Guozhang Wang 
Date:   2016-01-21T00:11:34Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 26fb5f3f5a8c9b304c5b1e61778c6bc1d9d5fccb
Author: Guozhang Wang 
Date:   2016-01-21T06:43:04Z

demo examples v1

commit 6d92a55d770e058183daabb7aaef7675335fbbad
Author: Guozhang Wang 
Date:   2016-01-22T00:41:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3066

commit 929e405058eb61d38510120f3f3ed50cd0cfaf47
Author: Guozhang Wang 
Date:   2016-01-22T01:02:04Z

add RollingRocksDBStore

commit 324eb584b97ed3c228347d108d697d2f5133ea99
Author: Guozhang Wang 
Date:   2016-01-22T01:23:32Z

modify MeteredWindowStore

commit 7ba2d90fe1de1ca776cea23ff1c2e8f8b3a6c3f2
Author: Guozhang Wang 
Date:   2016-01-22T01:35:10Z

remove getter

commit a4d78bac9d84dfd1c7dab4ae465b9115ddc451b3
Author: Guozhang Wang 
Date:   2016-01-22T01:36:51Z

remove RollingRocksDB

commit d0e8198ac6a25315d7ab8d21894acf0077f88fde
Author: Guozhang Wang 
Date:   2016-01-22T17:24:32Z

adding cache layer on RocksDB

commit 257b53d3b6df967f8a015a06c9e178d4219d0f8c
Author: Guozhang Wang 
Date:   2016-01-22T23:15:08Z

dummy

commit 25fd73107c577ac2e4b32300d4fe132ad7ff7312
Author: Guozhang Wang 
Date:   2016-01-22T23:21:29Z

merge from trunk

commit e57abf3c117d4ea7398c6157983d75443194ce9f
Author: Guozhang Wang 
Date:   2016-01-22T23:29:59Z

merge from demo example changes again

commit a0caee14d4a7b77045d9813a1072143db0fc8fa1
Author: Guozhang Wang 
Date:   2016-01-22T23:50:19Z

resolve conflicts

commit 5501e6fef16fdef55cb2136d8c3f8035d394d79f
Author: Guozhang Wang 
Date:   2016-01-26T17:12:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K3060

commit d73b594147d1b4e25a61497550f498b9f14822a0
Author: Guozhang Wang 
Date:   2016-01-2

[jira] [Resolved] (KAFKA-2216) ConsumerMetadataRequest does not validate consumer group exists

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2216.

Resolution: Not A Problem

No longer relevant with the new groups protocol.

> ConsumerMetadataRequest does not validate consumer group exists
> ---
>
> Key: KAFKA-2216
> URL: https://issues.apache.org/jira/browse/KAFKA-2216
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently the offset partition is found by hashing & modding the name:
> {quote}
>// Utils.abs(group.hashCode) % config.offsetsTopicNumPartitions
>val partition = offsetManager.partitionFor(consumerMetadataRequest.group)
> {quote}
> And then the broker for the partition is returned as the coordinator broker. 
> I expected an error to be returned for a group that does not exist. If this 
> is not the expectation, then updating the docs may be a result of this Jira.
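
A minimal Java sketch of the lookup quoted above: the coordinator partition is 
the non-negative hash of the group name modulo the number of offsets-topic 
partitions (offsets.topic.num.partitions, 50 by default). The class name is 
hypothetical.

{code}
public final class CoordinatorPartition {
    // Mirrors Utils.abs(group.hashCode) % offsetsTopicNumPartitions from the description.
    public static int partitionFor(String group, int offsetsTopicNumPartitions) {
        return (group.hashCode() & 0x7fffffff) % offsetsTopicNumPartitions; // non-negative hash
    }

    public static void main(String[] args) {
        // Any group name maps to some partition, whether or not the group exists.
        System.out.println(partitionFor("my-consumer-group", 50));
    }
}
{code}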



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: fix partition ordering in assignment

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/868


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3208) Default security.inter.broker.protocol based on configured advertised.listeners

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3208:
---
Affects Version/s: 0.9.0.0

> Default security.inter.broker.protocol based on configured 
> advertised.listeners
> ---
>
> Key: KAFKA-3208
> URL: https://issues.apache.org/jira/browse/KAFKA-3208
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently the default security.inter.broker.protocol is PLAINTEXT. However, 
> when enabling SASL or SSL on a cluster it is common to not include a 
> PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 
> To reduce the number of issues and make the configuration more convenient we 
> could default this value, if not explicitly set, to the first protocol 
> defined in the advertised.listeners configuration. 
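
A hedged server.properties sketch of the proposed behaviour (hostnames are 
hypothetical): with only an SSL listener advertised and no explicit inter-broker 
setting, the inter-broker protocol would default to the first advertised 
protocol instead of PLAINTEXT.

{code}
# Broker configured with SSL only; no PLAINTEXT listener is exposed.
listeners=SSL://0.0.0.0:9093
advertised.listeners=SSL://broker1.example.com:9093

# security.inter.broker.protocol is not set. Today it defaults to PLAINTEXT and
# startup fails (after KAFKA-3194); under this proposal it would default to SSL,
# the first protocol in advertised.listeners.
{code}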



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3208) Default security.inter.broker.protocol based on configured advertised.listeners

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3208:
---
Status: Patch Available  (was: Open)

> Default security.inter.broker.protocol based on configured 
> advertised.listeners
> ---
>
> Key: KAFKA-3208
> URL: https://issues.apache.org/jira/browse/KAFKA-3208
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently the default security.inter.broker.protocol is PLAINTEXT. However, 
> when enabling SASL or SSL on a cluster it is common to not include a 
> PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 
> To reduce the number of issues and make the configuration more convenient we 
> could default this value, if not explicitly set, to the first protocol 
> defined in the advertised.listeners configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3208) Default security.inter.broker.protocol based on configured advertised.listeners

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133152#comment-15133152
 ] 

ASF GitHub Bot commented on KAFKA-3208:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/869

KAFKA-3208: Default security.inter.broker.protocol based on configure…

…d advertised.listeners

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka default-broker-protocol

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/869.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #869


commit 0e7214c19c2e787f2b65556628cc2e441c7f5c48
Author: Grant Henke 
Date:   2016-02-04T22:10:38Z

KAFKA-3208: Default security.inter.broker.protocol based on configured 
advertised.listeners




> Default security.inter.broker.protocol based on configured 
> advertised.listeners
> ---
>
> Key: KAFKA-3208
> URL: https://issues.apache.org/jira/browse/KAFKA-3208
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently the default security.inter.broker.protocol is PLAINTEXT. However, 
> when enabling SASL or SSL on a cluster it is common to not include a 
> PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 
> To reduce the number of issues and make the configuration more convenient we 
> could default this value, if not explicitly set, to the first protocol 
> defined in the advertised.listeners configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3208: Default security.inter.broker.prot...

2016-02-04 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/869

KAFKA-3208: Default security.inter.broker.protocol based on configure…

…d advertised.listeners

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka default-broker-protocol

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/869.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #869


commit 0e7214c19c2e787f2b65556628cc2e441c7f5c48
Author: Grant Henke 
Date:   2016-02-04T22:10:38Z

KAFKA-3208: Default security.inter.broker.protocol based on configured 
advertised.listeners




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: fix partition ordering in assignment

2016-02-04 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/868

HOTFIX: fix partition ordering in assignment

workround partition ordering not preserved by the consumer group management.
@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka partitionOrder

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #868


commit b964e7b0d76d7071d27926a3da684a9157abb522
Author: Yasuhiro Matsuda 
Date:   2016-02-04T22:05:05Z

HOTFIX: fix partition ordering in assignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3212) Race Condition for Repartition Topics

2016-02-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3212.
--
Resolution: Not A Problem

But the consumer will not send its first metadata request until it knows the 
coordinator, which happens after the first rebalance, by which point the 
repartition topics have already been created. So this is not a problem.

> Race Condition for Repartition Topics
> -
>
> Key: KAFKA-3212
> URL: https://issues.apache.org/jira/browse/KAFKA-3212
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> One type of internal topics for Kafka Streams is re-partition topics, used 
> for table aggregations. It is considered as part of the source topics as well 
> that will be subscribed by consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3212) Race Condition for Repartition Topics

2016-02-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3212:
-
Description: One type of internal topics for Kafka Streams is re-partition 
topics, used for table aggregations. It is considered as part of the source 
topics as well that will be subscribed by consumers.  (was: One type of 
internal topics for Kafka Streams is re-partition topics, used for table 
aggregations. It is considered as part of the source topics as well that will 
be subscribed by consumers.

However, when consumer subscribe to the)

> Race Condition for Repartition Topics
> -
>
> Key: KAFKA-3212
> URL: https://issues.apache.org/jira/browse/KAFKA-3212
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> One type of internal topics for Kafka Streams is re-partition topics, used 
> for table aggregations. It is considered as part of the source topics as well 
> that will be subscribed by consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3212) Race Condition for Repartition Topics

2016-02-04 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3212:


 Summary: Race Condition for Repartition Topics
 Key: KAFKA-3212
 URL: https://issues.apache.org/jira/browse/KAFKA-3212
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


One type of internal topics for Kafka Streams is re-partition topics, used for 
table aggregations. It is considered as part of the source topics as well that 
will be subscribed by consumers.

However, when consumer subscribe to the



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Removed unnecessary Vagrantfile hack

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/867


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #345

2016-02-04 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1018

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: fix broken WorkerSourceTask test

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f8598f96df3500cdea15a913d78de201469244b0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f8598f96df3500cdea15a913d78de201469244b0
 > git rev-list 77683c3cb0eab8c85eb13d0e1397cf9ee32586b6 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4165564316112892144.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 17.874 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson792950333715495389.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 677175 found in cache 
> '

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 25.275 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-3206) No reconciliation after partitioning

2016-02-04 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132862#comment-15132862
 ] 

Vahid Hashemian commented on KAFKA-3206:


I can reproduce this issue. It looks like after step 6 the two brokers are 
sync'ed only on their partition offsets, and not the actual messages received.

> No reconciliation after partitioning
> 
>
> Key: KAFKA-3206
> URL: https://issues.apache.org/jira/browse/KAFKA-3206
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Arkadiusz Firus
>
> Kafka topic partition could be in an inconsistent state where two nodes see 
> different partition state.
> Steps to reproduce the problem:
> 1. Create topic with one partition and replication factor 2
> 2. Stop one of the Kafka nodes where that partition resides
> 3. Send following messages to that topic:
> ABC
> BCD
> CDE
> 3. Stop the other Kafka node
> Currently none of the nodes should be running
> 4. Start first Kafka node - this node has no records
> 5. Send following records to the topic:
> 123
> 234
> 345
> 6. Start the other Kafka node
> The reconciliation should happen here but it does not.
> 7. When you read the topic you will see
> 123
> 234
> 345
> 8. When you stop the first node and read the topic you will see
> ABC
> BCD
> CDE 
> This means that the partition is in inconsistent state.
> If you have any questions please feel free to e-mail me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3211) Handle Connect WorkerTask shutdown before startup correctly

2016-02-04 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3211:
---
Summary: Handle Connect WorkerTask shutdown before startup correctly  (was: 
Handle WorkerTask shutdown before startup correctly)

> Handle Connect WorkerTask shutdown before startup correctly
> ---
>
> Key: KAFKA-3211
> URL: https://issues.apache.org/jira/browse/KAFKA-3211
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> This is a follow-up from KAFKA-3092. The shortcut exit from awaitShutdown() 
> in WorkerTask can lead to an inconsistent state if the task has not actually 
> begun execution yet. Although the caller will believe the task has completed 
> shutdown, nothing stops it from starting afterwards. This could happen if 
> there is a long delay between the time a task is submitted to an Executor and 
> the time it begins execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3211) Handle WorkerTask shutdown before startup correctly

2016-02-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3211:
--

 Summary: Handle WorkerTask shutdown before startup correctly
 Key: KAFKA-3211
 URL: https://issues.apache.org/jira/browse/KAFKA-3211
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Jason Gustafson
Assignee: Jason Gustafson


This is a follow-up from KAFKA-3092. The shortcut exit from awaitShutdown() in 
WorkerTask can lead to an inconsistent state if the task has not actually begun 
execution yet. Although the caller will believe the task has completed 
shutdown, nothing stops it from starting afterwards. This could happen if there 
is a long delay between the time a task is submitted to an Executor and the 
time it begins execution.
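
A minimal Java sketch of the kind of guard this calls for; it is not the actual 
WorkerTask code and the class name is hypothetical. It assumes the task is 
eventually run (or skipped) by its executor, and makes awaitShutdown() wait for 
that rather than returning early.

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public abstract class GuardedTask implements Runnable {
    private final AtomicBoolean shutdownRequested = new AtomicBoolean(false);
    private final CountDownLatch finished = new CountDownLatch(1);

    @Override
    public final void run() {
        try {
            // If stop() already ran, skip execution instead of starting late.
            if (!shutdownRequested.get())
                execute();
        } finally {
            finished.countDown(); // release waiters whether the task ran or was skipped
        }
    }

    public void stop() {
        shutdownRequested.set(true);
    }

    public void awaitShutdown() throws InterruptedException {
        finished.await(); // returns only once run() has completed or been skipped
    }

    protected abstract void execute(); // the task body
}
{code}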



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: fix broken WorkerSourceTask test

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/859


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Removed unnecessary Vagrantfile hack

2016-02-04 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/867

MINOR: Removed unnecessary Vagrantfile hack

The hack here is no longer necessary with up-to-date versions of Vagrant, 
vagrant-hostmanager, and vagrant-aws. What's more, the change in c8b60b63 
caused a chain of infinite recursion on OSX.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka remove-vagrantfile-hack

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #867


commit 8799afe3190345457adaf12befb47e3a6f8adcf2
Author: Geoff Anderson 
Date:   2016-02-03T23:44:58Z

Removed Vagrantfile hack which is no longer necessary with up-to-date 
versions of Vagrant, vagrant-hostmanager, and vagrant-aws




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1017

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: temp fix for ktable look up

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu yahoo-not-h2) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 77683c3cb0eab8c85eb13d0e1397cf9ee32586b6 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77683c3cb0eab8c85eb13d0e1397cf9ee32586b6
 > git rev-list 99956f56c9f994b4f55315559c07310b3d599d9b # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1800469438818385144.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.112 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3022783368550352545.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.105 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132736#comment-15132736
 ] 

Jiangjie Qin commented on KAFKA-3203:
-

[~granthenke] The reason we don't use UnknownMagicByteException is probably
that previously the magic byte was always 0, so we didn't even check it in the
code. However, after KIP-31/KIP-32, and with the idempotent producer in the
future, I think we will check the magic byte, so I would prefer keeping it
there. There are two ways the magic byte can go wrong: 1) it is unknown, or
2) it is invalid because of something like a broker setting or a format
mismatch. Should we change the exception to InvalidMagicByteException?

I don't have a strong opinion on the error code mapping. But according to the
current description, a corrupted message specifically means the CRC does not
match. Personally I would prefer separate error codes for magic byte errors and
codec errors; that seems clearer to client developers and users.
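
As a purely illustrative sketch of the separate-error-code idea: the enum,
numeric codes, and exception classes below are hypothetical and are not Kafka's
actual Errors mapping.

{code}
// Hypothetical sketch only: give magic-byte and codec errors their own error
// codes instead of folding them into a generic UnknownServerException.
public final class IllustrativeErrorMapping {

    // Placeholder exception types, standing in for the ones discussed above.
    public static class InvalidMagicByteException extends RuntimeException {}
    public static class UnknownCodecException extends RuntimeException {}

    public enum Code {
        UNKNOWN_SERVER_ERROR((short) -1),
        CORRUPT_MESSAGE((short) 2),         // reserved for CRC mismatches only
        INVALID_MAGIC_BYTE((short) 101),    // hypothetical dedicated code
        UNKNOWN_MESSAGE_CODEC((short) 102); // hypothetical dedicated code

        private final short code;

        Code(short code) {
            this.code = code;
        }

        public short code() {
            return code;
        }
    }

    // Map a server-side exception to the code sent back to the client.
    public static Code forException(Throwable t) {
        if (t instanceof InvalidMagicByteException) {
            return Code.INVALID_MAGIC_BYTE;
        }
        if (t instanceof UnknownCodecException) {
            return Code.UNKNOWN_MESSAGE_CODEC;
        }
        return Code.UNKNOWN_SERVER_ERROR;
    }
}
{code}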

> Add UnknownCodecException and UnknownMagicByteException to error mapping
> 
>
> Key: KAFKA-3203
> URL: https://issues.apache.org/jira/browse/KAFKA-3203
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.10.0.0, 0.9.1.0
>
>
> Currently most of the exceptions returned to users have an error code. While
> UnknownCodecException and UnknownMagicByteException can also be thrown to the
> client, the broker does not have an error mapping for them, so clients will
> only receive UnknownServerException, which is vague.
> We should create those two exceptions in the client package and add them to
> the error mapping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3210) Using asynchronous calls through the raw ZK API in ZkUtils

2016-02-04 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-3210:

Assignee: (was: Neha Narkhede)

> Using asynchronous calls through the raw ZK API in ZkUtils
> --
>
> Key: KAFKA-3210
> URL: https://issues.apache.org/jira/browse/KAFKA-3210
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>
> We have observed a number of issues with the controller's interaction with
> ZooKeeper, mainly because ZkClient creates new sessions transparently under
> the hood. Creating sessions transparently enables, for example, an old
> controller to successfully update znodes in ZooKeeper even when it is no
> longer the controller (e.g., KAFKA-3083). To fix this, we need to bypass the
> ZkClient lib like we did with ZKWatchedEphemeral.
> In addition to fixing such races with the controller, it would improve
> performance significantly if we used the async API (see KAFKA-3038). The
> async API is more efficient because it pipelines the requests to ZooKeeper,
> and the number of requests upon controller recovery can be large.
> This jira proposes to make these two changes to the calls in ZkUtils. To do
> it, one path is to first replace the calls in ZkUtils with raw async ZK calls
> and block, so that we don't have to change the controller code in this phase.
> Once this step is accomplished and it is stable, we make changes to the
> controller to handle the asynchronous calls to ZooKeeper.
> Note that in the first step, we will need to introduce some logic for session
> management, which is currently handled entirely by ZkClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3210) Using asynchronous calls through the raw ZK API in ZkUtils

2016-02-04 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-3210:
---

 Summary: Using asynchronous calls through the raw ZK API in ZkUtils
 Key: KAFKA-3210
 URL: https://issues.apache.org/jira/browse/KAFKA-3210
 Project: Kafka
  Issue Type: Improvement
  Components: controller, zkclient
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira
Assignee: Neha Narkhede


We have observed a number of issues with the controller's interaction with
ZooKeeper, mainly because ZkClient creates new sessions transparently under the
hood. Creating sessions transparently enables, for example, an old controller
to successfully update znodes in ZooKeeper even when it is no longer the
controller (e.g., KAFKA-3083). To fix this, we need to bypass the ZkClient lib
like we did with ZKWatchedEphemeral.

In addition to fixing such races with the controller, it would improve
performance significantly if we used the async API (see KAFKA-3038). The async
API is more efficient because it pipelines the requests to ZooKeeper, and the
number of requests upon controller recovery can be large.

This jira proposes to make these two changes to the calls in ZkUtils. To do it,
one path is to first replace the calls in ZkUtils with raw async ZK calls and
block, so that we don't have to change the controller code in this phase. Once
this step is accomplished and it is stable, we make changes to the controller
to handle the asynchronous calls to ZooKeeper.

Note that in the first step, we will need to introduce some logic for session
management, which is currently handled entirely by ZkClient.
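
As a rough sketch of the first phase described above: wrap a raw asynchronous
ZooKeeper read and block on it, so existing ZkUtils callers stay synchronous.
The helper name readDataBlocking is made up for illustration, and session
management, watches, and retry handling are deliberately omitted.

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.zookeeper.AsyncCallback.DataCallback;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public final class RawZkRead {

    // Issue an async getData against the raw ZooKeeper handle and block until
    // the callback fires, so callers see the same synchronous behavior.
    public static byte[] readDataBlocking(ZooKeeper zk, String path)
            throws KeeperException, InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        final AtomicReference<byte[]> result = new AtomicReference<>();
        final AtomicReference<Code> rcRef = new AtomicReference<>();

        zk.getData(path, false, new DataCallback() {
            @Override
            public void processResult(int rc, String p, Object ctx,
                                      byte[] data, Stat stat) {
                result.set(data);
                rcRef.set(Code.get(rc));
                done.countDown();
            }
        }, null);

        done.await();
        if (rcRef.get() != Code.OK) {
            throw KeeperException.create(rcRef.get(), path);
        }
        return result.get();
    }
}
{code}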



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132712#comment-15132712
 ] 

ASF GitHub Bot commented on KAFKA-3088:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/866

KAFKA-3088: broker crash on receipt of produce request with empty cli…

…ent ID

- Adds  NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default 
of ""
- Fixes server handling of invalid ApiKey request and other invalid requests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka null-clientid

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/866.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #866


commit b17eb557ac989dfa19d32370eca4cb73eca30664
Author: Grant Henke 
Date:   2016-02-04T18:17:51Z

KAFKA-3088: broker crash on receipt of produce request with empty client ID

- Adds  NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default 
of ""
- Fixes server handling of invalid ApiKey request and other invalid requests
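
As an illustration of what a nullable, length-prefixed string can look like on
the wire, assuming a 2-byte length where -1 denotes null. The class below is a
standalone sketch, not the protocol code from this PR.

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Length-prefixed nullable string: a 2-byte length where -1 means null, so a
// missing client_id no longer breaks request parsing.
public final class NullableStringCodec {

    public static void write(ByteBuffer buf, String s) {
        if (s == null) {
            buf.putShort((short) -1);
        } else {
            byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
            buf.putShort((short) bytes.length);
            buf.put(bytes);
        }
    }

    public static String read(ByteBuffer buf) {
        short len = buf.getShort();
        if (len < 0) {
            return null;            // -1 length means null
        }
        byte[] bytes = new byte[len];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
{code}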




> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3088:
---
Status: Patch Available  (was: Open)

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3088: broker crash on receipt of produce...

2016-02-04 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/866

KAFKA-3088: broker crash on receipt of produce request with empty cli…

…ent ID

- Adds  NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default 
of ""
- Fixes server handling of invalid ApiKey request and other invalid requests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka null-clientid

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/866.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #866


commit b17eb557ac989dfa19d32370eca4cb73eca30664
Author: Grant Henke 
Date:   2016-02-04T18:17:51Z

KAFKA-3088: broker crash on receipt of produce request with empty client ID

- Adds  NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default 
of ""
- Fixes server handling of invalid ApiKey request and other invalid requests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132698#comment-15132698
 ] 

Jiangjie Qin edited comment on KAFKA-3197 at 2/4/16 6:18 PM:
-

[~fpj] [~ijuma] That was my first thought as well. On second thought, it might
be a little complicated for the current implementation.

This approach needs the following work:
1. Detect leader movement on each metadata refresh.
2. If the leader moves, that does not necessarily mean the request to the old
leader failed. We can always send the unacknowledged messages to the new
leader, but that would probably introduce duplicates almost every time the
leader moves.
3. Currently, after batches leave the record accumulator, we only track them in
requests. If the leader migrates, we now need to peek into every in-flight
request, take out the batches for the partition whose leader moved, and
re-enqueue them into the record accumulator. This is even more intrusive
because we store the batches in the ProduceResponseHandler, which we don't even
track today.

Compared with the current approach, the benefit of doing this seems to be that
we potentially don't need to wait for the request timeout if a broker is
actually down. However, given that the metadata refresh itself is usually
triggered by a request timeout, this benefit becomes marginal.

So while the idea of resending unacknowledged messages to both the old and new
leader is natural and makes sense, it seems much more complicated and
error-prone given our current implementation, and it does not buy us much.


was (Author: becket_qin):
[~fpj] [~ijuma] That was my first thinking as well. After a second thought it 
might be a little bit complicated for the current implementation.

This approach needs the following works:
1. Detect leader movement on each metadata refresh.
2. If leader moves, that does not necessarily mean the request to old leader 
failed. We can always send the unacknowledged message to new leader, but that 
probably introduce duplicate almost every time leader moves.
3. Currently after batches leave the record accumulator, we only track them in 
requests. If leader migrates, now we need to peek into every in flight request, 
take out the batches to the partition whose leader moved, and re-enqueue them 
in the to record accumulator. This is even more intrusive because we store the 
batches in the ProduceResponseHandler which we don't even track today. 

Compared with current approach, the benefit of doing that seems we potentially 
don't need to wait for request timeout if a broker is actually down. However, 
given the metadata refresh itself is usually triggered by request timeout, this 
benefit becomes marginal. 

So while the idea of resend unacknowledged message to both old and new leader 
is natural and makes sense, it seems much more complicated and error prone 
based on our current implementation and does not buy us much.

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The
> in-flight request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried and
> sent to broker B. At this point, all the later messages have already been
> sent, so we have a re-order.
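
For reference, a minimal producer setup matching this scenario: a single
in-flight request per connection with retries enabled. The bootstrap address,
topic name, and payload below are placeholders for illustration only.

{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class SingleInFlightProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("acks", "1");
        props.put("retries", "3");
        // One in-flight request per connection, as in the scenario above.
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Even with one in-flight request, a retried send can land after later
        // records if the partition leader moved in between.
        producer.send(new ProducerRecord<>("topic_1", "key", "message-0"));
        producer.close();
    }
}
{code}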



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132699#comment-15132699
 ] 

Guozhang Wang commented on KAFKA-3207:
--

[~tom_dearman] I have uploaded a patch in the above PR. Could you try to apply
it and see if it solves your problem when you get time? Thanks.

> StateStore seems to be writing state to one topic but restoring from another
> 
>
> Key: KAFKA-3207
> URL: https://issues.apache.org/jira/browse/KAFKA-3207
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.1.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Guozhang Wang
>Priority: Blocker
>
> The state store (I am using in-memory state store) writes to a topic called 
> [store-name] but restores from [job-id]-[store-name]-changelog.  You can see 
> in StoreChangeLogger that it writes to a topic which is the [store-name] 
> passed through from the store supplier factory, but restores from the above 
> topic name. My topology is:
>   TopologyBuilder builder = new TopologyBuilder();
>   SerializerAdapter commonKeyAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   SerializerAdapter gamePlayAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
> kafkaStreamConfig.getGamePlayTopic());
>   Duration activityInterval = 
> kafkaStreamConfig.getActivityInterval();
>   if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
> activityInterval.toMinutes() != 0)
>   {
>   throw new SystemFaultException(
>   "The game activity interval must be a multiple 
> of 5 minutes and divide into 24 hours current value [" +
>   activityInterval.toMinutes() + "]");
>   }
>   builder.addProcessor("PROCESS", new 
> GameActivitySupplier(kafkaStreamConfig.getStoreName(),
>
> kafkaStreamConfig.getGameActivitySendPeriod(),
>
> activityInterval,
>
> kafkaStreamConfig.getRemoveOldestTime(),
>
> kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");
>   SerializerAdapter storeValueAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addStateStore(
>   
> Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
> commonKeyAdapter).withValues(
>   storeValueAdapter, 
> storeValueAdapter).inMemory().build(), "PROCESS");
>   builder.addSink("SINK", 
> kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
>   new 
> SerializerAdapter(JDKBinarySerializer.INSTANCE), 
> "PROCESS");



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132698#comment-15132698
 ] 

Jiangjie Qin commented on KAFKA-3197:
-

[~fpj] [~ijuma] That was my first thought as well. On second thought, it might
be a little complicated for the current implementation.

This approach needs the following work:
1. Detect leader movement on each metadata refresh.
2. If the leader moves, that does not necessarily mean the request to the old
leader failed. We can always send the unacknowledged messages to the new
leader, but that would probably introduce duplicates almost every time the
leader moves.
3. Currently, after batches leave the record accumulator, we only track them in
requests. If the leader migrates, we now need to peek into every in-flight
request, take out the batches for the partition whose leader moved, and
re-enqueue them into the record accumulator. This is even more intrusive
because we store the batches in the ProduceResponseHandler, which we don't even
track today.

Compared with the current approach, the benefit of doing this seems to be that
we potentially don't need to wait for the request timeout if a broker is
actually down. However, given that the metadata refresh itself is usually
triggered by a request timeout, this benefit becomes marginal.

So while the idea of resending unacknowledged messages to both the old and new
leader is natural and makes sense, it seems much more complicated and
error-prone given our current implementation, and it does not buy us much.

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The
> in-flight request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried and
> sent to broker B. At this point, all the later messages have already been
> sent, so we have a re-order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: temp fix for ktable look up

2016-02-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/864


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132692#comment-15132692
 ] 

ASF GitHub Bot commented on KAFKA-3207:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/865

KAFKA-3207: Fix StateChangeLogger to use the right topic name



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3207

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #865


commit 0e6392cfa51d9ba0cc1abb9052abdff3316cec74
Author: Guozhang Wang 
Date:   2016-02-04T18:13:05Z

fis StateChangeLogger to use the right topic name




> StateStore seems to be writing state to one topic but restoring from another
> 
>
> Key: KAFKA-3207
> URL: https://issues.apache.org/jira/browse/KAFKA-3207
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.1.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Guozhang Wang
>Priority: Blocker
>
> The state store (I am using in-memory state store) writes to a topic called 
> [store-name] but restores from [job-id]-[store-name]-changelog.  You can see 
> in StoreChangeLogger that it writes to a topic which is the [store-name] 
> passed through from the store supplier factory, but restores from the above 
> topic name. My topology is:
>   TopologyBuilder builder = new TopologyBuilder();
>   SerializerAdapter commonKeyAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   SerializerAdapter gamePlayAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
> kafkaStreamConfig.getGamePlayTopic());
>   Duration activityInterval = 
> kafkaStreamConfig.getActivityInterval();
>   if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
> activityInterval.toMinutes() != 0)
>   {
>   throw new SystemFaultException(
>   "The game activity interval must be a multiple 
> of 5 minutes and divide into 24 hours current value [" +
>   activityInterval.toMinutes() + "]");
>   }
>   builder.addProcessor("PROCESS", new 
> GameActivitySupplier(kafkaStreamConfig.getStoreName(),
>
> kafkaStreamConfig.getGameActivitySendPeriod(),
>
> activityInterval,
>
> kafkaStreamConfig.getRemoveOldestTime(),
>
> kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");
>   SerializerAdapter storeValueAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addStateStore(
>   
> Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
> commonKeyAdapter).withValues(
>   storeValueAdapter, 
> storeValueAdapter).inMemory().build(), "PROCESS");
>   builder.addSink("SINK", 
> kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
>   new 
> SerializerAdapter(JDKBinarySerializer.INSTANCE), 
> "PROCESS");



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3207: Fix StateChangeLogger to use the r...

2016-02-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/865

KAFKA-3207: Fix StateChangeLogger to use the right topic name



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3207

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #865


commit 0e6392cfa51d9ba0cc1abb9052abdff3316cec74
Author: Guozhang Wang 
Date:   2016-02-04T18:13:05Z

fis StateChangeLogger to use the right topic name




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: temp fix for ktable look up

2016-02-04 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/864

HOTFIX: temp fix for ktable look up

@guozhangwang 
Temporarily disabled state store access checking.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka fix_table_lookup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/864.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #864


commit e25415c2082ec12b52ec22f4b774d73eee4f6526
Author: Yasuhiro Matsuda 
Date:   2016-02-04T18:05:56Z

HOTFIX: temp fix for ktable look up




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Pin to system tests to ducktape 0.3.10

2016-02-04 Thread granders
Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/863


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3209) Support single message transforms in Kafka Connect

2016-02-04 Thread Neha Narkhede (JIRA)
Neha Narkhede created KAFKA-3209:


 Summary: Support single message transforms in Kafka Connect
 Key: KAFKA-3209
 URL: https://issues.apache.org/jira/browse/KAFKA-3209
 Project: Kafka
  Issue Type: Improvement
  Components: copycat
Reporter: Neha Narkhede


Users should be able to perform light transformations on messages between a 
connector and Kafka. This is needed because some transformations must be 
performed before the data hits Kafka (e.g. filtering certain types of events or 
PII filtering). It's also useful for very light, single-message modifications 
that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3208) Default security.inter.broker.protocol based on configured advertised.listeners

2016-02-04 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3208:
--

 Summary: Default security.inter.broker.protocol based on 
configured advertised.listeners
 Key: KAFKA-3208
 URL: https://issues.apache.org/jira/browse/KAFKA-3208
 Project: Kafka
  Issue Type: Improvement
Reporter: Grant Henke
Assignee: Grant Henke


Currently the default security.inter.broker.protocol is plaintext. However, 
when enabling Kerberos or SSL on a cluster it is common to not include a 
PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 

To reduce the number of issues and make the configuration more convenient we 
could default this value, if not explicitly set, to the first protocol defined 
in the advertised.listeners configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3208) Default security.inter.broker.protocol based on configured advertised.listeners

2016-02-04 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3208:
---
Description: 
Currently the default security.inter.broker.protocol is PLAINTEXT. However, 
when enabling SASL or SSL on a cluster it is common to not include a PLAINTEXT 
listener which causes Kafka to fail startup (after KAFKA-3194). 

To reduce the number of issues and make the configuration more convenient we 
could default this value, if not explicitly set, to the first protocol defined 
in the advertised.listeners configuration. 

  was:
Currently the default security.inter.broker.protocol is plaintext. However, 
when enabling Kerberos or SSL on a cluster it is common to not include a 
PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 

To reduce the number of issues and make the configuration more convenient we 
could default this value, if not explicitly set, to the first protocol defined 
in the advertised.listeners configuration. 


> Default security.inter.broker.protocol based on configured 
> advertised.listeners
> ---
>
> Key: KAFKA-3208
> URL: https://issues.apache.org/jira/browse/KAFKA-3208
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Currently the default security.inter.broker.protocol is PLAINTEXT. However, 
> when enabling SASL or SSL on a cluster it is common to not include a 
> PLAINTEXT listener which causes Kafka to fail startup (after KAFKA-3194). 
> To reduce the number of issues and make the configuration more convenient we 
> could default this value, if not explicitly set, to the first protocol 
> defined in the advertised.listeners configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132580#comment-15132580
 ] 

Ismael Juma commented on KAFKA-3197:


I was wondering the same thing [~fpj]

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The
> in-flight request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried and
> sent to broker B. At this point, all the later messages have already been
> sent, so we have a re-order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-04 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132571#comment-15132571
 ] 

Flavio Junqueira commented on KAFKA-3197:
-

Treating it as a bug sounds right. In the example given in the description, 
when the producer connects to broker B, shouldn't it resend unacknowledged 
messages (0 in the example) over the new connection (to broker B in the 
example)? It can produce duplicates as has been pointed out, but eliminating 
duplicates is a separate matter.

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. The producer sends message 0 to topic-partition-0 on broker A. The
> in-flight request count to broker A is 1.
> 2. The request is somehow lost.
> 3. The producer refreshes its topic metadata and finds that the leader of
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried and
> sent to broker B. At this point, all the later messages have already been
> sent, so we have a re-order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3207:


Assignee: Guozhang Wang

> StateStore seems to be writing state to one topic but restoring from another
> 
>
> Key: KAFKA-3207
> URL: https://issues.apache.org/jira/browse/KAFKA-3207
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.9.1.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Guozhang Wang
>Priority: Blocker
>
> The state store (I am using in-memory state store) writes to a topic called 
> [store-name] but restores from [job-id]-[store-name]-changelog.  You can see 
> in StoreChangeLogger that it writes to a topic which is the [store-name] 
> passed through from the store supplier factory, but restores from the above 
> topic name. My topology is:
>   TopologyBuilder builder = new TopologyBuilder();
>   SerializerAdapter commonKeyAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   SerializerAdapter gamePlayAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
> kafkaStreamConfig.getGamePlayTopic());
>   Duration activityInterval = 
> kafkaStreamConfig.getActivityInterval();
>   if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
> activityInterval.toMinutes() != 0)
>   {
>   throw new SystemFaultException(
>   "The game activity interval must be a multiple 
> of 5 minutes and divide into 24 hours current value [" +
>   activityInterval.toMinutes() + "]");
>   }
>   builder.addProcessor("PROCESS", new 
> GameActivitySupplier(kafkaStreamConfig.getStoreName(),
>
> kafkaStreamConfig.getGameActivitySendPeriod(),
>
> activityInterval,
>
> kafkaStreamConfig.getRemoveOldestTime(),
>
> kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");
>   SerializerAdapter storeValueAdapter = new 
> SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
>   builder.addStateStore(
>   
> Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
> commonKeyAdapter).withValues(
>   storeValueAdapter, 
> storeValueAdapter).inMemory().build(), "PROCESS");
>   builder.addSink("SINK", 
> kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
>   new 
> SerializerAdapter(JDKBinarySerializer.INSTANCE), 
> "PROCESS");



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Pluggable Log Compaction Policy

2016-02-04 Thread Bill Warshaw
Becket,

I put together a commit with a timestamp-based deletion policy, at
https://github.com/apache/kafka/commit/2c51ae3cead99432ebf19f0303f8cc797723c939
Is this a small enough change that you'd be comfortable incorporating it
into your work on KIP-32, or do I need to open a separate KIP?

Thanks,
Bill Warshaw

On Mon, Feb 1, 2016 at 12:35 PM, Becket Qin  wrote:

> Hi Bill,
>
> The PR is still under review. It might take some more time because it
> touches a bunch of files. You can watch KAFKA-3025 so once it gets closed
> you will get email notification.
> Looking forward to your tool.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Feb 1, 2016 at 6:54 AM, Bill Warshaw 
> wrote:
>
> > Becket,
> >
> > I took a look at KIP-32 and your PR for it.  This looks like something
> that
> > would be great to build off of; I'm envisioning a timestamp-based policy
> > where the client application sets a minimum timestamp, before which
> > everything can be deleted / compacted.  How far along is this pull
> request?
> >
> > Bill Warshaw
> >
> > On Fri, Jan 22, 2016 at 12:41 AM, Becket Qin 
> wrote:
> >
> > > I agree with Guozhang that this seems better to be a separate tool.
> > >
> > > Also, I am wondering if KIP-32 can be used here. We can have a
> timestamp
> > > based compaction policy if needed, for example, keep any message whose
> > > timestamp is greater than (MaxTimestamp - 24 hours).
> > >
> > > Jiangjie (Becket) Qin
> > >
> > >
> > >
> > > On Thu, Jan 21, 2016 at 4:35 PM, Guozhang Wang 
> > wrote:
> > >
> > > > Bill,
> > > >
> > > > For your case since once the log is cleaned up to the given offset
> > > > watermark (or threshold, whatever the name is), future cleaning with
> > the
> > > > same watermark will effectively be a no-op, so I feel your scenario
> > will
> > > be
> > > > better fit as a one-time admin tool to cleanup the logs rather than
> > > > customizing the periodic cleaning policy. Does this sound reasonable
> to
> > > > you?
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Wed, Jan 20, 2016 at 7:09 PM, Bill Warshaw <
> bill.wars...@appian.com
> > >
> > > > wrote:
> > > >
> > > > > For our particular use case, we would need to.  This proposal is
> > really
> > > > two
> > > > > separate pieces:  custom log compaction policy, and the ability to
> > set
> > > > > arbitrary key-value pairs in a Topic configuration.
> > > > >
> > > > > I believe that Kafka's current behavior of throwing errors when it
> > > > > encounters configuration keys that aren't defined is meant to help
> > > users
> > > > > not misconfigure their configuration files.  If that is the sole
> > > > motivation
> > > > > for it, I would propose adding a property namespace, and allow
> users
> > to
> > > > > configure arbitrary properties behind that particular namespace,
> > while
> > > > > still enforcing strict parsing for all other properties.
> > > > >
> > > > > On Wed, Jan 20, 2016 at 9:23 PM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > > > So do you need to periodically update the key-value pairs to
> > "advance
> > > > the
> > > > > > threshold for each topic"?
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Wed, Jan 20, 2016 at 5:51 PM, Bill Warshaw <
> > > bill.wars...@appian.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Compaction would be performed in the same manner as it is
> > > currently.
> > > > > > There
> > > > > > > is a predicate applied in the "shouldRetainMessage" function in
> > > > > > LogCleaner;
> > > > > > > ultimately we just want to be able to swap a custom
> > implementation
> > > of
> > > > > > that
> > > > > > > particular method in.  Nothing else in the compaction codepath
> > > would
> > > > > need
> > > > > > > to change.
> > > > > > >
> > > > > > > For advancing the "threshold transaction_id", ideally we would
> be
> > > > able
> > > > > to
> > > > > > > set arbitrary key-value pairs on the topic configuration.  We
> > have
> > > > > access
> > > > > > > to the topic configuration during log compaction, so a custom
> > > policy
> > > > > > class
> > > > > > > would also have access to that config, and could read anything
> we
> > > > > stored
> > > > > > in
> > > > > > > there.
> > > > > > >
> > > > > > > On Wed, Jan 20, 2016 at 8:14 PM, Guozhang Wang <
> > wangg...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > > Hello Bill,
> > > > > > > >
> > > > > > > > Just to clarify your use case, is your "log compaction"
> > executed
> > > > > > > manually,
> > > > > > > > or it is triggered periodically like the current log cleaning
> > > > by-key
> > > > > > > does?
> > > > > > > > If it is the latter case, how will you advance the "threshold
> > > > > > > > transaction_id" each time when it executes?
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Jan 20, 2016 at 1:50 PM, Bill Warshaw <
> > > > > bill.wars...@appian.com
> > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > 
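
A minimal sketch of the pluggable retention predicate being discussed in this
thread. The interface name, method signature, and the topic-config key are
hypothetical illustrations, not an existing Kafka API; only the idea of a
swappable "should retain" decision comes from the discussion above.

{code}
import java.util.Map;

// Hypothetical plug-in point for the "shouldRetainMessage" decision. How it
// would be wired into LogCleaner is left out of this sketch.
public interface RetentionPolicy {

    // Return true if the record should be kept during compaction.
    boolean shouldRetain(String topic, byte[] key, byte[] value,
                         long offset, long timestamp,
                         Map<String, String> topicConfig);

    // Example policy: drop everything older than a per-topic minimum timestamp
    // stored in the topic configuration (the config key name is made up).
    class MinTimestampPolicy implements RetentionPolicy {
        @Override
        public boolean shouldRetain(String topic, byte[] key, byte[] value,
                                    long offset, long timestamp,
                                    Map<String, String> topicConfig) {
            String min = topicConfig.get("custom.retention.min.timestamp");
            return min == null || timestamp >= Long.parseLong(min);
        }
    }
}
{code}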

[jira] [Resolved] (KAFKA-3178) Expose a method in AdminUtils to manually truncate a specific partition to a particular offset

2016-02-04 Thread Bill Warshaw (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Warshaw resolved KAFKA-3178.
-
Resolution: Won't Fix

> Expose a method in AdminUtils to manually truncate a specific partition to a 
> particular offset
> --
>
> Key: KAFKA-3178
> URL: https://issues.apache.org/jira/browse/KAFKA-3178
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bill Warshaw
>Assignee: Vahid Hashemian
>  Labels: kafka
>
> h3. Description
> One of Kafka's officially-described use cases is a distributed commit log 
> (http://kafka.apache.org/documentation.html#uses_commitlog).  In this case, 
> for a distributed service that needed a commit log, there would be a topic 
> with a single partition to guarantee log order.  This service would use the 
> commit log to re-sync failed nodes.  Kafka is generally an excellent fit for 
> such a system, but it does not expose an adequate mechanism for log cleanup 
> in such a case.  The built-in log cleanup mechanisms are based on time / size 
> thresholds, which doesn't work well with a commit log; data can only be 
> deleted from a commit log when the client application determines that it is 
> no longer needed.  Here we propose a new API exposed to clients through 
> AdminUtils that will delete all messages before a certain offset from a 
> specific partition.
> h3. Rejected Alternatives
> - Manually setting / resetting time intervals for log retention configs to 
> periodically flush messages from the logs from before a certain time period.  
> Doing this involves several asynchronous processes, none of which provide any 
> hooks to know when they are actually complete.
> - Rolling a new topic each time we want to cleanup the log.  This is the best 
> existing approach, but is not ideal.  All incoming writes would be paused 
> while waiting for a new topic to be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3207) StateStore seems to be writing state to one topic but restoring from another

2016-02-04 Thread Tom Dearman (JIRA)
Tom Dearman created KAFKA-3207:
--

 Summary: StateStore seems to be writing state to one topic but 
restoring from another
 Key: KAFKA-3207
 URL: https://issues.apache.org/jira/browse/KAFKA-3207
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Affects Versions: 0.9.1.0
 Environment: MacOS El Capitan
Reporter: Tom Dearman
Priority: Blocker


The state store (I am using in-memory state store) writes to a topic called 
[store-name] but restores from [job-id]-[store-name]-changelog.  You can see in 
StoreChangeLogger that it writes to a topic which is the [store-name] passed 
through from the store supplier factory, but restores from the above topic 
name. My topology is:
TopologyBuilder builder = new TopologyBuilder();

SerializerAdapter commonKeyAdapter = new 
SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
SerializerAdapter gamePlayAdapter = new 
SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
builder.addSource("SOURCE", commonKeyAdapter, gamePlayAdapter, 
kafkaStreamConfig.getGamePlayTopic());

Duration activityInterval = 
kafkaStreamConfig.getActivityInterval();
if (activityInterval.toMinutes() % 5 != 0 || 24 * 60 % 
activityInterval.toMinutes() != 0)
{
throw new SystemFaultException(
"The game activity interval must be a multiple 
of 5 minutes and divide into 24 hours current value [" +
activityInterval.toMinutes() + "]");
}
builder.addProcessor("PROCESS", new 
GameActivitySupplier(kafkaStreamConfig.getStoreName(),
 
kafkaStreamConfig.getGameActivitySendPeriod(),
 
activityInterval,
 
kafkaStreamConfig.getRemoveOldestTime(),
 
kafkaStreamConfig.getRemoveAbsoluteTime()), "SOURCE");

SerializerAdapter storeValueAdapter = new 
SerializerAdapter<>(JDKBinarySerializer.INSTANCE);
builder.addStateStore(

Stores.create(kafkaStreamConfig.getStoreName()).withKeys(commonKeyAdapter, 
commonKeyAdapter).withValues(
storeValueAdapter, 
storeValueAdapter).inMemory().build(), "PROCESS");

builder.addSink("SINK", 
kafkaStreamConfig.getGameActivityTopic(), commonKeyAdapter,
new 
SerializerAdapter(JDKBinarySerializer.INSTANCE), 
"PROCESS");




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3206) No reconciliation after partitioning

2016-02-04 Thread Arkadiusz Firus (JIRA)
Arkadiusz Firus created KAFKA-3206:
--

 Summary: No reconciliation after partitioning
 Key: KAFKA-3206
 URL: https://issues.apache.org/jira/browse/KAFKA-3206
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
Reporter: Arkadiusz Firus


A Kafka topic partition can end up in an inconsistent state where two nodes see
different states for the same partition.

Steps to reproduce the problem:
1. Create a topic with one partition and replication factor 2
2. Stop one of the Kafka nodes where that partition resides
3. Send the following messages to that topic:
ABC
BCD
CDE
4. Stop the other Kafka node
At this point none of the nodes should be running
5. Start the first Kafka node - this node has no records
6. Send the following records to the topic:
123
234
345
7. Start the other Kafka node
The reconciliation should happen here but it does not.
8. When you read the topic you will see
123
234
345
9. When you stop the first node and read the topic you will see
ABC
BCD
CDE

This means that the partition is in an inconsistent state.

If you have any questions please feel free to e-mail me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3204) ConsumerConnector blocked on Authenticated by SASL Failed.

2016-02-04 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132370#comment-15132370
 ] 

Ismael Juma commented on KAFKA-3204:


ZooKeeper SASL authentication is only supported in Kafka 0.9 with the new Java 
consumer.

> ConsumerConnector blocked on Authenticated by SASL Failed.
> --
>
> Key: KAFKA-3204
> URL: https://issues.apache.org/jira/browse/KAFKA-3204
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.0
>Reporter: shenyuan wang
>
> We've set up Kafka to use ZK authentication and tested authentication
> failures. The test program has been blocked, repeatedly retrying the
> connection to ZK. The log repeats like this:
> 2016-02-04 17:24:47 INFO  FourLetterWordMain:46 - connecting to 10.75.202.42 
> 24002
> 2016-02-04 17:24:47 INFO  ClientCnxn:1211 - Opening socket connection to 
> server 10.75.202.42/10.75.202.42:24002. Will not attempt to authenticate 
> using SASL (unknown error)
> 2016-02-04 17:24:47 INFO  ClientCnxn:981 - Socket connection established, 
> initiating session, client: /10.61.22.215:56060, server: 
> 10.75.202.42/10.75.202.42:24002
> 2016-02-04 17:24:47 INFO  ClientCnxn:1472 - Session establishment complete on 
> server 10.75.202.42/10.75.202.42:24002, sessionid = 0xd013ed38238d1aa, 
> negotiated timeout = 4000
> 2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed 
> (SyncConnected)
> 2016-02-04 17:24:47 INFO  ClientCnxn:1326 - Unable to read additional data 
> from server sessionid 0xd013ed38238d1aa, likely server has closed socket, 
> closing socket connection and attempting reconnect
> 2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed 
> (Disconnected)
> 2016-02-04 17:24:47 INFO  ZkClient:934 - Waiting for keeper state 
> SyncConnected



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka_0.9.0_jdk7 #111

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
c…

--
[...truncated 2915 lines...]
kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.SaslSslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslSslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslSslConsumerTest > testListTopics PASSED

kafka.api.SaslSslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslSslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslSslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslSslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

643 tests completed, 1 failed
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_11_7 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> file://

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskE

[jira] [Commented] (KAFKA-3205) Error in I/O with host (java.io.EOFException) raised in producer

2016-02-04 Thread Jonathan Raffre (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132073#comment-15132073
 ] 

Jonathan Raffre commented on KAFKA-3205:


KAFKA-2078 seems to resemble this, although it is not reproduced in the same way.

> Error in I/O with host (java.io.EOFException) raised in producer
> 
>
> Key: KAFKA-3205
> URL: https://issues.apache.org/jira/browse/KAFKA-3205
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Jonathan Raffre
>
> In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
> producers seem to raise the following error after a variable amount of time 
> after start:
> {noformat}
> 2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
> 172.22.2.170
> java.io.EOFException: null
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>  ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
> ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> {noformat}
> This can be reproduced successfully by doing the following :
>  * Start a 0.8.2 producer connected to the 0.9 broker
>  * Wait 15 minutes, exactly
>  * See the error in the producer logs.
> Oddly, this also shows up in an active producer but after 10 minutes of 
> activity.
> Kafka's server.properties :
> {noformat}
> broker.id=1
> listeners=PLAINTEXT://:9092
> port=9092
> num.network.threads=2
> num.io.threads=2
> socket.send.buffer.bytes=1048576
> socket.receive.buffer.bytes=1048576
> socket.request.max.bytes=104857600
> log.dirs=/mnt/data/kafka
> num.partitions=4
> auto.create.topics.enable=false
> delete.topic.enable=true
> num.recovery.threads.per.data.dir=1
> log.retention.hours=48
> log.retention.bytes=524288000
> log.segment.bytes=52428800
> log.retention.check.interval.ms=6
> log.roll.hours=24
> log.cleanup.policy=delete
> log.cleaner.enable=true
> zookeeper.connect=127.0.0.1:2181
> zookeeper.connection.timeout.ms=100
> {noformat}
> Producer's configuration :
> {noformat}
>   compression.type = none
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   metadata.fetch.timeout.ms = 6
>   acks = all
>   batch.size = 16384
>   reconnect.backoff.ms = 10
>   bootstrap.servers = [127.0.0.1:9092]
>   receive.buffer.bytes = 32768
>   retry.backoff.ms = 500
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   retries = 3
>   max.request.size = 500
>   block.on.buffer.full = true
>   value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   metrics.sample.window.ms = 3
>   send.buffer.bytes = 131072
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   linger.ms = 0
>   client.id = 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #344

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
c…

--
[...truncated 4656 lines...]

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
SourceTask.initialize(): expected: 1, actual: 1
SourceTask.stop(): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testSlowTaskStart(Worke

[jira] [Updated] (KAFKA-3205) Error in I/O with host (java.io.EOFException) raised in producer

2016-02-04 Thread Jonathan Raffre (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Raffre updated KAFKA-3205:
---
Description: 
In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
producers seem to raise the following error after a variable amount of time 
after start:

{noformat}
2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
172.22.2.170
java.io.EOFException: null
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
{noformat}

This can be reproduced successfully by doing the following :

 * Start a 0.8.2 producer connected to the 0.9 broker
 * Wait 15 minutes, exactly
 * See the error in the producer logs.

Oddly, this also shows up in an active producer but after 10 minutes of 
activity.

Kafka's server.properties :

{noformat}
broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mnt/data/kafka
num.partitions=4
auto.create.topics.enable=false
delete.topic.enable=true
num.recovery.threads.per.data.dir=1
log.retention.hours=48
log.retention.bytes=524288000
log.segment.bytes=52428800
log.retention.check.interval.ms=6
log.roll.hours=24
log.cleanup.policy=delete
log.cleaner.enable=true
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=100
{noformat}

Producer's configuration :

{noformat}
compression.type = none
metric.reporters = []
metadata.max.age.ms = 30
metadata.fetch.timeout.ms = 6
acks = all
batch.size = 16384
reconnect.backoff.ms = 10
bootstrap.servers = [127.0.0.1:9092]
receive.buffer.bytes = 32768
retry.backoff.ms = 500
buffer.memory = 33554432
timeout.ms = 3
key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
retries = 3
max.request.size = 500
block.on.buffer.full = true
value.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
metrics.sample.window.ms = 3
send.buffer.bytes = 131072
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
linger.ms = 0
client.id = 
{noformat}

  was:
In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
producers seem to raise the following error after a variable amount of time 
after start:

{noformat}
2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
172.22.2.170
java.io.EOFException: null
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
{noformat}

This can be reproduced successfully by doing the following :

 * Start a 0.8.2 producer connected to the 0.9 broker
 * Wait 15 minutes, exactly
 * See the error in the producer logs.

Oddly, this also shows up in an active producer but after 10 minutes of 
activity.

Kafka's server.properties :

{noformat}
broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mnt/data/kafka
num.partitions=4
auto.create.topics.enable=false
delete.topic.enable=true
num.recovery.threads.per.data.dir=1
log.retention.hours=48
log.retention.bytes=524288000
log.segment.bytes=52428800
log.retention.check.interval.ms=6
log.roll.hours=24
log.cleanup.policy=delete
log.cleaner.enable=true
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=100

[jira] [Created] (KAFKA-3205) Error in I/O with host (java.io.EOFException) raised in procuder

2016-02-04 Thread Jonathan Raffre (JIRA)
Jonathan Raffre created KAFKA-3205:
--

 Summary: Error in I/O with host (java.io.EOFException) raised in 
procuder
 Key: KAFKA-3205
 URL: https://issues.apache.org/jira/browse/KAFKA-3205
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0, 0.8.2.1
Reporter: Jonathan Raffre


In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
producers seem to raise the following error after a variable amount of time 
after start:

{noformat}
2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
172.22.2.170
java.io.EOFException: null
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
{noformat}

This can be reproduced by doing the following (see the sketch further below):

 * Start a 0.8.2 producer connected to the 0.9 broker
 * Wait 15 minutes, exactly
 * See the error in the producer logs.

Oddly, this also shows up in an active producer but after 10 minutes of 
activity.
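
For illustration only, a minimal sketch (not the reporter's code) of such a 
0.8.2.x java-client producer that connects once and then sits idle, following 
the steps above. The topic name and serializers are assumptions; the broker 
address matches the listener port in the server.properties below:

{noformat}
// Minimal idle-producer sketch (assumed, not the reporter's code).
// Uses the 0.8.2.x kafka-clients producer; topic name is hypothetical.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdleProducerRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092"); // 0.9 broker, port per server.properties
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // One send so the connection is established, then leave it idle.
        producer.send(new ProducerRecord<>("test-topic", "key", "value")).get();

        // Per the steps above, the EOFException shows up in the producer's
        // Selector logging after roughly 15 minutes of inactivity.
        Thread.sleep(20 * 60 * 1000L);
        producer.close();
    }
}
{noformat}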

Kafka's server.properties :

{noformat}
broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mnt/data/kafka
num.partitions=4
auto.create.topics.enable=false
delete.topic.enable=true
num.recovery.threads.per.data.dir=1
log.retention.hours=48
log.retention.bytes=524288000
log.segment.bytes=52428800
log.retention.check.interval.ms=6
log.roll.hours=24
log.cleanup.policy=delete
log.cleaner.enable=true
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=100
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3205) Error in I/O with host (java.io.EOFException) raised in producer

2016-02-04 Thread Jonathan Raffre (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Raffre updated KAFKA-3205:
---
Summary: Error in I/O with host (java.io.EOFException) raised in producer  
(was: Error in I/O with host (java.io.EOFException) raised in procuder)

> Error in I/O with host (java.io.EOFException) raised in producer
> 
>
> Key: KAFKA-3205
> URL: https://issues.apache.org/jira/browse/KAFKA-3205
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Jonathan Raffre
>
> In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
> producers seem to raise the following error after a variable amount of time 
> after start:
> {noformat}
> 2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
> 172.22.2.170
> java.io.EOFException: null
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>  ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
> ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> {noformat}
> This can be reproduced successfully by doing the following :
>  * Start a 0.8.2 producer connected to the 0.9 broker
>  * Wait 15 minutes, exactly
>  * See the error in the producer logs.
> Oddly, this also shows up in an active producer but after 10 minutes of 
> activity.
> Kafka's server.properties :
> {noformat}
> broker.id=1
> listeners=PLAINTEXT://:9092
> port=9092
> num.network.threads=2
> num.io.threads=2
> socket.send.buffer.bytes=1048576
> socket.receive.buffer.bytes=1048576
> socket.request.max.bytes=104857600
> log.dirs=/mnt/data/kafka
> num.partitions=4
> auto.create.topics.enable=false
> delete.topic.enable=true
> num.recovery.threads.per.data.dir=1
> log.retention.hours=48
> log.retention.bytes=524288000
> log.segment.bytes=52428800
> log.retention.check.interval.ms=6
> log.roll.hours=24
> log.cleanup.policy=delete
> log.cleaner.enable=true
> zookeeper.connect=127.0.0.1:2181
> zookeeper.connection.timeout.ms=100
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3204) ConsumerConnector blocked on Authenticated by SASL Failed.

2016-02-04 Thread shenyuan wang (JIRA)
shenyuan wang created KAFKA-3204:


 Summary: ConsumerConnector blocked on Authenticated by SASL Failed.
 Key: KAFKA-3204
 URL: https://issues.apache.org/jira/browse/KAFKA-3204
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Affects Versions: 0.8.2.0
Reporter: shenyuan wang


We've set up Kafka to use ZK authentication and are testing authentication 
failures. The test program has been blocked, repeatedly retrying the connection 
to ZK. The log repeats like this:
2016-02-04 17:24:47 INFO  FourLetterWordMain:46 - connecting to 10.75.202.42 
24002
2016-02-04 17:24:47 INFO  ClientCnxn:1211 - Opening socket connection to server 
10.75.202.42/10.75.202.42:24002. Will not attempt to authenticate using SASL 
(unknown error)
2016-02-04 17:24:47 INFO  ClientCnxn:981 - Socket connection established, 
initiating session, client: /10.61.22.215:56060, server: 
10.75.202.42/10.75.202.42:24002
2016-02-04 17:24:47 INFO  ClientCnxn:1472 - Session establishment complete on 
server 10.75.202.42/10.75.202.42:24002, sessionid = 0xd013ed38238d1aa, 
negotiated timeout = 4000
2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed (SyncConnected)
2016-02-04 17:24:47 INFO  ClientCnxn:1326 - Unable to read additional data from 
server sessionid 0xd013ed38238d1aa, likely server has closed socket, closing 
socket connection and attempting reconnect
2016-02-04 17:24:47 INFO  ZkClient:711 - zookeeper state changed (Disconnected)
2016-02-04 17:24:47 INFO  ZkClient:934 - Waiting for keeper state SyncConnected
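
For context, a minimal sketch (assumed, not the reporter's program) of the 
0.8.2 high-level consumer call that appears to block here; the group id is 
hypothetical and the ZK address is taken from the log above:

{noformat}
// Minimal sketch of the blocking call (assumed, not the reporter's code).
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class BlockedConsumerConnector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "10.75.202.42:24002"); // secured ZK from the log
        props.put("group.id", "test-group");                  // hypothetical group id
        props.put("zookeeper.connection.timeout.ms", "6000");

        // Per the report, with ZK authentication failing this call does not
        // return; ZkClient keeps cycling SyncConnected/Disconnected as logged above.
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        connector.shutdown();
    }
}
{noformat}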




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1016

2016-02-04 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
c…

--
[...truncated 1379 lines...]

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED
