Build failed in Jenkins: kafka-trunk-jdk7 #2569

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1110)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:560)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:485)
at hudson.model.Run.execute(Run.java:1735)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://git-wip-us.apache.org/repos/asf/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: unable to access 
'https://git-wip-us.apache.org/repos/asf/kafka.git/': Could not resolve host: 
git-wip-us.apache.org

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1903)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1622)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at ..remote call to cassandra12(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1545)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:830)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor864.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy104.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
... 11 more
ERROR: Error fetching remote repo 'origin'
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM
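
Every retry in the log above fails with the same root cause: the build slave cannot resolve the Git host's DNS name. A minimal, self-contained way to reproduce that class of failure (the ".invalid" hostname below is a reserved name guaranteed never to resolve, per RFC 2606; nothing here touches Jenkins or Git):

```python
import socket

def can_resolve(host):
    """Return True if the host name resolves to at least one address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# ".invalid" is reserved (RFC 2606) and never resolves, mimicking the
# "Could not resolve host" failure in the build log above.
print(can_resolve("git-wip-us.invalid"))   # False
print(can_resolve("localhost"))            # True
```

Running a check like this on the slave distinguishes a DNS outage from a repository-side problem.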

[jira] [Created] (KAFKA-5642) use async ZookeeperClient everywhere

2017-07-26 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-5642:
---

 Summary: use async ZookeeperClient everywhere
 Key: KAFKA-5642
 URL: https://issues.apache.org/jira/browse/KAFKA-5642
 Project: Kafka
  Issue Type: Sub-task
Reporter: Onur Karaman
Assignee: Onur Karaman


Synchronous zookeeper writes mean that we wait an entire round trip before 
doing the next write. These synchronous writes happen at per-partition 
granularity in several places, so partition-heavy clusters suffer from the 
controller making many sequential round trips to zookeeper.
* PartitionStateMachine.electLeaderForPartition updates leaderAndIsr in 
zookeeper on the transition to OnlinePartition. During controlled shutdown, 
this gets triggered sequentially, per partition, with synchronous writes for 
each replica led by the shutting-down broker.
* ReplicaStateMachine updates leaderAndIsr in zookeeper on the transition to 
OfflineReplica when calling KafkaController.removeReplicaFromIsr. This gets 
triggered sequentially, per partition, with synchronous writes for failed or 
cleanly shut down brokers.

KAFKA-5501 introduced an async ZookeeperClient that encourages pipelined 
requests to zookeeper. We should replace ZkClient's usage with this client.
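
The win from pipelining can be sketched with a toy simulation (nothing below is Kafka or ZooKeeper code; the round-trip time, path names, and partition count are made up): with requests in flight concurrently, the total wait is roughly one round trip instead of one round trip per partition.

```python
import concurrent.futures
import time

RTT = 0.05  # simulated ZooKeeper round-trip latency in seconds (made up)

def zk_write(path):
    """Stand-in for one synchronous ZooKeeper write (one round trip)."""
    time.sleep(RTT)
    return path

paths = ["/brokers/topics/t/partitions/%d/state" % i for i in range(20)]

# Synchronous: one round trip per partition, back to back.
start = time.perf_counter()
for p in paths:
    zk_write(p)
sequential = time.perf_counter() - start

# Pipelined: all requests in flight at once.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=len(paths)) as ex:
    list(ex.map(zk_write, paths))
pipelined = time.perf_counter() - start

print(sequential > pipelined)  # True
```

With 20 partitions the sequential loop pays about 20 round trips while the pipelined version pays close to one, which is the behavior the async client is meant to enable.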



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5501) introduce async ZookeeperClient

2017-07-26 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman resolved KAFKA-5501.
-
Resolution: Fixed

> introduce async ZookeeperClient
> ---
>
> Key: KAFKA-5501
> URL: https://issues.apache.org/jira/browse/KAFKA-5501
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 1.0.0
>
>
> Synchronous zookeeper APIs mean that we wait an entire round trip before 
> doing the next operation. We should introduce a zookeeper client that 
> encourages pipelined requests to zookeeper.





Build failed in Jenkins: kafka-trunk-jdk8 #1849

2017-07-26 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-4602; KIP-72 - Allow putting a bound on memory consumed by

--
[...truncated 2.11 MB...]
org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testSingleOption STARTED

org.apache.kafka.common.security.JaasContextTest > testSingleOption PASSED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes STARTED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes PASSED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
STARTED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionName STARTED

org.apa

[GitHub] kafka pull request #3576: Fix typo in SMT doc : s/RegexpRouter/RegexRouter

2017-07-26 Thread rmoff
GitHub user rmoff opened a pull request:

https://github.com/apache/kafka/pull/3576

Fix typo in SMT doc : s/RegexpRouter/RegexRouter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmoff/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3576.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3576


commit de08b2f7e63e1b503db5d04735113583f5bb9e81
Author: Robin Moffatt 
Date:   2017-07-26T07:18:05Z

Fix typo in SMT doc : s/RegexpRouter/RegexRouter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #2570

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
[same 'Could not resolve host: git-wip-us.apache.org' failure and stack trace as kafka-trunk-jdk7 #2569 above; the retry failed identically]

[GitHub] kafka pull request #3569: MINOR: enforce setting listeners in CREATE state.

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3569




Build failed in Jenkins: kafka-trunk-jdk7 #2571

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
[same 'Could not resolve host: git-wip-us.apache.org' failure and stack trace as kafka-trunk-jdk7 #2569 above; the retry failed identically]

[GitHub] kafka pull request #3577: MINOR: Added some tips for running a single test f...

2017-07-26 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/3577

MINOR: Added some tips for running a single test file, test class and/or 
test method

Added some tips for running a single test file, test class and/or test 
method on the documentation landing page about tests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka minor-tests-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3577.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3577


commit 1381dd34098d3313e74010699102780e94ce6181
Author: Paolo Patierno 
Date:   2017-07-26T08:36:15Z

Added some tips for running a single test file, test class and/or test 
method






Build failed in Jenkins: kafka-trunk-jdk7 #2572

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
[same 'Could not resolve host: git-wip-us.apache.org' failure and stack trace as kafka-trunk-jdk7 #2569 above; the retry failed identically]

[GitHub] kafka pull request #3516: KAFKA-5562: execute state dir cleanup on single th...

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3516




Jenkins build is back to normal : kafka-trunk-jdk8 #1850

2017-07-26 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5643) Using _DUCKTAPE_OPTIONS has no effect on executing tests

2017-07-26 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5643:
-

 Summary: Using _DUCKTAPE_OPTIONS has no effect on executing tests
 Key: KAFKA-5643
 URL: https://issues.apache.org/jira/browse/KAFKA-5643
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
as described in the documentation, you should be able to enable debugging using 
the following line:

_DUCKTAPE_OPTIONS="--debug" bash tests/docker/run_tests.sh | tee debug_logs.txt

However, _DUCKTAPE_OPTIONS isn't consumed by the run_tests.sh script, so it is 
never passed on to ducker-ak and, ultimately, to the ducktape command line.

Thanks,
Paolo.
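
The intended behavior can be sketched outside of bash (illustrative only: run_tests.sh is a shell script, and build_ducktape_cmd below is a made-up helper, not code from the Kafka repo). The fix amounts to reading _DUCKTAPE_OPTIONS from the environment and splicing it into the ducktape argument list:

```python
import os
import shlex

def build_ducktape_cmd(extra_args, env=os.environ):
    """Splice _DUCKTAPE_OPTIONS (if set) into a ducktape command line.

    Hypothetical helper mirroring what run_tests.sh should do before
    invoking ducktape; shlex.split honors shell-style quoting.
    """
    opts = shlex.split(env.get("_DUCKTAPE_OPTIONS", ""))
    return ["ducktape"] + opts + list(extra_args)

# Without the variable set, nothing extra is passed through.
print(build_ducktape_cmd(["tests/"], env={}))
# -> ['ducktape', 'tests/']

# With _DUCKTAPE_OPTIONS="--debug", the option reaches ducktape.
print(build_ducktape_cmd(["tests/"], env={"_DUCKTAPE_OPTIONS": "--debug"}))
# -> ['ducktape', '--debug', 'tests/']
```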





[GitHub] kafka pull request #3578: KAFKA-5643: Using _DUCKTAPE_OPTIONS has no effect ...

2017-07-26 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/3578

KAFKA-5643: Using _DUCKTAPE_OPTIONS has no effect on executing tests

Added handling of _DUCKTAPE_OPTIONS (mainly for enabling debugging)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka kafka-5643

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3578.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3578


commit 02e958a1ee7cdd5b7e81fcec45fba7326a4ac9fa
Author: Paolo Patierno 
Date:   2017-07-26T09:52:52Z

Added handling of _DUCKTAPE_OPTIONS (mainly for enabling debugging)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-07-26 Thread Tom Bentley
Thanks Paolo,

  *   in the "Public Interfaces" section you wrote
> alterTopics(Set) but then a collection is used (instead of a
> set) in the Proposed Changes section. I'm ok with collection.
>

Agree it should be Collection.


>   *   in the summary of the alterTopics method you say "The request can
> change the number of partitions, replication factor and/or the partition
> assignments." I think that the "and/or" is misleading (at least for me).
> For the TopicCommand tool you can specify partitions AND replication factor
> ... OR partition assignments (but not for example partitions AND
> replication factor AND partition assignments). Maybe it could be "The
> request can change the number of partitions and the related replication
> factor or specifying a partition assignments."
>

Is there a reason why we can't support changing all three at once? It
certainly makes conceptual sense to, say, increase the number of partitions
and replication factor, and specify how you want the partitions assigned.
And doing two separate calls would be less efficient as we sync new
replicas on brokers only to then move them somewhere else.

If there is a reason we don't want to support changing all three, then we
can return the error INVALID_REQUEST (42). That would allow us to support
changing all three at some time in the future, without having to change the
API.


>   *   I know that it would be a breaking change in the Admin Client API
> but why having an AlteredTopic class which is quite similar to the already
> existing NewTopic class ? I know that using NewTopic for the alter method
> could be misleading. What about renaming NewTopic in something like
> AdminTopic ? At same time it means that the TopicDetails class (which you
> can get from the current NewTopic) should be outside the
> CreateTopicsRequest because it could be used in the AlterTopicsRequest as
> well.
>

One problem with this is it tends to inhibit future API changes for either
newTopics() or alterTopics(), because any common class needs to make sense
in both contexts.

For createTopics() we get to specify some configs (the Map<String, String>),
but since the AdminClient already has alterConfigs() for changing topic
configs after topic creation I don't think it's right to also support
changing those configs via alterTopics() as well. But having them in a
common AdminTopic class would suggest that that was supported. Yes,
alterTopics could return INVALID_REQUEST if it was given topic configs, but
this just makes the API harder to use, since it weakens the type
safety of the API.

I suppose we *could* write a common TopicDetails class and make the
existing nested one extend the common one, with deprecations, to eventually
remove the nested one.



>   *   A typo in the ReplicaStatus : gpartition() instead of partition()
>   *   In the AlterTopicRequets
>  *   the replication factor should be INT16
>

Ah, thanks!


>  *   I would use same fields name as CreateTopicsRequest (they are
> quite similar)
>   *   What's the broker id in the ReplicaStatusRequest ?
>

It's the broker, which is expected to have a replica of the given
partition, that we're querying the status of. It is necessary because we're
asking the _leader_ for the partition about (a subset of) the status of the
followers. Put another way, to identify the replica of a particular
partition on a particular broker we need the tuple (topic, partition,
broker).

If we were always interested in the status of the partition across all
brokers we could omit the broker part. But for reassignment we actually
only care about a subset of the brokers.


>   *   Thinking aloud, could make sense having "Partition" in the
> ReplicaStatusRequest as an array so that I can specify in only one request
> the status for partitions I'm interested in, in order to avoid e request
> for each partition for the same topic. Maybe empty array could mean ..
> "give me status for all partitions of this topic". Of course it means that
> the ReplicaStatusResponse should change because we should have an array
> with partition, broker, lag and so on
>

You already can specify in one request the status for all the partitions
you're interested in (ReplicaStatus can be repeated/is an array field).

We could factor out the topic to avoid repeating it, which would be more
efficient when we're querying the status of many partitions of a topic
and/or there are many brokers holding replicas. In other words, we could
factor it to look like this:

ReplicaStatusRequest => [TopicReplicas]
  TopicReplicas => Topic [PartitionReplica]
Topic => string
PartitionReplica => Partition Broker
  Partition => int32
  Broker => int32

Does this make sense to you?
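
To make the factored shape concrete, here is a small self-contained Java model of the BNF above. The class names are illustrative stand-ins mirroring the sketch, not real Kafka protocol classes:

```java
import java.util.Arrays;
import java.util.List;

public class ReplicaStatusShape {

    // Hypothetical model classes mirroring the BNF sketch above.
    static class PartitionReplica {
        final int partition;
        final int broker;
        PartitionReplica(int partition, int broker) {
            this.partition = partition;
            this.broker = broker;
        }
    }

    static class TopicReplicas {
        final String topic;
        final List<PartitionReplica> replicas;
        TopicReplicas(String topic, List<PartitionReplica> replicas) {
            this.topic = topic;
            this.replicas = replicas;
        }
    }

    public static void main(String[] args) {
        // One topic entry covers every (partition, broker) pair we care about,
        // so the topic string is sent once rather than once per pair.
        TopicReplicas tr = new TopicReplicas("my-topic", Arrays.asList(
                new PartitionReplica(0, 1),
                new PartitionReplica(0, 2),
                new PartitionReplica(1, 1)));
        System.out.println(tr.topic + ":" + tr.replicas.size());
    }
}
```

The saving is visible in the data: one topic string covers all three (partition, broker) tuples.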


[jira] [Created] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-07-26 Thread Manikumar (JIRA)
Manikumar created KAFKA-5644:


 Summary: Transient test failure: 
ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
 Key: KAFKA-5644
 URL: https://issues.apache.org/jira/browse/KAFKA-5644
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Manikumar
Priority: Minor


{quote}
unit.kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
FAILED
java.lang.AssertionError: Expected the consumer group to reset to when 
offset was 50.
at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
at 
unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk7 #2573

2017-07-26 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-07-26 Thread Tom Bentley
I've updated the KIP to fix those niggles, but I've not factored out the
topic name from the ReplicaStatusRequest, yet.

Looking at the topic creation APIs in more detail, the CreateTopicsOptions
has

* `shouldValidateOnly()`, which would make a lot of sense for the alter
topic APIs
* `timeoutMs()`, which I'm not so sure about...

Topic creation doesn't require shifting replicas between brokers so it's
reasonable to support a timeout, because we don't expect it to take very long.

Topic alteration usually takes a while because we are going to have to move
replicas. Since we're adding a whole API to track the progress of that
replication, I'm inclined to think that having a timeout is a bit pointless.

But should the replicaStatus() API have a timeout? I suppose it probably
should.



[GitHub] kafka pull request #3579: Bump version to 0.11.0.1-SNAPSHOT

2017-07-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3579

Bump version to 0.11.0.1-SNAPSHOT



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka bump-to-0.11.0.1-SNAPSHOT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3579.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3579


commit d37e2173a7ead123607b4be0111d100d23e441c4
Author: Ismael Juma 
Date:   2017-07-26T11:43:34Z

Bump version to 0.11.0.1-SNAPSHOT






[GitHub] kafka pull request #3580: MINOR: Next release will be 1.0.0

2017-07-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3580

MINOR: Next release will be 1.0.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka bump-to-1.0.0-SNAPSHOT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3580.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3580


commit 81b59454c49f5c7f2d1b14d96d2a620b4c91f8c4
Author: Ismael Juma 
Date:   2017-07-26T11:46:55Z

MINOR: Bump version to 1.0.0-SNAPSHOT

commit 4d36d9a79717934c040697368eb13d753f01
Author: Ismael Juma 
Date:   2017-07-26T11:54:51Z

Update ApiVersion and upgrade.html






[jira] [Resolved] (KAFKA-3210) Using asynchronous calls through the raw ZK API in ZkUtils

2017-07-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3210.

Resolution: Won't Fix

We are following a slightly different approach, see KAFKA-5501.

> Using asynchronous calls through the raw ZK API in ZkUtils
> --
>
> Key: KAFKA-3210
> URL: https://issues.apache.org/jira/browse/KAFKA-3210
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>
> We have observed a number of issues with the controller interaction with 
> ZooKeeper mainly because ZkClient creates new sessions transparently under 
> the hood. Creating sessions transparently enables, for example, an old 
> controller to successfully update znodes in ZooKeeper even when they aren't 
> the controller any longer (e.g., KAFKA-3083). To fix this, we need to bypass 
> the ZkClient lib like we did with ZKWatchedEphemeral.
> In addition to fixing such races with the controller, it would improve 
> performance significantly if we used the async API (see KAFKA-3038). The 
> async API is more efficient because it pipelines the requests to ZooKeeper, 
> and the number of requests upon controller recovery can be large.
> This jira proposes to make these two changes to the calls in ZkUtils and to 
> do it, one path is to first replace the calls in ZkUtils with raw async ZK 
> calls and block so that we don't have to change the controller code in this 
> phase. Once this step is accomplished and it is stable, we make changes to 
> the controller to handle the asynchronous calls to ZooKeeper.
> Note that in the first step, we will need to introduce some new logic for 
> session management, which is currently handled entirely by ZkClient. We will 
> also need to implement the subscription mechanism for event notifications 
> (see ZooKeeperLeaderElector as an example).
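
The first migration step described above (replace the calls with raw async calls and block, so the controller code keeps its synchronous shape) can be sketched with a stand-in async client. `CompletableFuture` models the pipelining here; `getDataAsync` is purely hypothetical, and the real change would use the ZooKeeper async API:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncThenBlock {

    // Hypothetical stand-in for an async ZooKeeper read; not a real client call.
    static CompletableFuture<String> getDataAsync(String path) {
        return CompletableFuture.supplyAsync(() -> "data@" + path);
    }

    public static void main(String[] args) {
        // Pipeline several requests before blocking on any of them; this is
        // where the efficiency gain over serial synchronous calls comes from.
        CompletableFuture<String> a = getDataAsync("/brokers/ids/0");
        CompletableFuture<String> b = getDataAsync("/brokers/ids/1");
        System.out.println(a.join()); // blocks only after both are in flight
        System.out.println(b.join());
    }
}
```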





[jira] [Resolved] (KAFKA-5328) consider switching json parser from scala to jackson

2017-07-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5328.

Resolution: Duplicate

Duplicate of KAFKA-1595.

> consider switching json parser from scala to jackson
> 
>
> Key: KAFKA-5328
> URL: https://issues.apache.org/jira/browse/KAFKA-5328
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> The scala json parser is significantly slower than jackson.
> This can have a nontrivial impact on controller initialization since the 
> controller loads and parses almost all zookeeper state.





Re: [VOTE] KIP-167 (Addendum): Add interface for the state store restoration process

2017-07-26 Thread Damian Guy
+1

On Tue, 25 Jul 2017 at 18:17 Sriram Subramanian  wrote:

> +1
>
> On Fri, Jul 21, 2017 at 12:08 PM, Guozhang Wang 
> wrote:
>
> > +1
> >
> > On Thu, Jul 20, 2017 at 11:00 PM, Matthias J. Sax  >
> > wrote:
> >
> > > +1
> > >
> > > On 7/20/17 4:22 AM, Bill Bejeck wrote:
> > > > Hi,
> > > >
> > > > After working on the PR for this KIP I discovered that we need to add
> > and
> > > > additional parameter (TopicPartition) to the StateRestoreListener
> > > interface
> > > > methods.
> > > >
> > > > The addition of the TopicPartition is required as the
> > > StateRestoreListener
> > > > is for the entire application, thus all tasks with recovering state
> > > stores
> > > > call the same listener instance.  The TopicPartition is needed to
> > > > disambiguate the progress of the state store recovery.
> > > >
> > > > For those that have voted before, please review the updated KIP
> > > >  > > 167:+Add+interface+for+the+state+store+restoration+process>
> > > > and
> > > > re-vote.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


[jira] [Created] (KAFKA-5645) Use async ZookeeperClient in SimpleAclAuthorizer

2017-07-26 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5645:
--

 Summary: Use async ZookeeperClient in SimpleAclAuthorizer
 Key: KAFKA-5645
 URL: https://issues.apache.org/jira/browse/KAFKA-5645
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma








[jira] [Created] (KAFKA-5646) Use async ZookeeperClient for Config and ISR management

2017-07-26 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5646:
--

 Summary: Use async ZookeeperClient for Config and ISR management
 Key: KAFKA-5646
 URL: https://issues.apache.org/jira/browse/KAFKA-5646
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma








[GitHub] kafka pull request #2214: KAFKA-1595; remove global lock from json parser

2017-07-26 Thread resetius
Github user resetius closed the pull request at:

https://github.com/apache/kafka/pull/2214




[jira] [Created] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-07-26 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5647:
--

 Summary: Use async ZookeeperClient for Admin operations
 Key: KAFKA-5647
 URL: https://issues.apache.org/jira/browse/KAFKA-5647
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma








[GitHub] kafka pull request #2213: KAFKA-3038; Future'based pseudo-async controller

2017-07-26 Thread resetius
Github user resetius closed the pull request at:

https://github.com/apache/kafka/pull/2213




[jira] [Created] (KAFKA-5648) make Merger extend Aggregator

2017-07-26 Thread Clemens Valiente (JIRA)
Clemens Valiente created KAFKA-5648:
---

 Summary: make Merger extend Aggregator
 Key: KAFKA-5648
 URL: https://issues.apache.org/jira/browse/KAFKA-5648
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Clemens Valiente
Assignee: Clemens Valiente
Priority: Minor


Hi,

I suggest that Merger should extend Aggregator.
reason:
Both classes usually do very similar things. A merger takes two sessions and 
combines them, an aggregator takes an existing session and aggregates new 
values into it.
In some use cases it is actually the same thing, e.g.:
<K, V> -> .map() to <K, List<V>> -> .groupByKey().aggregate() to <K, List<V>>
In this case both merger and aggregator do the same thing: take two lists and 
combine them into one.
With the proposed change we could pass the Merger as both the merger and 
aggregator to the .aggregate() method and keep our business logic within one 
merger class.

Or in other words: The Merger is simply an Aggregator that happens to aggregate 
two objects of the same class
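
A minimal sketch of the proposed relationship, using simplified stand-ins that mirror the shape of the Kafka Streams interfaces (not the real API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MergerDemo {

    // Simplified stand-in for Kafka Streams' Aggregator interface.
    interface Aggregator<K, V, VA> {
        VA apply(K key, V value, VA aggregate);
    }

    // The proposal: a Merger is just an Aggregator whose value and aggregate
    // types coincide, so one implementation can serve in both roles.
    interface Merger<K, V> extends Aggregator<K, V, V> {
    }

    public static void main(String[] args) {
        // One lambda implements both roles: concatenate two lists.
        Merger<String, List<Integer>> concat = (key, value, aggregate) -> {
            List<Integer> out = new ArrayList<>(value);
            out.addAll(aggregate);
            return out;
        };

        List<Integer> merged = concat.apply("k", Arrays.asList(1, 2), Arrays.asList(3));
        System.out.println(merged); // [1, 2, 3]

        // Because Merger extends Aggregator, the same object can be passed
        // wherever an Aggregator<String, List<Integer>, List<Integer>> is expected:
        Aggregator<String, List<Integer>, List<Integer>> agg = concat;
        System.out.println(agg.apply("k", Arrays.asList(4), merged)); // [4, 1, 2, 3]
    }
}
```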






[GitHub] kafka pull request #3581: make Merger extend Aggregator

2017-07-26 Thread cvaliente
GitHub user cvaliente opened a pull request:

https://github.com/apache/kafka/pull/3581

make Merger extend Aggregator

I suggest that Merger should extend Aggregator.
reason:
Both classes usually do very similar things. A merger takes two sessions 
and combines them, an aggregator takes an existing session and aggregates new 
values into it.
In some use cases it is actually the same thing, e.g.:
<K, V> -> .map() to <K, List<V>> -> .groupByKey().aggregate() to <K, List<V>>
In this case both merger and aggregator do the same thing: take two lists 
and combine them into one.
With the proposed change we could pass the Merger as both the merger and 
aggregator to the .aggregate() method and keep our business logic within one 
merger class.
Or in other words: The Merger is simply an Aggregator that happens to 
aggregate two objects of the same class

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cvaliente/kafka 
KAFKA-5648-make_Merger_extend_Aggregator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3581.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3581


commit c20cd6fc7fe5a7403584dc3e04f0b8412fa8db6f
Author: Clemens Valiente 
Date:   2017-07-26T15:07:55Z

make Merger extend Aggregator






[jira] [Created] (KAFKA-5649) Producer is being closed generating ssl exception

2017-07-26 Thread Pablo Panero (JIRA)
Pablo Panero created KAFKA-5649:
---

 Summary: Producer is being closed generating ssl exception
 Key: KAFKA-5649
 URL: https://issues.apache.org/jira/browse/KAFKA-5649
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.2.1
 Environment: Spark 2.2.0 and kafka 0.10.2.0
Reporter: Pablo Panero
Priority: Minor


On a streaming job using the built-in Kafka source and sink (over SSL), I am
getting the following exception:

Config of the source:

{code:java}
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", config.bootstrapServers)
  .option("failOnDataLoss", value = false)
  .option("kafka.connections.max.idle.ms", 360)
  // SSL: this only applies to communication between Spark and Kafka brokers;
  // you are still responsible for separately securing Spark inter-node communication.
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.mechanism", "GSSAPI")
  .option("kafka.sasl.kerberos.service.name", "kafka")
  .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
  .option("kafka.ssl.truststore.password", "changeit")
  .option("subscribe", config.topicConfigList.keys.mkString(","))
  .load()
{code}

Config of the sink:


{code:java}
.writeStream
.option("checkpointLocation", s"${config.checkpointDir}/${topicConfig._1}/")
.format("kafka")
.option("kafka.bootstrap.servers", config.bootstrapServers)
.option("kafka.connections.max.idle.ms", 360)
// SSL: this only applies to communication between Spark and Kafka brokers;
// you are still responsible for separately securing Spark inter-node communication.
.option("kafka.security.protocol", "SASL_SSL")
.option("kafka.sasl.mechanism", "GSSAPI")
.option("kafka.sasl.kerberos.service.name", "kafka")
.option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
.option("kafka.ssl.truststore.password", "changeit")
.start()
{code}


{code:java}
17/07/18 10:11:58 WARN SslTransportLayer: Failed to send SSL Close message 
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at 
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
at 
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
at 
org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
at org.apache.kafka.common.network.Selector.close(Selector.java:531)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:298)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer.org$apache$spark$sql$kafka010$CachedKafkaConsumer$$fetchData(CachedKafkaConsumer.scala:206)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer$$anonfun$get$1.apply(CachedKafkaConsumer.scala:117)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer$$anonfun$get$1.apply(CachedKafkaConsumer.scala:106)
at 
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:85)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer.runUninterruptiblyIfPossible(CachedKafkaConsumer.scala:68)
at 
org.apache.spark.sql.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:106)
at 
org.apache.spark.sql.kafka010.KafkaSourceRDD$$anon$1.getNext(KafkaSourceRDD.scala:157)
at 
org.apache.spark.sql.kafka010.KafkaSourceRDD$$anon$1.getNext(KafkaSourceRDD.scala:148)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.

[jira] [Created] (KAFKA-5650) Provide a simple way for custom storage engines to use streams wrapped stores (KIP-182)

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5650:
-

 Summary: Provide a simple way for custom storage engines to use 
streams wrapped stores (KIP-182)
 Key: KAFKA-5650
 URL: https://issues.apache.org/jira/browse/KAFKA-5650
 Project: Kafka
  Issue Type: Bug
Reporter: Damian Guy
Assignee: Damian Guy


As per KIP-182:
A new interface will be added:
{code}
/**
 * Implementations of this will provide the ability to wrap a given StateStore
 * with or without caching/logging etc.
 */
public interface StateStoreBuilder<T> {
 
    StateStoreBuilder<T> withCachingEnabled();
    StateStoreBuilder<T> withCachingDisabled();
    StateStoreBuilder<T> withLoggingEnabled(Map<String, String> config);
    StateStoreBuilder<T> withLoggingDisabled();
    T build();
}
{code}

This interface will be used to wrap stores with caching, logging etc.
Additionally some convenience methods on the {{Stores}} class:

{code}
public static <K, V> StateStoreSupplier<KeyValueStore<K, V>> persistentKeyValueStore(final String name,
                                                                                     final Serde<K> keySerde,
                                                                                     final Serde<V> valueSerde)
 
public static <K, V> StateStoreSupplier<KeyValueStore<K, V>> inMemoryKeyValueStore(final String name,
                                                                                   final Serde<K> keySerde,
                                                                                   final Serde<V> valueSerde)
 
public static <K, V> StateStoreSupplier<KeyValueStore<K, V>> lruMap(final String name,
                                                                    final int capacity,
                                                                    final Serde<K> keySerde,
                                                                    final Serde<V> valueSerde)
 
public static <K, V> StateStoreSupplier<WindowStore<K, V>> persistentWindowStore(final String name,
                                                                                 final Windows windows,
                                                                                 final Serde<K> keySerde,
                                                                                 final Serde<V> valueSerde)
 
public static <K, V> StateStoreSupplier<SessionStore<K, V>> persistentSessionStore(final String name,
                                                                                   final SessionWindows windows,
                                                                                   final Serde<K> keySerde,
                                                                                   final Serde<V> valueSerde)
 
/**
 * The following methods are for use with the PAPI. They allow building of
 * StateStores that can be wrapped with caching, logging, and any other
 * convenient wrappers provided by the KafkaStreams library.
 */
public <K, V> StateStoreBuilder<WindowStore<K, V>> windowStoreBuilder(final StateStoreSupplier<WindowStore<K, V>> supplier)
 
public <K, V> StateStoreBuilder<KeyValueStore<K, V>> keyValueStoreBuilder(final StateStoreSupplier<KeyValueStore<K, V>> supplier)
 
public <K, V> StateStoreBuilder<SessionStore<K, V>> sessionStoreBuilder(final StateStoreSupplier<SessionStore<K, V>> supplier)
{code}
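
As a rough illustration of how the proposed builder might be used, here is a toy, self-contained version that only records the chosen options; a real implementation would wrap the store in caching/logging layers inside {{build()}}. The names are borrowed from the KIP sketch above, and the store/config values are made up:

```java
import java.util.HashMap;
import java.util.Map;

public class BuilderSketch {

    // Toy stand-in for the real StateStore interface.
    interface StateStore { String name(); }

    // Toy fluent builder modeling the proposed StateStoreBuilder pattern.
    static class StateStoreBuilder<T extends StateStore> {
        private final T store;
        private boolean caching;
        private Map<String, String> logConfig;

        StateStoreBuilder(T store) { this.store = store; }

        StateStoreBuilder<T> withCachingEnabled()  { caching = true;  return this; }
        StateStoreBuilder<T> withCachingDisabled() { caching = false; return this; }
        StateStoreBuilder<T> withLoggingEnabled(Map<String, String> config) {
            logConfig = config; return this;
        }
        StateStoreBuilder<T> withLoggingDisabled() { logConfig = null; return this; }

        T build() {
            // A real implementation would wrap `store` here instead of printing.
            System.out.println(store.name() + " caching=" + caching
                    + " logging=" + (logConfig != null));
            return store;
        }
    }

    public static void main(String[] args) {
        StateStore s = () -> "orders";
        Map<String, String> cfg = new HashMap<>();
        cfg.put("retention.ms", "86400000");
        new StateStoreBuilder<>(s).withCachingEnabled().withLoggingEnabled(cfg).build();
    }
}
```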





[jira] [Created] (KAFKA-5651) KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5651:
-

 Summary: KIP-182: Reduce Streams DSL overloads and allow easier 
use of custom storage engines
 Key: KAFKA-5651
 URL: https://issues.apache.org/jira/browse/KAFKA-5651
 Project: Kafka
  Issue Type: New Feature
Reporter: Damian Guy
Assignee: Damian Guy








[jira] [Created] (KAFKA-5652) Add new api methods to KStream

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5652:
-

 Summary: Add new api methods to KStream
 Key: KAFKA-5652
 URL: https://issues.apache.org/jira/browse/KAFKA-5652
 Project: Kafka
  Issue Type: Sub-task
Reporter: Damian Guy
Assignee: Damian Guy


Add new methods from KIP-182 to {{KStream}} (placeholder until the API is finalized).





[jira] [Created] (KAFKA-5654) Add new API methods to KGroupedStream

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5654:
-

 Summary: Add new API methods to KGroupedStream
 Key: KAFKA-5654
 URL: https://issues.apache.org/jira/browse/KAFKA-5654
 Project: Kafka
  Issue Type: Sub-task
Reporter: Damian Guy
Assignee: Damian Guy


Placeholder until API finalized





[jira] [Created] (KAFKA-5653) Add new API methods to KTable

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5653:
-

 Summary: Add new API methods to KTable
 Key: KAFKA-5653
 URL: https://issues.apache.org/jira/browse/KAFKA-5653
 Project: Kafka
  Issue Type: Sub-task
Reporter: Damian Guy


Placeholder until API finalized





[jira] [Created] (KAFKA-5655) Add new API methods to KGroupedTable

2017-07-26 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5655:
-

 Summary: Add new API methods to KGroupedTable
 Key: KAFKA-5655
 URL: https://issues.apache.org/jira/browse/KAFKA-5655
 Project: Kafka
  Issue Type: Sub-task
Reporter: Damian Guy


Placeholder until API finalized





[jira] [Created] (KAFKA-5656) Support bulk attributes request on KafkaMbean where some attributes do not exist

2017-07-26 Thread Erik Kringen (JIRA)
Erik Kringen created KAFKA-5656:
---

 Summary: Support bulk attributes request on KafkaMbean where some 
attributes do not exist
 Key: KAFKA-5656
 URL: https://issues.apache.org/jira/browse/KAFKA-5656
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Erik Kringen
Priority: Minor


According to Oracle documentation on [Implementing a Dynamic 
MBean|http://docs.oracle.com/cd/E19698-01/816-7609/6mdjrf83d/index.html] 

bq. The bulk getter and setter methods usually rely on the generic getter and 
setter, respectively. This makes them independent of the management interface, 
which can simplify certain modifications. In this case, their implementation 
consists mostly of error checking on the list of attributes. However, all bulk 
getters and setters must be implemented so that an error on any one attribute 
does not interrupt or invalidate the bulk operation on the other attributes.

bq. If an attribute cannot be read, then its name-value pair is not included in 
the list of results. If an attribute cannot be written, it will not be copied 
to the returned list of successful set operations. As a result, if there are 
any errors, the lists returned by bulk operators will not have the same length 
as the array or list passed to them. In any case, the bulk operators do not 
guarantee that their returned lists have the same ordering of attributes as the 
input array or list.

The current implementation of 
{code}org.apache.kafka.common.metrics.JmxReporter.KafkaMbean#getAttributes{code}
 returns an empty list if any of the requested attributes is not found.

This method should instead log the exception but allow all requested attributes 
that are present to be returned, as prescribed by the DynamicMBean interface.
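A minimal sketch of the behavior the ticket asks for, assuming a simplified attribute map rather than Kafka's actual KafkaMbean internals (the class name, field names, and sample metrics below are illustrative only):

```java
import javax.management.Attribute;
import javax.management.AttributeList;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a bulk getter that logs and skips a missing
// attribute instead of aborting, as the DynamicMBean contract prescribes.
public class SafeAttributeReader {
    private final Map<String, Object> metrics = new HashMap<>();

    public SafeAttributeReader() {
        // Illustrative sample metrics, not real Kafka metric names.
        metrics.put("request-rate", 42.0);
        metrics.put("response-rate", 17.0);
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String name : names) {
            Object value = metrics.get(name);
            if (value != null) {
                list.add(new Attribute(name, value));
            } else {
                // Log and continue: an error on one attribute must not
                // invalidate the bulk operation on the others.
                System.err.println("Attribute not found, skipping: " + name);
            }
        }
        // May be shorter than the input array if some attributes were missing.
        return list;
    }
}
```

Per the Oracle documentation quoted above, a caller requesting two attributes of which one is unknown receives a one-element list rather than an empty one.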





[GitHub] kafka pull request #3576: Fix typo in SMT doc : s/RegexpRouter/RegexRouter

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3576


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #2574

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1110)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:560)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:485)
at hudson.model.Run.execute(Run.java:1735)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://git-wip-us.apache.org/repos/asf/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: unable to access 
'https://git-wip-us.apache.org/repos/asf/kafka.git/': Could not resolve host: 
git-wip-us.apache.org

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1903)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1622)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at ..remote call to cassandra12(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1545)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:830)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor864.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy104.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
... 11 more
ERROR: Error fetching remote repo 'origin'
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at 

Build failed in Jenkins: kafka-trunk-jdk7 #2575

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1110)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:560)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:485)
at hudson.model.Run.execute(Run.java:1735)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://git-wip-us.apache.org/repos/asf/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: unable to access 
'https://git-wip-us.apache.org/repos/asf/kafka.git/': Could not resolve host: 
git-wip-us.apache.org

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1903)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1622)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at ..remote call to cassandra12(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1545)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:830)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor864.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy104.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
... 11 more
ERROR: Error fetching remote repo 'origin'
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM

[GitHub] kafka pull request #3582: KAFKA-5656: Support bulk attributes request on Kaf...

2017-07-26 Thread ErikKringen
GitHub user ErikKringen opened a pull request:

https://github.com/apache/kafka/pull/3582

KAFKA-5656: Support bulk attributes request on KafkaMbean where some attributes do not exist



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ErikKringen/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3582.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3582


commit 0febcdd59cee9e1f34bdd9646aee59944c28386e
Author: Erik.Kringen 
Date:   2017-07-26T17:12:04Z

KAFKA-5656: Support bulk attributes request on KafkaMbean where some 
attributes do not exist






Re: [VOTE] KIP-164 Add unavailablePartitionCount and per-partition Unavailable metrics

2017-07-26 Thread Dong Lin
Thank you all for your vote!

This KIP has been accepted with 3 binding votes (Ismael, Becket and Joel)
and 4 non-binding votes (Mickael, Michal, Edoardo and Bill).

On Tue, Jul 25, 2017 at 9:07 PM, Joel Koshy  wrote:

> +1
>
> On Thu, Jul 20, 2017 at 10:30 AM, Becket Qin  wrote:
>
> > +1, Thanks for the KIP.
> >
> > On Thu, Jul 20, 2017 at 7:08 AM, Ismael Juma  wrote:
> >
> > > Thanks for the KIP, +1 (binding).
> > >
> > > On Thu, Jun 1, 2017 at 9:44 AM, Dong Lin  wrote:
> > >
> > > > Hi all,
> > > >
> > > > Can you please vote for KIP-164? The KIP can be found at
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-164-+Add+
> > > > UnderMinIsrPartitionCount+and+per-partition+UnderMinIsr+metrics
> > > > .
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > >
> >
>


[GitHub] kafka pull request #3583: KAFKA-5341; Add UnderMinIsrPartitionCount and per-...

2017-07-26 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3583

KAFKA-5341; Add UnderMinIsrPartitionCount and per-partition UnderMinIsr 
metrics (KIP-164)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5341

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3583.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3583


commit 93f541c249def6bdc158cb79592278d18cdf3ff8
Author: Dong Lin 
Date:   2017-05-28T08:10:28Z

KAFKA-5341; Add UnderMinIsrPartitionCount and per-partition UnderMinIsr 
metrics (KIP-164)






Build failed in Jenkins: kafka-trunk-jdk8 #1852

2017-07-26 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Fix typo in SMT doc: s/RegexpRouter/RegexRouter

--
[...truncated 2.54 MB...]

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Stream 
tasks not updated
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:274)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:252)
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:248)

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
s

Build failed in Jenkins: kafka-trunk-jdk7 #2576

2017-07-26 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on cassandra12 (cassandra ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1110)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:560)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:485)
at hudson.model.Run.execute(Run.java:1735)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://git-wip-us.apache.org/repos/asf/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: unable to access 
'https://git-wip-us.apache.org/repos/asf/kafka.git/': Could not resolve host: 
git-wip-us.apache.org

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1903)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1622)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
at ..remote call to cassandra12(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1545)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:830)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
at sun.reflect.GeneratedMethodAccessor864.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
at com.sun.proxy.$Proxy104.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
... 11 more
ERROR: Error fetching remote repo 'origin'
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:812)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1079)
at hudson.plugins.git.GitSCM

Re: [VOTE] KIP-167 (Addendum): Add interface for the state store restoration process

2017-07-26 Thread Bill Bejeck
The re-vote has now passed. The votes were:

Binding +1: Sriram, Guozhang, Damian
Non-binding +1: Matthias.

The discussion is wrapping up in this PR:
https://github.com/apache/kafka/pull/3325

-Bill

On Wed, Jul 26, 2017 at 10:10 AM, Damian Guy  wrote:

> +1
>
> On Tue, 25 Jul 2017 at 18:17 Sriram Subramanian  wrote:
>
> > +1
> >
> > On Fri, Jul 21, 2017 at 12:08 PM, Guozhang Wang 
> > wrote:
> >
> > > +1
> > >
> > > On Thu, Jul 20, 2017 at 11:00 PM, Matthias J. Sax <
> matth...@confluent.io
> > >
> > > wrote:
> > >
> > > > +1
> > > >
> > > > On 7/20/17 4:22 AM, Bill Bejeck wrote:
> > > > > Hi,
> > > > >
> > > > > After working on the PR for this KIP I discovered that we need to
> add
> > > an
> > > > > additional parameter (TopicPartition) to the StateRestoreListener
> > > > interface
> > > > > methods.
> > > > >
> > > > > The addition of the TopicPartition is required as the
> > > > StateRestoreListener
> > > > > is for the entire application, thus all tasks with recovering state
> > > > stores
> > > > > call the same listener instance.  The TopicPartition is needed to
> > > > > disambiguate the progress of the state store recovery.
> > > > >
> > > > > For those that have voted before, please review the updated KIP
> > > > >  > > > 167:+Add+interface+for+the+state+store+restoration+process>
> > > > > and
> > > > > re-vote.
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-26 Thread Guozhang Wang
To me `PreparingRebalance` sounds better than `StartingRebalance` since
only by the end of that stage have we formed a new group. More
specifically, this is the workflow from the coordinator's point of view:

1. decided to trigger a rebalance, enter PreparingRebalance phase;
  |
  |   send out error code for all heartbeat reponses
 \|/
  |
  |   waiting for join group requests from members
 \|/
2. formed a new group, increment the generation number, now start
rebalancing, entering AwaitSync phase:
  |
  |   send out the join group responses for whoever
requested join
 \|/
  |
  |   waiting for the sync group request from the leader
 \|/
3. received assignment from the leader; the rebalance has ended, start
ticking for all members, entering Stable phase.
  |
  |   for whoever else sending the sync group request,
reply with the assignment
 \|/

So from the coordinator's point of view the rebalance starts at the beginning
of step 2 and ends at the beginning of step 3. Maybe we can rename `AwaitSync`
itself to `CompletingRebalance`.
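The three phases above can be sketched as a tiny state machine. The enum and method names here are illustrative only, not Kafka's actual GroupCoordinator code:

```java
// Phases of one rebalance cycle, following the workflow described above.
enum GroupPhase { PREPARING_REBALANCE, AWAIT_SYNC, STABLE }

class RebalanceCycle {
    private GroupPhase phase = GroupPhase.STABLE;
    private int generation = 0;

    // Step 1: decide to rebalance; heartbeat responses now carry an
    // error code telling members to rejoin.
    void triggerRebalance() {
        phase = GroupPhase.PREPARING_REBALANCE;
    }

    // Step 2: join-group requests received and a new group is formed;
    // the generation is incremented and the rebalance proper begins.
    void groupFormed() {
        generation++;
        phase = GroupPhase.AWAIT_SYNC;
    }

    // Step 3: the leader's assignment arrives; the rebalance has ended
    // and member session timers start ticking.
    void assignmentReceived() {
        phase = GroupPhase.STABLE;
    }

    GroupPhase phase() { return phase; }
    int generation() { return generation; }
}
```

Under this sketch, "the rebalance" spans exactly the interval between `groupFormed()` and `assignmentReceived()`, which is the argument for naming the intermediate phase something like `CompletingRebalance`.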

Guozhang



On Tue, Jul 25, 2017 at 6:44 AM, Ismael Juma  wrote:

> Hi Guozhang,
>
> Thanks for the clarification. The naming does seem a bit unclear. Maybe
> `PreparingRebalance` could be `StartingRebalance` or something that makes
> it clear that it is part of the rebalance instead of a step before the
> actual rebalance. `AwaitingSync` could also be `CompletingRebalance`, but
> not sure if that's better.
>
> Ismael
>
> On Mon, Jul 24, 2017 at 7:02 PM, Guozhang Wang  wrote:
>
> > Actually Rebalancing includes two steps, and we name them
> PrepareRebalance
> > and WaitSync (arguably they may not be the best names). But these two
> steps
> > together should be treated as the complete rebalance cycle.
> >
> >
> > Guozhang
> >
> > On Mon, Jul 24, 2017 at 10:46 AM, Colin McCabe 
> wrote:
> >
> > > Hi all,
> > >
> > > I think maybe it makes sense to rename the "PreparingRebalance"
> consumer
> > > group state to "Rebalancing."  To me, "Preparing" implies that there
> > > will be a later "rebalance" state that follows-- but there is not.
> > > Since we're now exposing this state name publicly in these metrics,
> > > perhaps it makes sense to do this rename now.  Thoughts?
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Jul 21, 2017, at 13:52, Colin McCabe wrote:
> > > > That's a good point.  I revised the KIP to add metrics for all the
> > group
> > > > states.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Fri, Jul 21, 2017, at 12:08, Guozhang Wang wrote:
> > > > > Ah, that's right Jason.
> > > > >
> > > > > With that I can be convinced to add one metric per each state.
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Fri, Jul 21, 2017 at 11:44 AM, Jason Gustafson <
> > ja...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > >
> > > > > > > "Dead" and "Empty" states are transient: groups usually only
> > > > > > > remain in this state for a short while and are then deleted or
> > > > > > > transition to other states.
> > > > > >
> > > > > >
> > > > > > This is not strictly true for the "Empty" state which we also use
> > to
> > > > > > represent simple groups which only use the coordinator to store
> > > offsets. I
> > > > > > think we may as well cover all the states if we're going to cover
> > > any of
> > > > > > them specifically.
> > > > > >
> > > > > > -Jason
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Jul 21, 2017 at 9:45 AM, Guozhang Wang <
> wangg...@gmail.com
> > >
> > > wrote:
> > > > > >
> > > > > > > My two cents:
> > > > > > >
> > > > > > > "Dead" and "Empty" states are transient: groups usually only
> > > > > > > remain in this state for a short while and are then deleted or
> > > > > > > transition to other states.
> > > > > > >
> > > > > > > Since we have the existing "*NumGroups*" metric, `*NumGroups -
> > > > > > > **NumGroupsRebalancing
> > > > > > > - **NumGroupsAwaitingSync`* should cover the above three, where
> > > "Stable"
> > > > > > > should be contributing most of the counts: If we have a bug
> that
> > > causes
> > > > > > the
> > > > > > > num.Dead / Empty to keep increasing, then we would observe
> > > `NumGroups`
> > > > > > keep
> > > > > > > increasing which should be sufficient for alerting. And trouble
> > > shooting
> > > > > > of
> > > > > > > the issue could be relying on the log4j.
> > > > > > >
> > > > > > > *Guozhang*
> > > > > > >
> > > > > > > On Fri, Jul 21, 2017 at 7:19 AM, Ismael Juma <
> ism...@juma.me.uk>
> > > wrote:
> > > > > > >
> > > > > > > > Thanks for the KIP, Colin. This will definitely be useful.
> One
> > > > > > question:
> > > > > > > > would it be useful to have a metric for for the number of
> > groups

Build failed in Jenkins: kafka-trunk-jdk7 #2577

2017-07-26 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Fix typo in SMT doc: s/RegexpRouter/RegexRouter

--
[...truncated 1007.63 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.O

[jira] [Created] (KAFKA-5657) Connect REST API should include the connector type when describing a connector

2017-07-26 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5657:


 Summary: Connect REST API should include the connector type when 
describing a connector
 Key: KAFKA-5657
 URL: https://issues.apache.org/jira/browse/KAFKA-5657
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
 Fix For: 1.0.0


Kafka Connect's REST API's {{connectors/}} and {{connectors/{name}}} endpoints 
should include whether the connector is a source or a sink.

See KAFKA-4343 and KIP-151 for the related modification of the 
{{connector-plugins}} endpoint.

Also see KAFKA-4279 for converter-related endpoints.
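A minimal sketch of the enlarged payload described above — the class and helper are illustrative, not Connect internals; only the proposed "type" field is new relative to what the endpoints already return:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectorInfoSketch {
    // Hypothetical payload builder: "name" and "config" mirror fields the
    // endpoints already return; "type" is the proposed addition.
    static Map<String, Object> describeConnector(String name, boolean isSink,
                                                 Map<String, String> config) {
        Map<String, Object> payload = new LinkedHashMap<>();
        payload.put("name", name);
        payload.put("type", isSink ? "sink" : "source"); // proposed new field
        payload.put("config", config);
        return payload;
    }

    public static void main(String[] args) {
        System.out.println(describeConnector("file-source", false,
                Map.of("connector.class", "FileStreamSource")));
    }
}
```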



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3568: MINOR: updated configs to use one try/catch for se...

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3568


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3580: MINOR: Next release will be 1.0.0

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3580





Re: [VOTE] KIP-164 Add unavailablePartitionCount and per-partition Unavailable metrics

2017-07-26 Thread Guozhang Wang
Hello,

I would like to call out someone (committer) to voluntarily shepherd this
KIP and drive it to be merged for 1.0.0. Please feel free to add your name
on KIP-164 on the release wiki page:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913


Guozhang

On Wed, Jul 26, 2017 at 10:16 AM, Dong Lin  wrote:

> Thank you all for your vote!
>
> This KIP has been accepted with 3 binding votes (Ismael, Becket and Joel)
> and 4 non-binding votes (Mickael, Michal, Edoardo and Bill).
>
> On Tue, Jul 25, 2017 at 9:07 PM, Joel Koshy  wrote:
>
> > +1
> >
> > On Thu, Jul 20, 2017 at 10:30 AM, Becket Qin 
> wrote:
> >
> > > +1, Thanks for the KIP.
> > >
> > > On Thu, Jul 20, 2017 at 7:08 AM, Ismael Juma 
> wrote:
> > >
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > On Thu, Jun 1, 2017 at 9:44 AM, Dong Lin 
> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Can you please vote for KIP-164? The KIP can be found at
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-164-+Add+
> > > > > UnderMinIsrPartitionCount+and+per-partition+UnderMinIsr+metrics
> > > > > .
> > > > >
> > > > > Thanks,
> > > > > Dong
> > > > >
> > > >
> > >
> >
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-5658) adminclient will stop working after some amount of time

2017-07-26 Thread dan norwood (JIRA)
dan norwood created KAFKA-5658:
--

 Summary: adminclient will stop working after some amount of time
 Key: KAFKA-5658
 URL: https://issues.apache.org/jira/browse/KAFKA-5658
 Project: Kafka
  Issue Type: Bug
Reporter: dan norwood


If I create an admin client and let it sit unused for some amount of time, then 
attempt to use it, I will get the following:

{noformat}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.BrokerNotAvailableException
{noformat}

even though the broker is up. If I create a new admin client before each usage, 
I do not see the same behavior.





[GitHub] kafka pull request #3584: KAFKA-5658. Fix AdminClient request timeout handli...

2017-07-26 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3584

KAFKA-5658. Fix AdminClient request timeout handling bug resulting in 
continual BrokerNotAvailableExceptions

The AdminClient does not properly clear calls from the callsInFlight 
structure. Later, in an effort to clear the lingering call objects, it closes 
the connection they are associated with. This disrupts new incoming calls, 
which then get BrokerNotAvailableException.

This patch fixes this bug by properly removing completed calls from the 
callsInFlight structure.  It also adds the Call#aborted flag, which ensures 
that we only abort a connection once-- even if there is a similar bug in the 
future which causes old Call objects to linger.  This patch also fixes a case 
where AdminClient#describeConfigs was making an extra RPC that had no useful 
effect.
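The fix described above can be sketched with stdlib types — the class and field names mirror the description, but this is an illustration, not the actual AdminClient internals:

```java
import java.util.HashMap;
import java.util.Map;

public class CallsInFlightSketch {
    static class Call {
        private boolean aborted = false;
        // Abort at most once, even if a stale Call object lingers.
        boolean tryAbort() {
            if (aborted) return false;
            aborted = true;
            return true;
        }
    }

    final Map<Integer, Call> callsInFlight = new HashMap<>();

    void send(int correlationId, Call call) {
        callsInFlight.put(correlationId, call);
    }

    void complete(int correlationId) {
        // The bug was the missing removal: completed calls stayed in the map,
        // and a later cleanup pass closed the connection they pointed at,
        // disrupting new calls on that connection.
        callsInFlight.remove(correlationId);
    }

    public static void main(String[] args) {
        CallsInFlightSketch client = new CallsInFlightSketch();
        Call call = new Call();
        client.send(1, call);
        client.complete(1);
        System.out.println("in flight after completion: " + client.callsInFlight.size());
        System.out.println("first abort: " + call.tryAbort() + ", second: " + call.tryAbort());
    }
}
```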

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5658

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3584.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3584


commit 7eedf51d2b29565460f04f78435f2bdf5a9cd661
Author: Colin P. Mccabe 
Date:   2017-07-17T17:04:58Z

KAFKA-5602: ducker-ak: support --custom-ducktape

Support a --custom-ducktape flag which allows developers to install
their own versions of ducktape into Docker images.  This is helpful for
ducktape development.

commit 811983f02cb1ff887bbe75ffc22ef51f98a99a36
Author: Colin P. Mccabe 
Date:   2017-07-26T20:57:18Z

KAFKA-5658. Fix AdminClient request timeout handling bug resulting in 
continual BrokerNotAvailableExceptions






[jira] [Created] (KAFKA-5659) AdminClient#describeConfigs makes an extra empty request when only broker info is requested

2017-07-26 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5659:
--

 Summary: AdminClient#describeConfigs makes an extra empty request 
when only broker info is requested
 Key: KAFKA-5659
 URL: https://issues.apache.org/jira/browse/KAFKA-5659
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


AdminClient#describeConfigs makes an extra empty request when only broker info 
is requested





Jenkins build is back to normal : kafka-trunk-jdk7 #2578

2017-07-26 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3585: KAFKA-5659. AdminClient#describeConfigs makes an e...

2017-07-26 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3585

KAFKA-5659. AdminClient#describeConfigs makes an extra empty request …

…when only broker info is requested

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5659

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3585.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3585


commit 593ed34ce08c19a34f2146999f9e07a02927ca61
Author: Colin P. Mccabe 
Date:   2017-07-26T21:20:12Z

KAFKA-5659. AdminClient#describeConfigs makes an extra empty request when 
only broker info is requested






[jira] [Created] (KAFKA-5660) Don't throw TopologyBuilderException during runtime

2017-07-26 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5660:
--

 Summary: Don't throw TopologyBuilderException during runtime
 Key: KAFKA-5660
 URL: https://issues.apache.org/jira/browse/KAFKA-5660
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Matthias J. Sax


{{TopologyBuilderException}} is a pre-runtime exception that should only be 
thrown before {{KafkaStreams#start()}} is called.

However, we do throw {{TopologyBuilderException}} within

- `SourceNodeFactory#getTopics`
- `ProcessorContextImpl#getStateStore`

(and maybe somewhere else: we should double check if there are other places in 
the code like those).

We should replace those exceptions with either {{StreamsException}} or with a 
new exception type.
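The proposed change amounts to the following sketch, with illustrative stand-ins for the real Streams exception classes:

```java
public class ExceptionSketch {
    // Stand-ins for the real exception types discussed above.
    static class TopologyBuilderException extends RuntimeException {
        TopologyBuilderException(String msg) { super(msg); }
    }
    static class StreamsException extends RuntimeException {
        StreamsException(String msg) { super(msg); }
    }

    // Before: a lookup that fails at processing time surfaces a
    // pre-runtime exception type, which misleads callers.
    static Object getStateStoreBefore(String name) {
        throw new TopologyBuilderException("Store " + name + " is not known");
    }

    // After: runtime failures use a runtime exception type instead.
    static Object getStateStoreAfter(String name) {
        throw new StreamsException("Store " + name + " is not known");
    }

    public static void main(String[] args) {
        try {
            getStateStoreAfter("counts");
        } catch (StreamsException e) {
            System.out.println("runtime failure: " + e.getMessage());
        }
    }
}
```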





[jira] [Created] (KAFKA-5661) Develop an understanding of how to tune transactions for optimal performance

2017-07-26 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5661:
---

 Summary: Develop an understanding of how to tune transactions for 
optimal performance
 Key: KAFKA-5661
 URL: https://issues.apache.org/jira/browse/KAFKA-5661
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.11.0.0
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 1.0.0


Currently, we don't have an idea of the throughput curve for transactions 
across a range of different workloads. 

Thus we would like to understand how to tune transactions so that they are 
viable across a broad range of workloads. For instance, what knobs can you 
tweak to still get acceptable transactional performance with small messages? 
We don't understand the performance curve across variables like 
message size, batch size, transaction duration, linger.ms, etc., and it would 
be good to get an understanding of this area and publish recommended 
configurations for different workloads.





[jira] [Created] (KAFKA-5662) We should be able to specify min.insync.replicas for the __consumer_offsets topic

2017-07-26 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5662:
---

 Summary: We should be able to specify min.insync.replicas for the 
__consumer_offsets topic
 Key: KAFKA-5662
 URL: https://issues.apache.org/jira/browse/KAFKA-5662
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta


The transaction log has a {{transaction.state.log.min.isr}} setting to control 
the min.isr for the transaction log (by default the min.isr is 2 and 
replication.factor is 3).

Unfortunately, we don't have a similar setting for the offsets topic. We should 
add an analogous {{offsets.topic.min.isr}} setting, defaulting to 2, so that we 
have durability on the offsets topic. 





[jira] [Created] (KAFKA-5663) LogDirFailureTest system test fails

2017-07-26 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5663:
---

 Summary: LogDirFailureTest system test fails
 Key: KAFKA-5663
 URL: https://issues.apache.org/jira/browse/KAFKA-5663
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Dong Lin


The recently added JBOD system test failed last night.

{noformat}
Producer failed to produce messages for 20s.
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
 line 123, in run
data = self.run_test()
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
 line 176, in run_test
return self.test_context.function(self.test)
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
 line 321, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/tests/kafkatest/tests/core/log_dir_failure_test.py",
 line 166, in test_replication_with_disk_failure
self.start_producer_and_consumer()
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 75, in start_producer_and_consumer
self.producer_start_timeout_sec)
  File 
"/home/jenkins/workspace/system-test-kafka-trunk/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/utils/util.py",
 line 36, in wait_until
raise TimeoutError(err_msg)
TimeoutError: Producer failed to produce messages for 20s.
{noformat}

Complete logs here:

http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2017-07-26--001.1501074756--apache--trunk--91c207c/LogDirFailureTest/test_replication_with_disk_failure/bounce_broker=False.security_protocol=PLAINTEXT.broker_type=follower/48.tgz





[jira] [Created] (KAFKA-5664) Disable auto offset commit in ConsoleConsumer if no group is provided

2017-07-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5664:
--

 Summary: Disable auto offset commit in ConsoleConsumer if no group 
is provided
 Key: KAFKA-5664
 URL: https://issues.apache.org/jira/browse/KAFKA-5664
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


In ConsoleConsumer, if no group is provided, we generate a random groupId:
{code}
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, s"console-consumer-${new 
Random().nextInt(10)}")
{code}
In this case, since the group is not likely to be used again, we should disable 
automatic offset commits. This avoids polluting the coordinator cache with 
offsets that will never be used.
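A sketch of the proposed behavior — the helper below is hypothetical, but the property keys are the standard consumer config names, and the generated group id mirrors the snippet above:

```java
import java.util.Properties;
import java.util.Random;

public class ConsoleConsumerSketch {
    // Hypothetical helper illustrating the proposal.
    static Properties consumerProps(String explicitGroupId) {
        Properties props = new Properties();
        if (explicitGroupId == null) {
            // Generated group id: nobody will resume this group later,
            // so skip offset commits to avoid polluting the coordinator cache.
            props.put("group.id", "console-consumer-" + new Random().nextInt(10));
            props.put("enable.auto.commit", "false");
        } else {
            props.put("group.id", explicitGroupId);
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps(null).getProperty("enable.auto.commit"));
    }
}
```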





Build failed in Jenkins: kafka-trunk-jdk8 #1853

2017-07-26 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: updated configs to use one try/catch for serdes

--
[...truncated 923.91 KB...]

kafka.api.PlaintextProducerSendTest > testSendBeforeAndAfterPartitionExpansion 
STARTED

kafka.api.PlaintextProducerSendTest > testSendBeforeAndAfterPartitionExpansion 
PASSED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic STARTED

kafka.api.FetchRequestTest > testShuffleWithSingleTopic PASSED

kafka.api.FetchRequestTest > testShuffle STARTED

kafka.api.FetchRequestTest > testShuffle PASSED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.ClientIdQuotaTest > testThrottledRequest STARTED

kafka.api.ClientIdQuotaTest > testThrottledRequest PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > testInvalidAlterConfigs 
STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > testInvalidAlterConfigs 
PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserClientIdQuotaTest > testThrottledRequest STARTED

kafka.api.UserClientIdQuotaTest > testThrottledRequest PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 

[jira] [Created] (KAFKA-5665) Incorrect interruption invoking method used for Heartbeat thread

2017-07-26 Thread huxihx (JIRA)
huxihx created KAFKA-5665:
-

 Summary: Incorrect interruption invoking method used for Heartbeat 
thread 
 Key: KAFKA-5665
 URL: https://issues.apache.org/jira/browse/KAFKA-5665
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.11.0.0
Reporter: huxihx
Assignee: huxihx
Priority: Minor


When interrupting the background heartbeat thread, `Thread.interrupted();` is 
used. Actually, `Thread.currentThread().interrupt();` should be used to restore 
the interruption status. An alternative fix is to remove `Thread.interrupted();` 
entirely: since HeartbeatThread extends Thread and all code higher up the call 
stack is controlled, we could safely swallow this exception. Either way, 
`Thread.interrupted();` should not be used here: it tests (and clears) the 
interruption status rather than restoring it.
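The difference can be demonstrated with plain JDK threads (a sketch of the pattern, not the HeartbeatThread code itself):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptSketch {
    // Thread.interrupted() tests and CLEARS the interrupt flag;
    // Thread.currentThread().interrupt() sets it again after a catch.
    static boolean statusAfterCatch(boolean restore) throws InterruptedException {
        AtomicBoolean observed = new AtomicBoolean(false);
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(1000); // throws once the thread is interrupted
            } catch (InterruptedException e) {
                if (restore) {
                    Thread.currentThread().interrupt(); // restores the status
                } else {
                    Thread.interrupted();               // leaves it cleared (the bug)
                }
                observed.set(Thread.currentThread().isInterrupted());
            }
        });
        t.start();
        t.interrupt();
        t.join();
        return observed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("restored: " + statusAfterCatch(true));
        System.out.println("cleared:  " + statusAfterCatch(false));
    }
}
```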





[GitHub] kafka pull request #3586: KAFKA-5665: Heartbeat thread should use correct in...

2017-07-26 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3586

KAFKA-5665: Heartbeat thread should use correct interruption method to 
restore status

When interrupting the background heartbeat thread, `Thread.interrupted();` 
is used. Actually, `Thread.currentThread().interrupt();` should be used to 
restore the interruption status. An alternative way to solve is to remove 
`Thread.interrupted();` since HeartbeatThread extends Thread and all code 
higher up on the call stack is controlled, so we could safely swallow this 
exception. Anyway, `Thread.interrupted();` should not be used here. It's a test 
method not an action.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-5665

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3586.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3586


commit 36d489eede2229db92eda077ae4baff80044fb25
Author: huxihx 
Date:   2017-07-27T03:53:21Z

KAFKA-5665: Incorrect interruption invoking method used for Heartbeat thread

When interrupting the background heartbeat thread, `Thread.interrupted();` 
is used. Actually, `Thread.currentThread().interrupt();` should be used to 
restore the interruption status. An alternative way to solve is to remove 
`Thread.interrupted();` since HeartbeatThread extends Thread and all code 
higher up on the call stack is controlled, so we could safely swallow this 
exception. Anyway, `Thread.interrupted();` should not be used here. It's a test 
method not an action.






Jenkins build is back to normal : kafka-trunk-jdk8 #1854

2017-07-26 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-164 Add unavailablePartitionCount and per-partition Unavailable metrics

2017-07-26 Thread Ewen Cheslack-Postava
Seems pretty small and simple and I don't have anything for this release
yet, so I'll pick it up. We can rebalance as we near release date if
necessary.

Guozhang, I also notice that a bunch of JIRAs in that release wiki aren't
marked for 1.0.0 yet and aren't blockers. Since you're release mgr I'll
leave it up to you, but we might want to adjust at least the release
versions to help w/ tracking until we decide to bump things from the
release.

-Ewen

On Wed, Jul 26, 2017 at 1:09 PM, Guozhang Wang  wrote:

> Hello,
>
> I would like to call out someone (committer) to voluntarily shepherd this
> KIP and drive it to be merged for 1.0.0. Please feel free to add your name
> on KIP-164 on the release wiki page:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
>
>
> Guozhang
>
> On Wed, Jul 26, 2017 at 10:16 AM, Dong Lin  wrote:
>
> > Thank you all for your vote!
> >
> > This KIP has been accepted with 3 binding votes (Ismael, Becket and Joel)
> > and 4 non-binding votes (Mickael, Michal, Edoardo and Bill).
> >
> > On Tue, Jul 25, 2017 at 9:07 PM, Joel Koshy  wrote:
> >
> > > +1
> > >
> > > On Thu, Jul 20, 2017 at 10:30 AM, Becket Qin 
> > wrote:
> > >
> > > > +1, Thanks for the KIP.
> > > >
> > > > On Thu, Jul 20, 2017 at 7:08 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Thanks for the KIP, +1 (binding).
> > > > >
> > > > > On Thu, Jun 1, 2017 at 9:44 AM, Dong Lin 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Can you please vote for KIP-164? The KIP can be found at
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-164-+Add+
> > > > > > UnderMinIsrPartitionCount+and+per-partition+UnderMinIsr+metrics
> > > > > > .
> > > > > >
> > > > > > Thanks,
> > > > > > Dong
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


[GitHub] kafka pull request #3571: KAFKA-5611; AbstractCoordinator should handle wake...

2017-07-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3571

