Build failed in Jenkins: kafka-trunk-jdk8 #3113

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3836, done.
remote: Counting objects: 0% (1/3836) ... 54% (2072/3836) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3112

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4831, done.
remote: Counting objects: 0% (1/4831) ... 56% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3111

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3836, done.
remote: Counting objects: 0% (1/3836) ... 54% (2072/3836) [progress output truncated]

Jenkins build is back to normal : kafka-2.1-jdk8 #21

2018-10-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3110

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4831, done.
remote: Counting objects: 0% (1/4831) ... 56% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3109

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3836, done.
remote: Counting objects: 0% (1/3836) ... 53% (2034/3836) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3108

2018-10-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7223: Add late-record metrics (#5742)

[github] KAFKA-7482: LeaderAndIsrRequest should be sent to the shutting down

[github] KAFKA-7485: Wait for truststore update request to complete in test

--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED


Jenkins build is back to normal : kafka-trunk-jdk11 #30

2018-10-12 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7502) Cleanup KTable materialization logic in a single place

2018-10-12 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-7502:


 Summary: Cleanup KTable materialization logic in a single place
 Key: KAFKA-7502
 URL: https://issues.apache.org/jira/browse/KAFKA-7502
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


Today, since we pre-create all the `KTableXXX` operators along with the logical 
nodes, we effectively duplicate the logic that determines whether the resulting 
KTable should be materialized, for example in `KTableKTableJoinNode` and in 
`KTableImpl#doJoin`. This is bug-prone, since we may update the logic in one 
class but forget to update the other.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-12 Thread Colin McCabe
On Fri, Oct 12, 2018, at 15:45, Yishun Guan wrote:
> Hi Colin,
> 
> Thanks for your suggestions. I have modified the current KIP with your
> comments. However, I still think I should keep the entire list, because it
> is a good way to keep track of which classes need to be changed, and others
> can discuss whether changes to these internal classes are necessary.

Hi Yishun,

I guess I don't feel that strongly about it.  If you want to keep the internal 
classes in the list, that's fine.  They don't really need to be in the KIP but 
it's OK if they're there.

Thanks for working on this.  +1 (binding).

best,
Colin

> 
> Thanks,
> Yishun
> 
> On Fri, Oct 12, 2018 at 11:42 AM Colin McCabe  wrote:
> 
> > Hi Yishun,
> >
> > Thanks for looking at this.
> >
> > Under "proposed changes," it's not necessary to add a section where you
> > demonstrate adding "implements AutoCloseable" to the code.  We know what
> > adding that would look like.
> >
> > Can you create a full, single, list of all the classes that would be
> > affected?  It's not necessary to write who suggested which classes in the
> > KIP.  Also, I noticed some of the classes here are in "internals"
> > packages.  Given that these are internal classes that aren't part of our
> > API, it's not necessary to add them to the KIP, I think.  Since they are
> > implementation details, they can be changed at any time without a KIP.
> >
> > The "compatibility" section should have a discussion of the fact that we
> > can add the new interface without requiring any backwards-incompatible
> > changes at the source or binary level.  In particular, it would be good to
> > highlight that we are not renaming or changing the existing "close" methods.
> >
> > Under "rejected alternatives" we could explain why we chose to implement
> > AutoCloseable rather than Closeable.
> >
> > cheers,
> > Colin
> >
> >
> > On Thu, Oct 11, 2018, at 13:48, Yishun Guan wrote:
> > > Hi,
> > >
> > > Just to bump this voting thread up again. Thanks!
> > >
> > > Best,
> > > Yishun
> > > On Fri, Oct 5, 2018 at 12:58 PM Yishun Guan  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I think we have discussed this well enough to put this into a vote.
> > > >
> > > > Suggestions are welcome!
> > > >
> > > > Best,
> > > > Yishun
> > > >
> > > > On Wed, Oct 3, 2018, 2:30 PM Yishun Guan  wrote:
> > > >>
> > > >> Hi All,
> > > >>
> > > >> I want to start a voting on this KIP:
> > > >>
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> > > >>
> > > >> Here is the discussion thread:
> > > >>
> > https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
> > > >>
> > > >> Thanks,
> > > >> Yishun
> >
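
(For readers following the KIP: below is a minimal sketch of the change being 
voted on, using a hypothetical ExampleClient class rather than any of the 
classes actually listed in the KIP. The existing close() method is neither 
renamed nor changed; the class only declares the interface so callers can use 
try-with-resources.)

class ExampleClient implements AutoCloseable {
    @Override
    public void close() {
        // release sockets, threads, buffers, ...
    }

    void send(String record) {
        // hand the record to the underlying transport
    }
}

class Demo {
    public static void main(String[] args) {
        // try-with-resources calls close() automatically, even if send() throws
        try (ExampleClient client = new ExampleClient()) {
            client.send("hello");
        }
    }
}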


[DISCUSS] KIP-351: Add --under-min-isr option to describe topics command

2018-10-12 Thread Kevin Lu
Hi All,

After some feedback, I have reformulated KIP-351.

This KIP proposes an additional "--under-min-isr" option in TopicCommand to
show topic partitions whose in-sync replica count is below the configured
"min.insync.replicas", to help operators identify which partitions need
immediate attention.

Please take a look and provide some feedback!

Thanks!

Regards,
Kevin
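
(Illustration only: the KIP proposes a TopicCommand flag, but the same check 
can be approximated today with the Java AdminClient. The topic name and 
bootstrap address below are placeholders, not part of the proposal.)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import org.apache.kafka.common.config.ConfigResource;

public class UnderMinIsrCheck {
    public static void main(String[] args) throws Exception {
        String topic = "example-topic";                      // placeholder
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton(topic))
                .all().get().get(topic);

            // Effective min.insync.replicas for the topic (describeConfigs returns
            // the broker default when no topic-level override is set).
            ConfigResource res = new ConfigResource(ConfigResource.Type.TOPIC, topic);
            Config config = admin.describeConfigs(Collections.singleton(res))
                .all().get().get(res);
            int minIsr = Integer.parseInt(config.get("min.insync.replicas").value());

            for (TopicPartitionInfo p : desc.partitions()) {
                if (p.isr().size() < minIsr) {
                    System.out.printf("Partition %d is under min ISR (%d < %d)%n",
                        p.partition(), p.isr().size(), minIsr);
                }
            }
        }
    }
}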


Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-12 Thread Yishun Guan
Hi Colin,

Thanks for your suggestions. I have modified the current KIP with your
comments. However, I still think I should keep the entire list, because it
is a good way to keep track of which classes need to be changed, and others
can discuss whether changes to these internal classes are necessary.

Thanks,
Yishun

On Fri, Oct 12, 2018 at 11:42 AM Colin McCabe  wrote:

> Hi Yishun,
>
> Thanks for looking at this.
>
> Under "proposed changes," it's not necessary to add a section where you
> demonstrate adding "implements AutoCloseable" to the code.  We know what
> adding that would look like.
>
> Can you create a full, single, list of all the classes that would be
> affected?  It's not necessary to write who suggested which classes in the
> KIP.  Also, I noticed some of the classes here are in "internals"
> packages.  Given that these are internal classes that aren't part of our
> API, it's not necessary to add them to the KIP, I think.  Since they are
> implementation details, they can be changed at any time without a KIP.
>
> The "compatibility" section should have a discussion of the fact that we
> can add the new interface without requiring any backwards-incompatible
> changes at the source or binary level.  In particular, it would be good to
> highlight that we are not renaming or changing the existing "close" methods.
>
> Under "rejected alternatives" we could explain why we chose to implement
> AutoCloseable rather than Closeable.
>
> cheers,
> Colin
>
>
> On Thu, Oct 11, 2018, at 13:48, Yishun Guan wrote:
> > Hi,
> >
> > Just to bump this voting thread up again. Thanks!
> >
> > Best,
> > Yishun
> > On Fri, Oct 5, 2018 at 12:58 PM Yishun Guan  wrote:
> > >
> > > Hi,
> > >
> > > I think we have discussed this well enough to put this into a vote.
> > >
> > > Suggestions are welcome!
> > >
> > > Best,
> > > Yishun
> > >
> > > On Wed, Oct 3, 2018, 2:30 PM Yishun Guan  wrote:
> > >>
> > >> Hi All,
> > >>
> > >> I want to start a voting on this KIP:
> > >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> > >>
> > >> Here is the discussion thread:
> > >>
> https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
> > >>
> > >> Thanks,
> > >> Yishun
>


Build failed in Jenkins: kafka-2.1-jdk8 #20

2018-10-12 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7482: LeaderAndIsrRequest should be sent to the shutting down

[rajinisivaram] KAFKA-7485: Wait for truststore update request to complete in 
test

--
[...truncated 428.38 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #29

2018-10-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7223: Add late-record metrics (#5742)

--
[...truncated 1.29 MB...]
  ^
:140:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
val unsecureZkUtils = ZkUtils(zkConnect, 6000, 6000, false)
  ^
:165:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
for (path <- ZkUtils.SecureZkRootPaths) {
 ^
:181:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
val unsecureZkUtils = ZkUtils(zkUrl, 6000, 6000, false)
  ^
:182:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
val secureZkUtils = ZkUtils(zkUrl, 6000, 6000, true)
^
:195:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  private def testMigration(zkUrl: String, firstZk: ZkUtils, secondZk: ZkUtils) 
{
^
:195:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  private def testMigration(zkUrl: String, firstZk: ZkUtils, secondZk: ZkUtils) 
{
   ^
:197:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
for (path <- ZkUtils.SecureZkRootPaths ++ ZkUtils.SensitiveZkRootPaths) {
 ^
:197:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
for (path <- ZkUtils.SecureZkRootPaths ++ ZkUtils.SensitiveZkRootPaths) {
  ^
:210:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
firstZk.createPersistentPath(ZkUtils.ConsumersPath)
 ^
:213:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
secondZk.createPersistentPath(ZkUtils.ConsumersPath)
  ^
:218:
 object ZkUtils in 

Re: [VOTE] KIP-349 Priorities for Source Topics

2018-10-12 Thread Colin McCabe
On Mon, Oct 8, 2018, at 12:35, Thomas Becker wrote:
> Well my (perhaps flawed) understanding of topic priorities is that lower 
> priority topics are not consumed as long as higher priority ones have 
> unconsumed messages (which means our position < HW). So if I'm doing 
> this manually, I have to make some determination as to whether my high 
> priority topic partitions are at the HW before I can decide if I want to 
> poll the lower priority ones. Right?

Hi Thomas,

You could periodically check the last committed position of various partitions 
using KafkaConsumer#committed.  But this would be very inefficient.  For one 
thing, you'd have to keep waking up your consumer thread all the time to do 
this.

The two-consumer solution that I suggested earlier just implies that you have 
two consumers, one for the control data and one for the non-control data.  In 
that case, as long as control data is available, your consumer will always try 
to read it.  It doesn't involve the caller checking committed position using 
KafkaConsumer#committed at any point.

Usually, consumers are reading data that is relatively recent.  If the consumer 
is too slow to keep up with the incoming messages over the long term, the 
system usually gets into a bad state.  I think this is one reason why it's hard 
to think of use-cases for this feature.  If you had a control partition and 
data partition, the data partition wouldn't really block you from getting the 
control messages in a timely fashion.  You almost certainly need to be able to 
keep up with both partitions anyway.  Also, if you have to do some very 
expensive processing on data messages, you should be offloading that processing 
to another thread, rather than doing the expensive thing in your consumer 
thread.  And you can mute a partition while you're processing an expensive 
message from that partition, so it doesn't really block the processing of other 
partitions anyway.

Maybe there's some really cool use-case that I haven't thought of.  But so far 
I can't really think of any time I would need topic priorities if I was muting 
topics and offloading blocking operations in a reasonable way.  It would be 
good to identify use-cases because it would motivate choices like how many 
priorities do we want (2? 256?  4 billion?) and what the API would be like, etc.

best,
Colin

> 
> On Fri, 2018-10-05 at 11:34 -0700, Colin McCabe wrote:
> 
> On Fri, Oct 5, 2018, at 10:58, Thomas Becker wrote:
> 
> Colin,
> 
> Would you mind sharing your vision for how this looks with multiple
> 
> consumers? I'm still getting my bearings with the new consumer but it's
> 
> not immediately obvious to me how this would work.
> 
> 
> Hi Thomas,
> 
> 
> I was just responding to the general idea that you would have some kind 
> of control topic that you wanted to read with very low latency, and some 
> kind of set of data topics where the latency requirements are less 
> strict.  In that case, you can just have two consumers: one for the low-
> latency topic, and one for the less low-latency topics.
> 
> 
> There's a lot of things in this picture that are unclear.  Does the data 
> in one set of topics have any relation to the data in the other?  Why do 
> we want a control channel distinct from the data channel?  That's why I 
> asked for clarification on the use-case.
> 
> 
> In particular, it doesn't seem particularly easy to know when you are at 
> the high
> 
> watermark of a topic.
> 
> 
> KafkaConsumer#committed will return the last committed offset for a 
> partition.  However, I'm not sure I understand why you want this 
> information in this case-- can you expand a bit on this?
> 
> 
> best,
> 
> Colin
> 
> 
> 
> 
> -Tommy
> 
> 
> On Mon, 2018-10-01 at 13:43 -0700, Colin McCabe wrote:
> 
> 
> Hi all,
> 
> 
> 
> I feel like the DISCUSS thread didn't really come to a conclusion, so a
> 
> vote would be premature here.
> 
> 
> 
> In particular, I still don't really understand the use-case for this
> 
> feature.  Can someone give a concrete scenario where you would need
> 
> this?  The control plane / data plane example that is listed in the KIP
> 
> doesn't require this feature.  You can just have one consumer for the
> 
> control plane, and one for the data plane, and do priority that way.
> 
> The discussion feels kind of unfocused since we haven't identified even
> 
> one concrete use-case that needs this feature.
> 
> 
> 
> Unfortunately, this is a feature which consumes server-side memory.  We
> 
> have to store the priorities somehow when doing incremental fetch
> 
> requests.  If we go with an int as suggested, then this is at least 4
> 
> bytes per partition per incremental fetch request.  It also makes it
> 
> more complex and potentially slower to maintain the linked list of
> 
> partitions in the fetch requests.  Before we think about this, I'd like
> 
> to have a concrete use-case in mind, so that we can evaluate the costs
> 
> versus benefits.
> 
> 
> 
> best,
> 
> 
> 
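
(To make the two-consumer suggestion above concrete, here is a minimal sketch. 
The topic names, group ids, and poll timeouts are illustrative placeholders; 
the point is simply that the control topic gets its own consumer and is never 
stuck behind a backlog on the data topic.)

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TwoConsumerPriority {

    static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder
        props.put("group.id", groupId);
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    public static void main(String[] args) {
        try (KafkaConsumer<String, String> control = newConsumer("control-reader");
             KafkaConsumer<String, String> data = newConsumer("data-reader")) {
            control.subscribe(Collections.singletonList("control-topic")); // placeholder
            data.subscribe(Collections.singletonList("data-topic"));       // placeholder

            while (true) {
                // Control messages are drained first on their own consumer, so a
                // backlog on the data topic never delays them.
                for (ConsumerRecord<String, String> r : control.poll(Duration.ofMillis(100))) {
                    // handle control record promptly
                }
                // The data topic gets a short poll each iteration; expensive records
                // can be handed to another thread while the partition is pause()d.
                for (ConsumerRecord<String, String> r : data.poll(Duration.ofMillis(10))) {
                    // handle data record
                }
            }
        }
    }
}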

Jenkins build is back to normal : kafka-2.1-jdk8 #19

2018-10-12 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7485) Flaky test `DyanamicBrokerReconfigurationTest.testTrustStoreAlter`

2018-10-12 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7485.
---
   Resolution: Fixed
 Reviewer: Jason Gustafson
Fix Version/s: 2.1.0

> Flaky test `DyanamicBrokerReconfigurationTest.testTrustStoreAlter`
> --
>
> Key: KAFKA-7485
> URL: https://issues.apache.org/jira/browse/KAFKA-7485
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> {code}
> 09:53:53 
> 09:53:53 kafka.server.DynamicBrokerReconfigurationTest > testTrustStoreAlter 
> FAILED
> 09:53:53 org.apache.kafka.common.errors.SslAuthenticationException: SSL 
> handshake failed
> 09:53:53 
> 09:53:53 Caused by:
> 09:53:53 javax.net.ssl.SSLProtocolException: Handshake message 
> sequence violation, 2
> 09:53:53 at 
> java.base/sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1611)
> 09:53:53 at 
> java.base/sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:497)
> 09:53:53 at 
> java.base/sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:745)
> 09:53:53 at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:680)
> 09:53:53 at 
> java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
> 09:53:53 at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:474)
> 09:53:53 at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:274)
> 09:53:53 at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:126)
> 09:53:53 at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:532)
> 09:53:53 at 
> org.apache.kafka.common.network.Selector.poll(Selector.java:467)
> 09:53:53 at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1210)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
> 09:53:53 at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1119)
> 09:53:53 at 
> kafka.server.DynamicBrokerReconfigurationTest.kafka$server$DynamicBrokerReconfigurationTest$$awaitInitialPositions(DynamicBrokerReconfigurationTest.scala:997)
> 09:53:53 at 
> kafka.server.DynamicBrokerReconfigurationTest$ConsumerBuilder.build(DynamicBrokerReconfigurationTest.scala:1424)
> 09:53:53 at 
> kafka.server.DynamicBrokerReconfigurationTest.verifySslProduceConsume$1(DynamicBrokerReconfigurationTest.scala:286)
> 09:53:53 at 
> kafka.server.DynamicBrokerReconfigurationTest.testTrustStoreAlter(DynamicBrokerReconfigurationTest.scala:311)
> 09:53:53 
> 09:53:53 Caused by:
> 09:53:53 javax.net.ssl.SSLProtocolException: Handshake message 
> sequence violation, 2
> 09:53:53 at 
> java.base/sun.security.ssl.HandshakeStateManager.check(HandshakeStateManager.java:398)
> 09:53:53 at 
> java.base/sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:215)
> 09:53:53 at 
> java.base/sun.security.ssl.Handshaker.processLoop(Handshaker.java:1098)
> 09:53:53 at 
> java.base/sun.security.ssl.Handshaker$1.run(Handshaker.java:1031)
> 09:53:53 at 
> java.base/sun.security.ssl.Handshaker$1.run(Handshaker.java:1028)
> 09:53:53 at 
> java.base/java.security.AccessController.doPrivileged(Native Method)
> 09:53:53 at 
> java.base/sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1540)
> 09:53:53 at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:399)
> 09:53:53 at 
> 

[jira] [Created] (KAFKA-7501) double deallocation of producer batch upon expiration of inflight requests and error response

2018-10-12 Thread xiongqi wu (JIRA)
xiongqi wu created KAFKA-7501:
-

 Summary: double deallocation of producer batch upon expiration of 
inflight requests and error response
 Key: KAFKA-7501
 URL: https://issues.apache.org/jira/browse/KAFKA-7501
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: xiongqi wu
Assignee: xiongqi wu


The following event sequence will lead to double deallocation of a producer 
batch.

1) a producer batch is sent and the response is not received.

2) the in-flight producer batch expires when deliveryTimeoutMs is reached. The 
sender fails the producer batch via "failBatch" and the producer batch is 
deallocated via "accumulator.deallocate(batch)".

3) the response for the batch finally arrives after the batch has expired, and 
the response contains the error "Errors.MESSAGE_TOO_LARGE".

4) the producer batch is split and the original batch is deallocated a second 
time. As a result, an "IllegalStateException" is raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
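
(A toy model of the sequence above, not Kafka's actual producer internals: it 
only illustrates why the late-response path needs to check whether the batch 
was already expired and freed before deallocating it again.)

import java.util.HashSet;
import java.util.Set;

public class DoubleDeallocateDemo {
    static class Batch {
        boolean done = false;   // set when the batch is failed/expired
    }

    static class Accumulator {
        private final Set<Batch> live = new HashSet<>();
        Batch allocate() { Batch b = new Batch(); live.add(b); return b; }
        void deallocate(Batch b) {
            if (!live.remove(b)) {
                throw new IllegalStateException("batch deallocated twice");
            }
        }
    }

    public static void main(String[] args) {
        Accumulator accumulator = new Accumulator();
        Batch batch = accumulator.allocate();

        // 2) delivery timeout: the sender fails and frees the expired batch.
        batch.done = true;
        accumulator.deallocate(batch);

        // 3)+4) a late MESSAGE_TOO_LARGE response arrives: splitting/freeing must
        // be guarded, otherwise deallocate() runs a second time and throws.
        if (!batch.done) {
            accumulator.deallocate(batch);   // skipped thanks to the guard
        }
    }
}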


Build failed in Jenkins: kafka-trunk-jdk8 #3107

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4816, done.
remote: Counting objects: 0% (1/4816) ... 56% [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3106

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3821, done.
remote: Counting objects: ... 54% (2064/3821) [remaining progress output truncated]

Re: [DISCUSS] KIP-377: TopicCommand to use AdminClient

2018-10-12 Thread Colin McCabe
Hi Viktor,

Thanks for bumping this thread.

I think we should just focus on transitioning the TopicCommand to use 
AdminClient, and talk about protocol changes in a separate KIP.  Protocol 
changes often involve a lot of discussion.  This does mean that we couldn't 
implement the "list topics under deletion" feature when using AdminClient at 
the moment.  We could add a note to the tool output indicating this.
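
For anyone following along who is less familiar with the Java AdminClient, here
is a minimal sketch of listing and describing topics without ZooKeeper (the
bootstrap address, class name, and output format are illustrative assumptions,
not the actual TopicCommand changes proposed in the KIP):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class ListTopicsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Topics still pending deletion are not returned here, because
                // the Metadata API does not expose them to the client.
                for (String name : admin.listTopics().names().get()) {
                    TopicDescription desc = admin
                            .describeTopics(Collections.singleton(name))
                            .all().get().get(name);
                    System.out.println(name + " partitions=" + desc.partitions().size());
                }
            }
        }
    }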

We should move the protocol discussion to a separate thread, and probably look 
at KIP-142 as well.

best,
Colin


On Tue, Oct 9, 2018, at 07:45, Viktor Somogyi-Vass wrote:
> Hi All,
> 
> Would like to bump this as the conversation sank a little bit, but more
> importantly I'd like to validate my plans/ideas on extending the Metadata
> protocol. I was thinking about two other alternatives, namely:
> 1. Create a ListTopicUnderDeletion protocol. This, however, would be
> unnecessary: it'd have one very narrow functionality which we can't extend.
> It'd make sense to have a list topics or describe topics protocol where we
> can list/describe topics under deletion, but for normal listing/describing
> we already use the metadata, so it would be a duplication of functionality.
> 2. DeleteTopicsResponse could return the topics under deletion if the
> request's argument list is empty which might make sense at the first look,
> but actually we'd mix the query functionality with the delete functionality
> which is counterintuitive.
> 
> Even though most clients won't need these "limbo" topics (which are under
> deletion) in the foreseeable future, they can be considered part of the
> cluster state or metadata, and to me that makes sense. Also, it doesn't add
> much overhead to the response size, as in my experience users typically
> don't delete topics very often.
> 
> I'd be happy to receive some ideas/feedback on this.
> 
> Cheers,
> Viktor
> 
> 
> On Fri, Sep 28, 2018 at 4:51 PM Viktor Somogyi-Vass 
> wrote:
> 
> > Hi All,
> >
> > I made an update to the KIP. Just in short:
> > Currently KafkaAdminClient.describeTopics() and
> > KafkaAdminClient.listTopics() use the Metadata protocol to acquire topic
> > information. The returned response, however, won't contain topics that
> > are under deletion but whose deletion couldn't complete yet (for instance
> > because some replicas are offline); therefore it is not possible to
> > implement the current command's "marked for deletion" feature. To get
> > around this I introduced some changes in the Metadata protocol.
> >
> > Thanks,
> > Viktor
> >
> > On Fri, Sep 28, 2018 at 4:48 PM Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com> wrote:
> >
> >> Hi Mickael,
> >>
> >> Thanks for the feedback, I also think that many customers have wanted this
> >> for a long time.
> >>
> >> Cheers,
> >> Viktor
> >>
> >> On Fri, Sep 28, 2018 at 11:45 AM Mickael Maison 
> >> wrote:
> >>
> >>> Hi Viktor,
> >>> Thanks for taking this task!
> >>> This is a very nice change as it will allow users to use this tool in
> >>> many Cloud environments where direct zookeeper access is not
> >>> available.
> >>>
> >>>
> >>> On Thu, Sep 27, 2018 at 10:34 AM Viktor Somogyi-Vass
> >>>  wrote:
> >>> >
> >>> > Hi All,
> >>> >
> >>> > This is the continuation of the old KIP-375 with the same title:
> >>> >
> >>> https://lists.apache.org/thread.html/dc71d08de8cd2f082765be22c9f88bc9f8b39bb8e0929a3a4394e9da@%3Cdev.kafka.apache.org%3E
> >>> >
> >>> > The problem there was that two KIPs were created around the same time
> >>> and I
> >>> > chose to reorganize mine a bit and give it a new number to avoid
> >>> > duplication.
> >>> >
> >>> > The KIP summary here once again:
> >>> >
> >>> > I wrote up a relatively simple KIP about improving the Kafka protocol
> >>> and
> >>> > the TopicCommand tool to support the new Java based AdminClient and
> >>> > hopefully to deprecate the Zookeeper side of it.
> >>> >
> >>> > I would be happy to receive some opinions about this. In general I think
> >>> > this would be an important addition, as this is one of the few remaining
> >>> > but important tools that still uses a direct Zookeeper connection.
> >>> >
> >>> > Here is the link for the KIP:
> >>> >
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-377%3A+TopicCommand+to+use+AdminClient
> >>> >
> >>> > Cheers,
> >>> > Viktor
> >>>
> >>


Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-12 Thread Colin McCabe
Hi Yishun,

Thanks for looking at this.

Under "proposed changes," it's not necessary to add a section where you 
demonstrate adding "implements AutoCloseable" to the code.  We know what adding 
that would look like.

Can you create a full, single list of all the classes that would be affected?  
It's not necessary to write who suggested which classes in the KIP.  Also, I 
noticed some of the classes here are in "internals" packages.  Given that these 
are internal classes that aren't part of our API, it's not necessary to add 
them to the KIP, I think.  Since they are implementation details, they can be 
changed at any time without a KIP.

The "compatibility" section should have a discussion of the fact that we can 
add the new interface without requiring any backwards-incompatible changes at 
the source or binary level.  In particular, it would be good to highlight that 
we are not renaming or changing the existing "close" methods.

Under "rejected alternatives" we could explain why we chose to implement 
AutoCloseable rather than Closeable.
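
To illustrate the pattern under discussion: any class implementing AutoCloseable
(or Closeable, which narrows close() to throwing IOException) can be used in a
try-with-resources statement. A minimal, purely hypothetical example (not one of
the classes the KIP would actually touch):

    // Hypothetical resource class, shown only to illustrate try-with-resources.
    public class ExampleResource implements AutoCloseable {

        public void doWork() {
            System.out.println("working");
        }

        @Override
        public void close() {
            // AutoCloseable permits "throws Exception" here, but an existing
            // close() signature does not have to change to satisfy the interface.
            System.out.println("closed");
        }

        public static void main(String[] args) {
            try (ExampleResource r = new ExampleResource()) {
                r.doWork();
            } // close() runs automatically here, even if doWork() throws
        }
    }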

cheers,
Colin


On Thu, Oct 11, 2018, at 13:48, Yishun Guan wrote:
> Hi,
> 
> Just to bump this voting thread up again. Thanks!
> 
> Best,
> Yishun
> On Fri, Oct 5, 2018 at 12:58 PM Yishun Guan  wrote:
> >
> > Hi,
> >
> > I think we have discussed this well enough to put this into a vote.
> >
> > Suggestions are welcome!
> >
> > Best,
> > Yishun
> >
> > On Wed, Oct 3, 2018, 2:30 PM Yishun Guan  wrote:
> >>
> >> Hi All,
> >>
> >> I want to start a voting on this KIP:
> >> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> >>
> >> Here is the discussion thread:
> >> https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
> >>
> >> Thanks,
> >> Yishun


Build failed in Jenkins: kafka-trunk-jdk8 #3105

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3821, done.
remote: Counting objects: ... 54% (2064/3821) [remaining progress output truncated]

[jira] [Resolved] (KAFKA-7482) LeaderAndIsrRequest should be sent to the shutting down broker

2018-10-12 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7482.

   Resolution: Fixed
Fix Version/s: 2.1.0

merged to 2.1 and trunk

> LeaderAndIsrRequest should be sent to the shutting down broker
> --
>
> Key: KAFKA-7482
> URL: https://issues.apache.org/jira/browse/KAFKA-7482
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
> Fix For: 2.1.0
>
>
> We introduced a regression in KAFKA-5642 in 1.1. Before 1.1, during a 
> controlled shutdown, the LeaderAndIsrRequest is sent to the shutting down 
> broker to inform it that it's no longer the leader for partitions whose 
> leader have been moved. After 1.1, such LeaderAndIsrRequest is no longer sent 
> to the shutting down broker. This can delay the time for the client to find 
> out the new leader.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3104

2018-10-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3841, done.
remote: Counting objects: ... 54% (2075/3841) [remaining progress output truncated]

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-12 Thread Colin McCabe
Congratulations, Manikumar!  Well done.

best,
Colin


On Fri, Oct 12, 2018, at 01:25, Edoardo Comar wrote:
> Well done Manikumar !
> --
> 
> Edoardo Comar
> 
> IBM Event Streams
> IBM UK Ltd, Hursley Park, SO21 2JN
> 
> 
> 
> 
> From:   "Matthias J. Sax" 
> To: dev 
> Cc: users 
> Date:   11/10/2018 23:41
> Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy
> 
> 
> 
> Congrats!
> 
> 
> On 10/11/18 2:31 PM, Yishun Guan wrote:
> > Congrats Manikumar!
> > On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
> >  wrote:
> >>
> >> Great news, congratulations Manikumar!!
> >>
> >> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian 
> 
> >> wrote:
> >>
> >>> Congrats Manikumar!
> >>>
> >>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
> >>> wrote:
> >>>
>  Bravo!
> 
>  On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  
> wrote:
> 
> > Congratulations Manikumar! Thanks for your continued contributions.
> >
> > Ismael
> >
> > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 
> 
> > wrote:
> >
> >> Hi all,
> >>
> >> The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> >>> and
> > we
> >> are
> >> pleased to announce that he has accepted!
> >>
> >> Manikumar has contributed 134 commits including significant work to
> >>> add
> >> support for delegation tokens in Kafka:
> >>
> >> KIP-48:
> >>
> >>
> >
> 
> >>> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> 
> >> KIP-249
> >> <
> >
> 
> >>> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> 
> >>
> >> :
> >>
> >>
> >
> 
> >>> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> 
> >>
> >> He has broad experience working with many of the core components in
>  Kafka
> >> and he has reviewed over 80 PRs. He has also made huge progress
> > addressing
> >> some of our technical debt.
> >>
> >> We appreciate the contributions and we are looking forward to more.
> >> Congrats Manikumar!
> >>
> >> Jason, on behalf of the Apache Kafka PMC
> >>
> >
> 
> >>>
> >>
> >>
> >> --
> >> Sönke Liebau
> >> Partner
> >> Tel. +49 179 7940878
> >> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
> 
> [attachment "signature.asc" deleted by Edoardo Comar/UK/IBM] 
> 
> 
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number 
> 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-12 Thread Edoardo Comar
Well done Manikumar !
--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN




From:   "Matthias J. Sax" 
To: dev 
Cc: users 
Date:   11/10/2018 23:41
Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy



Congrats!


On 10/11/18 2:31 PM, Yishun Guan wrote:
> Congrats Manikumar!
> On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
>  wrote:
>>
>> Great news, congratulations Manikumar!!
>>
>> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian 

>> wrote:
>>
>>> Congrats Manikumar!
>>>
>>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
>>> wrote:
>>>
 Bravo!

 On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma  
wrote:

> Congratulations Manikumar! Thanks for your continued contributions.
>
> Ismael
>
> On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson 

> wrote:
>
>> Hi all,
>>
>> The PMC for Apache Kafka has invited Manikumar Reddy as a committer
>>> and
> we
>> are
>> pleased to announce that he has accepted!
>>
>> Manikumar has contributed 134 commits including significant work to
>>> add
>> support for delegation tokens in Kafka:
>>
>> KIP-48:
>>
>>
>

>>> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka

>> KIP-249
>> <
>

>>> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249

>>
>> :
>>
>>
>

>>> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient

>>
>> He has broad experience working with many of the core components in
 Kafka
>> and he has reviewed over 80 PRs. He has also made huge progress
> addressing
>> some of our technical debt.
>>
>> We appreciate the contributions and we are looking forward to more.
>> Congrats Manikumar!
>>
>> Jason, on behalf of the Apache Kafka PMC
>>
>

>>>
>>
>>
>> --
>> Sönke Liebau
>> Partner
>> Tel. +49 179 7940878
>> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany

[attachment "signature.asc" deleted by Edoardo Comar/UK/IBM] 


Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-10-12 Thread Jan Filipiak

I'd say you can just call the vote.

That happens all the time, and if something comes up, it just goes back 
to discussion.


I would not expect too much attention with yet another email in this 
thread.


best Jan

On 09.10.2018 13:56, Adam Bellemare wrote:

Hello Contributors

I know that 2.1 is about to be released, but I do need to bump this to keep
visibility up. I am still intending to push this through once contributor
feedback is given.

Main points that need addressing:
1) Any way (or benefit) in structuring the current singular graph node into
multiple nodes? It has a whopping 25 parameters right now. I am a bit fuzzy
on how the optimizations are supposed to work, so I would appreciate any
help on this aspect.

2) Overall strategy for joining + resolving. This thread has had much discourse
between Jan and me, comparing the current highwater mark proposal and a groupBy
+ reduce proposal. I am of the opinion that we need to strictly handle any
chance of out-of-order data and leave none of it up to the consumer. Any
comments or suggestions here would also help.

3) Anything else that you see that would prevent this from moving to a vote?

Thanks

Adam







On Sun, Sep 30, 2018 at 10:23 AM Adam Bellemare 
wrote:


Hi Jan

With the Stores.windowStoreBuilder and Stores.persistentWindowStore, you
actually only need to specify the number of segments you want and how large
they are. To the best of my understanding, what happens is that the
segments are automatically rolled over as new data with new timestamps is
created. We use this exact functionality in some of the work done
internally at my company. For reference, this is the hopping windowed store.

https://kafka.apache.org/11/documentation/streams/developer-guide/dsl-api.html#id21

In the code that I have provided, there are going to be two 24h segments.
When a record is put into the windowStore, it will be inserted at time T in
both segments. The two segments will always overlap by 12h. As time goes on
and new records are added (say at time T+12h+), the oldest segment will be
automatically deleted and a new segment created. The records are by default
inserted with the context.timestamp(), such that it is the record time, not
the clock time, which is used.

To the best of my understanding, the timestamps are retained when
restoring from the changelog.

Basically, this is a heavy-handed way to deal with TTL at the segment level,
instead of at the individual record level.
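
As a rough sketch of the kind of store construction being described (the store
name, retention, segment count, and serdes below are assumptions for the
example, not necessarily what the PR uses):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;
    import org.apache.kafka.streams.state.WindowStore;

    public class HighwaterStoreSketch {
        // A persistent window store whose retention is enforced by rolling
        // segments rather than per-record TTL: as record time advances, the
        // oldest segment is dropped and a new one is created.
        public static StoreBuilder<WindowStore<String, Long>> highwaterStore() {
            return Stores.windowStoreBuilder(
                    Stores.persistentWindowStore(
                            "highwater-store",            // store name (illustrative)
                            TimeUnit.HOURS.toMillis(24),  // retention period
                            2,                            // number of segments
                            TimeUnit.HOURS.toMillis(24),  // window size
                            false),                       // do not retain duplicates
                    Serdes.String(),
                    Serdes.Long());
        }
    }

A processor would then call put(key, value, context.timestamp()) against the
resulting store and read back with fetch(key, timeFrom, timeTo), so it is the
record time rather than the wall-clock time that drives segment expiry.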

On Tue, Sep 25, 2018 at 5:18 PM Jan Filipiak 
wrote:


Will that work? I expected it to blow up with ClassCastException or
similar.

You either would have to specify the window you fetch/put or iterate
across all windows the key was found in, right?

I just hope the window-store doesn't check stream-time under the hood;
that would be a questionable interface.

If it does: did you see my comment on checking all the windows earlier?
That would be needed to actually give reasonable time guarantees.

Best



On 25.09.2018 13:18, Adam Bellemare wrote:

Hi Jan

Check for "highwaterMat" in the PR. I only changed the state store, not
the ProcessorSupplier.

Thanks,
Adam

On Mon, Sep 24, 2018 at 2:47 PM, Jan Filipiak 


On 24.09.2018 16:26, Adam Bellemare wrote:


@Guozhang

Thanks for the information. This is indeed something that will be
extremely
useful for this KIP.

@Jan
Thanks for your explanations. That being said, I will not be moving ahead
with an implementation using the reshuffle/groupBy solution as you propose.

That being said, if you wish to implement it yourself off of my current PR
and submit it as a competitive alternative, I would be more than happy to
help vet that as an alternate solution. As it stands right now, I do not
really have more time to invest into alternatives without there being a
strong indication from the binding voters which they would prefer.



Hey, totally no worries. I think I personally gave up on the streams DSL for
some time already, otherwise I would have pulled this KIP through already.

I am currently reimplementing my own DSL based on PAPI.



I will look at finishing up my PR with the windowed state store in the next
week or so, exercising it via tests, and then I will come back for final
discussions. In the meantime, I hope that any of the binding voters could
take a look at the KIP in the wiki. I have updated it according to the
latest plan:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable

I have also updated the KIP PR to use a windowed store. This could be
replaced by the results of KIP-258 whenever they are completed.
https://github.com/apache/kafka/pull/5527

Thanks,

Adam



Is the HighWatermarkResolverProcessorSupplier already updated in the PR?
I expected it to change to Windowed, Long. Am I missing something?






On Fri, Sep 14, 2018 at 2:24 PM, Guozhang Wang 
wrote:

Correction on my previous email: KAFKA-5533 is the wrong link, as it is for
corresponding changelog mechanisms. But as 

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-12 Thread Viktor Somogyi-Vass
Congratulations Manikumar, well deserved!

On Fri, 12 Oct 2018, 06:30 Andras Beni, 
wrote:

> Congratulations, Manikumar!
>
> Srinivas Reddy  wrote (time: 2018 Oct 12, Fri 3:00):
>
> > Congratulations Mani. Well deserved 
> >
> > -
> > Srinivas
> >
> > - Typed on tiny keys. pls ignore typos.{mobile app}
> >
> > On Fri 12 Oct, 2018, 01:39 Jason Gustafson,  wrote:
> >
> > > Hi all,
> > >
> > > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> > we
> > > are
> > > pleased to announce that he has accepted!
> > >
> > > Manikumar has contributed 134 commits including significant work to add
> > > support for delegation tokens in Kafka:
> > >
> > > KIP-48:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > KIP-249
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > >
> > > :
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > >
> > > He has broad experience working with many of the core components in
> Kafka
> > > and he has reviewed over 80 PRs. He has also made huge progress
> > addressing
> > > some of our technical debt.
> > >
> > > We appreciate the contributions and we are looking forward to more.
> > > Congrats Manikumar!
> > >
> > > Jason, on behalf of the Apache Kafka PMC
> > >
> >
>