Build failed in Jenkins: kafka-trunk-jdk8 #3540

2019-04-13 Thread Apache Jenkins Server
See 

--
[...truncated 593 B...]
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 3990, done.
remote: Counting objects: 0% (1/3990) ... 58% (2275/3990) [repeated progress updates collapsed; log truncated here]
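The stderr lines above ("Could not read <sha>", "missing object referenced by 'refs/tags/2.2.0'") indicate a corrupted object database in the build agent's cached clone, not a problem on GitHub's side. A minimal, self-contained sketch of the usual diagnosis with `git fsck` (run here against a throwaway repo so it is runnable anywhere; on the real agent you would run it inside the Jenkins workspace, whose path is not shown in this log, and `git` on the PATH is assumed):

```shell
# Create a scratch repo so the sketch is self-contained; on the CI agent
# you would instead cd into the Jenkins workspace's cached clone.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "probe"

# git fsck walks the whole object graph and reports unreadable or missing
# objects -- the same condition behind the "Could not read <sha>" errors.
if git -C "$repo" fsck --full >/dev/null 2>&1; then
    fsck_ok=1
    echo "object store OK"
else
    fsck_ok=0
    # Typical CI remediation: wipe the workspace and let Jenkins re-clone,
    # rather than attempting object-level repair.
    echo "object store corrupt: delete the workspace and re-clone"
fi
rm -rf "$repo"
```

On a corrupted clone, `git fsck --full` exits non-zero and names the missing objects; since every failed build in this digest reports the same object IDs, wiping the shared workspace once and re-cloning would unblock all of them.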

[jira] [Created] (KAFKA-8231) Expansion of ConnectClusterState interface

2019-04-13 Thread Chris Egerton (JIRA)
Chris Egerton created KAFKA-8231:


 Summary: Expansion of ConnectClusterState interface
 Key: KAFKA-8231
 URL: https://issues.apache.org/jira/browse/KAFKA-8231
 Project: Kafka
  Issue Type: Improvement
Reporter: Chris Egerton
Assignee: Chris Egerton
 Fix For: 2.3.0


This covers [KIP-454: Expansion of the ConnectClusterState interface|https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3539

2019-04-13 Thread Apache Jenkins Server
See 

--
[...truncated 593 B...]
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 3990, done.
remote: Counting objects: 0% (1/3990) ... 58% (2275/3990) [repeated progress updates collapsed; log truncated here]

[DISCUSS] KIP-454: Expansion of the ConnectClusterState interface

2019-04-13 Thread Chris Egerton
Hi all,

I've posted "KIP-454: Expansion of the ConnectClusterState interface",
which proposes that we provide more information about the Connect
cluster to REST extensions.

The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-454%3A+Expansion+of+the+ConnectClusterState+interface

I'm eager to hear people's thoughts on this and would appreciate any
feedback.

Cheers,

Chris
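For context on what "more information about the Connect cluster" means in practice: before this KIP, a REST extension sees little beyond the set of connector names, the same information operators retrieve from the worker's REST API. An illustrative sketch of that baseline (the worker address is the conventional default, and the JSON payload below is an inlined assumption so the sketch runs without a live cluster):

```shell
# A Connect worker answers GET /connectors with a JSON array of connector
# names, fetched in practice with e.g.:
#   curl -s http://localhost:8083/connectors
# (localhost:8083 is the default REST listener; adjust for your deployment.)
# We inline a hypothetical response so the sketch is self-contained.
connectors='["file-sink","jdbc-source"]'

# Strip the JSON brackets/quotes and count the comma-separated names.
names="$(printf '%s' "$connectors" | tr -d '[]"')"
count="$(printf '%s\n' "$names" | tr ',' '\n' | grep -c .)"
echo "connectors in cluster: $count"
```

KIP-454's point is that an extension should not have to call back into this REST API for such cluster information; the proposal makes it available to the extension directly.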


Build failed in Jenkins: kafka-trunk-jdk8 #3538

2019-04-13 Thread Apache Jenkins Server
See 

--
[...truncated 593 B...]
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 3990, done.
remote: Counting objects: 0% (1/3990) ... 58% (2275/3990) [repeated progress updates collapsed; log truncated here]

Build failed in Jenkins: kafka-2.2-jdk8 #79

2019-04-13 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8213 - Fix typo in Streams Dev Guide (#6574)

[matthias] KAFKA-8212 DOCS (kafka) - Fix Maven artifacts table from cutting off

--
[...truncated 2.73 MB...]
kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3537

2019-04-13 Thread Apache Jenkins Server
See 

--
[...truncated 593 B...]
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 3990, done.
remote: Counting objects: 0% (1/3990) ... 58% (2275/3990) [repeated progress updates collapsed; log truncated here]

Build failed in Jenkins: kafka-trunk-jdk11 #434

2019-04-13 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-8213 - Fix typo in Streams Dev Guide (#6574)

[mjsax] KAFKA-8212 DOCS (kafka) - Fix Maven artifacts table from cutting off

[github] KAFKA-8209: Wrong link for KStreams DSL in core concepts doc (#6564)

[github] KAFKA-8210: Fix link for streams table duality (#6573)

[github] KAFKA-8208: Change paper link directly to ASM (#6572)

--
[...truncated 2.37 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.CastTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3536

2019-04-13 Thread Apache Jenkins Server
See 

--
[...truncated 593 B...]
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1810)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read 81a23220014b719ff39bcb3d422ed489e461fe78
error: missing object referenced by 'refs/tags/2.2.0'
error: Could not read 79db30a8d7e3e85641c3c838f48b286cc7500814
error: Could not read bc745af6a7a640360a092d8bbde5f68b6ad637be
remote: Enumerating objects: 3990, done.
remote: Counting objects: 0% (1/3990) ... 58% (2275/3990) [repeated progress updates collapsed; log truncated here]

Build failed in Jenkins: kafka-1.1-jdk7 #256

2019-04-13 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8213 - Fix typo in Streams Dev Guide (#6574)

[matthias] KAFKA-8212 DOCS (kafka) - Fix Maven artifacts table from cutting off

--
[...truncated 1.51 MB...]
K extends Object declared in interface KGroupedStream
:405: warning: [deprecation] count(SessionWindows,StateStoreSupplier) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows,
                                 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:399: warning: [deprecation] count(SessionWindows) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows) {
                                 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:391: warning: [deprecation] count(SessionWindows,String) in KGroupedStream has been deprecated
public KTable<Windowed<K>, Long> count(final SessionWindows sessionWindows, final String queryableStoreName) {
                                 ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:303: warning: [deprecation] count(Windows,StateStoreSupplier) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows,
                                                    ^
  where W,K are type-variables:
    W extends Window declared in method count(Windows,StateStoreSupplier)
    K extends Object declared in interface KGroupedStream
:297: warning: [deprecation] count(Windows) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows) {
                                                    ^
  where W,K are type-variables:
    W extends Window declared in method count(Windows)
    K extends Object declared in interface KGroupedStream
:289: warning: [deprecation] count(Windows,String) in KGroupedStream has been deprecated
public <W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows,
                                                    ^
  where W,K are type-variables:
    W extends Window declared in method count(Windows,String)
    K extends Object declared in interface KGroupedStream
:272: warning: [deprecation] count(StateStoreSupplier) in KGroupedStream has been deprecated
public KTable<K, Long> count(final org.apache.kafka.streams.processor.StateStoreSupplier storeSupplier) {
                       ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:260: warning: [deprecation] count(String) in KGroupedStream has been deprecated
public KTable<K, Long> count(final String queryableStoreName) {
                       ^
  where K is a type-variable:
    K extends Object declared in interface KGroupedStream
:174: warning: [deprecation] punctuate(long) in Processor has been deprecated
public void punctuate(long timestamp) {
            ^
:110: warning: [deprecation] schedule(long) in ProcessorContext has been deprecated
public void schedule(final long interval) {
            ^
:826: warning: [deprecation] leftJoin(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other,
                               ^
  where 

Build failed in Jenkins: kafka-trunk-jdk8 #3535

2019-04-13 Thread Apache Jenkins Server
See 

--
[identical "git fetch" failure log and progress output as in build #3540 above; omitted]

[jira] [Created] (KAFKA-8230) Add static membership support in librd consumer client

2019-04-13 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8230:
--

 Summary: Add static membership support in librd consumer client 
 Key: KAFKA-8230
 URL: https://issues.apache.org/jira/browse/KAFKA-8230
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen


Once the effort in https://issues.apache.org/jira/browse/KAFKA-7018 is done, 
one of the low-hanging fruits is to add this support to Kafka consumers in 
other languages, such as the C consumer in librdkafka.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3534

2019-04-13 Thread Apache Jenkins Server
See 

--
[identical "git fetch" failure log and progress output as in build #3540 above; omitted]

[jira] [Created] (KAFKA-8229) Connect Sink Task updates nextCommit when commitRequest is true

2019-04-13 Thread Scott Reynolds (JIRA)
Scott Reynolds created KAFKA-8229:
-

 Summary: Connect Sink Task updates nextCommit when commitRequest 
is true
 Key: KAFKA-8229
 URL: https://issues.apache.org/jira/browse/KAFKA-8229
 Project: Kafka
  Issue Type: Bug
Reporter: Scott Reynolds


Today, when a WorkerSinkTask uses context.requestCommit(), the next call to 
iteration will cause the commit to happen. As part of the commit execution it 
will also change the nextCommit milliseconds.

This creates some weird behavior when a SinkTask calls context.requestCommit 
multiple times. In our case, we were calling requestCommit when the number of 
Kafka records we had processed exceeded a threshold. This resulted in the 
nextCommit being several days in the future and caused the task to commit only 
when the record threshold was reached.

We expected the task to commit when the record threshold was reached OR when 
the timer went off.
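The deadline-rearming behavior described above is easy to reproduce outside Connect. Below is a minimal, Kafka-free simulation; the `CommitScheduler` class and its fields are hypothetical names for illustration, not Connect's actual `WorkerSinkTask` code:

```java
// Simplified commit scheduler illustrating the reported behavior:
// a request-driven commit also re-arms the time-based deadline, so
// frequent requestCommit() calls push the timer commit out indefinitely.
class CommitScheduler {
    private final long intervalMs;
    private long nextCommitMs;
    private boolean commitRequested = false;
    int commits = 0;

    CommitScheduler(long intervalMs, long nowMs) {
        this.intervalMs = intervalMs;
        this.nextCommitMs = nowMs + intervalMs;
    }

    void requestCommit() {
        commitRequested = true;
    }

    // One poll-loop iteration, as described in the report.
    void iteration(long nowMs) {
        if (commitRequested || nowMs >= nextCommitMs) {
            commits++;
            commitRequested = false;
            nextCommitMs = nowMs + intervalMs; // also moved for requested commits
        }
    }
}
```

With a 60-second interval and a requestCommit() every 10 seconds, every iteration commits and re-arms the deadline, so the purely time-based commit never fires on its own, matching the report. The expected fix would leave nextCommitMs untouched when the commit was request-driven.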



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8209) Wrong link for KStreams DSL in Core Concepts doc

2019-04-13 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8209.

Resolution: Fixed

> Wrong link for KStreams DSL in Core Concepts doc
> 
>
> Key: KAFKA-8209
> URL: https://issues.apache.org/jira/browse/KAFKA-8209
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 2.3.0
>
>
> In the [core concepts 
> doc|https://kafka.apache.org/21/documentation/streams/core-concepts], there 
> is a link in the "States" section for "Kafka Streams DSL". It points to the 
> wrong link.
> Actual: 
> https://kafka.apache.org/21/documentation/streams/developer-guide/#streams_dsl
> Expected: 
> https://kafka.apache.org/21/documentation/streams/developer-guide/dsl-api.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8210) Missing link for KStreams in Streams DSL docs

2019-04-13 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8210.

Resolution: Fixed

> Missing link for KStreams in Streams DSL docs
> -
>
> Key: KAFKA-8210
> URL: https://issues.apache.org/jira/browse/KAFKA-8210
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Bill Bejeck
>Priority: Minor
>
> In [the Streams DSL 
> docs|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html],
>  there is some text under the KTable section that reads: "We have already 
> seen an example of a changelog stream in the section 
> streams_concepts_duality."
> "streams_concepts_duality" seems to indicate that it should be a link, but it 
> is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8208) Broken link for out-of-order data in KStreams Core Concepts doc

2019-04-13 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8208.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Broken link for out-of-order data in KStreams Core Concepts doc
> ---
>
> Key: KAFKA-8208
> URL: https://issues.apache.org/jira/browse/KAFKA-8208
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 2.3.0
>
>
> In the [core concepts 
> doc|https://kafka.apache.org/21/documentation/streams/core-concepts], there 
> is a link in the "Out-of-Order Handling" section for "out-of-order data". It 
> 404's to https://kafka.apache.org/21/documentation/streams/tbd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8212) KStreams documentation Maven artifact table is cut off

2019-04-13 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8212.

Resolution: Fixed

> KStreams documentation Maven artifact table is cut off
> --
>
> Key: KAFKA-8212
> URL: https://issues.apache.org/jira/browse/KAFKA-8212
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Victoria Bialas
>Priority: Minor
> Attachments: Screen Shot 2019-04-10 at 2.04.09 PM.png
>
>
> In the [Writing a Streams Application 
> doc|https://kafka.apache.org/21/documentation/streams/developer-guide/write-streams.html],
>  the section "LIBRARIES AND MAVEN ARTIFACTS" has a table that lists out the 
> Maven artifacts. The items in the group ID overflow and are cut off by the 
> table column, even on a very large monitor.
> Note that "artifact ID" seems to have its word break property set correctly. 
> See the attached image.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8213) KStreams interactive query documentation typo

2019-04-13 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8213.

Resolution: Fixed

> KStreams interactive query documentation typo
> -
>
> Key: KAFKA-8213
> URL: https://issues.apache.org/jira/browse/KAFKA-8213
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Victoria Bialas
>Priority: Minor
>
> In [the Interactive Queries 
> docs|https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html#querying-remote-state-stores-for-the-entire-app],
>  we have a minor typo:
> Actual: You can use the corresponding local data in other parts of your 
> application code, as long as it doesn’t required calling the Kafka Streams 
> API.
> Expected: You can use the corresponding local data in other parts of your 
> application code, as long as it doesn’t require calling the Kafka Streams API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-446: Add changelog topic configuration to KTable suppress

2019-04-13 Thread Matthias J. Sax
> Are you sure the users are aware that with `withLoggingDisabled()`, they
> might lose data during failover?

I hope so :D

Of course, we can always improve the JavaDocs.


-Matthias


On 4/12/19 2:48 PM, Bill Bejeck wrote:
> Thanks for the KIP Maarten.
> 
> I also agree that keeping the `withLoggingDisabled()` and
> `withLoggingEnabled(Map)` methods is the better option.
> 
> When it comes to educating the users on the downside of disabling logging,
> IMHO I think a comment in the JavaDoc should be sufficient.
> 
> -Bill
> 
> On Fri, Apr 12, 2019 at 3:59 PM Bruno Cadonna  wrote:
> 
>> Matthias,
>>
>> Are you sure the users are aware that with `withLoggingDisabled()`, they
>> might lose data during failover?
>>
>> OK, we maybe do not necessarily need a WARN log. However, I would at least
>> add a comment like in `StoreBuilder`,ie,
>>
>> /**
>> * Disable the changelog for store built by this {@link StoreBuilder}.
>> * This will turn off fault-tolerance for your store.
>> * By default the changelog is enabled.
>> * @return this
>> */
>> StoreBuilder withLoggingDisabled();
>>
>> What do you think?
>>
>> Best,
>> Bruno
>>
>> On Thu, Apr 11, 2019 at 12:04 AM Matthias J. Sax 
>> wrote:
>>
>>> I think that the current proposal to add `withLoggingDisabled()` and
>>> `withLoggingEnabled(Map)` should be the best option.
>>>
>>> IMHO there is no reason to add a WARN log. We also don't have a WARN log
>>> when people disable logging on regular stores. As Bruno mentioned, this
>>> might also lead to data loss, so I don't see why we should treat
>>> suppress() different to other stores.
>>>
>>>
>>> -Matthias
>>>
>>> On 4/10/19 3:36 PM, Bruno Cadonna wrote:
 Hi Maarten and John,

 I would opt for option 1 with an additional log message on INFO or WARN
 level, since the log file is the place where you would look first to
 understand what went wrong. I would also not adjust it when persistent
 stores are available for suppress.

 I would not go for option 2 or 3, because IIUC, with
 `withLoggingDisabled()` even persistent state stores do not guarantee not
 to lose records. Persisting state stores is basically a way to optimize
 recovery in certain cases. The changelog topic is the component that
 guarantees no data loss. So regarding data loss, in my opinion, disabling
 logging on the suppression buffer is not different from disabling logging
 on other state stores. Please correct me if I am wrong.

 Best,
 Bruno

 On Wed, Apr 10, 2019 at 12:12 PM John Roesler 
>> wrote:

> Thanks for the update and comments, Maarten. It would be interesting to
> hear what others think as well.
> -John
>
> On Thu, Apr 4, 2019 at 2:43 PM Maarten Duijn 
>>> wrote:
>
>> Thank you for the explanation regarding the internals, I have edited the
>> KIP accordingly and updated the Javadoc. About the possible data loss when
>> altering changelog config, I think we can improve by doing (one of) the
>> following.
>>
>> 1) Add a warning in the comments that clearly states what might happen
>> when change logging is disabled and adjust it when persistent stores are
>> added.
>>
>> 2) Change `withLoggingDisabled` to `minimizeLogging`. Instead of disabling
>> logging, a call to this method minimizes the topic size by aggressively
>> removing the records emitted downstream by the suppress operator. I believe
>> this can be achieved by setting `delete.retention.ms=0` in the topic
>> config.
>>
>> 3) Remove `withLoggingDisabled` from the proposal.
>>
>> 4) Leave both methods as-proposed; as you indicated, this is in line with
>> the other parts of the Streams API.
>>
>> A user might want to disable logging when downstream is not a Kafka topic
>> but some other service that does not benefit from at-least-once delivery
>> of the suppressed records in case of failover or rebalance.
>> Seeing as it might cause data loss, the methods should not be used lightly,
>> and I think some comments are warranted. Personally, I rely purely on Kafka
>> to prevent data loss even when a store is persisted locally, so when support
>> is added for persistent suppression, I feel the comments may stay.
>>
>> Maarten
>>
>

>>>
>>>
>>
> 





Re: [DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams

2019-04-13 Thread Matthias J. Sax
Thanks for your answers Guozhang!

I slowly understand more and more details. Couple of follow up questions:



10) Consumer Coordinator Algorithm--1a:

> If subscription has changed: revoke all partitions by calling 
> onPartitionsRevoked, send join-group request with empty owned partitions in 
> Subscription.

Could we call `onPartitionsRevoked()` not on all partitions, but only
the assigned ones for topics that got removed from the subscription? And
send corresponding potentially non-empty owned partitions in the
Subscription? In your reply you mentioned avoiding "split brain" -- what
scenario do you have in mind? Releasing partitions seems safe, and we
enter a rebalance afterwards anyway.


20) Consumer Coordinator Algorithm--1b:

> If topic metadata has changed: call onPartitionsLost on those 
> owned-but-no-longer-exist partitions; and if the consumer is the leader, send 
> join-group request.

Why should only the leader send a join-group request? If any client detects
a metadata change, it seems that any client could trigger a new rebalance?


30) Consumer Coordinator Algorithm--1c:

> If received REBALANCE_IN_PROGRESS from heartbeat response or commit response: 
> same as a) above.

For this case, we missed a rebalance. Should we rather call
`onPartitionsLost()` instead of `onPartitionsRevoked()` for this case?


40) Consumer Coordinator Algorithm--1d:

> If received MEMBER_ID_REQUIRED from join-group request: same as a) above.

This can only happen when a new consumer starts and thus no partitions
are assigned. Why do we need to call `onPartitionsRevoked()` before we
send the second join-group request?


50) Consumer Coordinator Algorithm--2c:

> Note the this set otherwise the we would fall into the case 3.b) forevercould 
> be different from owned-partitions.
> Compare the owned-partitions with assigned-partitions and generate three 
> exclusive sub-sets:

Incomplete sentence?
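For reference, the "three exclusive sub-sets" the quoted algorithm asks for can be computed with plain set arithmetic. This is only a sketch; the names (`revoked`, `newlyAdded`, `retained`) are mine, not the KIP's:

```java
import java.util.HashSet;
import java.util.Set;

// Partition the comparison of owned vs. assigned partitions into three
// mutually exclusive sets, as KIP-429's assignment step describes.
class PartitionDiff {
    final Set<String> revoked;      // owned, but no longer assigned
    final Set<String> newlyAdded;   // assigned, but not previously owned
    final Set<String> retained;     // owned and still assigned

    PartitionDiff(Set<String> owned, Set<String> assigned) {
        revoked = new HashSet<>(owned);
        revoked.removeAll(assigned);
        newlyAdded = new HashSet<>(assigned);
        newlyAdded.removeAll(owned);
        retained = new HashSet<>(owned);
        retained.retainAll(assigned);
    }
}
```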


60) Consumer Coordinator Algorithm--3:

> For every consumer: after received the sync-group request, do the following:

Do you mean sync-group _response_?


70) nit: typo, double `since`

> It is safe to just follow the above algorithm since for V0 members, since 
> they've revoked everything 


80) Downgrading and Old-Versioned New Member

> We will rely on the above consumer-side metric so that users would be 
> notified in time.

What exactly does this mean, i.e., how is the user notified? Which metric
are you referring to?


90) About the upgrade path discussion: To use the already existing
mechanism as proposed by Jason, we could sub-class `PartitionAssignor`
as `IncrementalPartitionAssignor extends PartitionAssignor` (or
introduce a marker interface). This would allow the coordinator to
distinguish between both cases and either revoke eagerly or not.



-Matthias




On 4/12/19 6:08 PM, Jason Gustafson wrote:
> Hi Guozhang,
> 
> Responses below:
> 
> 2. The interface's default implementation will just be
>> `onPartitionRevoked`, so for user's instantiation if they do not make any
>> code changes they should be able to recompile the code and continue.
> 
> 
> Ack, makes sense.
> 
> 4. Hmm.. not sure if it will work. The main issue is that the
>> consumer-coordinator behavior (whether to revoke all or none at
>> onRebalancePrepare) is independent of the selected protocol's assignor
>> (eager or cooperative), so even if the assignor is selected to be the
>> old-versioned one, we will still not revoke at the consumer-coordinator
>> layer and hence has the same risk of migrating still-owned partitions,
>> right?
> 
> 
> Yeah, basically we would have to push the eager/cooperative logic into the
> PartitionAssignor itself and make the consumer aware of the rebalance
> protocol it is compatible with. As long as an eager protocol _could_ be
> selected, the consumer would have to be pessimistic and do eager
> revocation. But if all the assignors configured in the consumer support
> cooperative reassignment, then either 1) a cooperative protocol will be
> selected and cooperative revocation can be safely used, or 2) if the rest
> of the group does not support it, then the consumer will simply fail.
> 
> Another point which you raised offline and I will repeat here is that this
> proposal's benefit is mostly limited to sticky assignment logic. Arguably
> the range assignor may have some incidental stickiness, particularly if the
> group is rebalancing for a newly created or deleted topic. For other cases,
> the proposal is mostly additional overhead since it takes an additional
> rebalance and many of the partitions will move. Perhaps it doesn't make as
> much sense to use the cooperative protocol for strategies like range and
> round-robin. That kind of argues in favor of pushing some of the control
> into the assignor itself. Maybe we would not bother creating
> CooperativeRange as I suggested above, but it would make sense to create a
> cooperative version of the sticky assignment strategy. I thought we might
> have to create a 
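The compatibility rule discussed in this exchange (revoke eagerly unless every configured assignor supports cooperative rebalancing) can be sketched as a simple decision function. The types and names below are hypothetical illustrations, not the actual consumer internals:

```java
import java.util.List;

enum RebalanceProtocol { EAGER, COOPERATIVE }

final class ProtocolSelector {
    // The consumer can only revoke cooperatively if *all* of its configured
    // assignors support the cooperative protocol; otherwise an eager assignor
    // might be selected and still-owned partitions could migrate unsafely,
    // so the consumer must be pessimistic and revoke eagerly.
    static RebalanceProtocol select(List<RebalanceProtocol> assignorSupport) {
        for (RebalanceProtocol p : assignorSupport) {
            if (p == RebalanceProtocol.EAGER) {
                return RebalanceProtocol.EAGER;
            }
        }
        return RebalanceProtocol.COOPERATIVE;
    }
}
```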

Re: [DISCUSSION] KIP-418: A method-chaining way to branch KStream

2019-04-13 Thread Ivan Ponomarev

Hi all!

I have updated the KIP-418 according to the new vision.

Matthias, thanks for your comment!


Renaming KStream#branch() -> #split()


I can see your point: this is to make the name similar to String#split 
that also returns an array, right? But is it worth the loss of backwards 
compatibility? We can have overloaded branch() as well without affecting 
the existing code. Maybe the old array-based `branch` method should be 
deprecated, but this is a subject for discussion.


> Renaming KBranchedStream#addBranch() -> BranchingKStream#branch(), 
KBranchedStream#defaultBranch() -> BranchingKStream#default()


Totally agree with the 'addBranch -> branch' rename. 'default' is, however, a 
reserved word, so unfortunately we cannot have a method with such a name :-)


> defaultBranch() does take a `Predicate` as argument, but I think 
that is not required?


Absolutely! I think that was just copy-paste error or something.

Dear colleagues,

please revise the new version of the KIP and Paul's PR 
(https://github.com/apache/kafka/pull/6512)


Any new suggestions/objections?

Regards,

Ivan
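To make the proposed shape concrete, here is a self-contained, Kafka-free sketch of the method-chaining API under discussion. `MiniStream` and `BranchChain` are hypothetical stand-ins for `KStream` and the proposed branched type; the point being illustrated is that each branch handler is invoked as it is added:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical stand-in for KStream<K, V>, carrying plain values.
final class MiniStream<T> {
    final List<T> values;
    MiniStream(List<T> values) { this.values = values; }

    BranchChain<T> split() { return new BranchChain<>(values); }
}

final class BranchChain<T> {
    private final List<T> remaining;
    BranchChain(List<T> values) { this.remaining = new ArrayList<>(values); }

    // Each handler runs immediately on the matching records, mirroring
    // "KStream consumers are invoked as they're added" from the thread.
    BranchChain<T> branch(Predicate<T> predicate, Consumer<MiniStream<T>> handler) {
        List<T> matched = new ArrayList<>();
        remaining.removeIf(v -> {
            if (predicate.test(v)) { matched.add(v); return true; }
            return false;
        });
        handler.accept(new MiniStream<>(matched));
        return this;
    }

    void defaultBranch(Consumer<MiniStream<T>> handler) {
        handler.accept(new MiniStream<>(remaining));
    }
}
```

A caller would then write `stream.split().branch(pred, handler).defaultBranch(handler)`, mirroring the fluent style proposed in the KIP.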


On 11.04.2019 11:47, Matthias J. Sax wrote:

Thanks for driving the discussion of this KIP. It seems that everybody
agrees that the current branch() method using arrays is not optimal.

I had a quick look into the PR and I like the overall proposal. There
are some minor things we need to consider. I would recommend the
following renaming:

KStream#branch() -> #split()
KBranchedStream#addBranch() -> BranchingKStream#branch()
KBranchedStream#defaultBranch() -> BranchingKStream#default()

It's just a suggestion to get slightly shorter method names.

In the current PR, defaultBranch() does take a `Predicate` as argument,
but I think that is not required?

Also, we should consider KIP-307, that was recently accepted and is
currently implemented:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL

Ie, we should add overloads that accepted a `Named` parameter.


For the issue that the created `KStream` object are in different scopes:
could we extend `KBranchedStream` with a `get(int index)` method that
returns the corresponding "branched" result `KStream` object? Maybe, the
second argument of `addBranch()` should not be a `Consumer` but
a `Function` and `get()` could return whatever the
`Function` returns?


Finally, I would also suggest to update the KIP with the current
proposal. That makes it easier to review.


-Matthias



On 3/31/19 12:22 PM, Paul Whalen wrote:

Ivan,

I'm a bit of a novice here as well, but I think it makes sense for you to
revise the KIP and continue the discussion.  Obviously we'll need some
buy-in from committers that have actual binding votes on whether the KIP
could be adopted.  It would be great to hear if they think this is a good
idea overall.  I'm not sure if that happens just by starting a vote, or if
there is generally some indication of interest beforehand.

That being said, I'll continue the discussion a bit: assuming we do move
forward the solution of "stream.branch() returns KBranchedStream", do we
deprecate "stream.branch(...) returns KStream[]"?  I would favor
deprecating, since having two mutually exclusive APIs that accomplish the
same thing is confusing, especially when they're fairly similar anyway.  We
just need to be sure we're not making something impossible/difficult that
is currently possible/easy.

Regarding my PR - I think the general structure would work; it's just a
little sloppy overall in terms of naming and clarity. In particular,
passing in the "predicates" and "children" lists, which get modified in
KBranchedStream but read all the way down in KStreamLazyBranch, is a bit
complicated to follow.

Thanks,
Paul

On Fri, Mar 29, 2019 at 5:37 AM Ivan Ponomarev  wrote:


Hi Paul!

I read your code carefully and now I am fully convinced: your proposal
looks better and should work. We just have to document the crucial fact
that KStream consumers are invoked as they're added. And then it's all
going to be very nice.

What shall we do now? I should re-write the KIP and resume the
discussion here, right?

Why do you say that your PR 'should not be even a starting point if
we go in this direction'? To me it looks like a good starting point, but
as a novice in this project I might be missing some important details.

Regards,

Ivan


On 28.03.2019 17:38, Paul Whalen wrote:

Ivan,

Maybe I’m missing the point, but I believe the stream.branch() solution
supports this. The couponIssuer::set* consumers will be invoked as they’re
added, not during streamsBuilder.build(). So the user still ought to be
able to call couponIssuer.coupons() afterward and depend on the branched
streams having been set.

The issue I mean to point out is that it is hard to access the branched
streams in the same scope as the original stream (that is, not inside the
couponIssuer), which is a problem with both proposed solutions. It can be
worked around, though.


[jira] [Created] (KAFKA-8228) Exactly once semantics break during server restart for kafka-streams application

2019-04-13 Thread Boquan Tang (JIRA)
Boquan Tang created KAFKA-8228:
--

 Summary: Exactly once semantics break during server restart for 
kafka-streams application
 Key: KAFKA-8228
 URL: https://issues.apache.org/jira/browse/KAFKA-8228
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0
Reporter: Boquan Tang


We are using 2.2.0 for kafka-streams client and 2.0.1 for server.

We have a simple kafka-streams application that has the following topology:
{code:java}
Source: KSTREAM-SOURCE-04 (topics: [deduped-adclick]) 
--> KSTREAM-TRANSFORM-05 
Processor: KSTREAM-TRANSFORM-05 (stores: [uid-offset-store]) 
--> KSTREAM-TRANSFORM-06 
<-- KSTREAM-SOURCE-04 
Source: KSTREAM-SOURCE-00 (topics: [advertiser-budget]) 
--> KTABLE-SOURCE-01 
Source: KSTREAM-SOURCE-02 (topics: [advertisement-budget]) 
--> KTABLE-SOURCE-03 
Processor: KSTREAM-TRANSFORM-06 (stores: [advertiser-budget-store, 
advertisement-budget-store]) 
--> KSTREAM-SINK-07 
<-- KSTREAM-TRANSFORM-05 
Sink: KSTREAM-SINK-07 (topic: budget-adclick) 
<-- KSTREAM-TRANSFORM-06 
Processor: KTABLE-SOURCE-01 (stores: [advertiser-budget-store]) 
--> none 
<-- KSTREAM-SOURCE-00 
Processor: KTABLE-SOURCE-03 (stores: [advertisement-budget-store]) 
--> none 
<-- KSTREAM-SOURCE-02{code}
The *Processor: KSTREAM-TRANSFORM-05 (stores: [uid-offset-store])* was 
added specifically to investigate this broken-EOS issue; its transform() looks 
like this (the specific K and V class names are removed):
{code:java}
public void init(final ProcessorContext context) {
    uidStore = (WindowStore<String, Long>) context.getStateStore(uidStoreName);
    this.context = context;
}

public KeyValue<K, V> transform(final K key, final V value) {
    final long offset = context.offset();
    final String uid = value.getUid();
    final long beginningOfHour = Instant.ofEpochMilli(clickTimestamp)
        .atZone(ZoneId.systemDefault()).withMinute(0).withSecond(0).toEpochSecond() * 1000;
    final Long maybeLastOffset = uidStore.fetch(uid, beginningOfHour);
    final boolean dupe = null != maybeLastOffset && offset == maybeLastOffset;
    uidStore.put(uid, offset, beginningOfHour);
    if (dupe) {
        LOGGER.warn("Find duplication in partition {}, uid is {}, ad id is {}, current offset is {}, last offset is {}",
            context.partition(),
            uid,
            value.getAdInfo().getAdId(),
            offset,
            maybeLastOffset);
        statsEmitter.count("duplication");
    }
    return dupe ? null : new KeyValue<>(key, value);
}
{code}
Although not 100% reproducible, we found that after we restart one or more 
servers on the cluster side, duplicates would be found:
{code:java}
2019-04-12T07:12:58Z WARN [org.apache.kafka.clients.NetworkClient] 
[kafka-producer-network-thread | 
adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_9-producer]
 [Producer 
clientId=adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_9-producer,
 transactionalId=adclick-budget-decorator-streams-0_9] Connection to node 2 
(*:9092) could not be established. Broker may not be available.
2019-04-12T07:12:58Z WARN [org.apache.kafka.clients.NetworkClient] 
[kafka-producer-network-thread | 
adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_9-producer]
 [Producer 
clientId=adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_9-producer,
 transactionalId=adclick-budget-decorator-streams-0_9] Connection to node 2 
(*:9092) could not be established. Broker may not be available.
2019-04-12T07:14:02Z WARN [org.apache.kafka.clients.NetworkClient] 
[kafka-producer-network-thread | 
adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_12-producer]
 [Producer 
clientId=adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1-0_12-producer,
 transactionalId=adclick-budget-decorator-streams-0_12] Connection to node 2 
(*:9092) could not be established. Broker may not be available.
2019-04-12T07:27:39Z WARN 
[org.apache.kafka.streams.processor.internals.StreamThread] 
[adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1]
 stream-thread 
[adclick-budget-decorator-streams-e70f7538-4125-4e8f-aeee-0c8717d663bb-StreamThread-1]
 Detected task 0_9 that got migrated to another thread. This implies that this 
thread missed a rebalance and dropped out of the consumer group. Will try to 
rejoin the consumer group. Below is the detailed description of the task: 
>TaskId: 0_9 >> ProcessorTopology: > KSTREAM-SOURCE-00: > topics: 
[advertiser-budget] > children: [KTABLE-SOURCE-01] > 
KTABLE-SOURCE-01: > states: [advertiser-budget-store] > 
KSTREAM-SOURCE-04: > topics: [deduped-adclick] > children: 
[KSTREAM-TRANSFORM-05] > KSTREAM-TRANSFORM-05: >