Build failed in Jenkins: kafka-trunk-jdk8 #3055

2018-10-03 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects:   0% (1/2974) ... 55% [...truncated git fetch progress output...]

Build failed in Jenkins: kafka-trunk-jdk8 #3054

2018-10-03 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2974, done.
remote: Counting objects:   0% (1/2974) ... 53% [...truncated git fetch progress output...]

Build failed in Jenkins: kafka-trunk-jdk10 #563

2018-10-03 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7223: Add name config to Suppressed (#5731)

[wangguoz] HOTFIX: Fix broken links (#5676)

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset 

Build failed in Jenkins: kafka-trunk-jdk8 #3053

2018-10-03 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Docs on state store instantiation (#5698)

--
[...truncated 508.07 KB...]
:997:
 method poll in class KafkaConsumer is deprecated: see corresponding Javadoc 
for more information.
  consumer.poll(1)
   ^
:1030:
 method poll in class KafkaConsumer is deprecated: see corresponding Javadoc 
for more information.
  received += consumer.poll(50).count
   ^
:1310:
 method poll in class KafkaConsumer is deprecated: see corresponding Javadoc 
for more information.
assertEquals(0, consumer.poll(100).count)
 ^
:1464:
 method poll in class KafkaConsumer is deprecated: see corresponding Javadoc 
for more information.
  val records = consumer.poll(50)
 ^
:174:
 method poll in class KafkaConsumer is deprecated: see corresponding Javadoc 
for more information.
records ++= consumer.poll(50).asScala
 ^
:141:
 object ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  ReassignPartitionsCommand.executeAssignment(zkClient, None, 
ZkUtils.getReassignmentJson(newAssignment), Throttle(config.throttle))
  ^
:22:
 trait AdminUtilities in package admin is deprecated (since 1.1.0): This class 
is deprecated and will be replaced by kafka.zk.AdminZkClient.
class TestAdminUtils extends AdminUtilities {
 ^
:23:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  override def changeBrokerConfig(zkUtils: ZkUtils, brokerIds: Seq[Int], 
configs: Properties): Unit = {}
   ^
:24:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  override def fetchEntityConfig(zkUtils: ZkUtils, entityType: String, 
entityName: String): Properties = {new Properties}
  ^
:25:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  override def changeClientIdConfig(zkUtils: ZkUtils, clientId: String, 
configs: Properties): Unit = {}
 ^
:26:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use org.apache.kafka.clients.admin.AdminClient instead.
  override def changeUserOrUserClientIdConfig(zkUtils: ZkUtils, 
sanitizedEntityName: String, configs: Properties): Unit = {}
   ^
:27:
 class ZkUtils in package utils is deprecated (since 2.0.0): This is an 
internal class that is no longer used by Kafka and will be removed in a future 
release. Please use 
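The warnings above come from two deprecations: the `KafkaConsumer.poll(long)`
overload and the old `ZkUtils` class. A minimal sketch of the poll migration,
assuming the `Duration` overload introduced by KIP-266 in Kafka 2.0; the
bootstrap server, group id, and topic name are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollMigrationSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));
                // Deprecated: consumer.poll(100)
                // The Duration overload also bounds time spent fetching metadata.
                ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }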

Build failed in Jenkins: kafka-trunk-jdk10 #562

2018-10-03 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read 57d7f11e38e41892191f6fe87faae8f23aa0362e
remote: Enumerating objects: 3922, done.
remote: Counting objects:   0% (1/3922) ... 55% [...truncated git fetch progress output...]

Re: [VOTE] KIP-349 Priorities for Source Topics

2018-10-03 Thread nick


> On Oct 3, 2018, at 12:41 PM, Colin McCabe  wrote:
> 
> Will there be a separate code path for people who don't want to use this 
> feature?


Yes, I tried to capture this in the KIP by indicating that this API change is 
100% backwards compatible.  Current consumer semantics and performance would be 
unaffected.  

Best,
--
  Nick
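As context for the backward-compatibility point above: a consumer can already
approximate topic priority today with the existing `pause`/`resume` API. A
rough sketch of that approach — the topic names are placeholders, and this is
not the mechanism KIP-349 itself proposes:

    import java.time.Duration;
    import java.util.Set;
    import java.util.stream.Collectors;

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PrioritySketch {
        // Favors "high-priority-topic" by pausing "low-priority-topic"
        // whenever high-priority records are still arriving.
        static ConsumerRecords<String, String> pollWithPriority(
                KafkaConsumer<String, String> consumer) {
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofMillis(100));

            Set<TopicPartition> lowPriority = consumer.assignment().stream()
                .filter(tp -> tp.topic().equals("low-priority-topic"))
                .collect(Collectors.toSet());

            boolean highPriorityBusy = records.partitions().stream()
                .anyMatch(tp -> tp.topic().equals("high-priority-topic"));

            if (highPriorityBusy) {
                consumer.pause(lowPriority);  // starve the low-priority topic
            } else {
                consumer.resume(lowPriority); // nothing urgent; let it catch up
            }
            return records;
        }
    }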





Re: Incremental Cooperative Rebalancing

2018-10-03 Thread Ryanne Dolan
Konstantine, this is exciting work! A couple of questions:

I understand that, overall, rebalances would require less work and less
time acquiring and releasing resources. But OTOH I believe individual
consumers might see successive revokes/assigns while a rebalance settles.
Is that right? And if so, would the rebalance actually take longer in terms
of wall time vs stop-the-world?

Ryanne

On Tue, Oct 2, 2018 at 6:22 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Hey everyone,
>
> I'd like to bring to your attention a general design document that was just
> published in Apache Kafka's wiki space:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing%3A+Support+and+Policies
>
> It deals with the subject of Rebalancing of groups in Kafka and proposes
> basic infrastructure to support improvements on the current rebalancing
> protocol as well as a set of policies that can be implemented to optimize
> rebalancing under a number of real-world scenarios.
>
> Currently, this wiki page is meant to serve as a reference to the
> proposition of Incremental Cooperative Rebalancing overall. Specific KIPs
> will follow in order to describe in more detail - using the standard KIP
> format - the basic infrastructure and the first policies that will be
> proposed for implementation in components such as Connect, the Kafka
> Consumer and Streams.
>
> Stay tuned!
> Konstantine
>
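To make the revoke/assign question concrete: under the current eager
("stop-the-world") protocol, `onPartitionsRevoked` reports a consumer's entire
assignment on every rebalance, even for partitions it immediately gets back. A
minimal listener that logs both callbacks so this behavior can be observed;
the broker address, group id, and topic name are placeholders:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class RebalanceObserver {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "rebalance-observer");      // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"),
                    new ConsumerRebalanceListener() {
                        @Override
                        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                            // Under the eager protocol this is the FULL current
                            // assignment, even partitions about to be reassigned
                            // back to this same consumer.
                            System.out.println("Revoked:  " + partitions);
                        }

                        @Override
                        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                            System.out.println("Assigned: " + partitions);
                        }
                    });
                while (true) {
                    consumer.poll(Duration.ofSeconds(1));
                }
            }
        }
    }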


Build failed in Jenkins: kafka-trunk-jdk10 #561

2018-10-03 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7223: Make suppression buffer durable (#5724)

[colin] KAFKA-6123: Give client MetricsReporter auto-generated client.id (#5383)

[github] MINOR: Docs on state store instantiation (#5698)

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3052

2018-10-03 Thread Apache Jenkins Server
See 


Changes:

[colin] MINOR: Increase timeout for starting JMX tool (#5735)

[wangguoz] KAFKA-7223: Make suppression buffer durable (#5724)

[colin] KAFKA-6123: Give client MetricsReporter auto-generated client.id (#5383)

--
[...truncated 2.72 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-03 Thread Yishun Guan
Hi All,

I want to start a voting on this KIP:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308

Here is the discussion thread:
https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E

Thanks,
Yishun


[jira] [Resolved] (KAFKA-7393) Consider making suppression node names independent of topologically numbered names.

2018-10-03 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-7393.
-
Resolution: Won't Fix

We realized that Suppressed needs to support a name parameter in keeping with 
KIP-372, which provides a way to solve this problem. If you supply `withName` 
to Suppressed, it will be independent of topologically numbered names, as 
desired.
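A minimal sketch of that workaround, assuming the `withName` method added by
KAFKA-7223 (#5731) and the `Duration`-based windowing API; the topic names and
the suppression name are placeholders:

    import java.time.Duration;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class SuppressedNameSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                .count()
                // Naming the suppression node pins its name, so it no longer
                // depends on its position in the topology.
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
                                    .withName("my-suppression"))
                .toStream((windowedKey, count) -> windowedKey.key())
                .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
            System.out.println(builder.build().describe());
        }
    }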

> Consider making suppression node names independent of topologically numbered 
> names.
> ---
>
> Key: KAFKA-7393
> URL: https://issues.apache.org/jira/browse/KAFKA-7393
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>
> See [https://github.com/apache/kafka/pull/5567#discussion_r214188984] (please 
> keep the discussion on this Jira ticket).
> There is an opportunity to use a slightly different naming strategy for 
> suppression nodes so that they can be moved around the topology without 
> upsetting other nodes' names.
> The main downside is that it might be confusing to have a separate naming 
> strategy for just this one kind of node.
> Then again, this could create important opportunities for topology-compatible 
> optimizations.
>  
> At a higher design/implementation cost, but a lower ongoing complexity, we 
> could consider this naming approach for all node types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7477) Improve Streams close timeout semantics

2018-10-03 Thread John Roesler (JIRA)
John Roesler created KAFKA-7477:
---

 Summary: Improve Streams close timeout semantics
 Key: KAFKA-7477
 URL: https://issues.apache.org/jira/browse/KAFKA-7477
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


See [https://github.com/apache/kafka/pull/5682#discussion_r221473451]

The current timeout semantics are a little "magical":
 * 0 means to block forever
 * negative numbers cause the close to complete immediately without checking 
the state

I think this would make more sense:
 * reject negative numbers
 * make 0 just signal and return immediately (after checking the state once)
 * if I want to wait "forever", I can use {{ofYears(1)}} or 
{{ofMillis(Long.MAX_VALUE)}} or some other intuitively "long enough to be 
forever" value instead of a magic value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #560

2018-10-03 Thread Apache Jenkins Server
See 


Changes:

[colin] MINOR: Increase timeout for starting JMX tool (#5735)

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED


Re: Incremental Cooperative Rebalancing

2018-10-03 Thread Matthias J. Sax
Thanks for sharing! Excellent write-up!

-Matthias

On 10/2/18 4:22 PM, Konstantine Karantasis wrote:
> Hey everyone,
> 
> I'd like to bring to your attention a general design document that was just
> published in Apache Kafka's wiki space:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing%3A+Support+and+Policies
> 
> It deals with the subject of Rebalancing of groups in Kafka and proposes
> basic infrastructure to support improvements on the current rebalancing
> protocol as well as a set of policies that can be implemented to optimize
> rebalancing under a number of real-world scenarios.
> 
> Currently, this wiki page is meant to serve as a reference to the
> proposition of Incremental Cooperative Rebalancing overall. Specific KIPs
> will follow in order to describe in more detail - using the standard KIP
> format - the basic infrastructure and the first policies that will be
> proposed for implementation in components such as Connect, the Kafka
> Consumer and Streams.
> 
> Stay tuned!
> Konstantine
> 





Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-03 Thread Matthias J. Sax
Thanks for clarifying. I thought that if we inherit `close() throws
Exception` we would need to declare the same exception -- that would have
been an issue. Thus, my backward compatibility concerns are resolved.

About try-with-resources: I think allowing the use of try-with-resources is
the core motivation of this KIP to begin with. Also note that `Closeable
extends AutoCloseable`. Thus, both interfaces work with try-with-resources.

Overall, it seems that `AutoCloseable` might be the better interface to
use, though, because it's more generic.

-Matthias


On 10/3/18 9:48 AM, Colin McCabe wrote:
> On Sun, Sep 30, 2018, at 13:19, Matthias J. Sax wrote:
>> Closeable is part of `java.io` while AutoCloseable is part of
>> `java.lang`. Thus, the second one is more generic. Also, JavaDoc points
>> out that `Closeable#close()` must be idempotent while
>> `AutoCloseable#close()` can have side effects.
> 
> That's an interesting note.   Looks like the exact JavaDoc text is:
> 
>  > Note that unlike the close method of Closeable, this close method is not 
>  > required to be idempotent. In other words, calling this close method more 
>  > than once may have some visible side effect, unlike Closeable.close which 
>  > is required to have no effect if called more than once. However, 
>  > implementers of this interface are strongly encouraged to make their close 
>  > methods idempotent.
> 
> So you can make it non-idempotent, but it's still recommended to make it 
> idempotent.
> 
>>
>> Thus, I am not sure atm which one suits better.
>>
>> However, it's a good hint that `AutoCloseable#close()` declares `throws
>> Exception` and thus it seems to be a backward-incompatible change.
>> Hence, I am not sure if we can actually move forward easily with this KIP.
> 
> I was worried about that too, but actually you can implement the 
> AutoCloseable interface without declaring "throws Exception".  In general, 
> you can implement an interface while throwing a subset of the possible 
> checked exceptions.
> 
> There is one big benefit of AutoCloseable that I haven't seen mentioned here 
> yet: the ability to use constructs like try-with-resources transparently.  
> So you can do things like
> 
>> try (MyClass m = new MyClass()) {
>>   m.doSomething(...);
>> }
> 
> best,
> Colin
> 
>>
>> Nit: `RecordCollectorImpl` is an internal class that implements
>> `RecordCollector` -- should `RecordCollector extends AutoCloseable`?
>>
>>
>> -Matthias
>>
>>
>> On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
 (Although I am not quite sure
 when one is more desirable than the other)
>>>
>>> Most of Kafka's classes implementing Closeable/AutoCloseable don't throw 
>>> checked exceptions in the close() method. Perhaps we should have a 
>>> "KafkaCloseable" interface which has a close() method without throwing any 
>>> checked exception...
>>>
>>> On 2018/09/27 19:11:20, Yishun Guan  wrote: 
 Hi All,

 Chia-Ping, I agree, similar to VerifiableConsumer, VerifiableProducer
 should be implementing Closeable as well (Although I am not quite sure
 when one is more desirable than the other), also I just looked through
 your list - these are some great additions, I will add them to the
 list.

 Thanks,
 Yishun
 On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee  wrote:
>
> Hi Yishun,
>
> Thank you for your great KIP. In fact, I have also encountered the cases
> where AutoCloseable is so desired several times! Let me inspect more
> candidate classes as well.
>
> +1. I also refined your KIP a little bit.
>
> Best,
> Dongjin
>
> On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai  
> wrote:
>
>> hi Yishun
>>
>> Thanks for the nice KIP!
>>
>> Q1)
>> Why does VerifiableProducer extend Closeable rather than AutoCloseable?
>>
>> Q2)
>> I grepped the project and noticed there are other classes with close
>> methods that do not implement AutoCloseable.
>> For example:
>> 1) WorkerConnector
>> 2) MemoryRecordsBuilder
>> 3) MetricsReporter
>> 4) ExpiringCredentialRefreshingLogin
>> 5) KafkaChannel
>> 6) ConsumerInterceptor
>> 7) SelectorMetrics
>> 8) HeartbeatThread
>>
>> Cheers,
>> Chia-Ping
>>
>>
>> On 2018/09/26 23:44:31, Yishun Guan  wrote:
>>> Hi All,
>>>
>>> Here is a trivial KIP:
>>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
>>>
>>> Suggestions are welcome.
>>>
>>> Thanks,
>>> Yishun
>>>
>>
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> slideshare:
> www.slideshare.net/dongjinleekr
> *

>>
>> Email had 1 attachment:
>> + signature.asc
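To make the compatibility argument above concrete, a minimal sketch (the class
name is invented for illustration): an implementor may narrow the inherited
`close() throws Exception` to a no-throw signature, so try-with-resources
works without forcing callers to catch a new checked exception:

    public class ExampleResource implements AutoCloseable {
        private boolean closed = false;

        public void doSomething() {
            if (closed) {
                throw new IllegalStateException("already closed");
            }
            System.out.println("working...");
        }

        // Narrowing the inherited `close() throws Exception` to throw nothing
        // is a legal override; callers see no new checked exception.
        @Override
        public void close() {
            // Idempotent close is still the recommended style.
            closed = true;
        }

        public static void main(String[] args) {
            // try-with-resources works because ExampleResource is AutoCloseable,
            // and no catch block is required since close() declares no exception.
            try (ExampleResource r = new ExampleResource()) {
                r.doSomething();
            }
        }
    }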

Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-03 Thread Yishun Guan
Hi Colin,

Yes, I absolutely agree with you. We can implement an interface while
throwing a subset of the possible checked exceptions, and the empty set is
also a subset -- so we don't have to throw any exception at all.

And yes, you are right, I should emphasize the benefit of AutoCloseable in
the KIP.

Thanks!
Yishun

On Wed, Oct 3, 2018, 9:48 AM Colin McCabe  wrote:

> On Sun, Sep 30, 2018, at 13:19, Matthias J. Sax wrote:
> > Closeable is part of `java.io` while AutoCloseable is part of
> > `java.lang`. Thus, the second one is more generic. Also, JavaDoc points
> > out that `Closeable#close()` must be idempotent while
> > `AutoCloseable#close()` can have side effects.
>
> That's an interesting note.   Looks like the exact JavaDoc text is:
>
>  > Note that unlike the close method of Closeable, this close method is
> not
>  > required to be idempotent. In other words, calling this close method
> more
>  > than once may have some visible side effect, unlike Closeable.close
> which
>  > is required to have no effect if called more than once. However,
>  > implementers of this interface are strongly encouraged to make their
> close
>  > methods idempotent.
>
> So you can make it non-idempotent, but it's still recommended to make it
> idempotent.
>
> >
> > Thus, I am not sure atm which one suits better.
> >
> > However, it's a good hint that `AutoCloseable#close()` declares `throws
> > Exception` and thus it seems to be a backward-incompatible change.
> > Hence, I am not sure if we can actually move forward easily with this
> KIP.
>
> I was worried about that too, but actually you can implement the
> AutoCloseable interface without declaring "throws Exception".  In general,
> you can implement an interface while throwing a subset of the possible
> checked exceptions.
>
> There is one big benefit of AutoCloseable that I haven't seen mentioned
> here yet: the ability to use constructs like try-with-resources
> transparently.  So you can do things like
>
> > try (MyClass m = new MyClass()) {
> >   m.doSomething(...);
> > }
>
> best,
> Colin
>
> >
> > Nit: `RecordCollectorImpl` is an internal class that implements
> > `RecordCollector` -- should `RecordCollector extends AutoCloseable`?
> >
> >
> > -Matthias
> >
> >
> > On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
> > >> (Although I am not quite sure
> > >> when one is more desirable than the other)
> > >
> > > Most of Kafka's classes implementing Closeable/AutoCloseable don't
> throw checked exceptions in the close() method. Perhaps we should have a
> "KafkaCloseable" interface which has a close() method without throwing any
> checked exception...
> > >
> > > On 2018/09/27 19:11:20, Yishun Guan  wrote:
> > >> Hi All,
> > >>
> > >> Chia-Ping, I agree, similar to VerifiableConsumer, VerifiableProducer
> > >> should be implementing Closeable as well (Although I am not quite sure
> > >> when one is more desirable than the other), also I just looked through
> > >> your list - these are some great additions, I will add them to the
> > >> list.
> > >>
> > >> Thanks,
> > >> Yishun
> > >> On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee 
> wrote:
> > >>>
> > >>> Hi Yishun,
> > >>>
> > >>> Thank you for your great KIP. In fact, I have also encountered the
> cases
> > >>> where AutoCloseable is so desired several times! Let me inspect more
> > >>> candidate classes as well.
> > >>>
> > >>> +1. I also refined your KIP a little bit.
> > >>>
> > >>> Best,
> > >>> Dongjin
> > >>>
> > >>> On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai 
> wrote:
> > >>>
> >  hi Yishun
> > 
> >  Thanks for the nice KIP!
> > 
> >  Q1)
> >  Why does VerifiableProducer extend Closeable rather than AutoCloseable?
> > 
> >  Q2)
> >  I grepped the project and noticed there are other classes with close
> >  methods that do not implement AutoCloseable.
> >  For example:
> >  1) WorkerConnector
> >  2) MemoryRecordsBuilder
> >  3) MetricsReporter
> >  4) ExpiringCredentialRefreshingLogin
> >  5) KafkaChannel
> >  6) ConsumerInterceptor
> >  7) SelectorMetrics
> >  8) HeartbeatThread
> > 
> >  Cheers,
> >  Chia-Ping
> > 
> > 
> >  On 2018/09/26 23:44:31, Yishun Guan  wrote:
> > > Hi All,
> > >
> > > Here is a trivial KIP:
> > >
> > 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> > >
> > > Suggestions are welcome.
> > >
> > > Thanks,
> > > Yishun
> > >
> > 
> > >>>
> > >>>
> > >>> --
> > >>> *Dongjin Lee*
> > >>>
> > >>> *A hitchhiker in the mathematical world.*
> > >>>
> > >>> *github:  github.com/dongjinleekr
> > >>> linkedin:
> kr.linkedin.com/in/dongjinleekr
> > >>> slideshare:
> > >>> www.slideshare.net/dongjinleekr
> > >>> *
> > >>
> >
> > Email had 1 attachment:
> > + signature.asc
> >   1k (application/pgp-signature)
>


[DISCUSS] KIP-379: Multiple Consumer Group Management

2018-10-03 Thread Alex D
Hello, friends!

Welcome to the discussion thread for the Multiple Consumer Group Management
feature of the kafka-consumer-groups utility.

KIP is available here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-379%3A+Multiple+Consumer+Group+Management

Pull Request: https://github.com/apache/kafka/pull/5726

JIRA ticket: https://issues.apache.org/jira/browse/KAFKA-7471

What do you think?

Thanks,
Alexander Dunayevsky


Re: Edit permissions for https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem

2018-10-03 Thread Matthias J. Sax
Done

On 10/2/18 6:36 PM, Bill Mclane wrote:
> User ID should be wmclane
> 
> Thanks Matthias...
> 
> Bill—
> 
> William P. McLane
> Messaging Evangelist
> TIBCO Software
> 
>> On Oct 2, 2018, at 8:31 PM, Matthias J. Sax  wrote:
>>
>> Could not find your user -- please provide your wiki user ID so we can
>> grant permissions to edit wiki pages.
>>
>> If you don't have an account yet, you can just create one.
>>
>>
>> -Matthias
>>
>>> On 10/2/18 1:33 PM, Bill Mclane wrote:
>>> Hi, can you either enable edit permissions for wmcl...@tibco.com 
>>>  for the Kafka Project cwiki or modify the 
>>> Ecosystem Page to include the below under the Distributions & Packaging 
>>> section:
>>>
>>> TIBCO Platform  - https://www.tibco.com/products/apache-kafka 
>>>  Downloads - 
>>> https://www.tibco.com/products/tibco-messaging/downloads 
>>> 
>>>
>>> Thank you,
>>>
>>> Bill—
>>>
>>>
>>





Re: Re run build of a pull request

2018-10-03 Thread Matthias J. Sax
We don't know why it fails yet. We are investigating.

If anyone knows, please share so we can fix it :)


-Matthias

On 10/3/18 9:07 AM, Kamal Chandraprakash wrote:
> No need to push a commit to re-run the tests. Type "retest this please" as
> a comment.
> It will run all the test cases again.
> 
> On Wed, Oct 3, 2018 at 1:12 PM Suman B N  wrote:
> 
>> Team,
>>
>> Every once in a while, a PR build fails due to the test failure below, with
>> a NullPointerException.
>>
>>- org.apache.kafka.streams.scala.kstream.KStreamTest.join 2 KStreams
>>should join correctly records
>>
>> But locally it runs successfully. Sometimes, when re-run, it just passes.
>> Now I have two questions:
>>
>>- Why does this fail?
>>- To re-run I am forced to make a commit, only then the build is
>>triggered. How can I manually re-run the build only for failed tests
>> rather
>>than running all tests?
>>
>> --
>> *Suman*
>> *OlaCabs*
>>
> 





Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-03 Thread Colin McCabe
On Sun, Sep 30, 2018, at 13:19, Matthias J. Sax wrote:
> Closeable is part of `java.io` while AutoCloseable is part of
> `java.lang`. Thus, the second one is more generic. Also, JavaDoc points
> out that `Closeable#close()` must be idempotent while
> `AutoCloseable#close()` can have side effects.

That's an interesting note.   Looks like the exact JavaDoc text is:

 > Note that unlike the close method of Closeable, this close method is not 
 > required to be idempotent. In other words, calling this close method more 
 > than once may have some visible side effect, unlike Closeable.close which 
 > is required to have no effect if called more than once. However, 
 > implementers of this interface are strongly encouraged to make their close 
 > methods idempotent.

So you can make it non-idempotent, but it's still recommended to make it 
idempotent.

> 
> Thus, I am not sure atm which one suits better.
> 
> However, it's a good hint that `AutoCloseable#close()` declares `throws
> Exception` and thus it seems to be a backward-incompatible change.
> Hence, I am not sure if we can actually move forward easily with this KIP.

I was worried about that too, but actually you can implement the AutoCloseable 
interface without declaring "throws Exception".  In general, you can implement 
an interface while throwing a subset of the possible checked exceptions.

There is one big benefit of AutoCloseable that I haven't seen mentioned here 
yet: the ability to use constructs like try-with-resources transparently.  So 
you can do things like

> try (MyClass m = new MyClass()) {
>   m.doSomething(...);
> }

best,
Colin

> 
> Nit: `RecordCollectorImpl` is an internal class that implements
> `RecordCollector` -- should `RecordCollector extends AutoCloseable`?
> 
> 
> -Matthias
> 
> 
> On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
> >> (Although I am not quite sure
> >> when one is more desirable than the other)
> > 
> > Most of Kafka's classes implementing Closeable/AutoCloseable don't throw 
> > checked exceptions in the close() method. Perhaps we should have a 
> > "KafkaCloseable" interface which has a close() method without throwing any 
> > checked exception...
> > 
> > On 2018/09/27 19:11:20, Yishun Guan  wrote: 
> >> Hi All,
> >>
> >> Chia-Ping, I agree, similar to VerifiableConsumer, VerifiableProducer
> >> should be implementing Closeable as well (Although I am not quite sure
> >> when one is more desirable than the other), also I just looked through
> >> your list - these are some great additions, I will add them to the
> >> list.
> >>
> >> Thanks,
> >> Yishun
> >> On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee  wrote:
> >>>
> >>> Hi Yishun,
> >>>
> >>> Thank you for your great KIP. In fact, I have also encountered cases
> >>> where AutoCloseable is sorely needed, several times! Let me inspect more
> >>> candidate classes as well.
> >>>
> >>> +1. I also refined your KIP a little bit.
> >>>
> >>> Best,
> >>> Dongjin
> >>>
> >>> On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai  
> >>> wrote:
> >>>
>  hi Yishun
> 
>  Thanks for nice KIP!
> 
>  Q1)
>  Why does VerifiableProducer extend Closeable rather than AutoCloseable?
> 
>  Q2)
>  I grepped the project and noticed there are other classes with close methods
>  that do not implement AutoCloseable.
>  For example:
>  1) WorkerConnector
>  2) MemoryRecordsBuilder
>  3) MetricsReporter
>  4) ExpiringCredentialRefreshingLogin
>  5) KafkaChannel
>  6) ConsumerInterceptor
>  7) SelectorMetrics
>  8) HeartbeatThread
> 
>  Cheers,
>  Chia-Ping
> 
> 
>  On 2018/09/26 23:44:31, Yishun Guan  wrote:
> > Hi All,
> >
> > Here is a trivial KIP:
> >
>  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> >
> > Suggestions are welcome.
> >
> > Thanks,
> > Yishun
> >
> 
> >>>
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>>
> >>> *github:  github.com/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> slideshare:
> >>> www.slideshare.net/dongjinleekr
> >>> *
> >>
> 


Re: Re run build of a pull request

2018-10-03 Thread Kamal Chandraprakash
No need to push a commit to re-run the tests. Type "retest this please" as a
comment.
It will run all the test cases again.

On Wed, Oct 3, 2018 at 1:12 PM Suman B N  wrote:

> Team,
>
> Every once in a while, a PR build fails due to the test failure below, with a
> NullPointerException.
>
>- org.apache.kafka.streams.scala.kstream.KStreamTest.join 2 KStreams
>should join correctly records
>
> But locally it runs successfully. Sometimes, when re-run, it just passes.
> Now I have two questions:
>
>- Why does this fail?
>- To re-run, I am forced to make a commit; only then is the build
>triggered. How can I manually re-run the build only for the failed tests
>rather than running all tests?
>
> --
> *Suman*
> *OlaCabs*
>


Jenkins build is back to normal : kafka-trunk-jdk10 #559

2018-10-03 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-10-03 Thread M. Manna
Thanks for reminding me about the "binding" vote, Bill. I remembered some
people voting non-binding, so I jumped the gun a bit early.
We will wait for 2 more as you stated.

Regards,

On Wed, 3 Oct 2018 at 16:07, Bill Bejeck  wrote:

> +1 from me.
>
> As for closing the vote, it needs to be open for a minimum of 72 hours and
> requires three binding +1 votes.  Additionally, there need to be more binding
> +1 votes than -1 votes.  The lazy majority vote process is described here:
> https://cwiki.apache.org/confluence/display/KAFKA/Bylaws#Bylaws-Approvals.
>
> I haven't been tracking the vote results, but from what I can see in the
> voting thread, this KIP still requires two more +1 binding votes.
>
> HTH,
> Bill
>
> On Wed, Oct 3, 2018 at 8:58 AM M. Manna  wrote:
>
> > Since this has been open for a while, I am assuming that it's good to go?
> >
> > If so, I will update the KIP page and start coding this. I would prefer
> > re-using existing tests written for DefaultPartitioner, so that we don't
> > need to write new tests.
> >
> > Regards,
> >
> > On Sun, 30 Sep 2018 at 19:34, Matthias J. Sax 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > @Abhimanyu: can you please update the table in
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > and add a link to the KIP. Thanks.
> > >
> > > -Matthias
> > >
> > > On 9/4/18 9:56 PM, Abhimanyu Nagrath wrote:
> > > > +1
> > > >
> > > > On Wed, Sep 5, 2018 at 2:39 AM Magesh Nandakumar <
> mage...@confluent.io
> > >
> > > > wrote:
> > > >
> > > >> +1 ( non-binding)
> > > >>
> > > >> On Tue, Sep 4, 2018 at 3:39 AM M. Manna  wrote:
> > > >>
> > > >>> Hello,
> > > >>>
> > > >>> I have made necessary changes as per the original discussion
> thread,
> > > and
> > > >>> would like to put it for votes.
> > > >>>
> > > >>> Thank you very much for your suggestion and guidance so far.
> > > >>>
> > > >>> Regards,
> > > >>>
> > > >>
> > > >
> > >
> > >
> >
>


Re: [VOTE] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-10-03 Thread Bill Bejeck
+1 from me.

As for closing the vote, it needs to be open for a minimum of 72 hours and
requires three binding +1 votes.  Additionally, there need to be more binding
+1 votes than -1 votes.  The lazy majority vote process is described here:
https://cwiki.apache.org/confluence/display/KAFKA/Bylaws#Bylaws-Approvals.

I haven't been tracking the vote results, but from what I can see in the
voting thread, this KIP still requires two more +1 binding votes.

HTH,
Bill

On Wed, Oct 3, 2018 at 8:58 AM M. Manna  wrote:

> Since this has been open for a while, I am assuming that it's good to go?
>
> If so, I will update the KIP page and start coding this. I would prefer
> re-using existing tests written for DefaultPartitioner, so that we don't
> need to write new tests.
>
> Regards,
>
> On Sun, 30 Sep 2018 at 19:34, Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > @Abhimanyu: can you please update the table in
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > and add a link to the KIP. Thanks.
> >
> > -Matthias
> >
> > On 9/4/18 9:56 PM, Abhimanyu Nagrath wrote:
> > > +1
> > >
> > > On Wed, Sep 5, 2018 at 2:39 AM Magesh Nandakumar  >
> > > wrote:
> > >
> > >> +1 ( non-binding)
> > >>
> > >> On Tue, Sep 4, 2018 at 3:39 AM M. Manna  wrote:
> > >>
> > >>> Hello,
> > >>>
> > >>> I have made necessary changes as per the original discussion thread,
> > and
> > >>> would like to put it for votes.
> > >>>
> > >>> Thank you very much for your suggestion and guidance so far.
> > >>>
> > >>> Regards,
> > >>>
> > >>
> > >
> >
> >
>


Re: [VOTE] KIP-372: Naming Repartition Topics for Joins and Grouping

2018-10-03 Thread Bill Bejeck
All,

One additional minor change: "Grouped.named" has been changed to
"Grouped.as".  I've updated the KIP with this change.

Thanks,
Bill
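
For anyone following the KIP, a minimal usage sketch of the renamed method
(topic and name are placeholders, not taken from the KIP):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class GroupedAsExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> stream = builder.stream("input-topic");
            // selectKey forces a repartition; Grouped.as supplies the name
            // used for the repartition topic.
            KTable<String, Long> counts = stream
                    .selectKey((key, value) -> value)
                    .groupByKey(Grouped.as("clicks-by-value"))
                    .count();
            System.out.println(builder.build().describe());
        }
    }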


[jira] [Resolved] (KAFKA-6632) Very slow hashCode methods in Kafka Connect types

2018-10-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Bryński resolved KAFKA-6632.
---
Resolution: Fixed

> Very slow hashCode methods in Kafka Connect types
> -
>
> Key: KAFKA-6632
> URL: https://issues.apache.org/jira/browse/KAFKA-6632
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Maciej Bryński
>Priority: Major
>
> The hashCode method of ConnectSchema (and Field) is used a lot in SMTs and 
> fromConnect.
> Example:
> [https://github.com/apache/kafka/blob/e5d6c9a79a4ca9b82502b8e7f503d86ddaddb7fb/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/InsertField.java#L164]
> Unfortunately it's using Objects.hash, which is very slow.
> I rewrote this with my own implementation and gained a 6x speedup.
> Microbenchmarks give:
>  * Original ConnectSchema hashCode: 2995ms
>  * My implementation: 517ms
> (1 iterations of calculating hashCode on a new 
> ConnectSchema(Schema.Type.STRING))
> {code:java}
> @Override
> public int hashCode() {
>     int result = 5;
>     result = 31 * result + type.hashCode();
>     result = 31 * result + (optional ? 1 : 0);
>     result = 31 * result + (defaultValue == null ? 0 : defaultValue.hashCode());
>     if (fields != null) {
>         for (Field f : fields) {
>             result = 31 * result + f.hashCode();
>         }
>     }
>     result = 31 * result + (keySchema == null ? 0 : keySchema.hashCode());
>     result = 31 * result + (valueSchema == null ? 0 : valueSchema.hashCode());
>     result = 31 * result + (name == null ? 0 : name.hashCode());
>     result = 31 * result + (version == null ? 0 : version);
>     result = 31 * result + (doc == null ? 0 : doc.hashCode());
>     if (parameters != null) {
>         for (Map.Entry<String, String> e : parameters.entrySet()) {
>             result = 31 * result + e.getKey().hashCode() + e.getValue().hashCode();
>         }
>     }
>     return result;
> }
> {code}
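
A self-contained microbenchmark sketch of the effect described above (naive
timing rather than the reporter's attached harness; a serious measurement
should use JMH, and numbers will vary by JVM):

{code:java}
import java.util.Objects;

// Objects.hash allocates a varargs Object[] and boxes primitives on every
// call; the manual version accumulates 31 * result + field with no allocation.
public class HashBench {
    static int viaObjectsHash(int type, boolean optional, String name) {
        return Objects.hash(type, optional, name);
    }

    static int viaManual(int type, boolean optional, String name) {
        int result = 5;
        result = 31 * result + type;
        result = 31 * result + (optional ? 1 : 0);
        result = 31 * result + (name == null ? 0 : name.hashCode());
        return result;
    }

    public static void main(String[] args) {
        final int n = 10_000_000;
        int sink = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) sink += viaObjectsHash(i, (i & 1) == 0, "STRING");
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) sink += viaManual(i, (i & 1) == 0, "STRING");
        long t2 = System.nanoTime();
        System.out.printf("Objects.hash: %d ms, manual: %d ms (sink=%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sink);
    }
}
{code}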



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6626) Performance bottleneck in Kafka Connect sendRecords

2018-10-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Bryński resolved KAFKA-6626.
---
Resolution: Unresolved

> Performance bottleneck in Kafka Connect sendRecords
> ---
>
> Key: KAFKA-6626
> URL: https://issues.apache.org/jira/browse/KAFKA-6626
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Maciej Bryński
>Priority: Major
> Attachments: MapPerf.java, image-2018-03-08-08-35-19-247.png
>
>
> Kafka Connect is using IdentityHashMap for storing records.
> [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L239]
> Unfortunately this solution is very slow (2-4 times slower than a normal 
> HashMap / HashSet).
> Benchmark results (code in attachment):
> {code:java}
> Identity 4220
> Set 2115
> Map 1941
> Fast Set 2121
> {code}
> Things are even worse when using default GC configuration 
>  (-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35  -Djava.awt.headless=true)
> {code:java}
> Identity 7885
> Set 2364
> Map 1548
> Fast Set 1520
> {code}
> Java version
> {code:java}
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
> {code}
> This problem is greatly slowing Kafka Connect.
> !image-2018-03-08-08-35-19-247.png!
>  
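
For context, a small sketch of the semantic difference involved:
IdentityHashMap keys on reference identity (==), which WorkerSourceTask
appears to rely on to track individual record objects, while HashMap keys on
equals(). The class below is illustrative, not Kafka code:

{code:java}
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityVsHash {
    public static void main(String[] args) {
        String a = new String("record");
        String b = new String("record"); // equals(a), but a distinct object

        Map<String, Integer> byEquals = new HashMap<>();
        byEquals.put(a, 1);
        byEquals.put(b, 2);              // replaces: lookup uses equals()

        Map<String, Integer> byIdentity = new IdentityHashMap<>();
        byIdentity.put(a, 1);
        byIdentity.put(b, 2);            // two entries: lookup uses ==

        System.out.println(byEquals.size());   // prints 1
        System.out.println(byIdentity.size()); // prints 2
    }
}
{code}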



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-10-03 Thread M. Manna
Since this has been open for a while, I am assuming that it's good to go?

If so, I will update the KIP page and start coding this. I would prefer
re-using existing tests written for DefaultPartitioner, so that we don't
need to write new tests.
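
For reference, a minimal sketch of the kind of partitioner the KIP describes,
illustrative only and not the KIP's final implementation: ignore the record
key entirely and cycle through the topic's partitions.

    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.utils.Utils;

    public class AlwaysRoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            // Round-robin across all partitions, regardless of the key.
            int numPartitions = cluster.partitionsForTopic(topic).size();
            return Utils.toPositive(counter.getAndIncrement()) % numPartitions;
        }

        @Override
        public void close() {}

        @Override
        public void configure(Map<String, ?> configs) {}
    }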

Regards,

On Sun, 30 Sep 2018 at 19:34, Matthias J. Sax  wrote:

> +1 (binding)
>
> @Abhimanyu: can you please update the table in
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> and add a link to the KIP. Thanks.
>
> -Matthias
>
> On 9/4/18 9:56 PM, Abhimanyu Nagrath wrote:
> > +1
> >
> > On Wed, Sep 5, 2018 at 2:39 AM Magesh Nandakumar 
> > wrote:
> >
> >> +1 ( non-binding)
> >>
> >> On Tue, Sep 4, 2018 at 3:39 AM M. Manna  wrote:
> >>
> >>> Hello,
> >>>
> >>> I have made necessary changes as per the original discussion thread,
> and
> >>> would like to put it for votes.
> >>>
> >>> Thank you very much for your suggestion and guidance so far.
> >>>
> >>> Regards,
> >>>
> >>
> >
>
>


[jira] [Resolved] (KAFKA-3601) fail fast when newer client connecting to older server

2018-10-03 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3601.
--
Resolution: Fixed

Closing in favour of newer releases.  Please reopen if you think otherwise.

> fail fast when newer client connecting to older server
> --
>
> Key: KAFKA-3601
> URL: https://issues.apache.org/jira/browse/KAFKA-3601
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Chris Pennello
>Assignee: Ashish Singh
>Priority: Major
>
> I know that connecting with a newer client to an older server is forbidden, 
> but I would like to suggest that the behavior be that it predictably fail 
> noisily, explicitly, and with specific detail indicating why the failure 
> occurred.
> As-is, trying to connect to a v0.8.1.1 cluster with a v0.9.1 client yields a 
> hang when trying to get a coordinator metadata update.
> (This may be related to KAFKA-1894.  I certainly did note 
> {{poll(Long.MAX_VALUE)}} and wept many tears.  At least we could include a 
> TODO-commented constant in the code with a non-forever timeout, right?)
> {noformat}
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>   - locked <1c8cadab> (a sun.nio.ch.Util$2)
>   - locked <2324ec49> (a java.util.Collections$UnmodifiableSet)
>   - locked <3f3f5e0b> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>   at org.apache.kafka.common.network.Selector.select(Selector.java:425)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:254)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>   ... my code that calls poll...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3484) Transient failure in SslConsumerTest.testListTopics

2018-10-03 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3484.
--
Resolution: Not A Problem

Closing, as testListTopics was moved to PlaintextConsumerTest.

> Transient failure in SslConsumerTest.testListTopics
> ---
>
> Key: KAFKA-3484
> URL: https://issues.apache.org/jira/browse/KAFKA-3484
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Priority: Major
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: expected:<5> but was:<6>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at kafka.api.BaseConsumerTest.testListTopics(BaseConsumerTest.scala:174)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Resolved] (KAFKA-7429) Enable dynamic key/truststore update with same filename/password

2018-10-03 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7429.
--
Resolution: Fixed

> Enable dynamic key/truststore update with same filename/password
> 
>
> Key: KAFKA-7429
> URL: https://issues.apache.org/jira/browse/KAFKA-7429
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> At the moment, SSL keystores and truststores on brokers can be dynamically 
> updated using AdminClient by providing a new keystore or truststore. But we 
> require either the filename or password to be modified to trigger the update. 
> In some scenarios, we may want to perform the update using the same file (and 
> password). So it would be good to provide a way to trigger a reload of existing 
> keystores and truststores using the same AdminClient update mechanism.
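
For illustration, the AdminClient update mechanism referred to above looks
roughly like this (broker id, keystore path, and bootstrap address are
placeholders; before this fix, re-sending an unchanged location did not
trigger a reload):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ReloadKeystore {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Target a single broker's dynamic config.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config keystore = new Config(Collections.singleton(new ConfigEntry(
                    "ssl.keystore.location", "/etc/kafka/ssl/broker.keystore.jks")));
            // Re-sending the same location (and password) triggers a reload.
            admin.alterConfigs(Collections.singletonMap(broker, keystore)).all().get();
        }
    }
}
{code}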



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re run build of a pull request

2018-10-03 Thread Suman B N
Team,

Every once in a while, a PR build fails due to the test failure below, with a
NullPointerException.

   - org.apache.kafka.streams.scala.kstream.KStreamTest.join 2 KStreams
   should join correctly records

But locally it runs successfully. Sometimes, when re-run, it just passes.
Now I have two questions:

   - Why does this fail?
   - To re-run, I am forced to make a commit; only then is the build
   triggered. How can I manually re-run the build only for the failed tests
   rather than running all tests?

-- 
*Suman*
*OlaCabs*


[jira] [Resolved] (KAFKA-7096) Consumer should drop the data for unassigned topic partitions

2018-10-03 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7096.
-
Resolution: Fixed

> Consumer should drop the data for unassigned topic partitions
> -
>
> Key: KAFKA-7096
> URL: https://issues.apache.org/jira/browse/KAFKA-7096
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>Priority: Major
> Fix For: 2.2.0
>
>
> Currently, if a client has assigned topics T1, T2, T3 and calls poll(), the 
> poll might fetch data for partitions of all 3 topics. Now if the 
> client unassigns some topics (for example T3) and calls poll(), we still hold 
> the data (for T3) in the completedFetches queue until we actually reach the 
> buffered data for the unassigned topics (T3 in our example) on subsequent 
> poll() calls, at which point we drop that data. This process of holding the 
> data is unnecessary.
> When a client creates a topic, it takes time for the broker to fetch ACLs for 
> the topic. During this time, the client will issue a FetchRequest for the 
> topic and will get a response for the partitions of this topic. The response 
> consists of a TopicAuthorizationException for each of the partitions. The 
> response for each partition is wrapped in a completedFetch and added to the 
> completedFetches queue. Now when the client calls the next poll() it sees the 
> TopicAuthorizationException from the first buffered completedFetch. At this 
> point the client chooses to sleep for 1.5 min as a backoff (as per the 
> design), hoping that the broker fetches the ACL from the ACL store in the 
> meantime. Actually the broker has already fetched the ACL by this time. When 
> the client calls poll() after the sleep, it again sees the 
> TopicAuthorizationException from the second completedFetch and sleeps 
> again. So it takes (1.5 * 60 * partitions) seconds before the client can see 
> any data. With this patch, when the client sees the first 
> TopicAuthorizationException, it can call assign(emptySet), which gets rid 
> of the buffered completedFetches (those with TopicAuthorizationException), and 
> it can then call assign(topicPartitions) again before calling poll(). With this 
> patch we found that the client was able to get the records as soon as the broker 
> fetched the ACLs from the ACL store.
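
A sketch of the workaround described above; process() and the partition list
are hypothetical, the point being that assign(emptySet) drops the buffered
completedFetches so the next poll() starts clean:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TopicAuthorizationException;

public class AuthRetryPoller {
    private final KafkaConsumer<String, String> consumer;
    private final List<TopicPartition> partitions;

    AuthRetryPoller(KafkaConsumer<String, String> consumer, List<TopicPartition> partitions) {
        this.consumer = consumer;
        this.partitions = partitions;
    }

    void pollLoop() {
        consumer.assign(partitions);
        while (true) {
            try {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    process(record);
                }
            } catch (TopicAuthorizationException e) {
                consumer.assign(Collections.emptySet()); // drops buffered completedFetches
                consumer.assign(partitions);             // re-assign and retry
            }
        }
    }

    private void process(ConsumerRecord<String, String> record) {
        // hypothetical application logic
    }
}
{code}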



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6045) All access to log should fail if log is closed

2018-10-03 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6045.
-
Resolution: Won't Fix

> All access to log should fail if log is closed
> --
>
> Key: KAFKA-6045
> URL: https://issues.apache.org/jira/browse/KAFKA-6045
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Priority: Major
> Fix For: 2.1.0
>
>
> After log.close() or log.closeHandlers() is called for a given log, all uses 
> of the Log's API should fail with a proper exception. For example, 
> log.appendAsLeader() should throw KafkaStorageException. APIs such as 
> Log.activeProducersWithLastSequence() should also fail but not necessarily 
> with KafkaStorageException, since the KafkaStorageException indicates failure 
> to access disk.
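
For illustration, the fail-fast behaviour the ticket asks for could look like
this sketch (hypothetical names, not Kafka's actual Log class):

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Every public operation checks a closed flag before touching state.
class GuardedLog {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    private void ensureOpen() {
        if (closed.get()) {
            // Kafka would throw KafkaStorageException on disk-access paths.
            throw new IllegalStateException("Log is closed");
        }
    }

    void appendAsLeader(byte[] records) {
        ensureOpen();
        // ... append to the active segment ...
    }

    void close() {
        closed.set(true);
    }
}
{code}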



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5950) AdminClient should retry based on returned error codes

2018-10-03 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-5950.
-
Resolution: Fixed

Per the comment on the PR, it appears that the issue has been addressed in 
KAFKA-6299 with PR [https://github.com/apache/kafka/pull/4295]. Please feel 
free to re-open if this is still an issue.

> AdminClient should retry based on returned error codes
> --
>
> Key: KAFKA-5950
> URL: https://issues.apache.org/jira/browse/KAFKA-5950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Andrey Dyachkov
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.2.0
>
>
> The AdminClient only retries if the request fails with a retriable error. If 
> a response is returned, then a retry is never attempted. This is inconsistent 
> with other clients that check the error codes in the response and retry for 
> each retriable error code.
> We should consider if it makes sense to adopt this behaviour in the 
> AdminClient so that users don't have to do it themselves.
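
A sketch of what users currently do themselves: retry client-side when the
error in the response is retriable (topic settings, attempt count, and backoff
are illustrative):

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.RetriableException;

public class CreateTopicWithRetry {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("example", 3, (short) 2);
            for (int attempt = 1; ; attempt++) {
                try {
                    admin.createTopics(Collections.singleton(topic)).all().get();
                    break; // success
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof RetriableException) || attempt >= 5) {
                        throw e; // non-retriable error code, or out of attempts
                    }
                    Thread.sleep(100L << attempt); // simple exponential backoff
                }
            }
        }
    }
}
{code}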



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5857) Excessive heap usage on controller node during reassignment

2018-10-03 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-5857.
-
Resolution: Cannot Reproduce

In general we need a heap dump to investigate this kind of issue. Since we don't 
have a heap dump from when the issue happened, and per Ismael's comment there is 
a significant improvement in the controller's heap usage in 1.1.0, it seems 
reasonable to just close this JIRA. We can re-open it if the same issue occurs 
with Kafka 1.1.0 or later and we have a heap dump.

> Excessive heap usage on controller node during reassignment
> ---
>
> Key: KAFKA-5857
> URL: https://issues.apache.org/jira/browse/KAFKA-5857
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.11.0.0
> Environment: CentOs 7, Java 1.8
>Reporter: Raoufeh Hashemian
>Priority: Major
>  Labels: reliability
> Fix For: 2.1.0
>
> Attachments: CPU.png, disk_write_x.png, memory.png, 
> reassignment_plan.txt
>
>
> I was trying to expand our Kafka cluster from 6 broker nodes to 12 broker 
> nodes. 
> Before expansion, we had a single topic with 960 partitions and a replication 
> factor of 3. So each node had 480 partitions. The size of data on each node 
> was 3TB. 
> To do the expansion, I submitted a partition reassignment plan (see attached 
> file for the current/new assignments). The plan was optimized to minimize 
> data movement and be rack aware. 
> When I submitted the plan, it took approximately 3 hours for moving data from 
> old to new nodes to complete. After that, it started deleting source 
> partitions (I say this based on the number of file descriptors) and 
> rebalancing leaders, which was not successful. Meanwhile, the heap usage 
> in the controller node started to climb steeply (along with long 
> GC times); it took 5 hours for the controller to run out of memory, and 
> another controller showed the same behaviour for another 4 hours. At 
> this point ZooKeeper ran out of disk and the service stopped.
> To recover from this condition:
> 1) Removed zk logs to free up disk and restarted all 3 zk nodes
> 2) Deleted the /kafka/admin/reassign_partitions node from zk
> 3) Did unclean restarts of the kafka service on the OOM controller nodes, which 
> took 3 hours to complete. After this stage there were still 676 
> under-replicated partitions.
> 4) Did a clean restart on all 12 broker nodes.
> After step 4, the number of under-replicated partitions went to 0.
> So I was wondering: is this memory footprint from the controller expected for 
> 1k partitions? Did we do something wrong, or is it a bug?
> Attached are some resource usage graphs during this 30-hour event and the 
> reassignment plan. I'll try to add log files as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)