[jira] [Commented] (KAFKA-9113) Clean up task management

2019-12-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16996132#comment-16996132
 ] 

ASF GitHub Bot commented on KAFKA-9113:
---

guozhangwang commented on pull request #7833: KAFKA-9113: Extract clients from 
tasks to record collectors
URL: https://github.com/apache/kafka/pull/7833
 
 
   WIP.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Clean up task management
> 
>
> Key: KAFKA-9113
> URL: https://issues.apache.org/jira/browse/KAFKA-9113
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Major
>
> Along with KIP-429 we did a lot of refactoring of the task management classes, 
> including the TaskManager and AssignedTasks (and children). While hopefully 
> easier to reason about, there is still significant opportunity for further 
> cleanup, including safer state tracking. Some potential improvements:
> 1) Verify that no tasks are ever in more than one state at once. One 
> possibility is to just check that the suspended, created, restoring, and 
> running maps are all disjoint, but this raises the question of when and where 
> to do those checks, and how often. Another idea might be to put all tasks 
> into a single map and track their state on a per-task basis. Whatever it 
> is should be aware that some methods are on the critical code path and 
> should not be burdened with excessive safety checks (e.g. 
> AssignedStreamTasks#process). Alternatively, it seems to make sense to 
> make each state its own type. We can then do some cleanup of the AbstractTask 
> and StreamTask classes, which currently contain a number of methods specific 
> to only one type/state of task. For example:
>  * only active running tasks ever need to be suspendable, yet every task goes 
> through suspend and then closeSuspended during close.
>  * as the name suggests, closeSuspended should technically only ever apply to 
> suspended tasks.
>  * the code paths needed to perform certain actions such as closing or 
> committing a task vary widely between the different states. A restoring task 
> need only close its state manager, but skipping the task.close call and 
> calling only closeStateManager has led to confusion and time wasted trying 
> to remember (or ask someone) why that is sufficient.
> 2) Cleanup of closing and/or shutdown logic; there are some potential 
> improvements to be made here as well. For example, AssignedTasks currently 
> implements a closeZombieTask method despite the fact that standby tasks are 
> never zombies.
> 3) The StoreChangelogReader also interacts with (only) the 
> AssignedStreamsTasks class, through the TaskManager. It can be difficult to 
> reason about these interactions and the state of the changelog reader.
> 4) All 4 classes and their state have very strict consistency requirements 
> that currently are almost impossible to verify, which has already resulted in 
> several bugs that we were lucky to catch in time. We should tighten up how 
> these classes manage their own state, and how the overall state is managed 
> between them, so that it is easy to make changes without introducing new bugs 
> because one class updated its own state without knowing it needed to tell 
> another class to also update its state.
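The single-map idea from point 1 could be sketched in isolation. This is a hypothetical illustration only (none of these class or method names exist in Kafka Streams): keeping each task in exactly one entry of one map makes the disjointness invariant hold by construction, with transitions validated in a single place rather than on the hot path.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one map from task id to state, instead of four
// disjoint per-state maps. A task can only ever be in one state at a time.
class TaskStateRegistry {
    enum State { CREATED, RESTORING, RUNNING, SUSPENDED, CLOSED }

    private final Map<String, State> states = new HashMap<>();

    void register(String taskId) {
        states.put(taskId, State.CREATED);
    }

    // All transition checking lives here, off the per-record hot path,
    // so methods like process() need no extra safety checks.
    void transition(String taskId, State to) {
        final State from = states.get(taskId);
        if (from == null) {
            throw new IllegalStateException("Unknown task " + taskId);
        }
        final boolean legal =
            (from == State.CREATED   && to == State.RESTORING) ||
            (from == State.RESTORING && to == State.RUNNING)   ||
            (from == State.RUNNING   && to == State.SUSPENDED) ||
            (from == State.SUSPENDED && to == State.CLOSED);
        if (!legal) {
            throw new IllegalStateException(
                "Illegal transition for " + taskId + ": " + from + " -> " + to);
        }
        states.put(taskId, to);
    }

    State stateOf(String taskId) {
        return states.get(taskId);
    }
}
```

With a single-owner map like this, an illegal jump (e.g. RUNNING straight to CLOSED without suspending) fails loudly at the transition site instead of silently leaving a task in two maps.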



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9176) Flaky test failure: OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore

2019-12-13 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995993#comment-16995993
 ] 

Matthias J. Sax commented on KAFKA-9176:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/27147/testReport/junit/org.apache.kafka.streams.integration/OptimizedKTableIntegrationTest/shouldApplyUpdatesToStandbyStore/]

> Flaky test failure:  
> OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore
> 
>
> Key: KAFKA-9176
> URL: https://issues.apache.org/jira/browse/KAFKA-9176
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Priority: Major
>  Labels: flaky-test
>
> h4. 
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.4-jdk8/detail/kafka-2.4-jdk8/65/tests]
> h4. Error
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
> h4. Stacktrace
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>  at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:51)
>  at 
> org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:59)
>  at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1129)
>  at 
> org.apache.kafka.streams.integration.OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore(OptimizedKTableIntegrationTest.java:157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at 
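The failure above is a startup race: the test queries the state store while the stream thread is still in PARTITIONS_ASSIGNED. A common remedy for this class of flakiness is to poll until the instance reports RUNNING before calling the store. A minimal standalone sketch of that retry loop, with a plain Supplier standing in for the real state lookup (hypothetical helper, not a Kafka API; in the real test the supplier would wrap kafkaStreams.state().name()):

```java
import java.util.function.Supplier;

// Hypothetical helper: poll a state supplier until it reports "RUNNING"
// instead of querying a state store immediately after startup.
class WaitForRunning {
    static boolean awaitRunning(Supplier<String> state, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if ("RUNNING".equals(state.get())) {
                return true;
            }
            try {
                Thread.sleep(10);  // brief back-off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;  // never reached RUNNING within the attempt budget
    }
}
```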

[jira] [Commented] (KAFKA-8250) Flaky Test DelegationTokenEndToEndAuthorizationTest#testProduceConsumeViaAssign

2019-12-13 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995991#comment-16995991
 ] 

Matthias J. Sax commented on KAFKA-8250:


Different test method: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/27147/testReport/junit/kafka.api/DelegationTokenEndToEndAuthorizationTest/testNoConsumeWithDescribeAclViaAssign/]
{quote}java.lang.AssertionError: Should have been zero expired connections killed: 1(total=0.0)
 at org.junit.Assert.fail(Assert.java:89)
 at org.junit.Assert.assertTrue(Assert.java:42)
 at kafka.api.EndToEndAuthorizationTest$$anonfun$confirmReauthenticationMetrics$1.apply(EndToEndAuthorizationTest.scala:225)
 at kafka.api.EndToEndAuthorizationTest$$anonfun$confirmReauthenticationMetrics$1.apply(EndToEndAuthorizationTest.scala:223)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at kafka.api.EndToEndAuthorizationTest.confirmReauthenticationMetrics(EndToEndAuthorizationTest.scala:223)
 at kafka.api.EndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign(EndToEndAuthorizationTest.scala:455){quote}

> Flaky Test 
> DelegationTokenEndToEndAuthorizationTest#testProduceConsumeViaAssign
> ---
>
> Key: KAFKA-8250
> URL: https://issues.apache.org/jira/browse/KAFKA-8250
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.5.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/442/tests]
> {quote}java.lang.AssertionError: Consumed more records than expected 
> expected:<1> but was:<2>
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.failNotEquals(Assert.java:835)
> at org.junit.Assert.assertEquals(Assert.java:647)
> at kafka.utils.TestUtils$.consumeRecords(TestUtils.scala:1288)
> at 
> kafka.api.EndToEndAuthorizationTest.consumeRecords(EndToEndAuthorizationTest.scala:460)
> at 
> kafka.api.EndToEndAuthorizationTest.testProduceConsumeViaAssign(EndToEndAuthorizationTest.scala:209){quote}





[jira] [Assigned] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-9298:
--

Assignee: Bill Bejeck

> Reuse of a mapped stream causes an Invalid Topology
> ---
>
> Key: KAFKA-9298
> URL: https://issues.apache.org/jira/browse/KAFKA-9298
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: join, streams
>
> Can be reproduced with the following test in KStreamKStreamJoinTest.java:
> @Test
> public void optimizerIsEager() {
>     final StreamsBuilder builder = new StreamsBuilder();
>     final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
>     newStream.join(stream2,
>         (value1, value2) -> value1 + value2,
>         JoinWindows.of(ofMillis(100)),
>         StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
>     newStream.join(stream3,
>         (value1, value2) -> value1 + value2,
>         JoinWindows.of(ofMillis(100)),
>         StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
>     System.err.println(builder.build().describe().toString());
> }
> Results in:
> Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
> registered by another source.
> org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
> KSTREAM-MAP-03-repartition has already been registered by another 
> source.
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
>   at 
> org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
>   at 
> org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
>   at 
> org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> 
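The TopologyException above is the builder's duplicate-source guard firing: each of the two joins tries to write a source node for the same repartition topic of the mapped stream. A tiny standalone sketch of that guard (invented class name; the real check lives in InternalTopologyBuilder#validateTopicNotAlreadyRegistered):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical miniature of the builder's topic registration check: each
// topic may back at most one source node, so registering the mapped stream's
// repartition topic once per join trips the check on the second join.
class MiniTopologyBuilder {
    private final Set<String> sourceTopics = new HashSet<>();

    void addSource(String topic) {
        if (!sourceTopics.add(topic)) {
            throw new IllegalStateException("Invalid topology: Topic " + topic
                + " has already been registered by another source.");
        }
    }
}
```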

[jira] [Assigned] (KAFKA-9299) Over eager optimization

2019-12-13 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-9299:
--

Assignee: Bill Bejeck

> Over eager optimization
> ---
>
> Key: KAFKA-9299
> URL: https://issues.apache.org/jira/browse/KAFKA-9299
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: Stream, cogroup, join, optimizer
>
> There are a few cases where the optimizer will attempt an optimization that 
> can cause a copartitioning failure. Known cases are related to join and 
> cogroup, but it could also affect merge or other operations.
> Take for example three input topics A, B, and C with 2, 3, and 4 partitions 
> respectively:
> B' = B.map();
> B'.join(A)
> B'.join(C)
>
> The optimizer will push the repartition upstream, which will cause 
> copartitioning to fail.
> Can be seen with the following test:
> @Test
> public void shouldInsertRepartitionsTopicForCogroupsUsedTwice() {
>     final StreamsBuilder builder = new StreamsBuilder();
>     final Properties properties = new Properties();
>     properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
>     final KStream<String, String> stream1 = builder.stream("one", stringConsumed);
>     final KGroupedStream<String, String> groupedOne = stream1.map((k, v) -> new KeyValue<>(v, k)).groupByKey(Grouped.as("foo"));
>     final CogroupedKStream<String, String> one = groupedOne.cogroup(STRING_AGGREGATOR);
>     one.aggregate(STRING_INITIALIZER);
>     one.aggregate(STRING_INITIALIZER);
>     final String topologyDescription = builder.build(properties).describe().toString();
>     System.err.println(topologyDescription);
> }
> Topologies:
>Sub-topology: 0
> Source: KSTREAM-SOURCE-00 (topics: [one])
>   --> KSTREAM-MAP-01
> Processor: KSTREAM-MAP-01 (stores: [])
>   --> foo-repartition-filter
>   <-- KSTREAM-SOURCE-00
> Processor: foo-repartition-filter (stores: [])
>   --> foo-repartition-sink
>   <-- KSTREAM-MAP-01
> Sink: foo-repartition-sink (topic: foo-repartition)
>   <-- foo-repartition-filter
>   Sub-topology: 1
> Source: foo-repartition-source (topics: [foo-repartition])
>   --> COGROUPKSTREAM-AGGREGATE-06, 
> COGROUPKSTREAM-AGGREGATE-12
> Processor: COGROUPKSTREAM-AGGREGATE-06 (stores: 
> [COGROUPKSTREAM-AGGREGATE-STATE-STORE-02])
>   --> COGROUPKSTREAM-MERGE-07
>   <-- foo-repartition-source
> Processor: COGROUPKSTREAM-AGGREGATE-12 (stores: 
> [COGROUPKSTREAM-AGGREGATE-STATE-STORE-08])
>   --> COGROUPKSTREAM-MERGE-13
>   <-- foo-repartition-source
> Processor: COGROUPKSTREAM-MERGE-07 (stores: [])
>   --> none
>   <-- COGROUPKSTREAM-AGGREGATE-06
> Processor: COGROUPKSTREAM-MERGE-13 (stores: [])
>   --> none
>   <-- COGROUPKSTREAM-AGGREGATE-12





[jira] [Commented] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995970#comment-16995970
 ] 

Walker Carlson commented on KAFKA-9298:
---

 We solved a similar bug for cogroup in 
https://github.com/apache/kafka/pull/7792 (CogroupedStreamAggregateBuilder.java).

> Reuse of a mapped stream causes an Invalid Topology
> ---
>
> Key: KAFKA-9298
> URL: https://issues.apache.org/jira/browse/KAFKA-9298
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Walker Carlson
>Priority: Minor
>  Labels: join, streams
>
> Can be reproduced with the following test in KStreamKStreamJoinTest.java:
> @Test
> public void optimizerIsEager() {
>     final StreamsBuilder builder = new StreamsBuilder();
>     final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
>     final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
>     newStream.join(stream2,
>         (value1, value2) -> value1 + value2,
>         JoinWindows.of(ofMillis(100)),
>         StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
>     newStream.join(stream3,
>         (value1, value2) -> value1 + value2,
>         JoinWindows.of(ofMillis(100)),
>         StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
>     System.err.println(builder.build().describe().toString());
> }
> Results in:
> Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
> registered by another source.
> org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
> KSTREAM-MAP-03-repartition has already been registered by another 
> source.
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
>   at 
> org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
>   at 
> org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
>   at 
> org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> 

[jira] [Created] (KAFKA-9299) Over eager optimization

2019-12-13 Thread Walker Carlson (Jira)
Walker Carlson created KAFKA-9299:
-

 Summary: Over eager optimization
 Key: KAFKA-9299
 URL: https://issues.apache.org/jira/browse/KAFKA-9299
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Walker Carlson


There are a few cases where the optimizer will attempt an optimization that can 
cause a copartitioning failure. Known cases are related to join and cogroup, but 
it could also affect merge or other operations.

Take for example three input topics A, B, and C with 2, 3, and 4 partitions 
respectively:

B' = B.map();

B'.join(A)

B'.join(C)

 

The optimizer will push the repartition upstream, which will cause 
copartitioning to fail.

Can be seen with the following test:

@Test
public void shouldInsertRepartitionsTopicForCogroupsUsedTwice() {
    final StreamsBuilder builder = new StreamsBuilder();
    final Properties properties = new Properties();
    properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
    final KStream<String, String> stream1 = builder.stream("one", stringConsumed);
    final KGroupedStream<String, String> groupedOne = stream1.map((k, v) -> new KeyValue<>(v, k)).groupByKey(Grouped.as("foo"));
    final CogroupedKStream<String, String> one = groupedOne.cogroup(STRING_AGGREGATOR);
    one.aggregate(STRING_INITIALIZER);
    one.aggregate(STRING_INITIALIZER);
    final String topologyDescription = builder.build(properties).describe().toString();
    System.err.println(topologyDescription);
}

Topologies:
   Sub-topology: 0
Source: KSTREAM-SOURCE-00 (topics: [one])
  --> KSTREAM-MAP-01
Processor: KSTREAM-MAP-01 (stores: [])
  --> foo-repartition-filter
  <-- KSTREAM-SOURCE-00
Processor: foo-repartition-filter (stores: [])
  --> foo-repartition-sink
  <-- KSTREAM-MAP-01
Sink: foo-repartition-sink (topic: foo-repartition)
  <-- foo-repartition-filter

  Sub-topology: 1
Source: foo-repartition-source (topics: [foo-repartition])
  --> COGROUPKSTREAM-AGGREGATE-06, 
COGROUPKSTREAM-AGGREGATE-12
Processor: COGROUPKSTREAM-AGGREGATE-06 (stores: 
[COGROUPKSTREAM-AGGREGATE-STATE-STORE-02])
  --> COGROUPKSTREAM-MERGE-07
  <-- foo-repartition-source
Processor: COGROUPKSTREAM-AGGREGATE-12 (stores: 
[COGROUPKSTREAM-AGGREGATE-STATE-STORE-08])
  --> COGROUPKSTREAM-MERGE-13
  <-- foo-repartition-source
Processor: COGROUPKSTREAM-MERGE-07 (stores: [])
  --> none
  <-- COGROUPKSTREAM-AGGREGATE-06
Processor: COGROUPKSTREAM-MERGE-13 (stores: [])
  --> none
  <-- COGROUPKSTREAM-AGGREGATE-12
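The partition-count conflict in the A/B/C example comes down to simple arithmetic: a single pushed-up repartition topic for B' would need to match both join partners at once. A trivial standalone sketch of that constraint (hypothetical names, not Kafka code):

```java
// Hypothetical illustration of the copartitioning constraint from the
// A (2 partitions), B (3), C (4) example: both sides of a join must have
// the same partition count, so one repartition topic for B' cannot serve
// a join against A and a join against C at the same time.
class CopartitionCheck {
    static boolean copartitioned(int leftPartitions, int rightPartitions) {
        return leftPartitions == rightPartitions;
    }

    // Whatever partition count the optimizer picks for the single pushed-up
    // repartition topic, it disagrees with A (2) or with C (4).
    static boolean singleRepartitionWorks(int repartitionPartitions) {
        return copartitioned(repartitionPartitions, 2)    // B'.join(A)
            && copartitioned(repartitionPartitions, 4);   // B'.join(C)
    }
}
```

Without the optimization, each join gets its own repartition topic and each one can match its partner's partition count independently.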






[jira] [Updated] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Walker Carlson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walker Carlson updated KAFKA-9298:
--
Description: 
Can be reproduced with the following test in KStreamKStreamJoinTest.java:
@Test
public void optimizerIsEager() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
    newStream.join(stream2,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    newStream.join(stream3,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    System.err.println(builder.build().describe().toString());
}

Results in:
Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
registered by another source.
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
KSTREAM-MAP-03-repartition has already been registered by another 
source.
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
at 
org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
at 
org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
at 
org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
at 
org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
at 
org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Updated] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Walker Carlson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walker Carlson updated KAFKA-9298:
--
Description: 
Can be reproduced with the following test in KStreamKStreamJoinTest.java:
@Test
public void optimizerIsEager() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
    newStream.join(stream2,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    newStream.join(stream3,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    System.err.println(builder.build().describe().toString());
}

Results in:
Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
registered by another source.
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
KSTREAM-MAP-03-repartition has already been registered by another 
source.
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
at org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
at org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
at org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
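The failure above can be sketched independently of Kafka. The class below is a hypothetical plain-Java illustration (not the actual InternalTopologyBuilder code) of the check that fires: each source topic may be registered exactly once, and a second registration of the same repartition topic, as happens when both joins materialize the reused mapped stream, raises the "already been registered by another source" error.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a duplicate-source check: registering the same
// source topic twice is rejected, mirroring the TopologyException above.
class TopologySketch {
    private final Set<String> sourceTopics = new HashSet<>();

    public void addSource(String topic) {
        // Set.add returns false when the topic was already registered.
        if (!sourceTopics.add(topic)) {
            throw new IllegalStateException(
                "Invalid topology: Topic " + topic
                + " has already been registered by another source.");
        }
    }
}
```

Registering two distinct topics succeeds; re-registering either one throws, which is exactly what the second join triggers for the shared repartition topic.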

[jira] [Created] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Walker Carlson (Jira)
Walker Carlson created KAFKA-9298:
-

 Summary: Reuse of a mapped stream causes an Invalid Topology
 Key: KAFKA-9298
 URL: https://issues.apache.org/jira/browse/KAFKA-9298
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Walker Carlson


Can be found within KStreamKStreamJoinTest.java:
@Test
public void optimizerIsEager() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 =
        builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 =
        builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 =
        builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
    newStream.join(stream2,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    newStream.join(stream3,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

    System.err.println(builder.build().describe().toString());
}

results in:

Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
registered by another source.
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
KSTREAM-MAP-03-repartition has already been registered by another 
source.
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
at org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
at org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
at org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native

[jira] [Commented] (KAFKA-6049) Kafka Streams: Add Cogroup in the DSL

2019-12-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995954#comment-16995954
 ] 

ASF GitHub Bot commented on KAFKA-6049:
---

mjsax commented on pull request #7792: KAFKA-6049: Add auto-repartitioning for 
cogroup
URL: https://github.com/apache/kafka/pull/7792
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka Streams: Add Cogroup in the DSL
> -
>
> Key: KAFKA-6049
> URL: https://issues.apache.org/jira/browse/KAFKA-6049
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Walker Carlson
>Priority: Major
>  Labels: api, kip, user-experience
>
> When multiple streams aggregate together to form a single larger object 
> (e.g. a shopping website may have a cart stream, a wish list stream, and a 
> purchases stream; together they make up a Customer), it is very difficult to 
> accommodate this in the Kafka Streams DSL: it generally requires you to 
> group and aggregate all of the streams into KTables and then make multiple 
> outer join calls to end up with a KTable of your desired object. This 
> creates a state store for each stream and a long chain of ValueJoiners that 
> each new record must pass through to reach the final object.
> Creating a cogroup method that uses a single state store will:
> * Reduce the number of gets from state stores. With the multiple joins, 
> when a new value comes into any of the streams, a chain reaction happens 
> where the join processors keep calling ValueGetters until all state stores 
> have been accessed.
> * Slightly improve performance. As described above, all ValueGetters are 
> called, which also causes all ValueJoiners to be called, forcing a 
> recalculation of the current joined value of all the other streams.
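The single-shared-store idea behind cogroup can be sketched without Kafka at all. The following is a minimal plain-Java illustration (all class and method names are hypothetical, not Kafka Streams API): three input streams update one Customer aggregate held in a single map, instead of maintaining one table per stream plus a chain of joins.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the cogroup idea: one aggregate store shared by
// several input streams, instead of one KTable per stream plus outer joins.
class CogroupSketch {
    // The aggregate object built up from all three streams.
    static class Customer {
        final List<String> cart = new ArrayList<>();
        final List<String> wishList = new ArrayList<>();
        final List<String> purchases = new ArrayList<>();
    }

    // Single "state store": customerId -> Customer.
    private final Map<String, Customer> store = new HashMap<>();

    private Customer customer(String id) {
        return store.computeIfAbsent(id, k -> new Customer());
    }

    // One aggregator per co-grouped stream, all writing into the same store;
    // no join or ValueGetter chain is needed to assemble the final object.
    void onCartEvent(String id, String item)     { customer(id).cart.add(item); }
    void onWishListEvent(String id, String item) { customer(id).wishList.add(item); }
    void onPurchaseEvent(String id, String item) { customer(id).purchases.add(item); }

    Customer get(String id) { return store.get(id); }
}
```

An update from any one stream touches only the shared store, which is the source of the state-store and ValueGetter savings described in the issue.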



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8855) Collect and Expose Client's Name and Version in the Brokers

2019-12-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995813#comment-16995813
 ] 

ASF GitHub Bot commented on KAFKA-8855:
---

cmccabe commented on pull request #7749: KAFKA-8855; Collect and Expose 
Client's Name and Version in the Brokers (KIP-511 Part 2)
URL: https://github.com/apache/kafka/pull/7749
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Collect and Expose Client's Name and Version in the Brokers
> ---
>
> Key: KAFKA-8855
> URL: https://issues.apache.org/jira/browse/KAFKA-8855
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> Implements KIP-511 as documented here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9283) Flaky Test - kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker

2019-12-13 Thread Israel Ekpo (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995611#comment-16995611
 ] 

Israel Ekpo commented on KAFKA-9283:


Thanks [~manikumar] for the update. I will work on it and share my updates 
here.

> Flaky Test - 
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> 
>
> Key: KAFKA-9283
> URL: https://issues.apache.org/jira/browse/KAFKA-9283
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.4.0
> Environment: OS: Ubuntu 18.04.3 LTS
> Java Version: OpenJDK 11.0.4
> Scala Versions: 2.13.0, 2.13.1
> Gradle Version: 5.6.2
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>  Labels: flaky-test
>
> This same test fails occasionally when run with Scala 2.12.10 but has been 
> failing consistently with Scala 2.13.0 and 2.13.1.
> Needs review.
> Also, I had to adjust the scalaVersion variable in the gradle.properties 
> config file to the target version in my environment before it was picked up 
> by the integration test:
> cat ~/scratchpad/111744.out/kafka-2.4.0-src/gradle.properties
> OS: Ubuntu 18.04.3 LTS
>  Java Version: OpenJDK 11.0.4
>  Scala Versions: 2.13.0, 2.13.1
>  Gradle Version: 5.6.2
> ./gradlew :core:integrationTest --tests 
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > Configure project :
>  Building project 'core' with Scala version 2.13.1
>  Building project 'streams-scala' with Scala version 2.13.1
> > Task :core:integrationTest FAILED
>  
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
>  failed, log available in 
> /home/isekpo/scratchpad/111744.out/kafka-2.4.0-src/core/build/reports/testOutput/kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker.test.stdout
> kafka.admin.ReassignPartitionsClusterTest > 
> shouldMoveSinglePartitionWithinBroker FAILED
>  org.scalatest.exceptions.TestFailedException: Partition should have been 
> moved to the expected log directory
>  at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>  at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>  at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
>  at org.scalatest.Assertions.fail(Assertions.scala:1091)
>  at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>  at org.scalatest.Assertions$.fail(Assertions.scala:1389)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
>  at 
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker(ReassignPartitionsClusterTest.scala:177)
> 1 test completed, 1 failed
> FAILURE: Build failed with an exception.
>  * What went wrong:
>  Execution failed for task ':core:integrationTest'.
>  > There were failing tests. See the report at: 
> [file:///home/isekpo/scratchpad/111744.out/kafka-2.4.0-src/core/build/reports/tests/integrationTest/index.html]
>  * Try:
>  Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output. Run with --scan to get full insights.
>  * Get more help at [https://help.gradle.org|https://help.gradle.org/]
> Deprecated Gradle features were used in this build, making it incompatible 
> with Gradle 6.0.
>  Use '--warning-mode all' to show the individual deprecation warnings.
>  See 
> [https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings]
> BUILD FAILED in 1m 56s
>  13 actionable tasks: 4 executed, 9 up-to-date



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-9297:
---
Description: 
The CreateTopic API does not work with older versions of the API. This can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; the 
request times out.

The latest version of the response introduced new fields with default values. 
When those fields are not supported by the version used by the client, the 
serialization mechanism expects them to hold their default values and throws 
otherwise. The current implementation in KafkaApis sets them regardless of 
the version used.

  was:
The create topic api do not work with older version of the api. It can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3. It 
timeouts.

The latest version of the response has introduced new fields with default 
values. When those fields are not supported by the version used by the client, 
the serialization mechanism expect to have the default values and throws 
otherwise. The current implementation in KafkaApis set them regardless of the 
version used.

It seems that it has been introduced in KIP-525.


> CreateTopic API do not work with older version of the request/response
> --
>
> Key: KAFKA-9297
> URL: https://issues.apache.org/jira/browse/KAFKA-9297
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> The CreateTopic API does not work with older versions of the API. This can 
> be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
> the request times out.
> The latest version of the response introduced new fields with default 
> values. When those fields are not supported by the version used by the 
> client, the serialization mechanism expects them to hold their default 
> values and throws otherwise. The current implementation in KafkaApis sets 
> them regardless of the version used.
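The version-gating rule described above can be sketched in plain Java. This is a hypothetical illustration (not the actual Kafka serialization code, and the field name in the usage is invented): a field added in version V may only be written for versions >= V; for older versions it must still hold its default value, otherwise serialization fails, which is how unconditionally setting the field breaks older clients.

```java
// Hypothetical sketch of version-gated serialization: a field introduced in
// a later schema version cannot carry a non-default value when serializing
// for an older version.
class VersionedField {
    private final String name;
    private final int introducedInVersion;
    private final int defaultValue;
    private int value;

    VersionedField(String name, int introducedInVersion, int defaultValue) {
        this.name = name;
        this.introducedInVersion = introducedInVersion;
        this.defaultValue = defaultValue;
        this.value = defaultValue;
    }

    void set(int value) { this.value = value; }

    // Returns true if the field is serialized for this version; throws if a
    // non-default value was set but the target version cannot represent it.
    boolean write(int targetVersion) {
        if (targetVersion >= introducedInVersion) {
            return true; // field is part of this version's schema
        }
        if (value != defaultValue) {
            throw new IllegalStateException(
                "Field " + name + " has non-default value " + value
                + " but version " + targetVersion + " cannot represent it");
        }
        return false; // silently omitted for older versions
    }
}
```

In this sketch the fix corresponds to only calling `set` when the negotiated version supports the field, rather than setting it unconditionally.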



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-9297:
---
Priority: Major  (was: Blocker)

> CreateTopic API do not work with older version of the request/response
> --
>
> Key: KAFKA-9297
> URL: https://issues.apache.org/jira/browse/KAFKA-9297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> The CreateTopic API does not work with older versions of the API. This can 
> be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
> the request times out.
> The latest version of the response introduced new fields with default 
> values. When those fields are not supported by the version used by the 
> client, the serialization mechanism expects them to hold their default 
> values and throws otherwise. The current implementation in KafkaApis sets 
> them regardless of the version used.
> It seems to have been introduced by KIP-525.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-9297:
---
Affects Version/s: (was: 2.4.0)

> CreateTopic API do not work with older version of the request/response
> --
>
> Key: KAFKA-9297
> URL: https://issues.apache.org/jira/browse/KAFKA-9297
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> The CreateTopic API does not work with older versions of the API. This can 
> be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
> the request times out.
> The latest version of the response introduced new fields with default 
> values. When those fields are not supported by the version used by the 
> client, the serialization mechanism expects them to hold their default 
> values and throws otherwise. The current implementation in KafkaApis sets 
> them regardless of the version used.
> It seems to have been introduced by KIP-525.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-9297:
---
Description: 
The CreateTopic API does not work with older versions of the API. This can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; the 
request times out.

The latest version of the response introduced new fields with default values. 
When those fields are not supported by the version used by the client, the 
serialization mechanism expects them to hold their default values and throws 
otherwise. The current implementation in KafkaApis sets them regardless of 
the version used.

It seems to have been introduced by KIP-525.

  was:
The create topic api do not work with older version of the api. It can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3. It 
timeouts.

The latest version of the response has introduced new fields with default 
values. When those fields are not supported by the version used by the client, 
the serialization mechanism expect to have the default values and throw 
otherwise. The current implementation in KafkaApis set them regardless of the 
version used.

It seems that it has been introduced in KIP-525.


> CreateTopic API do not work with older version of the request/response
> --
>
> Key: KAFKA-9297
> URL: https://issues.apache.org/jira/browse/KAFKA-9297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
>
> The CreateTopic API does not work with older versions of the API. This can 
> be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
> the request times out.
> The latest version of the response introduced new fields with default 
> values. When those fields are not supported by the version used by the 
> client, the serialization mechanism expects them to hold their default 
> values and throws otherwise. The current implementation in KafkaApis sets 
> them regardless of the version used.
> It seems to have been introduced by KIP-525.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995488#comment-16995488
 ] 

ASF GitHub Bot commented on KAFKA-9297:
---

dajac commented on pull request #7829: KAFKA-9297; CreateTopic API do not work 
with older version of the request/response
URL: https://github.com/apache/kafka/pull/7829
 
 
   The CreateTopic API does not work with older versions of the API. This can 
be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
the request times out.
   
   The latest version of the response introduced new fields with default 
values. When those fields are not supported by the version used by the 
client, the serialization mechanism expects them to hold their default values 
and throws otherwise. The current implementation in KafkaApis sets them 
regardless of the version used.
   
   It seems to have been introduced by KIP-525.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> CreateTopic API do not work with older version of the request/response
> --
>
> Key: KAFKA-9297
> URL: https://issues.apache.org/jira/browse/KAFKA-9297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
>
> The CreateTopic API does not work with older versions of the API. This can 
> be reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; 
> the request times out.
> The latest version of the response introduced new fields with default 
> values. When those fields are not supported by the version used by the 
> client, the serialization mechanism expects them to hold their default 
> values and throws otherwise. The current implementation in KafkaApis sets 
> them regardless of the version used.
> It seems to have been introduced by KIP-525.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)
David Jacot created KAFKA-9297:
--

 Summary: CreateTopic API do not work with older version of the 
request/response
 Key: KAFKA-9297
 URL: https://issues.apache.org/jira/browse/KAFKA-9297
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: David Jacot
Assignee: David Jacot


The CreateTopic API does not work with older versions of the API. This can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3; the 
request times out.

The latest version of the response introduced new fields with default values. 
When those fields are not supported by the version used by the client, the 
serialization mechanism expects them to hold their default values and throws 
otherwise. The current implementation in KafkaApis sets them regardless of 
the version used.

It seems to have been introduced by KIP-525.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)