[jira] [Resolved] (KAFKA-9471) Throw exception for DEAD StreamThread.State
[ https://issues.apache.org/jira/browse/KAFKA-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-9471.
---------------------------
    Resolution: Duplicate

> Throw exception for DEAD StreamThread.State
> -------------------------------------------
>
>                 Key: KAFKA-9471
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9471
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Minor
>
> In StreamThreadStateStoreProvider we have:
> {code}
> if (streamThread.state() == StreamThread.State.DEAD) {
>     return Collections.emptyList();
> {code}
> If the user cannot retry anymore, we should throw the exception that is handled in the else block.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (KAFKA-9471) Return empty collection for PENDING_SHUTDOWN
Ted Yu created KAFKA-9471:
--------------------------

             Summary: Return empty collection for PENDING_SHUTDOWN
                 Key: KAFKA-9471
                 URL: https://issues.apache.org/jira/browse/KAFKA-9471
             Project: Kafka
          Issue Type: Improvement
            Reporter: Ted Yu
            Assignee: Ted Yu

In StreamThreadStateStoreProvider we have:
{code}
if (streamThread.state() == StreamThread.State.DEAD) {
    return Collections.emptyList();
{code}
PENDING_SHUTDOWN should be treated the same way as DEAD.
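A minimal sketch (not the actual Kafka code) of how the two KAFKA-9471 proposals could combine: PENDING_SHUTDOWN returns an empty list so the caller can retry, while DEAD throws, since no retry can succeed. The class, enum, and method names here are illustrative stand-ins.

```java
import java.util.Collections;
import java.util.List;

public class StoreLookupSketch {
    enum ThreadState { RUNNING, PENDING_SHUTDOWN, DEAD }

    static List<String> storesFor(ThreadState state) {
        if (state == ThreadState.PENDING_SHUTDOWN) {
            return Collections.emptyList();  // transient: the caller may retry
        } else if (state == ThreadState.DEAD) {
            // no retry is possible: surface the failure instead of hiding it
            throw new IllegalStateException("StreamThread is DEAD; state stores are unavailable");
        }
        return List.of("store-1");           // RUNNING path, elided here
    }
}
```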
[jira] [Resolved] (KAFKA-9464) Close the producer in completeShutdown
[ https://issues.apache.org/jira/browse/KAFKA-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-9464.
---------------------------
    Resolution: Not A Problem

> Close the producer in completeShutdown
> --------------------------------------
>
>                 Key: KAFKA-9464
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9464
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Minor
>
> In StreamThread#completeShutdown, the producer (if not null) should be closed.
[jira] [Created] (KAFKA-9465) Enclose consumer call with catching InvalidOffsetException
Ted Yu created KAFKA-9465:
--------------------------

             Summary: Enclose consumer call with catching InvalidOffsetException
                 Key: KAFKA-9465
                 URL: https://issues.apache.org/jira/browse/KAFKA-9465
             Project: Kafka
          Issue Type: Improvement
            Reporter: Ted Yu

In maybeUpdateStandbyTasks, the try block that catches InvalidOffsetException encloses the record handling but not the restoreConsumer.poll call itself. Since InvalidOffsetException is thrown by restoreConsumer.poll, this call should be enclosed in the try block as well.
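A minimal sketch of the proposed structure: the poll itself moves inside the try block, so an exception raised by poll takes the same recovery path as one raised during record handling. The consumer interface here is a stand-in, not the Kafka API, and a plain RuntimeException stands in for InvalidOffsetException.

```java
import java.util.List;

public class StandbyPollSketch {
    interface RestoreConsumer { List<String> poll(); }

    static String maybeUpdateStandbyTasks(RestoreConsumer restoreConsumer) {
        try {
            // poll() is now inside the try block, so its exceptions are handled too
            List<String> records = restoreConsumer.poll();
            return "processed " + records.size() + " records";
        } catch (RuntimeException e) {  // stand-in for InvalidOffsetException
            return "recovered: " + e.getMessage();
        }
    }
}
```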
[jira] [Created] (KAFKA-9464) Close the producer in completeShutdown
Ted Yu created KAFKA-9464:
--------------------------

             Summary: Close the producer in completeShutdown
                 Key: KAFKA-9464
                 URL: https://issues.apache.org/jira/browse/KAFKA-9464
             Project: Kafka
          Issue Type: Bug
            Reporter: Ted Yu

In StreamThread#completeShutdown, the producer (if not null) should be closed.
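A minimal sketch of a null-guarded close during shutdown, as the report suggests. Each resource is closed in its own try/catch so one failure does not skip the others; the interface and field names are illustrative, not the actual StreamThread members.

```java
public class ShutdownSketch {
    static int closed = 0;           // counts successful closes, for illustration

    interface Resource { void close(); }

    static void completeShutdown(Resource consumer, Resource producer) {
        safeClose(consumer);
        if (producer != null) {      // the producer may be absent on some threads
            safeClose(producer);
        }
    }

    private static void safeClose(Resource r) {
        try {
            r.close();
            closed++;
        } catch (RuntimeException e) {
            // log and keep shutting down; one bad close must not abort shutdown
        }
    }
}
```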
[jira] [Created] (KAFKA-9463) Transient failure in KafkaAdminClientTest.testListOffsets
Ted Yu created KAFKA-9463:
--------------------------

             Summary: Transient failure in KafkaAdminClientTest.testListOffsets
                 Key: KAFKA-9463
                 URL: https://issues.apache.org/jira/browse/KAFKA-9463
             Project: Kafka
          Issue Type: Test
            Reporter: Ted Yu

When running tests with Java 11, I got the following test failure:
{code}
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at org.apache.kafka.clients.admin.KafkaAdminClientTest.testListOffsets(KafkaAdminClientTest.java:2336)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
{code}

KafkaAdminClientTest.testListOffsets passes when it is run alone.
[jira] [Created] (KAFKA-9462) Correct exception message in DistributedHerder
Ted Yu created KAFKA-9462:
--------------------------

             Summary: Correct exception message in DistributedHerder
                 Key: KAFKA-9462
                 URL: https://issues.apache.org/jira/browse/KAFKA-9462
             Project: Kafka
          Issue Type: Bug
            Reporter: Ted Yu

There are a few exception messages in DistributedHerder which were copied from other exception messages. This task corrects the messages to reflect the actual conditions.
[jira] [Created] (KAFKA-7345) Potentially unclosed FileChannel in StateDirectory#unlock
Ted Yu created KAFKA-7345:
--------------------------

             Summary: Potentially unclosed FileChannel in StateDirectory#unlock
                 Key: KAFKA-7345
                 URL: https://issues.apache.org/jira/browse/KAFKA-7345
             Project: Kafka
          Issue Type: Bug
          Components: streams
            Reporter: Ted Yu

{code}
lockAndOwner.lock.release();
log.debug("{} Released state dir lock for task {}", logPrefix(), taskId);

final FileChannel fileChannel = channels.remove(taskId);
if (fileChannel != null) {
    fileChannel.close();
{code}
If {{lockAndOwner.lock.release()}} throws an IOException, the closing of the FileChannel would be skipped.
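A minimal sketch of the fix the report implies: move the channel cleanup into a finally block so an IOException from the lock release cannot skip it. The Lock and Channel types below are simplified stand-ins for java.nio.channels.FileLock and FileChannel.

```java
import java.io.IOException;

public class UnlockSketch {
    interface Lock { void release() throws IOException; }
    interface Channel { void close(); }

    // finally guarantees the channel is closed even when release() throws
    static void unlock(Lock lock, Channel channel) throws IOException {
        try {
            lock.release();            // may throw IOException
        } finally {
            if (channel != null) {
                channel.close();       // now runs regardless of the release outcome
            }
        }
    }
}
```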
[jira] [Created] (KAFKA-7344) Return early when all tasks are assigned in StickyTaskAssignor#assignActive
Ted Yu created KAFKA-7344:
--------------------------

             Summary: Return early when all tasks are assigned in StickyTaskAssignor#assignActive
                 Key: KAFKA-7344
                 URL: https://issues.apache.org/jira/browse/KAFKA-7344
             Project: Kafka
          Issue Type: Improvement
          Components: streams
            Reporter: Ted Yu

After re-assigning existing active tasks to clients that previously had the same active task, there is a chance that {{taskIds.size() == assigned.size()}}, i.e. all tasks are assigned. The method continues with:
{code}
final Set<TaskId> unassigned = new HashSet<>(taskIds);
unassigned.removeAll(assigned);
{code}
We can check the above condition and return early, before allocating the HashSet. A similar optimization can be done before the following (around line 112):
{code}
// assign any remaining unassigned tasks
final List<TaskId> sortedTasks = new ArrayList<>(unassigned);
{code}
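A minimal sketch of the early-return optimization described above: skip the HashSet allocation and the removeAll pass when every task is already assigned. Integer task ids stand in for the actual TaskId type.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AssignSketch {
    static Set<Integer> unassignedTasks(List<Integer> taskIds, Set<Integer> assigned) {
        if (assigned.size() == taskIds.size()) {
            return Set.of();                 // all tasks assigned: nothing left to compute
        }
        Set<Integer> unassigned = new HashSet<>(taskIds);
        unassigned.removeAll(assigned);
        return unassigned;
    }
}
```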
[jira] [Created] (KAFKA-7334) Suggest changing config for state.dir in case of FileNotFoundException
Ted Yu created KAFKA-7334:
--------------------------

             Summary: Suggest changing config for state.dir in case of FileNotFoundException
                 Key: KAFKA-7334
                 URL: https://issues.apache.org/jira/browse/KAFKA-7334
             Project: Kafka
          Issue Type: Improvement
            Reporter: Ted Yu

Quoting the stack trace from KAFKA-5998:
{code}
WARN [2018-08-22 03:17:03,745] org.apache.kafka.streams.processor.internals.ProcessorStateManager: task [0_45] Failed to write offset checkpoint file to /tmp/kafka-streams//0_45/.checkpoint: {}
! java.nio.file.NoSuchFileException: /tmp/kafka-streams//0_45/.checkpoint.tmp
! at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
! at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
{code}
When state.dir is left at its default configuration, there is a chance that certain files under the state directory are cleaned up by the OS. [~mjsax] and I proposed suggesting to the user, through the exception message, to change the location for state.dir.
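A minimal sketch of what such an exception message could look like. The helper name and the exact wording are illustrative assumptions, not the message Kafka ended up using.

```java
import java.nio.file.NoSuchFileException;

public class CheckpointMessageSketch {
    // Builds an operator-facing message that names the config to change.
    static String adviseOn(NoSuchFileException e, String stateDir) {
        return "Failed to write checkpoint under " + stateDir
            + " (" + e.getFile() + "). If state.dir is left at its default under /tmp,"
            + " the OS may clean it up; consider pointing state.dir at a non-temporary location.";
    }
}
```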
[jira] [Created] (KAFKA-7316) Use of filter method in KTable.scala may result in StackOverflowError
Ted Yu created KAFKA-7316:
--------------------------

             Summary: Use of filter method in KTable.scala may result in StackOverflowError
                 Key: KAFKA-7316
                 URL: https://issues.apache.org/jira/browse/KAFKA-7316
             Project: Kafka
          Issue Type: Bug
            Reporter: Ted Yu

In this thread: http://search-hadoop.com/m/Kafka/uyzND1dNbYKXzC4F1?subj=Issue+in+Kafka+2+0+0+ Druhin reported seeing a StackOverflowError when using the filter method from KTable.scala.

This can be reproduced with the following change:
{code}
diff --git a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala b/streams/streams-scala/src/test/scala
index 3d1bab5..e0a06f2 100644
--- a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
+++ b/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
@@ -58,6 +58,7 @@ class StreamToTableJoinScalaIntegrationTestImplicitSerdes extends StreamToTableJ
     val userClicksStream: KStream[String, Long] = builder.stream(userClicksTopic)
     val userRegionsTable: KTable[String, String] = builder.table(userRegionsTopic)
+    userRegionsTable.filter { case (_, count) => true }

     // Compute the total per region by summing the individual click counts per region.
     val clicksPerRegion: KTable[String, Long] =
{code}
[jira] [Created] (KAFKA-7276) Consider using re2j to speed up regex operations
Ted Yu created KAFKA-7276:
--------------------------

             Summary: Consider using re2j to speed up regex operations
                 Key: KAFKA-7276
                 URL: https://issues.apache.org/jira/browse/KAFKA-7276
             Project: Kafka
          Issue Type: Task
            Reporter: Ted Yu

https://github.com/google/re2j

re2j claims to do linear-time regular expression matching in Java. Its benefit is most obvious for deeply nested regexes (such as a | b | c | d). We should consider using re2j to speed up regex operations.
[jira] [Created] (KAFKA-7195) StreamStreamJoinIntegrationTest fails in 2.0 Jenkins
Ted Yu created KAFKA-7195:
--------------------------

             Summary: StreamStreamJoinIntegrationTest fails in 2.0 Jenkins
                 Key: KAFKA-7195
                 URL: https://issues.apache.org/jira/browse/KAFKA-7195
             Project: Kafka
          Issue Type: Test
            Reporter: Ted Yu

From https://builds.apache.org/job/kafka-2.0-jdk8/87/testReport/junit/org.apache.kafka.streams.integration/StreamStreamJoinIntegrationTest/testOuter_caching_enabled___false_/ :
{code}
java.lang.AssertionError: Expected: is <[A-null]>
    but: was <[A-a, A-b, A-c, A-d]>
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
    at org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.checkResult(AbstractJoinIntegrationTest.java:171)
    at org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:212)
    at org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:184)
    at org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest.testOuter(StreamStreamJoinIntegrationTest.java:198)
{code}
However, some test output was missing:
{code}
[2018-07-23 20:51:36,363] INFO Socket c ...[truncated 1627692 chars]... 671)
{code}
I ran the test locally and it passed.
[jira] [Created] (KAFKA-7175) Make version checking logic more flexible in streams_upgrade_test.py
Ted Yu created KAFKA-7175:
--------------------------

             Summary: Make version checking logic more flexible in streams_upgrade_test.py
                 Key: KAFKA-7175
                 URL: https://issues.apache.org/jira/browse/KAFKA-7175
             Project: Kafka
          Issue Type: Improvement
            Reporter: Ted Yu

During debugging of the system test failure for KAFKA-5037, it was re-discovered that the version numbers inside version-probing related messages are hard-coded in streams_upgrade_test.py. This is inflexible. We should derive the expected version numbers from the latest version defined in the Java class.

Matthias made the following suggestion: we should also make this more generic and test upgrades from 3 -> 4, 3 -> 5, and 4 -> 5. The current code only goes from the latest version to the future version.
[jira] [Created] (KAFKA-7174) Improve version probing of subscription info
Ted Yu created KAFKA-7174:
--------------------------

             Summary: Improve version probing of subscription info
                 Key: KAFKA-7174
                 URL: https://issues.apache.org/jira/browse/KAFKA-7174
             Project: Kafka
          Issue Type: Improvement
            Reporter: Ted Yu

During code review for KAFKA-5037, [~guozhang] made the following suggestion.

Currently, version probing works as follows. When the leader receives subscription info encoded with a higher version than it can understand (e.g. the leader is on version 3, while one of the subscriptions received is encoded with version 4), it sends back an empty assignment encoded with version 3, with latestSupportedVersion also set to 3.

When the member receives the assignment, it checks whether latestSupportedVersion is smaller than the version it used for encoding the sent subscription. If it is smaller, the leader cannot understand (in this case) version 4. The member then sets a flag and re-subscribes with a down-graded encoding format of version 3.

Now, with PR #5322, we can let the leader clearly communicate this error via the error code. Upon receiving the assignment, if the error code is VERSION_PROBING, the member immediately knows what happened, which simplifies the above logic.
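A minimal sketch contrasting the two signals described above. Before PR #5322 the member infers the mismatch from latestSupportedVersion alone; with an explicit VERSION_PROBING error code the check becomes direct. The constant value and method names are illustrative assumptions, not Kafka's actual protocol constants.

```java
public class VersionProbingSketch {
    static final int VERSION_PROBING = 1;   // illustrative error-code value

    // Pre-#5322: infer the downgrade from the echoed latestSupportedVersion.
    static boolean shouldDowngradeByInference(int usedVersion, int latestSupportedVersion) {
        return latestSupportedVersion < usedVersion;
    }

    // Post-#5322: the leader states the condition explicitly via the error code.
    static boolean shouldDowngradeByError(int errorCode) {
        return errorCode == VERSION_PROBING;
    }
}
```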
[jira] [Resolved] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently
[ https://issues.apache.org/jira/browse/KAFKA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6335. --- Resolution: Cannot Reproduce > SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails > intermittently > -- > > Key: KAFKA-6335 > URL: https://issues.apache.org/jira/browse/KAFKA-6335 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Assignee: Manikumar >Priority: Major > > From > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/ > : > {code} > java.lang.AssertionError: expected acls Set(User:36 has Allow permission for > operations: Read from hosts: *, User:7 has Allow permission for operations: > Read from hosts: *, User:21 has Allow permission for operations: Read from > hosts: *, User:39 has Allow permission for operations: Read from hosts: *, > User:43 has Allow permission for operations: Read from hosts: *, User:3 has > Allow permission for operations: Read from hosts: *, User:35 has Allow > permission for operations: Read from hosts: *, User:15 has Allow permission > for operations: Read from hosts: *, User:16 has Allow permission for > operations: Read from hosts: *, User:22 has Allow permission for operations: > Read from hosts: *, User:26 has Allow permission for operations: Read from > hosts: *, User:11 has Allow permission for operations: Read from hosts: *, > User:38 has Allow permission for operations: Read from hosts: *, User:8 has > Allow permission for operations: Read from hosts: *, User:28 has Allow > permission for operations: Read from hosts: *, User:32 has Allow permission > for operations: Read from hosts: *, User:25 has Allow permission for > operations: Read from hosts: *, User:41 has Allow permission for operations: > Read from hosts: *, User:44 has Allow permission for operations: Read from > hosts: *, User:48 has Allow permission for operations: 
Read from hosts: *, > User:2 has Allow permission for operations: Read from hosts: *, User:9 has > Allow permission for operations: Read from hosts: *, User:14 has Allow > permission for operations: Read from hosts: *, User:46 has Allow permission > for operations: Read from hosts: *, User:13 has Allow permission for > operations: Read from hosts: *, User:5 has Allow permission for operations: > Read from hosts: *, User:29 has Allow permission for operations: Read from > hosts: *, User:45 has Allow permission for operations: Read from hosts: *, > User:6 has Allow permission for operations: Read from hosts: *, User:37 has > Allow permission for operations: Read from hosts: *, User:23 has Allow > permission for operations: Read from hosts: *, User:19 has Allow permission > for operations: Read from hosts: *, User:24 has Allow permission for > operations: Read from hosts: *, User:17 has Allow permission for operations: > Read from hosts: *, User:34 has Allow permission for operations: Read from > hosts: *, User:12 has Allow permission for operations: Read from hosts: *, > User:42 has Allow permission for operations: Read from hosts: *, User:4 has > Allow permission for operations: Read from hosts: *, User:47 has Allow > permission for operations: Read from hosts: *, User:18 has Allow permission > for operations: Read from hosts: *, User:31 has Allow permission for > operations: Read from hosts: *, User:49 has Allow permission for operations: > Read from hosts: *, User:33 has Allow permission for operations: Read from > hosts: *, User:1 has Allow permission for operations: Read from hosts: *, > User:27 has Allow permission for operations: Read from hosts: *) but got > Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 > has Allow permission for operations: Read from hosts: *, User:21 has Allow > permission for operations: Read from hosts: *, User:39 has Allow permission > for operations: Read from hosts: *, User:43 has Allow permission for > 
operations: Read from hosts: *, User:3 has Allow permission for operations: > Read from hosts: *, User:35 has Allow permission for operations: Read from > hosts: *, User:15 has Allow permission for operations: Read from hosts: *, > User:16 has Allow permission for operations: Read from hosts: *, User:22 has > Allow permission for operations: Read from hosts: *, User:26 has Allow > permission for operations: Read from hosts: *, User:11 has Allow permission > for operations: Read from hosts: *, User:38 has Allow permission for > operations: Read from hosts: *, User:8 has Allow permission for operations: > Read from hosts: *, User:28 has Allow permission for operations: Read from > hosts: *, User:32 has Allow permission for
[jira] [Resolved] (KAFKA-7124) Number of AnyLogDir should match the length of the replicas list
[ https://issues.apache.org/jira/browse/KAFKA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-7124.
---------------------------
    Resolution: Not A Problem

> Number of AnyLogDir should match the length of the replicas list
> ----------------------------------------------------------------
>
>                 Key: KAFKA-7124
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7124
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Priority: Major
>
> See the discussion under the 'Partitions reassignment is failing in Kafka 1.1.0' thread reported by Debraj Manna.
> Here is a snippet from the generated json file:
> {code}
> {"topic": "Topic3", "partition": 7, "log_dirs": ["any"], "replicas": [3, 0, 2]}
> {code}
> Code snippet from ReassignPartitionsCommand.scala:
> {code}
> "log_dirs" -> replicas.map(r => replicaLogDirAssignment.getOrElse(new TopicPartitionReplica(tp.topic, tp.partition, r), AnyLogDir)).asJava
> {code}
> We know that the appearance of "any" is due to the getOrElse clause.
> The bug in the above code is that the number of AnyLogDir entries should match the length of the replicas list; otherwise "log_dirs" should be omitted in such a case.
[jira] [Created] (KAFKA-7124) Number of AnyLogDir should match the length of the replicas list
Ted Yu created KAFKA-7124:
--------------------------

             Summary: Number of AnyLogDir should match the length of the replicas list
                 Key: KAFKA-7124
                 URL: https://issues.apache.org/jira/browse/KAFKA-7124
             Project: Kafka
          Issue Type: Bug
            Reporter: Ted Yu

See the discussion under the 'Partitions reassignment is failing in Kafka 1.1.0' thread.

Here is a snippet from the generated json file:
{code}
{"topic": "Topic3", "partition": 7, "log_dirs": ["any"], "replicas": [3, 0, 2]}
{code}
Code snippet from ReassignPartitionsCommand.scala:
{code}
"log_dirs" -> replicas.map(r => replicaLogDirAssignment.getOrElse(new TopicPartitionReplica(tp.topic, tp.partition, r), AnyLogDir)).asJava
{code}
We know that the appearance of "any" is due to the getOrElse clause.

The bug in the above code is that the number of AnyLogDir entries should match the length of the replicas list; otherwise "log_dirs" should be omitted in such a case.
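A minimal sketch of the invariant the report describes: build exactly one log-dir entry per replica, so log_dirs always has the same length as replicas. The "any" sentinel and the getOrDefault lookup mirror the Scala snippet above, but the types are simplified Java stand-ins.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LogDirsSketch {
    static final String ANY_LOG_DIR = "any";

    // One entry per replica: logDirsFor(...).size() == replicas.size() always holds.
    static List<String> logDirsFor(List<Integer> replicas, Map<Integer, String> assignment) {
        return replicas.stream()
            .map(r -> assignment.getOrDefault(r, ANY_LOG_DIR))
            .collect(Collectors.toList());
    }
}
```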
[jira] [Created] (KAFKA-7049) InternalTopicIntegrationTest sometimes fails
Ted Yu created KAFKA-7049:
--------------------------

             Summary: InternalTopicIntegrationTest sometimes fails
                 Key: KAFKA-7049
                 URL: https://issues.apache.org/jira/browse/KAFKA-7049
             Project: Kafka
          Issue Type: Test
            Reporter: Ted Yu

Saw the following based on commit fa1d0383902260576132e09bdf9efcc2784b55b4:
{code}
org.apache.kafka.streams.integration.InternalTopicIntegrationTest > shouldCompactTopicsForKeyValueStoreChangelogs FAILED
    java.lang.RuntimeException: Timed out waiting for completion. lagMetrics=[0/2] totalLag=[0.0]
        at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitForCompletion(IntegrationTestUtils.java:227)
        at org.apache.kafka.streams.integration.InternalTopicIntegrationTest.shouldCompactTopicsForKeyValueStoreChangelogs(InternalTopicIntegrationTest.java:164)
{code}
[jira] [Resolved] (KAFKA-6698) ConsumerBounceTest#testClose sometimes fails
[ https://issues.apache.org/jira/browse/KAFKA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-6698.
---------------------------
    Resolution: Cannot Reproduce

> ConsumerBounceTest#testClose sometimes fails
> --------------------------------------------
>
>                 Key: KAFKA-6698
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6698
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> Saw the following in https://builds.apache.org/job/kafka-1.1-jdk7/94/testReport/junit/kafka.api/ConsumerBounceTest/testClose/ :
> {code}
> org.apache.kafka.common.errors.TimeoutException: The consumer group command timed out while waiting for group to initialize:
> Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
> {code}
[jira] [Resolved] (KAFKA-6875) EosIntegrationTest#shouldNotViolateEosIfOneTaskFails is flaky
[ https://issues.apache.org/jira/browse/KAFKA-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6875. --- Resolution: Cannot Reproduce > EosIntegrationTest#shouldNotViolateEosIfOneTaskFails is flaky > - > > Key: KAFKA-6875 > URL: https://issues.apache.org/jira/browse/KAFKA-6875 > Project: Kafka > Issue Type: Test > Components: streams, unit tests >Reporter: Ted Yu >Priority: Minor > Labels: newbie++ > Attachments: EosIntegrationTest.out > > > From > https://builds.apache.org/job/kafka-trunk-jdk10/81/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFails/ > : > {code} > java.lang.AssertionError: Condition not met within timeout 6. SteamsTasks > did not request commit. > at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276) > at > org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails(EosIntegrationTest.java:339) > {code} > From test output: > {code} > [2018-05-07 19:04:18,236] ERROR [Controller id=2 epoch=3] Controller 2 epoch > 3 failed to change state for partition __transaction_state-34 from > OnlinePartition to OnlinePartition (state.change.logger:76) > kafka.common.StateChangeFailedException: Failed to elect leader for partition > __transaction_state-34 under strategy > ControlledShutdownPartitionLeaderElectionStrategy > at > kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:328) > at > scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59) > at > scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at > kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:326) > at > kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:254) > at > 
kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:175)
> at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:116)
> at kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1055)
> at kafka.controller.KafkaController$ControlledShutdown.$anonfun$process$1(KafkaController.scala:1031)
> at scala.util.Try$.apply(Try.scala:209)
> at kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1031)
> at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:69)
> {code}
[jira] [Created] (KAFKA-6904) DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate is flaky
Ted Yu created KAFKA-6904: - Summary: DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate is flaky Key: KAFKA-6904 URL: https://issues.apache.org/jira/browse/KAFKA-6904 Project: Kafka Issue Type: Test Reporter: Ted Yu >From >https://builds.apache.org/job/kafka-pr-jdk10-scala2.12/820/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/testAdvertisedListenerUpdate/ > : {code} kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate Failing for the past 1 build (Since Failed#820 ) Took 21 sec. Error Message java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. Stacktrace java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94) at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77) at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29) at kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:996) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234) at scala.collection.Iterator.foreach(Iterator.scala:944) at scala.collection.Iterator.foreach$(Iterator.scala:944) at scala.collection.AbstractIterator.foreach(Iterator.scala:1432) at scala.collection.IterableLike.foreach(IterableLike.scala:71) at scala.collection.IterableLike.foreach$(IterableLike.scala:70) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableLike.map(TraversableLike.scala:234) at scala.collection.TraversableLike.map$(TraversableLike.scala:227) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:996)
at kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate(DynamicBrokerReconfigurationTest.scala:742)
{code}
The above happened with JDK 10.
[jira] [Created] (KAFKA-6875) EosIntegrationTest#shouldNotViolateEosIfOneTaskFails is flaky
Ted Yu created KAFKA-6875: - Summary: EosIntegrationTest#shouldNotViolateEosIfOneTaskFails is flaky Key: KAFKA-6875 URL: https://issues.apache.org/jira/browse/KAFKA-6875 Project: Kafka Issue Type: Test Reporter: Ted Yu >From >https://builds.apache.org/job/kafka-trunk-jdk10/81/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFails/ > : {code} java.lang.AssertionError: Condition not met within timeout 6. SteamsTasks did not request commit. at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276) at org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails(EosIntegrationTest.java:339) {code} >From test output: {code} [2018-05-07 19:04:18,236] ERROR [Controller id=2 epoch=3] Controller 2 epoch 3 failed to change state for partition __transaction_state-34 from OnlinePartition to OnlinePartition (state.change.logger:76) kafka.common.StateChangeFailedException: Failed to elect leader for partition __transaction_state-34 under strategy ControlledShutdownPartitionLeaderElectionStrategy at kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:328) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:326) at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:254) at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:175) at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:116) at kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1055) at 
kafka.controller.KafkaController$ControlledShutdown.$anonfun$process$1(KafkaController.scala:1031)
at scala.util.Try$.apply(Try.scala:209)
at kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1031)
at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:69)
{code}
[jira] [Resolved] (KAFKA-6736) ReassignPartitionsClusterTest#shouldMoveSubsetOfPartitions is flaky
[ https://issues.apache.org/jira/browse/KAFKA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-6736.
---------------------------
    Resolution: Cannot Reproduce

> ReassignPartitionsClusterTest#shouldMoveSubsetOfPartitions is flaky
> -------------------------------------------------------------------
>
>                 Key: KAFKA-6736
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6736
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> Saw this from https://builds.apache.org/job/kafka-trunk-jdk8/2518/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldMoveSubsetOfPartitions/ :
> {code}
> kafka.common.AdminCommandFailedException: Partition reassignment currently in progress for Map(topic1-0 -> Buffer(100, 102), topic1-2 -> Buffer(100, 102), topic2-1 -> Buffer(101, 100), topic2-2 -> Buffer(100, 102)). Aborting operation
> at kafka.admin.ReassignPartitionsCommand.reassignPartitions(ReassignPartitionsCommand.scala:612)
> at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:215)
> at kafka.admin.ReassignPartitionsClusterTest.shouldMoveSubsetOfPartitions(ReassignPartitionsClusterTest.scala:242)
> {code}
[jira] [Resolved] (KAFKA-6734) TopicMetadataTest is flaky
[ https://issues.apache.org/jira/browse/KAFKA-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6734. --- Resolution: Cannot Reproduce > TopicMetadataTest is flaky > -- > > Key: KAFKA-6734 > URL: https://issues.apache.org/jira/browse/KAFKA-6734 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > I got two different test failures in two runs of test suite: > {code} > kafka.integration.TopicMetadataTest > testAutoCreateTopic FAILED > kafka.common.KafkaException: fetching topic metadata for topics > [Set(testAutoCreateTopic)] from broker [List(BrokerEndPoint(0,,41557))] failed > at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77) > at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98) > at > kafka.integration.TopicMetadataTest.testAutoCreateTopic(TopicMetadataTest.scala:105) > Caused by: > java.net.SocketTimeoutException > at > sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:211) > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) > at > java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) > at > org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:122) > at > kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:131) > at > kafka.network.BlockingChannel.receive(BlockingChannel.scala:122) > at > kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82) > at > kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79) > at kafka.producer.SyncProducer.send(SyncProducer.scala:124) > at > kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63) > ... 2 more > {code} > {code} > kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack > FAILED > java.lang.AssertionError: Topic metadata is not correctly updated for > broker kafka.server.KafkaServer@4c45dc9f. 
> Expected ISR: List(BrokerEndPoint(0,localhost,40822), > BrokerEndPoint(1,localhost,39030)) > Actual ISR : Vector(BrokerEndPoint(0,localhost,40822)) > at kafka.utils.TestUtils$.fail(TestUtils.scala:355) > at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865) > at > kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:191) > at > kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:189) > at scala.collection.immutable.List.foreach(List.scala:392) > at > kafka.integration.TopicMetadataTest.checkIsr(TopicMetadataTest.scala:189) > at > kafka.integration.TopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack(TopicMetadataTest.scala:231) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-6531) SocketServerTest#closingChannelException fails sometimes
[ https://issues.apache.org/jira/browse/KAFKA-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6531. --- Resolution: Cannot Reproduce > SocketServerTest#closingChannelException fails sometimes > > > Key: KAFKA-6531 > URL: https://issues.apache.org/jira/browse/KAFKA-6531 > Project: Kafka > Issue Type: Test > Components: core >Reporter: Ted Yu >Priority: Minor > > From > https://builds.apache.org/job/kafka-trunk-jdk9/361/testReport/junit/kafka.network/SocketServerTest/closingChannelException/ > : > {code} > java.lang.AssertionError: Channels not removed > at kafka.utils.TestUtils$.fail(TestUtils.scala:355) > at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865) > at > kafka.network.SocketServerTest.assertProcessorHealthy(SocketServerTest.scala:914) > at > kafka.network.SocketServerTest.$anonfun$closingChannelException$1(SocketServerTest.scala:763) > at > kafka.network.SocketServerTest.$anonfun$closingChannelException$1$adapted(SocketServerTest.scala:747) > {code} > Among the test output, I saw: > {code} > [2018-02-04 18:51:15,995] ERROR Processor 0 closed connection from > /127.0.0.1:48261 (kafka.network.SocketServerTest$$anon$5$$anon$1:73) > java.lang.IllegalStateException: There is already a connection for id > 127.0.0.1:1-127.0.0.1:2-0 > at > org.apache.kafka.common.network.Selector.ensureNotRegistered(Selector.java:260) > at org.apache.kafka.common.network.Selector.register(Selector.java:254) > at > kafka.network.SocketServerTest$TestableSelector.super$register(SocketServerTest.scala:1043) > at > kafka.network.SocketServerTest$TestableSelector.$anonfun$register$2(SocketServerTest.scala:1043) > at > scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12) > at > kafka.network.SocketServerTest$TestableSelector.runOp(SocketServerTest.scala:1037) > at > kafka.network.SocketServerTest$TestableSelector.register(SocketServerTest.scala:1043) > at > 
kafka.network.Processor.configureNewConnections(SocketServer.scala:723) > at kafka.network.Processor.run(SocketServer.scala:532) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-6775) AbstractProcessor created in SimpleBenchmark should call super#init
Ted Yu created KAFKA-6775: - Summary: AbstractProcessor created in SimpleBenchmark should call super#init Key: KAFKA-6775 URL: https://issues.apache.org/jira/browse/KAFKA-6775 Project: Kafka Issue Type: Bug Reporter: Ted Yu Around line 610: {code} return new AbstractProcessor() { @Override public void init(ProcessorContext context) { } {code} super.init(context) should be called in the overridden init method above. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
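The pattern behind KAFKA-6775 can be sketched as follows. Note this is an illustration only: `BaseProcessor` and `ProcessorContext` here are stand-in classes, not Kafka's `AbstractProcessor` API.

```java
// Stand-in for Kafka's ProcessorContext (illustrative only).
class ProcessorContext { }

// Stand-in for Kafka's AbstractProcessor: the base init stores the context,
// so an override that skips super.init leaves the field null.
abstract class BaseProcessor {
    protected ProcessorContext context;

    public void init(ProcessorContext context) {
        this.context = context;
    }
}

public class SuperInitExample {
    public static void main(String[] args) {
        BaseProcessor processor = new BaseProcessor() {
            @Override
            public void init(ProcessorContext context) {
                super.init(context); // without this call, this.context stays null
            }
        };
        processor.init(new ProcessorContext());
        System.out.println(processor.context != null); // prints "true"
    }
}
```

An empty override as in the snippet above would leave the inherited `context` field unset, which is why the issue asks for the `super#init` call.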
[jira] [Created] (KAFKA-6747) kafka-streams Invalid transition attempted from state READY to state ABORTING_TRANSACTION
Ted Yu created KAFKA-6747: - Summary: kafka-streams Invalid transition attempted from state READY to state ABORTING_TRANSACTION Key: KAFKA-6747 URL: https://issues.apache.org/jira/browse/KAFKA-6747 Project: Kafka Issue Type: Bug Reporter: Ted Yu I am running tests against kafka-streams 1.1 and got the following stack trace (everything was working fine with kafka-streams 1.0): {code} ERROR org.apache.kafka.streams.processor.internals.AssignedStreamsTasks - stream-thread [feedBuilder-XXX-StreamThread-4] Failed to close stream task, 0_2 org.apache.kafka.common.KafkaException: TransactionalId feedBuilder-0_2: Invalid transition attempted from state READY to state ABORTING_TRANSACTION at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:757) at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:751) at org.apache.kafka.clients.producer.internals.TransactionManager.beginAbort(TransactionManager.java:230) at org.apache.kafka.clients.producer.KafkaProducer.abortTransaction(KafkaProducer.java:660) at org.apache.kafka.streams.processor.internals.StreamTask.closeSuspended(StreamTask.java:486) at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:546) at org.apache.kafka.streams.processor.internals.AssignedTasks.closeNonRunningTasks(AssignedTasks.java:166) at org.apache.kafka.streams.processor.internals.AssignedTasks.suspend(AssignedTasks.java:151) at org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:242) at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:291) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:414) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:359) at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290) at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115) at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:827) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:784) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720) {code} This happens when starting the same stream-processing application on 3 JVMs all running on the same linux box, JVMs are named JVM-[2-4]. All 3 instances use separate stream state.dir. No record is ever processed because the input kafka topics are empty at this stage. JVM-2 starts first, joined shortly after by JVM-4 and JVM-3, find the state transition logs below. 
The above stacktrace is from JVM-4 {code} [JVM-2] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-2] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-4] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-2] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-2] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-3] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-2] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING JVM-4 crashes here with above stacktrace [JVM-2] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-4] stream-client [feedBuilder-XXX] State transition from REBALANCING to ERROR [JVM-4] stream-client [feedBuilder-XXX] State transition from ERROR to PENDING_SHUTDOWN [JVM-4] stream-client [feedBuilder-XXX] State transition from PENDING_SHUTDOWN to NOT_RUNNING [JVM-4] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-3] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-2] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-3] stream-client [feedBuilder-XXX] State transition from RUNNING to REBALANCING [JVM-2] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-3] stream-client [feedBuilder-XXX] State transition from REBALANCING to RUNNING [JVM-4] stream-client [feedBuilder-XXX] State transition from REBALANCING to
[jira] [Created] (KAFKA-6735) Document how to skip findbugs / checkstyle when running unit tests
Ted Yu created KAFKA-6735: - Summary: Document how to skip findbugs / checkstyle when running unit tests Key: KAFKA-6735 URL: https://issues.apache.org/jira/browse/KAFKA-6735 Project: Kafka Issue Type: Test Reporter: Ted Yu Even when running a single unit test, the findbugs dependency results in extra time spent before the test actually runs. We should document how the findbugs dependency can be skipped in that scenario: {code} -x findbugsMain -x findbugsTest {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
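A full invocation using those flags might look like the following. The module and test class here are illustrative, and task names can differ by Kafka version (newer builds replaced findbugs with spotbugs):

```shell
# Run one test class, skipping the findbugs tasks that would otherwise
# run first (checkstyle can be skipped the same way with -x checkstyleMain).
./gradlew :streams:test --tests org.apache.kafka.streams.KafkaStreamsTest \
    -x findbugsMain -x findbugsTest
```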
[jira] [Created] (KAFKA-6734) TopicMetadataTest is flaky
Ted Yu created KAFKA-6734: - Summary: TopicMetadataTest is flaky Key: KAFKA-6734 URL: https://issues.apache.org/jira/browse/KAFKA-6734 Project: Kafka Issue Type: Test Reporter: Ted Yu I got two different test failures in two runs of test suite: {code} kafka.integration.TopicMetadataTest > testAutoCreateTopic FAILED kafka.common.KafkaException: fetching topic metadata for topics [Set(testAutoCreateTopic)] from broker [List(BrokerEndPoint(0,,41557))] failed at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77) at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98) at kafka.integration.TopicMetadataTest.testAutoCreateTopic(TopicMetadataTest.scala:105) Caused by: java.net.SocketTimeoutException at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:211) at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:122) at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:131) at kafka.network.BlockingChannel.receive(BlockingChannel.scala:122) at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82) at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79) at kafka.producer.SyncProducer.send(SyncProducer.scala:124) at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63) ... 2 more {code} {code} kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack FAILED java.lang.AssertionError: Topic metadata is not correctly updated for broker kafka.server.KafkaServer@4c45dc9f. 
Expected ISR: List(BrokerEndPoint(0,localhost,40822), BrokerEndPoint(1,localhost,39030)) Actual ISR : Vector(BrokerEndPoint(0,localhost,40822)) at kafka.utils.TestUtils$.fail(TestUtils.scala:355) at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865) at kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:191) at kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:189) at scala.collection.immutable.List.foreach(List.scala:392) at kafka.integration.TopicMetadataTest.checkIsr(TopicMetadataTest.scala:189) at kafka.integration.TopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack(TopicMetadataTest.scala:231) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-6716) discardChannel should be released in MockSelector#completeSend
Ted Yu created KAFKA-6716: - Summary: discardChannel should be released in MockSelector#completeSend Key: KAFKA-6716 URL: https://issues.apache.org/jira/browse/KAFKA-6716 Project: Kafka Issue Type: Test Reporter: Ted Yu {code} private void completeSend(Send send) throws IOException { // Consume the send so that we will be able to send more requests to the destination ByteBufferChannel discardChannel = new ByteBufferChannel(send.size()); while (!send.completed()) { send.writeTo(discardChannel); } completedSends.add(send); } {code} The {{discardChannel}} should be closed before returning from the method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
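The fix suggested for KAFKA-6716 is essentially try-with-resources. Below is a minimal sketch of that shape; `DiscardChannel` is a stand-in for Kafka's `ByteBufferChannel` (which implements `Closeable`), not the real class.

```java
import java.io.Closeable;

// Stand-in for Kafka's ByteBufferChannel (illustrative only).
class DiscardChannel implements Closeable {
    boolean closed = false;

    void write(byte[] data) { /* discard the bytes */ }

    @Override
    public void close() { closed = true; }
}

public class CompleteSendExample {
    // try-with-resources guarantees the channel is released even if the
    // write throws, which is the leak the issue points out.
    static DiscardChannel completeSend(byte[] send) {
        DiscardChannel channel = new DiscardChannel();
        try (DiscardChannel c = channel) {
            c.write(send);
        }
        return channel; // returned only so the sketch can be inspected
    }

    public static void main(String[] args) {
        DiscardChannel ch = completeSend(new byte[]{1, 2, 3});
        System.out.println(ch.closed); // prints "true"
    }
}
```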
[jira] [Created] (KAFKA-6698) ConsumerBounceTest#testClose sometimes fails
Ted Yu created KAFKA-6698: - Summary: ConsumerBounceTest#testClose sometimes fails Key: KAFKA-6698 URL: https://issues.apache.org/jira/browse/KAFKA-6698 Project: Kafka Issue Type: Test Reporter: Ted Yu Saw the following in https://builds.apache.org/job/kafka-1.1-jdk7/94/testReport/junit/kafka.api/ConsumerBounceTest/testClose/ : {code} org.apache.kafka.common.errors.TimeoutException: The consumer group command timed out while waiting for group to initialize: Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-6678) Upgrade dependencies with later release versions
Ted Yu created KAFKA-6678: - Summary: Upgrade dependencies with later release versions Key: KAFKA-6678 URL: https://issues.apache.org/jira/browse/KAFKA-6678 Project: Kafka Issue Type: Improvement Reporter: Ted Yu {code} The following dependencies have later release versions: - net.sourceforge.argparse4j:argparse4j [0.7.0 -> 0.8.1] - org.bouncycastle:bcpkix-jdk15on [1.58 -> 1.59] - com.puppycrawl.tools:checkstyle [6.19 -> 8.8] - org.owasp:dependency-check-gradle [3.0.2 -> 3.1.1] - org.ajoberstar:grgit [1.9.3 -> 2.1.1] - org.glassfish.jersey.containers:jersey-container-servlet [2.25.1 -> 2.26] - org.eclipse.jetty:jetty-client [9.2.24.v20180105 -> 9.4.8.v20171121] - org.eclipse.jetty:jetty-server [9.2.24.v20180105 -> 9.4.8.v20171121] - org.eclipse.jetty:jetty-servlet [9.2.24.v20180105 -> 9.4.8.v20171121] - org.eclipse.jetty:jetty-servlets [9.2.24.v20180105 -> 9.4.8.v20171121] - org.openjdk.jmh:jmh-core [1.19 -> 1.20] - org.openjdk.jmh:jmh-core-benchmarks [1.19 -> 1.20] - org.openjdk.jmh:jmh-generator-annprocess [1.19 -> 1.20] - org.lz4:lz4-java [1.4 -> 1.4.1] - org.apache.maven:maven-artifact [3.5.2 -> 3.5.3] - org.jacoco:org.jacoco.agent [0.7.9 -> 0.8.0] - org.jacoco:org.jacoco.ant [0.7.9 -> 0.8.0] - org.rocksdb:rocksdbjni [5.7.3 -> 5.11.3] - org.scala-lang:scala-library [2.11.12 -> 2.12.4] - com.typesafe.scala-logging:scala-logging_2.11 [3.7.2 -> 3.8.0] - org.scala-lang:scala-reflect [2.11.12 -> 2.12.4] - org.scalatest:scalatest_2.11 [3.0.4 -> 3.0.5] {code} Looks like we can consider upgrading scalatest, jmh-core and checkstyle -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-6228) Intermittent test failure in FetchRequestTest.testDownConversionWithConnectionFailure
[ https://issues.apache.org/jira/browse/KAFKA-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6228. --- Resolution: Cannot Reproduce > Intermittent test failure in > FetchRequestTest.testDownConversionWithConnectionFailure > - > > Key: KAFKA-6228 > URL: https://issues.apache.org/jira/browse/KAFKA-6228 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From > https://builds.apache.org/job/kafka-trunk-jdk8/2219/testReport/junit/kafka.server/FetchRequestTest/testDownConversionWithConnectionFailure/ > : > {code} > java.lang.AssertionError: Fetch size too small 42, broker may have run out of > memory > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > kafka.server.FetchRequestTest.kafka$server$FetchRequestTest$$fetch$1(FetchRequestTest.scala:214) > at > kafka.server.FetchRequestTest$$anonfun$testDownConversionWithConnectionFailure$2.apply(FetchRequestTest.scala:226) > at > kafka.server.FetchRequestTest$$anonfun$testDownConversionWithConnectionFailure$2.apply(FetchRequestTest.scala:226) > at scala.collection.immutable.Range.foreach(Range.scala:160) > at > kafka.server.FetchRequestTest.testDownConversionWithConnectionFailure(FetchRequestTest.scala:226) > {code} > I ran FetchRequestTest locally which passed. > {code} > assertTrue(s"Fetch size too small $size, broker may have run out of > memory", > size > maxPartitionBytes - batchSize) > {code} > The assertion message should include maxPartitionBytes and batchSize which > would give us more information. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (KAFKA-5889) MetricsTest is flaky
[ https://issues.apache.org/jira/browse/KAFKA-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reopened KAFKA-5889: --- As of 332e698ac9c74ce29317021b03a54512c92ac8b3 , I got: {code} kafka.metrics.MetricsTest > testMetricsLeak FAILED java.lang.AssertionError: expected:<1421> but was:<1424> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at kafka.metrics.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:68) at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at kafka.metrics.MetricsTest.testMetricsLeak(MetricsTest.scala:66) {code} > MetricsTest is flaky > > > Key: KAFKA-5889 > URL: https://issues.apache.org/jira/browse/KAFKA-5889 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Major > > The following appeared in several recent builds (e.g. > https://builds.apache.org/job/kafka-trunk-jdk7/2758) : > {code} > kafka.metrics.MetricsTest > testMetricsLeak FAILED > java.lang.AssertionError: expected:<1216> but was:<1219> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at org.junit.Assert.assertEquals(Assert.java:631) > at > kafka.metrics.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:68) > at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) > at kafka.metrics.MetricsTest.testMetricsLeak(MetricsTest.scala:66) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-6531) SocketServerTest#closingChannelException fails sometimes
Ted Yu created KAFKA-6531: - Summary: SocketServerTest#closingChannelException fails sometimes Key: KAFKA-6531 URL: https://issues.apache.org/jira/browse/KAFKA-6531 Project: Kafka Issue Type: Test Reporter: Ted Yu From https://builds.apache.org/job/kafka-trunk-jdk9/361/testReport/junit/kafka.network/SocketServerTest/closingChannelException/ : {code} java.lang.AssertionError: Channels not removed at kafka.utils.TestUtils$.fail(TestUtils.scala:355) at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865) at kafka.network.SocketServerTest.assertProcessorHealthy(SocketServerTest.scala:914) at kafka.network.SocketServerTest.$anonfun$closingChannelException$1(SocketServerTest.scala:763) at kafka.network.SocketServerTest.$anonfun$closingChannelException$1$adapted(SocketServerTest.scala:747) {code} Among the test output, I saw: {code} [2018-02-04 18:51:15,995] ERROR Processor 0 closed connection from /127.0.0.1:48261 (kafka.network.SocketServerTest$$anon$5$$anon$1:73) java.lang.IllegalStateException: There is already a connection for id 127.0.0.1:1-127.0.0.1:2-0 at org.apache.kafka.common.network.Selector.ensureNotRegistered(Selector.java:260) at org.apache.kafka.common.network.Selector.register(Selector.java:254) at kafka.network.SocketServerTest$TestableSelector.super$register(SocketServerTest.scala:1043) at kafka.network.SocketServerTest$TestableSelector.$anonfun$register$2(SocketServerTest.scala:1043) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12) at kafka.network.SocketServerTest$TestableSelector.runOp(SocketServerTest.scala:1037) at kafka.network.SocketServerTest$TestableSelector.register(SocketServerTest.scala:1043) at kafka.network.Processor.configureNewConnections(SocketServer.scala:723) at kafka.network.Processor.run(SocketServer.scala:532) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-6232) SaslSslAdminClientIntegrationTest sometimes fails
[ https://issues.apache.org/jira/browse/KAFKA-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6232. --- Resolution: Cannot Reproduce > SaslSslAdminClientIntegrationTest sometimes fails > - > > Key: KAFKA-6232 > URL: https://issues.apache.org/jira/browse/KAFKA-6232 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Major > Labels: security > Attachments: saslSslAdminClientIntegrationTest-203.out > > > Here was one recent occurrence: > https://builds.apache.org/job/kafka-trunk-jdk9/203/testReport/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/ > {code} > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.LeaderNotAvailableException: There is no > leader for this topic-partition as we are in the middle of a leadership > election. > at > org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45) > at > org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32) > at > org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104) > at > org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225) > at > kafka.api.AdminClientIntegrationTest.$anonfun$testLogStartOffsetCheckpoint$3(AdminClientIntegrationTest.scala:762) > {code} > In the test output, I saw: > {code} > [2017-11-17 23:15:45,593] ERROR [KafkaApi-1] Error when handling request > {filters=[{resource_type=2,resource_name=foobar,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]} > (kafka.server.KafkaApis:107) > org.apache.kafka.common.errors.ClusterAuthorizationException: Request > Request(processor=2, connectionId=127.0.0.1:36295-127.0.0.1:58183-0, > session=Session(User:client2,localhost/127.0.0.1), > listenerName=ListenerName(SASL_SSL), securityProtocol=SASL_SSL, buffer=null) 
> is not authorized. > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-5889) MetricsTest is flaky
[ https://issues.apache.org/jira/browse/KAFKA-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5889. --- Resolution: Cannot Reproduce > MetricsTest is flaky > > > Key: KAFKA-5889 > URL: https://issues.apache.org/jira/browse/KAFKA-5889 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Major > > The following appeared in several recent builds (e.g. > https://builds.apache.org/job/kafka-trunk-jdk7/2758) : > {code} > kafka.metrics.MetricsTest > testMetricsLeak FAILED > java.lang.AssertionError: expected:<1216> but was:<1219> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at org.junit.Assert.assertEquals(Assert.java:631) > at > kafka.metrics.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:68) > at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) > at kafka.metrics.MetricsTest.testMetricsLeak(MetricsTest.scala:66) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-6384) TransactionsTest#testFencingOnSendOffsets sometimes fails with ProducerFencedException
[ https://issues.apache.org/jira/browse/KAFKA-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6384. --- Resolution: Cannot Reproduce > TransactionsTest#testFencingOnSendOffsets sometimes fails with > ProducerFencedException > -- > > Key: KAFKA-6384 > URL: https://issues.apache.org/jira/browse/KAFKA-6384 > Project: Kafka > Issue Type: Bug >Reporter: Ted Yu >Priority: Major > > From > https://builds.apache.org/job/kafka-trunk-jdk8/2283/testReport/junit/kafka.api/TransactionsTest/testFencingOnSendOffsets/ > : > {code} > org.scalatest.junit.JUnitTestFailedError: Got an unexpected exception from a > fenced producer. > at > org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100) > at > org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71) > at org.scalatest.Assertions$class.fail(Assertions.scala:1105) > at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71) > at > kafka.api.TransactionsTest.testFencingOnSendOffsets(TransactionsTest.scala:357) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) > at > org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) > at com.sun.proxy.$Proxy1.processTestClass(Unknown Source) > at > 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:108) > at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at >
[jira] [Created] (KAFKA-6424) QueryableStateIntegrationTest#queryOnRebalance should accept raw text
Ted Yu created KAFKA-6424: - Summary: QueryableStateIntegrationTest#queryOnRebalance should accept raw text Key: KAFKA-6424 URL: https://issues.apache.org/jira/browse/KAFKA-6424 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Priority: Minor I was using QueryableStateIntegrationTest#queryOnRebalance for some performance testing by adding more sentences to inputValues. I found that when a sentence contains an upper-case letter, the test times out. I got around this limitation by calling {{sentence.toLowerCase(Locale.ROOT)}} before the split. Ideally we could specify the path to a text file containing the input; the test would read the file and generate the input array. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
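The workaround described above can be sketched as a small helper. The method name `words` and the `\W+` split pattern are illustrative; the test's actual tokenization may differ.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LowerCaseSplitExample {
    // Normalize case before splitting, so sentences containing upper-case
    // letters produce the same word counts as their lower-case versions.
    static List<String> words(String sentence) {
        return Arrays.asList(sentence.toLowerCase(Locale.ROOT).split("\\W+"));
    }

    public static void main(String[] args) {
        System.out.println(words("All Streams Lead to Kafka"));
        // prints "[all, streams, lead, to, kafka]"
    }
}
```

Using `Locale.ROOT` keeps the lowering locale-independent, which matters on JVMs whose default locale changes case mappings (e.g. Turkish dotless i).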
[jira] [Created] (KAFKA-6413) ReassignPartitionsCommand#parsePartitionReassignmentData() should give a better error message when JSON is malformed
Ted Yu created KAFKA-6413: - Summary: ReassignPartitionsCommand#parsePartitionReassignmentData() should give a better error message when JSON is malformed Key: KAFKA-6413 URL: https://issues.apache.org/jira/browse/KAFKA-6413 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Priority: Minor In this thread: http://search-hadoop.com/m/Kafka/uyzND1J9Hizcxo0X?subj=Partition+reassignment+data+file+is+empty , Allen gave an example JSON string with an extra comma for which the partitionsToBeReassigned returned by ReassignPartitionsCommand#parsePartitionReassignmentData() was empty. I tried the following example, where a right bracket is removed: {code} val (partitionsToBeReassigned, replicaAssignment) = ReassignPartitionsCommand.parsePartitionReassignmentData( "{\"version\":1,\"partitions\":[{\"topic\":\"metrics\",\"partition\":0,\"replicas\":[1,2]},{\"topic\":\"metrics\",\"partition\":1,\"replicas\":[2,3]},}"); {code} The returned partitionsToBeReassigned is empty. The parser should give a better error message for malformed JSON strings. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
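The requested behavior amounts to failing fast instead of silently returning an empty assignment. A minimal sketch of that contract is below; `parsePartitions` here is a hypothetical stand-in for the real parser, which this sketch assumes yields an empty result on malformed input.

```java
import java.util.Collections;
import java.util.List;

public class ReassignmentParseExample {
    // Hypothetical stand-in for the real parser: on malformed JSON it
    // returns an empty list rather than signaling an error.
    static List<String> parsePartitions(String json) {
        return Collections.emptyList();
    }

    // Suggested improvement: surface a descriptive error that includes
    // the offending input, instead of returning an empty assignment.
    static List<String> parseOrThrow(String json) {
        List<String> partitions = parsePartitions(json);
        if (partitions.isEmpty()) {
            throw new IllegalArgumentException(
                "Partition reassignment JSON is malformed or empty: " + json);
        }
        return partitions;
    }

    public static void main(String[] args) {
        try {
            parseOrThrow("{\"version\":1,\"partitions\":[}");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```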
[jira] [Created] (KAFKA-6412) Improve synchronization in CachingKeyValueStore methods
Ted Yu created KAFKA-6412: - Summary: Improve synchronization in CachingKeyValueStore methods Key: KAFKA-6412 URL: https://issues.apache.org/jira/browse/KAFKA-6412 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Currently, CachingKeyValueStore methods are synchronized at the method level. It seems we can use a read lock for getters and a write lock for put / delete methods. For getInternal(), if the calling thread is the stream thread, getInternal() may trigger eviction; this can be handled by obtaining the write lock at the beginning of the method for the stream thread.
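The locking scheme suggested above can be sketched with a ReentrantReadWriteLock. This is a simplified illustration, not the actual CachingKeyValueStore: readers share the read lock, while put/delete (and any path that may evict) take the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of read/write locking for a cache-backed store (hypothetical class).
public class RwLockedStore {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Getters share the read lock; many readers may proceed concurrently.
    public String get(String key) {
        lock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Mutators (and any eviction-triggering path) take the exclusive write lock.
    public void put(String key, String value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```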
[jira] [Created] (KAFKA-6384) TransactionsTest#testFencingOnSendOffsets sometimes fails with ProducerFencedException
Ted Yu created KAFKA-6384: - Summary: TransactionsTest#testFencingOnSendOffsets sometimes fails with ProducerFencedException Key: KAFKA-6384 URL: https://issues.apache.org/jira/browse/KAFKA-6384 Project: Kafka Issue Type: Bug Reporter: Ted Yu >From >https://builds.apache.org/job/kafka-trunk-jdk8/2283/testReport/junit/kafka.api/TransactionsTest/testFencingOnSendOffsets/ > : {code} org.scalatest.junit.JUnitTestFailedError: Got an unexpected exception from a fenced producer. at org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100) at org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71) at org.scalatest.Assertions$class.fail(Assertions.scala:1105) at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71) at kafka.api.TransactionsTest.testFencingOnSendOffsets(TransactionsTest.scala:357) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy1.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:108) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:146) at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:128) at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404) at
[jira] [Created] (KAFKA-6370) MirrorMakerIntegrationTest#testCommaSeparatedRegex may fail due to NullPointerException
Ted Yu created KAFKA-6370: - Summary: MirrorMakerIntegrationTest#testCommaSeparatedRegex may fail due to NullPointerException Key: KAFKA-6370 URL: https://issues.apache.org/jira/browse/KAFKA-6370 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor >From >https://builds.apache.org/job/kafka-trunk-jdk8/2277/testReport/junit/kafka.tools/MirrorMakerIntegrationTest/testCommaSeparatedRegex/ > : {code} java.lang.NullPointerException at scala.collection.immutable.StringLike.$anonfun$format$1(StringLike.scala:351) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234) at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32) at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29) at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38) at scala.collection.TraversableLike.map(TraversableLike.scala:234) at scala.collection.TraversableLike.map$(TraversableLike.scala:227) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at scala.collection.immutable.StringLike.format(StringLike.scala:351) at scala.collection.immutable.StringLike.format$(StringLike.scala:350) at scala.collection.immutable.StringOps.format(StringOps.scala:29) at kafka.metrics.KafkaMetricsGroup$.$anonfun$toScope$3(KafkaMetricsGroup.scala:170) at scala.collection.immutable.List.map(List.scala:283) at kafka.metrics.KafkaMetricsGroup$.kafka$metrics$KafkaMetricsGroup$$toScope(KafkaMetricsGroup.scala:170) at kafka.metrics.KafkaMetricsGroup.explicitMetricName(KafkaMetricsGroup.scala:67) at kafka.metrics.KafkaMetricsGroup.explicitMetricName$(KafkaMetricsGroup.scala:51) at kafka.network.RequestMetrics.explicitMetricName(RequestChannel.scala:352) at kafka.metrics.KafkaMetricsGroup.metricName(KafkaMetricsGroup.scala:47) at kafka.metrics.KafkaMetricsGroup.metricName$(KafkaMetricsGroup.scala:42) at kafka.network.RequestMetrics.metricName(RequestChannel.scala:352) at 
kafka.metrics.KafkaMetricsGroup.newHistogram(KafkaMetricsGroup.scala:81) at kafka.metrics.KafkaMetricsGroup.newHistogram$(KafkaMetricsGroup.scala:80) at kafka.network.RequestMetrics.newHistogram(RequestChannel.scala:352) at kafka.network.RequestMetrics.(RequestChannel.scala:364) at kafka.network.RequestChannel$Metrics.$anonfun$new$2(RequestChannel.scala:57) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at kafka.network.RequestChannel$Metrics.(RequestChannel.scala:56) at kafka.network.RequestChannel.(RequestChannel.scala:243) at kafka.network.SocketServer.(SocketServer.scala:71) at kafka.server.KafkaServer.startup(KafkaServer.scala:238) at kafka.utils.TestUtils$.createServer(TestUtils.scala:135) at kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:93) {code} Here is the code from KafkaMetricsGroup.scala : {code} .map { case (key, value) => "%s.%s".format(key, value.replaceAll("\\.", "_"))} {code} It seems (some) value was null. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
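A null-guard along the lines implied by the report could look like the following. This is a hypothetical sketch (the class name, method, and the "unknown" placeholder are assumptions, not the actual KafkaMetricsGroup fix): substitute a placeholder for a null tag value instead of letting String.format hit a NullPointerException.

```java
// Hypothetical null-safe version of the tag-formatting step quoted above.
public class ScopeFormatter {
    public static String formatTag(String key, String value) {
        // Guard against a null tag value before calling replaceAll/format.
        String safe = (value == null) ? "unknown" : value.replaceAll("\\.", "_");
        return String.format("%s.%s", key, safe);
    }
}
```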
[jira] [Created] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently
Ted Yu created KAFKA-6335: - Summary: SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently Key: KAFKA-6335 URL: https://issues.apache.org/jira/browse/KAFKA-6335 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor >From >https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/ > : {code} java.lang.AssertionError: expected acls Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 has Allow permission for operations: Read from hosts: *, User:21 has Allow permission for operations: Read from hosts: *, User:39 has Allow permission for operations: Read from hosts: *, User:43 has Allow permission for operations: Read from hosts: *, User:3 has Allow permission for operations: Read from hosts: *, User:35 has Allow permission for operations: Read from hosts: *, User:15 has Allow permission for operations: Read from hosts: *, User:16 has Allow permission for operations: Read from hosts: *, User:22 has Allow permission for operations: Read from hosts: *, User:26 has Allow permission for operations: Read from hosts: *, User:11 has Allow permission for operations: Read from hosts: *, User:38 has Allow permission for operations: Read from hosts: *, User:8 has Allow permission for operations: Read from hosts: *, User:28 has Allow permission for operations: Read from hosts: *, User:32 has Allow permission for operations: Read from hosts: *, User:25 has Allow permission for operations: Read from hosts: *, User:41 has Allow permission for operations: Read from hosts: *, User:44 has Allow permission for operations: Read from hosts: *, User:48 has Allow permission for operations: Read from hosts: *, User:2 has Allow permission for operations: Read from hosts: *, User:9 has Allow permission for operations: Read from hosts: *, User:14 has Allow permission for operations: Read from hosts: *, User:46 has 
Allow permission for operations: Read from hosts: *, User:13 has Allow permission for operations: Read from hosts: *, User:5 has Allow permission for operations: Read from hosts: *, User:29 has Allow permission for operations: Read from hosts: *, User:45 has Allow permission for operations: Read from hosts: *, User:6 has Allow permission for operations: Read from hosts: *, User:37 has Allow permission for operations: Read from hosts: *, User:23 has Allow permission for operations: Read from hosts: *, User:19 has Allow permission for operations: Read from hosts: *, User:24 has Allow permission for operations: Read from hosts: *, User:17 has Allow permission for operations: Read from hosts: *, User:34 has Allow permission for operations: Read from hosts: *, User:12 has Allow permission for operations: Read from hosts: *, User:42 has Allow permission for operations: Read from hosts: *, User:4 has Allow permission for operations: Read from hosts: *, User:47 has Allow permission for operations: Read from hosts: *, User:18 has Allow permission for operations: Read from hosts: *, User:31 has Allow permission for operations: Read from hosts: *, User:49 has Allow permission for operations: Read from hosts: *, User:33 has Allow permission for operations: Read from hosts: *, User:1 has Allow permission for operations: Read from hosts: *, User:27 has Allow permission for operations: Read from hosts: *) but got Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 has Allow permission for operations: Read from hosts: *, User:21 has Allow permission for operations: Read from hosts: *, User:39 has Allow permission for operations: Read from hosts: *, User:43 has Allow permission for operations: Read from hosts: *, User:3 has Allow permission for operations: Read from hosts: *, User:35 has Allow permission for operations: Read from hosts: *, User:15 has Allow permission for operations: Read from hosts: *, User:16 has Allow permission for operations: Read from 
hosts: *, User:22 has Allow permission for operations: Read from hosts: *, User:26 has Allow permission for operations: Read from hosts: *, User:11 has Allow permission for operations: Read from hosts: *, User:38 has Allow permission for operations: Read from hosts: *, User:8 has Allow permission for operations: Read from hosts: *, User:28 has Allow permission for operations: Read from hosts: *, User:32 has Allow permission for operations: Read from hosts: *, User:25 has Allow permission for operations: Read from hosts: *, User:41 has Allow permission for operations: Read from hosts: *, User:44 has Allow permission for operations: Read from hosts: *, User:48 has Allow permission for operations: Read from hosts: *, User:2 has Allow permission for operations: Read from hosts: *, User:9 has Allow permission for
[jira] [Created] (KAFKA-6307) mBeanName should be removed before returning from JmxReporter#removeAttribute()
Ted Yu created KAFKA-6307: - Summary: mBeanName should be removed before returning from JmxReporter#removeAttribute() Key: KAFKA-6307 URL: https://issues.apache.org/jira/browse/KAFKA-6307 Project: Kafka Issue Type: Bug Reporter: Ted Yu JmxReporter$KafkaMbean showed up near the top of the first histogram output from KAFKA-6199. In JmxReporter#removeAttribute() : {code} KafkaMbean mbean = this.mbeans.get(mBeanName); if (mbean != null) mbean.removeAttribute(metricName.name()); return mbean; {code} mbeans.remove(mBeanName) should be called before returning.
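The proposed fix can be sketched with simplified types (this is not the real JmxReporter; the registry class and Object value type are illustrative): remove the entry from the map before returning it, so the reporter does not retain a reference to the mbean.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification of the map handling discussed above.
public class MbeanRegistry {
    private final Map<String, Object> mbeans = new HashMap<>();

    public void register(String name, Object mbean) {
        mbeans.put(name, mbean);
    }

    // Look up, then remove before returning -- the step the report says is missing.
    public Object removeMbean(String mBeanName) {
        Object mbean = mbeans.get(mBeanName);
        if (mbean != null)
            mbeans.remove(mBeanName);
        return mbean;
    }

    public int size() {
        return mbeans.size();
    }
}
```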
[jira] [Created] (KAFKA-6303) Potential lack of synchronization in NioEchoServer#AcceptorThread
Ted Yu created KAFKA-6303: - Summary: Potential lack of synchronization in NioEchoServer#AcceptorThread Key: KAFKA-6303 URL: https://issues.apache.org/jira/browse/KAFKA-6303 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor In the run() method: {code} SocketChannel socketChannel = ((ServerSocketChannel) key.channel()).accept(); socketChannel.configureBlocking(false); newChannels.add(socketChannel); {code} Modification of newChannels should be protected by a synchronized block.
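The synchronization being suggested can be sketched as follows. This is not the actual NioEchoServer (channels are stand-in strings here): every access to the shared list, from the acceptor side and the consumer side, goes through the same monitor.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: guard all access to the shared newChannels list.
public class AcceptorSketch {
    private final List<String> newChannels = new ArrayList<>();

    // Called from the acceptor thread for each accepted connection.
    public void accept(String channel) {
        synchronized (newChannels) {
            newChannels.add(channel);
        }
    }

    // Called from another thread; copies and clears under the same lock.
    public List<String> drain() {
        synchronized (newChannels) {
            List<String> drained = new ArrayList<>(newChannels);
            newChannels.clear();
            return drained;
        }
    }
}
```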
[jira] [Created] (KAFKA-6300) SelectorTest may fail with ConcurrentModificationException
Ted Yu created KAFKA-6300: - Summary: SelectorTest may fail with ConcurrentModificationException Key: KAFKA-6300 URL: https://issues.apache.org/jira/browse/KAFKA-6300 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/kafka-trunk-jdk8/2255/testReport/junit/org.apache.kafka.common.network/SelectorTest/testImmediatelyConnectedCleaned/ : {code} java.util.ConcurrentModificationException at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909) at java.util.ArrayList$Itr.next(ArrayList.java:859) at org.apache.kafka.common.network.EchoServer.closeConnections(EchoServer.java:115) at org.apache.kafka.common.network.EchoServer.close(EchoServer.java:121) at org.apache.kafka.common.network.SelectorTest.tearDown(SelectorTest.java:95) {code} It seems the sockets ArrayList was modified while it was being iterated during close.
[jira] [Created] (KAFKA-6232) SaslSslAdminClientIntegrationTest sometimes fails
Ted Yu created KAFKA-6232: - Summary: SaslSslAdminClientIntegrationTest sometimes fails Key: KAFKA-6232 URL: https://issues.apache.org/jira/browse/KAFKA-6232 Project: Kafka Issue Type: Test Reporter: Ted Yu Here was one recent occurrence: https://builds.apache.org/job/kafka-trunk-jdk9/203/testReport/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/ {code} java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.LeaderNotAvailableException: There is no leader for this topic-partition as we are in the middle of a leadership election. at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45) at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32) at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104) at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225) at kafka.api.AdminClientIntegrationTest.$anonfun$testLogStartOffsetCheckpoint$3(AdminClientIntegrationTest.scala:762) {code} In the test output, I saw: {code} [2017-11-17 23:15:45,593] ERROR [KafkaApi-1] Error when handling request {filters=[{resource_type=2,resource_name=foobar,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]} (kafka.server.KafkaApis:107) org.apache.kafka.common.errors.ClusterAuthorizationException: Request Request(processor=2, connectionId=127.0.0.1:36295-127.0.0.1:58183-0, session=Session(User:client2,localhost/127.0.0.1), listenerName=ListenerName(SASL_SSL), securityProtocol=SASL_SSL, buffer=null) is not authorized. {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-6228) Intermittent test failure in FetchRequestTest.testDownConversionWithConnectionFailure
Ted Yu created KAFKA-6228: - Summary: Intermittent test failure in FetchRequestTest.testDownConversionWithConnectionFailure Key: KAFKA-6228 URL: https://issues.apache.org/jira/browse/KAFKA-6228 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/kafka-trunk-jdk8/2219/testReport/junit/kafka.server/FetchRequestTest/testDownConversionWithConnectionFailure/ : {code} java.lang.AssertionError: Fetch size too small 42, broker may have run out of memory at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at kafka.server.FetchRequestTest.kafka$server$FetchRequestTest$$fetch$1(FetchRequestTest.scala:214) at kafka.server.FetchRequestTest$$anonfun$testDownConversionWithConnectionFailure$2.apply(FetchRequestTest.scala:226) at kafka.server.FetchRequestTest$$anonfun$testDownConversionWithConnectionFailure$2.apply(FetchRequestTest.scala:226) at scala.collection.immutable.Range.foreach(Range.scala:160) at kafka.server.FetchRequestTest.testDownConversionWithConnectionFailure(FetchRequestTest.scala:226) {code} I ran FetchRequestTest locally and it passed. {code} assertTrue(s"Fetch size too small $size, broker may have run out of memory", size > maxPartitionBytes - batchSize) {code} The assertion message should include maxPartitionBytes and batchSize, which would give us more information.
[jira] [Created] (KAFKA-6215) KafkaStreamsTest fails in trunk
Ted Yu created KAFKA-6215: - Summary: KafkaStreamsTest fails in trunk Key: KAFKA-6215 URL: https://issues.apache.org/jira/browse/KAFKA-6215 Project: Kafka Issue Type: Test Reporter: Ted Yu Two subtests fail. https://builds.apache.org/job/kafka-trunk-jdk9/193/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/testCannotCleanupWhileRunning/ {code} org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.streams.errors.ProcessorStateException: state directory [/tmp/kafka-streams/testCannotCleanupWhileRunning] doesn't exist and couldn't be created at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:618) at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:505) at org.apache.kafka.streams.KafkaStreamsTest.testCannotCleanupWhileRunning(KafkaStreamsTest.java:462) {code} testCleanup fails in a similar manner.
[jira] [Resolved] (KAFKA-6135) TransactionsTest#testFencingOnCommit may fail due to unexpected KafkaException
[ https://issues.apache.org/jira/browse/KAFKA-6135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6135. --- Resolution: Cannot Reproduce > TransactionsTest#testFencingOnCommit may fail due to unexpected KafkaException > -- > > Key: KAFKA-6135 > URL: https://issues.apache.org/jira/browse/KAFKA-6135 > Project: Kafka > Issue Type: Bug >Reporter: Ted Yu > Attachments: 6135.out > > > From > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2293/testReport/junit/kafka.api/TransactionsTest/testFencingOnCommit/ > : > {code} > org.scalatest.junit.JUnitTestFailedError: Got an unexpected exception from a > fenced producer. > at > org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException(AssertionsForJUnit.scala:100) > at > org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException$(AssertionsForJUnit.scala:99) > at > org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71) > at org.scalatest.Assertions.fail(Assertions.scala:1105) > at org.scalatest.Assertions.fail$(Assertions.scala:1101) > at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71) > at > kafka.api.TransactionsTest.testFencingOnCommit(TransactionsTest.scala:319) > ... > Caused by: org.apache.kafka.common.KafkaException: Cannot execute > transactional method because we are in an error state > at > org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:782) > at > org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:220) > at > org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:617) > at > kafka.api.TransactionsTest.testFencingOnCommit(TransactionsTest.scala:313) > ... 48 more > Caused by: org.apache.kafka.common.errors.ProducerFencedException: Producer > attempted an operation with an old epoch. 
Either there is a newer producer > with the same transactionalId, or the producer's transaction has been expired > by the broker. > {code} > Confirmed with [~apurva] that the above would not be covered by his fix for > KAFKA-6119 > Temporarily marking this as bug.
[jira] [Created] (KAFKA-6193) ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics fails sometimes
Ted Yu created KAFKA-6193: - Summary: ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics fails sometimes Key: KAFKA-6193 URL: https://issues.apache.org/jira/browse/KAFKA-6193 Project: Kafka Issue Type: Test Reporter: Ted Yu >From >https://builds.apache.org/job/kafka-trunk-jdk8/2198/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/ > : {code} java.lang.AssertionError: expected:but was:
at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:144) at kafka.admin.ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics(ReassignPartitionsClusterTest.scala:524) {code}
[jira] [Resolved] (KAFKA-6137) RestoreIntegrationTest sometimes fails with assertion error
[ https://issues.apache.org/jira/browse/KAFKA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6137. --- Resolution: Cannot Reproduce > RestoreIntegrationTest sometimes fails with assertion error > --- > > Key: KAFKA-6137 > URL: https://issues.apache.org/jira/browse/KAFKA-6137 > Project: Kafka > Issue Type: Test > Components: streams, unit tests >Reporter: Ted Yu >Priority: Minor > Labels: flaky-test > > From https://builds.apache.org/job/kafka-1.0-jdk7/62 : > {code} > org.apache.kafka.streams.integration.RestoreIntegrationTest > > shouldSuccessfullyStartWhenLoggingDisabled FAILED > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.kafka.streams.integration.RestoreIntegrationTest.shouldSuccessfullyStartWhenLoggingDisabled(RestoreIntegrationTest.java:195) > {code}
[jira] [Resolved] (KAFKA-6109) ResetIntegrationTest may fail due to IllegalArgumentException
[ https://issues.apache.org/jira/browse/KAFKA-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-6109. --- Resolution: Cannot Reproduce > ResetIntegrationTest may fail due to IllegalArgumentException > - > > Key: KAFKA-6109 > URL: https://issues.apache.org/jira/browse/KAFKA-6109 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From https://builds.apache.org/job/kafka-trunk-jdk7/2918 : > {code} > org.apache.kafka.streams.integration.ResetIntegrationTest > > testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED > java.lang.IllegalArgumentException: Setting the time to 1508791687000 > while current time 1508791687475 is newer; this is not allowed > at > org.apache.kafka.common.utils.MockTime.setCurrentTimeMs(MockTime.java:81) > at > org.apache.kafka.streams.integration.AbstractResetIntegrationTest.beforePrepareTest(AbstractResetIntegrationTest.java:114) > at > org.apache.kafka.streams.integration.ResetIntegrationTest.before(ResetIntegrationTest.java:55) > {code}
[jira] [Created] (KAFKA-6137) RestoreIntegrationTest sometimes fails with assertion error
Ted Yu created KAFKA-6137: - Summary: RestoreIntegrationTest sometimes fails with assertion error Key: KAFKA-6137 URL: https://issues.apache.org/jira/browse/KAFKA-6137 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/kafka-1.0-jdk7/62 : {code} org.apache.kafka.streams.integration.RestoreIntegrationTest > shouldSuccessfullyStartWhenLoggingDisabled FAILED java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.kafka.streams.integration.RestoreIntegrationTest.shouldSuccessfullyStartWhenLoggingDisabled(RestoreIntegrationTest.java:195) {code}
[jira] [Created] (KAFKA-6135) TransactionsTest#testFencingOnCommit may fail due to unexpected KafkaException
Ted Yu created KAFKA-6135: - Summary: TransactionsTest#testFencingOnCommit may fail due to unexpected KafkaException Key: KAFKA-6135 URL: https://issues.apache.org/jira/browse/KAFKA-6135 Project: Kafka Issue Type: Bug Reporter: Ted Yu From https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2293/testReport/junit/kafka.api/TransactionsTest/testFencingOnCommit/ : {code} org.scalatest.junit.JUnitTestFailedError: Got an unexpected exception from a fenced producer. at org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException(AssertionsForJUnit.scala:100) at org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException$(AssertionsForJUnit.scala:99) at org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71) at org.scalatest.Assertions.fail(Assertions.scala:1105) at org.scalatest.Assertions.fail$(Assertions.scala:1101) at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71) at kafka.api.TransactionsTest.testFencingOnCommit(TransactionsTest.scala:319) ... Caused by: org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:782) at org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:220) at org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:617) at kafka.api.TransactionsTest.testFencingOnCommit(TransactionsTest.scala:313) ... 48 more Caused by: org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker. {code} Confirmed with [~apurva] that the above would not be covered by his fix for KAFKA-6119. Temporarily marking this as a bug.
[jira] [Created] (KAFKA-6109) ResetIntegrationTest may fail due to IllegalArgumentException
Ted Yu created KAFKA-6109: - Summary: ResetIntegrationTest may fail due to IllegalArgumentException Key: KAFKA-6109 URL: https://issues.apache.org/jira/browse/KAFKA-6109 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/kafka-trunk-jdk7/2918 : {code} org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED java.lang.IllegalArgumentException: Setting the time to 1508791687000 while current time 1508791687475 is newer; this is not allowed at org.apache.kafka.common.utils.MockTime.setCurrentTimeMs(MockTime.java:81) at org.apache.kafka.streams.integration.AbstractResetIntegrationTest.beforePrepareTest(AbstractResetIntegrationTest.java:114) at org.apache.kafka.streams.integration.ResetIntegrationTest.before(ResetIntegrationTest.java:55) {code}
[jira] [Resolved] (KAFKA-5911) Avoid creation of extra Map for futures in KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5911. --- Resolution: Later > Avoid creation of extra Map for futures in KafkaAdminClient > --- > > Key: KAFKA-5911 > URL: https://issues.apache.org/jira/browse/KAFKA-5911 > Project: Kafka > Issue Type: Bug >Reporter: Ted Yu > Labels: client > Attachments: 5911.v1.txt > > > In various methods of KafkaAdminClient, an extra Map is created when > constructing the XXResult instance, e.g. > {code} > return new DescribeReplicaLogDirResult(new > HashMap(futures)); > {code} > Prior to returning, the futures Map is already filled; calling get() and > values() on it does not involve HashMap internals in a way that raises > thread-safety concerns. > The extra Map doesn't need to be created.
[jira] [Resolved] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
[ https://issues.apache.org/jira/browse/KAFKA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5988. --- Resolution: Won't Fix > Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE > > > Key: KAFKA-5988 > URL: https://issues.apache.org/jira/browse/KAFKA-5988 > Project: Kafka > Issue Type: Improvement >Reporter: Ted Yu >Assignee: siva santhalingam >Priority: Minor > > StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) > StreamThreads. > It is used in create(), which is called from a loop in the KafkaStreams > constructor. > We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create().
[jira] [Created] (KAFKA-6024) Consider moving validation in KafkaConsumer ahead of call to acquireAndEnsureOpen()
Ted Yu created KAFKA-6024: - Summary: Consider moving validation in KafkaConsumer ahead of call to acquireAndEnsureOpen() Key: KAFKA-6024 URL: https://issues.apache.org/jira/browse/KAFKA-6024 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Priority: Minor In several methods, parameter validation is done after calling acquireAndEnsureOpen() : {code} public void seek(TopicPartition partition, long offset) { acquireAndEnsureOpen(); try { if (offset < 0) throw new IllegalArgumentException("seek offset must not be a negative number"); {code} Since the parameter values do not change during an invocation, performing validation ahead of the acquireAndEnsureOpen() call would be better.
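The proposed reordering can be sketched as follows. Names here are hypothetical simplifications of the KafkaConsumer pattern (the real acquireAndEnsureOpen() also checks that the consumer is open): validate the cheap, invariant argument first, and only then take the single-threaded-access guard.

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: argument validation moved ahead of the acquire call.
public class SeekSketch {
    private final AtomicBoolean acquired = new AtomicBoolean(false);

    public void seek(long offset) {
        if (offset < 0)   // fail fast, before acquiring the guard
            throw new IllegalArgumentException("seek offset must not be a negative number");
        acquireAndEnsureOpen();
        try {
            // ... position update would happen here ...
        } finally {
            release();
        }
    }

    private void acquireAndEnsureOpen() {
        if (!acquired.compareAndSet(false, true))
            throw new ConcurrentModificationException("consumer is not safe for multi-threaded access");
    }

    private void release() {
        acquired.set(false);
    }
}
```

Note that failing before acquire also means an invalid argument can never leave the guard held.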
[jira] [Created] (KAFKA-6023) ThreadCache#sizeBytes() should check overflow
Ted Yu created KAFKA-6023: - Summary: ThreadCache#sizeBytes() should check overflow Key: KAFKA-6023 URL: https://issues.apache.org/jira/browse/KAFKA-6023 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor {code} long sizeBytes() { long sizeInBytes = 0; for (final NamedCache namedCache : caches.values()) { sizeInBytes += namedCache.sizeInBytes(); } return sizeInBytes; } {code} The summation into sizeInBytes may overflow. A check similar to the one in size() should be performed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
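A minimal sketch of an overflow-aware summation, using Math.addExact and saturating at Long.MAX_VALUE (the saturation behavior is an assumption here, not necessarily what the eventual fix chose):

```java
public class CacheSize {
    // Sum cache sizes, saturating at Long.MAX_VALUE instead of wrapping negative.
    public static long sizeBytes(long[] cacheSizes) {
        long total = 0;
        for (long size : cacheSizes) {
            try {
                total = Math.addExact(total, size);
            } catch (ArithmeticException overflow) {
                return Long.MAX_VALUE; // saturate on overflow
            }
        }
        return total;
    }
}
```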
[jira] [Resolved] (KAFKA-5916) Upgrade rocksdb dependency to 5.8
[ https://issues.apache.org/jira/browse/KAFKA-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5916. --- Resolution: Duplicate With KAFKA-5576 > Upgrade rocksdb dependency to 5.8 > - > > Key: KAFKA-5916 > URL: https://issues.apache.org/jira/browse/KAFKA-5916 > Project: Kafka > Issue Type: Improvement > Components: streams >Affects Versions: 0.11.0.0 >Reporter: Ted Yu >Priority: Minor > > Currently we use 5.3.6. > The latest release is 5.8 : > https://github.com/facebook/rocksdb/releases > We should upgrade to latest rocksdb release. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (KAFKA-5842) QueryableStateIntegrationTest may fail with JDK 7
[ https://issues.apache.org/jira/browse/KAFKA-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5842. --- Resolution: Cannot Reproduce > QueryableStateIntegrationTest may fail with JDK 7 > - > > Key: KAFKA-5842 > URL: https://issues.apache.org/jira/browse/KAFKA-5842 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > Found the following when running test suite for 0.11.0.1 RC0 : > {code} > org.apache.kafka.streams.integration.QueryableStateIntegrationTest > > concurrentAccesses FAILED > java.lang.AssertionError: Key not found one > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyGreaterOrEqual(QueryableStateIntegrationTest.java:893) > at > org.apache.kafka.streams.integration.QueryableStateIntegrationTest.concurrentAccesses(QueryableStateIntegrationTest.java:399) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
Ted Yu created KAFKA-5988: - Summary: Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE Key: KAFKA-5988 URL: https://issues.apache.org/jira/browse/KAFKA-5988 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Priority: Minor StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) StreamThreads. It is used in create(), which is called from a loop in the KafkaStreams ctor. We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create(). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
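The proposal in miniature; the naming scheme and the helper below are illustrative stand-ins for what KafkaStreams and StreamThread.create() actually do:

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadNaming {
    // The loop index replaces the static STREAM_THREAD_ID_SEQUENCE counter:
    // the KafkaStreams ctor already iterates over the thread slots, so it can
    // hand each create() call its number directly.
    public static List<String> createThreadNames(String clientId, int numThreads) {
        List<String> names = new ArrayList<>();
        for (int i = 1; i <= numThreads; i++)
            names.add(clientId + "-StreamThread-" + i);
        return names;
    }
}
```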
[jira] [Created] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()
Ted Yu created KAFKA-5967: - Summary: Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries() Key: KAFKA-5967 URL: https://issues.apache.org/jira/browse/KAFKA-5967 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor {code} long total = 0; for (ReadOnlyKeyValueStore<K, V> store : stores) { total += store.approximateNumEntries(); } return total < 0 ? Long.MAX_VALUE : total; {code} The check for a negative value is meant to account for wrapping. However, wrapping can happen within the for loop, so the check should be performed inside the loop. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
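To see why the end-only check is ineffective: with stores reporting {Long.MAX_VALUE, Long.MAX_VALUE, 5}, the total wraps to -2 and then comes back up to a positive 3, so the final `total < 0` test passes a bogus value through. A sketch of the per-iteration check (the static method is a simplified stand-in for the instance method):

```java
public class StoreCount {
    // Check for wrap after every addition; once the total has gone negative it
    // can climb back positive, so a single check at the end is not enough.
    public static long approximateNumEntries(long[] perStoreCounts) {
        long total = 0;
        for (long count : perStoreCounts) {
            total += count;
            if (total < 0)
                return Long.MAX_VALUE;
        }
        return total;
    }
}
```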
[jira] [Resolved] (KAFKA-5840) TransactionsTest#testBasicTransactions sometimes hangs
[ https://issues.apache.org/jira/browse/KAFKA-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5840. --- Resolution: Cannot Reproduce > TransactionsTest#testBasicTransactions sometimes hangs > -- > > Key: KAFKA-5840 > URL: https://issues.apache.org/jira/browse/KAFKA-5840 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu > Attachments: 5840.stack > > > While testing 0.11.0.1 RC0 , I found TransactionsTest hanging. > Here is part of the stack trace: > {code} > "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting > on condition [0x7feb05f8c000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x81272ec0> (a > java.util.concurrent.CountDownLatch$Sync) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) > at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) > at > org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76) > at > org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573) > at > org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948) > at > kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93) > {code} > {code} > Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; > 2017-04-03T19:39:06Z) > Maven home: /apache-maven-3.5.0 > Java version: 1.8.0_131, vendor: Oracle Corporation > Java home: /jdk1.8.0_131/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "linux", version: 
"3.10.0-327.28.3.el7.x86_64", arch: "amd64", > family: "unix" > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (KAFKA-5821) Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls
[ https://issues.apache.org/jira/browse/KAFKA-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved KAFKA-5821. --- Resolution: Cannot Reproduce > Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls > --- > > Key: KAFKA-5821 > URL: https://issues.apache.org/jira/browse/KAFKA-5821 > Project: Kafka > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From > https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/7245/testReport/junit/kafka.api/SaslPlainSslEndToEndAuthorizationTest/testAcls/ > : > {code} > java.lang.SecurityException: zookeeper.set.acl is true, but the verification > of the JAAS login file failed. > at kafka.server.KafkaServer.initZk(KafkaServer.scala:329) > at kafka.server.KafkaServer.startup(KafkaServer.scala:192) > at kafka.utils.TestUtils$.createServer(TestUtils.scala:134) > at > kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:94) > at > kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:93) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at > kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:93) > at > kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:66) > at > kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158) > at > kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:48) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) > at > org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) > at com.sun.proxy.$Proxy1.processTestClass(Unknown Source) > at >
[jira] [Created] (KAFKA-5946) Give connector method parameter better name
Ted Yu created KAFKA-5946: - Summary: Give connector method parameter better name Key: KAFKA-5946 URL: https://issues.apache.org/jira/browse/KAFKA-5946 Project: Kafka Issue Type: Improvement Reporter: Ted Yu During the development of KAFKA-5657, there were several iterations where the method call didn't match what the connector parameter actually represented. [~ewencp] had used connType as equivalent to connClass because Type wasn't used to differentiate source vs sink. [~ewencp] proposed the following: It would help to convert all the uses of connType to connClass first, then standardize on class == Java class, type == source/sink, name == user-specified name. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5943) Reduce dependency on mock in connector tests
Ted Yu created KAFKA-5943: - Summary: Reduce dependency on mock in connector tests Key: KAFKA-5943 URL: https://issues.apache.org/jira/browse/KAFKA-5943 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Currently connector tests make heavy use of mocks (EasyMock, PowerMock). This often hides the real logic behind operations and makes finding bugs difficult. We should reduce the use of mocks so that developers can debug connector code using unit tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5911) Avoid creation of extra Map for futures in KafkaAdminClient
Ted Yu created KAFKA-5911: - Summary: Avoid creation of extra Map for futures in KafkaAdminClient Key: KAFKA-5911 URL: https://issues.apache.org/jira/browse/KAFKA-5911 Project: Kafka Issue Type: Bug Reporter: Ted Yu In various methods from KafkaAdminClient, an extra Map is created when constructing the XXResult instance, e.g. {code} return new DescribeReplicaLogDirResult(new HashMap<>(futures)); {code} Prior to returning, the futures Map is already filled. Calling get() and values() does not involve the internals of HashMap when we consider thread-safety. The extra Map doesn't need to be created. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
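One way the copy could be avoided, sketched with a stand-in result class (DescribeResult and the String values are illustrative; the real XXResult classes hold KafkaFuture values): hand the fully-populated map to the result, wrapped unmodifiably if defensive behavior is still wanted.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ResultExample {
    static class DescribeResult {
        private final Map<String, String> futures;
        DescribeResult(Map<String, String> futures) { this.futures = futures; }
        Map<String, String> values() { return futures; }
    }

    public static DescribeResult build() {
        Map<String, String> futures = new HashMap<>();
        futures.put("topic-a", "future-a");
        // No copying HashMap: the map is fully filled and never mutated after
        // this point, so an unmodifiable view (no copy) is enough.
        return new DescribeResult(Collections.unmodifiableMap(futures));
    }
}
```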
[jira] [Created] (KAFKA-5889) MetricsTest is flaky
Ted Yu created KAFKA-5889: - Summary: MetricsTest is flaky Key: KAFKA-5889 URL: https://issues.apache.org/jira/browse/KAFKA-5889 Project: Kafka Issue Type: Test Reporter: Ted Yu The following appeared in several recent builds (e.g. https://builds.apache.org/job/kafka-trunk-jdk7/2758) : {code} kafka.metrics.MetricsTest > testMetricsLeak FAILED java.lang.AssertionError: expected:<1216> but was:<1219> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at kafka.metrics.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:68) at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at kafka.metrics.MetricsTest.testMetricsLeak(MetricsTest.scala:66) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5863) Potential null dereference in DistributedHerder#reconfigureConnector()
Ted Yu created KAFKA-5863: - Summary: Potential null dereference in DistributedHerder#reconfigureConnector() Key: KAFKA-5863 URL: https://issues.apache.org/jira/browse/KAFKA-5863 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor Here is the call chain: {code} RestServer.httpRequest(reconfigUrl, "POST", taskProps, null); {code} In httpRequest(): {code} } else if (responseCode >= 200 && responseCode < 300) { InputStream is = connection.getInputStream(); T result = JSON_SERDE.readValue(is, responseFormat); {code} For readValue(): {code} public <T> T readValue(InputStream src, TypeReference<T> valueTypeRef) throws IOException, JsonParseException, JsonMappingException { return (T) _readMapAndClose(_jsonFactory.createParser(src), _typeFactory.constructType(valueTypeRef)); {code} Then, since the responseFormat passed in is null, there would be an NPE in constructType(): {code} public JavaType constructType(TypeReference<?> typeRef) { // 19-Oct-2015, tatu: Simpler variant like so should work return _fromAny(null, typeRef.getType(), EMPTY_BINDINGS); {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
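A minimal sketch of a guard for this chain (the method and names below are simplified stand-ins for RestServer.httpRequest and JSON_SERDE.readValue, not the real code): when the caller passes a null response format, skip deserialization instead of forwarding null into the parser.

```java
public class HttpResponseParse {
    // Stand-in for the response-handling branch of httpRequest(): a null
    // responseFormat means the caller wants no parsed body, so return early
    // rather than reach constructType() with a null TypeReference.
    public static String readBody(String rawBody, String responseFormat) {
        if (responseFormat == null)
            return null;
        return responseFormat + ":" + rawBody; // stand-in for readValue()
    }
}
```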
[jira] [Created] (KAFKA-5846) Use singleton NoOpConsumerRebalanceListener in subscribe() call where listener is not specified
Ted Yu created KAFKA-5846: - Summary: Use singleton NoOpConsumerRebalanceListener in subscribe() call where listener is not specified Key: KAFKA-5846 URL: https://issues.apache.org/jira/browse/KAFKA-5846 Project: Kafka Issue Type: Task Reporter: Ted Yu Priority: Minor Currently KafkaConsumer creates an instance of NoOpConsumerRebalanceListener for each subscribe() call in which a ConsumerRebalanceListener is not specified: {code} public void subscribe(Pattern pattern) { subscribe(pattern, new NoOpConsumerRebalanceListener()); {code} We can create a singleton NoOpConsumerRebalanceListener to be used in such scenarios. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
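Since the listener does nothing and holds no state, one shared instance suffices. A sketch with a simplified listener interface (the interface and field name here are illustrative, not Kafka's own types):

```java
public class SingletonListener {
    public interface RebalanceListener {
        void onPartitionsRevoked();
    }

    // One stateless instance shared by every subscribe() call that omits a listener.
    public static final RebalanceListener NO_OP = new RebalanceListener() {
        @Override public void onPartitionsRevoked() { /* intentionally empty */ }
    };

    public static RebalanceListener listenerOrNoOp(RebalanceListener listener) {
        return listener != null ? listener : NO_OP;
    }
}
```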
[jira] [Created] (KAFKA-5842) QueryableStateIntegrationTest may fail with JDK 7
Ted Yu created KAFKA-5842: - Summary: QueryableStateIntegrationTest may fail with JDK 7 Key: KAFKA-5842 URL: https://issues.apache.org/jira/browse/KAFKA-5842 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor Found the following when running test suite for 0.11.0.1 RC0 : {code} org.apache.kafka.streams.integration.QueryableStateIntegrationTest > concurrentAccesses FAILED java.lang.AssertionError: Key not found one at org.junit.Assert.fail(Assert.java:88) at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyGreaterOrEqual(QueryableStateIntegrationTest.java:893) at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.concurrentAccesses(QueryableStateIntegrationTest.java:399) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5840) TransactionsTest#testBasicTransactions hangs
Ted Yu created KAFKA-5840: - Summary: TransactionsTest#testBasicTransactions hangs Key: KAFKA-5840 URL: https://issues.apache.org/jira/browse/KAFKA-5840 Project: Kafka Issue Type: Test Reporter: Ted Yu Here is part of the stack trace: {code} "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on condition [0x7feb05f8c000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x81272ec0> (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76) at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573) at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948) at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93) {code} {code} Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T19:39:06Z) Maven home: /apache-maven-3.5.0 Java version: 1.8.0_131, vendor: Oracle Corporation Java home: /jdk1.8.0_131/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: "unix" {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-5833) Reset thread interrupt state in case of InterruptedException
Ted Yu created KAFKA-5833: - Summary: Reset thread interrupt state in case of InterruptedException Key: KAFKA-5833 URL: https://issues.apache.org/jira/browse/KAFKA-5833 Project: Kafka Issue Type: Bug Reporter: Ted Yu There are some places where InterruptedException is caught but the thread's interrupt state is not restored. e.g. from WorkerSourceTask#execute() : {code} } catch (InterruptedException e) { // Ignore and allow to exit. {code} The proper way of handling InterruptedException is to restore the thread's interrupt state. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
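The idiom looks like this (a self-contained sketch; WorkerSourceTask itself would do the same inside its catch block):

```java
public class InterruptExample {
    // Returns true if the wait was interrupted; either way, the interrupt
    // flag is left in a state that callers can observe.
    public static boolean waitBriefly() {
        try {
            Thread.sleep(10);
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt state
            return true;
        }
    }
}
```

Catching InterruptedException clears the flag; re-asserting it lets code further up the stack (e.g. a shutdown loop) still see the interruption.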
[jira] [Created] (KAFKA-5821) Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls
Ted Yu created KAFKA-5821: - Summary: Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls Key: KAFKA-5821 URL: https://issues.apache.org/jira/browse/KAFKA-5821 Project: Kafka Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/7245/testReport/junit/kafka.api/SaslPlainSslEndToEndAuthorizationTest/testAcls/ : {code} java.lang.SecurityException: zookeeper.set.acl is true, but the verification of the JAAS login file failed. at kafka.server.KafkaServer.initZk(KafkaServer.scala:329) at kafka.server.KafkaServer.startup(KafkaServer.scala:192) at kafka.utils.TestUtils$.createServer(TestUtils.scala:134) at kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:94) at kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:93) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:93) at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:66) at kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158) at kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:48) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy1.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109) at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at
[jira] [Created] (KAFKA-5820) Remove unneeded synchronized keyword in StreamThread
Ted Yu created KAFKA-5820: - Summary: Remove unneeded synchronized keyword in StreamThread Key: KAFKA-5820 URL: https://issues.apache.org/jira/browse/KAFKA-5820 Project: Kafka Issue Type: Improvement Reporter: Ted Yu Priority: Minor There are three methods in StreamThread with an unnecessary synchronized keyword, since the variable they access, state, is volatile: isRunningAndNotRebalancing, isRunning, and shutdown. The synchronized keyword can be dropped for these methods. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
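The reasoning in miniature (a stand-in, not the real StreamThread): a single read of a volatile field is already atomic and sees the latest write, so wrapping the read in synchronized buys nothing.

```java
public class ThreadState {
    public enum State { CREATED, RUNNING, DEAD }

    private volatile State state = State.CREATED;

    public void setState(State newState) { state = newState; }

    // No synchronized: one volatile read needs no lock for visibility or atomicity.
    public boolean isRunning() { return state == State.RUNNING; }
}
```

synchronized would still be needed if the method performed a compound check-then-act on state; the methods named above only read once.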
[jira] [Created] (KAFKA-5802) ScramServerCallbackHandler#handle should check username not being null before calling credentialCache.get()
Ted Yu created KAFKA-5802: - Summary: ScramServerCallbackHandler#handle should check username not being null before calling credentialCache.get() Key: KAFKA-5802 URL: https://issues.apache.org/jira/browse/KAFKA-5802 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor {code} String username = null; for (Callback callback : callbacks) { if (callback instanceof NameCallback) username = ((NameCallback) callback).getDefaultName(); else if (callback instanceof ScramCredentialCallback) ((ScramCredentialCallback) callback).scramCredential(credentialCache.get(username)); {code} Since ConcurrentHashMap, used by CredentialCache, doesn't allow null keys, we should check that username is not null before calling credentialCache.get() -- This message was sent by Atlassian JIRA (v6.4.14#64029)
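The guard in miniature (CredentialLookup is a simplified stand-in for the credential cache): ConcurrentHashMap.get(null) throws NullPointerException, so the lookup must be skipped when no NameCallback supplied a username.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CredentialLookup {
    static final ConcurrentHashMap<String, String> CACHE = new ConcurrentHashMap<>();

    // Null-safe lookup: ConcurrentHashMap rejects null keys with an NPE.
    public static String credentialFor(String username) {
        return username == null ? null : CACHE.get(username);
    }
}
```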
[jira] [Commented] (KAFKA-3235) Unclosed stream in AppInfoParser static block
[ https://issues.apache.org/jira/browse/KAFKA-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155887#comment-15155887 ] Ted Yu commented on KAFKA-3235: --- clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java appears multiple times in the patch. Not sure if the patch was generated correctly. > Unclosed stream in AppInfoParser static block > - > > Key: KAFKA-3235 > URL: https://issues.apache.org/jira/browse/KAFKA-3235 > Project: Kafka > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Fix For: 0.9.1.0 > > > {code} > static { > try { > Properties props = new Properties(); > > props.load(AppInfoParser.class.getResourceAsStream("/kafka/kafka-version.properties")); > version = props.getProperty("version", version).trim(); > commitId = props.getProperty("commitId", commitId).trim(); > {code} > The stream returned by getResourceAsStream() should be closed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-3235) Unclosed stream in AppInfoParser static block
Ted Yu created KAFKA-3235: - Summary: Unclosed stream in AppInfoParser static block Key: KAFKA-3235 URL: https://issues.apache.org/jira/browse/KAFKA-3235 Project: Kafka Issue Type: Bug Reporter: Ted Yu Priority: Minor {code} static { try { Properties props = new Properties(); props.load(AppInfoParser.class.getResourceAsStream("/kafka/kafka-version.properties")); version = props.getProperty("version", version).trim(); commitId = props.getProperty("commitId", commitId).trim(); {code} The stream returned by getResourceAsStream() should be closed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
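A sketch of the fix using try-with-resources, which closes the stream even when props.load() throws (the null check and the swallow-and-fall-back error handling are illustrative choices, not necessarily what the static block should do):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class VersionLoader {
    public static Properties load() {
        Properties props = new Properties();
        try (InputStream in =
                 VersionLoader.class.getResourceAsStream("/kafka/kafka-version.properties")) {
            if (in != null)           // getResourceAsStream returns null if the resource is absent
                props.load(in);
        } catch (IOException e) {
            // illustrative choice: fall back to empty properties
        }
        return props;
    }
}
```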