Re: [REVIEW REQUEST] Move ReassignPartitionsIntegrationTest to java

2023-09-29 Thread Николай Ижиков
Hello.

CI seems to be OK.
Please join the review.

> On 27 Sept 2023, at 00:22, Николай Ижиков wrote:
> 
> Hello.
> 
> Long story short: one Scala test of ReassignPartitionsCommand remains, and I 
> rewrote it in Java - https://github.com/apache/kafka/pull/14456
> 
> Please, review.
> 
> I’m working on [1].
> The PR that moves the whole command is pretty big, so it makes sense to split it.
> I prepared the PR [2] that moves a single test 
> (ReassignPartitionsIntegrationTest) to Java.
> 
> It is smaller and simpler (it touches only 5 files):
> To review - https://github.com/apache/kafka/pull/14456
> Big PR  - https://github.com/apache/kafka/pull/13247
> 
> [1] https://issues.apache.org/jira/browse/KAFKA-14595
> [2] https://github.com/apache/kafka/pull/14456



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2240

2023-09-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 316143 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToSuspend() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToSuspend() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToInitExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToInitExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldDrainPendingTasksToCreate() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldDrainPendingTasksToCreate() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldKeepAddedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldKeepAddedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Zero query results shouldn't error STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Zero query results shouldn't error PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Valid query results still works STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Valid query results still works PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@32ae3299, 
org.apache.kafka.test.MockInternalProcessorContext@482804b6 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@32ae3299, 
org.apache.kafka.test.MockInternalProcessorContext@482804b6 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@3f8e280e, 
org.apache.kafka.test.MockInternalProcessorContext@1ebb2e3b STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@3f8e280e, 
org.apache.kafka.test.MockInternalProcessorContext@1ebb2e3b PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@6702370, 
org.apache.kafka.test.MockInternalProcessorContext@75931f7a STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1

[jira] [Reopened] (KAFKA-13966) Flaky test `QuorumControllerTest.testUnregisterBroker`

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-13966:


> Flaky test `QuorumControllerTest.testUnregisterBroker`
> --
>
> Key: KAFKA-13966
> URL: https://issues.apache.org/jira/browse/KAFKA-13966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Arthur
>Priority: Major
>
> We have seen the following assertion failure in 
> `QuorumControllerTest.testUnregisterBroker`:
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <2> but was: <0>
>   at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
>   at 
> org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:628)
>   at 
> org.apache.kafka.controller.QuorumControllerTest.testUnregisterBroker(QuorumControllerTest.java:494)
>  {code}
> I reproduced it by running the test in a loop. It looks like what happens is 
> that the BrokerRegistration request is able to get interleaved between the 
> leader change event and the write of the bootstrap metadata. Something like 
> this:
>  # handleLeaderChange() start
>  # appendWriteEvent(registerBroker)
>  # appendWriteEvent(bootstrapMetadata)
>  # handleLeaderChange() finish
>  # registerBroker() -> writes broker registration to log
>  # bootstrapMetadata() -> writes bootstrap metadata to log
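
To make the interleaving above concrete, a toy sketch follows (illustrative only, not
the controller's actual event-queue code): a write enqueued while handleLeaderChange()
is still running reaches the log ahead of the bootstrap-metadata write that
handleLeaderChange() enqueues last.
{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class InterleavingSketch {
    public static void main(String[] args) {
        Deque<String> eventQueue = new ArrayDeque<>();
        List<String> log = new ArrayList<>();

        // handleLeaderChange() starts ...
        // ... a BrokerRegistration request arrives and is appended while it is still running
        eventQueue.add("registerBroker");
        // ... handleLeaderChange() finishes and appends the bootstrap-metadata write
        eventQueue.add("bootstrapMetadata");

        // The queue drains in FIFO order, so the broker registration is written to the
        // log before the bootstrap metadata -- the ordering the test did not expect.
        while (!eventQueue.isEmpty()) {
            log.add(eventQueue.poll());
        }
        System.out.println(log); // prints [registerBroker, bootstrapMetadata]
    }
}
{code}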



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-13531) Flaky test NamedTopologyIntegrationTest

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-13531:


> Flaky test NamedTopologyIntegrationTest
> ---
>
> Key: KAFKA-13531
> URL: https://issues.apache.org/jira/browse/KAFKA-13531
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Matthew de Detrich
>Priority: Critical
>  Labels: flaky-test
> Attachments: 
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveOneNamedTopologyWhileAnotherContinuesProcessing().test.stdout
>
>
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets
> {quote}java.lang.AssertionError: Did not receive all 3 records from topic 
> output-stream-2 within 6 ms, currently accumulated data is [] Expected: 
> is a value equal to or greater than <3> but: <0> was less than <3> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:648)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:368)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:336)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:644)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:617)
>  at 
> org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets(NamedTopologyIntegrationTest.java:439){quote}
> STDERR
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting 
> offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>  at 
> org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) 
> at 
> org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122)
>  at 
> org.apache.kafka.streams.processor.internals.TopologyMetadata.maybeNotifyTopologyVersionWaiters(TopologyMetadata.java:154)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkForTopologyUpdates(StreamThread.java:916)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:598)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:575)
>  Caused by: org.apache.kafka.common.errors.GroupSubscribedToTopicException: 
> Deleting offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting 
> offsets of a topic is forbidden while the consumer group is actively 
> subscribed to it. at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>  at 
> org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableF

[jira] [Created] (KAFKA-15521) Refactor build.gradle to align gradle swagger plugin with swagger dependencies

2023-09-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-15521:
--

 Summary: Refactor build.gradle to align gradle swagger plugin with 
swagger dependencies
 Key: KAFKA-15521
 URL: https://issues.apache.org/jira/browse/KAFKA-15521
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Mickael Maison


We use both the Swagger Gradle plugin 
"io.swagger.core.v3.swagger-gradle-plugin" and 2 Swagger dependencies 
swaggerAnnotations and swaggerJaxrs2. The version for the Gradle plugin is in 
build.gradle while the version for the dependency is in 
gradle/dependencies.gradle.

When we upgrade the version of one or the other it sometimes cause build 
breakages, for example https://github.com/apache/kafka/pull/13387 and 
https://github.com/apache/kafka/pull/14464

We should try to have the version defined in a single place to avoid breaking 
the build again.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15500) Code bug in SslPrincipalMapper.java

2023-09-29 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15500.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Code bug in SslPrincipalMapper.java
> ---
>
> Key: KAFKA-15500
> URL: https://issues.apache.org/jira/browse/KAFKA-15500
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 3.5.1
>Reporter: Svyatoslav
>Assignee: Svyatoslav
>Priority: Major
> Fix For: 3.7.0
>
>
> Code bug in:
> {code:java}
> if (toLowerCase && result != null) {
>     result = result.toLowerCase(Locale.ENGLISH);
> } else if (toUpperCase & result != null) {
>     result = result.toUpperCase(Locale.ENGLISH);
> }
> {code}
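
The second branch uses a non-short-circuit '&' where the short-circuit '&&' used in the
first branch was intended. A minimal sketch of the presumed fix, wrapped in a hypothetical
helper (SslPrincipalMapperFixSketch and applyCase are illustrative names, not the exact
patch that resolved this ticket):
{code:java}
import java.util.Locale;

final class SslPrincipalMapperFixSketch {
    // Standalone sketch of the corrected branch logic.
    static String applyCase(String result, boolean toLowerCase, boolean toUpperCase) {
        if (toLowerCase && result != null) {
            result = result.toLowerCase(Locale.ENGLISH);
        } else if (toUpperCase && result != null) {   // '&&' instead of the reported '&'
            result = result.toUpperCase(Locale.ENGLISH);
        }
        return result;
    }
}
{code}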



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15523) Flaky test org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testSyncTopicConfigs

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15523:
--

 Summary: Flaky test  
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testSyncTopicConfigs
 Key: KAFKA-15523
 URL: https://issues.apache.org/jira/browse/KAFKA-15523
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


Last seen: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationSSLTest/Build___JDK_17_and_Scala_2_13___testSyncTopicConfigs__/]

 
h3. Error Message
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 3. 
Topic: mm2-status.backup.internal didn't get created in the cluster ==> 
expected:  but was: {code}
h3. Stacktrace
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 3. 
Topic: mm2-status.backup.internal didn't get created in the cluster ==> 
expected:  but was:  at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
 at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63) at 
app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) at 
app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210) at 
app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:331)
 at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:328) 
at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:312) at 
app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:302) at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.waitForTopicCreated(MirrorConnectorsIntegrationBaseTest.java:1041)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:224)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:149)
 at 
app//org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.startClusters(MirrorConnectorsIntegrationSSLTest.java:63)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568) at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
 at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
 at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
 at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMe

[jira] [Created] (KAFKA-15522) Flaky test org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15522:
--

 Summary: Flaky test 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs
 Key: KAFKA-15522
 URL: https://issues.apache.org/jira/browse/KAFKA-15522
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


h3. Last seen: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationExactlyOnceTest/Build___JDK_17_and_Scala_2_13___testOneWayReplicationWithFrequentOffsetSyncs__/
h3. Error Message
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.{code}
h3. Stacktrace
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:427)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.createTopics(MirrorConnectorsIntegrationBaseTest.java:1276)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:235)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:149)
 at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.startClusters(MirrorConnectorsIntegrationExactlyOnceTest.java:51)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
 at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachMethods(TestMethodTestDescriptor.java:172)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$exe

[jira] [Created] (KAFKA-15524) Flaky test org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks

2023-09-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-15524:
--

 Summary: Flaky test 
org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks
 Key: KAFKA-15524
 URL: https://issues.apache.org/jira/browse/KAFKA-15524
 Project: Kafka
  Issue Type: Bug
  Components: connect
Affects Versions: 3.5.1, 3.6.0
Reporter: Josep Prat


Last seen: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.integration/OffsetsApiIntegrationTest/Build___JDK_17_and_Scala_2_13___testResetSinkConnectorOffsetsZombieSinkTasks/]

 
h3. Error Message
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.{code}
h3. Stacktrace
{code:java}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:427)
 at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:401)
 at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:392)
 at 
org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testResetSinkConnectorOffsetsZombieSinkTasks(OffsetsApiIntegrationTest.java:763)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:112)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:60)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:52)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
 at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
 at jdk.proxy1/jdk.proxy1.$Proxy2.processTestClass(Unknown Source) at 
org.gradle.api.internal.tasks.testing.worker.TestWorker$2.run(TestWorker.java:176)
 at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.executeAndMaintainThreadName(TestWorker.java:129)
 at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:100)
 at 
org.g

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2241

2023-09-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 315310 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToSuspendShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldOnlyKeepLastUpdateAction() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldOnlyKeepLastUpdateAction() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToRecycle() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToRecycle() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions()
 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions()
 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToRecycleExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToRecycleExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToUpdateInputPartitions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToUpdateInputPartitions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToSuspend() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToSuspend() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToInitExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldVerifyIfPendingTaskToInitExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldDrainPendingTasksToCreate() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldDrainPendingTasksToCreate() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldKeepAddedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > TasksTest > 
shouldKeepAddedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> More than one query result throws IllegalArgumentException PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Zero query results shouldn't error STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Zero query results shouldn't error PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Valid query results still works STARTED

Gradle Test Run :streams:test > Gradle Test Executor 87 > StateQueryResultTest 
> Valid query results still works PASSED

Gradle Test Run :streams:test > Gradle Test Executor 87 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@2da381b8,

[jira] [Reopened] (KAFKA-14956) Flaky test org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted

2023-09-29 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reopened KAFKA-14956:


It happened again here: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.integration/OffsetsApiIntegrationTest/Build___JDK_11_and_Scala_2_13___testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted/

> Flaky test 
> org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted
> --
>
> Key: KAFKA-14956
> URL: https://issues.apache.org/jira/browse/KAFKA-14956
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Sagar Rao
>Assignee: Yash Mayya
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.5.0
>
>
> ```
> h4. Error
> org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
> Sink connector consumer group offsets should catch up to the topic end 
> offsets ==> expected:  but was: 
> h4. Stacktrace
> org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
> Sink connector consumer group offsets should catch up to the topic end 
> offsets ==> expected:  but was: 
>  at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>  at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>  at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>  at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
>  at 
> app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
>  at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
>  at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:291)
>  at 
> app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.getAndVerifySinkConnectorOffsets(OffsetsApiIntegrationTest.java:150)
>  at 
> app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted(OffsetsApiIntegrationTest.java:131)
>  at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>  at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>  at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
>  at 
> app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> app//org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at app//org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at app//org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>  at app//org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>  at app//org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>  at app//org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>  at app//org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>  at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at app//org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:108)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40)
>  at 
> org.gradle.api.i

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #74

2023-09-29 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2242

2023-09-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-967: Support custom SSL configuration for Kafka Connect RestServer

2023-09-29 Thread Taras Ledkov
Hi Ashwin,

Thanks a lot for your review. 

> Thanks for the KIP and the PR (which helped me understand the change).
Do you think something needs to be corrected in the KIP to make it more 
understandable without the PR? Do you have any advice?

> I could not understand one thing though - In 
> https://github.com/apache/kafka/pull/14203/, 
> why did you have to remove the code which sets sslEngineFactoryConfig in 
> instantiateSslEngineFactory?
If I understood the question correctly:
I've refactored this method a bit.
SslFactory#instantiateSslEngineFactory was a private, non-static method. I separated
the code that actually creates a new instance of the SslEngineFactory and placed it
into a public static method. There might be a better place for this public static
method that creates the SslEngineFactory; I think we can discuss that in the final PR.
The current PR is just a demo/prototype to play with.
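
For illustration only, a minimal sketch of what such a public static factory method
could look like (the helper class name SslEngineFactoryInstantiator and the exact
signature are hypothetical, not taken from the PR):

import java.util.Map;
import org.apache.kafka.common.security.auth.SslEngineFactory;
import org.apache.kafka.common.utils.Utils;

// Hypothetical helper: the instantiation logic extracted from SslFactory so that
// other components (e.g. Connect's RestServer) could reuse it.
public final class SslEngineFactoryInstantiator {
    private SslEngineFactoryInstantiator() { }

    public static SslEngineFactory instantiate(Class<? extends SslEngineFactory> factoryClass,
                                               Map<String, Object> sslConfigs) {
        SslEngineFactory factory = Utils.newInstance(factoryClass);  // create via no-arg constructor
        factory.configure(sslConfigs);                               // apply the SSL configs
        return factory;
    }
}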


Re: [REVIEW REQUEST] Move ReassignPartitionsIntegrationTest to java

2023-09-29 Thread Taras Ledkov
Hi Nikolay,

The patch looks OK to me.
I left minor comments on the PR.


Re: [ANNOUNCE] New committer: Lucas Brutschy

2023-09-29 Thread Matthias J. Sax

Congrats!

On 9/26/23 1:29 AM, Mayank Shekhar Narula wrote:

Congratulations Lucas!

On Fri, Sep 22, 2023 at 5:24 PM Mickael Maison 
wrote:


Congratulations Lucas!

On Fri, Sep 22, 2023 at 7:13 AM Luke Chen  wrote:


Congratulations, Lukas!

Luke

On Fri, Sep 22, 2023 at 6:53 AM Tom Bentley  wrote:


Congratulations!

On Fri, 22 Sept 2023 at 09:11, Sophie Blee-Goldman <ableegold...@gmail.com> wrote:


Congrats Lucas!










Re: [ANNOUNCE] New committer: Yash Mayya

2023-09-29 Thread Matthias J. Sax

Congrats!

On 9/26/23 1:30 AM, Mayank Shekhar Narula wrote:

Congrats Yash!

On Fri, Sep 22, 2023 at 5:24 PM Mickael Maison 
wrote:


Congratulations Yash!

On Fri, Sep 22, 2023 at 9:25 AM Chaitanya Mukka
 wrote:


Congrats, Yash!! Well deserved.

Chaitanya Mukka
On 21 Sep 2023 at 8:58 PM +0530, Bruno Cadonna wrote:

Hi all,

The PMC of Apache Kafka is pleased to announce a new Kafka committer
Yash Mayya.

Yash's major contributions are around Connect.

Yash authored the following KIPs:

KIP-793: Allow sink connectors to be used with topic-mutating SMTs
KIP-882: Kafka Connect REST API configuration validation timeout
improvements
KIP-970: Deprecate and remove Connect's redundant task configurations
endpoint
KIP-980: Allow creating connectors in a stopped state

Overall, Yash is known for insightful and friendly input to discussions
and his high quality contributions.

Congratulations, Yash!

Thanks,

Bruno (on behalf of the Apache Kafka PMC)







Re: [ANNOUNCE] New Kafka PMC Member: Justine Olshan

2023-09-29 Thread Matthias J. Sax

Congrats!

On 9/25/23 7:29 AM, Rajini Sivaram wrote:

Congratulations, Justine!

Regards,

Rajini

On Mon, Sep 25, 2023 at 9:40 AM Lucas Brutschy
 wrote:


Congrats, Justine!

On Mon, Sep 25, 2023 at 9:20 AM Bruno Cadonna  wrote:


Congrats, Justine! Well deserved!

Best,
Bruno

On 9/25/23 5:28 AM, ziming deng wrote:

Congratulations Justine!



On Sep 25, 2023, at 00:01, Viktor Somogyi-Vass <viktor.somo...@cloudera.com.INVALID> wrote:


Congrats Justine!

On Sun, Sep 24, 2023, 17:45 Kirk True  wrote:


Congratulations Justine! Thanks for all your great work!


On Sep 24, 2023, at 8:37 AM, John Roesler 

wrote:


Congratulations, Justine!
-John

On Sun, Sep 24, 2023, at 05:05, Mickael Maison wrote:

Congratulations Justine!

On Sun, Sep 24, 2023 at 5:04 AM Sophie Blee-Goldman wrote:

Congrats Justine!

On Sat, Sep 23, 2023, 4:36 PM Tom Bentley wrote:

Congratulations!

On Sun, 24 Sept 2023 at 12:32, Satish Duggana <satish.dugg...@gmail.com> wrote:

Congratulations Justine!!

On Sat, 23 Sept 2023 at 15:46, Bill Bejeck wrote:

Congrats Justine!

-Bill

On Sat, Sep 23, 2023 at 6:23 PM Greg Harris wrote:

Congratulations Justine!

On Sat, Sep 23, 2023 at 5:49 AM Boudjelda Mohamed Said wrote:

Congrats Justin !

On Sat 23 Sep 2023 at 14:44, Randall Hauch wrote:

Congratulations, Justine!

On Sat, Sep 23, 2023 at 4:25 AM Kamal Chandraprakash <kamal.chandraprak...@gmail.com> wrote:

Congrats Justine!

On Sat, Sep 23, 2023, 13:28 Divij Vaidya <divijvaidy...@gmail.com> wrote:



Congratulations Justine!

On Sat 23. Sep 2023 at 07:06, Chris Egerton <fearthecel...@gmail.com> wrote:

Congrats Justine!

On Fri, Sep 22, 2023, 20:47 Guozhang Wang <guozhang.wang...@gmail.com> wrote:

Congratulations!

On Fri, Sep 22, 2023 at 8:44 PM Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:

Congratulations Justine!

On Fri, Sep 22, 2023, 19:25 Philip Nee <philip...@gmail.com> wrote:

Congrats Justine!

On Fri, Sep 22, 2023 at 7:07 PM Luke Chen <show...@gmail.com> wrote:

Hi, Everyone,

Justine Olshan has been a Kafka committer since Dec. 2022. She has been
very active and instrumental to the community since becoming a committer.
It's my pleasure to announce that Justine is now a member of Kafka PMC.

Congratulations Justine!

Luke
on behalf of Apache Kafka PMC































Re: [ANNOUNCE] New committer: Yash Mayya

2023-09-29 Thread Boudjelda Mohamed Said
Congratulations !

On Thu 21 Sep 2023 at 17:28, Bruno Cadonna  wrote:

> Hi all,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Yash Mayya.
>
> Yash's major contributions are around Connect.
>
> Yash authored the following KIPs:
>
> KIP-793: Allow sink connectors to be used with topic-mutating SMTs
> KIP-882: Kafka Connect REST API configuration validation timeout
> improvements
> KIP-970: Deprecate and remove Connect's redundant task configurations
> endpoint
> KIP-980: Allow creating connectors in a stopped state
>
> Overall, Yash is known for insightful and friendly input to discussions
> and his high quality contributions.
>
> Congratulations, Yash!
>
> Thanks,
>
> Bruno (on behalf of the Apache Kafka PMC)
>


[jira] [Resolved] (KAFKA-15491) RackId doesn't exist error while running WordCountDemo

2023-09-29 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15491.
-
Fix Version/s: 3.6.1
   3.7.0
 Assignee: Hao Li
   Resolution: Fixed

> RackId doesn't exist error while running WordCountDemo
> --
>
> Key: KAFKA-15491
> URL: https://issues.apache.org/jira/browse/KAFKA-15491
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Luke Chen
>Assignee: Hao Li
>Priority: Major
> Fix For: 3.6.1, 3.7.0
>
>
> While running the WordCountDemo following the 
> [docs|https://kafka.apache.org/documentation/streams/quickstart], I saw the 
> following error logs in the stream application output. Though everything 
> still works fine, it'd be better if there were no ERROR logs in the demo app.
> {code:java}
> [2023-09-24 14:15:11,723] ERROR RackId doesn't exist for process 
> e2391098-23e8-47eb-8d5e-ff6e697c33f5 and consumer 
> streams-wordcount-e2391098-23e8-47eb-8d5e-ff6e697c33f5-StreamThread-1-consumer-adae58be-f5f5-429b-a2b4-67bf732726e8
>  
> (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
> [2023-09-24 14:15:11,757] ERROR RackId doesn't exist for process 
> e2391098-23e8-47eb-8d5e-ff6e697c33f5 and consumer 
> streams-wordcount-e2391098-23e8-47eb-8d5e-ff6e697c33f5-StreamThread-1-consumer-adae58be-f5f5-429b-a2b4-67bf732726e8
>  
> (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Apache Kafka 3.6.0 release

2023-09-29 Thread Ismael Juma
Thanks Satish!

Ismael

On Thu, Sep 28, 2023 at 6:39 AM Satish Duggana 
wrote:

> We do not have any pending release blockers for now. RC1 release
> blocker KAFKA-15498[1] is merged to 3.6.
>  I will create RC2 by 29th Sep 12:00 pm PT and start a new RC thread.
>
> 1. https://github.com/apache/kafka/pull/14434
>
> Thanks,
> Satish.
>
>
>
> On Wed, 27 Sept 2023 at 13:36, Divij Vaidya 
> wrote:
> >
> > Ismael,
> > Thank you for checking.
> > Multiple other folks have validated after I left the comment here that
> > it doesn't impact log truncation and hence won't lead to data loss. I
> > agree that it's not a blocker.
> >
> > (ref: https://github.com/apache/kafka/pull/14457)
> >
> > --
> > Divij Vaidya
> >
> > On Wed, Sep 27, 2023 at 8:50 PM Ismael Juma  wrote:
> > >
> > > Doesn't look like a blocker to me.
> > >
> > > Ismael
> > >
> > > On Wed, Sep 27, 2023 at 2:36 AM Divij Vaidya 
> > > wrote:
> > >
> > > > Hey team
> > > >
> > > > I need help in determining whether
> > > > https://github.com/apache/kafka/pull/14457 is a release blocker bug
> or
> > > > not. If someone is familiar with replication protocol (on the log
> > > > diverange and reconciliation process), please add your comments on
> the
> > > > PR.
> > > >
> > > > --
> > > > Divij Vaidya
> > > >
> > > > On Wed, Sep 27, 2023 at 10:43 AM Divij Vaidya <
> divijvaidy...@gmail.com>
> > > > wrote:
> > > > >
> > > > > A community member reported another bug in TS feature in 3.6 -
> > > > > https://issues.apache.org/jira/browse/KAFKA-15511
> > > > >
> > > > > I don't consider it as a blocker for release because the bug
> occurs in
> > > > > rare situations when the index on disk or in a remote store is
> > > > > corrupted and fails a sanity check.
> > > > > Sharing it here as an FYI.
> > > > >
> > > > > --
> > > > > Divij Vaidya
> > > > >
> > > > > On Fri, Sep 22, 2023 at 11:16 AM Divij Vaidya <
> divijvaidy...@gmail.com>
> > > > wrote:
> > > > > >
> > > > > > Found a bug while testing TS feature in 3.6 -
> > > > > > https://issues.apache.org/jira/browse/KAFKA-15481
> > > > > >
> > > > > > I don't consider it as a blocker for release since it's a
> concurrency
> > > > > > bug that should occur rarely for a feature which is early access.
> > > > > > Sharing it here as FYI in case someone else thinks differently.
> > > > > >
> > > > > > --
> > > > > > Divij Vaidya
> > > > > >
> > > > > > On Fri, Sep 22, 2023 at 1:26 AM Satish Duggana <
> > > > satish.dugg...@gmail.com> wrote:
> > > > > > >
> > > > > > > Thanks Divij for raising a PR for doc formatting issue.
> > > > > > >
> > > > > > > On Thu, 21 Sep, 2023, 2:22 PM Divij Vaidya, <
> divijvaidy...@gmail.com>
> > > > wrote:
> > > > > > >
> > > > > > > > Hey Satish
> > > > > > > >
> > > > > > > > I filed a PR to fix the website formatting bug in 3.6
> > > > documentation -
> > > > > > > > https://github.com/apache/kafka/pull/14419
> > > > > > > > Please take a look when you get a chance.
> > > > > > > >
> > > > > > > > --
> > > > > > > > Divij Vaidya
> > > > > > > >
> > > > > > > > On Tue, Sep 19, 2023 at 5:36 PM Chris Egerton
> > > > 
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi Satish,
> > > > > > > > >
> > > > > > > > > I think this qualifies as a blocker. This API has been
> around
> > > > for years
> > > > > > > > now
> > > > > > > > > and, while we don't document it as not exposing
> duplicates*, it
> > > > has come
> > > > > > > > > with that implicit contract since its inception. More
> > > > importantly, it has
> > > > > > > > > also never exposed plugins that cannot be used on the
> worker.
> > > > This change
> > > > > > > > > in behavior not only introduces duplicates*, it causes
> > > > unreachable
> > > > > > > > plugins
> > > > > > > > > to be displayed. With this in mind, it seems to qualify
> pretty
> > > > clearly
> > > > > > > > as a
> > > > > > > > > regression and we should not put out a release that
> includes it.
> > > > > > > > >
> > > > > > > > > * - Really, these aren't duplicates; rather, they're
> multiple
> > > > copies of
> > > > > > > > the
> > > > > > > > > same plugin that come from different locations on the
> worker
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > >
> > > > > > > > > Chris
> > > > > > > > >
> > > > > > > > > On Tue, Sep 19, 2023 at 4:31 AM Satish Duggana <
> > > > satish.dugg...@gmail.com
> > > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Greg,
> > > > > > > > > > Is this API documented that it does not return duplicate
> > > > entries?
> > > > > > > > > >
> > > > > > > > > > Can we also get an opinion from PMC/Committers who have
> > > > KafkaConnect
> > > > > > > > > > expertise on whether this issue is a release blocker?
> > > > > > > > > >
> > > > > > > > > > If we agree that it is not a release blocker then we can
> have a
> > > > > > > > > > release note clarifying this behaviour and add a
> reference to
> > > > the JIRA
> > > > > > > > > > that follows up on the possible solutions.
> > > > > 

[VOTE] 3.6.0 RC2

2023-09-29 Thread Satish Duggana
Hello Kafka users, developers and client-developers,

This is the third candidate for the release of Apache Kafka 3.6.0.
Some of the major features include:

* KIP-405 : Kafka Tiered Storage
* KIP-868 : KRaft Metadata Transactions
* KIP-875: First-class offsets support in Kafka Connect
* KIP-898: Modernize Connect plugin discovery
* KIP-938: Add more metrics for measuring KRaft performance
* KIP-902: Upgrade Zookeeper to 3.8.1
* KIP-917: Additional custom metadata for remote log segment

Release notes for the 3.6.0 release:
https://home.apache.org/~satishd/kafka-3.6.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, October 3, 12pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~satishd/kafka-3.6.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~satishd/kafka-3.6.0-rc2/javadoc/

* Tag to be voted upon (off 3.6 branch) is the 3.6.0-rc2 tag:
https://github.com/apache/kafka/releases/tag/3.6.0-rc2

* Documentation:
https://kafka.apache.org/36/documentation.html

* Protocol:
https://kafka.apache.org/36/protocol.html

* Successful Jenkins builds for the 3.6 branch:
There are a few runs of unit/integration tests. You can see the latest
at https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.6/. We will
continue running a few more iterations.
System tests:
We will send an update once we have the results.

Thanks,
Satish.


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2243

2023-09-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 315075 lines...]
Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToRecycle() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToUpdateInputPartitionsShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldVerifyIfPendingTaskToRecycleExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldVerifyIfPendingTaskToRecycleExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToUpdateInputPartitions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToUpdateInputPartitions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToCloseDirtyShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToSuspend() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToSuspend() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldVerifyIfPendingTaskToInitExist() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldVerifyIfPendingTaskToInitExist() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldDrainPendingTasksToCreate() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldDrainPendingTasksToCreate() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToCloseClean() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToCloseClean() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToCloseDirty() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldAddAndRemovePendingTaskToCloseDirty() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldKeepAddedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > TasksTest > shouldKeepAddedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > More than one query result throws IllegalArgumentException STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > More than one query result throws IllegalArgumentException PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > Zero query results shouldn't error STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > Zero query results shouldn't error PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > Valid query results still works STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > StateQueryResultTest > Valid query results still works PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > [1] org.apache.kafka.streams.state.internals.RocksDBStore@4903071d, org.apache.kafka.test.MockInternalProcessorContext@27718179 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 79 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > [1] org.apache.kafka.streams.state.internals.RocksDBStore@4903071d, org.apache.kafka.test.MockInternalProcessorContext@27718179 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 79 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(Rock

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #75

2023-09-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-979: Allow independently stop KRaft controllers or brokers

2023-09-29 Thread Hailey Ni
Updated. Please let me know if you have any additional comments. Thank you!

On Thu, Sep 21, 2023 at 3:02 PM Hailey Ni  wrote:

> Hi Ron. Thanks for the response. I agree with your point. I'll make the
> corresponding changes in the KIP and KAFKA-15471
> .
>
> On Thu, Sep 21, 2023 at 1:40 PM Ron Dagostino  wrote:
>
>> Hi Hailey.  No, I just looked, and zookeeper-server-stop does not have
>> any facility to be specific about which ZK nodes to signal.  So
>> providing the ability in kafka-server-stop to be more specific than
>> just "signal all controllers" or "signal all brokers" would be a bonus
>> and therefore not necessarily required.  But if it is easy to achieve
>> and doesn't add any additional cognitive load -- and at first glance
>> it does seem so -- we should probably just support it.
>>
>> Ron
>>
>> On Wed, Sep 20, 2023 at 6:15 PM Hailey Ni 
>> wrote:
>> >
>> > Hi Ron,
>> >
>> > Thank you very much for the comment. I think it makes sense to me that
>> we
>> > provide an even more specific way to kill individual
>> controllers/brokers.
>> > I have one question: does the command line for ZooKeeper cluster provide
>> > such a way to kill individual controllers/brokers?
>> >
>> > Thanks,
>> > Hailey
>> >
>> > On Tue, Sep 19, 2023 at 11:01 AM Ron Dagostino 
>> wrote:
>> >
>> > > Thanks for the KIP, Hailey.  It will be nice to provide some
>> > > fine-grained control for when people running the broker and controller
>> > > this way want to stop just one of them.
>> > >
>> > > One thing that occurs to me is that in a development environment
>> > > someone might want to run multiple controllers and multiple brokers
>> > > all on the same box, and in that case they might want to actually stop
>> > > just one controller or just one broker instead of all of them.  So I'm
>> > > wondering if maybe instead of supporting kafka-server-stop
>> > > [--process.roles ] we might want to instead support
>> > > kafka-server-stop [--required-config ].  If someone wanted
>> > > to stop any/all controllers and not touch the broker(s) they could
>> > > still achieve that by invoking kafka-server-stop --required-config
>> > > process.roles=controller.  But if they did want to stop a particular
>> > > controller they could then also achieve that via kafka-server-stop
>> > > --required-config node.id=1 (for example).  What do you think?
>> > >
>> > > Ron
>> > >
>> > > On Thu, Sep 14, 2023 at 5:56 PM Hailey Ni 
>> > > wrote:
>> > > >
>> > > > Hi all,
>> > > >
>> > > > I would like to start the discussion about *KIP-979: Allow
>> independently
>> > > > stop KRaft controllers or brokers* <
>> > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+controllers+or+brokers
>> > > > >
>> > > > It proposes adding an optional field "--process.roles " in
>> the
>> > > > script to allow users to independently stop either KRaft broker
>> processes
>> > > > or controller processes. While in the past, all processes were
>> killed
>> > > using
>> > > > a single script.
>> > > > Please let me know if you have any questions or comments. Much
>> > > appreciated.
>> > > >
>> > > > Thanks & Regards,
>> > > > Hailey
>> > >
>>
>


Re: [DISCUSS] KIP-979: Allow independently stop KRaft controllers or brokers

2023-09-29 Thread Ron Dagostino
Thanks, Hailey.  Is there a reason to restrict it to just
process.roles and node.id?  Someone might want to do
"--required-config any.name=whatever.value", for example, and at first
glance I don't see a reason why the implementation should be any
different -- it seems it would probably be easier to not have to worry
about restricting to specific cases, actually.  WDYT?

Ron

On Fri, Sep 29, 2023 at 5:12 PM Hailey Ni  wrote:
>
> Updated. Please let me know if you have any additional comments. Thank you!
>
> On Thu, Sep 21, 2023 at 3:02 PM Hailey Ni  wrote:
>
> > Hi Ron. Thanks for the response. I agree with your point. I'll make the
> > corresponding changes in the KIP and KAFKA-15471
> > .
> >
> > On Thu, Sep 21, 2023 at 1:40 PM Ron Dagostino  wrote:
> >
> >> Hi Hailey.  No, I just looked, and zookeeper-server-stop does not have
> >> any facility to be specific about which ZK nodes to signal.  So
> >> providing the ability in kafka-server-stop to be more specific than
> >> just "signal all controllers" or "signal all brokers" would be a bonus
> >> and therefore not necessarily required.  But if it is easy to achieve
> >> and doesn't add any additional cognitive load -- and at first glance
> >> it does seem so -- we should probably just support it.
> >>
> >> Ron
> >>
> >> On Wed, Sep 20, 2023 at 6:15 PM Hailey Ni 
> >> wrote:
> >> >
> >> > Hi Ron,
> >> >
> >> > Thank you very much for the comment. I think it makes sense to me that
> >> we
> >> > provide an even more specific way to kill individual
> >> controllers/brokers.
> >> > I have one question: does the command line for ZooKeeper cluster provide
> >> > such a way to kill individual controllers/brokers?
> >> >
> >> > Thanks,
> >> > Hailey
> >> >
> >> > On Tue, Sep 19, 2023 at 11:01 AM Ron Dagostino 
> >> wrote:
> >> >
> >> > > Thanks for the KIP, Hailey.  It will be nice to provide some
> >> > > fine-grained control for when people running the broker and controller
> >> > > this way want to stop just one of them.
> >> > >
> >> > > One thing that occurs to me is that in a development environment
> >> > > someone might want to run multiple controllers and multiple brokers
> >> > > all on the same box, and in that case they might want to actually stop
> >> > > just one controller or just one broker instead of all of them.  So I'm
> >> > > wondering if maybe instead of supporting kafka-server-stop
> >> > > [--process.roles ] we might want to instead support
> >> > > kafka-server-stop [--required-config ].  If someone wanted
> >> > > to stop any/all controllers and not touch the broker(s) they could
> >> > > still achieve that by invoking kafka-server-stop --required-config
> >> > > process.roles=controller.  But if they did want to stop a particular
> >> > > controller they could then also achieve that via kafka-server-stop
> >> > > --required-config node.id=1 (for example).  What do you think?
> >> > >
> >> > > Ron
> >> > >
> >> > > On Thu, Sep 14, 2023 at 5:56 PM Hailey Ni 
> >> > > wrote:
> >> > > >
> >> > > > Hi all,
> >> > > >
> >> > > > I would like to start the discussion about *KIP-979: Allow
> >> independently
> >> > > > stop KRaft controllers or brokers* <
> >> > > >
> >> > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+controllers+or+brokers
> >> > > > >
> >> > > > It proposes adding an optional field "--process.roles " in
> >> the
> >> > > > script to allow users to independently stop either KRaft broker
> >> processes
> >> > > > or controller processes. While in the past, all processes were
> >> killed
> >> > > using
> >> > > > a single script.
> >> > > > Please let me know if you have any questions or comments. Much
> >> > > appreciated.
> >> > > >
> >> > > > Thanks & Regards,
> >> > > > Hailey
> >> > >
> >>
> >


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #76

2023-09-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-714: Client metrics and observability

2023-09-29 Thread David Jacot
Hi Andrew,

Thanks for driving this one. I haven't read the whole KIP yet, but I already
have an initial question. In the Threading section, it is written
"KafkaConsumer: the "background" thread (based on the consumer threading
refactor which is underway)". If I understand this correctly, it means
that KIP-714 won't work if the "old consumer" is used. Am I correct?

Cheers,
David


On Fri, Sep 22, 2023 at 12:18 PM Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> Hi Philip,
> No, I do not think it should actively search for a broker that supports
> the new
> RPCs. In general, either all of the brokers or none of the brokers will
> support it.
> In the window where the cluster is being upgraded or client telemetry is
> being
> enabled, there might be a mixed situation. I wouldn’t put too much effort
> into
> this mixed scenario. As the client finds brokers which support the new
> RPCs,
> it can begin to follow the KIP-714 mechanism.
>
> Thanks,
> Andrew
>
> > On 22 Sep 2023, at 20:01, Philip Nee  wrote:
> >
> > Hi Andrew -
> >
> > Question on top of your answers: Do you think the client should actively
> > search for a broker that supports this RPC? As previously mentioned, the
> > broker uses the leastLoadedNode to find its first connection (am
> > I correct?), and what if that broker doesn't support the metric push?
> >
> > P
> >
> > On Fri, Sep 22, 2023 at 10:20 AM Andrew Schofield <
> > andrew_schofield_j...@outlook.com> wrote:
> >
> >> Hi Kirk,
> >> Thanks for your question. You are correct that the presence or absence
> of
> >> the new RPCs in the
> >> ApiVersionsResponse tells the client whether to request the telemetry
> >> subscriptions and push
> >> metrics.
> >>
> >> This is of course tricky in practice. It would be conceivable, as a
> >> cluster is upgraded to AK 3.7
> >> or as a client metrics receiver plugin is deployed across the cluster,
> >> that a client connects to some
> >> brokers that support the new RPCs and some that do not.
> >>
> >> Here’s my suggestion:
> >> * If a client is not connected to any brokers that support in the new
> >> RPCs, it cannot push metrics.
> >> * If a client is only connected to brokers that support the new RPCs, it
> >> will use the new RPCs in
> >> accordance with the KIP.
> >> * If a client is connected to some brokers that support the new RPCs and
> >> some that do not, it will
> >> use the new RPCs with the supporting subset of brokers in accordance
> with
> >> the KIP.
> >>
> >> Comments?
> >>
> >> Thanks,
> >> Andrew
> >>
> >>> On 22 Sep 2023, at 16:01, Kirk True  wrote:
> >>>
> >>> Hi Andrew/Jun,
> >>>
> >>> I want to make sure I understand question/comment #119… In the case
> >> where a cluster without a metrics client receiver is later reconfigured
> and
> >> restarted to include a metrics client receiver, do we want the client to
> >> thereafter begin pushing metrics to the cluster? From Andrew’s response
> to
> >> question #119, it sounds like we’re using the presence/absence of the
> >> relevant RPCs in ApiVersionsResponse as the to-push-or-not-to-push
> >> indicator. Do I have that correct?
> >>>
> >>> Thanks,
> >>> Kirk
> >>>
>  On Sep 21, 2023, at 7:42 AM, Andrew Schofield <
> >> andrew_schofield_j...@outlook.com> wrote:
> 
>  Hi Jun,
>  Thanks for your comments. I’ve updated the KIP to clarify where
> >> necessary.
> 
>  110. Yes, agree. The motivation section mentions this.
> 
>  111. The replacement of ‘-‘ with ‘.’ for metric names and the
> >> replacement of
>  ‘-‘ with ‘_’ for attribute keys is following the OTLP guidelines. I
> >> think it’s a bit
>  of a debatable point. OTLP makes a distinction between a namespace
> and a
>  multi-word component. If it was “client.id” then “client” would be a
> >> namespace with
>  an attribute key “id”. But “client_id” is just a key. So, it was
> >> intentional, but debatable.
> 
>  112. Thanks. The link target moved. Fixed.
> 
>  113. Thanks. Fixed.
> 
>  114.1. If a standard metric makes sense for a client, it should use
> the
> >> exact same
>  name. If a standard metric doesn’t make sense for a client, then it
> can
> >> omit that metric.
> 
>  For a required metric, the situation is stronger. All clients must
> >> implement these
>  metrics with these names in order to implement the KIP. But the
> >> required metrics
>  are essentially the number of connections and the request latency,
> >> which do not
>  reference the underlying implementation of the client (which
> >> producer.record.queue.time.max
>  of course does).
> 
>  I suppose someone might build a producer-only client that didn’t have
> >> consumer metrics.
>  In this case, the consumer metrics would conceptually have the value 0
> >> and would not
>  need to be sent to the broker.
> 
>  114.2. If a client does not implement some metrics, they will not be
> >> available for
>  analysis and troubles
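
(To make the mixed-cluster suggestion above concrete: the client looks at what each broker advertises in its ApiVersionsResponse and only uses the telemetry RPCs on connections to brokers that list them. A rough sketch follows; the RPC names and data shapes are assumptions for illustration, not the KIP-714 implementation.)

from dataclasses import dataclass, field

# Hypothetical identifiers for the two new RPCs proposed by KIP-714.
TELEMETRY_RPCS = {"GetTelemetrySubscriptions", "PushTelemetry"}

@dataclass
class Broker:
    node_id: int
    supported_apis: set = field(default_factory=set)  # filled from that broker's ApiVersionsResponse

def supports_telemetry(broker: Broker) -> bool:
    # Push metrics only over connections whose broker advertises both new RPCs.
    return TELEMETRY_RPCS <= broker.supported_apis

def telemetry_targets(connected_brokers):
    """Brokers this client may push metrics to; empty while no connected
    broker advertises the new RPCs (e.g. mid rolling upgrade)."""
    return [b for b in connected_brokers if supports_telemetry(b)]

# Example: a cluster mid-upgrade where only broker 1 has the new RPCs.
brokers = [
    Broker(0, {"Produce", "Fetch"}),
    Broker(1, {"Produce", "Fetch"} | TELEMETRY_RPCS),
]
print([b.node_id for b in telemetry_targets(brokers)])  # -> [1]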
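
(And on point 111 above, the '-' to '.' versus '-' to '_' substitution is easier to see with a concrete example. This is just the mapping as described in that reply; the metric and attribute names are illustrations only.)

def otlp_metric_name(kafka_metric_name: str) -> str:
    """Per the convention described above: '-' becomes '.' in metric names,
    so dashes turn into namespace separators."""
    return kafka_metric_name.replace("-", ".")

def otlp_attribute_key(kafka_attribute_key: str) -> str:
    """Attribute keys use '_' instead, so 'client-id' becomes the plain key
    'client_id' rather than a 'client' namespace with an 'id' key."""
    return kafka_attribute_key.replace("-", "_")

# Illustrative examples only (not a list of the KIP's metrics):
print(otlp_metric_name("producer-record-queue-time-max"))  # producer.record.queue.time.max
print(otlp_attribute_key("client-id"))                      # client_id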

Re: [DISCUSS] KIP-979: Allow independently stop KRaft controllers or brokers

2023-09-29 Thread Hailey Ni
Hi Ron,

I think you made a great point: making the "name" arbitrary instead of
hard-coding it will make the functionality much more flexible. I've updated
the KIP and the code accordingly. Thanks for the great idea!

Thanks,
Hailey


On Fri, Sep 29, 2023 at 2:34 PM Ron Dagostino  wrote:

> Thanks, Hailey.  Is there a reason to restrict it to just
> process.roles and node.id?  Someone might want to do
> "--required-config any.name=whatever.value", for example, and at first
> glance I don't see a reason why the implementation should be any
> different -- it seems it would probably be easier to not have to worry
> about restricting to specific cases, actually.  WDYT?
>
> Ron
>
> On Fri, Sep 29, 2023 at 5:12 PM Hailey Ni 
> wrote:
> >
> > Updated. Please let me know if you have any additional comments. Thank
> you!
> >
> > On Thu, Sep 21, 2023 at 3:02 PM Hailey Ni  wrote:
> >
> > > Hi Ron. Thanks for the response. I agree with your point. I'll make the
> > > corresponding changes in the KIP and KAFKA-15471
> > > .
> > >
> > > On Thu, Sep 21, 2023 at 1:40 PM Ron Dagostino 
> wrote:
> > >
> > >> Hi Hailey.  No, I just looked, and zookeeper-server-stop does not have
> > >> any facility to be specific about which ZK nodes to signal.  So
> > >> providing the ability in kafka-server-stop to be more specific than
> > >> just "signal all controllers" or "signal all brokers" would be a bonus
> > >> and therefore not necessarily required.  But if it is easy to achieve
> > >> and doesn't add any additional cognitive load -- and at first glance
> > >> it does seem so -- we should probably just support it.
> > >>
> > >> Ron
> > >>
> > >> On Wed, Sep 20, 2023 at 6:15 PM Hailey Ni 
> > >> wrote:
> > >> >
> > >> > Hi Ron,
> > >> >
> > >> > Thank you very much for the comment. I think it makes sense to me
> that
> > >> we
> > >> > provide an even more specific way to kill individual
> > >> controllers/brokers.
> > >> > I have one question: does the command line for ZooKeeper cluster
> provide
> > >> > such a way to kill individual controllers/brokers?
> > >> >
> > >> > Thanks,
> > >> > Hailey
> > >> >
> > >> > On Tue, Sep 19, 2023 at 11:01 AM Ron Dagostino 
> > >> wrote:
> > >> >
> > >> > > Thanks for the KIP, Hailey.  It will be nice to provide some
> > >> > > fine-grained control for when people running the broker and
> controller
> > >> > > this way want to stop just one of them.
> > >> > >
> > >> > > One thing that occurs to me is that in a development environment
> > >> > > someone might want to run multiple controllers and multiple
> brokers
> > >> > > all on the same box, and in that case they might want to actually
> stop
> > >> > > just one controller or just one broker instead of all of them.
> So I'm
> > >> > > wondering if maybe instead of supporting kafka-server-stop
> > >> > > [--process.roles ] we might want to instead support
> > >> > > kafka-server-stop [--required-config ].  If someone
> wanted
> > >> > > to stop any/all controllers and not touch the broker(s) they could
> > >> > > still achieve that by invoking kafka-server-stop --required-config
> > >> > > process.roles=controller.  But if they did want to stop a
> particular
> > >> > > controller they could then also achieve that via kafka-server-stop
> > >> > > --required-config node.id=1 (for example).  What do you think?
> > >> > >
> > >> > > Ron
> > >> > >
> > >> > > On Thu, Sep 14, 2023 at 5:56 PM Hailey Ni
> 
> > >> > > wrote:
> > >> > > >
> > >> > > > Hi all,
> > >> > > >
> > >> > > > I would like to start the discussion about *KIP-979: Allow
> > >> independently
> > >> > > > stop KRaft controllers or brokers* <
> > >> > > >
> > >> > >
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-979%3A+Allow+independently+stop+KRaft+controllers+or+brokers
> > >> > > > >
> > >> > > > It proposes adding an optional field "--process.roles "
> in
> > >> the
> > >> > > > script to allow users to independently stop either KRaft broker
> > >> processes
> > >> > > > or controller processes. While in the past, all processes were
> > >> killed
> > >> > > using
> > >> > > > a single script.
> > >> > > > Please let me know if you have any questions or comments. Much
> > >> > > appreciated.
> > >> > > >
> > >> > > > Thanks & Regards,
> > >> > > > Hailey
> > >> > >
> > >>
> > >
>
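
(The behaviour converged on above amounts to: signal only those Kafka server processes whose properties file contains the requested key=value pair. Below is a rough sketch of that matching logic, not the actual kafka-server-stop script; how processes and their properties files are located here is a simplifying assumption, and a real version would also need to handle combined roles such as "broker,controller" when matching process.roles.)

import os
import signal
import subprocess

def load_properties(path):
    # Minimal .properties parser: key=value pairs, '#' comments ignored.
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

def kafka_processes():
    """Yield (pid, properties_file) for running Kafka servers; assumes the
    properties file is the last argument on the kafka.Kafka command line."""
    out = subprocess.run(["ps", "ax", "-o", "pid=,command="],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        pid, _, command = line.strip().partition(" ")
        if "kafka.Kafka" in command:
            yield int(pid), command.split()[-1]

def stop_matching(required_key, required_value):
    for pid, config_path in kafka_processes():
        if load_properties(config_path).get(required_key) == required_value:
            os.kill(pid, signal.SIGTERM)   # SIGTERM, as the stop script sends today

# e.g. stop only controllers:   stop_matching("process.roles", "controller")
# e.g. stop a single node:      stop_matching("node.id", "1")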