[jira] [Resolved] (KAFKA-16921) Migrate all junit 4 code to junit 5 for connect module

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16921.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Migrate all junit 4 code to junit 5 for connect module
> --
>
> Key: KAFKA-16921
> URL: https://issues.apache.org/jira/browse/KAFKA-16921
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Minor
> Fix For: 3.9.0
>
>
> # Replace all "org.junit.Assert." with "org.junit.jupiter.api.Assertions.".
>  # Remove the dependency on `JUnit Vintage Engine`.
>  # Remove the JUnit 4 `Category` usage.
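Step 1 above is a mechanical text rewrite of imports and call sites. As an illustrative sketch only (the class and method names here are invented, not part of the migration PR), it amounts to:

```java
public class JUnit5Migration {
    // Rewrites JUnit 4 assertion references to their JUnit 5 (Jupiter) equivalents,
    // e.g. "org.junit.Assert.assertEquals" -> "org.junit.jupiter.api.Assertions.assertEquals".
    public static String migrateImports(String line) {
        return line.replace("org.junit.Assert.", "org.junit.jupiter.api.Assertions.");
    }

    public static void main(String[] args) {
        System.out.println(migrateImports("import static org.junit.Assert.assertEquals;"));
        // prints: import static org.junit.jupiter.api.Assertions.assertEquals;
    }
}
```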



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16997) do not stop kafka when issue to delete a partition folder

2024-06-18 Thread Jerome Morel (Jira)
Jerome Morel created KAFKA-16997:


 Summary: do not stop kafka when issue to delete a partition folder
 Key: KAFKA-16997
 URL: https://issues.apache.org/jira/browse/KAFKA-16997
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 3.6.2
Reporter: Jerome Morel


Context: In our project we create many partitions, and even after we delete 
the segments, the partition folders remain. Eventually we had so many partitions 
that Kafka crashed due to the number of open files. We therefore want to delete 
those partitions regularly, but doing so causes Kafka to stop.

 

The issue: after some investigation we found that the deletion process 
sometimes logs warnings when it cannot delete some log files:
{code:java}
[2024-06-17 15:52:39,590] WARN Failed atomic move of 
/tmp/kafka-logs-mnt/kafka-no-docker/69747657-f49d-453f-9fa2-4d4369199699-0.7b51dad41a77448d8b419c76749f0b2c-delete/0010.timeindex
 to 
/tmp/kafka-logs-mnt/kafka-no-docker/69747657-f49d-453f-9fa2-4d4369199699-0.7b51dad41a77448d8b419c76749f0b2c-delete/0010.timeindex.deleted
 retrying with a non-atomic move (org.apache.kafka.common.utils.Utils)
java.nio.file.NoSuchFileException: 
/tmp/kafka-logs-mnt/kafka-no-docker/69747657-f49d-453f-9fa2-4d4369199699-0.7b51dad41a77448d8b419c76749f0b2c-delete/0010.timeindex
 -> 
/tmp/kafka-logs-mnt/kafka-no-docker/69747657-f49d-453f-9fa2-4d4369199699-0.7b51dad41a77448d8b419c76749f0b2c-delete/0010.timeindex.deleted
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:416)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:266)
at java.base/java.nio.file.Files.move(Files.java:1432)
at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:980)
at 
org.apache.kafka.storage.internals.log.LazyIndex$IndexFile.renameTo(LazyIndex.java:80)
at 
org.apache.kafka.storage.internals.log.LazyIndex.renameTo(LazyIndex.java:202)
at 
org.apache.kafka.storage.internals.log.LogSegment.changeFileSuffixes(LogSegment.java:666)
at kafka.log.LocalLog$.$anonfun$deleteSegmentFiles$1(LocalLog.scala:912)
at 
kafka.log.LocalLog$.$anonfun$deleteSegmentFiles$1$adapted(LocalLog.scala:910)
at scala.collection.immutable.List.foreach(List.scala:431)
at kafka.log.LocalLog$.deleteSegmentFiles(LocalLog.scala:910)
at kafka.log.LocalLog.removeAndDeleteSegments(LocalLog.scala:289) {code}
Deletion then just continues, but when it fails to delete a folder, it marks the 
log directory as failed and stops Kafka if that is the only log directory 
available (which is our case):
{code:java}
[2024-06-17 15:52:39,637] ERROR Error while deleting dir for 
69747657-f49d-453f-9fa2-4d4369199699-0 in dir 
/tmp/kafka-logs-mnt/kafka-no-docker 
(org.apache.kafka.storage.internals.log.LogDirFailureChannel)
java.nio.file.DirectoryNotEmptyException: 
/tmp/kafka-logs-mnt/kafka-no-docker/69747657-f49d-453f-9fa2-4d4369199699-0.7b51dad41a77448d8b419c76749f0b2c-delete
at 
java.base/sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:246)
at 
java.base/sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:105)
at java.base/java.nio.file.Files.delete(Files.java:1152)
at 
org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:923)
at 
org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:901)
at java.base/java.nio.file.Files.walkFileTree(Files.java:2828)
at java.base/java.nio.file.Files.walkFileTree(Files.java:2882)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:901)
at kafka.log.LocalLog.$anonfun$deleteEmptyDir$2(LocalLog.scala:243)
at kafka.log.LocalLog.deleteEmptyDir(LocalLog.scala:709)
at kafka.log.UnifiedLog.$anonfun$delete$2(UnifiedLog.scala:1734)
at kafka.log.UnifiedLog.delete(UnifiedLog.scala:1911)
at kafka.log.LogManager.deleteLogs(LogManager.scala:1152)
at kafka.log.LogManager.$anonfun$deleteLogs$6(LogManager.scala:1166)
at 
org.apache.kafka.server.util.KafkaScheduler.lambda$schedule$1(KafkaScheduler.java:150)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Thread
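The retry path in the first warning above (atomic move, then non-atomic fallback) follows a standard java.nio pattern. A minimal self-contained sketch of that pattern, with illustrative names (this is not Kafka's `Utils.atomicMoveWithFallback` itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
    // Try an atomic rename first; if the filesystem refuses (or the move fails
    // for another reason, as in the NoSuchFileException logged above), retry
    // with a plain non-atomic move.
    public static void moveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("move-demo");
        Path src = Files.createFile(dir.resolve("a.timeindex"));
        moveWithFallback(src, dir.resolve("a.timeindex.deleted"));
        System.out.println(Files.exists(dir.resolve("a.timeindex.deleted")));
    }
}
```

Note that in the reported log the source file is already gone, so even the non-atomic retry cannot succeed; the question raised here is whether that should escalate to failing the whole log directory.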

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #3026

2024-06-18 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-1059: Enable the Producer flush() method to clear the latest send() error

2024-06-18 Thread Artem Livshits
Hi Alieh,

Thank you for the KIP.  I have a couple of suggestions:

AL1.  We should throw an error from flush after we clear it.  This would
make both "send + commit" and "send + flush + commit" (the latter looks like
just a more verbose way to express the former, and it would be intuitive if
it behaved the same) throw if the transaction has an error, so code written
either way would be correct.  At the same time, the latter could be extended
by the caller to intercept exceptions from flush, ignore them as needed, and
commit the transaction.  This solution keeps basic things simple (if someone
has code that doesn't require advanced error handling, then basic "send +
flush + commit" does the right thing) and advanced things possible: an
application can add try + catch around flush and ignore some errors.
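To make AL1 concrete, here is a toy model of the proposed semantics (this is deliberately not Kafka's Producer API; `ToyProducer`, `failSend`, and `hasError` are invented for illustration): flush clears the stored send() error and then throws it, so plain callers fail fast while advanced callers can catch and proceed to commit.

```java
import java.util.concurrent.atomic.AtomicReference;

public class FlushSemanticsSketch {
    // Toy stand-in for a producer that remembers the latest send() error.
    public static class ToyProducer {
        private final AtomicReference<RuntimeException> lastSendError = new AtomicReference<>();

        public void failSend(RuntimeException e) { lastSendError.set(e); }

        // AL1: clear the stored error first, then surface it to the caller.
        public void flush() {
            RuntimeException e = lastSendError.getAndSet(null);
            if (e != null) throw e;
        }

        public boolean hasError() { return lastSendError.get() != null; }
    }

    public static void main(String[] args) {
        ToyProducer p = new ToyProducer();
        p.failSend(new IllegalStateException("send failed"));
        try {
            p.flush();                 // throws the stored error...
        } catch (IllegalStateException e) {
            // ...but an advanced caller may decide this error is ignorable
        }
        System.out.println(p.hasError()); // prints false: flush cleared the error
    }
}
```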

AL2.  I'm not sure config is the best way to express the modification of
the "flush" semantics -- the application logic that calls "flush" needs to
match the "flush" semantics, and configuring the semantics in a detached
place creates room for bugs due to discrepancies.  This can be especially bad
if the producer loads configuration from a file at run time; in that case a
mistake in configuration could break the application, because it was written
to expect one "flush" semantics but the semantics was switched.  Given that
the "flush" semantics needs to match the caller's expectation, a way to
accomplish that would be to pass the caller's expectation to the "flush"
call, either by adding a method with a different name or an overload with
a Boolean flag that configures the semantics (the current method could
just redirect to the new one).
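The overload shape AL2 suggests could look like the following sketch (all names here are hypothetical, not the KIP's API): the caller states the desired semantics at the call site, and the existing no-argument method delegates with today's default, so behavior cannot silently drift via detached configuration.

```java
public class FlushOverloadSketch {
    // Existing method keeps today's semantics by delegating to the new overload.
    static String flush() {
        return flush(false);
    }

    // New overload: the caller's expectation travels with the call itself.
    static String flush(boolean clearSendError) {
        return clearSendError
                ? "flush: clear the latest send() error, then throw it"
                : "flush: keep current semantics (transaction stays failed)";
    }

    public static void main(String[] args) {
        System.out.println(flush());
        System.out.println(flush(true));
    }
}
```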

-Artem

On Mon, Jun 17, 2024 at 9:09 AM Alieh Saeedi 
wrote:

> Hi all,
>
> I'd like to kick off a discussion for KIP-1059 that suggests adding a new
> feature to the Producer flush() method.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1059%3A+Enable+the+Producer+flush%28%29+method+to+clear+the+latest+send%28%29+error
>
> Cheers,
> Alieh
>


[jira] [Created] (KAFKA-16996) The leastLoadedNode() function in kafka-client may choose a faulty node during the consumer thread starting and meanwhile one of the KAFKA server node is dead.

2024-06-18 Thread Goufu (Jira)
Goufu created KAFKA-16996:
-

 Summary: The leastLoadedNode() function in kafka-client may choose 
a faulty node during the consumer thread starting and meanwhile one of the 
KAFKA server node is dead.
 Key: KAFKA-16996
 URL: https://issues.apache.org/jira/browse/KAFKA-16996
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.6.0, 2.3.0, 2.0.1
Reporter: Goufu


The leastLoadedNode() function has a bug during consumer startup. The function 
sendMetadataRequest(), called by getTopicMetadataRequest(), uses an effectively 
random node, which may be faulty, because no node's state has been recorded in 
the client thread yet. This happened in my production environment while my 
consumer thread was restarting and one of the Kafka server nodes was dead at 
the same time.

I'm using the kafka-client-2.0.1.jar. I have checked the source code of higher 
versions and the issue still exists.
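A simplified model of the failure mode described above (this is not Kafka's actual NetworkClient implementation; the function and its signature are invented for illustration): least-loaded selection prefers a node with known connection state, but during the startup window no state exists, so the choice degenerates to an arbitrary node, which may be the dead one.

```java
import java.util.Random;

public class LeastLoadedSketch {
    // Pick the connected node with the fewest in-flight requests; if no node
    // has connection state yet (the startup window from the report), fall back
    // to a blind random pick, which can land on a dead broker.
    static int leastLoaded(int[] inFlight, boolean[] connected, Random rnd) {
        int best = -1;
        for (int i = 0; i < inFlight.length; i++) {
            if (connected[i] && (best == -1 || inFlight[i] < inFlight[best])) {
                best = i;
            }
        }
        return best != -1 ? best : rnd.nextInt(inFlight.length);
    }

    public static void main(String[] args) {
        int[] load = {3, 1, 2};
        // With connection state: the least loaded node (index 1) is chosen.
        System.out.println(leastLoaded(load, new boolean[]{true, true, true}, new Random()));
        // Without any state yet: the pick is arbitrary and may be a dead node.
        System.out.println(leastLoaded(load, new boolean[3], new Random()));
    }
}
```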



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16957) Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to work with CLASSIC and CONSUMER

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16957.

Resolution: Fixed

> Enable KafkaConsumerTest#configurableObjectsShouldSeeGeneratedClientId to 
> work with CLASSIC and CONSUMER 
> -
>
> Key: KAFKA-16957
> URL: https://issues.apache.org/jira/browse/KAFKA-16957
> Project: Kafka
>  Issue Type: Test
>  Components: clients, consumer, unit tests
>Reporter: Chia-Ping Tsai
>Assignee: Chia Chuan Yu
>Priority: Minor
> Fix For: 3.9.0
>
>
> `CLIENT_IDS` is a static variable, so later tests will see results from 
> previous ones. We should clear it before each test.
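The leak and its fix can be sketched without JUnit (names here mirror the description; the `runTest` helper is invented for illustration): a static collection accumulates across test methods unless it is cleared before each one, e.g. in a `@BeforeEach` method.

```java
import java.util.ArrayList;
import java.util.List;

public class StaticStateSketch {
    // Static state shared by all "tests" in the class, like CLIENT_IDS above.
    public static final List<String> CLIENT_IDS = new ArrayList<>();

    public static void runTest(String clientId) {
        CLIENT_IDS.add(clientId);
    }

    public static void main(String[] args) {
        runTest("consumer-1");
        runTest("consumer-2");                 // sees consumer-1's leftovers too
        System.out.println(CLIENT_IDS.size()); // prints 2: state leaked across tests
        CLIENT_IDS.clear();                    // the fix: reset before each test
        runTest("consumer-3");
        System.out.println(CLIENT_IDS.size()); // prints 1
    }
}
```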



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16989) Use StringBuilder instead of string concatenation

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16989.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Use StringBuilder instead of string concatenation
> -
>
> Key: KAFKA-16989
> URL: https://issues.apache.org/jira/browse/KAFKA-16989
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Minor
> Fix For: 3.9.0
>
>
> https://github.com/apache/kafka/blob/2fd00ce53678509c9f2cfedb428e37a871e3d530/metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java#L130
> String concatenation creates many intermediate strings; we can reduce the 
> cost by using StringBuilder.
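The pattern the ticket targets, as a small self-contained sketch (not the actual ClientQuotasImageNode code): repeated `+=` on a String allocates a fresh String per step, while StringBuilder appends into one growing buffer.

```java
public class StringBuilderSketch {
    // Quadratic: each += copies everything accumulated so far into a new String.
    static String concat(String[] parts) {
        String out = "";
        for (String p : parts) out += p;
        return out;
    }

    // Linear: all pieces are appended into a single buffer, one String at the end.
    static String build(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"client", "-", "quota"};
        System.out.println(build(parts));                       // prints client-quota
        System.out.println(build(parts).equals(concat(parts))); // prints true
    }
}
```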



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #53

2024-06-18 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16995) The listeners broker parameter incorrect documentation

2024-06-18 Thread Sergey (Jira)
Sergey created KAFKA-16995:
--

 Summary: The listeners broker parameter incorrect documentation 
 Key: KAFKA-16995
 URL: https://issues.apache.org/jira/browse/KAFKA-16995
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.1
 Environment: Kafka 3.6.1
Reporter: Sergey


We are using Kafka 3.6.1 and the 
[KIP-797|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330]
 describes configuring listeners with the same port and name for supporting 
IPv4/IPv6 dual-stack. 

Documentation link: 
https://kafka.apache.org/36/documentation.html#brokerconfigs_listeners

As I understand it, Kafka should allow us to use the same listener name and 
port for both listeners when we configure dual-stack. 

In reality, however, the broker returns an error when the listener names are 
the same.
Error example:
{code:java}
java.lang.IllegalArgumentException: requirement failed: Each listener must have 
a different name, listeners: 
CONTROLPLANE://0.0.0.0:9090,SSL://0.0.0.0:9093,SSL://[::]:9093
        at scala.Predef$.require(Predef.scala:337)
        at kafka.utils.CoreUtils$.validate$1(CoreUtils.scala:214)
        at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:268)
        at kafka.server.KafkaConfig.listeners(KafkaConfig.scala:2120)
        at kafka.server.KafkaConfig.(KafkaConfig.scala:1807)
        at kafka.server.KafkaConfig.(KafkaConfig.scala:1604)
        at kafka.Kafka$.buildServer(Kafka.scala:72)
        at kafka.Kafka$.main(Kafka.scala:91)
        at kafka.Kafka.main(Kafka.scala) {code}
I've tried to set the listeners to: "SSL://0.0.0.0:9093,SSL://[::]:9093"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16961) TestKRaftUpgrade system tests fail in v3.7.1 RC1

2024-06-18 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez resolved KAFKA-16961.
-
Resolution: Fixed

Verified the test no longer fails, after KAFKA-16969, with:
{code:java}
TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade" 
bash tests/docker/run_tests.sh {code}

> TestKRaftUpgrade system tests fail in v3.7.1 RC1
> 
>
> Key: KAFKA-16961
> URL: https://issues.apache.org/jira/browse/KAFKA-16961
> Project: Kafka
>  Issue Type: Test
>Reporter: Luke Chen
>Assignee: Igor Soarez
>Priority: Blocker
> Fix For: 3.8.0, 3.7.1
>
>
>  
>  
> {code:java}
> 
> SESSION REPORT (ALL TESTS)
> ducktape version: 0.11.4
> session_id:       2024-06-14--003
> run time:         86 minutes 13.705 seconds
> tests run:        24
> passed:           18
> flaky:            0
> failed:           6
> ignored:          0
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 44.680 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 42.627 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.2.3.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 28.205 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.2.3.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 42.388 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.3.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 57.679 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.3.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 57.238 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.4.1.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 52.545 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.4.1.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 56.289 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.5.2.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 54.953 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.5.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   2 minutes 59.579 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=dev.use_new_coordinator=False.metadata_quorum=ISOLATED_KRAFT
> status:     PASS
> run time:   3 minutes 21.016 seconds
> 
> test_id:    
> kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=dev.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT
> status: 

[jira] [Created] (KAFKA-16994) Flaky Test SlidingWindowedKStreamIntegrationTest.shouldRestoreAfterJoinRestart

2024-06-18 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16994:
---

 Summary: Flaky Test 
SlidingWindowedKStreamIntegrationTest.shouldRestoreAfterJoinRestart
 Key: KAFKA-16994
 URL: https://issues.apache.org/jira/browse/KAFKA-16994
 Project: Kafka
  Issue Type: Test
Reporter: Matthias J. Sax
 Attachments: 
5owo5xbyzjnao-org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest-shouldRestoreAfterJoinRestart[ON_WINDOW_CLOSE_cache_true]-1-output.txt,
 
7jnraxqt7a52m-org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest-shouldRestoreAfterJoinRestart[ON_WINDOW_CLOSE_cache_false]-1-output.txt,
 
dujhqmgv6nzuu-org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest-shouldRestoreAfterJoinRestart[ON_WINDOW_UPDATE_cache_true]-1-output.txt,
 
fj6qia6oiob4m-org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest-shouldRestoreAfterJoinRestart[ON_WINDOW_UPDATE_cache_false]-1-output.txt

Failed for all different parameters.
{code:java}
java.lang.AssertionError: Did not receive all 1 records from topic output-shouldRestoreAfterJoinRestart_ON_WINDOW_CLOSE_cache_true_F_de0bULT5a8gQ_8lAhz8Q within 6 ms
Expected: is a value equal to or greater than <1>
 but: <0> was less than <1>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueWithTimestampRecordsReceived$2(IntegrationTestUtils.java:778)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:412)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueWithTimestampRecordsReceived(IntegrationTestUtils.java:774)
at org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest.receiveMessagesWithTimestamp(SlidingWindowedKStreamIntegrationTest.java:479)
at org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest.shouldRestoreAfterJoinRestart(SlidingWindowedKStreamIntegrationTest.java:404)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.lang.Thread.run(Thread.java:833) {code}
{code:java}
java.lang.AssertionError: Did not receive all 2 records from topic output-shouldRestoreAfterJoinRestart_ON_WINDOW_UPDATE_cache_true_bG_UnW1QSr_7tz2aXQNTXA within 6 ms
Expected: is a value equal to or greater than <2>
 but: <0> was less than <2>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueWithTimestampRecordsReceived$2(IntegrationTestUtils.java:778)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:412)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueWithTimestampRecordsReceived(IntegrationTestUtils.java:774)
at org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest.receiveMessagesWithTimestamp(SlidingWindowedKStreamIntegrationTest.java:479)
at org.apache.kafka.streams.integration.SlidingWindowedKStreamIntegrationTest.shouldRestoreAfterJoinRestart(SlidingWindowedKStreamIntegrationTest.java:404)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.lang.reflect.Method.invoke(Method.java:580)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.Ru

[jira] [Created] (KAFKA-16993) Flaky test RestoreIntegrationTest.shouldInvokeUserDefinedGlobalStateRestoreListener.shouldInvokeUserDefinedGlobalStateRestoreListener()

2024-06-18 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16993:
---

 Summary: Flaky test 
RestoreIntegrationTest.shouldInvokeUserDefinedGlobalStateRestoreListener.shouldInvokeUserDefinedGlobalStateRestoreListener()
 Key: KAFKA-16993
 URL: https://issues.apache.org/jira/browse/KAFKA-16993
 Project: Kafka
  Issue Type: Test
  Components: streams, unit tests
Reporter: Matthias J. Sax
 Attachments: 
6u4a4e27e2oh2-org.apache.kafka.streams.integration.RestoreIntegrationTest-shouldInvokeUserDefinedGlobalStateRestoreListener()-1-output.txt

{code:java}
org.opentest4j.AssertionFailedError: expected:  but was: 
at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:183)
at org.apache.kafka.streams.integration.RestoreIntegrationTest.shouldInvokeUserDefinedGlobalStateRestoreListener(RestoreIntegrationTest.java:611)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:214)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:139)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(

[jira] [Resolved] (KAFKA-16992) Flaky Test org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]

2024-06-18 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-16992.
-
Resolution: Duplicate

> Flaky Test  
> org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]
> --
>
> Key: KAFKA-16992
> URL: https://issues.apache.org/jira/browse/KAFKA-16992
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Priority: Major
> Attachments: 
> 6u4a4e27e2oh2-org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest-shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]-1-output.txt
>
>
> We have seen this test time out more frequently recently:
> {code:java}
> org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. Expected ERROR state but driver is on RUNNING ==> expected:  but was: 
> at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
> at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
> at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:350)
> at org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore(EOSUncleanShutdownIntegrationTest.java:169)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:750) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16992) Flaky Test org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]

2024-06-18 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16992:
---

 Summary: Flaky Test  
org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]
 Key: KAFKA-16992
 URL: https://issues.apache.org/jira/browse/KAFKA-16992
 Project: Kafka
  Issue Type: Test
  Components: streams, unit tests
Reporter: Matthias J. Sax
 Attachments: 
6u4a4e27e2oh2-org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest-shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once_v2]-1-output.txt

We have seen this test time out more frequently recently:
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. Expected ERROR state but driver is on RUNNING ==> expected:  but was: 
at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:350)
at org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore(EOSUncleanShutdownIntegrationTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16991) Flaky Test org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState

2024-06-18 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-16991:
---

 Summary: Flaky Test 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
 Key: KAFKA-16991
 URL: https://issues.apache.org/jira/browse/KAFKA-16991
 Project: Kafka
  Issue Type: Test
  Components: streams, unit tests
Reporter: Matthias J. Sax
 Attachments: 
5owo5xbyzjnao-org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest-shouldRestoreState()-1-output.txt

We see this test running into timeouts more frequently recently.
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 6. Repartition topic restore-test-KSTREAM-AGGREGATE-STATE-STORE-02-repartition not purged data after 6 ms. ==> expected: <true> but was: <false>
	at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
	...
	at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
	at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
	at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
	at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
	at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:367)
	at org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState(PurgeRepartitionTopicIntegrationTest.java:220)
 {code}
There was no ERROR or WARN log...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #183

2024-06-18 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16990) Unrecognised flag passed to kafka-storage.sh in system test

2024-06-18 Thread Gaurav Narula (Jira)
Gaurav Narula created KAFKA-16990:
-

 Summary: Unrecognised flag passed to kafka-storage.sh in system 
test
 Key: KAFKA-16990
 URL: https://issues.apache.org/jira/browse/KAFKA-16990
 Project: Kafka
  Issue Type: Test
Reporter: Gaurav Narula


Running 
{{TC_PATHS="tests/kafkatest/tests/core/kraft_upgrade_test.py::TestKRaftUpgrade" 
bash tests/docker/run_tests.sh}} on trunk (c4a3d2475f) fails with the following:

{code:java}
[INFO:2024-06-18 09:16:03,139]: Triggering test 2 of 32...
[INFO:2024-06-18 09:16:03,147]: RunnerClient: Loading test {'directory': 
'/opt/kafka-dev/tests/kafkatest/tests/core', 'file_name': 
'kraft_upgrade_test.py', 'cls_name': 'TestKRaftUpgrade', 'method_name': 
'test_isolated_mode_upgrade', 'injected_args': {'from_kafka_version': '3.1.2', 
'use_new_coordinator': True, 'metadata_quorum': 'ISOLATED_KRAFT'}}
[INFO:2024-06-18 09:16:03,151]: RunnerClient: 
kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
 on run 1/1
[INFO:2024-06-18 09:16:03,153]: RunnerClient: 
kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
 Setting up...
[INFO:2024-06-18 09:16:03,153]: RunnerClient: 
kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
 Running...
[INFO:2024-06-18 09:16:05,999]: RunnerClient: 
kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
 Tearing down...
[INFO:2024-06-18 09:16:12,366]: RunnerClient: 
kafkatest.tests.core.kraft_upgrade_test.TestKRaftUpgrade.test_isolated_mode_upgrade.from_kafka_version=3.1.2.use_new_coordinator=True.metadata_quorum=ISOLATED_KRAFT:
 FAIL: RemoteCommandError({'ssh_config': {'host': 'ducker10', 'hostname': 
'ducker10', 'user': 'ducker', 'port': 22, 'password': '', 'identityfile': 
'/home/ducker/.ssh/id_rsa', 'connecttimeout': None}, 'hostname': 'ducker10', 
'ssh_hostname': 'ducker10', 'user': 'ducker', 'externally_routable_ip': 
'ducker10', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 
'/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
/mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
group.version=1', 1, b"usage: kafka-storage format [-h] --config CONFIG 
--cluster-id CLUSTER_ID\n                     
[--ignore-formatted]\nkafka-storage: error: unrecognized arguments: '-f'\n")
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
186, in _do_run
    data = self.run_test()
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 
246, in run_test
    return self.test_context.function(self.test)
  File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 
433, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", line 
132, in test_isolated_mode_upgrade
    self.run_upgrade(from_kafka_version, group_protocol)
  File "/opt/kafka-dev/tests/kafkatest/tests/core/kraft_upgrade_test.py", line 
96, in run_upgrade
    self.kafka.start()
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 669, in 
start
    self.isolated_controller_quorum.start()
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 671, in 
start
    Service.start(self, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", 
line 265, in start
    self.start_node(node, **kwargs)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 902, in 
start_node
    node.account.ssh(cmd)
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
line 35, in wrapper
    return method(self, *args, **kwargs)
  File 
"/usr/local/lib/python3.9/dist-packages/ducktape/cluster/remoteaccount.py", 
line 310, in ssh
    raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ducker@ducker10: Command 
'/opt/kafka-3.1.2/bin/kafka-storage.sh format --ignore-formatted --config 
/mnt/kafka/kafka.properties --cluster-id I2eXt9rvSnyhct8BYmW6-w -f 
group.version=1' returned non-zero exit status 1. Remote error message: 
b"usage: kafka-storage format [-h] --config CONFIG --cluster-id CLUSTER_ID\n    
                 [--ignore-formatted]\nkafka-storage: error: unrecognized 
arguments: '-f'\n" {code}

This may be related to KAFKA-16860



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-802: Validation Support for Kafka Connect SMT Options

2024-06-18 Thread Mickael Maison
Hi Gunnar,

I think this KIP would be a great addition to Kafka Connect but it
looks like it's been abandoned.

Are you still interested in working on this? If you need some time or
help, that's fine, just let us know.
If not, no worries, I'm happy to pick it up if needed.

Thanks,
Mickael

On Wed, Dec 22, 2021 at 11:21 AM Tom Bentley  wrote:
>
> Hi Gunnar,
>
> Thanks for the KIP, especially the careful reasoning about compatibility. I
> think this would be a useful improvement. I have a few observations, which
> are all about how we effectively communicate the contract to implementers:
>
> 1. I think it would be good for the Javadoc to give a bit more of a hint
> about what the validate(Map) method is supposed to do: At least call
> ConfigDef.validate(Map) with the provided configs (for implementers that
> can be achieved via super.validate()), and optionally apply extra
> validation for constraints that ConfigDef (and ConfigDef.Validator) cannot
> check. I think typically that would be where there's a dependency between
> two config parameters, e.g. if 'foo' is present that 'bar' must be too, or
> 'baz' and 'qux' cannot have the same value.
> 2. Can the Javadoc give a bit more detail about the return value of these
> new methods? I'm not sure that the implementer of a Transformation would
> necessarily know how the Config returned from validate(Map) might be
> "updated", or that updating ConfigValue's errorMessages is the right way to
> report config-specific errors. The KIP should be clear on how we expect
> implementers to report errors due to dependencies between multiple config
> parameters (must they be tied to a config parameter, or should the method
> throw, for example?). I think this is a bit awkward, actually, since the
> ConfigInfo structure used for the JSON REST response doesn't seem to have a
> nice way to represent errors which are not associated with a config
> parameter.
> 3. It might also be worth calling out that the expectation is that a
> successful return from the new validate() method should imply that
> configure(Map) will succeed (to do otherwise undermines the value of the
> validate endpoint). This makes me wonder about implementers, who might
> defensively program their configure(Map) method to implement the same
> checks. Therefore the contract should make clear that the Connect runtime
> guarantees that validate(Map) will be called before configure(Map).
>
> I don't really like the idea of implementing more-or-less the same default
> multiple times. Since these Transformation, Predicate etc will have a
> common contract wrt validate() and configure(), I wondered whether there
> was benefit in a common interface which Transformation etc could extend.
> It's a bit tricky because Connector and Converter are not Configurable.
> This was the best I could manage:
>
> ```
> interface ConfigValidatable {
>     /**
>      * Validate the given configuration values against the given
>      * configuration definitions.
>      * This method will be called prior to the invocation of any
>      * initializer method, such as {@link Connector#initialize(ConnectorContext)}
>      * or {@link Configurable#configure(Map)}, and should report any errors in the
>      * given configuration values using the errorMessages of the ConfigValues in
>      * the returned Config. If the Config returned by this method has no errors
>      * then the initializer method should not throw due to bad configuration.
>      *
>      * @param configDef the configuration definition, which may be null.
>      * @param configs the provided configuration values.
>      * @return The updated configuration information given the current
>      * configuration values
>      *
>      * @since 3.2
>      */
>     default Config validate(ConfigDef configDef, Map<String, String> configs) {
>         List<ConfigValue> configValues = configDef.validate(configs);
>         return new Config(configValues);
>     }
> }
> ```
>
> Note that the configDef is passed in, leaving it to the runtime to call
> `thing.config()` to get the ConfigDef instance and validate whether it is
> allowed to be null or not. The subinterfaces could override validate() to
> define what the "initializer method" is in their case, and to indicate
> whether configDef can actually be null.
>
> To be honest, I'm not really sure this is better, but I thought I'd suggest
> it to see what others thought.
>
> Kind regards,
>
> Tom
>
> On Tue, Dec 21, 2021 at 6:46 PM Chris Egerton 
> wrote:
>
> > Hi Gunnar,
> >
> > Thanks, this looks great. I'm ready to cast a non-binding on the vote
> > thread when it comes.
> >
> > One small non-blocking nit: I like that you call out that the new
> > validation steps will take place when a connector gets registered or
> > updated. IMO this is important enough to be included in the "Public
> > Interfaces" section as that type of preflight check is arguably more
> > important than the PUT /connector-plugins/{name}/config/validate endpoint,
> > when considering that use of the validation endpoint is str

[jira] [Resolved] (KAFKA-16941) Flaky test - testDynamicBrokerConfigUpdateUsingKraft [1] Type=Raft-Combined, MetadataVersion=4.0-IV0,Security=PLAINTEXT – kafka.admin.ConfigCommandIntegrationTest

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16941.

Resolution: Duplicate

This flaky test will be fixed by KAFKA-16939.

> Flaky test - testDynamicBrokerConfigUpdateUsingKraft [1] Type=Raft-Combined, 
> MetadataVersion=4.0-IV0,Security=PLAINTEXT – 
> kafka.admin.ConfigCommandIntegrationTest
> --
>
> Key: KAFKA-16941
> URL: https://issues.apache.org/jira/browse/KAFKA-16941
> Project: Kafka
>  Issue Type: Test
>Reporter: Igor Soarez
>Priority: Minor
>
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-16077/4/tests/
> {code:java}
> org.opentest4j.AssertionFailedError: Condition not met within timeout 5000. 
> [listener.name.internal.ssl.keystore.location] are not updated ==> expected: 
> <true> but was: <false>
>     at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>     at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>     at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
>     at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
>     at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
>     at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:367)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.verifyConfigDefaultValue(ConfigCommandIntegrationTest.java:519)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.deleteAndVerifyConfig(ConfigCommandIntegrationTest.java:514)
>     at 
> kafka.admin.ConfigCommandIntegrationTest.testDynamicBrokerConfigUpdateUsingKraft(ConfigCommandIntegrationTest.java:237)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3024

2024-06-18 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16989) Use StringBuilder instead of string concatenation

2024-06-18 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16989:
--

 Summary: Use StringBuilder instead of string concatenation
 Key: KAFKA-16989
 URL: https://issues.apache.org/jira/browse/KAFKA-16989
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


https://github.com/apache/kafka/blob/2fd00ce53678509c9f2cfedb428e37a871e3d530/metadata/src/main/java/org/apache/kafka/image/node/ClientQuotasImageNode.java#L130

String concatenation creates many intermediate String objects; we can reduce the 
cost by using StringBuilder.
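For illustration, a minimal self-contained sketch of the pattern the ticket suggests (the class and method names here are hypothetical, not code from the Kafka source tree):

```java
import java.util.List;

public class StringBuilderDemo {
    // Naive approach: each += allocates a new String and copies all prior
    // characters, so joining n parts does O(n^2) total work.
    public static String concatNaive(List<String> parts) {
        String out = "";
        for (String p : parts) {
            out += p;
        }
        return out;
    }

    // StringBuilder appends into a single growable buffer, keeping the
    // total work O(n) and avoiding the intermediate String allocations.
    public static String concatBuilder(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> parts = List.of("entity", "=", "clientQuotas");
        System.out.println(concatBuilder(parts)); // prints entity=clientQuotas
    }
}
```

Both methods return the same result; the difference is purely in allocation behavior, which is why this kind of change is usually invisible to callers.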



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Update docs for 3.7.1 [kafka-site]

2024-06-18 Thread via GitHub


soarez commented on code in PR #604:
URL: https://github.com/apache/kafka-site/pull/604#discussion_r1644550533


##
37/generated/connect_metrics.html:
##
@@ -1,5 +1,5 @@
-[2024-02-23 00:02:00,837] INFO Metrics scheduler closed 
(org.apache.kafka.common.metrics.Metrics:694)
-[2024-02-23 00:02:00,838] INFO Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:704)
+[2024-06-11 13:47:24,740] INFO Metrics scheduler closed 
(org.apache.kafka.common.metrics.Metrics:694)
+[2024-06-11 13:47:24,741] INFO Metrics reporters closed 
(org.apache.kafka.common.metrics.Metrics:704)

Review Comment:
   Thanks for pointing this out. I think this was fixed in apache/kafka#15473, 
so I'll backport that into the 3.7 branch.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



RE: [VOTE] KIP-910: Update Source offsets for Source Connectors without producing records

2024-06-18 Thread Andrei Rudkouski
+1 non-binding

Hi,
I faced a business requirement to update offsets for unproduced records. So I 
think it will be a good addon for Kafka Connect.

Best regards,
Andrei Rudkouski

On 2023/08/02 11:07:36 Sagar wrote:
> Hi All,
> 
> Calling a Vote on KIP-910 [1]. I feel we have converged to a reasonable
> design. Ofcourse I am open to any feedback/suggestions and would address
> them.
> 
> Thanks!
> Sagar.
> 

RE: Re: [VOTE] KIP-910: Update Source offsets for Source Connectors without producing records

2024-06-18 Thread Andrei Rudkouski
+1 non-binding

Hi,

I faced a business requirement to update offsets for unproduced records. So I 
think it will be a good addon for Kafka Connect.

Best regards,
Andrei Rudkouski

On 2024/06/06 14:07:16 Антон Левчук wrote:
> +1 non-binding
> 
> This will be a great addition to Kafka Connect
> 
> On Thu, Apr 25, 2024 at 2:41 PM Sagar  wrote:
> >
> > Hey All,
> >
> > Bumping the vote thread after a long time!
> >
> > Thanks!
> > Sagar.
> >
> > On Fri, Feb 2, 2024 at 4:24 PM Sagar  wrote:
> >
> > > Thanks Yash!
> > >
> > > I am hoping to have this released in 3.8 so it would be good to get the
> > > remaining 2 votes.
> > >
> > > Thanks!
> > > Sagar.
> > >
> > >
> > > On Tue, Jan 30, 2024 at 3:18 PM Yash Mayya  wrote:
> > >
> > >> Hi Sagar,
> > >>
> > >> Thanks for the KIP and apologies for the extremely long delay here! I
> > >> think
> > >> we could do with some wordsmithing on the Javadoc for the new
> > >> `SourceTask::updateOffsets` method but that can be taken care of in the
> > >> PR.
> > >>
> > >> +1 (binding)
> > >>
> > >> Thanks,
> > >> Yash
> > >>
> > >> On Wed, Nov 15, 2023 at 11:43 PM Sagar  wrote:
> > >>
> > >> > Hey all,
> > >> >
> > >> > Bumping this vote thread again after quite a while.
> > >> >
> > >> > Thanks!
> > >> > Sagar.
> > >> >
> > >> > On Wed, Sep 6, 2023 at 3:58 PM Sagar  wrote:
> > >> >
> > >> > > Hi All,
> > >> > >
> > >> > > Based on the latest discussion thread, it appears as if all open
> > >> > questions
> > >> > > have been answered.
> > >> > >
> > >> > > Hopefully now we are in a state where we can close out on the Voting
> > >> > > process.
> > >> > >
> > >> > > Thanks everyone for the great feedback.
> > >> > >
> > >> > > Thanks!
> > >> > > Sagar.
> > >> > >
> > >> > > On Fri, Aug 18, 2023 at 9:00 AM Sagar 
> > >> wrote:
> > >> > >
> > >> > >> Hi All,
> > >> > >>
> > >> > >> Bumping the voting thread again.
> > >> > >>
> > >> > >> Thanks!
> > >> > >> Sagar.
> > >> > >>
> > >> > >> On Wed, Aug 2, 2023 at 4:43 PM Sagar 
> > >> wrote:
> > >> > >>
> > >> > >>> Attaching the KIP link for reference:
> > >> > >>>
> > >> >
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-910%3A+Update+Source+offsets+for+Source+Connectors+without+producing+records
> > >> > >>>
> > >> > >>> Thanks!
> > >> > >>> Sagar.
> > >> > >>>
> > >> > >>> On Wed, Aug 2, 2023 at 4:37 PM Sagar 
> > >> > wrote:
> > >> > >>>
> > >> >  Hi All,
> > >> > 
> > >> >  Calling a Vote on KIP-910 [1]. I feel we have converged to a
> > >> > reasonable
> > >> >  design. Ofcourse I am open to any feedback/suggestions and would
> > >> > address
> > >> >  them.
> > >> > 
> > >> >  Thanks!
> > >> >  Sagar.
> > >> > 
> > >> > >>>
> > >> >
> > >>
> > >
> 

Re: [DISCUSS] KIP-891: Running multiple versions of Connector plugins

2024-06-18 Thread Snehashis
Hi Greg, Chris

Thanks for the in-depth discussion, I have a couple of discussion points
and would like your thoughts on this.

1) One concern I have with the new addition of 'soft' and 'hard' version
requirements is that there could be a mismatch in the plugin version that
two different tasks are running, if a soft requirement is provided and the
nodes a multi cluster deployment are not in sync w.r.t the plugin versions
that they are configured with. Note that if my assumptions are correct then
this can happen with the existing framework as well, or is there some
safeguard from this happening? So far, we could have pointed to the
misconfigured cluster configuration and somewhat deferred this problem to
something outside of the Connect runtime. With this feature in place perhaps
the expectation is more on connect to not be running with such
inconsistency, especially if a connector version is specified. This is also
a problem with validation if different cluster have different
configurations, as IIRC validations are local to the worker which receives
the rest call for validate. So, we might be validating with a certain
version which is different from the one that will be used to create
connector and tasks. Again, this is likely how the current state is, but
perhaps such inconsistencies warrant a deeper look with the addition of
this feature. The problems associated with them can be somewhat insidious
and hard to diagnose.

2) There was some discussion on the need for a new REST endpoint to provide
information on the versions of running connectors, and I think adding this
information via REST is a valuable addition. The way I see it the version
is an intrinsic property of an instance of a running connector and hence
this should be part of the set of APIs under /connector/
(also the /connectors API should also have this information as it is an
amalgamation of all the individual connector information). We can introduce
a new path under this for version (/connector/connector-name/version), but
perhaps adding this as part of the status is a valid alternative. This is
mentioned as a rejected alternative right now. Also, to go further I think
version information for tasks could also be available, especially if we
choose to not address the pitfalls discussed in my point 1), this will
at least provide admins a quick and easy way to determine whether such an
inconsistent state exists in any of the connectors.

Thanks again for reviving my original KIP and working to improve it.
Looking forward to your thoughts on the points mentioned above.
Regards
Snehashis


On Wed, May 29, 2024 at 9:59 PM Chris Egerton 
wrote:

> Hi Greg,
>
> First, an apology! I mistakenly assumed that each plugin appeared only once
> in the responses from GET /connector-plugins?connectorsOnly=false. Thank
> you for correcting me and pointing out that all versions of each plugin
> appear in that response, which does indeed satisfy my desire for users to
> discover this information in at most two REST requests (and in fact, does
> it in only one)!
>
> And secondly, with the revelation about recommenders, I agree that it's
> best to leave the "version" property out of the lists of properties
> returned from the GET /connector-plugins//config endpoint.
>
> With those two points settled, I think the only unresolved item is the
> small change to version parsing added to the KIP (where raw version numbers
> are treated as an exact match, instead of a best-effort match with a
> fallback on the default version). If the KIP is updated with that then I'd
> be ready to vote on it.
>
> Cheers,
>
> Chris
>
> On Wed, May 29, 2024 at 12:00 PM Greg Harris  >
> wrote:
>
> > Hey Chris,
> >
> > Thanks for your thoughts.
> >
> > > Won't it still only expose the
> > > latest version for each plugin, instead of the range of versions
> > available?
> >
> > Here is a snippet of the current output of the GET
> > /connector-plugins?connectorsOnly=false endpoint, after I installed two
> > versions of the debezium PostgresConnector:
> >
> >   {
> > "class": "io.debezium.connector.postgresql.PostgresConnector",
> > "type": "source",
> > "version": "2.0.1.Final"
> >   },
> >   {
> > "class": "io.debezium.connector.postgresql.PostgresConnector",
> > "type": "source",
> > "version": "2.6.1.Final"
> >   },
> >
> > I think this satisfies your requirement to learn about all plugins and
> all
> > versions in two or fewer REST calls.
> >
> > I tried to get an example of the output of `/config` by hardcoding the
> > Recommender, and realized that Recommenders aren't executed on the
> > `/config` endpoint at all: only during validation, when a configuration
> is
> > actually present.
> > And this led me to discover that the `/config` endpoint returns a
> > List<ConfigKeyInfo>, and ConfigKeyInfo does not contain a recommendedValues
> > field. The ConfigValue field is the object which contains
> > recommendedValues, and it is only generated during validation.
> > I think it's out of sco

[jira] [Resolved] (KAFKA-16988) InsufficientResourcesError in ConnectDistributedTest system test

2024-06-18 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat resolved KAFKA-16988.

Resolution: Fixed

> InsufficientResourcesError in ConnectDistributedTest system test
> 
>
> Key: KAFKA-16988
> URL: https://issues.apache.org/jira/browse/KAFKA-16988
> Project: Kafka
>  Issue Type: Bug
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Saw InsufficientResourcesError when running 
> `ConnectDistributedTest#test_exactly_once_source` system test.
>  
> {code:java}
> <testcase name="test_exactly_once_source_clean=False_connect_protocol=compatible_metadata_quorum=ZK_use_new_coordinator=False" classname="kafkatest.tests.connect.connect_distributed_test" time="403.812">
> <failure message="InsufficientResourcesError('linux nodes requested: 1. linux nodes available: 0')" type="exception">InsufficientResourcesError('linux nodes requested: 1. linux nodes available: 0')
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 186, in _do_run
>     data = self.run_test()
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", line 928, in test_exactly_once_source
>     consumer_validator = ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, consumer_timeout_ms=1000, print_key=True)
>   File "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in __init__
>     BackgroundThreadService.__init__(self, context, num_nodes)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py", line 26, in __init__
>     super(BackgroundThreadService, self).__init__(context, num_nodes, cluster_spec, *args, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 107, in __init__
>     self.allocate_nodes()
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 217, in allocate_nodes
>     self.nodes = self.cluster.alloc(self.cluster_spec)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", line 54, in alloc
>     allocated = self.do_alloc(cluster_spec)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/finite_subcluster.py", line 37, in do_alloc
>     good_nodes, bad_nodes = self._available_nodes.remove_spec(cluster_spec)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/node_container.py", line 131, in remove_spec
>     raise InsufficientResourcesError(err)
> ducktape.cluster.node_container.InsufficientResourcesError: linux nodes requested: 1. linux nodes available: 0
> <testcase name="test_exactly_once_source_clean=False_connect_protocol=sessioned_metadata_quorum=ZK_use_new_coordinator=False" classname="kafkatest.tests.connect.connect_distributed_test" time="376.160">
> <failure message="InsufficientResourcesError('linux nodes requested: 1. linux nodes available: 0')" type="exception">InsufficientResourcesError('linux nodes requested: 1. linux nodes available: 0')
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 186, in _do_run
>     data = self.run_test()
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 246, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/mark/_mark.py", line 433, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/connect/connect_distributed_test.py", line 928, in test_exactly_once_source
>     consumer_validator = ConsoleConsumer(self.test_context, 1, self.kafka, self.source.topic, consumer_timeout_ms=1000, print_key=True)
>   File "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 97, in __init__
>     BackgroundThreadService.__init__(self, context, num_nodes)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/background_thread.py", line 26, in __init__
>     super(BackgroundThreadService, self).__init__(context, num_nodes, cluster_spec, *args, **kwargs)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 107, in __init__
>     self.allocate_nodes()
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/services/service.py", line 217, in allocate_nodes
>     self.nodes = self.cluster.alloc(self.cluster_spec)
>   File "/usr/local/lib/python3.9/dist-packages/ducktape/cluster/cluster.py", line 54, in alloc
>     allocated = self.do_alloc(

[jira] [Resolved] (KAFKA-16958) add `STRICT_STUBS` to `EndToEndLatencyTest`, `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and `TopologyTest`

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16958.

Fix Version/s: 3.9.0
   Resolution: Fixed

> add `STRICT_STUBS` to `EndToEndLatencyTest`, 
> `OffsetCommitCallbackInvokerTest`, `ProducerPerformanceTest`, and 
> `TopologyTest`
> -
>
> Key: KAFKA-16958
> URL: https://issues.apache.org/jira/browse/KAFKA-16958
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: dujian0068
>Priority: Minor
> Fix For: 3.9.0
>
>
> They all need `@MockitoSettings(strictness = Strictness.STRICT_STUBS)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16547) add test for DescribeConfigsOptions#includeDocumentation

2024-06-18 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16547.

Fix Version/s: 3.9.0
   Resolution: Fixed

> add test for DescribeConfigsOptions#includeDocumentation
> 
>
> Key: KAFKA-16547
> URL: https://issues.apache.org/jira/browse/KAFKA-16547
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 3.9.0
>
>
> As the title says, we have no tests for this query option.
> If the option is configured to false, `ConfigEntry#documentation` should be 
> null. Otherwise, it should return the config documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
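For reference, a rough sketch of how the option under test is exercised from the Admin client (assuming `kafka-clients` is on the classpath and a broker is reachable; the bootstrap address and topic name are placeholders):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.clients.admin.DescribeConfigsOptions;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeConfigsDocExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address for illustration.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");

            // Ask the broker to include per-config documentation strings.
            DescribeConfigsOptions withDocs = new DescribeConfigsOptions().includeDocumentation(true);
            Map<ConfigResource, Config> result =
                    admin.describeConfigs(Collections.singleton(topic), withDocs).all().get();

            for (ConfigEntry entry : result.get(topic).entries()) {
                // With includeDocumentation(false) (the default), documentation() is null.
                System.out.printf("%s -> %s%n", entry.name(), entry.documentation());
            }
        }
    }
}
```

Running the same lookup without `includeDocumentation(true)` should yield `null` from `documentation()`, which is the behaviour the requested test would assert.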


Re: [VOTE] KIP-1049: Add config log.summary.interval.ms to Kafka Streams

2024-06-18 Thread Bruno Cadonna

Hi,

+1 (binding)

Since the voting was open for at least 72 hours and you got 4 binding +1 
and no -1, you can close the vote and open a PR.


Best,
Bruno

On 6/17/24 7:11 AM, jiang dou wrote:

Thank you for voting:
This KIP has received 3 binding votes. Can I assume that this KIP has
passed the vote and that development can start?

Lucas Brutschy wrote on Thursday, June 13, 2024 at 19:03:


+1 (binding)

thanks for the KIP!

On Thu, Jun 13, 2024 at 2:32 AM Matthias J. Sax  wrote:


+1 (binding)

On 6/11/24 1:17 PM, Sophie Blee-Goldman wrote:

+1 (binding)

Thanks for the KIP!

On Tue, Jun 11, 2024 at 5:37 AM jiang dou wrote:



Hi,
I would like to start a vote for KIP-1049: Add config
log.summary.interval.ms to Kafka Streams

KIP:



https://cwiki.apache.org/confluence/display/KAFKA/KIP-1049%3A+Add+config+log.summary.interval.ms+to+Kafka+Streams

Discussion thread:
https://lists.apache.org/thread/rjqslkt46y5zlg0552rloqjfm5ddzk06

Thanks

[PR] MINOR: update docs to 3.8 [kafka-site]

2024-06-18 Thread via GitHub


jlprat opened a new pull request, #608:
URL: https://github.com/apache/kafka-site/pull/608

   This patch adds the generated 3.8.0 release docs as mentioned in the 
[wiki](https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Websiteupdateprocess)
   
   Note that the Javadocs still have the `-SNAPSHOT` suffix; I'll update this once 
we have a final candidate.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Reopened] (KAFKA-16983) Generate the PR for Docker Official Images repo

2024-06-18 Thread Krish Vora (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Vora reopened KAFKA-16983:


> Generate the PR for Docker Official Images repo
> ---
>
> Key: KAFKA-16983
> URL: https://issues.apache.org/jira/browse/KAFKA-16983
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Krish Vora
>Assignee: Krish Vora
>Priority: Major
>
> # Run the {{docker/generate_kafka_pr_template.py}} script from trunk, by 
> providing it the image type. Update the existing entry. 
> {code:java}
> python generate_kafka_pr_template.py --image-type=jvm{code}
>  
>       2. Copy this to raise a new PR in [Docker Hub's Docker Official 
> Repo|https://github.com/docker-library/official-images/tree/master/library/kafka],
> which modifies the existing entry.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #182

2024-06-18 Thread Apache Jenkins Server
See