[jira] [Resolved] (KAFKA-15659) Flaky test RestoreIntegrationTest.shouldInvokeUserDefinedGlobalStateRestoreListener

2023-10-26 Thread Levani Kokhreidze (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Levani Kokhreidze resolved KAFKA-15659.
---
Resolution: Fixed

> Flaky test 
> RestoreIntegrationTest.shouldInvokeUserDefinedGlobalStateRestoreListener 
> 
>
> Key: KAFKA-15659
> URL: https://issues.apache.org/jira/browse/KAFKA-15659
> Project: Kafka
>  Issue Type: Test
>Reporter: Divij Vaidya
>Assignee: Levani Kokhreidze
>Priority: Major
>  Labels: flaky-test, streams
> Attachments: Screenshot 2023-10-20 at 13.19.20.png
>
>
> The test added in the PR [https://github.com/apache/kafka/pull/14519] 
> {{shouldInvokeUserDefinedGlobalStateRestoreListener}} has been flaky since it 
> was added. You can find the flaky build on trunk using the link 
> [https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.tags=trunk&search.timeZoneId=Europe%2FBerlin&tests.container=org.apache.kafka.streams.integration.RestoreIntegrationTest&tests.test=shouldInvokeUserDefinedGlobalStateRestoreListener()]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-974 Docker Image for GraalVM based Native Kafka Broker

2023-10-26 Thread Krishna Agarwal
Hi Manikumar,
Thanks for the feedback.

This image signifies two things:

   1. The image is intended for local development and testing purposes,
   with fast startup times. (kafka-local)
   2. To achieve (1), we are providing a native executable for Apache
   Kafka in the docker image. (kafka-native)

While "graalvm" is the underlying tool enabling this, I'm unsure if we
should explicitly mention it in the name.
I'd love to hear your thoughts on this. Do you prefer "kafka-native"
instead of "kafka-local"?

Regards,
Krishna

On Fri, Oct 20, 2023 at 3:32 PM Manikumar  wrote:

> Hi,
>
> > For the native AK docker image, we are considering '*kafka-local*' as it
> clearly signifies that this image is intended exclusively for local
>
> I am not sure if there is any naming pattern for graalvm-based images. Can
> we include "graalvm" in the image name, like "kafka-graalvm-native"?
> This will clearly indicate that this is a graalvm-based image.
>
>
> Thanks. Regards
>
>
>
>
> On Wed, Oct 18, 2023 at 9:26 PM Krishna Agarwal <
> krishna0608agar...@gmail.com> wrote:
>
> > Hi Federico,
> > Thanks for the feedback and apologies for the delay.
> >
> > I've included a section in the KIP on the release process. I would
> greatly
> > appreciate your insights after reviewing it.
> >
> > Regards,
> > Krishna
> >
> > On Fri, Sep 8, 2023 at 3:08 PM Federico Valeri 
> > wrote:
> >
> > > Hi Krishna, thanks for opening this discussion.
> > >
> > > I see you created two separate KIPs (974 and 975), but there are some
> > > common points (build system and test plan).
> > >
> > > Currently, the Docker image used for system tests is only supported in
> > > that limited scope, so the maintenance burden is minimal. Providing
> > > official Kafka images would be much more complicated. Have you
> > > considered how the image rebuild process would work in case a
> > > high-severity CVE comes out for a non-Kafka image dependency? In that
> > > case, there would be no Kafka release.
> > >
> > > Br
> > > Fede
> > >
> > > On Fri, Sep 8, 2023 at 9:17 AM Krishna Agarwal
> > >  wrote:
> > > >
> > > > Hi,
> > > > I want to submit a KIP to deliver an experimental Apache Kafka docker
> > > image.
> > > > The proposed docker image can launch brokers with sub-second startup
> > time
> > > > and minimal memory footprint by leveraging a GraalVM based native
> Kafka
> > > > binary.
> > > >
> > > > KIP-974: Docker Image for GraalVM based Native Kafka Broker
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-974%3A+Docker+Image+for+GraalVM+based+Native+Kafka+Broker
> > > >
> > > >
> > > > Regards,
> > > > Krishna
> > >
> >
>


[jira] [Created] (KAFKA-15701) Allow use of user policy in CreateTopicPolicy

2023-10-26 Thread Jiao Zhang (Jira)
Jiao Zhang created KAFKA-15701:
--

 Summary: Allow use of user policy in CreateTopicPolicy 
 Key: KAFKA-15701
 URL: https://issues.apache.org/jira/browse/KAFKA-15701
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiao Zhang


One use case of CreateTopicPolicy we have experienced is allowing or rejecting 
topic creation by checking the user.

Especially for secured cluster usage, we add ACLs to specific users to allow 
topic creation. At the same time, we need to design customized create-topic 
policies for different users. For example, for user A, topic creation is 
allowed only when the partition number is within a limit. For user B, we allow 
topic creation without any check. As the Kafka service provider, we regard 
user A as a random user of the Kafka service and user B as an internal user 
for cluster management.

For this need, we patched our local fork of Kafka to pass the user principal 
in KafkaApis.

One place that needs revision is here: 
[https://github.com/apache/kafka/blob/834f72b03de40fb47caaad1397ed061de57c2509/core/src/main/scala/kafka/server/KafkaApis.scala#L1980]

As it seems natural to support this kind of usage upstream, I raised this 
Jira to ask for the community's ideas.
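
For illustration, a rough sketch of a CreateTopicPolicy that enforces a 
partition limit against today's interface, with a comment marking where the 
per-user branch would go once the principal is available; the config key, 
limit, and partition default are made up for this sketch.

{code:java}
import java.util.Map;

import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

public class PartitionLimitCreateTopicPolicy implements CreateTopicPolicy {
    private int maxPartitions;

    @Override
    public void configure(Map<String, ?> configs) {
        // Made-up config key for this sketch.
        Object limit = configs.get("example.create.topic.policy.max.partitions");
        maxPartitions = limit == null ? 16 : Integer.parseInt(limit.toString());
    }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        // The per-user branch would go here once RequestMetadata exposes the
        // requesting principal (e.g. a hypothetical requestMetadata.principal()),
        // which is exactly what this Jira asks for. Today only topic, partition,
        // replication and config data is available.
        Integer partitions = requestMetadata.numPartitions();
        if (partitions != null && partitions > maxPartitions) {
            throw new PolicyViolationException("Topic " + requestMetadata.topic()
                    + " requests " + partitions + " partitions, more than the allowed "
                    + maxPartitions);
        }
    }

    @Override
    public void close() {
    }
}
{code}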



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2333

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 322688 lines...]

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseIntegerPartitionValue PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseIntegerOffsetValue STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseIntegerOffsetValue PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseStringOffsetValue STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseStringOffsetValue PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseEmptyPartitionOffsetMap STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseEmptyPartitionOffsetMap PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseStringPartitionValue STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseStringPartitionValue PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testConsumerGroupOffsetsToConnectorOffsets STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testConsumerGroupOffsetsToConnectorOffsets PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testValidateAndParseInvalidOffset 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testValidateAndParseInvalidOffset 
PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullOffset STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullOffset PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullPartition STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullPartition PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullTopic STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > testNullTopic PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseInvalidPartition STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.SinkUtilsTest > 
testValidateAndParseInvalidPartition PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingTopicCleanupPolicyShouldReturnFalseWhenBrokerVersionIsUnsupported 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingTopicCleanupPolicyShouldReturnFalseWhenBrokerVersionIsUnsupported 
PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWithNonRetriableWhenUnknownErrorOccurs STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWithNonRetriableWhenUnknownErrorOccurs PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingTopicCleanupPolicyShouldFailWhenTopicHasDeleteAndCompactPolicy STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingTopicCleanupPolicyShouldFailWhenTopicHasDeleteAndCompactPolicy PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 47 > 
org.apache.kafka.conn

[VOTE] KIP-975: Docker Image for Apache Kafka

2023-10-26 Thread Krishna Agarwal
Hi,
I'd like to call a vote on KIP-975 which aims to publish an official docker
image for Apache Kafka.

KIP -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka

Discussion thread -
https://lists.apache.org/thread/3g43hps2dmkyxgglplrlwpsf7vkywkyy

Regards,
Krishna


[jira] [Resolved] (KAFKA-15390) FetchResponse.preferredReplica may contains fenced replica in KRaft mode

2023-10-26 Thread Deng Ziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deng Ziming resolved KAFKA-15390.
-
Fix Version/s: 3.6.0
   Resolution: Fixed

> FetchResponse.preferredReplica may contains fenced replica in KRaft mode
> 
>
> Key: KAFKA-15390
> URL: https://issues.apache.org/jira/browse/KAFKA-15390
> Project: Kafka
>  Issue Type: Bug
>Reporter: Deng Ziming
>Assignee: Deng Ziming
>Priority: Major
> Fix For: 3.6.0
>
>
> `KRaftMetadataCache.getPartitionReplicaEndpoints` will return a fenced broker 
> id.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2332

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 319922 lines...]
Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testEmptyWrite() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testEmptyWrite() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateTopicConfigs() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateTopicConfigs() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateEmptyZk() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testMigrateEmptyZk() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAbsentController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testClaimAbsentController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testIdempotentCreateTopics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testIdempotentCreateTopics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testCreateNewTopic() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testCreateNewTopic() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testConnection() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testConnection() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:test > Gradle Test Executor 93 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:test > Grad

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2331

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 426588 lines...]

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatWithValidFormat STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatWithValidFormat PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonStringKeys STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonStringKeys PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithElementsOfWrongType STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithElementsOfWrongType PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccess

Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-26 Thread Hao Li
Thanks for the KIP Hanyu! One question: why not return an iterator of
`ValueAndTimestamp` for `TimestampedKeyQuery`? I suppose for a
ts-kv-store, there could be multiple timestamps associated with the same
key?

Hao

On Thu, Oct 26, 2023 at 10:23 AM Matthias J. Sax  wrote:

> Would we really get a ClassCastException?
>
>  From my understanding, the store would reject the query as unsupported
> and thus the returned `QueryResult` object would have it's internal flag
> set to indicate the failure, but no exception would be thrown directly?
>
> (Of course, there might be an exception thrown to the user if they don't
> check `isSuccess()` flag but call `getResult()` directly.)
>
>
> -Matthias
>
> On 10/25/23 8:55 AM, Hanyu (Peter) Zheng wrote:
> > Hi, Bill,
> > Thank you for your reply. Yes, now, if a user executes a timestamped
> query
> > against a non-timestamped store, It will throw ClassCastException.
> > If a user uses KeyQuery to query kv-store or ts-kv-store, it always
> return
> > V.  If a user uses TimestampedKeyQuery to query kv-store, it will throw a
> > exception, so TimestampedKeyQuery query can only query ts-kv-store and
> > return ValueAndTimestamp object in the end.
> >
> > Sincerely,
> > Hanyu
> >
> > On Wed, Oct 25, 2023 at 8:51 AM Hanyu (Peter) Zheng  >
> > wrote:
> >
> >> Thank you Lucas,
> >>
> >> I will fix the capitalization.
> >> When a user executes a timestamped query against a non-timestamped
> store,
> >> It will throw ClassCastException.
> >>
> >> Sincerely,
> >> Hanyu
> >>
> >> On Tue, Oct 24, 2023 at 1:36 AM Lucas Brutschy
> >>  wrote:
> >>
> >>> Hi Hanyu,
> >>>
> >>> reading the KIP, I was wondering the same thing as Bill.
> >>>
> >>> Other than that, this looks good to me. Thanks for KIP.
> >>>
> >>> nit: you have method names `LowerBound` and `UpperBound`, where you
> >>> probably want to fix the capitalization.
> >>>
> >>> Cheers,
> >>> Lucas
> >>>
> >>> On Mon, Oct 23, 2023 at 5:46 PM Bill Bejeck  wrote:
> 
>  Hey Hanyu,
> 
>  Thanks for the KIP, it's a welcomed addition.
>  Overall, the KIP looks good to me, I just have one comment.
> 
>  Can you discuss the expected behavior when a user executes a
> timestamped
>  query against a non-timestamped store?  I think it should throw an
>  exception vs. using some default value.
>  If it's the case that Kafka Stream wraps all stores in a
>  `TimestampAndValue` store and returning a plain `V` or a
>  `TimestampAndValue` object depends on the query type, then it would
> >>> be
>  good to add those details to the KIP.
> 
>  Thanks,
>  Bill
> 
> 
> 
>  On Fri, Oct 20, 2023 at 5:07 PM Hanyu (Peter) Zheng
>   wrote:
> 
> > Thank you Matthias,
> >
> > I will modify the KIP to eliminate this restriction.
> >
> > Sincerely,
> > Hanyu
> >
> > On Fri, Oct 20, 2023 at 2:04 PM Hanyu (Peter) Zheng <
> >>> pzh...@confluent.io>
> > wrote:
> >
> >> Thank you Alieh,
> >>
> >> In these two new query types, I will remove 'get' from all getter
> >>> method
> >> names.
> >>
> >> Sincerely,
> >> Hanyu
> >>
> >> On Fri, Oct 20, 2023 at 10:40 AM Matthias J. Sax 
> > wrote:
> >>
> >>> Thanks for the KIP Hanyu,
> >>>
> >>> One questions:
> >>>
>  To address this inconsistency, we propose that KeyQuery  should
> >>> be
> >>> restricted to querying kv-stores  only, ensuring that it always
> >>> returns
> > a
> >>> plain V  type, making the behavior of the aforementioned code more
> >>> predictable. Similarly, RangeQuery  should be dedicated to querying
> >>> kv-stores , consistently returning only the plain V .
> >>>
> >>> Why do you want to restrict `KeyQuery` and `RangeQuery` to
> >>> kv-stores? I
> >>> think it would be possible to still allow both queries for
> >>> ts-kv-stores,
> >>> but change the implementation to return "plain V" instead of
> >>> `ValueAndTimestamp`, ie, the implementation would automatically
> >>> unwrap the value.
> >>>
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 10/20/23 2:32 AM, Alieh Saeedi wrote:
>  Hey Hanyu,
> 
>  Thanks for the KIP. It seems good to me.
>  Just one point: AFAIK, we are going to remove "get" from the
> >>> name of
> > all
>  getter methods.
> 
>  Cheers,
>  Alieh
> 
>  On Thu, Oct 19, 2023 at 5:44 PM Hanyu (Peter) Zheng
>   wrote:
> 
> > Hello everyone,
> >
> > I would like to start the discussion for KIP-992: Proposal to
> > introduce
> > IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery
> >
> > The KIP can be found here:
> >
> >
> >>>
> >
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKey

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.5 #90

2023-10-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #102

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 309389 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldNotResumeStandbyTaskInRemovedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldNotResumeStandbyTaskInRemovedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldReturnTrueForRestoreActiveTasksIfTaskPaused() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldReturnTrueForRestoreActiveTasksIfTaskPaused() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldDrainRestoredActiveTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldDrainRestoredActiveTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldRemovePausedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldRemovePausedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldRemoveTasksFromAndClearInputQueueOnShutdown() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldRemoveTasksFromAndClearInputQueueOnShutdown() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldThrowIfAddingActiveTasksWithSameId() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldThrowIfAddingActiveTasksWithSameId() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > 
shouldRestoreActiveStatefulTasksAndUpdateStandbyTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > 
shouldRestoreActiveStatefulTasksAndUpdateStandbyTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldPauseStandbyTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldPauseStandbyTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldProcessTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldProcessTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() STARTED

Exception: java.lang.AssertionError thrown from the UncaughtExceptionHandler in 
thread "TaskExecutor"

Gradle Test Run :streams:test > Gradle Test Executor 88 > 
DefaultTaskExecutorTest > 
sh

[jira] [Created] (KAFKA-15700) FetchFromFollowerIntegrationTest is flaky

2023-10-26 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15700:
--

 Summary: FetchFromFollowerIntegrationTest is flaky
 Key: KAFKA-15700
 URL: https://issues.apache.org/jira/browse/KAFKA-15700
 Project: Kafka
  Issue Type: Bug
Reporter: Calvin Liu


It may be related to an inappropriate timeout.

testRackAwareRangeAssignor(String).quorum=zk
{code:java}
java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
    at integration.kafka.server.FetchFromFollowerIntegrationTest.$anonfun$testRackAwareRangeAssignor$13(FetchFromFollowerIntegrationTest.scala:229)
    at integration.kafka.server.FetchFromFollowerIntegrationTest.$anonfun$testRackAwareRangeAssignor$13$adapted(FetchFromFollowerIntegrationTest.scala:228)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15699) MirrorConnectorsIntegrationBaseTest is flaky

2023-10-26 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15699:
--

 Summary: MirrorConnectorsIntegrationBaseTest is flaky
 Key: KAFKA-15699
 URL: https://issues.apache.org/jira/browse/KAFKA-15699
 Project: Kafka
  Issue Type: Bug
Reporter: Calvin Liu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15698) KRaft mode brokers should clean up stray partitions from migration

2023-10-26 Thread David Arthur (Jira)
David Arthur created KAFKA-15698:


 Summary: KRaft mode brokers should clean up stray partitions from 
migration
 Key: KAFKA-15698
 URL: https://issues.apache.org/jira/browse/KAFKA-15698
 Project: Kafka
  Issue Type: Improvement
Reporter: David Arthur


Follow up to KAFKA-15605. After the brokers are migrated to KRaft and the 
migration is completed, we should let the brokers clean up any partitions that 
we marked as "stray" during the migration. This would be any partition that was 
being deleted when the migration began, or any partition that was deleted, but 
not seen as deleted by StopReplica (e.g., broker down).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #101

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 411102 lines...]
Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatusValue PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateShouldOverridePreviousState STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateShouldOverridePreviousState PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateNonRetriableFailure STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateNonRetriableFailure PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatWithValidFormat STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatWithValidFormat PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonStringKeys STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonStringKeys PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithElementsOfWrongType STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithElementsOfWrongType PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
PASSED

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2330

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 373707 lines...]
Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
throwsWithApiVersionMismatchOnDescribe PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
returnEmptyWithApiVersionMismatchOnCreate STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
returnEmptyWithApiVersionMismatchOnCreate PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWhenAnyTopicPartitionHasError STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWhenAnyTopicPartitionHasError PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldReturnEmptyMapWhenPartitionsSetIsNull STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldReturnEmptyMapWhenPartitionsSetIsNull PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWithPartitionsWhenItDoesNotExist STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWithPartitionsWhenItDoesNotExist PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWithTimeoutExceptionWhenTimeoutErrorOccurs STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldFailWithTimeoutExceptionWhenTimeoutErrorOccurs PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
throwsWithTopicAuthorizationFailureOnDescribe STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
throwsWithTopicAuthorizationFailureOnDescribe PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingGettingTopicCleanupPolicies STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
verifyingGettingTopicCleanupPolicies PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
throwsWithClusterAuthorizationFailureOnDescribe STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
throwsWithClusterAuthorizationFailureOnDescribe PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldReturnOffsetsForOnePartition STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
endOffsetsShouldReturnOffsetsForOnePartition PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
describeTopicConfigShouldReturnEmptyMapWhenClusterAuthorizationFailure STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
describeTopicConfigShouldReturnEmptyMapWhenClusterAuthorizationFailure PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
returnEmptyWithClusterAuthorizationFailureOnCreate STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
returnEmptyWithClusterAuthorizationFailureOnCreate PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWithDefaultPartitionsAndReplicationFactorWhenItDoesNotExist 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 45 > 
org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWithDefaultPartitionsAndReplicationFactorWhenItDoesNotExist 
PASSED

Gradle Test Run :connect:runtime:test > Grad

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #89

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 469963 lines...]
Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithVersionedStores[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerWithVersionedStores[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeft[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeft[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithVersionedStores[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithVersionedStores[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithRightVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithRightVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithVersionedStores[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithVersionedStores[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithLeftVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterWithLeftVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithRightVersionedOnly[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftWithRightVersionedOnly[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerOuter[caching
 enabled = true] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerOuter[caching
 enabled = true] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 183 > 
TableTableJoinIntegrationTest > [caching enabled = true] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerLeft[caching
 ena

[jira] [Created] (KAFKA-15697) Add local assignor and ensure it cannot be used with server side assignor

2023-10-26 Thread Philip Nee (Jira)
Philip Nee created KAFKA-15697:
--

 Summary: Add local assignor and ensure it cannot be used with 
server side assignor
 Key: KAFKA-15697
 URL: https://issues.apache.org/jira/browse/KAFKA-15697
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Philip Nee


When we start supporting a local/client-side assignor, we should:
 # Add the config to ConsumerConfig
 # Examine where we should implement the logic to ensure it is not used 
alongside the server-side assignor, i.e. you can only specify a local or a 
remote assignor, or neither (see the sketch below).
 ## If both assignors are specified: throw IllegalArgumentException
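
A minimal sketch of that validation; both config keys below are placeholders 
used only for illustration, not the names the implementation will necessarily 
define:

{code:java}
import java.util.Map;

// Sketch only: the two config names below are placeholders for this example.
public final class AssignorConfigValidation {
    static final String LOCAL_ASSIGNOR_CONFIG = "group.local.assignor";
    static final String REMOTE_ASSIGNOR_CONFIG = "group.remote.assignor";

    static void validateAssignorConfigs(Map<String, Object> configs) {
        Object local = configs.get(LOCAL_ASSIGNOR_CONFIG);
        Object remote = configs.get(REMOTE_ASSIGNOR_CONFIG);
        if (local != null && remote != null) {
            // Only a local assignor, a remote assignor, or neither may be configured.
            throw new IllegalArgumentException(
                    "Only one of " + LOCAL_ASSIGNOR_CONFIG + " and " + REMOTE_ASSIGNOR_CONFIG
                            + " may be set, not both");
        }
    }
}
{code}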



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-26 Thread Matthias J. Sax

Would we really get a ClassCastException?

From my understanding, the store would reject the query as unsupported 
and thus the returned `QueryResult` object would have it's internal flag 
set to indicate the failure, but no exception would be thrown directly?


(Of course, there might be an exception thrown to the user if they don't 
check `isSuccess()` flag but call `getResult()` directly.)



-Matthias
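
For illustration, a minimal sketch of the caller-side pattern described above, 
using the existing IQv2 classes (KeyQuery, StateQueryRequest, QueryResult); the 
store name and key/value types are assumptions for the sketch:

{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.query.KeyQuery;
import org.apache.kafka.streams.query.QueryResult;
import org.apache.kafka.streams.query.StateQueryRequest;
import org.apache.kafka.streams.query.StateQueryResult;

public class KeyQueryExample {
    // Queries a store and checks the failure flag rather than relying on the
    // query call itself throwing.
    static String queryValue(KafkaStreams streams, String key) {
        StateQueryResult<String> result = streams.query(
                StateQueryRequest.inStore("my-store")
                        .withQuery(KeyQuery.<String, String>withKey(key)));

        QueryResult<String> partitionResult = result.getOnlyPartitionResult();
        if (!partitionResult.isSuccess()) {
            // An unsupported query is reported here as a failure reason; calling
            // getResult() without this check is what surfaces an exception.
            throw new IllegalStateException(
                    partitionResult.getFailureReason() + ": " + partitionResult.getFailureMessage());
        }
        return partitionResult.getResult();
    }
}
{code}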

On 10/25/23 8:55 AM, Hanyu (Peter) Zheng wrote:

Hi, Bill,
Thank you for your reply. Yes, now, if a user executes a timestamped query
against a non-timestamped store, It will throw ClassCastException.
If a user uses KeyQuery to query kv-store or ts-kv-store, it always return
V.  If a user uses TimestampedKeyQuery to query kv-store, it will throw a
exception, so TimestampedKeyQuery query can only query ts-kv-store and
return ValueAndTimestamp object in the end.

Sincerely,
Hanyu

On Wed, Oct 25, 2023 at 8:51 AM Hanyu (Peter) Zheng 
wrote:


Thank you Lucas,

I will fix the capitalization.
When a user executes a timestamped query against a non-timestamped store,
It will throw ClassCastException.

Sincerely,
Hanyu

On Tue, Oct 24, 2023 at 1:36 AM Lucas Brutschy
 wrote:


Hi Hanyu,

reading the KIP, I was wondering the same thing as Bill.

Other than that, this looks good to me. Thanks for KIP.

nit: you have method names `LowerBound` and `UpperBound`, where you
probably want to fix the capitalization.

Cheers,
Lucas

On Mon, Oct 23, 2023 at 5:46 PM Bill Bejeck  wrote:


Hey Hanyu,

Thanks for the KIP, it's a welcomed addition.
Overall, the KIP looks good to me, I just have one comment.

Can you discuss the expected behavior when a user executes a timestamped
query against a non-timestamped store?  I think it should throw an
exception vs. using some default value.
If it's the case that Kafka Stream wraps all stores in a
`TimestampAndValue` store and returning a plain `V` or a
`TimestampAndValue` object depends on the query type, then it would be
good to add those details to the KIP.

Thanks,
Bill



On Fri, Oct 20, 2023 at 5:07 PM Hanyu (Peter) Zheng
 wrote:


Thank you Matthias,

I will modify the KIP to eliminate this restriction.

Sincerely,
Hanyu

On Fri, Oct 20, 2023 at 2:04 PM Hanyu (Peter) Zheng <pzh...@confluent.io> wrote:

Thank you Alieh,

In these two new query types, I will remove 'get' from all getter method
names.

Sincerely,
Hanyu

On Fri, Oct 20, 2023 at 10:40 AM Matthias J. Sax  wrote:



Thanks for the KIP Hanyu,

One questions:


To address this inconsistency, we propose that KeyQuery should be
restricted to querying kv-stores only, ensuring that it always returns a
plain V type, making the behavior of the aforementioned code more
predictable. Similarly, RangeQuery should be dedicated to querying
kv-stores, consistently returning only the plain V.

Why do you want to restrict `KeyQuery` and `RangeQuery` to kv-stores? I
think it would be possible to still allow both queries for ts-kv-stores,
but change the implementation to return "plain V" instead of
`ValueAndTimestamp`, ie, the implementation would automatically
unwrap the value.



-Matthias

On 10/20/23 2:32 AM, Alieh Saeedi wrote:

Hey Hanyu,

Thanks for the KIP. It seems good to me.
Just one point: AFAIK, we are going to remove "get" from the name of all
getter methods.

Cheers,
Alieh

On Thu, Oct 19, 2023 at 5:44 PM Hanyu (Peter) Zheng
 wrote:


Hello everyone,

I would like to start the discussion for KIP-992: Proposal to introduce
IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery


Any suggestions are more than welcome.

Many thanks,
Hanyu

On Thu, Oct 19, 2023 at 8:17 AM Hanyu (Peter) Zheng <pzh...@confluent.io> wrote:

Re: [VOTE] KIP-988 Streams StandbyUpdateListener

2023-10-26 Thread Matthias J. Sax

+1 (binding)

On 10/25/23 4:06 PM, Sophie Blee-Goldman wrote:

Happy to see this -- that's a +1 (binding) from me

On Mon, Oct 23, 2023 at 6:33 AM Bill Bejeck  wrote:


This is a great addition

+1(binding)

-Bill

On Fri, Oct 20, 2023 at 2:29 PM Almog Gavra  wrote:


+1 (non-binding) - great improvement, thanks Colt & Eduwer!

On Tue, Oct 17, 2023 at 11:25 AM Guozhang Wang <guozhang.wang...@gmail.com> wrote:


+1 from me.

On Mon, Oct 16, 2023 at 1:56 AM Lucas Brutschy
 wrote:


Hi,

thanks again for the KIP!

+1 (binding)

Cheers,
Lucas



On Sun, Oct 15, 2023 at 9:13 AM Colt McNealy  wrote:

Hello there,

I'd like to call a vote on KIP-988 (co-authored by my friend and colleague
Eduwer Camacaro). We are hoping to get it in before the 3.7.0 release.








https://cwiki.apache.org/confluence/display/KAFKA/KIP-988%3A+Streams+Standby+Task+Update+Listener


Cheers,
Colt McNealy

*Founder, LittleHorse.dev*










Re: [DISCUSS] KIP-988 Streams Standby Task Update Listener

2023-10-26 Thread Matthias J. Sax

Thanks. SGTM.

On 10/25/23 4:06 PM, Sophie Blee-Goldman wrote:

That all sounds good to me! Thanks for the KIP

On Wed, Oct 25, 2023 at 3:47 PM Colt McNealy  wrote:


Hi Sophie, Matthias, Bruno, and Eduwer—

Thanks for your patience as I have been scrambling to catch up after a week
of business travel (and a few days with no time to code). I'd like to tie
up some loose ends here, but in short, I don't think the KIP document
itself needs any changes (our internal implementation does, however).

1. In the interest of a) not changing the KIP after it's already out for a
vote, and b) making sure our English grammar is "correct", let's stick with
'onBatchLoaded()`. It is the Store that gets updated, not the Batch.

2. For me (and, thankfully, the community as well) adding a remote network
call at any point in this KIP is a non-starter. We'll ensure that
our implementation does not introduce one.

3. I really don't like changing API behavior, even if it's not documented
in the javadoc. As such, I am strongly against modifying the behavior of
endOffsets() on the consumer as some people may implicitly depend on the
contract.
3a. The Consumer#currentLag() method gives us exactly what we want without
a network call (current lag from a cache, from which we can compute the
offset).

4. I have no opinion about whether we should pass endOffset or currentLag
to the callback. Either one has the same exact information inside it. In
the interest of not changing the KIP after the vote has started, I'll leave
it as endOffset.
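
To illustrate 3a, here is a minimal sketch (not part of the KIP) of deriving
the end offset from the cached lag, assuming the surrounding restore code has
access to the standby consumer and the changelog partition:

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;
    import java.util.OptionalLong;

    final class EndOffsetFromLag {
        // currentLag() is answered from the consumer's local cache, so the end offset
        // can be derived without an extra network round trip; position() is valid
        // whenever a lag is reported for an assigned partition.
        static OptionalLong endOffset(final Consumer<byte[], byte[]> consumer,
                                      final TopicPartition changelog) {
            final OptionalLong lag = consumer.currentLag(changelog);
            return lag.isPresent()
                    ? OptionalLong.of(consumer.position(changelog) + lag.getAsLong())
                    : OptionalLong.empty();
        }
    }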

As such, I believe the KIP doesn't need any updates, nor has it been
updated since the vote started.

Would anyone else like to discuss something before the Otter Council
adjourns regarding this matter?

Cheers,
Colt McNealy

*Founder, LittleHorse.dev*


On Mon, Oct 23, 2023 at 10:44 PM Sophie Blee-Goldman <sop...@responsive.dev> wrote:


Just want to checkpoint the current state of this KIP and make sure we're
on track to get it in to 3.7 (we still have a few weeks)  -- looks like
there are two remaining open questions, both relating to the
middle/intermediate callback:

1. What to name it: seems like the primary candidates are onBatchLoaded and
onBatchUpdated (and maybe also onStandbyUpdated?)
2. What additional information can we pass in that would strike a good
balance between being helpful and impacting performance.

Regarding #1, I think all of the current options are reasonable enough that
we should just let Colt decide which he prefers. I personally think
#onBatchUpdated is fine -- Bruno does make a fair point but the truth is
that English grammar can be sticky and while it could be argued that it is
the store which is updated, not the batch, I feel that it is perfectly
clear what is meant by "onBatchUpdated" and to me, this doesn't sound weird
at all. That's just my two cents in case it helps, but again, whatever
makes sense to you Colt is fine

When it comes to #2 -- as much as I would love to dig into the Consumer
client lore and see if we can modify existing APIs or add new ones in order
to get the desired offset metadata in an efficient way, I think we're
starting to go down a rabbit hole that is going to expand the scope way
beyond what Colt thought he was signing up for. I would advocate to focus
on just the basic feature for now and drop the end-offset from the
callback. Once we have a standby listener it will be easy to expand on with
a followup KIP if/when we find an efficient way to add additional useful
information. I think it will also become more clear what is and isn't
useful after more people get to using it in the real world

Colt/Eduwer: how necessary is receiving the end offset during a batch
update to your own application use case?

Also, for those who really do need to check the current end offset, I
believe in theory you should be able to use the KafkaStreams#metrics API to
get the current lag and/or end offset for the changelog -- it's possible
this does not represent the most up-to-date end offset (I'm not sure it
does or does not), but it should be close enough to be reliable and useful
for the purpose of monitoring -- I mean it is a metric, after all.
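
To make that concrete, here is a rough sketch of pulling the per-partition lag
out of KafkaStreams#metrics; the metric name/group/tags used are the standard
consumer fetch-manager ones, and it is an assumption that they cover the
changelog partitions of interest in every deployment:

    import java.util.Map;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.streams.KafkaStreams;

    final class ChangelogLagProbe {
        // Prints the cached records-lag for every partition the embedded consumers report on.
        static void printRecordsLag(final KafkaStreams streams) {
            for (final Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
                final MetricName name = entry.getKey();
                if ("records-lag".equals(name.name())
                        && "consumer-fetch-manager-metrics".equals(name.group())) {
                    System.out.printf("%s-%s lag=%s%n",
                            name.tags().get("topic"),
                            name.tags().get("partition"),
                            entry.getValue().metricValue());
                }
            }
        }
    }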

Hope this helps -- in the end, it's up to you (Colt) to decide what you
want to bring in scope or not. We still have more than 3 weeks until the
KIP freeze as currently proposed, so in theory you could even implement
this KIP without the end offset and then do a followup KIP to add the end
offset within the same release, ie without any deprecations. There are
plenty of paths forward here, so don't let us drag this out forever if you
know what you want

Cheers,
Sophie

On Fri, Oct 20, 2023 at 10:57 AM Matthias J. Sax wrote:



Forgot one thing:

We could also pass `currentLag()` into `onBatchLoaded()` instead of
end-offset.


-Matthias

On 10/20/23 10:56 AM, Matthias J. Sax wrote:

Thanks for digging into this Bruno.

The JavaDoc on the consumer does not say anything specific about
`endOffset` guarante

[jira] [Resolved] (KAFKA-15644) Fix CVE-2023-4586 in netty:handler

2023-10-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15644.

Fix Version/s: 3.7.0
   Resolution: Fixed

> Fix CVE-2023-4586 in netty:handler
> --
>
> Key: KAFKA-15644
> URL: https://issues.apache.org/jira/browse/KAFKA-15644
> Project: Kafka
>  Issue Type: Bug
>Reporter: Atul Sharma
>Assignee: Atul Sharma
>Priority: Major
> Fix For: 3.7.0
>
>
> Need to remediate CVE-2023-4586 
> Ref: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-4586



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15696) Revoke partitions on Consumer.close()

2023-10-26 Thread Kirk True (Jira)
Kirk True created KAFKA-15696:
-

 Summary: Revoke partitions on Consumer.close()
 Key: KAFKA-15696
 URL: https://issues.apache.org/jira/browse/KAFKA-15696
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True
Assignee: Philip Nee


Upon closing of the {{Consumer}} we need to:
 # Complete pending commits
 # Revoke assignment (note that revocation involves stopping fetching, 
committing offsets if auto-commit is enabled, and invoking the onPartitionsRevoked 
callback)
 # Send the last ConsumerGroupHeartbeatRequest with epoch = -1 to leave the 
group (or -2 if a static member)
 # Close any fetch sessions on the brokers
 # Poll the NetworkClient to complete pending I/O

There is a mechanism introduced in PR 
[14406|https://github.com/apache/kafka/pull/14406] that allows for performing 
network I/O on shutdown. The new method 
{{DefaultBackgroundThread.runAtClose()}} will be executed when 
{{Consumer.close()}} is invoked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15695) Local log start offset is not updated on the follower after rebuilding remote log auxiliary state

2023-10-26 Thread Nikhil Ramakrishnan (Jira)
Nikhil Ramakrishnan created KAFKA-15695:
---

 Summary: Local log start offset is not updated on the follower 
after rebuilding remote log auxiliary state
 Key: KAFKA-15695
 URL: https://issues.apache.org/jira/browse/KAFKA-15695
 Project: Kafka
  Issue Type: Bug
  Components: replication, Tiered-Storage
Affects Versions: 3.6.0
Reporter: Nikhil Ramakrishnan
 Fix For: 3.7.0


In 3.6, the local log start offset is not updated when reconstructing the 
auxiliary state of the remote log on a follower.

The impact of this bug is significant because, if this follower becomes the 
leader before the local log start offset has had a chance to be updated, reads 
from any offset between [wrong log start offset; actual log start offset] will 
be routed to local storage, which does not contain the corresponding data. 
In this case, consumer reads will never be satisfied.

 

Reproduction case:
 # Create a cluster with 2 brokers, broker 0 and broker 1.
 # Create a topic topicA with RF=2, 1 partition (topicA-0) and 2 batches per 
segment, with broker 0 as the leader.
 # Stop broker 1, and produce 3 records to topicA, such that segment 1 (containing 
the first two records) is copied to remote storage and deleted from local storage.
 # Start broker 1, let it catch up with broker 0.
 # Stop broker 0 such that broker 1 is elected as the leader, and try to 
consume from the beginning of topicA-0.

This consumer read will not be satisfied because the local log start offset is 
not updated on broker 1 when it builds the auxiliary state of the remote log 
segments.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15694) New integration tests to have full coverage for preview

2023-10-26 Thread Kirk True (Jira)
Kirk True created KAFKA-15694:
-

 Summary: New integration tests to have full coverage for preview
 Key: KAFKA-15694
 URL: https://issues.apache.org/jira/browse/KAFKA-15694
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True


These tests are to cover bugs that were discovered during PR reviews but are not yet covered by tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15693) Disabling scheduled rebalance delay in Connect can lead to indefinitely unassigned connectors and tasks

2023-10-26 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-15693:
-

 Summary: Disabling scheduled rebalance delay in Connect can lead 
to indefinitely unassigned connectors and tasks
 Key: KAFKA-15693
 URL: https://issues.apache.org/jira/browse/KAFKA-15693
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.5.1, 3.6.0, 3.4.1, 3.5.0, 3.3.2, 3.3.1, 3.2.3, 3.2.2, 
3.4.0, 3.2.1, 3.1.2, 3.0.2, 3.3.0, 3.1.1, 3.2.0, 2.8.2, 3.0.1, 3.0.0, 2.8.1, 
2.7.2, 2.6.3, 3.1.0, 2.6.2, 2.7.1, 2.8.0, 2.6.1, 2.7.0, 2.5.1, 2.6.0, 2.4.1, 
2.5.0, 2.3.1, 2.4.0, 2.3.0, 3.7.0
Reporter: Chris Egerton
Assignee: Chris Egerton


Kafka Connect supports deferred resolution of imbalances when using the 
incremental rebalancing algorithm introduced in 
[KIP-415|https://cwiki.apache.org/confluence/display/KAFKA/KIP-415%3A+Incremental+Cooperative+Rebalancing+in+Kafka+Connect].
 When enabled, this feature introduces a configurable delay period between when 
"lost" assignments (i.e., connectors and tasks that were assigned to a worker 
in the previous round of rebalance but are not assigned to a worker during the 
current round of rebalance) are detected and when they are reassigned to a 
worker. The delay can be configured with the 
{{scheduled.rebalance.max.delay.ms}} property.

If this property is set to 0, then there should be no delay between when lost 
assignments are detected and when they are reassigned. Instead, however, this 
configuration can cause lost assignments to be withheld during a rebalance, 
remaining unassigned until the next rebalance, which, because scheduled delays 
are disabled, will not happen on its own and will only take place when 
unrelated conditions warrant it (such as the creation or deletion of a 
connector, a worker joining or leaving the cluster, new task configs being 
generated for a connector, etc.).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15692) New integration tests to have full coverage

2023-10-26 Thread Kirk True (Jira)
Kirk True created KAFKA-15692:
-

 Summary: New integration tests to have full coverage
 Key: KAFKA-15692
 URL: https://issues.apache.org/jira/browse/KAFKA-15692
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15691) Upgrade existing and add new system tests to use new coordinator

2023-10-26 Thread Kirk True (Jira)
Kirk True created KAFKA-15691:
-

 Summary: Upgrade existing and add new system tests to use new 
coordinator
 Key: KAFKA-15691
 URL: https://issues.apache.org/jira/browse/KAFKA-15691
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15690) Flaky integration tests

2023-10-26 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-15690:
--

 Summary: Flaky integration tests
 Key: KAFKA-15690
 URL: https://issues.apache.org/jira/browse/KAFKA-15690
 Project: Kafka
  Issue Type: Bug
Reporter: Calvin Liu


The following integration tests have been found to be flaky.

EosIntegrationTest {
 * 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances[exactly_once_v2,
 processing threads = false] 
 * shouldWriteLatestOffsetsToCheckpointOnShutdown[exactly_once_v2, processing 
threads = false] 
 * shouldWriteLatestOffsetsToCheckpointOnShutdown[exactly_once, processing 
threads = false] 

}

MirrorConnectorsIntegrationBaseTest {
 * testReplicateSourceDefault()
 * testOffsetSyncsTopicsOnTarget() 

}

FetchFromFollowerIntegrationTest {
 * testRackAwareRangeAssignor(String).quorum=zk

}

They run for a long time and may be related to timeouts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-967: Support custom SSL configuration for Kafka Connect RestServer

2023-10-26 Thread Николай Ижиков
I support this KIP

+1

> On Oct 23, 2023, at 20:20, Greg Harris wrote:
> 
> Hey Taras,
> 
> Thanks for the KIP!
> 
> The design you propose follows the conventions started in KIP-519, and
> should feel natural to operators familiar with the broker feature.
> I also like that we're able to clean up some connect-specific
> functionality and make the codebase more consistent.
> 
> +1 (binding)
> 
> Thanks,
> Greg
> 
> On Fri, Oct 20, 2023 at 8:03 AM Taras Ledkov  wrote:
>> 
>> Hi Kafka Team.
>> 
>> I'd like to call a vote on KIP-967: Support custom SSL configuration for 
>> Kafka Connect RestServer [1].
>> The discussion thread [2] was started more than 2 months ago and there were 
>> no negative or critical comments.
>> 
>> [1]. 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-967%3A+Support+custom+SSL+configuration+for+Kafka+Connect+RestServer
>> [2]. https://lists.apache.org/thread/w0vmbf1yzgjo7hkzyyzjjnb509x6s9qq
>> 
>> --
>> With best regards,
>> Taras Ledkov



[jira] [Created] (KAFKA-15689) KRaftMigrationDriver not logging the skipped event when expected state is wrong

2023-10-26 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-15689:
--

 Summary: KRaftMigrationDriver not logging the skipped event when 
expected state is wrong
 Key: KAFKA-15689
 URL: https://issues.apache.org/jira/browse/KAFKA-15689
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Paolo Patierno
Assignee: Paolo Patierno


The KRaftMigrationDriver.checkDriverState method is used in multiple 
implementations of the MigrationEvent base class, but when it comes to logging 
that an event was skipped because the expected state is wrong, it always logs 
"KRaftMigrationDriver" instead of the skipped event.
This is because its code has something like this:
 
{code:java}
log.info("Expected driver state {} but found {}. Not running this event {}.",
expectedState, migrationState, this.getClass().getSimpleName()); {code}
Of course, "this" refers to the KRaftMigrationDriver class.
It should print the specific skipped event instead.
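
A possible fix, sketched below; the exact signature of checkDriverState may differ, 
and "event" here stands for the MigrationEvent (or its name) that the caller would 
need to pass in:
{code:java}
// "event" is an assumed parameter identifying the skipped MigrationEvent
log.info("Expected driver state {} but found {}. Not running this event {}.",
expectedState, migrationState, event.getClass().getSimpleName()); {code}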



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2329

2023-10-26 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-990: Capability to SUSPEND Tasks on DeserializationException

2023-10-26 Thread Nick Telford
1.
Woops! I've fixed that now. Thanks for catching that.

2.
I agree, I'll remove the LogAndPause handler so it's clear this is an
advanced feature. I'll also add some documentation to
DeserializationExceptionResponse#SUSPEND that explains the care with which
users should approach it.

3a.
This is interesting. My main concern is that there may be situations where
skipping a single bad record is not the necessary solution, but the Task
should still be resumed without restarting the application. For example, if
there are several bad records in a row that should be skipped.

3b.
Additionally, a Task may have multiple input topics, so we'd need some way
to indicate which record to skip.

These can probably be resolved by something like skipAndContinue(TaskId
task, String topic, int recordsToSkip) or even skipAndContinue(TaskId task,
Map recordsToSkipByTopic)? (A rough sketch of these signatures follows
below, after point 4.)

4.
Related to 2: I was thinking that users implementing their own handler may
want to be able to determine which Processors (i.e. which Subtopology/task
group) are being affected, so they can programmatically make a decision on
whether it's safe to PAUSE. ProcessorContext, which is already a parameter
to DeserializationExceptionHandler provides the TaskId of the failed Task,
but doesn't provide metadata on the Processors that Task executes.

Since TaskIds are non-deterministic (they can change when you modify your
topology, with no influence over how they're assigned), a user cannot use
TaskId alone to determine which Processors would be affected.

What do you think would be the best way to provide this information to
exception handlers? I was originally thinking that users could instantiate
the handler themselves and provide a TopologyDescription (via
KafkaStreams#describe) in the constructor, but it looks like configs of
type Class cannot accept an already instantiated instance, and there's no
other way to inject information like that.

Perhaps we could add something to ProcessorContext that contains details on
the sub-topology being executed?
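
For clarity, a rough sketch of the shape such an API could take; every name and
parameter type here is illustrative only and not part of the KIP as written:

    import java.util.Map;
    import org.apache.kafka.streams.processor.TaskId;

    interface TaskSkipControl {
        // Resume the paused task, skipping a fixed number of records on one input topic.
        void skipAndContinue(TaskId task, String topic, int recordsToSkip);

        // Resume the paused task, skipping a per-input-topic number of records.
        void skipAndContinue(TaskId task, Map<String, Integer> recordsToSkipByTopic);
    }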

Regards,
Nick

On Thu, 26 Oct 2023 at 01:24, Sophie Blee-Goldman wrote:

> 1. Makes sense to me! Can you just update the name of the
> DeserializationHandlerResponse enum from SUSPEND to PAUSE so
> we're consistent with the wording?
>
> The drawback here would be that custom stateful Processors
> > might also be impacted, but there'd be no way to know if they're safe to
> > not pause.
> >
> 2. This is a really good point -- maybe this is just a case where we have
> to trust
> in the user not to accidentally screw themselves over. As long as we
> provide
> sufficient information for them to decide when it is/isn't safe to pause a
> task,
> I would be ok with just documenting the dangers of indiscriminate use of
> this
> feature, and hope that everyone reads the warning.
>
> Given the above, I have one suggestion: what if we only add the PAUSE enum
> in this KIP, and don't include an OOTB DeserializationExceptionHandler that
> implements this? I see this as addressing two concerns:
> 2a. It would make it clear that this is an advanced feature and should be
> given
> careful consideration, rather than just plugging in a config value.
> 2b. It forces the user to implement the handler themselves, which gives
> them
> an opportunity to check on which task it is that's hitting the error and
> then
> make a conscious decision as to whether it is safe to pause or not. In the
> end,
> it's really impossible for us to know what is/is not safe to pause, so the
> more
> responsibility we can put on the user in this case, the better.
>
> 3. It sounds like the general recovery workflow would be to either resolve
> the
> issue somehow (presumably by fixing an issue in the deserializer?) and
> restart the application -- in which case no further manual intervention is
> required -- or else to determine the record is unprocessable and should be
> skipped, in which case the user needs to somehow increment the offset
> and then resume the task.
>
> It's a bit awkward to ask people to use the command line tools to manually
> wind the offset forward. More importantly, there are likely many operators
> who
> don't have the permissions necessary to use the command line tools for
> this kind of thing, and they would be pretty much out of luck in that case.
>
> On the flipside, it seems like if the user ever wants to resume the task
> without restarting, they will need to skip over the bad record. I think we
> can
> make the feature considerably more ergonomic by modifying the behavior
> of the #resume method so that it always skips over the bad record. This
> will probably be the easiest to implement anyways, as it is effectively the
> same as the CONTINUE option internally, but gives the user time to
> decide if they really do want to CONTINUE or not
>
> Not sure if we would want to rename the #resume method in that case to
> make this more clear, or if javadocs would be sufficient...maybe
> something like #skipRecordAndContinue?
>
> On Tue,

[jira] [Created] (KAFKA-15688) Partition leader election not running when disk IO hangs

2023-10-26 Thread Peter Sinoros-Szabo (Jira)
Peter Sinoros-Szabo created KAFKA-15688:
---

 Summary: Partition leader election not running when disk IO hangs
 Key: KAFKA-15688
 URL: https://issues.apache.org/jira/browse/KAFKA-15688
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.3.2
Reporter: Peter Sinoros-Szabo


We run our Kafka brokers on AWS EC2 nodes using AWS EBS as disk to store the 
messages.

Recently we had an issue where the EBS disk IO stalled, so Kafka was not able 
to write or read anything from the disk, except for data that was still in the 
page cache or that still fit into the page cache before being synced to EBS.

We experienced this issue in a few cases: sometimes partition leaders were 
moved away to other brokers automatically; in other cases that didn't happen, 
which caused the producers to fail to produce messages to that broker.

My expectation of Kafka in such a case would be that it notices the problem and 
moves the leaders to other brokers where the partition has in-sync replicas, 
but as I mentioned this didn't always happen.

I know Kafka will shut itself down if it can't write to its disk; that might be 
a good solution in this case as well, as it would trigger the leader election 
automatically.

Is it possible to add such a feature to Kafka so that it shuts down in this 
case as well?

I guess a similar issue might happen with other disk subsystems too, or even 
with a broken or slow disk.

This scenario can be easily reproduced using AWS FIS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2328

2023-10-26 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 430754 lines...]
Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testReinitializeAfterAuthFailure() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testReinitializeAfterAuthFailure() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSetAclNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSetAclNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testConnectionLossRequestTermination() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testConnectionLossRequestTermination() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testExistsNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testExistsNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetDataNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetDataNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testConnectionTimeout() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testConnectionTimeout() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testUnresolvableConnectString() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testUnresolvableConnectString() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetChildrenNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetChildrenNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testPipelinedGetData() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testPipelinedGetData() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSetDataExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSetDataExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChangeNotTriggered() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChangeNotTriggered() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testMixedPipeline() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testMixedPipeline() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetDataExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testGetDataExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testDeleteExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testDeleteExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 91 > ZooKeeperClientTest > 
testSessionExpiry() STARTED

Gradle Test Run :core:test > Gradle Test Executor 91 

[jira] [Created] (KAFKA-15687) Update host address for the GoupMetadata of static members

2023-10-26 Thread Yu Wang (Jira)
Yu Wang created KAFKA-15687:
---

 Summary: Update host address for the GoupMetadata of static members
 Key: KAFKA-15687
 URL: https://issues.apache.org/jira/browse/KAFKA-15687
 Project: Kafka
  Issue Type: Improvement
  Components: group-coordinator
Affects Versions: 3.6.0
Reporter: Yu Wang


We are trying to use the static membership protocol for our consumers in 
Kubernetes. When a pod is recreated, we found that the host address in the 
group description does not change to the address of the newly created pod.

For example, we have one pod with *group.instance.id = id1 and ip = 192.168.0.1*. 
When the pod crashes, we replace it with a new pod with the same 
*group.instance.id = id1* but a different {*}ip = 192.168.0.2{*}. After the new 
pod joins the consumer group, the "describe group" command still shows the host 
as {*}192.168.0.1{*}. This makes it impossible to find the correct consumer 
instance when investigating an issue.

After reading the source code, we found that the group coordinator will not 
change the host address for the same {*}groupInstanceId{*}.

[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/group/GroupMetadata.scala#L316]

Is it also possible to update the host address when replacing the static member?
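
For reference, this is roughly how the stale host can be observed with the admin 
client (a sketch only; the group id and bootstrap servers are placeholders):
{code:java}
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;

public class DescribeGroupHosts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription group =
                admin.describeConsumerGroups(List.of("my-group")).all().get().get("my-group");
            for (MemberDescription member : group.members()) {
                // For a replaced static member one would expect the new pod's IP here,
                // but per this issue the old host is still reported.
                System.out.println(member.groupInstanceId().orElse("") + " -> " + member.host());
            }
        }
    }
} {code}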

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15685) Missing compatibility for MinGW (windows)

2023-10-26 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15685.
--
Resolution: Fixed

> Missing compatibility for MinGW (windows)
> -
>
> Key: KAFKA-15685
> URL: https://issues.apache.org/jira/browse/KAFKA-15685
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Priority: Minor
> Fix For: 3.7.0
>
>
> Starting this Jira on behalf of *[maniekes|https://github.com/maniekes]* 
> (github Id)
> kafka class runner is not working from MINGW/Git Bash on Windows:
> rafal@rafal-laptok MINGW64 /c/rafal/git/configs (master)
> $ "$KAFKA_DIR/bin/kafka-server-start.sh" server.properties
> [0.004s][error][logging] Error opening log file 
> '/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log': No such 
> file or directory
> [0.004s][error][logging] Initialization of output 
> 'file=/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log' using 
> options 'filecount=10,filesize=100M' failed.
> Invalid -Xlog option 
> '-Xlog:gc*:file=/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log:time,tags:filecount=10,filesize=100M',
>  see error log for details.
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
>  
> however it works fine when using Cygwin. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15686) Consumer should be able to detect network problem

2023-10-26 Thread Jiahongchao (Jira)
Jiahongchao created KAFKA-15686:
---

 Summary: Consumer should be able to detect network problem
 Key: KAFKA-15686
 URL: https://issues.apache.org/jira/browse/KAFKA-15686
 Project: Kafka
  Issue Type: New Feature
  Components: consumer
Affects Versions: 3.5.0
Reporter: Jiahongchao


When we call the poll method on a consumer, it returns normally even if some 
partitions do not have a leader.

What should we do to detect such failures? Currently we have to check the logs 
to find out about broker connection problems.
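
As a stop-gap, here is a rough sketch of how an application can probe the metadata 
the consumer already exposes for partitions without a leader (this is not a 
substitute for the detection feature requested here):
{code:java}
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;

public final class LeaderProbe {
    // Returns false if any partition of the topic currently has no leader according to
    // the consumer's metadata. Note that partitionsFor() may block on a metadata fetch.
    public static boolean allPartitionsHaveLeader(Consumer<?, ?> consumer, String topic) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        if (partitions == null || partitions.isEmpty()) {
            return false;
        }
        for (PartitionInfo p : partitions) {
            if (p.leader() == null || p.leader().isEmpty()) {
                return false;
            }
        }
        return true;
    }
} {code}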



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15685) Missing compatibility for MinGW (windows)

2023-10-26 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15685:


 Summary: Missing compatibility for MinGW (windows)
 Key: KAFKA-15685
 URL: https://issues.apache.org/jira/browse/KAFKA-15685
 Project: Kafka
  Issue Type: Improvement
Reporter: Divij Vaidya
 Fix For: 3.7.0


Starting this Jira on behalf of *[maniekes|https://github.com/maniekes]* 
(github Id)



kafka class runner is not working from MINGW/Git Bash on Windows:
rafal@rafal-laptok MINGW64 /c/rafal/git/configs (master)
$ "$KAFKA_DIR/bin/kafka-server-start.sh" server.properties
[0.004s][error][logging] Error opening log file 
'/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log': No such file 
or directory
[0.004s][error][logging] Initialization of output 
'file=/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log' using 
options 'filecount=10,filesize=100M' failed.
Invalid -Xlog option 
'-Xlog:gc*:file=/c/rafal/tools/kafka_2.13-3.3.2/bin/../logs/kafkaServer-gc.log:time,tags:filecount=10,filesize=100M',
 see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
 
however it works fine when using Cygwin. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15200) verify pre-requisite at start of release.py

2023-10-26 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15200.
--
Resolution: Fixed

> verify pre-requisite at start of release.py
> ---
>
> Key: KAFKA-15200
> URL: https://issues.apache.org/jira/browse/KAFKA-15200
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Owen C.H. Leung
>Priority: Major
> Fix For: 3.7.0
>
>
> At the start of release.py, the first thing it should do is verify that 
> dependency pre-requisites are satisfied. The pre-requisites are:
>  # maven should be installed.
>  # sftp should be installed. Connection to @home.apache.org should be 
> successful. Currently it is done manually at the step "Verify by using 
> `{{{}sftp @home.apache.org{}}}`" in 
> [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
>  # svn should be installed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)