Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.6 #166

2024-03-27 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reopened KAFKA-16310:
---

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0. 
> The impact of this issue is that when a batch contains records with 
> out-of-order timestamps, the offset reported for the max timestamp will be 
> wrong (ex: the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). It also causes the time index to contain the wrong 
> offset, so lookups will return unexpected results. 
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> Then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}
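> For reference, a minimal Java AdminClient sketch of the same check (assuming 
> a topic named "test-topic" with the three records above already produced; an 
> illustration, not the librdkafka test itself):
> {code:java}
> import java.util.Map;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
> import org.apache.kafka.clients.admin.OffsetSpec;
> import org.apache.kafka.common.TopicPartition;
>
> public class MaxTimestampCheck {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092");
>         try (Admin admin = Admin.create(props)) {
>             TopicPartition tp = new TopicPartition("test-topic", 0);
>             // Ask for the offset of the record with the largest timestamp.
>             ListOffsetsResultInfo info = admin
>                 .listOffsets(Map.of(tp, OffsetSpec.maxTimestamp()))
>                 .partitionResult(tp)
>                 .get();
>             // With timestamps t0+100, t0+400, t0+250 at offsets 0, 1, 2,
>             // this should print offset 1; the regression returns offset 2.
>             System.out.println("maxTimestamp offset = " + info.offset());
>         }
>     }
> }{code}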



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #121

2024-03-27 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 458795 lines...]
[2024-03-28T02:48:34.234Z] 
[2024-03-28T02:48:34.234Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > 
testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT PASSED
[2024-03-28T02:48:34.234Z] 
[2024-03-28T02:48:34.234Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-28T02:48:42.789Z] 
[2024-03-28T02:48:42.789Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-28T02:48:42.789Z] 
[2024-03-28T02:48:42.789Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-28T02:48:44.597Z] 
[2024-03-28T02:48:44.597Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-28T02:48:44.597Z] 
[2024-03-28T02:48:44.597Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-03-28T02:49:04.972Z] 
[2024-03-28T02:49:04.972Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-03-28T02:49:04.972Z] 
[2024-03-28T02:49:04.972Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-28T02:49:32.215Z] 
[2024-03-28T02:49:32.215Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-28T02:49:32.215Z] 
[2024-03-28T02:49:32.215Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[2] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT STARTED
[2024-03-28T02:49:56.874Z] 
[2024-03-28T02:49:56.874Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[2] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT PASSED
[2024-03-28T02:49:56.874Z] 
[2024-03-28T02:49:56.874Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[3] Type=ZK, MetadataVersion=3.6-IV2, Security=PLAINTEXT STARTED
[2024-03-28T02:50:26.591Z] 
[2024-03-28T02:50:26.591Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[3] Type=ZK, MetadataVersion=3.6-IV2, Security=PLAINTEXT PASSED
[2024-03-28T02:50:26.591Z] 
[2024-03-28T02:50:26.591Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[4] Type=ZK, MetadataVersion=3.7-IV0, Security=PLAINTEXT STARTED
[2024-03-28T02:50:52.568Z] 
[2024-03-28T02:50:52.568Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[4] Type=ZK, MetadataVersion=3.7-IV0, Security=PLAINTEXT PASSED
[2024-03-28T02:50:52.568Z] 
[2024-03-28T02:50:52.568Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[5] Type=ZK, MetadataVersion=3.7-IV1, Security=PLAINTEXT STARTED
[2024-03-28T02:51:17.804Z] 
[2024-03-28T02:51:17.804Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[5] Type=ZK, MetadataVersion=3.7-IV1, Security=PLAINTEXT PASSED
[2024-03-28T02:51:17.804Z] 
[2024-03-28T02:51:17.804Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[6] Type=ZK, MetadataVersion=3.7-IV2, Security=PLAINTEXT STARTED
[2024-03-28T02:51:46.649Z] 
[2024-03-28T02:51:46.649Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > 

Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-27 Thread José Armando García Sancio
Hi Jun,

On Wed, Mar 27, 2024 at 2:26 PM Jun Rao  wrote:
> 55.1 How does the broker and non-leader controller know the pending voters?

They are in the log. Pending voter sets are VotersRecords between the
HWM and the LEO. The leader will make sure that there is at most one
VotersRecord that is uncommitted (between the HWM and LEO).
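
To make that concrete, roughly (purely as an illustration with hypothetical
types, not the actual KRaft code), a replica could derive the pending voter
set from its log like this:

import java.util.List;
import java.util.Optional;

// Hypothetical stand-in for a control record in the Raft log.
record VotersRecord(long offset, List<Integer> voterIds) {}

class PendingVoters {
    // The uncommitted (pending) voter set is the last VotersRecord whose
    // offset is at or above the high-watermark. The leader guarantees at
    // most one such record exists between the HWM and the LEO.
    static Optional<VotersRecord> pending(List<VotersRecord> log, long highWatermark) {
        return log.stream()
                  .filter(r -> r.offset() >= highWatermark)
                  .reduce((first, second) -> second); // keep the last match
    }
}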

Maybe uncommitted-voter-change is a better name. Updated the KIP to
use this name.

Thanks,
-- 
-José


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #165

2024-03-27 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 307913 lines...]
[2024-03-27T22:56:04.656Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultStateUpdaterTest > 
shouldRestoreActiveStatefulTasksAndUpdateStandbyTasks() PASSED
[2024-03-27T22:56:04.656Z] 
[2024-03-27T22:56:04.656Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultStateUpdaterTest > shouldPauseStandbyTask() STARTED
[2024-03-27T22:56:04.656Z] 
[2024-03-27T22:56:04.656Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultStateUpdaterTest > shouldPauseStandbyTask() PASSED
[2024-03-27T22:56:04.656Z] 
[2024-03-27T22:56:04.656Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() 
STARTED
[2024-03-27T22:56:04.656Z] 
[2024-03-27T22:56:04.656Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() 
PASSED
[2024-03-27T22:56:08.108Z] 
[2024-03-27T22:56:08.108Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() STARTED
[2024-03-27T22:56:09.691Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldProcessTasks() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldProcessTasks() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldPunctuateStreamTime() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldPunctuateStreamTime() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldShutdownTaskExecutor() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldShutdownTaskExecutor() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldPunctuateSystemTime() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldPunctuateSystemTime() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > StateQueryResultTest > More than one query result throws 
IllegalArgumentException STARTED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > StateQueryResultTest > More than one query result throws 
IllegalArgumentException PASSED
[2024-03-27T22:56:09.692Z] 
[2024-03-27T22:56:09.692Z] Gradle Test Run :streams:test > Gradle Test Executor 
88 > StateQueryResultTest > Zero query results shouldn't error STARTED

Re: [VOTE] KIP-981: Manage Connect topics with custom implementation of Admin

2024-03-27 Thread Greg Harris
Hi Omnia!

Thank you for your response. I want to make sure that we both
understand the motivations and designs here, and we can make a fair
evaluation.

> pluggable extension point

Can you explain the difference? To me, both "plugin" and "pluggable
extension point" refer to an API that lets downstream projects define
custom code to execute within the Connect framework.

> Am not sure I am getting your point here.

I think consistency is important in API design, so that users can take
concepts from one area and apply them again later. For example, a user
that is familiar with implementing ConnectRestExtensions may learn
that they can load arbitrary versions of libraries that the framework
also uses. They then can implement and release a single artifact that
works across all Connect versions. If that user then took that
knowledge to the non-isolated ForwardingAdmin interface, they will
have problems. Without classloader isolation to protect them from
version conflicts, they will then need to distribute multiple
artifacts that are compatible with each version of the framework, or
only support a single version at a time.

> This is a possibility, however if this is a limitation of KIP-787 then this 
> exists already when we run on a Connect cluster.

Yes, and that is a limitation that I feel must be addressed when
bringing this to the broader Connect framework. Connect goes to great
lengths to allow users to compile against one version of the framework
and run against another, in order to reduce the maintenance burden on
plugin developers. The ForwardingAdmin being strongly coupled to the
MM2 version is consistent with the other plugins within MM2 which are
not isolated, but this is an exception in the broader Connect
ecosystem, not the standard.

> adding competing Admin interfaces would create confusion and a heavier 
> maintenance burden for the project.

I am not suggesting a new Admin interface.

When a new method is added to the Admin interface, Admin developers
are burdened to choose an implementation for the ForwardingAdmin.
Their implementation could either delegate to the real Admin instance,
or throw an UnsupportedOperationException. This has already happened
[1].
If they delegate to the underlying admin, the developer of the custom
ForwardingAdmin subclass is not notified of the new operation after an
upgrade, and the runtime could be "going around" the custom
implementation for a long time without anyone noticing.
If they throw an UnsupportedOperationException, the real Admin
instance is inaccessible to the subclasses, as it is a private
variable.
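
To illustrate the first case, here is a minimal sketch of a ForwardingAdmin
subclass (the AuditingAdmin name and println are hypothetical; only methods
overridden explicitly are intercepted):

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.admin.CreateTopicsOptions;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.ForwardingAdmin;
import org.apache.kafka.clients.admin.NewTopic;

public class AuditingAdmin extends ForwardingAdmin {
    public AuditingAdmin(Map<String, Object> configs) {
        super(configs);
    }

    @Override
    public CreateTopicsResult createTopics(Collection<NewTopic> topics,
                                           CreateTopicsOptions options) {
        // Intercept the methods we knew about at compile time...
        System.out.println("Creating topics: " + topics);
        return super.createTopics(topics, options);
    }
    // ...but any Admin method added in a later release is delegated by the
    // superclass without this subclass ever seeing it.
}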

Related to this, the ForwardingAdmin can't be used to enforce its own
usage, and so is useless for security-oriented implementations. Anyone
can configure the default ForwardingAdmin and get the raw AdminClient
behavior. I understand that this is outside of its original use-case,
but I think it's a natural property that would be desirable for a
federation layer, yet isn't possible with the current design.

> There's no space in the Admin API to represent the cluster, and there 
> shouldn't be.

I disagree with this statement. There are at least three ways to
represent the cluster via the current Admin API, each with tradeoffs.
1. You can identify a client by credential. If you are generating a
credential to give the Admin client, you can also remember what
cluster that credential corresponds to.
2. You can identify a client by client.id, and parse/decode/lookup the
cluster from the id (sketched below).
3. You can identify a client by the endpoint they contact. If you give
each client a unique endpoint, you can infer the client from which
endpoint they contact.
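
For example, option 2 needs nothing but standard client configs (the gateway
address and client.id value here are hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterTaggedAdmin {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical federation gateway address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "federation-gw:9092");
        // Encode the logical cluster in client.id so the federation layer
        // can recover it from the request header (option 2 above).
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "connect-cluster-A");
        try (Admin admin = Admin.create(props)) {
            // issue admin operations as usual; every request carries the client.id
        }
    }
}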

> Some of the requests involve discovering metadata. e.g. listing topics 
> involves discovering metadata and selecting the least busy node before 
> sending the RPC.

Since your federation layer is in the middle of the connection, it has
full control over all requests and responses. It can implement
arbitrary behavior for the metadata operations, such as lying to the
client about the identity of brokers and load.

> If the federation layer has any issue, this will block everyone using 
> AdminClient for basic day-to-day functionality like create/alter resources, 
> creating a bottleneck.
> ... this will add more network latency to these operations, which are used 
> widely

Yes this is correct, but that is the case regardless of which protocol
is used between Connect and the federation layer. Your architecture
may need to change slightly if you were relying heavily on the
underlying AdminClient.

> The Admin API uses a binary protocol, fit for interfacing with Kafka, whereas 
> a federation layer could use a simpler REST based interface

I understand that implementing the binary Kafka protocol is a burden,
but the extra effort is paid back in much better composability and
interoperability. You can also strategically re-use code to make
implementing the API easier: Either pull the API definitions from AK,
use 

Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Jun Rao
Hi, Justine,

Thanks for the reply.

So, "dependencies" and "version-mapping" will be added to both
kafka-feature and kafka-storage? Could we document that in the tool format
section?

Jun

On Wed, Mar 27, 2024 at 4:01 PM Justine Olshan 
wrote:

> Ok. I can remove the info from the describe output.
>
> Dependencies is needed for the storage tool because we want to make sure
> the desired versions we are setting will be valid. Version mapping should
> be for both tools since we have --release-version for both tools.
>
> I was considering changing the IV strings, but I wasn't sure if there would
> be some disagreement with the decision. Not sure if that breaks
> compatibility etc. Happy to hear everyone's thoughts.
>
> Justine
>
> On Wed, Mar 27, 2024 at 3:36 PM Jun Rao  wrote:
>
> > Hi, Justine,
> >
> > Thanks for the reply.
> >
> > Having "kafka-feature dependencies" seems enough to me. We don't need to
> > include the dependencies in the output of "kafka-feature describe".
> >
> > We only support "dependencies" in kafka-feature, not kafka-storage. We
> > probably should do the same for "version-mapping".
> >
> > bin/kafka-features.sh downgrade --feature metadata.version=16
> > --transaction.protocol.version=2
> > We need to add the --feature flag for the second feature, right?
> >
> > In "kafka-features.sh describe", we only show the IV string for
> > metadata.version. Should we also show the level number?
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Mar 27, 2024 at 1:52 PM Justine Olshan
> > 
> > wrote:
> >
> > > I had already included this example
> > > bin/kafka-features.sh downgrade --feature metadata.version=16
> > > --transaction.protocol.version=2 // Throws error if metadata version
> is <
> > > 16, and this would be an upgrade
> > > But I have updated the KIP to explicitly say the text you mentioned.
> > >
> > > Justine
> > >
> > > On Wed, Mar 27, 2024 at 1:41 PM José Armando García Sancio
> > >  wrote:
> > >
> > > > Hi Justine,
> > > >
> > > > See my comment below.
> > > >
> > > > On Wed, Mar 27, 2024 at 1:31 PM Justine Olshan
> > > >  wrote:
> > > > > The feature command includes the upgrade or downgrade command along
> > > with
> > > > > the --release-version flag. If some features are not moving in the
> > > > > direction mentioned (upgrade or downgrade) the command will fail --
> > > > perhaps
> > > > > with an error of which features were going in the wrong direction.
> > > >
> > > > How about updating the KIP to show and document this behavior?
> > > >
> > > > Thanks,
> > > > --
> > > > -José
> > > >
> > >
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2759

2024-03-27 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-1032: Upgrade to Jakarta and JavaEE 9 in Kafka 4.0

2024-03-27 Thread Christopher Shannon
Hi,

I'm proposing a KIP for Kafka 4.0 to upgrade to the Jakarta and JavaEE 9
APIs. This will also upgrade dependencies like Jetty and move away from
the deprecated javax namespace, in line with other libraries and
frameworks. There was some initial discussion and below is the KIP.

Please take a look and let me know what you think:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1032%3A+Upgrade+to+Jakarta+and+JavaEE+9+in+Kafka+4.0
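
For anyone unfamiliar, the namespace move is largely a package rename, e.g.
(illustrative snippet, not code from the KIP):

// Before (Java EE 8, deprecated javax namespace):
//   import javax.servlet.http.HttpServletRequest;
// After (Jakarta EE 9):
import jakarta.servlet.http.HttpServletRequest;

public class NamespaceExample {
    void handle(HttpServletRequest request) {
        // Same API surface; only the package prefix changes.
    }
}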

Thanks,
Chris


[jira] [Created] (KAFKA-16437) Upgrade to Jakarta and JavaEE 9 in Kafka 4.0 (KIP-1032)

2024-03-27 Thread Christopher L. Shannon (Jira)
Christopher L. Shannon created KAFKA-16437:
--

 Summary: Upgrade to Jakarta and JavaEE 9 in Kafka 4.0 (KIP-1032)
 Key: KAFKA-16437
 URL: https://issues.apache.org/jira/browse/KAFKA-16437
 Project: Kafka
  Issue Type: Improvement
Reporter: Christopher L. Shannon






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Justine Olshan
Ok. I can remove the info from the describe output.

Dependencies is needed for the storage tool because we want to make sure
the desired versions we are setting will be valid. Version mapping should
be for both tools since we have --release-version for both tools.

I was considering changing the IV strings, but I wasn't sure if there would
be some disagreement with the decision. Not sure if that breaks
compatibility etc. Happy to hear everyone's thoughts.

Justine

On Wed, Mar 27, 2024 at 3:36 PM Jun Rao  wrote:

> Hi, Justine,
>
> Thanks for the reply.
>
> Having "kafka-feature dependencies" seems enough to me. We don't need to
> include the dependencies in the output of "kafka-feature describe".
>
> We only support "dependencies" in kafka-feature, not kafka-storage. We
> probably should do the same for "version-mapping".
>
> bin/kafka-features.sh downgrade --feature metadata.version=16
> --transaction.protocol.version=2
> We need to add the --feature flag for the second feature, right?
>
> In "kafka-features.sh describe", we only show the IV string for
> metadata.version. Should we also show the level number?
>
> Thanks,
>
> Jun
>
> On Wed, Mar 27, 2024 at 1:52 PM Justine Olshan
> 
> wrote:
>
> > I had already included this example
> > bin/kafka-features.sh downgrade --feature metadata.version=16
> > --transaction.protocol.version=2 // Throws error if metadata version is <
> > 16, and this would be an upgrade
> > But I have updated the KIP to explicitly say the text you mentioned.
> >
> > Justine
> >
> > On Wed, Mar 27, 2024 at 1:41 PM José Armando García Sancio
> >  wrote:
> >
> > > Hi Justine,
> > >
> > > See my comment below.
> > >
> > > On Wed, Mar 27, 2024 at 1:31 PM Justine Olshan
> > >  wrote:
> > > > The feature command includes the upgrade or downgrade command along
> > with
> > > > the --release-version flag. If some features are not moving in the
> > > > direction mentioned (upgrade or downgrade) the command will fail --
> > > perhaps
> > > > with an error of which features were going in the wrong direction.
> > >
> > > How about updating the KIP to show and document this behavior?
> > >
> > > Thanks,
> > > --
> > > -José
> > >
> >
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.7 #120

2024-03-27 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Jun Rao
Hi, Justine,

Thanks for the reply.

Having "kafka-feature dependencies" seems enough to me. We don't need to
include the dependencies in the output of "kafka-feature describe".

We only support "dependencies" in kafka-feature, not kafka-storage. We
probably should do the same for "version-mapping".

bin/kafka-features.sh downgrade --feature metadata.version=16
--transaction.protocol.version=2
We need to add the --feature flag for the second feature, right?

In "kafka-features.sh describe", we only show the IV string for
metadata.version. Should we also show the level number?

Thanks,

Jun

On Wed, Mar 27, 2024 at 1:52 PM Justine Olshan 
wrote:

> I had already included this example
> bin/kafka-features.sh downgrade --feature metadata.version=16
> --transaction.protocol.version=2 // Throws error if metadata version is <
> 16, and this would be an upgrade
> But I have updated the KIP to explicitly say the text you mentioned.
>
> Justine
>
> On Wed, Mar 27, 2024 at 1:41 PM José Armando García Sancio
>  wrote:
>
> > Hi Justine,
> >
> > See my comment below.
> >
> > On Wed, Mar 27, 2024 at 1:31 PM Justine Olshan
> >  wrote:
> > > The feature command includes the upgrade or downgrade command along
> with
> > > the --release-version flag. If some features are not moving in the
> > > direction mentioned (upgrade or downgrade) the command will fail --
> > perhaps
> > > with an error of which features were going in the wrong direction.
> >
> > How about updating the KIP to show and document this behavior?
> >
> > Thanks,
> > --
> > -José
> >
>


Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-27 Thread Jun Rao
Hi, Jose,

Thanks for the reply.

55.1 How does the broker and non-leader controller know the pending voters?

Jun

On Wed, Mar 27, 2024 at 1:03 PM José Armando García Sancio
 wrote:

> Hi Jun,
>
> Thanks for the feedback. See my comments below.
>
> On Mon, Mar 25, 2024 at 2:21 PM Jun Rao  wrote:
> > 54. Yes, we could include SecurityProtocol in DescribeQuorumResponse.
> Then,
> > we could include it in the output of kafka-metadata-quorum --describe.
>
> Yes, I updated the DescribeQuorumResponse to include the
> SecurityProtocol and I also updated the example output for
> "kafka-metadata-quorum describe --status".
>
> > 55.1 Could number-of-observers and pending-voter-change be reported by
> all
> > brokers and controllers? I thought only the controller leader tracks
> those.
>
> These metrics are reported by all of the KRaft replicas (broker and
> controller). I think this makes it easier to monitor since metrics
> collectors can collect the same metrics from all of the nodes
> irrespective of their role (broker or controller). The main exception
> that Kafka has right now is type=KafkaController vs
> type=broker-metadata-metrics but I would favor a KIP that unified
> these two sets of metrics to something like type=metadata-metrics.
>
> > 55.2 So, IgnoredStaticVoters and IsObserver are Yammer metrics and the
> rest
> > are KafkaMetric. It would be useful to document the metric names clearer.
> > For Yammer metrics, we need to specify group, type, name and tags. For
> > KafkaMetrics, we need to specify just name and tags.
>
> Yeah. I always struggle with the MBean specification. I connected
> jconsole to Kafka and updated the KIP to be more accurate. Please take
> a look.
>
> > 57. Could we remove --release-version 3.8 in the upgrade example?
>
> Done. I also removed wording about deprecating --metadata from
> kafka-features.sh. I'll let KIP-1022 and the discussion there make
> that decision.
>
> Thanks,
> --
> -José
>


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Justine Olshan
I had already included this example
bin/kafka-features.sh downgrade --feature metadata.version=16
--transaction.protocol.version=2 // Throws error if metadata version is <
16, and this would be an upgrade
But I have updated the KIP to explicitly say the text you mentioned.

Justine

On Wed, Mar 27, 2024 at 1:41 PM José Armando García Sancio
 wrote:

> Hi Justine,
>
> See my comment below.
>
> On Wed, Mar 27, 2024 at 1:31 PM Justine Olshan
>  wrote:
> > The feature command includes the upgrade or downgrade command along with
> > the --release-version flag. If some features are not moving in the
> > direction mentioned (upgrade or downgrade) the command will fail --
> perhaps
> > with an error of which features were going in the wrong direction.
>
> How about updating the KIP to show and document this behavior?
>
> Thanks,
> --
> -José
>


[jira] [Created] (KAFKA-16436) Online upgrade triggering and group type conversion

2024-03-27 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-16436:
---

 Summary: Online upgrade triggering and group type conversion 
 Key: KAFKA-16436
 URL: https://issues.apache.org/jira/browse/KAFKA-16436
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongnuo Lyu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread José Armando García Sancio
Hi Justine,

See my comment below.

On Wed, Mar 27, 2024 at 1:31 PM Justine Olshan
 wrote:
> The feature command includes the upgrade or downgrade command along with
> the --release-version flag. If some features are not moving in the
> direction mentioned (upgrade or downgrade) the command will fail -- perhaps
> with an error of which features were going in the wrong direction.

How about updating the KIP to show and document this behavior?

Thanks,
-- 
-José


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Justine Olshan
Hey Jun,

Right, sorry this would be one row in an output of all the various features
(transaction.protocol.version, group.coordinator.version) currently set on
the cluster.

If we want to know the dependencies for each of a given feature (ie
transaction.protocol.versions 0, 1, 2, 3 etc) we can use the other command
if those versions are not yet set.

If this makes sense I will update the KIP.

I think one remaining open area was with respect to `--release-version`.
I didn't follow Colin's point:
> So how does the command invoking --release-version know whether it's
upgrading or downgrading?
The feature command includes the upgrade or downgrade command along with
the --release-version flag. If some features are not moving in the
direction mentioned (upgrade or downgrade) the command will fail -- perhaps
with an error of which features were going in the wrong direction.
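
To make that concrete, the subcommand maps onto the existing Admin API roughly
like this (a sketch with illustrative feature levels, not the tool's actual
code):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureUpdate;
import org.apache.kafka.clients.admin.UpdateFeaturesOptions;

public class FeatureDowngrade {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // The subcommand (upgrade vs. downgrade) chosen on the command line
            // determines the UpgradeType carried by each FeatureUpdate in the RPC,
            // so a feature moving the wrong way is rejected by the server.
            FeatureUpdate update = new FeatureUpdate(
                    (short) 16, FeatureUpdate.UpgradeType.SAFE_DOWNGRADE);
            admin.updateFeatures(Map.of("metadata.version", update),
                    new UpdateFeaturesOptions()).all().get();
        }
    }
}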

Thanks,
Justine

On Wed, Mar 27, 2024 at 11:52 AM Jun Rao  wrote:

> Hi, Justine,
>
> It seems that we need to specify the dependencies for each feature version?
>
> Thanks,
>
> Jun
>
> On Wed, Mar 27, 2024 at 11:28 AM Justine Olshan
>  wrote:
>
> > Hey Jun,
> >
> > So just including the dependencies for the currently set features? Along
> > with the supported min, max, and finalized versions?
> >
> > Feature: transaction.protocol.version SupportedMinVersion: 0
> > SupportedMaxVersion: 2 FinalizedVersionLevel: 1 Epoch: 3 Dependencies:
> > metadata.version=4
> >
> > On Wed, Mar 27, 2024 at 11:14 AM Jun Rao 
> wrote:
> >
> > > Hi, Justine,
> > >
> > > Yes, something like that. We could also extend "kafka-feature describe"
> > by
> > > adding the dependency to every feature in the output.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Mar 27, 2024 at 10:39 AM Justine Olshan
> > >  wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > We could expose them in a tool. I'm wondering, are you thinking
> > something
> > > > like a command that lists the dependencies for a given feature +
> > version?
> > > >
> > > > Something like:
> > > > kafka-feature dependencies --feature transaction.protocol.version=2
> > > > > transaction.protocol.version=2 requires metadata.version=4 (listing
> > any
> > > > other version dependencies)
> > > >
> > > > Justine
> > > >
> > > > On Wed, Mar 27, 2024 at 10:28 AM Jun Rao 
> > > wrote:
> > > >
> > > > > Hi, Colin,
> > > > >
> > > > > Thanks for the comments. It's fine if we want to keep the
> --metadata
> > > flag
> > > > > if it's useful. Then we should add the same flag for kafka-storage
> > for
> > > > > consistency, right?
> > > > >
> > > > > Hi, Justine,
> > > > >
> > > > > Thanks for the reply.
> > > > >
> > > > > How will a user know about the dependencies among features? Should
> we
> > > > > expose them in a tool?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe 
> > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Thanks for the discussion. Let me respond to a few questions I
> saw
> > in
> > > > the
> > > > > > thread (hope I didn't miss any!)
> > > > > >
> > > > > > 
> > > > > >
> > > > > > Feature level 0 is "special" because it effectively means that
> the
> > > > > feature
> > > > > > hasn't been set up. So I could create a foo feature, a bar
> feature,
> > > > and a
> > > > > > baz feature tomorrow and correctly say that your cluster is on
> > > feature
> > > > > > level 0 for foo, bar, and baz. Because feature level 0 is
> > isomorphic
> > > > with
> > > > > > "no feature level set."
> > > > > >
> > > > > > Obviously you can have whatever semantics you want for feature
> > level
> > > 0.
> > > > > In
> > > > > > the case of Jose's Raft changes, feature level 0 may end up being
> > the
> > > > > > current state of the world. That's fine.
> > > > > >
> > > > > > 0 being isomorphic with "not set" simplifies the code a lot
> because
> > we
> > > > > > don't need tons of special cases for "feature not set" versus
> > > "feature
> > > > > set
> > > > > > to 0". Effectively we can use short integers everywhere, and not
> > > > > > Optional. Does that make sense?
> > > > > >
> > > > > > 
> > > > > >
> > > > > > The --metadata flag doesn't quite do the same thing as the
> > --feature
> > > > > flag.
> > > > > > The --metadata flag takes a string like 3.7-IV0, whereas the
> > > --feature
> > > > > flag
> > > > > > takes an integer like "17".
> > > > > >
> > > > > > It's true that in the current kafka-features.sh, you can shun the
> > > > > > --metadata flag, and only use --feature. The --metadata flag is a
> > > > > > convenience. But... conveniences are good. Better than
> > > inconveniences.
> > > > > So I
> > > > > > don't think it makes sense to deprecate --metadata, since its
> > > > > functionality
> > > > > > is not provided by anything else.
> > > > > >
> > > > > > 
> > > > > >
> > > > > > As I said earlier, the proposed semantics for --release-version
> > > aren't
> > > > > > actually 

Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-27 Thread José Armando García Sancio
Hi Jun,

Thanks for the feedback. See my comments below.

On Mon, Mar 25, 2024 at 2:21 PM Jun Rao  wrote:
> 54. Yes, we could include SecurityProtocol in DescribeQuorumResponse. Then,
> we could include it in the output of kafka-metadata-quorum --describe.

Yes, I updated the DescribeQuorumResponse to include the
SecurityProtocol and I also updated the example output for
"kafka-metadata-quorum describe --status".

> 55.1 Could number-of-observers and pending-voter-change be reported by all
> brokers and controllers? I thought only the controller leader tracks those.

These metrics are reported by all of the KRaft replicas (broker and
controller). I think this makes it easier to monitor since metrics
collectors can collect the same metrics from all of the nodes
irrespective of their role (broker or controller). The main exception
that Kafka has right now is type=KafkaController vs
type=broker-metadata-metrics but I would favor a KIP that unified
these two sets of metrics to something like type=metadata-metrics.

> 55.2 So, IgnoredStaticVoters and IsObserver are Yammer metrics and the rest
> are KafkaMetric. It would be useful to document the metric names clearer.
> For Yammer metrics, we need to specify group, type, name and tags. For
> KafkaMetrics, we need to specify just name and tags.

Yeah. I always struggle with the MBean specification. I connected
jconsole to Kafka and updated the KIP to be more accurate. Please take
a look.

> 57. Could we remove --release-version 3.8 in the upgrade example?

Done. I also removed wording about deprecating --metadata from
kafka-features.sh. I'll let KIP-1022 and the discussion there make
that decision.

Thanks,
--
-José


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2758

2024-03-27 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-16428) Fix bug where config change notification znode may not get created during migration

2024-03-27 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-16428.
--
Resolution: Fixed

> Fix bug where config change notification znode may not get created during 
> migration
> ---
>
> Key: KAFKA-16428
> URL: https://issues.apache.org/jira/browse/KAFKA-16428
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16411) Correctly migrate default client quota entities in KRaft migration

2024-03-27 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-16411.
--
Resolution: Fixed

> Correctly migrate default client quota entities in KRaft migration
> --
>
> Key: KAFKA-16411
> URL: https://issues.apache.org/jira/browse/KAFKA-16411
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16435) Add test for KAFKA-16428

2024-03-27 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-16435:


 Summary: Add test for KAFKA-16428
 Key: KAFKA-16435
 URL: https://issues.apache.org/jira/browse/KAFKA-16435
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe


Add a test for KAFKA-16428: Fix bug where config change notification znode may 
not get created during migration #15608



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16434) ForeignKey INNER join does not unset join result when FK becomes null

2024-03-27 Thread Ayoub Omari (Jira)
Ayoub Omari created KAFKA-16434:
---

 Summary: ForeignKey INNER join does not unset join result when FK 
becomes null
 Key: KAFKA-16434
 URL: https://issues.apache.org/jira/browse/KAFKA-16434
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.7.0, 2.8.2
Reporter: Ayoub Omari


We have two topics: _left-topic[String, LeftRecord]_ and _right-topic[String, 
String]_

where _LeftRecord_ :
{code:scala}
 case class LeftRecord(foreignKey: String, name: String){code}
We do a simple *INNER* foreign key join on left-topic's foreignKey field. The 
resulting join value is the value in right-topic (same topology example as in 
KAFKA-16407).
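
A minimal sketch of that topology in the Java DSL (assuming suitable default 
serdes are configured; an illustration, not the test code):
{code:java}
import java.util.function.Function;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class FkJoinTopology {
    record LeftRecord(String foreignKey, String name) {}

    static void build(StreamsBuilder builder) {
        KTable<String, LeftRecord> left = builder.table("left-topic");
        KTable<String, String> right = builder.table("right-topic");
        // INNER FK join keyed on LeftRecord.foreignKey; the join value is the
        // right-side value. The extractor returns null once the FK is unset.
        KTable<String, String> joined = left.join(
                right,
                (Function<LeftRecord, String>) LeftRecord::foreignKey,
                (leftValue, rightValue) -> rightValue);
        joined.toStream().to("output-topic");
    }
}{code}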

 

*Scenario: Unset foreign key of a primary key*
{code:scala}
rightTopic.pipeInput("fk1", "1")

leftTopic.pipeInput("pk1", ProductValue("fk1", "pk1")) 
leftTopic.pipeInput("pk1", ProductValue(null, "pk1"))
{code}
 

*+Actual result+*

 
{code:java}
KeyValue(pk1, 3) {code}
 

*+Expected result+*

 
{code:java}
KeyValue(pk1, 3)
KeyValue(pk1, null) // This unsets the join between pk1 and fk1{code}
 

However, in {+}other cases{+} where the join result should be unset (e.g. the 
primary key is deleted, or the foreign key changes to a non-existing FK), that 
record is {+}correctly emitted{+}.

 

Also, the importance of unsetting the join result is mentioned in the code: 
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/internals/foreignkeyjoin/SubscriptionSendProcessorSupplier.java#L161C21-L163C36

 
{code:java}
//[...] Additionally, propagate null if no FK is found there,
// since we must "unset" any output set by the previous FK-join. This is true 
for both INNER and LEFT join. {code}
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Jun Rao
Hi, Justine,

It seems that we need to specify the dependencies for each feature version?

Thanks,

Jun

On Wed, Mar 27, 2024 at 11:28 AM Justine Olshan
 wrote:

> Hey Jun,
>
> So just including the dependencies for the currently set features? Along
> with the supported min, max, and finalized versions?
>
> Feature: transaction.protocol.version SupportedMinVersion: 0
> SupportedMaxVersion: 2 FinalizedVersionLevel: 1 Epoch: 3 Dependencies:
> metadata.version=4
>
> On Wed, Mar 27, 2024 at 11:14 AM Jun Rao  wrote:
>
> > Hi, Justine,
> >
> > Yes, something like that. We could also extend "kafka-feature describe"
> by
> > adding the dependency to every feature in the output.
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Mar 27, 2024 at 10:39 AM Justine Olshan
> >  wrote:
> >
> > > Hi Jun,
> > >
> > > We could expose them in a tool. I'm wondering, are you thinking
> something
> > > like a command that lists the dependencies for a given feature +
> version?
> > >
> > > Something like:
> > > kafka-feature dependencies --feature transaction.protocol.version=2
> > > > transaction.protocol.version=2 requires metadata.version=4 (listing
> any
> > > other version dependencies)
> > >
> > > Justine
> > >
> > > On Wed, Mar 27, 2024 at 10:28 AM Jun Rao 
> > wrote:
> > >
> > > > Hi, Colin,
> > > >
> > > > Thanks for the comments. It's fine if we want to keep the --metadata
> > flag
> > > > if it's useful. Then we should add the same flag for kafka-storage
> for
> > > > consistency, right?
> > > >
> > > > Hi, Justine,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > How will a user know about the dependencies among features? Should we
> > > > expose them in a tool?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Thanks for the discussion. Let me respond to a few questions I saw
> in
> > > the
> > > > > thread (hope I didn't miss any!)
> > > > >
> > > > > 
> > > > >
> > > > > Feature level 0 is "special" because it effectively means that the
> > > > feature
> > > > > hasn't been set up. So I could create a foo feature, a bar feature,
> > > and a
> > > > > baz feature tomorrow and correctly say that your cluster is on
> > feature
> > > > > level 0 for foo, bar, and baz. Because feature level 0 is
> isomorphic
> > > with
> > > > > "no feature level set."
> > > > >
> > > > > Obviously you can have whatever semantics you want for feature
> level
> > 0.
> > > > In
> > > > > the case of Jose's Raft changes, feature level 0 may end up being
> the
> > > > > current state of the world. That's fine.
> > > > >
> > > > > 0 being isomorphic with "not set" simplifies the code a lot because
> we
> > > > > don't need tons of special cases for "feature not set" versus
> > "feature
> > > > set
> > > > > to 0". Effectively we can use short integers everywhere, and not
> > > > > Optional. Does that make sense?
> > > > >
> > > > > 
> > > > >
> > > > > The --metadata flag doesn't quite do the same thing as the
> --feature
> > > > flag.
> > > > > The --metadata flag takes a string like 3.7-IV0, whereas the
> > --feature
> > > > flag
> > > > > takes an integer like "17".
> > > > >
> > > > > It's true that in the current kafka-features.sh, you can shun the
> > > > > --metadata flag, and only use --feature. The --metadata flag is a
> > > > > convenience. But... conveniences are good. Better than
> > inconveniences.
> > > > So I
> > > > > don't think it makes sense to deprecate --metadata, since its
> > > > functionality
> > > > > is not provided by anything else.
> > > > >
> > > > > 
> > > > >
> > > > > As I said earlier, the proposed semantics for --release-version
> > aren't
> > > > > actually possible given the current RPCs on the server side. The
> > > problem
> > > > is
> > > > > that UpdateFeaturesRequest needs to have upgradeType set to one of:
> > > > >
> > > > > 1. FeatureUpdate.UpgradeType.UPGRADE
> > > > > 2. FeatureUpdate.UpgradeType.SAFE_DOWNGRADE
> > > > > 3. FeatureUpdate.UpgradeType.UNSAFE_DOWNGRADE
> > > > >
> > > > > If it's set to #1, levels can only go up; if it's set to 2 or 3,
> > levels
> > > > > can only go down. (I forget what happens if the levels are the
> > same...
> > > > you
> > > > > can check)
> > > > >
> > > > > So how does the command invoking --release-version know whether
> it's
> > > > > upgrading or downgrading? I can't see any way for it to know, and
> > plus
> > > it
> > > > > may end up doing more than one of these if some features need to go
> > > down
> > > > > and others up. "Making everything the same as it was in 3.7-IV0"
> may
> > > end
> > > > up
> > > > > down-levelling some features, and up-levelling others. There isn't
> > any
> > > > way
> > > > > to do this atomically in a single RPC today.
> > > > >
> > > > > 
> > > > >
> > > > > I don't find the proposed semantics for --release-version to be
> very
> > > > > useful.
> > > > >
> > > > > 

Re: [VOTE] KIP-1025: Optionally URL-encode clientID and clientSecret in authorization header

2024-03-27 Thread Doğuşcan Namal
Thanks for checking it out Nelson. Yeah I think it makes sense to leave it
for the users who want to use it for testing.
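
(For anyone following along, the URL-encoding the KIP makes optional amounts
to the following header construction; a sketch of RFC 6749 section 2.3.1 with
made-up credentials, not Kafka's actual implementation:)

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    static String header(String clientId, String clientSecret, boolean urlEncode) {
        String id = urlEncode
                ? URLEncoder.encode(clientId, StandardCharsets.UTF_8) : clientId;
        String secret = urlEncode
                ? URLEncoder.encode(clientSecret, StandardCharsets.UTF_8) : clientSecret;
        // Base64 of "client_id:client_secret", per RFC 6749 section 2.3.1.
        return "Basic " + Base64.getEncoder()
                .encodeToString((id + ":" + secret).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // A secret with reserved characters yields a different header once encoded:
        System.out.println(header("my-client", "p@ss:w/rd", false));
        System.out.println(header("my-client", "p@ss:w/rd", true));
    }
}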

On Mon, 25 Mar 2024 at 20:44, Nelson B.  wrote:

> Hi Doğuşcan,
>
> Thanks for your vote!
>
> Currently, the usage of TLS depends on the protocol used by the
> authorization server which is configured
> through the "sasl.oauthbearer.token.endpoint.url" option. So, if the
> URL address uses simple http (not https)
> then secrets will be transmitted in plaintext. I think it's possible to
> enforce using only https but I think any
> production-grade authorization server uses https anyway and maybe users may
> want to test using http in the dev environment.
>
> Thanks,
>
> On Thu, Mar 21, 2024 at 3:56 PM Doğuşcan Namal 
> wrote:
>
> > Hi Nelson, thanks for the KIP.
> >
> > From the RFC:
> > ```
> > The authorization server MUST require the use of TLS as described in
> >Section 1.6 when sending requests using password authentication.
> > ```
> >
> > I believe we already have an enforcement for OAuth to be enabled only in
> > SSLChannel but would be good to double check. Sending secrets over
> > plaintext is a security bad practice :)
> >
> > +1 (non-binding) from me.
> >
> > On Tue, 19 Mar 2024 at 16:00, Nelson B.  wrote:
> >
> > > Hi all,
> > >
> > > I would like to start a vote on KIP-1025
> > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1025%3A+Optionally+URL-encode+clientID+and+clientSecret+in+authorization+header
> > > >,
> > > which would optionally URL-encode clientID and clientSecret in the
> > > authorization header.
> > >
> > > I feel like all possible issues have been addressed in the discussion
> > > thread.
> > >
> > > Thanks,
> > >
> >
>


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Justine Olshan
Hey Jun,

So just including the dependencies for the currently set features? Along
with the supported min, max, and finalized versions?

Feature: transaction.protocol.version SupportedMinVersion: 0
SupportedMaxVersion: 2 FinalizedVersionLevel: 1 Epoch: 3 Dependencies:
metadata.version=4

On Wed, Mar 27, 2024 at 11:14 AM Jun Rao  wrote:

> Hi, Justine,
>
> Yes, something like that. We could also extend "kafka-feature describe" by
> adding the dependency to every feature in the output.
>
> Thanks,
>
> Jun
>
> On Wed, Mar 27, 2024 at 10:39 AM Justine Olshan
>  wrote:
>
> > Hi Jun,
> >
> > We could expose them in a tool. I'm wondering, are you thinking something
> > like a command that lists the dependencies for a given feature + version?
> >
> > Something like:
> > kafka-feature dependencies --feature transaction.protocol.version=2
> > > transaction.protocol.version=2 requires metadata.version=4 (listing any
> > other version dependencies)
> >
> > Justine
> >
> > On Wed, Mar 27, 2024 at 10:28 AM Jun Rao 
> wrote:
> >
> > > Hi, Colin,
> > >
> > > Thanks for the comments. It's fine if we want to keep the --metadata
> flag
> > > if it's useful. Then we should add the same flag for kafka-storage for
> > > consistency, right?
> > >
> > > Hi, Justine,
> > >
> > > Thanks for the reply.
> > >
> > > How will a user know about the dependencies among features? Should we
> > > expose them in a tool?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > Thanks for the discussion. Let me respond to a few questions I saw in
> > the
> > > > thread (hope I didn't miss any!)
> > > >
> > > > 
> > > >
> > > > Feature level 0 is "special" because it effectively means that the
> > > feature
> > > > hasn't been set up. So I could create a foo feature, a bar feature,
> > and a
> > > > baz feature tomorrow and correctly say that your cluster is on
> feature
> > > > level 0 for foo, bar, and baz. Because feature level 0 is isomorphic
> > with
> > > > "no feature level set."
> > > >
> > > > Obviously you can have whatever semantics you want for feature level
> 0.
> > > In
> > > > the case of Jose's Raft changes, feature level 0 may end up being the
> > > > current state of the world. That's fine.
> > > >
> > > > 0 being isomorphic with "not set" simplifies the code a lot because we
> > > > don't need tons of special cases for "feature not set" versus
> "feature
> > > set
> > > > to 0". Effectively we can use short integers everywhere, and not
> > > > Optional. Does that make sense?
> > > >
> > > > 
> > > >
> > > > The --metadata flag doesn't quite do the same thing as the --feature
> > > flag.
> > > > The --metadata flag takes a string like 3.7-IV0, whereas the
> --feature
> > > flag
> > > > takes an integer like "17".
> > > >
> > > > It's true that in the current kafka-features.sh, you can shun the
> > > > --metadata flag, and only use --feature. The --metadata flag is a
> > > > convenience. But... conveniences are good. Better than
> inconveniences.
> > > So I
> > > > don't think it makes sense to deprecate --metadata, since its
> > > functionality
> > > > is not provided by anything else.
> > > >
> > > > 
> > > >
> > > > As I said earlier, the proposed semantics for --release-version
> aren't
> > > > actually possible given the current RPCs on the server side. The
> > problem
> > > is
> > > > that UpdateFeaturesRequest needs to have upgradeType set to one of:
> > > >
> > > > 1. FeatureUpdate.UpgradeType.UPGRADE
> > > > 2. FeatureUpdate.UpgradeType.SAFE_DOWNGRADE
> > > > 3. FeatureUpdate.UpgradeType.UNSAFE_DOWNGRADE
> > > >
> > > > If it's set to #1, levels can only go up; if it's set to 2 or 3,
> levels
> > > > can only go down. (I forget what happens if the levels are the
> same...
> > > you
> > > > can check)
> > > >
> > > > So how does the command invoking --release-version know whether it's
> > > > upgrading or downgrading? I can't see any way for it to know, and
> plus
> > it
> > > > may end up doing more than one of these if some features need to go
> > down
> > > > and others up. "Making everything the same as it was in 3.7-IV0" may
> > end
> > > up
> > > > down-levelling some features, and up-levelling others. There isn't
> any
> > > way
> > > > to do this atomically in a single RPC today.
> > > >
> > > > 
> > > >
> > > > I don't find the proposed semantics for --release-version to be very
> > > > useful.
> > > >
> > > > In the clusters I help to administer, I don't like changing a bunch
> of
> > > > things all at once. I'd much rather make one change at a time and see
> > > what
> > > > happens, then move on to the next change.
> > > >
> > > > Earlier I proposed just having a subcommand in kafka-features.sh that
> > > > compared the currently set feature flags against the "default" one
> for
> > > the
> > > > provided Kafka release / MV. I think this would be more useful than
> the
> > > > 

[DISCUSS] KIP-1023: Follower fetch from tiered offset

2024-03-27 Thread Abhijeet Kumar
Hi All,

I have created KIP-1023 to introduce follower fetch from tiered offset.
This feature will be helpful in significantly reducing Kafka
rebalance/rebuild times when the cluster is enabled with tiered storage.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1023%3A+Follower+fetch+from+tiered+offset

Feedback and suggestions are welcome.

Regards,
Abhijeet.


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Jun Rao
Hi, Justine,

Yes, something like that. We could also extend "kafka-feature describe" by
adding the dependency to every feature in the output.

Thanks,

Jun

On Wed, Mar 27, 2024 at 10:39 AM Justine Olshan
 wrote:

> Hi Jun,
>
> We could expose them in a tool. I'm wondering, are you thinking something
> like a command that lists the dependencies for a given feature + version?
>
> Something like:
> kafka-feature dependencies --feature transaction.protocol.version=2
> > transaction.protocol.version=2 requires metadata.version=4 (listing any
> other version dependencies)
>
> Justine
>
> On Wed, Mar 27, 2024 at 10:28 AM Jun Rao  wrote:
>
> > Hi, Colin,
> >
> > Thanks for the comments. It's fine if we want to keep the --metadata flag
> > if it's useful. Then we should add the same flag for kafka-storage for
> > consistency, right?
> >
> > Hi, Justine,
> >
> > Thanks for the reply.
> >
> > How will a user know about the dependencies among features? Should we
> > expose them in a tool?
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > Thanks for the discussion. Let me respond to a few questions I saw in
> the
> > > thread (hope I didn't miss any!)
> > >
> > > 
> > >
> > > Feature level 0 is "special" because it effectively means that the
> > feature
> > > hasn't been set up. So I could create a foo feature, a bar feature,
> and a
> > > baz feature tomorrow and correctly say that your cluster is on feature
> > > level 0 for foo, bar, and baz. Because feature level 0 is isomorphic
> with
> > > "no feature level set."
> > >
> > > Obviously you can have whatever semantics you want for feature level 0.
> > In
> > > the case of Jose's Raft changes, feature level 0 may end up being the
> > > current state of the world. That's fine.
> > >
> > > 0 being isomorphic with "not set" simplifies the code a lot because we
> > > don't need tons of special cases for "feature not set" versus "feature
> > set
> > > to 0". Effectively we can use short integers everywhere, and not
> > > Optional. Does that make sense?
> > >
> > > 
> > >
> > > The --metadata flag doesn't quite do the same thing as the --feature
> > flag.
> > > The --metadata flag takes a string like 3.7-IV0, whereas the --feature
> > flag
> > > takes an integer like "17".
> > >
> > > It's true that in the current kafka-features.sh, you can shun the
> > > --metadata flag, and only use --feature. The --metadata flag is a
> > > convenience. But... conveniences are good. Better than inconveniences.
> > So I
> > > don't think it makes sense to deprecate --metadata, since its
> > functionality
> > > is not provided by anything else.
> > >
> > > 
> > >
> > > As I said earlier, the proposed semantics for --release-version aren't
> > > actually possible given the current RPCs on the server side. The
> problem
> > is
> > > that UpdateFeaturesRequest needs to have upgradeType set to one of:
> > >
> > > 1. FeatureUpdate.UpgradeType.UPGRADE
> > > 2. FeatureUpdate.UpgradeType.SAFE_DOWNGRADE
> > > 3. FeatureUpdate.UpgradeType.UNSAFE_DOWNGRADE
> > >
> > > If it's set to #1, levels can only go up; if it's set to 2 or 3, levels
> > > can only go down. (I forget what happens if the levels are the same...
> > you
> > > can check)
> > >
> > > So how does the command invoking --release-version know whether it's
> > > upgrading or downgrading? I can't see any way for it to know, and plus
> it
> > > may end up doing more than one of these if some features need to go
> down
> > > and others up. "Making everything the same as it was in 3.7-IV0" may
> end
> > up
> > > down-levelling some features, and up-levelling others. There isn't any
> > way
> > > to do this atomically in a single RPC today.
> > >
> > > 
> > >
> > > I don't find the proposed semantics for --release-version to be very
> > > useful.
> > >
> > > In the clusters I help to administer, I don't like changing a bunch of
> > > things all at once. I'd much rather make one change at a time and see
> > what
> > > happens, then move on to the next change.
> > >
> > > Earlier I proposed just having a subcommand in kafka-features.sh that
> > > compared the currently set feature flags against the "default" one for
> > the
> > > provided Kafka release / MV. I think this would be more useful than the
> > > "shotgun approach" of making a bunch of changes together. Just DISPLAY
> > what
> > > would need to be changed to "make everything the same as it was in
> > 3.7-IV0"
> > > but then let the admin decide what they want to do (or not do). You
> could
> > > even display the commands that would need to be run, if you like. But
> let
> > > them decide whether to pull the trigger on each upgrade or downgrade.
> > >
> > > This also avoids having to solve the thorny issue of how to have a
> single
> > > RPC do both upgrades and downgrades.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Tue, Mar 26, 2024, at 14:59, Justine Olshan 
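
For readers skimming the thread: the direction constraint Colin describes is
visible in the Java Admin API, where every FeatureUpdate carries an explicit
upgrade type. Below is a minimal sketch of that API surface; the feature names
and target levels are hypothetical and not tied to any KIP-1022 syntax.

{code:java}
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureUpdate;
import org.apache.kafka.clients.admin.UpdateFeaturesOptions;

public class FeatureUpdateSketch {
    public static void main(String[] args) throws Exception {
        try (Admin admin = Admin.create(Map.<String, Object>of("bootstrap.servers", "localhost:9092"))) {
            // An upgrade: the finalized level may only move up under UPGRADE.
            FeatureUpdate up =
                new FeatureUpdate((short) 2, FeatureUpdate.UpgradeType.UPGRADE);
            // A downgrade: the level may only move down under SAFE_DOWNGRADE
            // (or UNSAFE_DOWNGRADE).
            FeatureUpdate down =
                new FeatureUpdate((short) 1, FeatureUpdate.UpgradeType.SAFE_DOWNGRADE);
            // Each update declares its own direction; per the thread above, a
            // --release-version tool would have to pick these directions itself,
            // and cannot apply a mixed up/down set atomically today.
            admin.updateFeatures(
                Map.of("transaction.protocol.version", up,
                       "some.other.feature", down),
                new UpdateFeaturesOptions()).all().get();
        }
    }
}
{code}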

[jira] [Resolved] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16310.

Resolution: Fixed

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is a confirmed regression in v3.7.0.
> The impact of this issue is that when a batch contains records whose
> timestamps are not in order, the offset returned for a timestamp will be
> wrong (ex: the timestamp for t0 should map to offset 10, but offset 12 is
> returned, etc.). This causes the time index to store the wrong offset, so the
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing and it's checking
> that the offset with the max timestamp is the middle one and not the last
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even after 5 seconds from producing it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
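
For anyone who wants to reproduce the behaviour above from Java rather than
librdkafka, the equivalent query goes through the Admin client's listOffsets
with OffsetSpec.maxTimestamp() (KIP-734). A minimal sketch; the topic name and
bootstrap address are placeholders:

{code:java}
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class MaxTimestampCheck {
    public static void main(String[] args) throws Exception {
        try (Admin admin = Admin.create(Map.<String, Object>of("bootstrap.servers", "localhost:9092"))) {
            TopicPartition tp = new TopicPartition("test-topic", 0);
            // Ask for the offset of the record with the largest timestamp.
            ListOffsetsResultInfo info = admin
                .listOffsets(Map.of(tp, OffsetSpec.maxTimestamp()))
                .partitionResult(tp)
                .get();
            // With the out-of-order timestamps from the ticket, this should
            // print offset 1; the 3.7.0 regression returned offset 2 instead.
            System.out.println("offset=" + info.offset() + ", timestamp=" + info.timestamp());
        }
    }
}
{code}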


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Justine Olshan
Hi Jun,

We could expose them in a tool. I'm wondering, are you thinking something
like a command that lists the dependencies for a given feature + version?

Something like:
kafka-feature dependencies --feature transaction.protocol.version=2
> transaction.protocol.version=2 requires metadata.version=4 (listing any
other version dependencies)

Justine

On Wed, Mar 27, 2024 at 10:28 AM Jun Rao  wrote:

> Hi, Colin,
>
> Thanks for the comments. It's fine if we want to keep the --metadata flag
> if it's useful. Then we should add the same flag for kafka-storage for
> consistency, right?
>
> Hi, Justine,
>
> Thanks for the reply.
>
> How will a user know about the dependencies among features? Should we
> expose them in a tool?
>
> Thanks,
>
> Jun
>
> On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe  wrote:
>
> > Hi all,
> >
> > Thanks for the discussion. Let me respond to a few questions I saw in the
> > thread (hope I didn't miss any!)
> >
> > 
> >
> > Feature level 0 is "special" because it effectively means that the
> feature
> > hasn't been set up. So I could create a foo feature, a bar feature, and a
> > baz feature tomorrow and correctly say that your cluster is on feature
> > level 0 for foo, bar, and baz. Because feature level 0 is isomorphic with
> > "no feature level set."
> >
> > Obviously you can have whatever semantics you want for feature level 0.
> In
> > the case of Jose's Raft changes, feature level 0 may end up being the
> > current state of the world. That's fine.
> >
> > 0 being isomorphic with "not set" simplifies the code a lot because we
> > don't need tons of special cases for "feature not set" versus "feature set
> > to 0". Effectively we can use short integers everywhere, and not
> > Optional<Short>. Does that make sense?
> >
> > 
> >
> > The --metadata flag doesn't quite do the same thing as the --feature
> flag.
> > The --metadata flag takes a string like 3.7-IV0, whereas the --feature
> flag
> > takes an integer like "17".
> >
> > It's true that in the current kafka-features.sh, you can shun the
> > --metadata flag, and only use --feature. The --metadata flag is a
> > convenience. But... conveniences are good. Better than inconveniences.
> So I
> > don't think it makes sense to deprecate --metadata, since its
> functionality
> > is not provided by anything else.
> >
> > 
> >
> > As I said earlier, the proposed semantics for --release-version aren't
> > actually possible given the current RPCs on the server side. The problem
> is
> > that UpdateFeaturesRequest needs to have its upgradeType set to one of:
> >
> > 1. FeatureUpdate.UpgradeType.UPGRADE
> > 2. FeatureUpdate.UpgradeType.SAFE_DOWNGRADE
> > 3. FeatureUpdate.UpgradeType.UNSAFE_DOWNGRADE
> >
> > If it's set to #1, levels can only go up; if it's set to 2 or 3, levels
> > can only go down. (I forget what happens if the levels are the same...
> you
> > can check)
> >
> > So how does the command invoking --release-version know whether it's
> > upgrading or downgrading? I can't see any way for it to know, and plus it
> > may end up doing more than one of these if some features need to go down
> > and others up. "Making everything the same as it was in 3.7-IV0" may end
> up
> > down-levelling some features, and up-levelling others. There isn't any
> way
> > to do this atomically in a single RPC today.
> >
> > 
> >
> > I don't find the proposed semantics for --release-version to be very
> > useful.
> >
> > In the clusters I help to administer, I don't like changing a bunch of
> > things all at once. I'd much rather make one change at a time and see
> what
> > happens, then move on to the next change.
> >
> > Earlier I proposed just having a subcommand in kafka-features.sh that
> > compared the currently set feature flags against the "default" one for
> the
> > provided Kafka release / MV. I think this would be more useful than the
> > "shotgun approach" of making a bunch of changes together. Just DISPLAY
> what
> > would need to be changed to "make everything the same as it was in
> 3.7-IV0"
> > but then let the admin decide what they want to do (or not do). You could
> > even display the commands that would need to be run, if you like. But let
> > them decide whether to pull the trigger on each upgrade or downgrade.
> >
> > This also avoids having to solve the thorny issue of how to have a single
> > RPC do both upgrades and downgrades.
> >
> > best,
> > Colin
> >
> >
> > On Tue, Mar 26, 2024, at 14:59, Justine Olshan wrote:
> > > Hi all,
> > >
> > > I've added the notes about requiring 3.3-IV0 and that the tool/rpc will
> > > fail if the metadata version is too low.
> > >
> > > I will wait for Colin's response on the deprecation. I am not opposed
> to
> > > deprecating it but want everyone to agree.
> > >
> > > Justine
> > >
> > > On Tue, Mar 26, 2024 at 12:26 PM José Armando García Sancio
> > >  wrote:
> > >
> > >> Hi Justine,
> > >>
> > >> On Mon, Mar 25, 2024 at 5:09 PM Justine Olshan
> > >>  

Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-27 Thread Jun Rao
Hi, Colin,

Thanks for the comments. It's fine if we want to keep the --metadata flag
if it's useful. Then we should add the same flag for kafka-storage for
consistency, right?

Hi, Justine,

Thanks for the reply.

How will a user know about the dependencies among features? Should we
expose them in a tool?

Thanks,

Jun

On Tue, Mar 26, 2024 at 4:33 PM Colin McCabe  wrote:

> Hi all,
>
> Thanks for the discussion. Let me respond to a few questions I saw in the
> thread (hope I didn't miss any!)
>
> 
>
> Feature level 0 is "special" because it effectively means that the feature
> hasn't been set up. So I could create a foo feature, a bar feature, and a
> baz feature tomorrow and correctly say that your cluster is on feature
> level 0 for foo, bar, and baz. Because feature level 0 is isomorphic with
> "no feature level set."
>
> Obviously you can have whatever semantics you want for feature level 0. In
> the case of Jose's Raft changes, feature level 0 may end up being the
> current state of the world. That's fine.
>
> 0 being isomorphic with "not set" simplifies the code a lot because we
> don't need tons of special cases for "feature not set" versus "feature set
> to 0". Effectively we can use short integers everywhere, and not
> Optional<Short>. Does that make sense?
>
> 
>
> The --metadata flag doesn't quite do the same thing as the --feature flag.
> The --metadata flag takes a string like 3.7-IV0, whereas the --feature flag
> takes an integer like "17".
>
> It's true that in the current kafka-features.sh, you can shun the
> --metadata flag, and only use --feature. The --metadata flag is a
> convenience. But... conveniences are good. Better than inconveniences. So I
> don't think it makes sense to deprecate --metadata, since its functionality
> is not provided by anything else.
>
> 
>
> As I said earlier, the proposed semantics for --release-version aren't
> actually possible given the current RPCs on the server side. The problem is
> that UpdateFeaturesRequest needs to have its upgradeType set to one of:
>
> 1. FeatureUpdate.UpgradeType.UPGRADE
> 2. FeatureUpdate.UpgradeType.SAFE_DOWNGRADE
> 3. FeatureUpdate.UpgradeType.UNSAFE_DOWNGRADE
>
> If it's set to #1, levels can only go up; if it's set to 2 or 3, levels
> can only go down. (I forget what happens if the levels are the same... you
> can check)
>
> So how does the command invoking --release-version know whether it's
> upgrading or downgrading? I can't see any way for it to know, and plus it
> may end up doing more than one of these if some features need to go down
> and others up. "Making everything the same as it was in 3.7-IV0" may end up
> down-levelling some features, and up-levelling others. There isn't any way
> to do this atomically in a single RPC today.
>
> 
>
> I don't find the proposed semantics for --release-version to be very
> useful.
>
> In the clusters I help to administer, I don't like changing a bunch of
> things all at once. I'd much rather make one change at a time and see what
> happens, then move on to the next change.
>
> Earlier I proposed just having a subcommand in kafka-features.sh that
> compared the currently set feature flags against the "default" one for the
> provided Kafka release / MV. I think this would be more useful than the
> "shotgun approach" of making a bunch of changes together. Just DISPLAY what
> would need to be changed to "make everything the same as it was in 3.7-IV0"
> but then let the admin decide what they want to do (or not do). You could
> even display the commands that would need to be run, if you like. But let
> them decide whether to pull the trigger on each upgrade or downgrade.
>
> This also avoids having to solve the thorny issue of how to have a single
> RPC do both upgrades and downgrades.
>
> best,
> Colin
>
>
> On Tue, Mar 26, 2024, at 14:59, Justine Olshan wrote:
> > Hi all,
> >
> > I've added the notes about requiring 3.3-IV0 and that the tool/rpc will
> > fail if the metadata version is too low.
> >
> > I will wait for Colin's response on the deprecation. I am not opposed to
> > deprecating it but want everyone to agree.
> >
> > Justine
> >
> > On Tue, Mar 26, 2024 at 12:26 PM José Armando García Sancio
> >  wrote:
> >
> >> Hi Justine,
> >>
> >> On Mon, Mar 25, 2024 at 5:09 PM Justine Olshan
> >>  wrote:
> >> > The reason it is not removed is purely for backwards
> >> > compatibility. Colin had strong feelings about not removing any flags.
> >>
> >> We are not saying that we should remove that flag. That would break
> >> backward compatibility of 3.8 with 3.7. We are suggesting to deprecate
> >> the flag in the next release.
> >>
> >> Thanks,
> >> -José
> >>
>
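
A sketch of the read side that Colin's display-only subcommand would build on:
describeFeatures already returns the finalized feature levels, so a tool can
diff them against the defaults for a target release without sending any
UpdateFeaturesRequest. The diffing itself is omitted; this only shows the
lookup, with a placeholder bootstrap address.

{code:java}
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureMetadata;
import org.apache.kafka.clients.admin.FinalizedVersionRange;

public class DescribeFeaturesSketch {
    public static void main(String[] args) throws Exception {
        try (Admin admin = Admin.create(Map.<String, Object>of("bootstrap.servers", "localhost:9092"))) {
            FeatureMetadata metadata = admin.describeFeatures().featureMetadata().get();
            // Print each finalized feature level; a display-only subcommand could
            // compare these against the defaults of a chosen release / MV and
            // show the admin which individual changes would be needed.
            for (Map.Entry<String, FinalizedVersionRange> e :
                    metadata.finalizedFeatures().entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue().maxVersionLevel());
            }
        }
    }
}
{code}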


Re: [DISCUSS] Jakarta and Java EE 9/10 support in Kafka 4.0

2024-03-27 Thread Christopher Shannon
Hi Ismael,

Thanks for the feedback, I can definitely raise a KIP, that is no problem.

I will write one up and then we can have further discussion on the details.
I should have time to get one created later today or by tomorrow.

Chris


On Wed, Mar 27, 2024 at 11:54 AM Ismael Juma  wrote:

> Hi Christopher,
>
> Thanks for raising this. Moving to the new namespace makes sense - would
> you be willing to submit a KIP? The point you raised regarding Jetty 11 EOL
> and Jetty 12 requiring Java 17 is a good one and is worth discussing the
> trade-offs in more detail. I originally did not propose moving Connect to
> Java 17 because of the risk that it might break several connectors. If
> someone summarized the number of connectors that support Java 17 and the
> number that does not, it would be a useful first step in the discussion.
>
> Ismael
>
> On Tue, Mar 26, 2024 at 9:04 AM Christopher Shannon <
> christopher.l.shan...@gmail.com> wrote:
>
> > Hi Greg,
> >
> > You are right that KIP-1013 and the JDK 17 upgrade is not directly
> relevant
> > because JDK 11 can still be used with Jakarta APIs. However, it's still
> > somewhat relevant and important because if we are stuck at JDK 11 then we
> > can't upgrade to certain versions. For Connect, there is a Jetty server
> for
> > the rest API, if we wanted to use Jetty 12.x that requires JDK 17+. The
> > problem with using Jetty 11.x is that it is already EOL.
> >
> > So are we really locked into JDK 11 for Connect for version 4.0? It would
> > require people to upgrade their connectors to run on JDK 17 but shipping
> > Kafka 4.0 with a Jetty version that is already end of life doesn't make
> > sense to me. I know that Connect supports isolated classloaders for
> > connectors but that of course is not the same as different Java versions.
> >
> > Chris
> >
> > On Tue, Mar 26, 2024 at 11:33 AM Greg Harris
>  > >
> > wrote:
> >
> > > Hi Christopher!
> > >
> > > Thanks so much for raising this. I agree that we should move to the
> > > new namespace in 4.0, and not doing so would be a mistake.
> > > This breaking change has a lot of benefits, and the only cost I am
> > > aware of is that ConnectRestExtensions will need to be migrated,
> > > rebuilt, and re-released for 4.0+
> > >
> > > Can you explain how KIP-1013 and the Java version are relevant?
> > > Connect is dependent on this namespace, but will still need to support
> > > Java 11 in 4.0.
> > >
> > > Thanks!
> > > Greg
> > >
> > > On Tue, Mar 26, 2024 at 5:04 AM Christopher Shannon
> > >  wrote:
> > > >
> > > > Is this already being planned for version 4.0? If not, I strongly think
> > > > it should be.
> > > >
> > > > Kafka is currently using the old, long-deprecated javax APIs, which are
> > > > going to continue to cause issues [1] for people as more and more things
> > > > are updated to use Jakarta.
> > > >
> > > > With the bump to require JDK 17 for version 4.0 [2] this seems like
> the
> > > > perfect time to upgrade to a new version of JavaEE and Jakarta apis
> and
> > > new
> > > > versions of dependencies like Jackson, Jersey, Jetty (12.x), etc that
> > all
> > > > support the new namespace. It needs to be upgraded at some point
> > anyways
> > > so
> > > > a major version makes sense to me.
> > > >
> > > > Another scenario where I've run into this problem is testing. For
> > > example,
> > > > If I try to run tests against my custom code with an embedded Kafka
> > > broker
> > > > and components in JUnit, then things break with newer dependencies
> like
> > > > Spring that require Jakarta as it interferes on the classpath.
> > > >
> > > > [1] https://issues.apache.org/jira/browse/KAFKA-16326
> > > > [2]
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
> > >
> >
>


[jira] [Created] (KAFKA-16433) beginningOffsets and offsetsForTimes don't behave consistently when providing a zero timeout

2024-03-27 Thread Philip Nee (Jira)
Philip Nee created KAFKA-16433:
--

 Summary: beginningOffsets and offsetsForTimes don't behave 
consistently when providing a zero timeout
 Key: KAFKA-16433
 URL: https://issues.apache.org/jira/browse/KAFKA-16433
 Project: Kafka
  Issue Type: Task
  Components: consumer
Reporter: Philip Nee
Assignee: Philip Nee


As documented here:[https://github.com/apache/kafka/pull/15525]

 

Both APIs should at least send out a request when a zero timeout is provided.

 

This is corrected in the PR above. However, we still need to fix the 
implementation of the offsetsForTimes API.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
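
For reference, the two consumer calls in question; a minimal sketch (topic,
group, and bootstrap address are placeholders) of the zero-timeout invocations
whose behaviour the ticket says should be consistent. Note that with
Duration.ZERO these calls may throw a TimeoutException if the lookup cannot
complete in time.

{code:java}
import java.time.Duration;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class ZeroTimeoutOffsets {
    public static void main(String[] args) {
        Map<String, Object> props = Map.of(
            "bootstrap.servers", "localhost:9092",
            "group.id", "test-group",
            "key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("test-topic", 0);
            // Per the ticket, both of these should at least send a request even
            // with a zero timeout, rather than returning without trying.
            Map<TopicPartition, Long> beginning =
                consumer.beginningOffsets(Set.of(tp), Duration.ZERO);
            Map<TopicPartition, OffsetAndTimestamp> byTime =
                consumer.offsetsForTimes(Map.of(tp, 0L), Duration.ZERO);
            System.out.println(beginning + " " + byTime);
        }
    }
}
{code}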


Re: [DISCUSS] KIP-477: Add PATCH method for connector config in Connect REST API

2024-03-27 Thread Chris Egerton
Hi Ivan,

Thanks for the updates. LGTM!

RE atomicity: I think it should be possible and not _too_ invasive to
detect and handle these kinds of races by tracking the offset in the config
topic for connector configs and aborting an operation if that offset
changes between when the request was initiated and when the write to the
config topic will take place, but since the same kind of issue is also
possible with other connector operations (concurrent configuration PUTs,
for example) due to how validation is split out into a separate thread, I
agree that it's not worth blocking the KIP on fixing this.

One final nit: Can you update the Jira ticket link in the KIP?

Cheers,

Chris

On Wed, Mar 27, 2024 at 2:56 PM Ivan Yurchenko  wrote:

> Hi,
>
> I updated the KIP with the two following changes:
> 1. Using `null` values as tombstone value for removing existing fields
> from configuration.
> 2. Added a note about the lack of 100% atomicity, which seems very
> difficult to achieve practically.
>
> Ivan
>
>
> On Tue, Mar 26, 2024, at 14:45, Ivan Yurchenko wrote:
> > Speaking of the Chris' comment
> >
> > > One thought that comes to mind is that sometimes it may be useful to
> > > explicitly remove properties from a connector configuration. We might
> > > permit this by allowing users to specify null (the JSON literal, not a
> > > string containing the characters "null") as the value for to-be-removed
> > > properties.
> >
> > This actually makes sense. AFAIU, `null` cannot be in the connector
> config since https://github.com/apache/kafka/pull/11333, so using it as a
> tombstone value is a good idea. I can update the KIP.
> >
> > Ivan
> >
> >
> > On Tue, Mar 26, 2024, at 14:19, Ivan Yurchenko wrote:
> > > Hi all,
> > >
> > > This KIP is a bit old now :) but I think its context hasn't changed
> much since then and the KIP is still valid. I would like to finally bring
> it to some conclusion.
> > >
> > > Best,
> > > Ivan
> > >
> > > On 2021/07/12 14:49:47 Chris Egerton wrote:
> > > > Hi all,
> > > >
> > > > Know it's been a while for this KIP but in my personal experience
> the value
> > > > of a PATCH method in the REST API has actually increased over time
> and I'd
> > > > love to have this option for quick, stop-the-bleeding remediation
> efforts.
> > > >
> > > > One thought that comes to mind is that sometimes it may be useful to
> > > > explicitly remove properties from a connector configuration. We might
> > > > permit this by allowing users to specify null (the JSON literal, not
> a
> > > > string containing the characters "null") as the value for
> to-be-removed
> > > > properties.
> > > >
> > > > I'd love to see this change if you're still interested in driving
> it, Ivan.
> > > > Hopefully we can give it the attention it deserves in the upcoming
> months!
> > > >
> > > > Cheers,
> > > >
> > > > Chris
> > > >
> > > > On Fri, Jun 28, 2019 at 4:56 AM Ivan Yurchenko 
> > > > wrote:
> > > >
> > > > > Thank you for your feedback Ryanne!
> > > > > These are all surely valid concerns and PATCH isn't really
> necessary or
> > > > > suitable for normal production configuration management. However,
> there are
> > > > > cases where quick patching of the configuration is useful, such as
> hot
> > > > > fixes of production or in development.
> > > > >
> > > > > Overall, the change itself is really tiny and if the cost-benefit
> balance
> > > > > is positive, I'd still like to drive it further.
> > > > >
> > > > > Ivan
> > > > >
> > > > > On Wed, 26 Jun 2019 at 17:45, Ryanne Dolan 
> wrote:
> > > > >
> > > > > > Ivan, I looked at adding PATCH a while ago as well. I decided
> not to
> > > > > pursue
> > > > > > the idea for a few reasons:
> > > > > >
> > > > > > 1) PATCH is still racy. For example, if you want to add a topic
> to the
> > > > > > "topics" property, you still need to read, modify, and write the
> existing
> > > > > > value. To handle this, you'd need to support atomic sub-document
> > > > > > operations, which I don't see happening.
> > > > > >
> > > > > > 2) A common pattern is to store your configurations in git or
> something,
> > > > > > and then apply them via PUT. Throw in some triggers or jenkins
> etc, and
> > > > > you
> > > > > > have a more robust solution than PATCH provides.
> > > > > >
> > > > > > 3) For properties that change a lot, it's possible to use an
> out-of-band
> > > > > > data source, e.g. Kafka or Zookeeper, and then have your
> Connector
> > > > > > subscribe to changes. I've done something like this to enable
> dynamic
> > > > > > reconfiguration of Connectors from command-line tools and
> dashboards
> > > > > > without involving the Connect REST API at all. Moreover, I've
> done so in
> > > > > an
> > > > > > atomic, non-racy way.
> > > > > >
> > > > > > So I don't think PATCH is strictly necessary nor sufficient for
> atomic
> > > > > > partial updates. That said, it doesn't hurt and I'm happy to
> support the
> > > > > > KIP.
> > > > > >
> > > > > > Ryanne
> > > > > >
> > > 
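
To make the proposal concrete: a partial update with a JSON null tombstone, as
discussed above, might look like the following. This is only a sketch against
the endpoint the KIP proposes (not available in released Connect versions),
using the JDK's built-in HTTP client; the connector name and properties are
hypothetical.

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PatchConnectorConfig {
    public static void main(String[] args) throws Exception {
        // Change one property and remove another; properties not mentioned in
        // the body keep their current values, unlike a full PUT of the config.
        String body = "{\"topics\": \"new-topic\", \"obsolete.property\": null}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors/my-connector/config"))
            .header("Content-Type", "application/json")
            .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
{code}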

Re: [DISCUSS] Jakarta and Java EE 9/10 support in Kafka 4.0

2024-03-27 Thread Ismael Juma
Hi Christopher,

Thanks for raising this. Moving to the new namespace makes sense - would
you be willing to submit a KIP? The point you raised regarding Jetty 11 EOL
and Jetty 12 requiring Java 17 is a good one and is worth discussing the
trade-offs in more detail. I originally did not propose moving Connect to
Java 17 because of the risk that it might break several connectors. If
someone summarized the number of connectors that support Java 17 and the
number that does not, it would be a useful first step in the discussion.

Ismael

On Tue, Mar 26, 2024 at 9:04 AM Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:

> Hi Greg,
>
> You are right that KIP-1013 and the JDK 17 upgrade is not directly relevant
> because JDK 11 can still be used with Jakarta APIs. However, it's still
> somewhat relevant and important because if we are stuck at JDK 11 then we
> can't upgrade to certain versions. For Connect, there is a Jetty server for
> the rest API, if we wanted to use Jetty 12.x that requires JDK 17+. The
> problem with using Jetty 11.x is that it is already EOL.
>
> So are we really locked into JDK 11 for Connect for version 4.0? It would
> require people to upgrade their connectors to run on JDK 17 but shipping
> Kafka 4.0 with a Jetty version that is already end of life doesn't make
> sense to me. I know that Connect supports isolated classloaders for
> connectors but that of course is not the same as different Java versions.
>
> Chris
>
> On Tue, Mar 26, 2024 at 11:33 AM Greg Harris  >
> wrote:
>
> > Hi Christopher!
> >
> > Thanks so much for raising this. I agree that we should move to the
> > new namespace in 4.0, and not doing so would be a mistake.
> > This breaking change has a lot of benefits, and the only cost I am
> > aware of is that ConnectRestExtensions will need to be migrated,
> > rebuilt, and re-released for 4.0+
> >
> > Can you explain how KIP-1013 and the Java version are relevant?
> > Connect is dependent on this namespace, but will still need to support
> > Java 11 in 4.0.
> >
> > Thanks!
> > Greg
> >
> > On Tue, Mar 26, 2024 at 5:04 AM Christopher Shannon
> >  wrote:
> > >
> > > Is this already being planned for version 4.0? If not, I strongly think
> > > it should be.
> > >
> > > Kafka is currently using the old, long-deprecated javax APIs, which are
> > > going to continue to cause issues [1] for people as more and more things
> > > are updated to use Jakarta.
> > >
> > > With the bump to require JDK 17 for version 4.0 [2] this seems like the
> > > perfect time to upgrade to a new version of JavaEE and Jakarta apis and
> > new
> > > versions of dependencies like Jackson, Jersey, Jetty (12.x), etc that
> all
> > > support the new namespace. It needs to be upgraded at some point
> anyways
> > so
> > > a major version makes sense to me.
> > >
> > > Another scenario where I've run into this problem is testing. For
> > example,
> > > If I try to run tests against my custom code with an embedded Kafka
> > broker
> > > and components in JUnit, then things break with newer dependencies like
> > > Spring that require Jakarta as it interferes on the classpath.
> > >
> > > [1] https://issues.apache.org/jira/browse/KAFKA-16326
> > > [2]
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
> >
>
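
For context on what the migration entails at the source level: Jakarta EE 9
renamed the Java EE packages from javax.* to jakarta.*, so a class such as a
ConnectRestExtension compiled against one namespace cannot load against the
other. A minimal illustration using the JAX-RS Response type:

{code:java}
// Java EE 8 and earlier: the namespace Connect's REST layer uses today.
import javax.ws.rs.core.Response;
// Jakarta EE 9+: the same API under the new namespace, required by
// Jetty 12, Jersey 3, and current Spring releases.
// import jakarta.ws.rs.core.Response;

public class NamespaceExample {
    // The same class name lives in two incompatible packages, which is why
    // extensions must be rebuilt against whichever namespace the runtime ships.
    Response ok() {
        return Response.ok().build();
    }
}
{code}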


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2757

2024-03-27 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-16432) KStreams: Joining KStreams and GlobalKTable requires a state store

2024-03-27 Thread Matej Sprysl (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matej Sprysl resolved KAFKA-16432.
--
Resolution: Fixed

Fixed by using the same StreamsBuilder for both streams.

> KStreams: Joining KStreams and GlobalKTable requires a state store
> --
>
> Key: KAFKA-16432
> URL: https://issues.apache.org/jira/browse/KAFKA-16432
> Project: Kafka
>  Issue Type: Bug
>Reporter: Matej Sprysl
>Priority: Major
>
> h2.  Code:
> {code:java}
> final StreamsBuilder builder = new StreamsBuilder();
> final GlobalKTable<Long, Long> table = builder.globalTable("tableTopic", 
> Consumed.with(Serdes.Long(), Serdes.Long()));
> final KStream<byte[], Message> stream =
> builder.stream(
> Pattern.compile("streamTopic"),
> Consumed.with(Serdes.ByteArray(), Message.SERDE));
> (...some processing, no state store added...)
> final var joiner = new MyJoiner();
> final var keyMapper = new MyKeyMapper();
> final KStream enriched =
> messages
> .join(table, keyMapper, joiner, Named.as("innerJoin"));{code}
> h2. Error:
> {code:java}
> Caused by: org.apache.kafka.streams.errors.StreamsException: Processor 
> innerJoin has no access to StateStore tableTopic-STATE-STORE-00 as 
> the store is not connected to the processor. If you add stores manually via 
> '.addStateStore()' make sure to connect the added store to the processor by 
> providing the processor name to '.addStateStore()' or connect them via 
> '.connectProcessorAndStateStores()'. DSL users need to provide the store name 
> to '.process()', '.transform()', or '.transformValues()' to connect the store 
> to the corresponding operator, or they can provide a StoreBuilder by 
> implementing the stores() method on the Supplier itself. If you do not add 
> stores manually, please file a bug report at 
> https://issues.apache.org/jira/projects/KAFKA.
> {code}
> h2. Additional notes:
> This error happens when I try to join a KStreams instance with a GlobalKTable 
> instance.
> It is important to emphasize that I am not connecting any state store 
> manually.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
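
The resolution above boils down to building the GlobalKTable and the KStream
from one StreamsBuilder: two builders produce two disconnected topologies, so
the join processor never gets access to the table's state store. A minimal
sketch of the working shape, with simple Long serdes and inline lambdas
standing in for the ticket's Message.SERDE, MyKeyMapper, and MyJoiner:

{code:java}
import java.util.regex.Pattern;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class GlobalJoinSketch {
    public static void main(String[] args) {
        // One builder for everything: the table's state store and the join
        // processor then land in the same topology, avoiding the error above.
        final StreamsBuilder builder = new StreamsBuilder();
        final GlobalKTable<Long, Long> table =
            builder.globalTable("tableTopic", Consumed.with(Serdes.Long(), Serdes.Long()));
        final KStream<Long, Long> stream = builder.stream(
            Pattern.compile("streamTopic"),
            Consumed.with(Serdes.Long(), Serdes.Long()));
        final KStream<Long, String> enriched = stream.join(
            table,
            (key, value) -> value,                           // key mapper: stream value -> table key
            (value, tableValue) -> value + ":" + tableValue, // joiner combines both values
            Named.as("innerJoin"));
        System.out.println(builder.build().describe());
    }
}
{code}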


Re: [DISCUSS] KIP-477: Add PATCH method for connector config in Connect REST API

2024-03-27 Thread Ivan Yurchenko
Hi,

I updated the KIP with the two following changes:
1. Using `null` values as tombstone value for removing existing fields from 
configuration.
2. Added a note about the lack of 100% atomicity, which seems very difficult to 
achieve practically.

Ivan


On Tue, Mar 26, 2024, at 14:45, Ivan Yurchenko wrote:
> Speaking of the Chris' comment
> 
> > One thought that comes to mind is that sometimes it may be useful to
> > explicitly remove properties from a connector configuration. We might
> > permit this by allowing users to specify null (the JSON literal, not a
> > string containing the characters "null") as the value for to-be-removed
> > properties.
> 
> This actually makes sense. AFAIU, `null` cannot be in the connector config 
> since https://github.com/apache/kafka/pull/11333, so using it as a tombstone 
> value is a good idea. I can update the KIP.
> 
> Ivan
> 
> 
> On Tue, Mar 26, 2024, at 14:19, Ivan Yurchenko wrote:
> > Hi all,
> > 
> > This KIP is a bit old now :) but I think its context hasn't changed much 
> > since then and the KIP is still valid. I would like to finally bring it to 
> > some conclusion.
> > 
> > Best,
> > Ivan
> > 
> > On 2021/07/12 14:49:47 Chris Egerton wrote:
> > > Hi all,
> > > 
> > > Know it's been a while for this KIP but in my personal experience the 
> > > value
> > > of a PATCH method in the REST API has actually increased over time and I'd
> > > love to have this option for quick, stop-the-bleeding remediation efforts.
> > > 
> > > One thought that comes to mind is that sometimes it may be useful to
> > > explicitly remove properties from a connector configuration. We might
> > > permit this by allowing users to specify null (the JSON literal, not a
> > > string containing the characters "null") as the value for to-be-removed
> > > properties.
> > > 
> > > I'd love to see this change if you're still interested in driving it, 
> > > Ivan.
> > > Hopefully we can give it the attention it deserves in the upcoming months!
> > > 
> > > Cheers,
> > > 
> > > Chris
> > > 
> > > On Fri, Jun 28, 2019 at 4:56 AM Ivan Yurchenko 
> > > wrote:
> > > 
> > > > Thank you for your feedback Ryanne!
> > > > These are all surely valid concerns and PATCH isn't really necessary or
> > > > suitable for normal production configuration management. However, there 
> > > > are
> > > > cases where quick patching of the configuration is useful, such as hot
> > > > fixes of production or in development.
> > > >
> > > > Overall, the change itself is really tiny and if the cost-benefit 
> > > > balance
> > > > is positive, I'd still like to drive it further.
> > > >
> > > > Ivan
> > > >
> > > > On Wed, 26 Jun 2019 at 17:45, Ryanne Dolan  wrote:
> > > >
> > > > > Ivan, I looked at adding PATCH a while ago as well. I decided not to
> > > > pursue
> > > > > the idea for a few reasons:
> > > > >
> > > > > 1) PATCH is still racy. For example, if you want to add a topic to the
> > > > > "topics" property, you still need to read, modify, and write the 
> > > > > existing
> > > > > value. To handle this, you'd need to support atomic sub-document
> > > > > operations, which I don't see happening.
> > > > >
> > > > > 2) A common pattern is to store your configurations in git or 
> > > > > something,
> > > > > and then apply them via PUT. Throw in some triggers or jenkins etc, 
> > > > > and
> > > > you
> > > > > have a more robust solution than PATCH provides.
> > > > >
> > > > > 3) For properties that change a lot, it's possible to use an 
> > > > > out-of-band
> > > > > data source, e.g. Kafka or Zookeeper, and then have your Connector
> > > > > subscribe to changes. I've done something like this to enable dynamic
> > > > > reconfiguration of Connectors from command-line tools and dashboards
> > > > > without involving the Connect REST API at all. Moreover, I've done so 
> > > > > in
> > > > an
> > > > > atomic, non-racy way.
> > > > >
> > > > > So I don't think PATCH is strictly necessary nor sufficient for atomic
> > > > > partial updates. That said, it doesn't hurt and I'm happy to support 
> > > > > the
> > > > > KIP.
> > > > >
> > > > > Ryanne
> > > > >
> > > > > On Tue, Jun 25, 2019 at 12:15 PM Ivan Yurchenko <
> > > > ivan0yurche...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Since Kafka 2.3 has just been release and more people may have time 
> > > > > > to
> > > > > look
> > > > > > at this now, I'd like to bump this discussion.
> > > > > > Thanks.
> > > > > >
> > > > > > Ivan
> > > > > >
> > > > > >
> > > > > > On Thu, 13 Jun 2019 at 17:20, Ivan Yurchenko 
> > > > > >  > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > I'd like to start the discussion of KIP-477: Add PATCH method for
> > > > > > > connector config in Connect REST API.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-477%3A+Add+PATCH+method+for+connector+config+in+Connect+REST+API
> > 

[jira] [Created] (KAFKA-16432) KStreams: Joining KStreams and GlobalKTable requires a state store

2024-03-27 Thread Matej Sprysl (Jira)
Matej Sprysl created KAFKA-16432:


 Summary: KStreams: Joining KStreams and GlobalKTable requires a 
state store
 Key: KAFKA-16432
 URL: https://issues.apache.org/jira/browse/KAFKA-16432
 Project: Kafka
  Issue Type: Bug
Reporter: Matej Sprysl


h2.  Code:
{code:java}
final StreamsBuilder builder = new StreamsBuilder();

final GlobalKTable<Long, Long> table = builder.globalTable("tableTopic", 
Consumed.with(Serdes.Long(), Serdes.Long()));

final KStream<byte[], Message> stream =
builder.stream(
Pattern.compile("streamTopic"),
Consumed.with(Serdes.ByteArray(), Message.SERDE));

(...some processing, no state store added...)

final var joiner = new MyJoiner();
final var keyMapper = new MyKeyMapper();

final KStream enriched =
messages
.join(table, keyMapper, joiner, Named.as("innerJoin"));{code}
h2. Error:
{code:java}
Caused by: org.apache.kafka.streams.errors.StreamsException: Processor 
innerJoin has no access to StateStore tableTopic-STATE-STORE-00 as the 
store is not connected to the processor. If you add stores manually via 
'.addStateStore()' make sure to connect the added store to the processor by 
providing the processor name to '.addStateStore()' or connect them via 
'.connectProcessorAndStateStores()'. DSL users need to provide the store name 
to '.process()', '.transform()', or '.transformValues()' to connect the store 
to the corresponding operator, or they can provide a StoreBuilder by 
implementing the stores() method on the Supplier itself. If you do not add 
stores manually, please file a bug report at 
https://issues.apache.org/jira/projects/KAFKA.
{code}
h2. Additional notes:

This error happens when I try to join a KStreams instance with a GlobalKTable 
instance.

It is important to emphasize that I am not connecting any state store manually.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-981: Manage Connect topics with custom implementation of Admin

2024-03-27 Thread Omnia Ibrahim
Hi Greg, thanks for the feedback and sorry for the late response. 

> I don't think we can adapt the ForwardingAdmin as-is for use as a
> first-class Connect plugin.
> 1. It doesn't have a default constructor, and so can't be included in
> the existing plugin discovery mechanisms.
> 2. It doesn't implement Versioned, and so won't have version
> information exposed in the REST API
The goal isn't for ForwardingAdmin to become a Connect plugin, but rather to 
introduce a pluggable extension point that intercepts or replaces Admin client 
functionality, especially since, as far as I know, neither the Admin client in 
Connect nor any of the other Kafka clients is a plugin. Maybe the KIP wasn't 
clear enough on this front, so I'll update it to make this clearer. 
Even if we wanted it to be versioned, we could do that in another KIP, but is 
there a reason why we'd want that?

> I also don't think that we should make the ForwardingAdmin a
> second-class Connect plugin.
> 1. Having some plugins but not others benefit from classloader
> isolation would be a "gotcha" for anyone familiar with existing
> Connect plugins
I am not sure I am getting your point here. 
> 2. Some future implementations may have a use-case for classloader
> isolation (such as depending on their own HTTP/json library) and
> retrofitting isolation would be more complicated than including it
> initially.
This is a possibility; however, if this is a limitation of KIP-787, then it 
already exists when we run on a Connect cluster. 

> I also have concerns about the complexity of the implementation as a
> superclass instead of an interface, especially when considering the
> evolution of the Admin interface.

This was discussed in the alternatives and discussion thread for KIP-787, which 
is the original KIP. 
Having a delegator class like ForwardingAdmin has some advantages over an 
interface or inheriting KafkaAdminClient:
1. A delegator class makes it easier to test the customised implementation.
2. Adding competing Admin interfaces would create confusion and a heavier 
maintenance burden for the project.
3. Kafka already has this pattern of a wrapper/delegator class for the Admin 
client, e.g. `org.apache.kafka.streams.processor.internals.InternalTopicManager`, 
`org.apache.kafka.connect.util.SharedTopicAdmin` and 
`org.apache.kafka.connect.util.TopicAdmin`, and has never had another interface 
for AdminClient.

> I don't think the original proposal included the rejected alternative
> of having the existing AdminClient talk to the federation layer, which
> could implement a Kafka-compatible endpoint.
 
Thanks for suggesting that; indeed, I had not included that option in the 
rejected alternatives. I think this approach should be ruled out for the 
following reasons (I will update the KIP to reflect this):
1. The Admin API is an interface modelled around a single cluster, but the 
federation layer would have to encompass multiple ones. It operates at a 
different abstraction level, creating all sorts of problems, such as: which 
cluster does this request refer to? There's no space in the Admin API to 
represent the cluster, and there shouldn't be. With the ForwardingAdmin 
implementation, the cluster identifier can be configured locally and used 
accordingly.
2. The Admin API expects a Kafka cluster to be configured as an endpoint. Some 
of the requests involve discovering metadata, e.g. listing topics involves 
discovering metadata and selecting the least busy node before sending the RPC. 
A federation layer shouldn't be in the same scope as any of this; it can be a 
stateless service.
3. If the federation layer has any issue, it will block everyone using 
AdminClient for basic day-to-day functionality like creating/altering 
resources, creating a bottleneck. It also might block the operators of the 
Kafka cluster. (We can argue that the Admin client could run in two modes 
behind a flag, one with the federation layer enabled and one without, but it 
is still a blocker.)
4. This suggestion might be tricky for Kafka-as-a-service providers: who would 
provide this AdminClient implementation with a federation layer?
5. How would this work with K8s operators and IaC management tools? Especially 
when the operator is deployed in another K8s cluster, this would add extra 
network latency to these widely used operators.
6. The Admin API uses a binary protocol, fit for interfacing with Kafka, 
whereas a federation layer could use a simpler REST-based interface.

> If a federation layer needs to intercept the Admin client behaviors,
> It sounds more reasonable for that to be addressed for all Admin
> clients at the network boundary rather than one-by-one updating the
> Java APIs to use this new plugin.
I just want to clarify that not all Java APIs need this new plugin; only MM2 
and Connect do, to make the deployment of MM2 on Connect fully managed with a 
federation layer. The only other API that might need this is Streams, for 
initialising `InternalTopicManager`, which 
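
For readers unfamiliar with the class under discussion:
org.apache.kafka.clients.admin.ForwardingAdmin (introduced by KIP-787)
delegates every Admin method to a real KafkaAdminClient, so an implementation
only overrides the calls it wants to intercept. A minimal sketch of that
pattern; the federation hook itself is entirely hypothetical.

{code:java}
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.admin.CreateTopicsOptions;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.ForwardingAdmin;
import org.apache.kafka.clients.admin.NewTopic;

public class FederatedAdmin extends ForwardingAdmin {

    public FederatedAdmin(Map<String, Object> configs) {
        super(configs);
    }

    @Override
    public CreateTopicsResult createTopics(Collection<NewTopic> newTopics,
                                           CreateTopicsOptions options) {
        // Hypothetical federation hook (not a real API): register the topics
        // with an external federation/governance layer before delegating.
        // federationClient.register(clusterId, newTopics);
        return super.createTopics(newTopics, options);
    }
}
{code}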

Re: Permission to assign tickets in Jira

2024-03-27 Thread Yash Mayya
Hi Pavel,

I've granted you the necessary permissions. Thanks for your interest in
contributing to the Apache Kafka project!

Cheers,
Yash

On Tue, Mar 26, 2024 at 11:32 PM Pavel Pozdeev 
wrote:

>
> Hi Team,
>
> Would it be possible to get a permission to assign tickets in Jira?
> I've got a Jira account, but can only leave a comments on tickets, can not
> assign a ticket to myself.
> My Jira username: pasharik
>
> Best,
> Pavel Pozdeev
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2756

2024-03-27 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-16403) Flaky test org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords

2024-03-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez resolved KAFKA-16403.
-
Resolution: Not A Bug

> Flaky test 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords
> -
>
> Key: KAFKA-16403
> URL: https://issues.apache.org/jira/browse/KAFKA-16403
> Project: Kafka
>  Issue Type: Bug
>Reporter: Igor Soarez
>Priority: Major
>
> {code:java}
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords()
>  failed, log available in 
> /home/jenkins/workspace/Kafka_kafka-pr_PR-14903/streams/examples/build/reports/testOutput/org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testCountListOfWords().test.stdout
> Gradle Test Run :streams:examples:test > Gradle Test Executor 82 > 
> WordCountDemoTest > testCountListOfWords() FAILED
>     org.apache.kafka.streams.errors.ProcessorStateException: Error opening 
> store KSTREAM-AGGREGATE-STATE-STORE-03 at location 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:329)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBTimestampedStore.openRocksDB(RocksDBTimestampedStore.java:69)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:254)
>         at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:175)
>         at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:56)
>         at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$3(MeteredKeyValueStore.java:151)
>         at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
>         at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:151)
>         at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:232)
>         at 
> org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:102)
>         at 
> org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:258)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.setupTask(TopologyTestDriver.java:530)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:373)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:300)
>         at 
> org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:276)
>         at 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.setup(WordCountDemoTest.java:60)
>         Caused by:
>         org.rocksdb.RocksDBException: Corruption: IO error: No such file or 
> directory: While open a file for random read: 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/10.ldb:
>  No such file or directory in file 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/MANIFEST-05
>             at org.rocksdb.RocksDB.open(Native Method)
>             at org.rocksdb.RocksDB.open(RocksDB.java:307)
>             at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:323)
>             ... 17 more
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16404) Flaky test org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig

2024-03-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez resolved KAFKA-16404.
-
Resolution: Not A Bug

Same as KAFKA-16403, this only failed once. It was likely the result of a 
testing infrastructure problem. We can always re-open if we see this again and 
suspect otherwise.

> Flaky test 
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig
> -
>
> Key: KAFKA-16404
> URL: https://issues.apache.org/jira/browse/KAFKA-16404
> Project: Kafka
>  Issue Type: Bug
>Reporter: Igor Soarez
>Priority: Major
>
>  
> {code:java}
> org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig()
>  failed, log available in 
> /home/jenkins/workspace/Kafka_kafka-pr_PR-14903@2/streams/examples/build/reports/testOutput/org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.testGetStreamsConfig().test.stdout
> Gradle Test Run :streams:examples:test > Gradle Test Executor 87 > 
> WordCountDemoTest > testGetStreamsConfig() FAILED
>     org.apache.kafka.streams.errors.ProcessorStateException: Error opening 
> store KSTREAM-AGGREGATE-STATE-STORE-03 at location 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:329)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBTimestampedStore.openRocksDB(RocksDBTimestampedStore.java:69)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:254)
>         at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:175)
>         at 
> app//org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> app//org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:56)
>         at 
> app//org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:71)
>         at 
> app//org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$3(MeteredKeyValueStore.java:151)
>         at 
> app//org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
>         at 
> app//org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:151)
>         at 
> app//org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:232)
>         at 
> app//org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:102)
>         at 
> app//org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:258)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.setupTask(TopologyTestDriver.java:530)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:373)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:300)
>         at 
> app//org.apache.kafka.streams.TopologyTestDriver.(TopologyTestDriver.java:276)
>         at 
> app//org.apache.kafka.streams.examples.wordcount.WordCountDemoTest.setup(WordCountDemoTest.java:60)
>         Caused by:
>         org.rocksdb.RocksDBException: While lock file: 
> /tmp/kafka-streams/streams-wordcount/0_0/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-03/LOCK:
>  Resource temporarily unavailable
>             at app//org.rocksdb.RocksDB.open(Native Method)
>             at app//org.rocksdb.RocksDB.open(RocksDB.java:307)
>             at 
> app//org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:323)
>             ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16353) Offline protocol migration integration tests

2024-03-27 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16353.
-
Resolution: Fixed

> Offline protocol migration integration tests
> 
>
> Key: KAFKA-16353
> URL: https://issues.apache.org/jira/browse/KAFKA-16353
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Community Over Code NA 2024 Travel Assistance Applications now open!

2024-03-27 Thread Gavin McDonald
Hello to all users, contributors and Committers!

[ You are receiving this email as a subscriber to one or more ASF project
dev or user
  mailing lists and is not being sent to you directly. It is important that
we reach all of our
  users and contributors/committers so that they may get a chance to
benefit from this.
  We apologise in advance if this doesn't interest you but it is on topic
for the mailing
  lists of the Apache Software Foundation; and it is important please that
you do not
  mark this as spam in your email client. Thank You! ]

The Travel Assistance Committee (TAC) are pleased to announce that
travel assistance applications for Community over Code NA 2024 are now
open!

We will be supporting Community over Code NA, Denver Colorado in
October 7th to the 10th 2024.

TAC exists to help those that would like to attend Community over Code
events, but are unable to do so for financial reasons. For more info
on this year's applications and qualifying criteria, please visit the
TAC website at < https://tac.apache.org/ >. Applications are already
open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting
applications from those people that are able to attend the full event.

Important: Applications close on Monday 6th May, 2024.

Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as
required to efficiently and accurately process their request); this
will enable TAC to announce successful applications shortly
afterwards.

As usual, TAC expects to deal with a range of applications from a
diverse range of backgrounds; therefore, we encourage (as always)
anyone thinking about sending in an application to do so ASAP.

For those who will need a visa to enter the country, we advise you to apply
now so that you have enough time in case of interview delays. Do not
wait until you know whether you have been accepted.

We look forward to greeting many of you in Denver, Colorado , October 2024!

Kind Regards,

Gavin

(On behalf of the Travel Assistance Committee)


Re: [ANNOUNCE] New committer: Christo Lolov

2024-03-27 Thread Matthias J. Sax

Congrats!

On 3/26/24 9:39 PM, Christo Lolov wrote:
> Thank you everyone!
>
> It wouldn't have been possible without quite a lot of reviews and extremely
> helpful inputs from you and the rest of the community! I am looking forward
> to working more closely with you going forward :)
>
> On Tue, 26 Mar 2024 at 14:31, Kirk True  wrote:
>
>> Congratulations Christo!
>>
>>> On Mar 26, 2024, at 7:27 AM, Satish Duggana  wrote:
>>>
>>> Congratulations Christo!
>>>
>>> On Tue, 26 Mar 2024 at 19:20, Ivan Yurchenko  wrote:
>>>
>>>> Congrats!
>>>>
>>>> On Tue, Mar 26, 2024, at 14:48, Lucas Brutschy wrote:
>>>>> Congrats!
>>>>>
>>>>> On Tue, Mar 26, 2024 at 2:44 PM Federico Valeri  wrote:
>>>>>
>>>>>> Congrats!
>>>>>>
>>>>>> On Tue, Mar 26, 2024 at 2:27 PM Mickael Maison <
>>>>>> mickael.mai...@gmail.com> wrote:
>>>>>>
>>>>>>> Congratulations Christo!
>>>>>>>
>>>>>>> On Tue, Mar 26, 2024 at 2:26 PM Chia-Ping Tsai  wrote:
>>>>>>>
>>>>>>>> Congrats Christo!
>>>>>>>>
>>>>>>>> Chia-Ping









[jira] [Created] (KAFKA-16430) The group-metadata-manager thread is always in a loading state and occupies one CPU, unable to end.

2024-03-27 Thread Gao Fei (Jira)
Gao Fei created KAFKA-16430:
---

 Summary: The group-metadata-manager thread is always in a loading 
state and occupies one CPU, unable to end.
 Key: KAFKA-16430
 URL: https://issues.apache.org/jira/browse/KAFKA-16430
 Project: Kafka
  Issue Type: Bug
  Components: group-coordinator
Affects Versions: 2.4.0
Reporter: Gao Fei


I deployed three broker instances and suddenly found that the client was unable 
to consume data from certain topic partitions. I first tried to log in to the 
broker corresponding to the group and used the following command to view the 
consumer group:
{code:java}
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9093 --describe 
--group mygroup{code}
and found the following error:
{code:java}
Error: Executing consumer group command failed due to 
org.apache.kafka.common.errors.CoordinatorLoadInProgressException: The 
coordinator is loading and hence can't process requests.{code}

I then discovered that the broker may be stuck in a loop, which is constantly 
in a loading state. At the same time, I found through the top command that the 
"group-metadata-manager-0" thread was constantly consuming 100% of the CPU 
resources. This loop could not be broken, resulting in the inability to consume 
topic partition data on that node. At this point, I suspected that the issue 
may be related to the __consumer_offsets partition data file loaded by this 
thread.
Finally, after restarting the broker instance, everything was back to normal. 
It's very strange that if there was an issue with the __consumer_offsets 
partition data file, the broker should have failed to start. Why was it able to 
automatically recover after a restart? And why did this continuous loop loading 
of the __consumer_offsets partition data occur?

We encountered this issue in our production environment using Kafka versions 
2.2.1 and 2.4.0, and I believe it may also affect other versions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16429) Enhance all configs which can trigger rolling of new segment

2024-03-27 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16429:
--

 Summary: Enhance all configs which can trigger rolling of new 
segment
 Key: KAFKA-16429
 URL: https://issues.apache.org/jira/browse/KAFKA-16429
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


see discussion: 
https://github.com/apache/kafka/pull/15588#issuecomment-2021842695



--
This message was sent by Atlassian Jira
(v8.20.10#820010)