Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2750

2024-03-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 462807 lines...]
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldVerifyIfPendingTaskToInitExist() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldVerifyIfPendingTaskToInitExist() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldAddAndRemovePendingTaskToAddBack() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldAddAndRemovePendingTaskToAddBack() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldCheckStateWhenRemoveTask() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldCheckStateWhenRemoveTask() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldDrainPendingTasksToCreate() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldDrainPendingTasksToCreate() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldAddAndRemovePendingTaskToCloseClean() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldAddAndRemovePendingTaskToCloseClean() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldKeepAddedTasks() STARTED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > TasksTest > shouldKeepAddedTasks() PASSED
[2024-03-26T04:26:11.024Z] 
[2024-03-26T04:26:11.024Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > "shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext).org.apache.kafka.streams.state.internals.RocksDBStore@67230063, org.apache.kafka.test.MockInternalProcessorContext@171da6d8" STARTED
[2024-03-26T04:26:11.125Z] 
[2024-03-26T04:26:11.125Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > "shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext).org.apache.kafka.streams.state.internals.RocksDBStore@67230063, org.apache.kafka.test.MockInternalProcessorContext@171da6d8" PASSED
[2024-03-26T04:26:11.125Z] 
[2024-03-26T04:26:11.125Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > "shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext).org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@2cdbe19c, org.apache.kafka.test.MockInternalProcessorContext@1d1126c1" STARTED
[2024-03-26T04:26:11.225Z] 
[2024-03-26T04:26:11.225Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext) > "shouldRecordCorrectBlockCacheUsage(RocksDBStore, StateStoreContext).org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@2cdbe19c, org.apache.kafka.test.MockInternalProcessorContext@1d1126c1" PASSED
[2024-03-26T04:26:11.225Z] 
[2024-03-26T04:26:11.225Z] Gradle Test Run :streams:test > Gradle Test Executor 25 > RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > 

Re: [Confluence] Request for an account

2024-03-25 Thread Luke Chen
Hi Johnny,

Currently, there is an infra issue about this:
https://issues.apache.org/jira/browse/INFRA-25451 , and unfortunately it's
not fixed yet.
Alternatively, you could put your proposal in a shared Google doc for
discussion (with comments disabled, since we want to keep all the
discussion history in Apache email threads).
Once the discussion is complete, committers can help you add the content
to the Confluence wiki.

Thanks.
Luke

On Mon, Mar 25, 2024 at 8:56 PM ChengHan Hsu 
wrote:

> Hi all,
>
> I have sent an email to infrastruct...@apache.org to register a
> Confluence account. I am contributing to Kafka and would like to update
> some wiki pages.
> May I know if anyone can help with this?
>
> Thanks in advance!
>
> Best,
> Johnny
>


[jira] [Resolved] (KAFKA-16409) kafka-delete-records / DeleteRecordsCommand should use standard exception handling

2024-03-25 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16409.
---
Fix Version/s: 3.8.0
   Resolution: Fixed

> kafka-delete-records / DeleteRecordsCommand should use standard exception 
> handling
> --
>
> Key: KAFKA-16409
> URL: https://issues.apache.org/jira/browse/KAFKA-16409
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 3.7.0
>Reporter: Greg Harris
>Assignee: PoAn Yang
>Priority: Minor
>  Labels: newbie
> Fix For: 3.8.0
>
>
> When an exception is thrown in kafka-delete-records, it propagates through 
> `main` to the JVM, producing the following message:
> {noformat}
> bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
> --offset-json-file /tmp/does-not-exist
> Exception in thread "main" java.io.IOException: Unable to read file 
> /tmp/does-not-exist
>         at 
> org.apache.kafka.common.utils.Utils.readFileAsString(Utils.java:787)
>         at 
> org.apache.kafka.tools.DeleteRecordsCommand.execute(DeleteRecordsCommand.java:105)
>         at 
> org.apache.kafka.tools.DeleteRecordsCommand.main(DeleteRecordsCommand.java:64)
> Caused by: java.nio.file.NoSuchFileException: /tmp/does-not-exist
>         at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>         at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>         at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>         at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>         at java.nio.file.Files.newByteChannel(Files.java:361)
>         at java.nio.file.Files.newByteChannel(Files.java:407)
>         at java.nio.file.Files.readAllBytes(Files.java:3152)
>         at 
> org.apache.kafka.common.utils.Utils.readFileAsString(Utils.java:784)
>         ... 2 more{noformat}
> This is in contrast to the error handling used in other tools, such as 
> kafka-log-dirs:
> {noformat}
> bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe 
> --command-config /tmp/does-not-exist
> /tmp/does-not-exist
> java.nio.file.NoSuchFileException: /tmp/does-not-exist
>         at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>         at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>         at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>         at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>         at java.nio.file.Files.newByteChannel(Files.java:361)
>         at java.nio.file.Files.newByteChannel(Files.java:407)
>         at 
> java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
>         at java.nio.file.Files.newInputStream(Files.java:152)
>         at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:686)
>         at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:673)
>         at 
> org.apache.kafka.tools.LogDirsCommand.createAdminClient(LogDirsCommand.java:149)
>         at 
> org.apache.kafka.tools.LogDirsCommand.execute(LogDirsCommand.java:68)
>         at 
> org.apache.kafka.tools.LogDirsCommand.mainNoExit(LogDirsCommand.java:54)
>         at 
> org.apache.kafka.tools.LogDirsCommand.main(LogDirsCommand.java:49){noformat}
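
For reference, a minimal sketch of the mainNoExit-style pattern the other tools follow (names are modeled loosely on LogDirsCommand above; this is an illustrative sketch, not the actual KAFKA-16409 patch):

{code:java}
public final class ToolExceptionHandlingSketch {

    public static void main(String... args) {
        // Delegate to a wrapper that never lets an exception escape to the JVM.
        System.exit(mainNoExit(args));
    }

    // Print the error ourselves and exit non-zero, avoiding the JVM's
    // "Exception in thread main" banner while staying script-friendly.
    static int mainNoExit(String... args) {
        try {
            execute(args);
            return 0;
        } catch (Exception e) {
            System.err.println(e.getMessage());
            e.printStackTrace(System.err);
            return 1;
        }
    }

    static void execute(String... args) throws Exception {
        // Tool logic goes here; any thrown exception is reported above.
    }
}
{code}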



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread Justine Olshan
Hi Jun,

I apologize for the typos. I thought I had gotten rid of all the
non-protocol versions. It is the protocol version, as per my previous
email. I will fix the KIP.

The group coordinator version is used to upgrade to the new group
coordinator protocol (KIP-848).
I don't have all the context there.

I would prefer not to add the metadata flag to the storage tool, as it is
not necessary. The reason it is not removed is purely backwards
compatibility; Colin had strong feelings about not removing any flags.

Justine

On Mon, Mar 25, 2024 at 5:01 PM Jun Rao  wrote:

> Hi, Justine,
>
> Thanks for the updated KIP. A few more comments.
>
> 10. Could we describe what RPCs group.coordinator.version controls?
>
> 12. --metadata METADATA is not removed from kafka-features. Do we have a
> justification for this? If so, should we add that to kafka-storage to be
> consistent?
>
> 14. The KIP has both transaction.protocol.version and transaction.version.
> What's the correct feature name?
>
> Jun
>
> On Mon, Mar 25, 2024 at 4:54 PM Justine Olshan
> 
> wrote:
>
> > I've updated the KIP to include this CLI. For now, I've moved the new api
> > to the server as a rejected alternative.
> >
> > It seems like we can keep the mapping in the tool. Given that we also
> need
> > to do the same validation for using multiple --feature flags (the request
> > will look the same to the server), we can have the --release-version
> flag.
> >
> > I think that closes the main discussions. Please let me know if there is
> > any further discussion I missed.
> >
> > Justine
> >
> > On Mon, Mar 25, 2024 at 4:44 PM Justine Olshan 
> > wrote:
> >
> > > Hi Jose,
> > >
> > > Sorry for the typos. I think you figured out what I meant.
> > >
> > > I can make a new API. There is a risk of creating a ton of very similar
> > > APIs though. Even the ApiVersions api is confusing with its supported
> and
> > > finalized features fields. I wonder if there is a middle ground here.
> > > I can have the storage tool and features tool rely on the feature and
> not
> > > query the server. Colin seemed to be against that.
> > > > Anyway your idea of putting the info on the server side is probably
> for
> > > the best.
> > >
> > > --release-version can work with the downgrade tool too. I just didn't
> > > think I needed to directly spell that out. I can though.
> > >
> > > I wish we weren't splitting this conversation among the two threads.
> :( I
> > > tried to get this out so it could cover all KIPs. Having this on
> separate
> > > threads makes getting this consensus even harder than it already is.
> > > From what I can tell your KIP's text matches this. Is the expectation
> > that
> > > the added flags will be done as part of this KIP or your KIP? I don't
> > > really have a strong opinion about --release-version so maybe it should
> > > have been part of your KIP all along.
> > >
> > > > To me version 0 doesn't necessarily mean that the feature is not
> > > enabled. For example, for kraft.version, version 0 means the protocol
> > > prior to KIP-853. Version 0 is the currently implemented version of
> > > the KRaft protocol.
> > >
> > > This is what Colin told me in our previous discussions. I don't really
> > > feel too strongly about the semantics here.
> > >
> > > So it seems like the only real undecided item here is whether we should
> > > have this new api query the server or rely on the information being
> built
> > > in the tool.
> > > I will update the KIP to include the CLI command to get the
> information.
> > >
> > > Justine
> > >
> > > On Mon, Mar 25, 2024 at 4:19 PM José Armando García Sancio
> > >  wrote:
> > >
> > >> Hi Justine,
> > >>
> > >> Thanks for the update. See my comments below.
> > >>
> > >> On Mon, Mar 25, 2024 at 2:51 PM Justine Olshan
> > >>  wrote:
> > >> > I've updated the KIP with the changes I mentioned earlier. I have
> not
> > >> yet
> > >> > removed the --feature-version flag from the upgrade tool.
> > >>
> > >> What's the "--feature-version" flag? This is the first time I see it
> > >> mentioned and I don't see it in the KIP. Did you mean
> > >> "--release-version"?
> > >>
> > >> > Please take a look at the API to get the versions for a given
> > >> > metadata version. Right now I'm using ApiVersions request and
> > >> specifying a
> > >> > metadata version. The supported versions are then supplied in the
> > >> > ApiVersions response.
> > >> > There were tradeoffs with this approach. It is a pretty minimal
> > change,
> > >> but
> > >> > reusing the API means that it could be confusing (ie, the ApiKeys
> will
> > >> be
> > >> > supplied in the response but not needed.) I considered making a
> whole
> > >> new
> > >> > API, but that didn't seem necessary for this use.
> > >>
> > >> I agree that this is extremely confusing and we shouldn't overload the
> > >> ApiVersions RPC to return this information. The KIP doesn't mention
> > >> how it is going to use this API. Do you need to update the Admin
> > >> client to include this 
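
To ground the flag discussion for readers skimming the thread, here is a hypothetical pair of invocations of the two styles being compared (the flag names come from this thread; the exact syntax and the feature values are placeholders pending the KIP, not final):

    bin/kafka-features.sh upgrade --release-version 3.8
    bin/kafka-features.sh upgrade --feature transaction.version=<version> --feature group.coordinator.version=<version>

The first form asks the tool to map a release version to a full set of feature versions; the second names each feature explicitly. Per the discussion, both forms produce the same request shape and must pass the same server-side validation.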

Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread Jun Rao
Hi, Justine,

Thanks for the updated KIP. A few more comments.

10. Could we describe what RPCs group.coordinator.version controls?

12. --metadata METADATA is not removed from kafka-features. Do we have a
justification for this? If so, should we add that to kafka-storage to be
consistent?

14. The KIP has both transaction.protocol.version and transaction.version.
What's the correct feature name?

Jun

On Mon, Mar 25, 2024 at 4:54 PM Justine Olshan 
wrote:

> I've updated the KIP to include this CLI. For now, I've moved the new api
> to the server as a rejected alternative.
>
> It seems like we can keep the mapping in the tool. Given that we also need
> to do the same validation for using multiple --feature flags (the request
> will look the same to the server), we can have the --release-version flag.
>
> I think that closes the main discussions. Please let me know if there is
> any further discussion I missed.
>
> Justine
>
> On Mon, Mar 25, 2024 at 4:44 PM Justine Olshan 
> wrote:
>
> > Hi Jose,
> >
> > Sorry for the typos. I think you figured out what I meant.
> >
> > I can make a new API. There is a risk of creating a ton of very similar
> > APIs though. Even the ApiVersions api is confusing with its supported and
> > finalized features fields. I wonder if there is a middle ground here.
> > I can have the storage tool and features tool rely on the feature and not
> > query the server. Colin seemed to be against that.
> > > Anyway your idea of putting the info on the server side is probably for
> > the best.
> >
> > --release-version can work with the downgrade tool too. I just didn't
> > think I needed to directly spell that out. I can though.
> >
> > I wish we weren't splitting this conversation among the two threads. :( I
> > tried to get this out so it could cover all KIPs. Having this on separate
> > threads makes getting this consensus even harder than it already is.
> > From what I can tell your KIP's text matches this. Is the expectation
> that
> > the added flags will be done as part of this KIP or your KIP? I don't
> > really have a strong opinion about --release-version so maybe it should
> > have been part of your KIP all along.
> >
> > > To me version 0 doesn't necessarily mean that the feature is not
> > enabled. For example, for kraft.version, version 0 means the protocol
> > prior to KIP-853. Version 0 is the currently implemented version of
> > the KRaft protocol.
> >
> > This is what Colin told me in our previous discussions. I don't really
> > feel too strongly about the semantics here.
> >
> > So it seems like the only real undecided item here is whether we should
> > have this new api query the server or rely on the information being built
> > in the tool.
> > I will update the KIP to include the CLI command to get the information.
> >
> > Justine
> >
> > On Mon, Mar 25, 2024 at 4:19 PM José Armando García Sancio
> >  wrote:
> >
> >> Hi Justine,
> >>
> >> Thanks for the update. See my comments below.
> >>
> >> On Mon, Mar 25, 2024 at 2:51 PM Justine Olshan
> >>  wrote:
> >> > I've updated the KIP with the changes I mentioned earlier. I have not
> >> yet
> >> > removed the --feature-version flag from the upgrade tool.
> >>
> >> What's the "--feature-version" flag? This is the first time I see it
> >> mentioned and I don't see it in the KIP. Did you mean
> >> "--release-version"?
> >>
> >> > Please take a look at the API to get the versions for a given
> >> > metadata version. Right now I'm using ApiVersions request and
> >> specifying a
> >> > metadata version. The supported versions are then supplied in the
> >> > ApiVersions response.
> >> > There were tradeoffs with this approach. It is a pretty minimal
> change,
> >> but
> >> > reusing the API means that it could be confusing (ie, the ApiKeys will
> >> be
> >> > supplied in the response but not needed.) I considered making a whole
> >> new
> >> > API, but that didn't seem necessary for this use.
> >>
> >> I agree that this is extremely confusing and we shouldn't overload the
> >> ApiVersions RPC to return this information. The KIP doesn't mention
> >> how it is going to use this API. Do you need to update the Admin
> >> client to include this information?
> >>
> >> Having said this, as you mentioned in the KIP the kafka-storage tool
> >> needs this information and that tool cannot assume that there is a
> >> running server it can query (send an RPC). Can the kafka-features use
> >> the same mechanism used by kafka-storage without calling into a
> >> broker?
> >>
> >> re: "It will work just like the storage tool and upgrade all the
> >> features to a version"
> >>
> >> Does this mean that --release-version cannot be used with
> >> "kafka-features downgrade"?
> >>
> >> re: Consistency with KIP-853
> >>
> >> Jun and I have been having a similar conversation in the discussion
> >> thread for KIP-853. From what I can tell both changes are compatible.
> >> Do you mind taking a look at these two sections and confirming that
> >> they 

Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread Justine Olshan
I've updated the KIP to include this CLI. For now, I've moved the new api
to the server as a rejected alternative.

It seems like we can keep the mapping in the tool. Given that we also need
to do the same validation for using multiple --feature flags (the request
will look the same to the server), we can have the --release-version flag.

I think that closes the main discussions. Please let me know if there is
any further discussion I missed.

Justine

On Mon, Mar 25, 2024 at 4:44 PM Justine Olshan  wrote:

> Hi Jose,
>
> Sorry for the typos. I think you figured out what I meant.
>
> I can make a new API. There is a risk of creating a ton of very similar
> APIs though. Even the ApiVersions api is confusing with its supported and
> finalized features fields. I wonder if there is a middle ground here.
> I can have the storage tool and features tool rely on the feature and not
> query the server. Colin seemed to be against that.
> > Anyway your idea of putting the info on the server side is probably for
> the best.
>
> --release-version can work with the downgrade tool too. I just didn't
> think I needed to directly spell that out. I can though.
>
> I wish we weren't splitting this conversation among the two threads. :( I
> tried to get this out so it could cover all KIPs. Having this on separate
> threads makes getting this consensus even harder than it already is.
> From what I can tell your KIP's text matches this. Is the expectation that
> the added flags will be done as part of this KIP or your KIP? I don't
> really have a strong opinion about --release-version so maybe it should
> have been part of your KIP all along.
>
> > To me version 0 doesn't necessarily mean that the feature is not
> enabled. For example, for kraft.version, version 0 means the protocol
> prior to KIP-853. Version 0 is the currently implemented version of
> the KRaft protocol.
>
> This is what Colin told me in our previous discussions. I don't really
> feel too strongly about the semantics here.
>
> So it seems like the only real undecided item here is whether we should
> have this new api query the server or rely on the information being built
> in the tool.
> I will update the KIP to include the CLI command to get the information.
>
> Justine
>
> On Mon, Mar 25, 2024 at 4:19 PM José Armando García Sancio
>  wrote:
>
>> Hi Justine,
>>
>> Thanks for the update. See my comments below.
>>
>> On Mon, Mar 25, 2024 at 2:51 PM Justine Olshan
>>  wrote:
>> > I've updated the KIP with the changes I mentioned earlier. I have not
>> yet
>> > removed the --feature-version flag from the upgrade tool.
>>
>> What's the "--feature-version" flag? This is the first time I see it
>> mentioned and I don't see it in the KIP. Did you mean
>> "--release-version"?
>>
>> > Please take a look at the API to get the versions for a given
>> > metadata version. Right now I'm using ApiVersions request and
>> specifying a
>> > metadata version. The supported versions are then supplied in the
>> > ApiVersions response.
>> > There were tradeoffs with this approach. It is a pretty minimal change,
>> but
>> > reusing the API means that it could be confusing (ie, the ApiKeys will
>> be
>> > supplied in the response but not needed.) I considered making a whole
>> new
>> > API, but that didn't seem necessary for this use.
>>
>> I agree that this is extremely confusing and we shouldn't overload the
>> ApiVersions RPC to return this information. The KIP doesn't mention
>> how it is going to use this API. Do you need to update the Admin
>> client to include this information?
>>
>> Having said this, as you mentioned in the KIP the kafka-storage tool
>> needs this information and that tool cannot assume that there is a
>> running server it can query (send an RPC). Can the kafka-features use
>> the same mechanism used by kafka-storage without calling into a
>> broker?
>>
>> re: "It will work just like the storage tool and upgrade all the
>> features to a version"
>>
>> Does this mean that --release-version cannot be used with
>> "kafka-features downgrade"?
>>
>> re: Consistency with KIP-853
>>
>> Jun and I have been having a similar conversation in the discussion
>> thread for KIP-853. From what I can tell both changes are compatible.
>> Do you mind taking a look at these two sections and confirming that
>> they don't contradict your KIP?
>> 1.
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-storage
>> 2.
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-features
>>
>> re: nit "For MVs that existed before these features, we map the new
>> features to version 0 (no feature enabled)."
>>
>> To me version 0 doesn't necessarily mean that the feature is not
>> enabled. For example, for kraft.version, version 0 means the protocol
>> prior to KIP-853. Version 0 is the currently implemented 

Re: [DISCUSS] KIP-1024: Make the restore behavior of GlobalKTables with custom processors configurable

2024-03-25 Thread Almog Gavra
Hello Folks!

Glad to see improvements to the GlobalKTables in discussion! I think they
deserve more love :)

Scope creep alert (I'm generally against scope creep, and I certainly still
support this KIP without it, but I want to see if there's an elegant way to
address both problems). The KIP mentions that "Now the restore is done by
reprocessing using an instance from the custom processor supplier", which
I suppose fixed a long-standing bug
(https://issues.apache.org/jira/browse/KAFKA-8037) but only for
GlobalKTables and not for normal KTables that use the source-changelog
optimization. Since this API could be used to signal "I want to reprocess
on restore", I'm wondering whether it makes sense to design this API in a
way that could be extended to KTables as well, so that a fix for KAFKA-8037
would be possible with the same mechanism. Thoughts?

Cheers,
Almog

On Mon, Mar 25, 2024 at 11:06 AM Walker Carlson
 wrote:

> Hey Bruno,
>
> 1) I'm actually not sure why that is in there. It certainly doesn't match
> the convention. Best to remove it and match the other methods.
>
> 2) Yeah, I thought about it but I'm not convinced it is a necessary
> restriction. It might be useful for the already defined processors but then
> they might as well use the `globalTable` method. I think the add state
> store option should go for maximum flexibility.
>
> Best,
> Walker
>
>
>
> On Fri, Mar 22, 2024 at 10:01 AM Bruno Cadonna  wrote:
>
> > Hi Walker,
> >
> > A couple of follow-up questions.
> >
> > 1.
> > Why do you propose to explicitly pass a parameter "storeName" in
> > StreamsBuilder#addGlobalStore?
> > The StoreBuilder should already provide a name for the store, if I
> > understand the code correctly.
> > I would avoid using the same name for the source node and the state
> > store, because it limits the flexibility in naming. Why do you not use
> > Named for the name of the source node?
> >
> > 2.
> > Did you consider Matthias' proposal to restrict the type of the store
> > builder to `StoreBuilder` (or even
> > `StoreBuilder`) for the case where
> > the processor is built-in?
> >
> >
> > Best,
> > Bruno
> >
> > On 3/13/24 11:05 PM, Walker Carlson wrote:
> > > Thanks for the feedback Bruno, Matthias, and Lucas!
> > >
> > > There is a decent amount but I'm going to try and just hit the major
> > points
> > > as I would like to keep this change simple.
> > >
> > > I've made corrections for the mistakes pointed out. Thanks for the
> > > suggestions everyone.
> > >
> > > The main sticking point seems to be with the method of signalling the
> > > restore behavior. It seems we can all agree with how the API should
> look
> > > with the default option we are adding. I think keeping the option to
> load
> > > directly from the topic into the store is a good idea. It is much more
> > > performant and could make a simple metric collector processor much
> > simpler.
> > >
> > > I think something that Matthias said about creating a special class of
> > > processors for the global stores helps me think about the issue. I tend
> > to
> > > fall into the category that we should keep global stores open to the
> > > possibility of having child nodes in the future. I don't really see the
> > > downside of having that as an option. It might not be best for a lot of
> > > cases, but something simple could be very useful to put in the PAPI.
> > >
> > > I like the idea of having a `GlobalStoreParameters` but only if we
> decide
> > > to make the processor need to extend an interface like
> > > `GlobalStoreProcessor`. If not that seems excessive.
> > >
> > > As of right now I don't see a better option than having a boolean flag
> > for
> > > the reprocessOnRestore option. I expanded the description in the docs
> so
> > I
> > > hope that helps.
> > >
> > > I am more than willing to take other ideas on it.
> > >
> > > thanks,
> > > Walker
> > >
> >
>
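
As a side note for readers, the difference the proposed reprocessOnRestore flag selects can be shown with a small self-contained sketch (plain Java, not Kafka Streams code; the toy changelog and "processor" here are made up for illustration):

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

public class RestoreStrategySketch {
    public static void main(String[] args) {
        // A toy changelog: key "a" is written twice.
        List<Map.Entry<String, Integer>> changelog = List.of(
                Map.entry("a", 1), Map.entry("b", 2), Map.entry("a", 3));

        System.out.println(restore(changelog, false)); // bulk-load: {a=3, b=2}
        System.out.println(restore(changelog, true));  // reprocess: {a=4, b=2}
    }

    static Map<String, Integer> restore(List<Map.Entry<String, Integer>> changelog,
                                        boolean reprocessOnRestore) {
        Map<String, Integer> store = new HashMap<>();
        // A custom "processor" that aggregates instead of overwriting.
        BiConsumer<String, Integer> processor = (k, v) -> store.merge(k, v, Integer::sum);
        for (Map.Entry<String, Integer> record : changelog) {
            if (reprocessOnRestore) {
                processor.accept(record.getKey(), record.getValue());
            } else {
                store.put(record.getKey(), record.getValue()); // last write wins
            }
        }
        return store;
    }
}
{code}

Because the processor transforms records, replaying the topic through it yields different store contents than bulk-loading it, which is why the restore behavior needs to be an explicit choice.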


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread Justine Olshan
Hi Jose,

Sorry for the typos. I think you figured out what I meant.

I can make a new API. There is a risk of creating a ton of very similar
APIs though. Even the ApiVersions api is confusing with its supported and
finalized features fields. I wonder if there is a middle ground here.
I can have the storage tool and features tool rely on the feature and not
query the server. Colin seemed to be against that.
> Anyway your idea of putting the info on the server side is probably for
the best.

--release-version can work with the downgrade tool too. I just didn't think
I needed to directly spell that out. I can though.

I wish we weren't splitting this conversation among the two threads. :( I
tried to get this out so it could cover all KIPs. Having this on separate
threads makes getting this consensus even harder than it already is.
From what I can tell your KIP's text matches this. Is the expectation that
the added flags will be done as part of this KIP or your KIP? I don't
really have a strong opinion about --release-version so maybe it should
have been part of your KIP all along.

> To me version 0 doesn't necessarily mean that the feature is not
enabled. For example, for kraft.version, version 0 means the protocol
prior to KIP-853. Version 0 is the currently implemented version of
the KRaft protocol.

This is what Colin told me in our previous discussions. I don't really feel
too strongly about the semantics here.

So it seems like the only real undecided item here is whether we should
have this new api query the server or rely on the information being built
in the tool.
I will update the KIP to include the CLI command to get the information.

Justine

On Mon, Mar 25, 2024 at 4:19 PM José Armando García Sancio
 wrote:

> Hi Justine,
>
> Thanks for the update. See my comments below.
>
> On Mon, Mar 25, 2024 at 2:51 PM Justine Olshan
>  wrote:
> > I've updated the KIP with the changes I mentioned earlier. I have not yet
> > removed the --feature-version flag from the upgrade tool.
>
> What's the "--feature-version" flag? This is the first time I see it
> mentioned and I don't see it in the KIP. Did you mean
> "--release-version"?
>
> > Please take a look at the API to get the versions for a given
> > metadata version. Right now I'm using ApiVersions request and specifying
> a
> > metadata version. The supported versions are then supplied in the
> > ApiVersions response.
> > There were tradeoffs with this approach. It is a pretty minimal change,
> but
> > reusing the API means that it could be confusing (ie, the ApiKeys will be
> > supplied in the response but not needed.) I considered making a whole new
> > API, but that didn't seem necessary for this use.
>
> I agree that this is extremely confusing and we shouldn't overload the
> ApiVersions RPC to return this information. The KIP doesn't mention
> how it is going to use this API. Do you need to update the Admin
> client to include this information?
>
> Having said this, as you mentioned in the KIP the kafka-storage tool
> needs this information and that tool cannot assume that there is a
> running server it can query (send an RPC). Can the kafka-features use
> the same mechanism used by kafka-storage without calling into a
> broker?
>
> re: "It will work just like the storage tool and upgrade all the
> features to a version"
>
> Does this mean that --release-version cannot be used with
> "kafka-features downgrade"?
>
> re: Consistency with KIP-853
>
> Jun and I have been having a similar conversation in the discussion
> thread for KIP-853. From what I can tell both changes are compatible.
> Do you mind taking a look at these two sections and confirming that
> they don't contradict your KIP?
> 1.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-storage
> 2.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-features
>
> re: nit "For MVs that existed before these features, we map the new
> features to version 0 (no feature enabled)."
>
> To me version 0 doesn't necessarily mean that the feature is not
> enabled. For example, for kraft.version, version 0 means the protocol
> prior to KIP-853. Version 0 is the currently implemented version of
> the KRaft protocol.
>
> Thanks,
> --
> -José
>


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread José Armando García Sancio
Hi Justine,

Thanks for the update. See my comments below.

On Mon, Mar 25, 2024 at 2:51 PM Justine Olshan
 wrote:
> I've updated the KIP with the changes I mentioned earlier. I have not yet
> removed the --feature-version flag from the upgrade tool.

What's the "--feature-version" flag? This is the first time I see it
mentioned and I don't see it in the KIP. Did you mean
"--release-version"?

> Please take a look at the API to get the versions for a given
> metadata version. Right now I'm using ApiVersions request and specifying a
> metadata version. The supported versions are then supplied in the
> ApiVersions response.
> There were tradeoffs with this approach. It is a pretty minimal change, but
> reusing the API means that it could be confusing (ie, the ApiKeys will be
> supplied in the response but not needed.) I considered making a whole new
> API, but that didn't seem necessary for this use.

I agree that this is extremely confusing and we shouldn't overload the
ApiVersions RPC to return this information. The KIP doesn't mention
how it is going to use this API. Do you need to update the Admin
client to include this information?

Having said this, as you mentioned in the KIP the kafka-storage tool
needs this information and that tool cannot assume that there is a
running server it can query (send an RPC). Can the kafka-features use
the same mechanism used by kafka-storage without calling into a
broker?

re: "It will work just like the storage tool and upgrade all the
features to a version"

Does this mean that --release-version cannot be used with
"kafka-features downgrade"?

re: Consistency with KIP-853

Jun and I have been having a similar conversation in the discussion
thread for KIP-853. From what I can tell both changes are compatible.
Do you mind taking a look at these two sections and confirming that
they don't contradict your KIP?
1. 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-storage
2. 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-kafka-features

re: nit "For MVs that existed before these features, we map the new
features to version 0 (no feature enabled)."

To me version 0 doesn't necessarily mean that the feature is not
enabled. For example, for kraft.version, version 0 means the protocol
prior to KIP-853. Version 0 is the currently implemented version of
the KRaft protocol.

Thanks,
-- 
-José


Re: [DISCUSS] KIP-1026: Handling producer snapshot when upgrading from < v2.8.0 for Tiered Storage

2024-03-25 Thread Greg Harris
Hi Arpit,

I think creating empty producer snapshots would be
backwards-compatible for the tiered storage plugins, but I'm not aware
of what the other compatibility/design concerns might be. Maybe you or
another reviewer can answer these questions:
1. Does an empty producer snapshot have the same behavior as a
non-existent snapshot when restored?
2. Why were empty producer snapshots not backfilled for older data
when clusters were upgraded from 2.8?
3. Do producer snapshots need to be available contiguously, or can
earlier snapshots be empty while later segments do not exist?

Thanks,
Greg

On Sat, Mar 23, 2024 at 3:24 AM Arpit Goyal  wrote:
>
> Yes Luke,
> I am also in favour of creating a producer snapshot at run time if it is
> found empty, as this would only be required for topics migrated from a
> version < 2.8. This will not break the existing contract with the plugin.
> Yes, metrics do not make sense here as of now.
> Greg, @Kamal Chandraprakash   WDYT ?
> Arpit Goyal
> 8861094754
>
>
> On Sat, Mar 23, 2024 at 3:05 PM Luke Chen  wrote:
>
> > Hi Arpit,
> >
> > I'm in favor of creating an empty producer snapshot since it's only for
> > topics <= v2.8.
> > About the metric, I don't know what we expect users to know.
> > I think we can implement the empty producer snapshot method without
> > the metric,
> > and add it later if users request it.
> > WDYT?
> >
> > Thank you.
> > Luke
> >
> > On Sat, Mar 23, 2024 at 1:24 PM Arpit Goyal 
> > wrote:
> >
> > > Hi Team,
> > > Any further comments or suggestions on the possible approaches discussed
> > > above.
> > >
> > > On Tue, Mar 19, 2024, 09:55 Arpit Goyal 
> > wrote:
> > >
> > > > @Luke Chen @Kamal Chandraprakash 
> > @Greg
> > > > Harris Any suggestion on the above two possible approaches.
> > > > On Sun, Mar 17, 2024, 13:36 Arpit Goyal 
> > > wrote:
> > > >
> > > >>
> > > >>   In summary, there are two possible solutions to handle the above
> > > >> scenario when the producer snapshot file is found to be null:
> > > >>
> > > >> 1. *Generate an empty producer snapshot file at run time when copying
> > > >> the LogSegment*
> > > >>
> > > >>    - This will not require any backward-compatibility dependencies with
> > > >>    the plugin.
> > > >>    - It preserves the contract, i.e. producerSnapshot files remain
> > > >>    mandatory.
> > > >>    - We could have a metric which helps us assess how many times empty
> > > >>    snapshot files have been created.
> > > >>
> > > >> 2. *Make producerSnapshot files optional*
> > > >>
> > > >>    - This would break the contract with the plugin and would require
> > > >>    defining a set of approaches to handle it, which is mentioned earlier
> > > >>    in the thread.
> > > >>    - If we make producerSnapshot optional, we would not be handling the
> > > >>    error which @LukeChen mentioned, where a producerSnapshot is
> > > >>    accidentally deleted for a given segment. But this holds true for
> > > >>    TransactionalIndex.
> > > >>    - The other question is whether we really need to make the field
> > > >>    optional. The only case where this problem can occur is when the
> > > >>    topic was migrated from a version < 2.8.
> > > >>
> > > >>
> > >
> >
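
A rough sketch of option 1 above (a hypothetical helper, not the KIP's actual code; whether a zero-byte file behaves like a valid empty snapshot is exactly question 1 in Greg's email above):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SnapshotBackfillSketch {
    // Before handing a segment to the tiered-storage plugin, make sure a
    // producer snapshot file exists; segments written by pre-2.8 brokers may
    // not have one. Creating an empty file preserves the "snapshot is
    // mandatory" contract without changing the plugin API.
    static Path ensureProducerSnapshot(Path snapshotPath) throws IOException {
        if (Files.notExists(snapshotPath)) {
            Files.createFile(snapshotPath);
        }
        return snapshotPath;
    }

    public static void main(String[] args) throws IOException {
        // The file name here is illustrative of <offset>.snapshot naming.
        Path p = ensureProducerSnapshot(Path.of("00000000000000000000.snapshot"));
        System.out.println(p + " exists: " + Files.exists(p));
    }
}
{code}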


Re: [DISCUSS] KIP-1022 Formatting and Updating Features

2024-03-25 Thread Justine Olshan
Hey all,

I've updated the KIP with the changes I mentioned earlier. I have not yet
removed the --feature-version flag from the upgrade tool.

Please take a look at the API to get the versions for a given
metadata version. Right now I'm using ApiVersions request and specifying a
metadata version. The supported versions are then supplied in the
ApiVersions response.
There were tradeoffs with this approach. It is a pretty minimal change, but
reusing the API means that it could be confusing (ie, the ApiKeys will be
supplied in the response but not needed.) I considered making a whole new
API, but that didn't seem necessary for this use.

Please please try to let me know in the next few days, I hope to start a
vote soon.

Thanks,
Justine

On Mon, Mar 18, 2024 at 9:17 AM Justine Olshan  wrote:

> Hey folks,
>
> I didn't have a strong preference for requesting the versions via the tool
> only or getting it from the server. Colin seemed to suggest it was "for the
> best" to request from the server to make the tool lightweight.
> I guess the argument against that is having to build and support another
> API.
>
> It also seems like there may be some confusion -- partially on my part.
> For the tools, I had a question about the feature upgrade tool. So it seems
> like we already support multiple features via the `--feature` flag, we
> simply rely on the server side to throw errors now?
>
> Justine
>
> On Fri, Mar 15, 2024 at 3:38 PM José Armando García Sancio
>  wrote:
>
>> Hi Justine,
>>
>> Thanks for the update. Some comments below.
>>
>> On Wed, Mar 13, 2024 at 10:53 AM Justine Olshan
>>  wrote:
>> > 4. Include an API to list the features for a given metadata version
>>
>> I am not opposed to designing and implementing this. I am just
>> wondering if this is strictly required?
>>
>> Would having auto-generated documentation address the issue of not
>> knowing which feature versions are associated with a given release
>> version?
>>
>> Does it need to be a Kafka API (RPC)? Or can this be strictly
>> implemented in the tool? The latest tool, which is needed to parse
>> release version to feature version, can print this mapping. Is this
>> what you mean by API?
>>
>> > 5. I'm going back and forth on whether we should support the
>> > --release-version flag still. If we do, we need to include validation
>> so we
>> > only upgrade on upgrade.
>>
>> I am not sure how this is different from supporting multiple --feature
>> flags. The user can run an upgrade command where some of the features
>> specified are greater than what the cluster has finalized and some of
>> the features specified are less than what the cluster has finalized.
>>
>> In other words, the KRaft controller and kafka-storage tool are going
>> to have to implement this validation even if you don't implement
>> --release-version in the tools.
>> Thanks,
>> --
>> -José
>>
>


Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-25 Thread Jun Rao
Hi, Jose,

Thanks for the reply.

54. Yes, we could include SecurityProtocol in DescribeQuorumResponse. Then,
we could include it in the output of kafka-metadata-quorum --describe.

55.1 Could number-of-observers and pending-voter-change be reported by all
brokers and controllers? I thought only the controller leader tracks those.

55.2 So, IgnoredStaticVoters and IsObserver are Yammer metrics and the rest
are KafkaMetrics. It would be useful to document the metric names more clearly.
For Yammer metrics, we need to specify group, type, name and tags. For
KafkaMetrics, we need to specify just name and tags.

57. Could we remove --release-version 3.8 in the upgrade example?

Jun

On Mon, Mar 25, 2024 at 11:54 AM José Armando García Sancio
 wrote:

> Hi Jun,
>
> See my comments below.
>
> On Fri, Mar 22, 2024 at 1:30 PM Jun Rao  wrote:
> > 54. Admin.addMetadataVoter: It seems that Endpoint shouldn't include
> > securityProtocol since it's not in DescribeQuorumResponse.
>
> Yeah. I noticed that when I made the Admin changes. We either use a
> different type in the Admin client or add SecurityProtocol to the
> DescribeQuorumResponse. I was originally leaning towards adding
> SecurityProtocol to the DescribeQuorumResponse.
>
> Does the user want to see the security protocol in the response and
> CLI output? Listener name is not very useful unless the user also has
> access to the controller's configuration.
>
> I can go either way, what do you think?
>
> > 55. Metrics:
> > 55.1 It would be useful to be clear whether they are reported by the
> > controller leader, all controllers or all controllers and brokers.
>
> Done. I also noticed that I was missing one metric in the controller
> process role.
>
> > 55.2 IsObserver, type=KafkaController: Should we use the dash convention
> to
> > be consistent with the rest of the metrics?
>
> I would like to but I had to do this for backward compatibility. The
> existing controller metrics are all scoped under the KafkaController
> type. We had a similar discussion for "KIP-835: Monitor KRaft
> Controller Quorum Health."
>
> > 56. kafka-storage : "If the --release-version flag is not specified, the
> > IBP in the configuration is used."
> >   kafka-storage takes controller.properties as the input parameter and
> IBP
> > is not a controller property, right?
>
> I was documenting the current code. I suspect that the developer that
> implemented kafka-storage wanted it to work with a configuration that
> had an IBP.
>
> > 57. To be consistent with kafka-storage, should we make the
> > --release-version flag in kafka-features optional too? If this is not
> > specified, the default associated with the tool will be used.
>
> Sounds good. I updated that section to define this behavior for both
> the upgrade and downgrade commands.
>
> > 58. Typo: when the voter ID and UUID doesn't match
> >   doesn't => don't
>
> Fixed.
>
> Thanks, I already updated the KIP to match my comments above and
> include your feedback.
> --
> -José
>
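
For readers comparing the two metric families Jun contrasts above, an illustrative sketch (the constructor shapes are from the Yammer 2.x and Kafka clients libraries as I recall them and may differ by version; the "raft-metrics" group name is an assumption, not from the KIP):

{code:java}
import java.util.Map;

public class MetricNamingSketch {
    public static void main(String[] args) {
        // Yammer metrics are identified by group, type, and name (plus tags);
        // IsObserver under type=KafkaController matches the discussion above.
        com.yammer.metrics.core.MetricName yammer =
                new com.yammer.metrics.core.MetricName(
                        "kafka.controller", "KafkaController", "IsObserver");

        // KafkaMetrics need essentially a name plus tags; group and
        // description are free-form strings.
        org.apache.kafka.common.MetricName kafka =
                new org.apache.kafka.common.MetricName(
                        "number-of-observers", "raft-metrics", "", Map.of());

        System.out.println(yammer + " / " + kafka);
    }
}
{code}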


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #119

2024-03-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 447336 lines...]
[2024-03-25T19:35:34.982Z] 
[2024-03-25T19:35:34.982Z] Gradle Test Run :connect:runtime:test > Gradle Test Executor 43 > org.apache.kafka.connect.integration.OffsetsApiIntegrationTest > testGetSourceConnectorOffsets STARTED
[2024-03-25T19:35:38.706Z] 
[2024-03-25T19:35:38.706Z] Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > org.apache.kafka.connect.integration.InternalTopicsIntegrationTest > testStartWhenInternalTopicsCreatedManuallyWithCompactForBrokersDefaultCleanupPolicy PASSED
[2024-03-25T19:35:38.706Z] 
[2024-03-25T19:35:38.706Z] Gradle Test Run :connect:runtime:test > Gradle Test Executor 44 > org.apache.kafka.connect.integration.InternalTopicsIntegrationTest > testFailToCreateInternalTopicsWithMoreReplicasThanBrokers STARTED
[2024-03-25T19:35:59.025Z] 
[2024-03-25T19:35:59.025Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > testMigrateTopicDeletions [8] Type=ZK, MetadataVersion=3.8-IV0, Security=PLAINTEXT SKIPPED
[2024-03-25T19:35:59.025Z] 
[2024-03-25T19:35:59.025Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testPartitionReassignmentInHybridMode(ClusterInstance) > testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, Security=PLAINTEXT STARTED
[2024-03-25T19:36:22.486Z] 
[2024-03-25T19:36:22.486Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testPartitionReassignmentInHybridMode(ClusterInstance) > testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, Security=PLAINTEXT PASSED
[2024-03-25T19:36:22.587Z] 
[2024-03-25T19:36:22.587Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT STARTED
[2024-03-25T19:36:38.027Z] 
[2024-03-25T19:36:38.027Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT PASSED
[2024-03-25T19:36:38.027Z] 
[2024-03-25T19:36:38.027Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testNewAndChangedTopicsInDualWrite(ClusterInstance) > testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-25T19:36:56.168Z] 
[2024-03-25T19:36:56.168Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testNewAndChangedTopicsInDualWrite(ClusterInstance) > testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-25T19:36:56.168Z] 
[2024-03-25T19:36:56.168Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT STARTED
[2024-03-25T19:37:12.112Z] 
[2024-03-25T19:37:12.112Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT PASSED
[2024-03-25T19:37:12.112Z] 
[2024-03-25T19:37:12.112Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-25T19:37:17.549Z] 
[2024-03-25T19:37:17.549Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-25T19:37:17.549Z] 
[2024-03-25T19:37:17.549Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-25T19:37:20.580Z] 
[2024-03-25T19:37:20.580Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-03-25T19:37:20.580Z] 
[2024-03-25T19:37:20.580Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) > testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-03-25T19:37:36.216Z] 
[2024-03-25T19:37:36.216Z] Gradle Test Run :core:test > Gradle Test Executor 78 > ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) > 

Re: [VOTE] KIP-1025: Optionally URL-encode clientID and clientSecret in authorization header

2024-03-25 Thread Nelson B.
Hi Doğuşcan,

Thanks for your vote!

Currently, the usage of TLS depends on the protocol used by the
authorization server, which is configured
through the "sasl.oauthbearer.token.endpoint.url" option. So, if the
URL uses plain http (not https),
then secrets will be transmitted in plaintext. I think it's possible to
enforce https only, but any
production-grade authorization server uses https anyway, and users may
want to test with http in a dev environment.
Thanks,

On Thu, Mar 21, 2024 at 3:56 PM Doğuşcan Namal 
wrote:

> Hi Nelson, thanks for the KIP.
>
> From the RFC:
> ```
> The authorization server MUST require the use of TLS as described in
>Section 1.6 when sending requests using password authentication.
> ```
>
> I believe we already enforce that OAuth can be enabled only on an SSL
> channel, but it would be good to double-check. Sending secrets over
> plaintext is a bad security practice :)
>
> +1 (non-binding) from me.
>
> On Tue, 19 Mar 2024 at 16:00, Nelson B.  wrote:
>
> > Hi all,
> >
> > I would like to start a vote on KIP-1025
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1025%3A+Optionally+URL-encode+clientID+and+clientSecret+in+authorization+header
> > >,
> > which would optionally URL-encode clientID and clientSecret in the
> > authorization header.
> >
> > I feel like all possible issues have been addressed in the discussion
> > thread.
> >
> > Thanks,
> >
>
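
For readers following along, here is a self-contained sketch of what the optional URL-encoding changes when building the Basic Authorization header for the token endpoint (illustrative only, not Kafka's actual OAuth login module; RFC 6749 section 2.3.1 describes the form-encoding of client credentials):

{code:java}
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsSketch {
    // Build "Basic base64(id:secret)", optionally form-encoding the id and
    // secret first, as RFC 6749 requires for client_secret_basic.
    static String authorizationHeader(String clientId, String clientSecret, boolean urlEncode) {
        String id = urlEncode ? URLEncoder.encode(clientId, StandardCharsets.UTF_8) : clientId;
        String secret = urlEncode ? URLEncoder.encode(clientSecret, StandardCharsets.UTF_8) : clientSecret;
        byte[] credentials = (id + ":" + secret).getBytes(StandardCharsets.UTF_8);
        return "Basic " + Base64.getEncoder().encodeToString(credentials);
    }

    public static void main(String[] args) {
        // A secret containing ':' or '%' is where the encoding matters.
        System.out.println(authorizationHeader("my-client", "p%ss:word", true));
        System.out.println(authorizationHeader("my-client", "p%ss:word", false));
    }
}
{code}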


Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-25 Thread José Armando García Sancio
Hi Jun,

See my comments below.

On Fri, Mar 22, 2024 at 1:30 PM Jun Rao  wrote:
> 54. Admin.addMetadataVoter: It seems that Endpoint shouldn't include
> securityProtocol since it's not in DescribeQuorumResponse.

Yeah. I noticed that when I made the Admin changes. We either use a
different type in the Admin client or add SecurityProtocol to the
DescribeQuorumResponse. I was originally leaning towards adding
SecurityProtocol to the DescribeQuorumResponse.

Does the user want to see the security protocol in the response and
CLI output? Listener name is not very useful unless the user also has
access to the controller's configuration.

I can go either way, what do you think?

> 55. Metrics:
> 55.1 It would be useful to be clear whether they are reported by the
> controller leader, all controllers or all controllers and brokers.

Done. I also noticed that I was missing one metric in the controller
process role.

> 55.2 IsObserver, type=KafkaController: Should we use the dash convention to
> be consistent with the rest of the metrics?

I would like to but I had to do this for backward compatibility. The
existing controller metrics are all scoped under the KafkaController
type. We had a similar discussion for "KIP-835: Monitor KRaft
Controller Quorum Health."

> 56. kafka-storage : "If the --release-version flag is not specified, the
> IBP in the configuration is used."
>   kafka-storage takes controller.properties as the input parameter and IBP
> is not a controller property, right?

I was documenting the current code. I suspect that the developer that
implemented kafka-storage wanted it to work with a configuration that
had an IBP.

> 57. To be consistent with kafka-storage, should we make the
> --release-version flag in kafka-features optional too? If this is not
> specified, the default associated with the tool will be used.

Sounds good. I updated that section to define this behavior for both
the upgrade and downgrade commands.

> 58. Typo: when the voter ID and UUID doesn't match
>   doesn't => don't

Fixed.

Thanks, I already updated the KIP to match my comments above and
include your feedback.
-- 
-José


[VOTE] KIP-1024: Make the restore behavior of GlobalKTables with custom processors configurable

2024-03-25 Thread Walker Carlson
Hello everybody,

I think we have had some pretty good discussion on this KIP, and it seems
that we are close to, if not yet settled on, the final version.

So I would like to open up the voting for KIP-1024:
https://cwiki.apache.org/confluence/x/E4t3EQ

Thanks everyone!
Walker


Re: [DISCUSS] KIP-1024: Make the restore behavior of GlobalKTables with custom processors configurable

2024-03-25 Thread Walker Carlson
Hey Bruno,

1) I'm actually not sure why that is in there. It certainly doesn't match
the convention. Best to remove it and match the other methods.

2) Yeah, I thought about it but I'm not convinced it is a necessary
restriction. It might be useful for the already defined processors but then
they might as well use the `globalTable` method. I think the add state
store option should go for maximum flexibility.

Best,
Walker



On Fri, Mar 22, 2024 at 10:01 AM Bruno Cadonna  wrote:

> Hi Walker,
>
> A couple of follow-up questions.
>
> 1.
> Why do you propose to explicitly pass a parameter "storeName" in
> StreamsBuilder#addGlobalStore?
> The StoreBuilder should already provide a name for the store, if I
> understand the code correctly.
> I would avoid using the same name for the source node and the state
> store, because it limits the flexibility in naming. Why do you not use
> Named for the name of the source node?
>
> 2.
> Did you consider Matthias' proposal to restrict the type of the store
> builder to `StoreBuilder` (or even
> `StoreBuilder`) for the case where
> the processor is built-in?
>
>
> Best,
> Bruno
>
> On 3/13/24 11:05 PM, Walker Carlson wrote:
> > Thanks for the feedback Bruno, Matthias, and Lucas!
> >
> > There is a decent amount but I'm going to try and just hit the major
> points
> > as I would like to keep this change simple.
> >
> > I've made corrections for the mistakes pointed out. Thanks for the
> > suggestions everyone.
> >
> > The main sticking point seems to be with the method of signalling the
> > restore behavior. It seems we can all agree with how the API should look
> > with the default option we are adding. I think keeping the option to load
> > directly from the topic into the store is a good idea. It is much more
> > performant and could make a simple metric collector processor much
> simpler.
> >
> > I think something that Matthias said about creating a special class of
> > processors for the global stores helps me think about the issue. I tend
> to
> > fall into the category that we should keep global stores open to the
> > possibility of having child nodes in the future. I don't really see the
> > downside of having that as an option. It might not be best for a lot of
> > cases, but something simple could be very useful to put in the PAPI.
> >
> > I like the idea of having a `GlobalStoreParameters` but only if we decide
> > to make the processor need to extend an interface like
> > `GlobalStoreProcessor`. If not, that seems excessive.
> >
> > As of right now I don't see a better option than having a boolean flag
> for
> > the reprocessOnRestore option. I expanded the description in the docs so
> I
> > hope that helps.
> >
> > I am more than willing to take other ideas on it.
> >
> > thanks,
> > Walker
> >
>
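
A rough sketch of the API shape under discussion (the trailing boolean stands
in for the proposed reprocessOnRestore option; all names and signatures here
are illustrative, not final):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class GlobalStoreRestoreSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        StoreBuilder<KeyValueStore<String, String>> store =
            Stores.keyValueStoreBuilder(
                Stores.inMemoryKeyValueStore("global-store"),
                Serdes.String(), Serdes.String());

        // Hypothetical trailing boolean: the proposed reprocessOnRestore flag.
        // true  = run records through the custom processor on restore,
        // false = load the topic directly into the store (faster).
        builder.addGlobalStore(
            store,
            "global-topic",
            Consumed.with(Serdes.String(), Serdes.String()),
            () -> new Processor<String, String, Void, Void>() {
                @Override
                public void process(Record<String, String> record) {
                    // custom transformation before updating the store
                }
            },
            false);
    }
}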


[jira] [Resolved] (KAFKA-15950) Serialize broker heartbeat requests

2024-03-25 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-15950.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

Merged the PR to trunk.

> Serialize broker heartbeat requests
> ---
>
> Key: KAFKA-15950
> URL: https://issues.apache.org/jira/browse/KAFKA-15950
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.7.0
>Reporter: Jun Rao
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.8.0
>
>
> This is a follow-up issue from the discussion in 
> [https://github.com/apache/kafka/pull/14836#discussion_r1409739363].
> {{KafkaEventQueue}} does de-duping and only allows one outstanding 
> {{CommunicationEvent}} in the queue. But it seems that duplicated 
> {{HeartbeatRequest}}s could still be generated. {{CommunicationEvent}} 
> calls {{sendBrokerHeartbeat}}, which calls the following.
> {code:java}
> _channelManager.sendRequest(new BrokerHeartbeatRequest.Builder(data), 
> handler){code}
> The problem is that we have another queue in 
> {{NodeToControllerChannelManagerImpl}} that doesn't do the de-duping. Once a 
> {{CommunicationEvent}} is dequeued from {{KafkaEventQueue}}, a 
> {{HeartbeatRequest}} will be queued in 
> {{NodeToControllerChannelManagerImpl}}. At this point, another 
> {{CommunicationEvent}} could be enqueued in {{KafkaEventQueue}}. When 
> it's processed, another {{HeartbeatRequest}} will be queued in 
> {{NodeToControllerChannelManagerImpl}}.
> This probably won't introduce long-lasting duplicated {{HeartbeatRequest}}s 
> in practice, since a {{CommunicationEvent}} typically sits in 
> {{KafkaEventQueue}} for the heartbeat interval. By that time, other pending 
> {{HeartbeatRequest}}s will be processed and de-duped when enqueuing to 
> {{KafkaEventQueue}}. However, duplicated requests could make it hard to 
> reason about tests.
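
A minimal sketch (hypothetical names, not the actual Kafka classes) of one way 
to serialize heartbeats: only send a new request once the response to the 
previous one has been handled.
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

public final class SerializedHeartbeatSender {
    private final AtomicBoolean inFlight = new AtomicBoolean(false);

    // Called when a CommunicationEvent is dequeued from the event queue.
    public void maybeSendHeartbeat(Runnable sendRequest) {
        // Skip sending if a previous request is still outstanding in the
        // channel manager's queue.
        if (inFlight.compareAndSet(false, true)) {
            sendRequest.run();
        }
    }

    // Called from the response handler; the next heartbeat may now be sent.
    public void onHeartbeatResponse() {
        inFlight.set(false);
    }
}
{code}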



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16422) Flaky test org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest."testFailingOverIncrementsNewActiveControllerCount(boolean).true"

2024-03-25 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16422:
---

 Summary: Flaky test 
org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest."testFailingOverIncrementsNewActiveControllerCount(boolean).true"
 Key: KAFKA-16422
 URL: https://issues.apache.org/jira/browse/KAFKA-16422
 Project: Kafka
  Issue Type: Bug
Reporter: Igor Soarez


{code:java}
[2024-03-22T10:39:59.911Z] Gradle Test Run :metadata:test > Gradle Test 
Executor 92 > QuorumControllerMetricsIntegrationTest > 
testFailingOverIncrementsNewActiveControllerCount(boolean) > 
"testFailingOverIncrementsNewActiveControllerCount(boolean).true" FAILED
[2024-03-22T10:39:59.912Z]     org.opentest4j.AssertionFailedError: expected: 
<1> but was: <2>
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
[2024-03-22T10:39:59.912Z]         at 
app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:632)
[2024-03-22T10:39:59.912Z]         at 
app//org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest.lambda$testFailingOverIncrementsNewActiveControllerCount$1(QuorumControllerMetricsIntegrationTest.java:107)
[2024-03-22T10:39:59.912Z]         at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
[2024-03-22T10:39:59.912Z]         at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:412)
[2024-03-22T10:39:59.912Z]         at 
app//org.apache.kafka.controller.QuorumControllerMetricsIntegrationTest.testFailingOverIncrementsNewActiveControllerCount(QuorumControllerMetricsIntegrationTest.java:105)
 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14246) Update threading model for Consumer

2024-03-25 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True resolved KAFKA-14246.
---
Resolution: Fixed

> Update threading model for Consumer
> ---
>
> Key: KAFKA-14246
> URL: https://issues.apache.org/jira/browse/KAFKA-14246
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.8.0
>
>
> Hi community,
>  
> We are refactoring the current KafkaConsumer and making it more asynchronous. 
>  This is the master Jira to track the project's progress; subtasks will be 
> linked to this ticket.  Please review the design document and feel free to 
> use this thread for discussion. 
>  
> The design document is here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/Proposal%3A+Consumer+Threading+Model+Refactor]
>  
> The original email thread is here: 
> [https://lists.apache.org/thread/13jvwzkzmb8c6t7drs4oj2kgkjzcn52l]
>  
> I will continue to update the one-pager as reviews and comments come in.
>  
> Thanks, 
> P



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16421) Refactor CommandDefaultOptions subclasses to throw exceptions instead of calling exit.

2024-03-25 Thread Greg Harris (Jira)
Greg Harris created KAFKA-16421:
---

 Summary: Refactor CommandDefaultOptions subclasses to throw 
exceptions instead of calling exit.
 Key: KAFKA-16421
 URL: https://issues.apache.org/jira/browse/KAFKA-16421
 Project: Kafka
  Issue Type: Wish
  Components: tools
Reporter: Greg Harris


Many command-line utilities use the "mainNoExit()" idiom to provide a testable 
entrypoint to the command-line utility that doesn't include calling 
System.exit. This allows tests to safely exercise the command-line utility 
end-to-end, without risk that the JVM will stop.

Often, command implementations themselves adhere to this idiom, and don't call 
Exit. However, this is compromised by the CommandLineUtils functions, which 
call Exit.exit when an error is encountered while parsing the command-line 
arguments. 

These utilities are pervasively used in subclasses of CommandDefaultOptions, 
across hundreds of call-sites. We should figure out a way to replace this exit 
behavior with exceptions that are eventually propagated from the *Options 
constructors. This will allow the command-line implementations to handle these 
errors, and return the appropriate exit code from mainNoExit.
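
A minimal sketch of the mainNoExit idiom described above (names are
illustrative, not an actual Kafka tool):
{code:java}
public final class SomeToolSketch {

    public static void main(String[] args) {
        System.exit(mainNoExit(args));
    }

    // Testable entry point: returns an exit code instead of terminating the
    // JVM, so tests can exercise the tool end-to-end safely.
    static int mainNoExit(String[] args) {
        try {
            run(args);  // option parsing and command logic would go here
            return 0;
        } catch (Exception e) {
            System.err.println(e.getMessage());
            return 1;
        }
    }

    private static void run(String[] args) throws Exception {
        // command implementation
    }
}
{code}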



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16420) Replace utils.Exit with a thread-safe alternative

2024-03-25 Thread Greg Harris (Jira)
Greg Harris created KAFKA-16420:
---

 Summary: Replace utils.Exit with a thread-safe alternative
 Key: KAFKA-16420
 URL: https://issues.apache.org/jira/browse/KAFKA-16420
 Project: Kafka
  Issue Type: Wish
  Components: connect, core, tools
Reporter: Greg Harris
Assignee: Greg Harris


The Exit class is not thread-safe, and exposes our tests to race conditions and 
inconsistent execution. It is not possible to make it thread-safe due to the 
static design of the API.

We should add an alternative to the Exit class, and migrate the existing usages 
to the replacement, before finally removing the legacy Exit.
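
A minimal sketch of a thread-safe alternative (illustrative, not a committed
API): exit behavior is injected per instance rather than held in static
mutable state.
{code:java}
import java.util.function.IntConsumer;

public final class ExitHandler {
    private final IntConsumer exitProcedure;

    public ExitHandler(IntConsumer exitProcedure) {
        this.exitProcedure = exitProcedure;
    }

    // Production code exits the JVM for real.
    public static ExitHandler system() {
        return new ExitHandler(System::exit);
    }

    public void exit(int statusCode) {
        exitProcedure.accept(statusCode);
    }
}
{code}
Tests construct their own instance with a recording IntConsumer, so there is
no global handler to swap and race on.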



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #164

2024-03-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 373413 lines...]
[2024-03-25T15:05:09.623Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testEmptyWrite() PASSED
[2024-03-25T15:05:09.623Z] 
[2024-03-25T15:05:09.623Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testReadMigrateAndWriteProducerId() STARTED
[2024-03-25T15:05:09.824Z] 
[2024-03-25T15:05:09.824Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testReadMigrateAndWriteProducerId() PASSED
[2024-03-25T15:05:09.824Z] 
[2024-03-25T15:05:09.824Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testExistingKRaftControllerClaim() STARTED
[2024-03-25T15:05:09.925Z] 
[2024-03-25T15:05:09.925Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testExistingKRaftControllerClaim() PASSED
[2024-03-25T15:05:09.925Z] 
[2024-03-25T15:05:09.925Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testMigrateTopicConfigs() STARTED
[2024-03-25T15:05:10.126Z] 
[2024-03-25T15:05:10.126Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testMigrateTopicConfigs() PASSED
[2024-03-25T15:05:10.126Z] 
[2024-03-25T15:05:10.126Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testNonIncreasingKRaftEpoch() STARTED
[2024-03-25T15:05:10.327Z] 
[2024-03-25T15:05:10.327Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testNonIncreasingKRaftEpoch() PASSED
[2024-03-25T15:05:10.327Z] 
[2024-03-25T15:05:10.327Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testMigrateEmptyZk() STARTED
[2024-03-25T15:05:10.427Z] 
[2024-03-25T15:05:10.427Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testMigrateEmptyZk() PASSED
[2024-03-25T15:05:10.427Z] 
[2024-03-25T15:05:10.427Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testTopicAndBrokerConfigsMigrationWithSnapshots() 
STARTED
[2024-03-25T15:05:10.829Z] 
[2024-03-25T15:05:10.829Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testTopicAndBrokerConfigsMigrationWithSnapshots() 
PASSED
[2024-03-25T15:05:10.829Z] 
[2024-03-25T15:05:10.829Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() STARTED
[2024-03-25T15:05:10.929Z] 
[2024-03-25T15:05:10.929Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() PASSED
[2024-03-25T15:05:10.929Z] 
[2024-03-25T15:05:10.929Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testClaimAbsentController() STARTED
[2024-03-25T15:05:11.130Z] 
[2024-03-25T15:05:11.130Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testClaimAbsentController() PASSED
[2024-03-25T15:05:11.130Z] 
[2024-03-25T15:05:11.130Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testIdempotentCreateTopics() STARTED
[2024-03-25T15:05:11.231Z] 
[2024-03-25T15:05:11.231Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testIdempotentCreateTopics() PASSED
[2024-03-25T15:05:11.231Z] 
[2024-03-25T15:05:11.231Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testCreateNewTopic() STARTED
[2024-03-25T15:05:11.432Z] 
[2024-03-25T15:05:11.432Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testCreateNewTopic() PASSED
[2024-03-25T15:05:11.432Z] 
[2024-03-25T15:05:11.432Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
STARTED
[2024-03-25T15:05:11.633Z] 
[2024-03-25T15:05:11.633Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED
[2024-03-25T15:05:11.633Z] 
[2024-03-25T15:05:11.633Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED
[2024-03-25T15:05:11.834Z] 
[2024-03-25T15:05:11.834Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED
[2024-03-25T15:05:11.834Z] 
[2024-03-25T15:05:11.834Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED
[2024-03-25T15:05:12.035Z] 
[2024-03-25T15:05:12.035Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED
[2024-03-25T15:05:12.035Z] 
[2024-03-25T15:05:12.035Z] Gradle Test Run :core:test > Gradle Test Executor 92 
> 

[jira] [Created] (KAFKA-16419) Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simplify the process

2024-03-25 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16419:
--

 Summary: Abstract validateMessagesAndAssignOffsetsCompressed of 
LogValidator to simplify the process
 Key: KAFKA-16419
 URL: https://issues.apache.org/jira/browse/KAFKA-16419
 Project: Kafka
  Issue Type: Improvement
Reporter: Johnny Hsu
Assignee: Johnny Hsu


Currently, LogValidator.validateMessagesAndAssignOffsetsCompressed contains 
lots of if-else checks based on the `magic` value and `CompressionType`, which 
makes the code complicated and increases the difficulty of maintenance.

The validation flow can be separated into five steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place offset assignment
 # Batch-level validation
 ## based on the batch origin (client, controller, etc.) and magic version
 # Record-level validation
 ## based on whether we can do in-place assignment, choose a different iterator
 ## based on the magic value and compression type, do different validation
 # Return the validated results
 ## based on whether we can do in-place assignment, build new records or assign 
offsets in place

This whole flow can be extracted into an interface, and 
LogValidator.validateMessagesAndAssignOffsetsCompressed can instantiate an 
implementation based on the passed-in records. A rough sketch of such an 
interface follows the field list below.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type
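
A rough sketch of such an interface (names are illustrative, not the actual 
Kafka API):
{code:java}
public interface CompressedRecordsValidator {

    // Step 1: IBP validation - is the compression type valid for this IBP?
    void validateIbpCompatibility();

    // Step 2: from the magic value and compression type, decide whether
    // offsets can be assigned in place.
    boolean canAssignOffsetsInPlace();

    // Step 3: batch-level validation based on batch origin and magic version.
    void validateBatches();

    // Step 4: record-level validation, choosing the iterator according to
    // whether in-place assignment is possible.
    void validateRecords();

    // Step 5: build new records or assign offsets in place, and return the
    // validated result.
    ValidationResult buildResult();

    // Marker for whatever result type the implementation produces.
    interface ValidationResult { }
}
{code}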



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2748

2024-03-25 Thread Apache Jenkins Server
See 




Re: [VOTE] 3.6.2 RC1

2024-03-25 Thread Manikumar
Hi,

Thanks for letting me know. Pls let me know after merging the PR. I will
generate RC2.


Thanks

On Sat, Mar 23, 2024 at 1:58 AM Colin McCabe  wrote:

> Sorry but I have to vote -1
>
> I tried verifying that the migration quotas bug described in
> https://issues.apache.org/jira/browse/KAFKA-16222 was fixed, and it
> appears to still be an issue with 3.6.2 RC1. The quota on the default
> resource is still getting translated improperly.
>
> I am looking into what the issue is here.
>
> best,
> Colin
>
>
> On Thu, Mar 21, 2024, at 19:32, Chia-Ping Tsai wrote:
> > hi Manikumar
> >
> >> Pls let me know after merging the PR. I will generate RC2 later today.
> >
> > Sure. We will complete it ASAP
> >
> >
> >> Manikumar  於 2024年3月22日 上午9:26 寫道:
> >>
> >> Hi,
> >>
> >> Thanks. Since we have merged KAFKA-16342
> >> , it's probably
> better
> >> to take the PR for KAFKA-16341
> >>  for consistency.
> >> I am canceling the RC1 in favour of including KAFKA-16341
> >> .
> >>
> >> Chia-Ping,
> >>
> >> Pls let me know after merging the PR. I will generate RC2 later today.
> >>
> >>
> >> Thank you,
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Thu, Mar 21, 2024 at 9:10 PM Chia-Ping Tsai 
> wrote:
> >>
>  Is this a regression from the 3.5.0 release?
> >>>
> >>> I believe the bug has existed for a while, but it was not a true issue
> >>> until we allowed users to fetch the offset of the max timestamp.
> >>>
>  Can we update the "Affects Version/s" field on JIRA?
> >>>
> >>> Done. I attached the tags for the active branches - 3.6.1 and 3.7.0
> >>>
> >>> Manikumar  於 2024年3月21日 週四 下午11:12寫道:
> >>>
>  Hi Chia-Ping,
> 
>  Thanks for letting me know.
> 
>  Is this a regression from the 3.5.0 release?  Can we update the
> "Affects
>  Version/s" field on JIRA?
> 
>  Thanks,
> 
> 
>  On Thu, Mar 21, 2024 at 5:06 PM Chia-Ping Tsai 
> >>> wrote:
> 
> > hi Manikumar,
> >
> > There is a bug fix which needs to be backport to 3.6 (
> > https://issues.apache.org/jira/browse/KAFKA-16341)
> >
> > It fixes the incorrect offset of the max timestamp in the non-compressed
> > path. The other paths are already fixed by
> > https://issues.apache.org/jira/browse/KAFKA-16342.
> >
> > Personally, I think we should backport both fixes for all paths, and we
> > can complete the backport today.
> >
> > Sorry for bringing this news to RC1.
> >
> > Best,
> > Chia-Ping
> >
> >
> > Manikumar  於 2024年3月21日 週四 下午6:11寫道:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the first candidate for release of Apache Kafka 3.6.2.
> >>
> >> This is a bugfix release with several fixes, including dependency
> >> version bumps for CVEs.
> >>
> >> Release notes for the 3.6.2 release:
> >>
> >>> https://home.apache.org/~manikumar/kafka-3.6.2-rc1/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Tuesday, March 26th
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> https://home.apache.org/~manikumar/kafka-3.6.2-rc1/
> >>
> >>
> >> * Maven artifacts to be voted upon:
> >>
> >>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >> * Javadoc:
> >> https://home.apache.org/~manikumar/kafka-3.6.2-rc1/javadoc/
> >>
> >> * Tag to be voted upon (off 3.6 branch) is the 3.6.2 tag:
> >> https://github.com/apache/kafka/releases/tag/3.6.2-rc1
> >>
> >> * Documentation:
> >> https://kafka.apache.org/36/documentation.html
> >>
> >> * Protocol:
> >> https://kafka.apache.org/36/protocol.html
> >>
> >> * Successful Jenkins builds for the 3.6 branch:
> >> Unit/integration tests:
> >>
> >>
> >
> 
> >>>
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.6/159/tests
> >> (with few flaky failures)
> >> System tests: I will update system test results
> >>
> >> Thanks,
> >> Manikumar
> >>
> >
> 
> >>>
>


[jira] [Created] (KAFKA-16418) Split long-running admin client integration tests

2024-03-25 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16418:
--

 Summary: Split long-running admin client integration tests
 Key: KAFKA-16418
 URL: https://issues.apache.org/jira/browse/KAFKA-16418
 Project: Kafka
  Issue Type: Task
  Components: clients
Reporter: Lianet Magrans
Assignee: Lianet Magrans


Review PlaintextAdminIntegrationTest and attempt to split it to allow for 
parallelization and improve build times. This test is the longest-running 
integration test in kafka.api, so a similar approach to what has been done 
with the consumer tests in PlaintextConsumerTest should be a good improvement.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: KRaft Observer Nodes

2024-03-25 Thread Paolo Patierno
Hi Sanaa,
yes, it's enough NOT to have the "controller" role for a node to be just an
"observer". As soon as your node has the "controller" role configured (in
addition to "broker", or alone), it acts as a "voter" and is then the leader
or a "follower" based on the election result.

Thanks
Paolo
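
For example, a minimal sketch of the difference (other required settings,
such as listeners, are omitted):

  # observer: broker-only node, fetches metadata but cannot vote
  process.roles=broker
  node.id=1
  controller.quorum.voters=100@ctrl1:9093,101@ctrl2:9093,102@ctrl3:9093

  # voter: takes part in leader election
  process.roles=controller
  node.id=100
  controller.quorum.voters=100@ctrl1:9093,101@ctrl2:9093,102@ctrl3:9093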

On Mon, 25 Mar 2024, 14:25 Sanaa Syed, 
wrote:

> Hi Paolo,
>
> If I'm understanding correctly, a kafka broker (aside from the controller
> quorum) can be the observer? Do we need to set the process role to anything
> in this case to indicate that we want a broker to be an observer?
>
> On Sun, Mar 24, 2024 at 3:01 PM Paolo Patierno 
> wrote:
>
> > Hi Sanaa,
> > in KRaft mode there is the role of "observer" which is typically taken by
> > brokers as part of the data plane. The broker/observer can discover which
> > node is the leader/active controller and fetches the metadata from it (as
> > the "followers" in the KRaft quorum), but cannot vote and cannot become
> > leader.
> > AFAIK it can also change its role at some point, if needed, to become a
> > controller or otherwise take part in the quorum, since it already has the
> > metadata topic replicated.
> >
> > Thanks
> > Paolo
> >
> > On Sun, 24 Mar 2024, 19:05 Sanaa Syed, 
> > wrote:
> >
> > > Hello,
> > >
> > > Currently in Zookeeper, there is the concept of an observer/dummy node
> > that
> > > can be used when scaling up a cluster (
> > > https://zookeeper.apache.org/doc/r3.4.13/zookeeperObservers.html).
> > >
> > > Is there an equivalent of an observer node in KRaft mode? Asking
> because
> > we
> > > use the observer node for our stretched Kafka Clusters (6 zookeeper
> nodes
> > > across 3 clusters) and would like to find a direct replacement or come
> up
> > > with an alternative.
> > >
> > > Thank you,
> > > Sanaa
> > >
> >
>
>
> --
> Sanaa Syed
>


[jira] [Resolved] (KAFKA-16375) Fix logic for discarding reconciliation if member rejoined

2024-03-25 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans resolved KAFKA-16375.

Resolution: Fixed

> Fix logic for discarding reconciliation if member rejoined
> --
>
> Key: KAFKA-16375
> URL: https://issues.apache.org/jira/browse/KAFKA-16375
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Critical
>  Labels: client-transitions-issues, kip-848-client-support
> Fix For: 3.8.0
>
>
> The current implementation of the new consumer discards the result of a 
> reconciliation if the member rejoined, based on a comparison of the member 
> epoch at the start and end of the reconciliation. If the epochs changed, the 
> reconciliation is discarded. This is not right because the member epoch could 
> be incremented without an assignment change. This should be fixed to ensure 
> that the reconciliation is discarded if the member rejoined, probably based 
> on a flag that truly reflects that it went through a transition to joining 
> while reconciling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: KRaft Observer Nodes

2024-03-25 Thread Sanaa Syed
Hi Paolo,

If I'm understanding correctly, a kafka broker (aside from the controller
quorum) can be the observer? Do we need to set the process role to anything
in this case to indicate that we want a broker to be an observer?

On Sun, Mar 24, 2024 at 3:01 PM Paolo Patierno 
wrote:

> Hi Sanaa,
> in KRaft mode there is the role of "observer" which is typically taken by
> brokers as part of the data plane. The broker/observer can discover which
> node is the leader/active controller and fetches the metadata from it (as
> the "followers" in the KRaft quorum), but cannot vote and cannot become
> leader.
> AFAIK it can also change its role at some point, if needed, to become a
> controller or otherwise take part in the quorum, since it already has the
> metadata topic replicated.
>
> Thanks
> Paolo
>
> On Sun, 24 Mar 2024, 19:05 Sanaa Syed, 
> wrote:
>
> > Hello,
> >
> > Currently in Zookeeper, there is the concept of an observer/dummy node
> that
> > can be used when scaling up a cluster (
> > https://zookeeper.apache.org/doc/r3.4.13/zookeeperObservers.html).
> >
> > Is there an equivalent of an observer node in KRaft mode? Asking because
> we
> > use the observer node for our stretched Kafka Clusters (6 zookeeper nodes
> > across 3 clusters) and would like to find a direct replacement or come up
> > with an alternative.
> >
> > Thank you,
> > Sanaa
> >
>


-- 
Sanaa Syed


[Confluence] Request for an account

2024-03-25 Thread ChengHan Hsu
Hi all,

I have sent an email to infrastruct...@apache.org to register a Confluence
account. I am contributing to Kafka and would like to update some wiki
pages. May I know if anyone can help with this?

Thanks in advance!

Best,
Johnny


[jira] [Created] (KAFKA-16417) When initializeResources throws an exception in TopicBasedRemoteLogMetadataManager.scala, initializationFailed needs to be set to true

2024-03-25 Thread zhaobo (Jira)
zhaobo created KAFKA-16417:
--

 Summary: When initializeResources throws an exception in 
TopicBasedRemoteLogMetadataManager.scala, initializationFailed needs to be set 
to true
 Key: KAFKA-16417
 URL: https://issues.apache.org/jira/browse/KAFKA-16417
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: zhaobo
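
A minimal sketch of the fix the summary describes (the method and field names 
mirror the real class, but the body here is illustrative only):
{code:java}
public class RemoteLogMetadataManagerSketch {
    private volatile boolean initializationFailed = false;

    public void initializeResources() {
        try {
            setUpMetadataTopic();  // may throw on topic/broker errors
        } catch (Exception e) {
            // Without this flag, callers would keep waiting for an
            // initialization that can never complete.
            initializationFailed = true;
        }
    }

    public boolean isInitializationFailed() {
        return initializationFailed;
    }

    private void setUpMetadataTopic() throws Exception {
        // topic creation / consumer start-up would go here
    }
}
{code}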






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Request to join

2024-03-25 Thread Luke Chen
Hi,

Please send an email to dev-subscr...@kafka.apache.org to subscribe to the
group.

Thanks.
Luke

On Mon, Mar 25, 2024 at 4:19 PM durairaj t  wrote:

>
>


Re: [PR] Update Bruno's picture [kafka-site]

2024-03-25 Thread via GitHub


C0urante commented on PR #592:
URL: https://github.com/apache/kafka-site/pull/592#issuecomment-2017751127

   I was wondering who that bearded gentleman was at Kafka Summit!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] KIP-932: Queues for Kafka

2024-03-25 Thread Andrew Schofield
Hi Justine,
Thanks for your questions.

There are several limits in this KIP. With consumer groups, we see problems
where there are huge numbers of consumer groups, and we also see problems
when there is a huge number of members in a consumer group.

There's a limit on the number of members in a share group. When the limit is
reached, additional members are not admitted to the group. The members
heartbeat to remain in the group, and that enables timely expiration.

There's also a limit on the number of share groups in a cluster. Initially,
this limit is set low. As a result, it would be possible to create enough
groups to reach the limit, and then creating additional groups will fail. It
will be possible to delete a share group administratively, but share groups
do not expire automatically (just as topics do not expire and queues in
message-queuing systems do not expire).

The `kafka-console-share-consumer.sh` tool in the KIP defaults the group name
to "share". This has two benefits. First, it means that trivial exploratory
use, running multiple concurrent copies, will naturally share the records
consumed. Second, it means that only one share group is created, rather than
another group ID being generated for each execution.
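
For example (a sketch; the tool and flags are as proposed in the KIP,
mirroring kafka-console-consumer.sh):

  bin/kafka-console-share-consumer.sh --bootstrap-server localhost:9092 \
    --topic orders

Two copies of this command running side by side will share the records of
the topic between them, because both use the default group "share".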

Please do re-read the read-committed section. I'll be grateful for all the
thoughtful reviews that the community is able to provide. The KIP says that
broker-side filtering removes the records for aborted transactions. This is
obviously quite a difference compared with consumers in consumer groups. I
think it would also be possible to do it client-side, but the records fetched
from the replica manager are distributed among the consumers, and I'm
concerned that it would be difficult to distribute the list of aborted
transactions relevant to the records each consumer receives. I'm considering
prototyping client-side filtering to see how well it works in practice.

I am definitely thoughtful about the inter-broker hops needed to persist the
share-group state. Originally, I did look at writing the state directly into
the user's topic-partitions, because that would mean the share-partition
leader could write directly. This has downsides, as documented in the
"Rejected Alternatives" section of the KIP.

We do have opportunities for pipelining and batching which I expect we will 
exploit
in order to improve the performance.

This KIP is only the beginning. I expect a future KIP will address storage of 
metadata
in a more performant way.

Thanks,
Andrew

> On 21 Mar 2024, at 15:40, Justine Olshan  wrote:
> 
> Thanks Andrew,
> 
> That answers some of the questions I have.
> 
> With respect to the limits -- how will this be implemented? One issue we
> saw with producers is "short-lived" producers that send one message and
> disconnect.
> Due to how expiration works for producer state, if we have a simple limit
> for producer IDs, all new producers are blocked until the old ones expire.
> Will we block new group members as well if we reach our limit?
> 
> In the consumer case, we have a heartbeat which can be used for expiration
> behavior and avoid the headache we see on the producer side, but I can
> imagine a case where misuse of the groups themselves could occur -- ie
> creating a short lived share group that I believe will take some time to
> expire. Do we have considerations for this case?
> 
> I also plan to re-read the read-committed section and may have further
> questions there.
> 
> You also mentioned in the KIP how there are a few inter-broker hops to the
> share coordinator, etc for a given read operation of a partition. Are we
> concerned about performance here? My work in transactions and trying to
> optimize performance made me realize how expensive these inter-broker hops
> can be.
> 
> Justine
> 
> On Thu, Mar 21, 2024 at 7:37 AM Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
> 
>> Hi Justine,
>> Thanks for your comment. Sorry for the delay responding.
>> 
>> It was not my intent to leave a query unanswered. I have modified the KIP
>> as a result
>> of the discussion and I think perhaps I didn’t neatly close off the email
>> thread.
>> 
>> In summary:
>> 
>> * The share-partition leader does not maintain an explicit cache of
>> records that it has
>> fetched. When it fetches records, it does “shallow” iteration to look at
>> the batch
>> headers only so that it understands at least the base/last offset of the
>> records within.
>> It is left to the consumers to do the “deep” iteration of the records
>> batches they fetch.
>> 
>> * It may sometimes be necessary to re-fetch records for redelivery. This
>> is essentially
>> analogous to two consumer groups independently fetching the same records
>> today.
>> We will be relying on the efficiency of the page cache.
>> 
>> * The zero-copy optimisation is not possible for records fetched for
>> consumers in
>> share groups. The KIP does not affect the use of the zero-copy
>> 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2747

2024-03-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 469275 lines...]
[2024-03-25T09:45:39.013Z] 
[2024-03-25T09:45:39.013Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED
[2024-03-25T09:45:39.317Z] 
[2024-03-25T09:45:39.317Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED
[2024-03-25T09:45:39.317Z] 
[2024-03-25T09:45:39.317Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED
[2024-03-25T09:45:42.194Z] 
[2024-03-25T09:45:42.194Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED
[2024-03-25T09:45:42.194Z] 
[2024-03-25T09:45:42.194Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED
[2024-03-25T09:45:42.495Z] 
[2024-03-25T09:45:42.495Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testSetAclNonExistentZNode() PASSED
[2024-03-25T09:45:42.495Z] 
[2024-03-25T09:45:42.496Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testConnectionLossRequestTermination() STARTED
[2024-03-25T09:45:51.416Z] 
[2024-03-25T09:45:51.416Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testConnectionLossRequestTermination() PASSED
[2024-03-25T09:45:51.416Z] 
[2024-03-25T09:45:51.416Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testExistsNonExistentZNode() STARTED
[2024-03-25T09:45:51.901Z] 
[2024-03-25T09:45:51.901Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testExistsNonExistentZNode() PASSED
[2024-03-25T09:45:51.901Z] 
[2024-03-25T09:45:51.901Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetDataNonExistentZNode() STARTED
[2024-03-25T09:45:52.202Z] 
[2024-03-25T09:45:52.202Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetDataNonExistentZNode() PASSED
[2024-03-25T09:45:52.202Z] 
[2024-03-25T09:45:52.202Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testConnectionTimeout() STARTED
[2024-03-25T09:45:54.131Z] 
[2024-03-25T09:45:54.131Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testConnectionTimeout() PASSED
[2024-03-25T09:45:54.131Z] 
[2024-03-25T09:45:54.131Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testBlockOnRequestCompletionFromStateChangeHandler() 
STARTED
[2024-03-25T09:45:54.764Z] 
[2024-03-25T09:45:54.764Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testBlockOnRequestCompletionFromStateChangeHandler() 
PASSED
[2024-03-25T09:45:54.764Z] 
[2024-03-25T09:45:54.764Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testUnresolvableConnectString() STARTED
[2024-03-25T09:45:55.309Z] 
[2024-03-25T09:45:55.309Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testUnresolvableConnectString() PASSED
[2024-03-25T09:45:55.309Z] 
[2024-03-25T09:45:55.309Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetChildrenNonExistentZNode() STARTED
[2024-03-25T09:45:57.080Z] 
[2024-03-25T09:45:57.080Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetChildrenNonExistentZNode() PASSED
[2024-03-25T09:45:57.080Z] 
[2024-03-25T09:45:57.080Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testPipelinedGetData() STARTED
[2024-03-25T09:45:57.248Z] 
[2024-03-25T09:45:57.248Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testPipelinedGetData() PASSED
[2024-03-25T09:45:57.248Z] 
[2024-03-25T09:45:57.248Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange() STARTED
[2024-03-25T09:45:57.669Z] 
[2024-03-25T09:45:57.669Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange() PASSED
[2024-03-25T09:45:57.669Z] 
[2024-03-25T09:45:57.669Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren() STARTED
[2024-03-25T09:45:58.724Z] 
[2024-03-25T09:45:58.724Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren() PASSED
[2024-03-25T09:45:58.724Z] 
[2024-03-25T09:45:58.724Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZooKeeperClientTest > testSetDataExistingZNode() STARTED
[2024-03-25T09:45:59.152Z] 
[2024-03-25T09:45:59.152Z] Gradle Test Run :core:test > Gradle Test Executor 96 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #117

2024-03-25 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15882) Scheduled nightly github actions workflow for CVE reports on published docker images

2024-03-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15882.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Scheduled nightly github actions workflow for CVE reports on published docker 
> images
> 
>
> Key: KAFKA-15882
> URL: https://issues.apache.org/jira/browse/KAFKA-15882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Vedarth Sharma
>Assignee: Vedarth Sharma
>Priority: Major
> Fix For: 3.8.0
>
>
> This scheduled github actions workflow will check supported published docker 
> images for CVEs and generate reports.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16374) High watermark updates should have a higher priority

2024-03-25 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16374.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> High watermark updates should have a higher priority
> 
>
> Key: KAFKA-16374
> URL: https://issues.apache.org/jira/browse/KAFKA-16374
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Request to join

2024-03-25 Thread durairaj t



Re: [DISCUSS] KIP-950: Tiered Storage Disablement

2024-03-25 Thread Doğuşcan Namal
Hi Christo and Luke,

I think the KRaft section of the KIP requires slight improvement. The
metadata propagation in KRaft is handled by the RAFT layer instead of
sending Controller -> Broker RPCs. In fact, KIP-631 deprecated these RPCs.

I will come up with some recommendations on how we could improve that one
but until then, @Luke please feel free to review the KIP.

@Satish, if we want this to make it to Kafka 3.8 I believe we need to aim
to get the KIP approved in the following weeks otherwise it will slip and
we can not support it in Zookeeper mode.

I would also like to better understand where the community stands on adding a
new feature for ZooKeeper, since it is already marked as deprecated.

Thanks.



On Mon, 18 Mar 2024 at 13:42, Christo Lolov  wrote:

> Heya,
>
> I do have some time to put into this, but to be honest I am still after
> reviews of the KIP itself :)
>
> After the latest changes it ought to be detailing both a Zookeeper approach
> and a KRaft approach.
>
> Do you have any thoughts on how it could be improved or should I start a
> voting thread?
>
> Best,
> Christo
>
> On Thu, 14 Mar 2024 at 06:12, Luke Chen  wrote:
>
> > Hi Christo,
> >
> > Any update with this KIP?
> > If you don't have time to complete it, I can collaborate with you to work
> > on it.
> >
> > Thanks.
> > Luke
> >
> > On Wed, Jan 17, 2024 at 11:38 PM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Hi Christo,
> > > Thanks for volunteering to contribute to the KIP discussion. I suggest
> > > considering this KIP for both ZK and KRaft as it will be helpful for
> > > this feature to be available in 3.8.0 running with ZK clusters.
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Wed, 17 Jan 2024 at 19:04, Christo Lolov 
> > > wrote:
> > > >
> > > > Hello!
> > > >
> > > > I volunteer to get this KIP moving forward and implemented in Apache
> > > Kafka
> > > > 3.8.
> > > >
> > > > I have caught up with Mehari offline and we have agreed that given
> > Apache
> > > > Kafka 4.0 being around the corner we would like to propose this
> feature
> > > > only for KRaft clusters.
> > > >
> > > > Any and all reviews and comments are welcome!
> > > >
> > > > Best,
> > > > Christo
> > > >
> > > > On Tue, 9 Jan 2024 at 09:44, Doğuşcan Namal <
> namal.dogus...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi everyone, any progress on the status of this KIP? Overall looks
> > > good to
> > > > > me but I wonder whether we still need to support it for Zookeeper
> > mode
> > > > > given that it will be deprecated in the next 3 months.
> > > > >
> > > > > On 2023/07/21 20:16:46 "Beyene, Mehari" wrote:
> > > > > > Hi everyone,
> > > > > > I would like to start a discussion on KIP-950: Tiered Storage
> > > Disablement
> > > > > (
> > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-950%3A++Tiered+Storage+Disablement
> > > > > ).
> > > > > >
> > > > > > This KIP proposes adding the ability to disable and re-enable
> > tiered
> > > > > storage on a topic.
> > > > > >
> > > > > > Thanks,
> > > > > > Mehari
> > > > > >
> > > > >
> > >
> >
>


[jira] [Created] (KAFKA-16416) Use NetworkClientTest to replace RequestResponseTest as the example of log4j output

2024-03-25 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16416:
--

 Summary: Use NetworkClientTest to replace RequestResponseTest as 
the example of log4j output
 Key: KAFKA-16416
 URL: https://issues.apache.org/jira/browse/KAFKA-16416
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


https://github.com/apache/kafka/blob/trunk/README.md#running-a-particular-unitintegration-test-with-log4j-output

`RequestResponseTest` does not produce log4j output even when the log level is 
set to info, so it could mislead users. Hence, we should replace 
`RequestResponseTest` in the README with another test class that does produce 
log output, for example `NetworkClientTest`.
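
For example, the README command would become something like this (a sketch,
assuming the same Gradle invocation style the README already uses):
{code:bash}
./gradlew clients:test --tests NetworkClientTest
{code}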



--
This message was sent by Atlassian Jira
(v8.20.10#820010)