[VOTE] KIP-792: Add "generation" field into consumer protocol

2021-11-30 Thread Luke Chen
Hi all,

I'd like to start the vote for KIP-792: Add "generation" field into
consumer protocol.

The goal of this KIP is to give the assignor/consumer coordinator a way to
identify out-of-date members and assignments, to avoid rebalances getting
stuck under the current protocol.

Detailed description can be found here:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=191336614

Any feedback is welcome.

Thank you.
Luke
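
For a sense of how the new field helps, here is a minimal sketch of an
assignor ignoring stale ownership claims once a generation is known per
member. This is illustrative only: the generation lookup is modeled as a
separate map, since where the field actually lives in the protocol is
defined by the KIP.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor.Subscription;
    import org.apache.kafka.common.TopicPartition;

    public class StaleOwnershipFilter {
        // Members on an older generation are treated as owning nothing, so
        // their stale partitions can be reassigned instead of wedging the
        // rebalance.
        static Map<String, List<TopicPartition>> freshOwnership(
                Map<String, Subscription> subscriptions,
                Map<String, Integer> generations) {   // memberId -> generation
            int latest = generations.values().stream()
                    .mapToInt(Integer::intValue).max().orElse(-1);
            Map<String, List<TopicPartition>> owned = new HashMap<>();
            for (Map.Entry<String, Subscription> e : subscriptions.entrySet()) {
                boolean fresh = generations.getOrDefault(e.getKey(), -1) == latest;
                owned.put(e.getKey(),
                        fresh ? e.getValue().ownedPartitions() : Collections.emptyList());
            }
            return owned;
        }
    }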


Jenkins build is back to normal : Kafka » Kafka Branch Builder » 3.0 #159

2021-11-30 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-30 Thread Jun Rao
Hi, David,

Thanks for the reply. No more questions from me.

Jun

On Tue, Nov 30, 2021 at 1:52 PM David Arthur
 wrote:

> Thanks Jun,
>
> 30: I clarified the description of the "upgrade" command to:
>
> The FEATURE and VERSION arguments can be repeated to indicate an upgrade of
> > multiple features in one request. Alternatively, the RELEASE flag can be
> > given to upgrade to the default versions of the specified release. These
> > two options, FEATURE and RELEASE, are mutually exclusive. If neither is
> > given, this command will do nothing.
>
>
> Basically, you must provide either "kafka-features.sh upgrade --release
> 3.2" or something like "kafka-features.sh upgrade --feature X --version 10"
>
> -David
>
> On Tue, Nov 30, 2021 at 2:51 PM Jun Rao  wrote:
>
> > Hi, David,
> >
> > Thanks for the reply. Just one more minor comment.
> >
> > 30. ./kafka-features.sh upgrade: It seems that the release param is
> > optional. Could you describe the semantics when release is not specified?
> >
> > Jun
> >
> > On Mon, Nov 29, 2021 at 5:06 PM David Arthur
> >  wrote:
> >
> > > Jun, I updated the KIP with the "disable" CLI.
> > >
> > > For 16, I think you're asking how we can safely introduce the
> > > initial version of other feature flags in the future. This will
> probably
> > > depend on the nature of the feature that the new flag is gating. We can
> > > probably take a similar approach and say version 1 is backwards
> > compatible
> > > to some point (possibly an IBP or metadata.version?).
> > >
> > > -David
> > >
> > > On Fri, Nov 19, 2021 at 3:00 PM Jun Rao 
> > wrote:
> > >
> > > > Hi, David,
> > > >
> > > > Thanks for the reply. The new CLI sounds reasonable to me.
> > > >
> > > > 16.
> > > > For case C, choosing the latest version sounds good to me.
> > > > For case B, for metadata.version, we pick version 1 since it just
> > happens
> > > > that for metadata.version version 1 is backward compatible. How do we
> > > make
> > > > this more general for other features?
> > > >
> > > > 21. Do you still plan to change "delete" to "disable" in the CLI?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > >
> > > > On Thu, Nov 18, 2021 at 11:50 AM David Arthur
> > > >  wrote:
> > > >
> > > > > Jun,
> > > > >
> > > > > The KIP has some changes to the CLI for KIP-584. With Jason's
> > > suggestion
> > > > > incorporated, these three commands would look like:
> > > > >
> > > > > 1) kafka-features.sh upgrade --release latest
> > > > > upgrades all known features to their defaults in the latest release
> > > > >
> > > > > 2) kafka-features.sh downgrade --release 3.x
> > > > > downgrade all known features to the default versions from 3.x
> > > > >
> > > > > 3) kafka-features.sh describe --release latest
> > > > > Describe the known features of the latest release
> > > > >
> > > > > The --upgrade-all and --downgrade-all are replaced by specifying each
> > > > > feature+version or by giving the --release argument. One problem with
> > > > > --downgrade-all is it's not clear what the target versions should
> be:
> > > the
> > > > > previous version before the last upgrade -- or the lowest supported
> > > > > version. Since downgrades will be less common, I think it's better
> to
> > > > make
> > > > > the operator be more explicit about the desired downgrade version
> > > (either
> > > > > through [--feature X --version Y] or [--release 3.1]). Does that
> seem
> > > > > reasonable?
> > > > >
> > > > > 16. For case C, I think we will want to always use the latest
> > > > > metadata.version for new clusters (like we do for IBP). We can
> always
> > > > > change what we mean by "default" down the road. Also, this will
> > likely
> > > > > become a bit of information we include in release and upgrade notes
> > > with
> > > > > each release.
> > > > >
> > > > > -David
> > > > >
> > > > > On Thu, Nov 18, 2021 at 1:14 PM Jun Rao 
> > > > wrote:
> > > > >
> > > > > > Hi, Jason, David,
> > > > > >
> > > > > > Just to clarify on the interaction with the end user, the design
> in
> > > > > KIP-584
> > > > > > allows one to do the following.
> > > > > > (1) kafka-features.sh  --upgrade-all --bootstrap-server
> > > > > > kafka-broker0.prn1:9071 to upgrade all features to the latest
> > version
> > > > > known
> > > > > > by the tool. The user doesn't need to know a specific feature
> > > version.
> > > > > > (2) kafka-features.sh  --downgrade-all --bootstrap-server
> > > > > > kafka-broker0.prn1:9071 to downgrade all features to the latest
> > > version
> > > > > > known by the tool. The user doesn't need to know a specific
> feature
> > > > > > version.
> > > > > > (3) kafka-features.sh  --describe --bootstrap-server
> > > > > > kafka-broker0.prn1:9071 to find out the supported version for
> each
> > > > > feature.
> > > > > > This allows the user to upgrade each feature individually.
> > > > > >
> > > > > > Most users will be doing (1) and occasionally (2), and won't need
> > to
> > > > know
> > > > > > the exact version of each feature.
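
To restate the mutual-exclusion rule above in code, a minimal hedged
sketch (illustrative only, not the actual kafka-features.sh
implementation):

    import java.util.List;

    public class UpgradeArgCheck {
        // FEATURE/VERSION pairs and RELEASE are mutually exclusive; if
        // neither is given, the command is a no-op.
        static void validate(List<String> features, List<Short> versions, String release) {
            if (!features.isEmpty() && release != null)
                throw new IllegalArgumentException(
                        "--feature/--version and --release are mutually exclusive");
            if (features.size() != versions.size())
                throw new IllegalArgumentException(
                        "each --feature needs a matching --version");
            if (features.isEmpty() && release == null)
                System.out.println("Nothing to do: no --feature or --release given");
        }
    }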

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-30 Thread David Arthur
Thanks Jun,

30: I clarified the description of the "upgrade" command to:

The FEATURE and VERSION arguments can be repeated to indicate an upgrade of
> multiple features in one request. Alternatively, the RELEASE flag can be
> given to upgrade to the default versions of the specified release. These
> two options, FEATURE and RELEASE, are mutually exclusive. If neither is
> given, this command will do nothing.


Basically, you must provide either "kafka-features.sh upgrade --release
3.2" or something like "kafka-features.sh upgrade --feature X --version 10"

-David

On Tue, Nov 30, 2021 at 2:51 PM Jun Rao  wrote:

> Hi, David,
>
> Thanks for the reply. Just one more minor comment.
>
> 30. ./kafka-features.sh upgrade: It seems that the release param is
> optional. Could you describe the semantics when release is not specified?
>
> Jun
>
> On Mon, Nov 29, 2021 at 5:06 PM David Arthur
>  wrote:
>
> > Jun, I updated the KIP with the "disable" CLI.
> >
> > For 16, I think you're asking how we can safely introduce the
> > initial version of other feature flags in the future. This will probably
> > depend on the nature of the feature that the new flag is gating. We can
> > probably take a similar approach and say version 1 is backwards
> compatible
> > to some point (possibly an IBP or metadata.version?).
> >
> > -David
> >
> > On Fri, Nov 19, 2021 at 3:00 PM Jun Rao 
> wrote:
> >
> > > Hi, David,
> > >
> > > Thanks for the reply. The new CLI sounds reasonable to me.
> > >
> > > 16.
> > > For case C, choosing the latest version sounds good to me.
> > > For case B, for metadata.version, we pick version 1 since it just
> happens
> > > that for metadata.version version 1 is backward compatible. How do we
> > make
> > > this more general for other features?
> > >
> > > 21. Do you still plan to change "delete" to "disable" in the CLI?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Thu, Nov 18, 2021 at 11:50 AM David Arthur
> > >  wrote:
> > >
> > > > Jun,
> > > >
> > > > The KIP has some changes to the CLI for KIP-584. With Jason's
> > suggestion
> > > > incorporated, these three commands would look like:
> > > >
> > > > 1) kafka-features.sh upgrade --release latest
> > > > upgrades all known features to their defaults in the latest release
> > > >
> > > > 2) kafka-features.sh downgrade --release 3.x
> > > > downgrade all known features to the default versions from 3.x
> > > >
> > > > 3) kafka-features.sh describe --release latest
> > > > Describe the known features of the latest release
> > > >
> > > > The --upgrade-all and --downgrade-all are replaced by specifying each
> > > > feature+version or by giving the --release argument. One problem with
> > > > --downgrade-all is it's not clear what the target versions should be:
> > the
> > > > previous version before the last upgrade -- or the lowest supported
> > > > version. Since downgrades will be less common, I think it's better to
> > > make
> > > > the operator be more explicit about the desired downgrade version
> > (either
> > > > through [--feature X --version Y] or [--release 3.1]). Does that seem
> > > > reasonable?
> > > >
> > > > 16. For case C, I think we will want to always use the latest
> > > > metadata.version for new clusters (like we do for IBP). We can always
> > > > change what we mean by "default" down the road. Also, this will
> likely
> > > > become a bit of information we include in release and upgrade notes
> > with
> > > > each release.
> > > >
> > > > -David
> > > >
> > > > On Thu, Nov 18, 2021 at 1:14 PM Jun Rao 
> > > wrote:
> > > >
> > > > > Hi, Jason, David,
> > > > >
> > > > > Just to clarify on the interaction with the end user, the design in
> > > > KIP-584
> > > > > allows one to do the following.
> > > > > (1) kafka-features.sh  --upgrade-all --bootstrap-server
> > > > > kafka-broker0.prn1:9071 to upgrade all features to the latest
> version
> > > > known
> > > > > by the tool. The user doesn't need to know a specific feature
> > version.
> > > > > (2) kafka-features.sh  --downgrade-all --bootstrap-server
> > > > > kafka-broker0.prn1:9071 to downgrade all features to the latest
> > version
> > > > > known by the tool. The user doesn't need to know a specific feature
> > > > > version.
> > > > > (3) kafka-features.sh  --describe --bootstrap-server
> > > > > kafka-broker0.prn1:9071 to find out the supported version for each
> > > > feature.
> > > > > This allows the user to upgrade each feature individually.
> > > > >
> > > > > Most users will be doing (1) and occasionally (2), and won't need
> to
> > > know
> > > > > the exact version of each feature.
> > > > >
> > > > > 16. For case C, what's the default version? Is it always the
> latest?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > > On Thu, Nov 18, 2021 at 8:15 AM David Arthur
> > > > >  wrote:
> > > > >
> > > > > > Colin, thanks for the detailed response. I understand what you're
> > > > saying
> > > > > > and I agree with your rationale.
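
One way to picture the --release semantics discussed above is as a lookup
from release name to default feature versions. A hedged sketch, with the
table contents invented for illustration:

    import java.util.Map;

    public class ReleaseDefaults {
        // Hypothetical table: each release pins a default version per feature.
        private static final Map<String, Map<String, Short>> DEFAULTS = Map.of(
                "3.1", Map.of("metadata.version", (short) 1),
                "3.2", Map.of("metadata.version", (short) 2));

        // "kafka-features.sh upgrade --release 3.2" would resolve the release
        // name to a full feature -> version map and apply it in one request.
        static Map<String, Short> forRelease(String release) {
            Map<String, Short> defaults = DEFAULTS.get(release);
            if (defaults == null)
                throw new IllegalArgumentException("unknown release: " + release);
            return defaults;
        }
    }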

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.1 #27

2021-11-30 Thread Apache Jenkins Server
See 




[VOTE] KIP-805: Add range and scan query support in IQ v2

2021-11-30 Thread Vasiliki Papavasileiou
Hello everyone,

I would like to start a vote for KIP-805 that adds range and scan KeyValue
queries in IQ2.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-805%3A+Add+range+and+scan+query+support+in+IQ+v2

Cheers!
Vicky
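
For a sense of the caller-facing API, a sketch of issuing a range query
through the IQv2 entry point. The class and method names follow the KIP
page, but since the KIP is still under vote the final signatures may
differ; "streams" is assumed to be a running KafkaStreams instance.

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.query.RangeQuery;
    import org.apache.kafka.streams.query.StateQueryRequest;
    import org.apache.kafka.streams.query.StateQueryResult;
    import org.apache.kafka.streams.state.KeyValueIterator;

    public class RangeQueryExample {
        // Query all keys in ["a", "f"] from a key-value store named "counts-store".
        static void queryRange(KafkaStreams streams) {
            RangeQuery<String, Long> query = RangeQuery.withRange("a", "f");
            StateQueryRequest<KeyValueIterator<String, Long>> request =
                    StateQueryRequest.inStore("counts-store").withQuery(query);
            StateQueryResult<KeyValueIterator<String, Long>> result = streams.query(request);
            try (KeyValueIterator<String, Long> it = result.getOnlyPartitionResult().getResult()) {
                it.forEachRemaining(kv -> System.out.println(kv.key + " = " + kv.value));
            }
        }
    }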


Re: [DISCUSS] KIP-805: Add range and scan query support in IQ v2

2021-11-30 Thread Vasiliki Papavasileiou
Thank you John! Yes, that was a typo from copying and I fixed it.

Since there have been no more comments, I will start the vote.

Best,
Vicky

On Tue, Nov 30, 2021 at 5:22 AM John Roesler  wrote:

> Thanks for the KIP, Vicky!
>
> This KIP will help fill in the parity gap between IQ and
> IQv2.
>
> One thing I noticed, which looks like just a typo, is that
> the value type of the proposed RangeQuery should probably be
> KeyValueIterator, right?
>
> Otherwise, it looks good to me!
>
> Thanks,
> -John
>
> On Mon, 2021-11-29 at 12:20 +, Vasiliki Papavasileiou
> wrote:
> > Hello everyone,
> >
> > I would like to start the discussion for KIP-805: Add range and scan
> query
> > support in IQ v2
> >
> > The KIP can be found here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-805%3A+Add+range+and+scan+query+support+in+IQ+v2
> >
> > Any suggestions are more than welcome.
> >
> > Many thanks,
> > Vicky
>
>
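
To make the value-type point concrete, the corrected shape from the KIP
page is roughly the following (hedged; the interface may still evolve
while the vote is open):

    import org.apache.kafka.streams.query.Query;
    import org.apache.kafka.streams.state.KeyValueIterator;

    // The query's result type is an iterator over key-value pairs, i.e. the
    // value type John pointed out: Query<KeyValueIterator<K, V>>.
    public final class RangeQuery<K, V> implements Query<KeyValueIterator<K, V>> {
        private final K lower;   // null means unbounded below
        private final K upper;   // null means unbounded above

        private RangeQuery(K lower, K upper) {
            this.lower = lower;
            this.upper = upper;
        }

        public static <K, V> RangeQuery<K, V> withRange(K lower, K upper) {
            return new RangeQuery<>(lower, upper);
        }

        public static <K, V> RangeQuery<K, V> withNoBounds() {
            return new RangeQuery<>(null, null);
        }
    }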


Re: Filtering support on Fetch API

2021-11-30 Thread Talat Uyarer
Hi Eric,

Thanks for your comments. My goal is to apply the filter without any
serialization.

The producer will generate the distinct header values for each record
batch. The broker will build an index over the header values, similar to
the time index. When a consumer applies a filter, the broker will filter
only at the record-batch level. The filter will not guarantee exact
results, but it will reduce cost on the consumer side: the consumer still
needs to do whatever it does today, just on fewer messages.

Do you see any issue? In this model I think the only penalty is creating
an additional index file on the broker and increasing storage size a
little.

Thanks

On Tue, Nov 30, 2021 at 10:21 AM Eric Azama  wrote:

> Something to keep in mind with your proposal is that you're moving the
> Decompression and Filtering costs into the Brokers. It probably also adds a
> new Compression cost if you want the Broker to send compressed data over
> the network. Centralizing that cost on the cluster may not be desirable and
> would likely increase latency across the board.
>
> Additionally, because header values are byte arrays, the Brokers probably
> would not be able to do very sophisticated filtering. Support for basic
> comparisons of the built-in Serdes might be simple enough, but anything
> more complex or involving custom Serdes would probably require a new
> plug-in type on the broker.
>
> On Mon, Nov 29, 2021 at 10:49 AM Talat Uyarer <
> tuya...@paloaltonetworks.com>
> wrote:
>
> > Hi All,
> >
> > I want to get your advice about one subject. I want to create a KIP for
> > message header base filtering on Fetch API.
> >
> > In our current use case we have 1k+ topics and, per topic, 10+ consumers
> > for different use cases. However, all consumers are interested in
> > different sets of messages on the same topic. Currently we read all
> > messages from a given topic and drop logs on the consumer side. To reduce
> > our stream processing cost I want to drop logs on the broker side. So far
> > my understanding is:
> >
> > *Broker sends messages as is (no serialization cost) -> Network transfer
> > -> Consumer deserializes messages (user-side deserialization cost) ->
> > User space drops or uses messages (user-side filtering cost)*
> >
> >
> > If I can drop messages based on their headers without serializing and
> > deserializing them, it will help us save network bandwidth as well as
> > consumer-side CPU cost.
> >
> > My approach is building a header index. Consumer clients will define
> > their filter in the fetch call. If the filter matches, the broker will
> > send the messages. I would like to hear your suggestions about my
> > solution.
> >
> > Thanks
> >
>
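
For contrast, the consumer-side dropping that this proposal pushes down to
the broker looks roughly like this today. The header key, the expected
value, and the process() call are invented for illustration:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.header.Header;

    public class ClientSideFilter {
        // Today every record crosses the network and reaches user space
        // before it can be dropped; the proposal would skip non-matching
        // batches broker-side instead.
        static void pollAndFilter(Consumer<byte[], byte[]> consumer) {
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofMillis(500))) {
                Header h = record.headers().lastHeader("log-type"); // hypothetical header key
                if (h == null || !"firewall".equals(new String(h.value())))
                    continue;                                       // user-space drop
                process(record);
            }
        }

        static void process(ConsumerRecord<byte[], byte[]> record) { /* application logic */ }
    }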


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #158

2021-11-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 281183 lines...]
[2021-11-30T19:59:04.314Z] 1331 tests completed, 1 failed, 7 skipped
[2021-11-30T19:59:04.314Z] There were failing tests. See the report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/core/build/reports/tests/integrationTest/index.html

[jira] [Created] (KAFKA-13492) IQ Parity: queries for key/value store range and scan

2021-11-30 Thread John Roesler (Jira)
John Roesler created KAFKA-13492:


 Summary: IQ Parity: queries for key/value store range and scan
 Key: KAFKA-13492
 URL: https://issues.apache.org/jira/browse/KAFKA-13492
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13491) Implement IQv2 Framework

2021-11-30 Thread John Roesler (Jira)
John Roesler created KAFKA-13491:


 Summary: Implement IQv2 Framework
 Key: KAFKA-13491
 URL: https://issues.apache.org/jira/browse/KAFKA-13491
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler


See https://cwiki.apache.org/confluence/x/85OqCw



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-30 Thread Jun Rao
Hi, David,

Thanks for the reply. Just one more minor comment.

30. ./kafka-features.sh upgrade: It seems that the release param is
optional. Could you describe the semantics when release is not specified?

Jun

On Mon, Nov 29, 2021 at 5:06 PM David Arthur
 wrote:

> Jun, I updated the KIP with the "disable" CLI.
>
> For 16, I think you're asking how we can safely introduce the
> initial version of other feature flags in the future. This will probably
> depend on the nature of the feature that the new flag is gating. We can
> probably take a similar approach and say version 1 is backwards compatible
> to some point (possibly an IBP or metadata.version?).
>
> -David
>
> On Fri, Nov 19, 2021 at 3:00 PM Jun Rao  wrote:
>
> > Hi, David,
> >
> > Thanks for the reply. The new CLI sounds reasonable to me.
> >
> > 16.
> > For case C, choosing the latest version sounds good to me.
> > For case B, for metadata.version, we pick version 1 since it just happens
> > that for metadata.version version 1 is backward compatible. How do we
> make
> > this more general for other features?
> >
> > 21. Do you still plan to change "delete" to "disable" in the CLI?
> >
> > Thanks,
> >
> > Jun
> >
> >
> >
> > On Thu, Nov 18, 2021 at 11:50 AM David Arthur
> >  wrote:
> >
> > > Jun,
> > >
> > > The KIP has some changes to the CLI for KIP-584. With Jason's
> suggestion
> > > incorporated, these three commands would look like:
> > >
> > > 1) kafka-features.sh upgrade --release latest
> > > upgrades all known features to their defaults in the latest release
> > >
> > > 2) kafka-features.sh downgrade --release 3.x
> > > downgrade all known features to the default versions from 3.x
> > >
> > > 3) kafka-features.sh describe --release latest
> > > Describe the known features of the latest release
> > >
> > > The --upgrade-all and --downgrade-all are replaced by specifying each
> > > feature+version or by giving the --release argument. One problem with
> > > --downgrade-all is it's not clear what the target versions should be:
> the
> > > previous version before the last upgrade -- or the lowest supported
> > > version. Since downgrades will be less common, I think it's better to
> > make
> > > the operator be more explicit about the desired downgrade version
> (either
> > > through [--feature X --version Y] or [--release 3.1]). Does that seem
> > > reasonable?
> > >
> > > 16. For case C, I think we will want to always use the latest
> > > metadata.version for new clusters (like we do for IBP). We can always
> > > change what we mean by "default" down the road. Also, this will likely
> > > become a bit of information we include in release and upgrade notes
> with
> > > each release.
> > >
> > > -David
> > >
> > > On Thu, Nov 18, 2021 at 1:14 PM Jun Rao 
> > wrote:
> > >
> > > > Hi, Jason, David,
> > > >
> > > > Just to clarify on the interaction with the end user, the design in
> > > KIP-584
> > > > allows one to do the following.
> > > > (1) kafka-features.sh  --upgrade-all --bootstrap-server
> > > > kafka-broker0.prn1:9071 to upgrade all features to the latest version
> > > known
> > > > by the tool. The user doesn't need to know a specific feature
> version.
> > > > (2) kafka-features.sh  --downgrade-all --bootstrap-server
> > > > kafka-broker0.prn1:9071 to downgrade all features to the latest
> version
> > > > known by the tool. The user doesn't need to know a specific feature
> > > > version.
> > > > (3) kafka-features.sh  --describe --bootstrap-server
> > > > kafka-broker0.prn1:9071 to find out the supported version for each
> > > feature.
> > > > This allows the user to upgrade each feature individually.
> > > >
> > > > Most users will be doing (1) and occasionally (2), and won't need to
> > know
> > > > the exact version of each feature.
> > > >
> > > > 16. For case C, what's the default version? Is it always the latest?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Thu, Nov 18, 2021 at 8:15 AM David Arthur
> > > >  wrote:
> > > >
> > > > > Colin, thanks for the detailed response. I understand what you're
> > > saying
> > > > > and I agree with your rationale.
> > > > >
> > > > > It seems like we could just initialize their metadata.version to 1
> > when
> > > > the
> > > > > > software is upgraded to 3.2.
> > > > > >
> > > > >
> > > > > Concretely, this means the controller would see that there is no
> > > > > FeatureLevelRecord in the log, and so it would bootstrap one.
> > Normally
> > > > for
> > > > > cluster initialization, this would be read from meta.properties,
> but
> > in
> > > > the
> > > > > case of preview clusters we would need to special case it to
> > initialize
> > > > the
> > > > > version to 1.
> > > > >
> > > > > Once the new FeatureLevelRecord has been committed, any nodes
> > (brokers
> > > or
> > > > > controllers) that are running the latest software will react to the
> > new
> > > > > metadata.version. This means we will need to make this initial
> > version
> > > > of 1
> > > > > be 
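
A hedged sketch of the bootstrap decision described above, control flow
only; the constant and the Optional plumbing are invented, and the real
controller logic is more involved:

    import java.util.Optional;

    public class MetadataVersionBootstrap {
        static final short LATEST_VERSION = 2; // hypothetical "latest" level

        // If no FeatureLevelRecord has been committed yet, bootstrap one: from
        // meta.properties for freshly formatted clusters, or pinned to 1 for
        // preview clusters that ran KRaft without a feature level.
        static short initialMetadataVersion(Optional<Short> committedInLog,
                                            Optional<Short> fromMetaProperties,
                                            boolean previewCluster) {
            if (committedInLog.isPresent())
                return committedInLog.get();      // already bootstrapped
            if (previewCluster)
                return 1;                         // special case from this thread
            return fromMetaProperties.orElse(LATEST_VERSION);
        }
    }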

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #26

2021-11-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 290027 lines...]

[jira] [Created] (KAFKA-13490) Fix some problems with KRaft createTopics and incrementalAlterConfigs

2021-11-30 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13490:


 Summary: Fix some problems with KRaft createTopics and 
incrementalAlterConfigs
 Key: KAFKA-13490
 URL: https://issues.apache.org/jira/browse/KAFKA-13490
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin McCabe
Assignee: Colin McCabe
 Fix For: 3.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13489) Support different compression type for snapshots

2021-11-30 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-13489:
--

 Summary: Support different compression type for snapshots
 Key: KAFKA-13489
 URL: https://issues.apache.org/jira/browse/KAFKA-13489
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (KAFKA-12932) Interfaces for SnapshotReader and SnapshotWriter

2021-11-30 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-12932.

Resolution: Fixed

> Interfaces for SnapshotReader and SnapshotWriter
> 
>
> Key: KAFKA-12932
> URL: https://issues.apache.org/jira/browse/KAFKA-12932
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jose Armando Garcia Sancio
>Assignee: loboxu
>Priority: Major
>
> Change the snapshot API so that SnapshotWriter and SnapshotReader are 
> interfaces. Change the existing types SnapshotWriter and SnapshotReader to 
> use a different name and to implement the interfaces introduced by this issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: Filtering support on Fetch API

2021-11-30 Thread Eric Azama
Something to keep in mind with your proposal is that you're moving the
Decompression and Filtering costs into the Brokers. It probably also adds a
new Compression cost if you want the Broker to send compressed data over
the network. Centralizing that cost on the cluster may not be desirable and
would likely increase latency across the board.

Additionally, because header values are byte arrays, the Brokers probably
would not be able to do very sophisticated filtering. Support for basic
comparisons of the built-in Serdes might be simple enough, but anything
more complex or involving custom Serdes would probably require a new
plug-in type on the broker.

On Mon, Nov 29, 2021 at 10:49 AM Talat Uyarer 
wrote:

> Hi All,
>
> I want to get your advice about one subject. I want to create a KIP for
> message header base filtering on Fetch API.
>
> In our current use case we have 1k+ topics and, per topic, 10+ consumers
> for different use cases. However, all consumers are interested in different
> sets of messages on the same topic. Currently we read all messages from a
> given topic and drop logs on the consumer side. To reduce our stream
> processing cost I want to drop logs on the broker side. So far my
> understanding is:
>
> *Broker sends messages as is (no serialization cost) -> Network transfer ->
> Consumer deserializes messages (user-side deserialization cost) -> User
> space drops or uses messages (user-side filtering cost)*
>
>
> If I can drop messages based on their headers without serializing and
> deserializing them, it will help us save network bandwidth as well as
> consumer-side CPU cost.
>
> My approach is building a header index. Consumer clients will define
> their filter in the fetch call. If the filter matches, the broker will
> send the messages. I would like to hear your suggestions about my solution.
>
> Thanks
>
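
If filtering did move broker-side, the "new plug-in type" mentioned above
might look something like this. This is purely hypothetical; no such
extension point exists in Kafka today:

    import org.apache.kafka.common.header.Headers;

    // Hypothetical broker-side extension point: decide from headers alone,
    // without deserializing payloads, whether a record can be skipped for a
    // given fetch.
    public interface FetchHeaderFilter {
        boolean matches(Headers headers);
    }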


Re: [VOTE] KIP-799: Align behaviour for producer callbacks with documented behaviour

2021-11-30 Thread Luke Chen
Hi Séamus,
Thanks for the KIP!
We definitely want to keep the producer callback consistent for all types
of errors.

Just one comment for the KIP:
In the "Proposed Changes" section, could you please "explicitly" describe
what placeholder you'll return, in addition to adding a hyperlink to other
places, to make it clear.

+1 (non-binding)

Thank you.
Luke

On Tue, Nov 30, 2021 at 1:17 PM John Roesler  wrote:

> Thanks, Séamus!
>
> I'm +1 (binding).
>
> On Mon, 2021-11-29 at 16:14 +, Séamus Ó Ceanainn wrote:
> > Hi everyone,
> >
> > I'd like to start a vote for KIP-799: Align behaviour for producer
> > callbacks with documented behaviour
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-799%3A+Align+behaviour+for+producer+callbacks+with+documented+behaviour
> >
> > .
> >
> > The KIP proposes a breaking change in the behaviour of producer client
> > callbacks. The breaking change would align the behaviour of callbacks
> with
> > the documented behaviour for the method, and makes it consistent with
> > similar methods for producer client interceptors.
> >
> > Regards,
> > Séamus.
>
>
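
For reference, the documented behaviour the KIP aligns with is that a
failed send still passes a non-null RecordMetadata to the callback, with
-1 placeholders for the fields that were never assigned. A hedged usage
sketch, with the producer and record assumed as context:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class CallbackExample {
        static void sendWithCallback(KafkaProducer<String, String> producer,
                                     ProducerRecord<String, String> record) {
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Per the documented contract, metadata is non-null here;
                    // hasOffset()/hasTimestamp() report false for placeholders.
                    System.err.println("send to " + metadata.topic() + " failed: " + exception);
                } else {
                    System.out.println("stored at offset " + metadata.offset());
                }
            });
        }
    }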


Re: [VOTE] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-11-30 Thread Tom Bentley
Hi Knowles,

No, just the update to the KIP page itself and the KIP index.

Cheers,

Tom

On Mon, Nov 29, 2021 at 6:26 PM Knowles Atchison Jr 
wrote:

> Thank you all for voting!
>
> KIP-779 has been approved:
>
> 3 binding votes (John, Mickael, Tom)
> 4 non-binding votes (Knowles, Chris S., Chris E., Arjun)
>
> The vote is now closed. Other than modifying the wiki, is there anything
> additional I need to do vote-wise?
>
> Knowles
>
> On Mon, Nov 29, 2021 at 10:49 AM Tom Bentley  wrote:
>
> > Hi Knowles,
> >
> > Thanks for the KIP, +1 (binding)
> >
> > Kind regards,
> >
> > Tom
> >
> > On 11/29/21, Mickael Maison  wrote:
> > > Hi Knowles,
> > >
> > > +1 (binding)
> > >
> > > Thanks for the KIP!
> > >
> > > On Mon, Nov 29, 2021 at 12:56 PM Knowles Atchison Jr
> > >  wrote:
> > >>
> > >> Good morning,
> > >>
> > >> Bringing this back to the top.
> > >>
> > >> We currently have
> > >>
> > >> 1 binding
> > >> 4 non-binding
> > >>
> > >> Knowles
> > >>
> > >> On Fri, Nov 19, 2021 at 10:02 AM Knowles Atchison Jr
> > >> 
> > >> wrote:
> > >>
> > >> > Thank you all for voting. We still need two more binding votes.
> > >> >
> > >> > I have rebased and updated the PR to be ready to go once this vote
> > >> > passes:
> > >> >
> > >> > https://github.com/apache/kafka/pull/11382
> > >> >
> > >> > Knowles
> > >> >
> > >> > On Tue, Nov 16, 2021 at 3:43 PM Chris Egerton
> > >> > 
> > >> > wrote:
> > >> >
> > >> >> +1 (non-binding). Thanks Knowles!
> > >> >>
> > >> >> On Tue, Nov 16, 2021 at 10:48 AM Arjun Satish <
> > arjun.sat...@gmail.com>
> > >> >> wrote:
> > >> >>
> > >> >> > +1 (non-binding). Thanks for the KIP, Knowles! And I appreciate the
> > >> >> > follow-ups!
> > >> >> >
> > >> >> > On Thu, Nov 11, 2021 at 2:55 PM John Roesler <
> vvcep...@apache.org>
> > >> >> wrote:
> > >> >> >
> > >> >> > > Thanks, Knowles!
> > >> >> > >
> > >> >> > > I'm +1 (binding)
> > >> >> > >
> > >> >> > > -John
> > >> >> > >
> > >> >> > > On Wed, 2021-11-10 at 12:42 -0500, Christopher Shannon
> > >> >> > > wrote:
> > >> >> > > > +1 (non-binding). This looks good to me and will be useful
> as a
> > >> >> > > > way
> > >> >> to
> > >> >> > > > handle producer errors.
> > >> >> > > >
> > >> >> > > > On Mon, Nov 8, 2021 at 8:55 AM Knowles Atchison Jr <
> > >> >> > > katchiso...@gmail.com>
> > >> >> > > > wrote:
> > >> >> > > >
> > >> >> > > > > Good morning,
> > >> >> > > > >
> > >> >> > > > > I'd like to start a vote for KIP-779: Allow Source Tasks to
> > >> >> > > > > Handle
> > >> >> > > Producer
> > >> >> > > > > Exceptions:
> > >> >> > > > >
> > >> >> > > > >
> > >> >> > > > >
> > >> >> > >
> > >> >> >
> > >> >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-779%3A+Allow+Source+Tasks+to+Handle+Producer+Exceptions
> > >> >> > > > >
> > >> >> > > > > The purpose of this KIP is to allow Source Tasks the option
> > to
> > >> >> > "ignore"
> > >> >> > > > > kafka producer exceptions. After a few iterations, this is
> > now
> > >> >> part
> > >> >> > of
> > >> >> > > the
> > >> >> > > > > "errors.tolerance" configuration and provides a null
> > >> >> RecordMetadata
> > >> >> > to
> > >> >> > > > > commitRecord() in lieu of a new SourceTask interface method
> > or
> > >> >> worker
> > >> >> > > > > configuration item.
> > >> >> > > > >
> > >> >> > > > > PR is here:
> > >> >> > > > >
> > >> >> > > > > https://github.com/apache/kafka/pull/11382
> > >> >> > > > >
> > >> >> > > > > Any comments and feedback are welcome.
> > >> >> > > > >
> > >> >> > > > > Knowles
> > >> >> > > > >
> > >> >> > >
> > >> >> > >
> > >> >> > >
> > >> >> >
> > >> >>
> > >> >
> > >
> > >
> >
> >
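
For connector authors, a hedged sketch of consuming the null
RecordMetadata described in this thread. The commitRecord(SourceRecord,
RecordMetadata) hook exists today; the null-on-skip behaviour is what
KIP-779 adds, and the offset-tracking helpers are invented:

    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public abstract class TolerantSourceTask extends SourceTask {
        @Override
        public void commitRecord(SourceRecord record, RecordMetadata metadata)
                throws InterruptedException {
            if (metadata == null) {
                // errors.tolerance=all: the producer failed on this record and
                // the worker skipped it; advance source offsets so we do not
                // re-read it on restart.
                markSkipped(record);
                return;
            }
            markCommitted(record, metadata.offset());
        }

        abstract void markSkipped(SourceRecord record);                 // invented helper
        abstract void markCommitted(SourceRecord record, long offset);  // invented helper
    }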
>