Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #189

2023-08-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 417461 lines...]
org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStoresMultiStreamThreadsNamedTopology PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testSelfJoin[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testSelfJoin[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testSelfJoin[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testSelfJoin[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest
 > shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = all] 
STARTED

org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest
 > shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = all] 
PASSED

org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest
 > shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = none] 
STARTED

org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest
 > shouldDoStreamTableJoinWithDifferentNumberOfPartitions[Optimization = none] 
PASSED

org.apache.kafka.streams.integration.StreamsUpgradeTestIntegrationTest > 
testVersionProbingUpgrade STARTED

org.apache

Re: Re: Re: [DISCUSS] KIP-972: Add the metric of the current running version of kafka

2023-08-30 Thread hudeqi
Thank you for your answer, Mickael. 
If we set the gauge to a constant value of 1 and add a tag whose key is 
"version" and whose value is the version string we obtain, does this 
solve the problem? We can then get the version by tag in Prometheus.

best,
hudeqi
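
A minimal sketch of this idea, assuming the plain org.apache.kafka.common.metrics.Metrics
registry and using AppInfoParser.getVersion() rather than the KIP's VersionInfo.getVersion;
the metric and group names here are placeholders only:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.utils.AppInfoParser;

    public class VersionGaugeSketch {
        // Register a gauge whose numeric value is always 1; the non-numeric version
        // string travels in the "version" tag so Prometheus can still scrape it.
        public static void register(Metrics metrics) {
            Map<String, String> tags =
                Collections.singletonMap("version", AppInfoParser.getVersion());
            metrics.addMetric(
                metrics.metricName("kafka-version", "app-info", "Running Kafka version", tags),
                (config, now) -> 1.0);
        }
    }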

"Mickael Maison" 写道:
> Hi,
> 
> Prometheus only support numeric values for metrics. This means it's
> not able to handle the kafka.server:type=app-info metric since Kafka
> versions are not valid numbers (3.5.0).
> As a workaround we could create a metric with the version without the
> dots, for example with value 350 for Kafka 3.5.0.
> 
> Also in between releases Kafka uses the -SNAPSHOT suffix (for example
> trunk is currently 3.7.0-SNAPSHOT) so we should also consider a way to
> handle those.
> 
> Thanks,
> Mickael
> 
> On Wed, Aug 30, 2023 at 2:51 PM hudeqi <16120...@bjtu.edu.cn> wrote:
> >
> > Hi, Kamal, thanks your reminding, but I have a question: It seems that I 
> > can't get this metric through "jmx_prometheus"? Although I observed this 
> > metric through other tools.
> >
> > best,
> > hudeqi
> >
> > "Kamal Chandraprakash" 写道:
> > > Hi Hudeqi,
> > >
> > > Kafka already emits the version metric. Can you check whether the below
> > > metric satisfies your requirement?
> > >
> > > kafka.server:type=app-info,id=0
> > >
> > > --
> > > Kamal
> > >
> > > On Mon, Aug 28, 2023 at 2:29 PM hudeqi <16120...@bjtu.edu.cn> wrote:
> > >
> > > > Hi, all, I want to submit a minor kip to add a metric, which supports to
> > > > get the running kafka server version, the wiki url is here
> > > >
> > > > Motivation
> > > >
> > > > At present, it is impossible to perceive the Kafka version that the 
> > > > broker
> > > > is running from the perspective of metrics. If multiple Kafka versions 
> > > > are
> > > > deployed in a cluster due to various reasons, it is difficult for us to
> > > > intuitively understand the version distribution.
> > > >
> > > > So, I want to add a kafka version metric indicating the version of the
> > > > current running kafka server, it can help us to perceive the mixed
> > > > distribution of multiple versions, and to perceive the progress of 
> > > > version
> > > > upgrade in the cluster in real time.
> > > >
> > > > Proposed Changes
> > > >
> > > > When instantiating kafkaServer/BrokerServer, register `KafkaVersion` 
> > > > gauge
> > > > metric, whose value is obtained by `VersionInfo.getVersion`. And remove 
> > > > all
> > > > related metrics when kafkaServer/BrokerServer shutdown.
> > > >
> > > >
> > > >
> > > >
> > > > best,
> > > >
> > > > hudeqi
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >


Re: Re: [VOTE] KIP-965: Support disaster recovery between clusters by MirrorMaker

2023-08-30 Thread hudeqi
Thank you for your thoughtful questions, Greg.
1. Yes, this configuration is only relevant to the MirrorSourceConnector.
3. "Disaster recovery" is the wording I saw Mickael Maison use in this Jira 
(https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-15172). If 
you have any better suggestions, I can change the title of this KIP, thank you.
5. User SCRAM credentials: the topic ACL information contains all the related users, 
which can be parsed out; the SCRAM information can then be read from the source 
cluster through "ApiKeys.DESCRIBE_USER_SCRAM_CREDENTIALS" and applied to the 
target cluster through "ApiKeys.ALTER_USER_SCRAM_CREDENTIALS".
Group ACLs: list all the group ACLs of the source cluster, filter for the group 
ACLs related to the users found in the previous step, and then use the admin 
interface to replicate them to the target cluster (see the sketch below).
6. I think it's a good idea to classify the replication types so that usage is 
more flexible, but I'm a little confused about which scenarios would replicate 
only some of them (though it doesn't matter much; that is up to the user).

I'm sorry, I don't understand your second and fourth questions. Could you 
describe them in detail?

best,
hudeqi
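
A rough sketch of the group-ACL replication step from point 5, using only the public
Admin API; the sourceAdmin/targetAdmin handles and the filtering step are assumptions,
not the final KIP implementation:

    import java.util.Collection;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.acl.AccessControlEntryFilter;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclBindingFilter;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePatternFilter;
    import org.apache.kafka.common.resource.ResourceType;

    public class GroupAclSyncSketch {
        // List all group ACLs on the source cluster and re-create them on the target.
        // In the KIP the bindings would first be filtered down to the users discovered
        // from the topic ACLs.
        static void replicateGroupAcls(Admin sourceAdmin, Admin targetAdmin)
                throws ExecutionException, InterruptedException {
            AclBindingFilter groupAcls = new AclBindingFilter(
                new ResourcePatternFilter(ResourceType.GROUP, null, PatternType.ANY),
                AccessControlEntryFilter.ANY);
            Collection<AclBinding> bindings = sourceAdmin.describeAcls(groupAcls).values().get();
            targetAdmin.createAcls(bindings).all().get();
        }
    }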


"Greg Harris" 写道:
> Hi hudeqi,
> 
> Thanks for the KIP! I think the original behavior (removing WRITE
> permissions during the sync) is a good default, but is not acceptable
> in every situation. I think providing a configuration for this
> behavior is the right idea.
> 
> I had a few questions:
> 
> 1. Is this configuration only relevant to the MirrorSourceConnector?
> Since we split the different connector configurations, we can omit
> this configuration from the Checkpoint and Heartbeat connectors when
> deployed in a connect cluster.
> 2. Is this configuration only able to be configured globally for an
> entire Dedicated MirrorMaker2? Can it be configured for one flow in a
> dedicated deployment and not another by specifying
> `source->target.sync.full.acl.enabled`?
> 3. Is the documentation going to include the "disaster recovery"
> language, or is that a left-over from an earlier revision in the KIP?
> I don't think that "disaster recovery" is a very clear term in this
> situation, and we should probably be very specific in the
> documentation about what this configuration is changing.
> 4. Did you consider any use-cases where a more restrictive ACL sync
> would be desirable? Right now we are downgrading ALL/removing WRITE,
> but leaving CREATE/DELETE/ALTER/etc ACLs as-is. Perhaps users would
> like to choose between an ACL sync which is more locked-down, the
> current behavior, or more permissive.
> 5. Currently MM2 only syncs topic ACLs, and not group ACLs or SCRAM
> credentials, so those would be new capabilities. Can you here (or in
> the KIP) go into more detail about how these would work?
> 6. Is there a reason to have one configuration control these three
> different syncs? Could users want to change the topic ACL sync
> semantics, while not using the group sync or the SCRAM sync?
> 
> Thanks,
> Greg
> 
> On Mon, Aug 28, 2023 at 2:10 AM hudeqi <16120...@bjtu.edu.cn> wrote:
> >
> > Hi, all, this is a vote about kip-965, thanks.
> >
> > best,
> > hudeqi
> >
> >
> > > -Original Message-
> > > From: hudeqi <16120...@bjtu.edu.cn>
> > > Sent: 2023-08-17 18:03:49 (Thursday)
> > > To: dev@kafka.apache.org
> > > Cc:
> > > Subject: Re: [DISCUSSION] KIP-965: Support disaster recovery between 
clusters by MirrorMaker
> > >
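
As an illustration of Greg's question 2 above (not a finalized design), a dedicated
MirrorMaker 2 deployment could in principle scope the proposed property per replication
flow, assuming the property keeps the name sync.full.acl.enabled from the discussion:

    clusters = source, target
    source->target.enabled = true
    # hypothetical global default for all flows
    sync.full.acl.enabled = false
    # hypothetical per-flow override for a single replication flow
    source->target.sync.full.acl.enabled = true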


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #67

2023-08-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 282901 lines...]
Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChangeNotTriggered() 
PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testMixedPipeline() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testMixedPipeline() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testGetDataExistingZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testGetDataExistingZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testDeleteExistingZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testDeleteExistingZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testSessionExpiry() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testSessionExpiry() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testSetDataNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testSetDataNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testConnectionViaNettyClient() STARTED

> Task :streams:integrationTest

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargeNumConsumers 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargeNumConsumers 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargeNumConsumers STARTED

> Task :core:integrationTest

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testConnectionViaNettyClient() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testDeleteNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testDeleteNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testExistsExistingZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testExistsExistingZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testGetAclNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testGetAclNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 177 > 
ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() PASSED

> Task :streams:integrationTest

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargeNumConsumers PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 180 > 
HighAvailability

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #12

2023-08-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2156

2023-08-30 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #66

2023-08-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 282637 lines...]
> Task :raft:testClasses UP-TO-DATE
> Task :connect:json:testJar
> Task :group-coordinator:compileTestJava UP-TO-DATE
> Task :group-coordinator:testClasses UP-TO-DATE
> Task :streams:generateMetadataFileForMavenJavaPublication
> Task :connect:json:testSrcJar
> Task :metadata:compileTestJava UP-TO-DATE
> Task :metadata:testClasses UP-TO-DATE
> Task :clients:generateMetadataFileForMavenJavaPublication

> Task :connect:api:javadoc
/home/jenkins/workspace/Kafka_kafka_3.5/connect/api/src/main/java/org/apache/kafka/connect/source/SourceRecord.java:44:
 warning - Tag @link: reference not found: org.apache.kafka.connect.data
1 warning

> Task :connect:api:copyDependantLibs UP-TO-DATE
> Task :connect:api:jar UP-TO-DATE
> Task :connect:api:generateMetadataFileForMavenJavaPublication
> Task :connect:json:copyDependantLibs UP-TO-DATE
> Task :connect:json:jar UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:api:javadocJar
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :streams:javadoc
> Task :streams:javadocJar

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_3.5/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API";>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/workspace/Kafka_kafka_3.5/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
2 warnings

> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:compileScala
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava UP-TO-DATE
> Task :streams:testClasses UP-TO-DATE
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.0.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 3m 1s
89 actionable tasks: 33 executed, 56 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_3.5/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.5.2-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.5.2-SNAPSHOT/streams-quickstart-3.5.2-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.5.2-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.4 #163

2023-08-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #11

2023-08-30 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2155

2023-08-30 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-970: Deprecate and remove Connect's redundant task configurations endpoint

2023-08-30 Thread Sagar
+1 (non - binding).

Thanks !
Sagar.

On Wed, 30 Aug 2023 at 11:09 PM, Chris Egerton 
wrote:

> +1 (binding), thanks Yash!
>
> On Wed, Aug 30, 2023 at 1:34 PM Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
>
> > Thanks for the KIP. Looks good to me.
> >
> > +1 (non-binding).
> >
> > Andrew
> >
> > > On 30 Aug 2023, at 18:07, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> > hgerald...@bloomberg.net> wrote:
> > >
> > > This makes sense to me, +1 (non-binding)
> > >
> > > From: dev@kafka.apache.org At: 08/30/23 02:58:59 UTC-4:00To:
> > dev@kafka.apache.org
> > > Subject: [VOTE] KIP-970: Deprecate and remove Connect's redundant task
> > configurations endpoint
> > >
> > > Hi all,
> > >
> > > This is the vote thread for KIP-970 which proposes deprecating (in the
> > > Apache Kafka 3.7 release) and eventually removing (in the next major
> > Apache
> > > Kafka release - 4.0) Connect's redundant task configurations endpoint.
> > >
> > > KIP -
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-970%3A+Deprecate+and+remov
> > > e+Connect%27s+redundant+task+configurations+endpoint
> > >
> > > Discussion thread -
> > > https://lists.apache.org/thread/997qg9oz58kho3c19mdrjodv0n98plvj
> > >
> > > Thanks,
> > > Yash
> > >
> > >
> >
> >
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2154

2023-08-30 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-970: Deprecate and remove Connect's redundant task configurations endpoint

2023-08-30 Thread Chris Egerton
+1 (binding), thanks Yash!

On Wed, Aug 30, 2023 at 1:34 PM Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> Thanks for the KIP. Looks good to me.
>
> +1 (non-binding).
>
> Andrew
>
> > On 30 Aug 2023, at 18:07, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> hgerald...@bloomberg.net> wrote:
> >
> > This makes sense to me, +1 (non-binding)
> >
> > From: dev@kafka.apache.org At: 08/30/23 02:58:59 UTC-4:00To:
> dev@kafka.apache.org
> > Subject: [VOTE] KIP-970: Deprecate and remove Connect's redundant task
> configurations endpoint
> >
> > Hi all,
> >
> > This is the vote thread for KIP-970 which proposes deprecating (in the
> > Apache Kafka 3.7 release) and eventually removing (in the next major
> Apache
> > Kafka release - 4.0) Connect's redundant task configurations endpoint.
> >
> > KIP -
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-970%3A+Deprecate+and+remov
> > e+Connect%27s+redundant+task+configurations+endpoint
> >
> > Discussion thread -
> > https://lists.apache.org/thread/997qg9oz58kho3c19mdrjodv0n98plvj
> >
> > Thanks,
> > Yash
> >
> >
>
>


Re: [VOTE] KIP-970: Deprecate and remove Connect's redundant task configurations endpoint

2023-08-30 Thread Andrew Schofield
Thanks for the KIP. Looks good to me.

+1 (non-binding).

Andrew

> On 30 Aug 2023, at 18:07, Hector Geraldino (BLOOMBERG/ 919 3RD A) 
>  wrote:
>
> This makes sense to me, +1 (non-binding)
>
> From: dev@kafka.apache.org At: 08/30/23 02:58:59 UTC-4:00To:  
> dev@kafka.apache.org
> Subject: [VOTE] KIP-970: Deprecate and remove Connect's redundant task 
> configurations endpoint
>
> Hi all,
>
> This is the vote thread for KIP-970 which proposes deprecating (in the
> Apache Kafka 3.7 release) and eventually removing (in the next major Apache
> Kafka release - 4.0) Connect's redundant task configurations endpoint.
>
> KIP -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-970%3A+Deprecate+and+remov
> e+Connect%27s+redundant+task+configurations+endpoint
>
> Discussion thread -
> https://lists.apache.org/thread/997qg9oz58kho3c19mdrjodv0n98plvj
>
> Thanks,
> Yash
>
>



Re:[VOTE] KIP-970: Deprecate and remove Connect's redundant task configurations endpoint

2023-08-30 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
This makes sense to me, +1 (non-binding)

From: dev@kafka.apache.org At: 08/30/23 02:58:59 UTC-4:00To:  
dev@kafka.apache.org
Subject: [VOTE] KIP-970: Deprecate and remove Connect's redundant task 
configurations endpoint

Hi all,

This is the vote thread for KIP-970 which proposes deprecating (in the
Apache Kafka 3.7 release) and eventually removing (in the next major Apache
Kafka release - 4.0) Connect's redundant task configurations endpoint.

KIP -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-970%3A+Deprecate+and+remov
e+Connect%27s+redundant+task+configurations+endpoint

Discussion thread -
https://lists.apache.org/thread/997qg9oz58kho3c19mdrjodv0n98plvj

Thanks,
Yash




[DISCUSS] KIP-973 Expose per topic replication rate metrics

2023-08-30 Thread Nelson Bighetti
Relatively minor change that fixes a mismatch between documentation and
implementation.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-973%3A+Expose+per+topic+replication+rate+metrics


Re: Apache Kafka 3.6.0 release

2023-08-30 Thread Chris Egerton
Hi Satish,

Wanted to let you know that KAFKA-12879 (
https://issues.apache.org/jira/browse/KAFKA-12879), a breaking change in
Admin::listOffsets, has been reintroduced into the code base. Since we
haven't yet published a release with this change (at least, not the more
recent instance of it), I was hoping we could treat it as a blocker for
3.6.0. I'd also like to solicit the input of people familiar with the admin
client to weigh in on the Jira ticket about whether we should continue to
preserve the current behavior (if the consensus is that we should, I'm
happy to file a fix).

Please let me know if you agree that this qualifies as a blocker. I plan on
publishing a potential fix sometime this week.

Cheers,

Chris

On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana 
wrote:

> Hi,
> Please plan to continue merging pull requests associated with any
> outstanding minor features and stabilization changes to 3.6 branch
> before September 3rd. Kindly update the KIP's implementation status in
> the 3.6.0 release notes.
>
> Thanks,
> Satish.
>
> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
>  wrote:
> >
> > Hey Satish,
> > Everything should be in 3.6, and I will update the release plan wiki.
> > Thanks!
> >
> > On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana  >
> > wrote:
> >
> > > Hi Justine,
> > > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This part looks
> > > to be addressing a critical issue of consumers getting stuck. Please
> > > update the release plan wiki and merge all the required changes to 3.6
> > > branch.
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Thu, 24 Aug 2023 at 22:19, Justine Olshan
> > >  wrote:
> > > >
> > > > Hey Satish,
> > > > Does it make sense to include KIP-890 part 1? It prevents hanging
> > > > transactions for older clients. (An optimization and stronger EOS
> > > > guarantees will be included in part 2)
> > > >
> > > > Thanks,
> > > > Justine
> > > >
> > > > On Mon, Aug 21, 2023 at 3:29 AM Satish Duggana <
> satish.dugg...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi,
> > > > > 3.6 branch is created. Please make sure any PRs targeted for 3.6.0
> > > > > should be merged to 3.6 branch once those are merged to trunk.
> > > > >
> > > > > Thanks,
> > > > > Satish.
> > > > >
> > > > > On Wed, 16 Aug 2023 at 15:58, Satish Duggana <
> satish.dugg...@gmail.com
> > > >
> > > > > wrote:
> > > > > >
> > > > > > Hi,
> > > > > > Please plan to merge PRs(including the major features) targeted
> for
> > > > > > 3.6.0 by the end of Aug 20th UTC. Starting from August 21st, any
> pull
> > > > > > requests intended for the 3.6.0 release must include the changes
> > > > > > merged into the 3.6 branch as mentioned in the release plan.
> > > > > >
> > > > > > Thanks,
> > > > > > Satish.
> > > > > >
> > > > > > On Fri, 4 Aug 2023 at 18:39, Chris Egerton
> 
> > > > > wrote:
> > > > > > >
> > > > > > > Thanks for adding KIP-949, Satish!
> > > > > > >
> > > > > > > On Fri, Aug 4, 2023 at 7:06 AM Satish Duggana <
> > > > > satish.dugg...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > > Myself and Divij discussed and added the wiki for Kafka
> > > TieredStorage
> > > > > > > > Early Access Release[1]. If you have any comments or
> feedback,
> > > please
> > > > > > > > feel free to share them.
> > > > > > > >
> > > > > > > > 1.
> > > > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Satish.
> > > > > > > >
> > > > > > > > On Fri, 4 Aug 2023 at 08:40, Satish Duggana <
> > > > > satish.dugg...@gmail.com>
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi Chris,
> > > > > > > > > Thanks for the update. This looks to be a minor change and
> is
> > > also
> > > > > > > > > useful for backward compatibility. I added it to the
> release
> > > plan
> > > > > as
> > > > > > > > > an exceptional case.
> > > > > > > > >
> > > > > > > > > ~Satish.
> > > > > > > > >
> > > > > > > > > On Thu, 3 Aug 2023 at 21:34, Chris Egerton
> > >  > > > > >
> > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > Hi Satish,
> > > > > > > > > >
> > > > > > > > > > Would it be possible to include KIP-949 (
> > > > > > > > > >
> > > > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy
> > > > > > > > )
> > > > > > > > > > in the 3.6.0 release? It passed voting yesterday, and is
> a
> > > very
> > > > > small,
> > > > > > > > > > low-risk change that we'd like to put out as soon as
> > > possible in
> > > > > order
> > > > > > > > to
> > > > > > > > > > patch an accidental break in backwards compatibility
> caused
> > > a few
> > > > > > > > versions
> > > > > > > > > > ago.
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > >
> > > > > > > > > > Chris
> > > > > > > > > >
> > > > > > > > > > On Fri, Jul 28, 2023

[jira] [Reopened] (KAFKA-12879) Compatibility break in Admin.listOffsets()

2023-08-30 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton reopened KAFKA-12879:
---
  Assignee: (was: Philip Nee)

Reopening due to https://github.com/apache/kafka/pull/13432

> Compatibility break in Admin.listOffsets()
> --
>
> Key: KAFKA-12879
> URL: https://issues.apache.org/jira/browse/KAFKA-12879
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Tom Bentley
>Priority: Major
> Fix For: 2.5.2, 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4
>
>
> KAFKA-12339 incompatibly changed the semantics of Admin.listOffsets(). 
> Previously it would fail with {{UnknownTopicOrPartitionException}} when a 
> topic didn't exist. Now it will (eventually) fail with {{TimeoutException}}. 
> It seems this was more or less intentional, even though it would break code 
> which was expecting and handling the {{UnknownTopicOrPartitionException}}. A 
> workaround is to use {{retries=1}} and inspect the cause of the 
> {{TimeoutException}}, but this isn't really suitable for cases where the same 
> Admin client instance is being used for other calls where retries is 
> desirable.
> Furthermore as well as the intended effect on {{listOffsets()}} it seems that 
> the change could actually affect other methods of Admin.
> More generally, the Admin client API is vague about which exceptions can 
> propagate from which methods. This means that it's not possible to say, in 
> cases like this, whether the calling code _should_ have been relying on the 
> {{UnknownTopicOrPartitionException}} or not.
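
A minimal sketch of the retries=1 workaround mentioned above; the topic name and
bootstrap address are placeholders, and the cause chain is walked rather than assuming
a fixed exception nesting:

    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

    public class ListOffsetsWorkaroundSketch {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(AdminClientConfig.RETRIES_CONFIG, 1); // fail fast so the real cause surfaces
            try (Admin admin = Admin.create(props)) {
                admin.listOffsets(Map.of(new TopicPartition("maybe-missing-topic", 0),
                                         OffsetSpec.latest()))
                     .all().get();
            } catch (ExecutionException e) {
                // Inspect the cause chain of the timeout for the original error.
                for (Throwable t = e.getCause(); t != null; t = t.getCause()) {
                    if (t instanceof UnknownTopicOrPartitionException) {
                        System.out.println("Topic really does not exist");
                    }
                }
            }
        }
    }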



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: [DISCUSS] KIP-972: Add the metric of the current running version of kafka

2023-08-30 Thread Mickael Maison
Hi,

Prometheus only support numeric values for metrics. This means it's
not able to handle the kafka.server:type=app-info metric since Kafka
versions are not valid numbers (3.5.0).
As a workaround we could create a metric with the version without the
dots, for example with value 350 for Kafka 3.5.0.

Also in between releases Kafka uses the -SNAPSHOT suffix (for example
trunk is currently 3.7.0-SNAPSHOT) so we should also consider a way to
handle those.

Thanks,
Mickael
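
A minimal sketch of that workaround (the method name is made up for illustration);
it strips a -SNAPSHOT suffix and concatenates the numeric parts:

    public class VersionNumberSketch {
        // "3.5.0" -> 350, "3.7.0-SNAPSHOT" -> 370
        static int versionAsNumber(String version) {
            String base = version.endsWith("-SNAPSHOT")
                ? version.substring(0, version.length() - "-SNAPSHOT".length())
                : version;
            return Integer.parseInt(base.replace(".", ""));
        }
    }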

On Wed, Aug 30, 2023 at 2:51 PM hudeqi <16120...@bjtu.edu.cn> wrote:
>
> Hi, Kamal, thanks your reminding, but I have a question: It seems that I 
> can't get this metric through "jmx_prometheus"? Although I observed this 
> metric through other tools.
>
> best,
> hudeqi
>
> "Kamal Chandraprakash" 写道:
> > Hi Hudeqi,
> >
> > Kafka already emits the version metric. Can you check whether the below
> > metric satisfies your requirement?
> >
> > kafka.server:type=app-info,id=0
> >
> > --
> > Kamal
> >
> > On Mon, Aug 28, 2023 at 2:29 PM hudeqi <16120...@bjtu.edu.cn> wrote:
> >
> > > Hi, all, I want to submit a minor kip to add a metric, which supports to
> > > get the running kafka server version, the wiki url is here
> > >
> > > Motivation
> > >
> > > At present, it is impossible to perceive the Kafka version that the broker
> > > is running from the perspective of metrics. If multiple Kafka versions are
> > > deployed in a cluster due to various reasons, it is difficult for us to
> > > intuitively understand the version distribution.
> > >
> > > So, I want to add a kafka version metric indicating the version of the
> > > current running kafka server, it can help us to perceive the mixed
> > > distribution of multiple versions, and to perceive the progress of version
> > > upgrade in the cluster in real time.
> > >
> > > Proposed Changes
> > >
> > > When instantiating kafkaServer/BrokerServer, register `KafkaVersion` gauge
> > > metric, whose value is obtained by `VersionInfo.getVersion`. And remove 
> > > all
> > > related metrics when kafkaServer/BrokerServer shutdown.
> > >
> > >
> > >
> > >
> > > best,
> > >
> > > hudeqi
> > >
> > >
> > >
> > >
> > >
> > >


Re: Apache Kafka 3.6.0 release

2023-08-30 Thread Satish Duggana
Hi,
Please plan to continue merging pull requests associated with any
outstanding minor features and stabilization changes to 3.6 branch
before September 3rd. Kindly update the KIP's implementation status in
the 3.6.0 release notes.

Thanks,
Satish.

On Fri, 25 Aug 2023 at 21:37, Justine Olshan
 wrote:
>
> Hey Satish,
> Everything should be in 3.6, and I will update the release plan wiki.
> Thanks!
>
> On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana 
> wrote:
>
> > Hi Justine,
> > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This part looks
> > to be addressing a critical issue of consumers getting stuck. Please
> > update the release plan wiki and merge all the required changes to 3.6
> > branch.
> >
> > Thanks,
> > Satish.
> >
> > On Thu, 24 Aug 2023 at 22:19, Justine Olshan
> >  wrote:
> > >
> > > Hey Satish,
> > > Does it make sense to include KIP-890 part 1? It prevents hanging
> > > transactions for older clients. (An optimization and stronger EOS
> > > guarantees will be included in part 2)
> > >
> > > Thanks,
> > > Justine
> > >
> > > On Mon, Aug 21, 2023 at 3:29 AM Satish Duggana  > >
> > > wrote:
> > >
> > > > Hi,
> > > > 3.6 branch is created. Please make sure any PRs targeted for 3.6.0
> > > > should be merged to 3.6 branch once those are merged to trunk.
> > > >
> > > > Thanks,
> > > > Satish.
> > > >
> > > > On Wed, 16 Aug 2023 at 15:58, Satish Duggana  > >
> > > > wrote:
> > > > >
> > > > > Hi,
> > > > > Please plan to merge PRs(including the major features) targeted for
> > > > > 3.6.0 by the end of Aug 20th UTC. Starting from August 21st, any pull
> > > > > requests intended for the 3.6.0 release must include the changes
> > > > > merged into the 3.6 branch as mentioned in the release plan.
> > > > >
> > > > > Thanks,
> > > > > Satish.
> > > > >
> > > > > On Fri, 4 Aug 2023 at 18:39, Chris Egerton 
> > > > wrote:
> > > > > >
> > > > > > Thanks for adding KIP-949, Satish!
> > > > > >
> > > > > > On Fri, Aug 4, 2023 at 7:06 AM Satish Duggana <
> > > > satish.dugg...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > > Myself and Divij discussed and added the wiki for Kafka
> > TieredStorage
> > > > > > > Early Access Release[1]. If you have any comments or feedback,
> > please
> > > > > > > feel free to share them.
> > > > > > >
> > > > > > > 1.
> > > > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Satish.
> > > > > > >
> > > > > > > On Fri, 4 Aug 2023 at 08:40, Satish Duggana <
> > > > satish.dugg...@gmail.com>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi Chris,
> > > > > > > > Thanks for the update. This looks to be a minor change and is
> > also
> > > > > > > > useful for backward compatibility. I added it to the release
> > plan
> > > > as
> > > > > > > > an exceptional case.
> > > > > > > >
> > > > > > > > ~Satish.
> > > > > > > >
> > > > > > > > On Thu, 3 Aug 2023 at 21:34, Chris Egerton
> >  > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi Satish,
> > > > > > > > >
> > > > > > > > > Would it be possible to include KIP-949 (
> > > > > > > > >
> > > > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy
> > > > > > > )
> > > > > > > > > in the 3.6.0 release? It passed voting yesterday, and is a
> > very
> > > > small,
> > > > > > > > > low-risk change that we'd like to put out as soon as
> > possible in
> > > > order
> > > > > > > to
> > > > > > > > > patch an accidental break in backwards compatibility caused
> > a few
> > > > > > > versions
> > > > > > > > > ago.
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > >
> > > > > > > > > Chris
> > > > > > > > >
> > > > > > > > > On Fri, Jul 28, 2023 at 2:35 AM Satish Duggana <
> > > > > > > satish.dugg...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi All,
> > > > > > > > > > Whoever has KIP entries in the 3.6.0 release plan. Please
> > > > update it
> > > > > > > > > > with the latest status by tomorrow(end of the day 29th Jul
> > UTC
> > > > ).
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > > Satish.
> > > > > > > > > >
> > > > > > > > > > On Fri, 28 Jul 2023 at 12:01, Satish Duggana <
> > > > > > > satish.dugg...@gmail.com>
> > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > Thanks Ismael and Divij for the suggestions.
> > > > > > > > > > >
> > > > > > > > > > > One way was to follow the earlier guidelines that we set
> > for
> > > > any
> > > > > > > early
> > > > > > > > > > > access release. It looks Ismael already mentioned the
> > > > example of
> > > > > > > > > > > KRaft.
> > > > > > > > > > >
> > > > > > > > > > > KIP-405 mentions upgrade/downgrade and limitations
> > sections.
> > > > We can
> > > > > > > > > > > clarify that in the release notes for users on how this
> > > 

Re: Re: [DISCUSS] KIP-972: Add the metric of the current running version of kafka

2023-08-30 Thread hudeqi
Hi Kamal, thanks for the reminder, but I have a question: it seems that I can't 
get this metric through "jmx_prometheus", although I did observe this metric 
through other tools.

best,
hudeqi

"Kamal Chandraprakash" 写道:
> Hi Hudeqi,
> 
> Kafka already emits the version metric. Can you check whether the below
> metric satisfies your requirement?
> 
> kafka.server:type=app-info,id=0
> 
> --
> Kamal
> 
> On Mon, Aug 28, 2023 at 2:29 PM hudeqi <16120...@bjtu.edu.cn> wrote:
> 
> > Hi, all, I want to submit a minor kip to add a metric, which supports to
> > get the running kafka server version, the wiki url is here
> >
> > Motivation
> >
> > At present, it is impossible to perceive the Kafka version that the broker
> > is running from the perspective of metrics. If multiple Kafka versions are
> > deployed in a cluster due to various reasons, it is difficult for us to
> > intuitively understand the version distribution.
> >
> > So, I want to add a kafka version metric indicating the version of the
> > current running kafka server, it can help us to perceive the mixed
> > distribution of multiple versions, and to perceive the progress of version
> > upgrade in the cluster in real time.
> >
> > Proposed Changes
> >
> > When instantiating kafkaServer/BrokerServer, register `KafkaVersion` gauge
> > metric, whose value is obtained by `VersionInfo.getVersion`. And remove all
> > related metrics when kafkaServer/BrokerServer shutdown.
> >
> >
> >
> >
> > best,
> >
> > hudeqi
> >
> >
> >
> >
> >
> >


Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-08-30 Thread hudeqi
Hi Elkhan,
I have also done work similar to the partition replication lag mentioned in 
this KIP. After going to production, I discovered a problem: when `MirrorSourceTask` 
executes `poll`, it obtains the LEO of the source topic through the consumer. 
This lookup can jitter by several seconds, and I think this may affect the 
performance of the replication itself.
As for the `replication-latency-ms` metric, it is sometimes inaccurate. For 
details, see: https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-15068

best,
hudeqi
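
A rough sketch of how the per-partition lag discussed in this thread can be computed;
the source-cluster consumer and the map of last replicated offsets are assumptions,
and the endOffsets() call is exactly the lookup that can add the jitter mentioned above:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;

    public class ReplicationOffsetLagSketch {
        // replication-offset-lag per partition = source log end offset - last replicated offset
        static Map<TopicPartition, Long> compute(Consumer<?, ?> sourceConsumer,
                                                 Map<TopicPartition, Long> lastReplicatedOffsets) {
            Map<TopicPartition, Long> endOffsets =
                sourceConsumer.endOffsets(lastReplicatedOffsets.keySet());
            Map<TopicPartition, Long> lag = new HashMap<>();
            endOffsets.forEach((tp, leo) ->
                lag.put(tp, Math.max(0L, leo - lastReplicatedOffsets.getOrDefault(tp, 0L))));
            return lag;
        }
    }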


"Viktor Somogyi-Vass" 写道:
> Hi Elkhan,
> 
> I think this is quite a useful improvement. A few questions, suggestions:
> 1. How do you calculate the min, max and avg variants? If I understand
> correctly then the metric itself is partition based
> (where replication-offset-lag is the lag of the replica that is being
> consumed) and these are min, max, avg across replicas?
> 2. You briefly mention replication-latency-ms at the end but I think it'd
> be worth writing a bit more about it, what it does currently, how it is
> calculated and therefore why it doesn't fit.
> 
> Thanks,
> Viktor
> 
> On Sat, Aug 26, 2023 at 3:49 PM Elxan Eminov 
> wrote:
> 
> > Relatively minor change with a new metric for MM2
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> >


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-08-30 Thread Viktor Somogyi-Vass
Hi Elkhan,

I think this is quite a useful improvement. A few questions, suggestions:
1. How do you calculate the min, max and avg variants? If I understand
correctly then the metric itself is partition based
(where replication-offset-lag is the lag of the replica that is being
consumed) and these are min, max, avg across replicas?
2. You briefly mention replication-latency-ms at the end but I think it'd
be worth writing a bit more about it, what it does currently, how it is
calculated and therefore why it doesn't fit.

Thanks,
Viktor

On Sat, Aug 26, 2023 at 3:49 PM Elxan Eminov 
wrote:

> Relatively minor change with a new metric for MM2
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
>


Re: [DISCUSS] KIP-936: Throttle number of active PIDs

2023-08-30 Thread Omnia Ibrahim
Hi Claude, sorry for the late reply; I was out for some time. Thanks for your
response.

>   - To ensure that all produced ids are tracked for 1 hour regardless of
> whether they were produced by userA or userB.
Not really; we need to track producer IDs created by userA separately from
producer IDs created by userB.
The primary purpose of the API is to limit the number of producer IDs per user, as
the quota is set per user; for example, userA might have a quota of 100 producer IDs
while userB might have a quota of only 50.
These quotas are dynamic configs, so they can change at any time. This
is why I am asking how we will update the max entries for the Shape.

> Do you need to keep a list for each principal?  are the PID's supposed to
> be globally unique?  If the question you are asking is has principal_1 seen
> pid_2 then hashing principal_1 and pid_2 together and creating a bloom
> filter will tell you using one LayeredBloomFilter.

The PID should be unique; however, the problem we are trying to solve here is
to throttle the owner of the vast majority of them, which in this case is
the principal. The main concern I have is
how to update the LayeredBloomFilter's max entries, which in theory should be the
value of the dynamic config `quota` that is set per principal.
If we used one LayeredBloomFilter, then every time I need to update the max
entries to match the quota I'll have to replace the Bloom filter for all
principals. However, if they are separated as I suggested, then replacing
the LayeredBloomFilter of max entries X with one of max entries Y
will only impact one user and not everyone. Does this make sense?
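
To make the discussion a bit more concrete, here is a toy illustration of the
sliding-window idea for a single principal, using plain HashSets as stand-ins for the
Bloom filter layers (the actual proposal uses the commons-collections LayeredBloomFilter,
and the per-principal quota would bound the distinct PIDs counted here):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;

    public class PidWindowSketch {
        private static final long LAYER_MS = 60_000L;        // one layer per minute
        private static final long RETENTION_MS = 3_600_000L; // remember PIDs for ~1 hour

        private static final class Layer {
            final long createdAt;
            final Set<Long> pids = new HashSet<>();
            Layer(long createdAt) { this.createdAt = createdAt; }
        }

        private final Deque<Layer> layers = new ArrayDeque<>();

        // Returns true if the PID is new within the retention window, i.e. it should
        // count against this principal's producer-id quota.
        synchronized boolean track(long producerId, long nowMs) {
            while (!layers.isEmpty() && nowMs - layers.peekFirst().createdAt > RETENTION_MS) {
                layers.pollFirst(); // evict expired layers
            }
            boolean seen = layers.stream().anyMatch(l -> l.pids.contains(producerId));
            if (layers.isEmpty() || nowMs - layers.peekLast().createdAt >= LAYER_MS) {
                layers.addLast(new Layer(nowMs)); // start a new one-minute layer
            }
            layers.peekLast().pids.add(producerId);
            return !seen;
        }
    }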



On Fri, Aug 18, 2023 at 3:03 PM Claude Warren  wrote:

> Sorry for taking so long to get back to you, somehow I missed your message.
>
> I am not sure how this will work when we have different producer-id-rate
> > for different KafkaPrincipal as proposed in the KIP.
> > For example `userA` had producer-id-rate of 1000 per hour while `user2`
> has
> > a quota of 100 producer ids per hour. How will we configure the max
> entries
> > for the Shape?
> >
>
> I am not certain I have a full understanding of your network.  However, I
> am assuming that you want:
>
>- To ensure that all produced ids are tracked for 1 hour regardless of
>whether they were produced by userA or userB.
>- To use a sliding window with 1 minute resolution.
>
>
> There is a tradeoff in the Layered Bloom filter -- larger max entries (N)
> or greater depth.
>
> So the simplest calculation would be 1100 messages per hour / 60 minutes
> per hour = 18.3, let's round that to 20.
> With an N=20 if more than 20 ids are produced in a minute a second filter
> will be created to accept all those over 20.
> Let's assume that the first filter was created at time 0:00:00  and the
> 21st id comes in at 0:00:45.  When the first insert after 1:00:59 occurs
> (one hour after start + window time) the first filter will be removed.
> When the first insert after 1:01:44 occurs the filter created at 0:00:45
> will be removed.
>
> So if you have a period of high usage the number of filters (depth of the
> layers) increases, as the usage decreases, the numbers go back to expected
> depths.  You could set the N to a much larger number and each filter would
> handle more ids before an extra layer was added.  However, if they are
> vastly too big then there will be significant wasted space.
>
> The only thing that comes to my mind to maintain this desired behavior in
> > the KIP is to NOT hash PID with KafkaPrincipal and keep a
> > Map
> > then each one of these bloom filters is controlled with
> > `Shape(, 0.1)`.
> >
>
> Do you need to keep a list for each principal?  are the PID's supposed to
> be globally unique?  If the question you are asking is has principal_1 seen
> pid_2 then hashing principal_1 and pid_2 together and creating a bloom
> filter will tell you using one LayeredBloomFilter.  If you also need to
> ask: "has anybody seen pid_2?", then there are some other solutions.   You
> solution will work and may be appropriate in some cases where there is a
> wide range of principal message rates.  But in that case I would probably
> still use the principal+pid solution and just split the filters by
> estimated size so that all the ones that need a large filter go into one
> system, and the smaller ones go into another. I do note that the hurst
> calculator [1] shows that for (1000, 0.1) you need  599 bytes and 3 hash
> functions.  (100,0.1) you need 60 bytes and 3 hash functions, for (1100,
> 0.1) you need 659 bytes and 3 hash functions.  I would probably pick 704
> bytes and 3 hash functions which gives you (1176, 0.1).  I would pick this
> because 704 divides evenly into 64bit long blocks that are used internally
> for the SimpleBloomFilter so there is no wasted space.
>
> Maybe am missing something here but I can't find anything in the
> > `LayerManager` code that point to how often will the eviction function
> > runs. Do you mean that the eviction function runs every m

[jira] [Resolved] (KAFKA-15412) Reading an unknown version of quorum-state-file should trigger an error

2023-08-30 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15412.
---
Fix Version/s: 3.7.0
   Resolution: Fixed

> Reading an unknown version of quorum-state-file should trigger an error
> ---
>
> Key: KAFKA-15412
> URL: https://issues.apache.org/jira/browse/KAFKA-15412
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: John Mannooparambil
>Priority: Minor
> Fix For: 3.7.0
>
>
> Reading an unknown version of quorum-state-file should trigger an error. 
> Currently the only known version is 0. Reading any other version should cause 
> an error. 
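
A minimal sketch of the intended check (the class, method, and constant names are made
up for illustration):

    public class QuorumStateVersionCheckSketch {
        // Version 0 is currently the only known quorum-state-file version.
        private static final short HIGHEST_SUPPORTED_VERSION = 0;

        static void validate(short version) {
            if (version < 0 || version > HIGHEST_SUPPORTED_VERSION) {
                throw new IllegalStateException("Unknown quorum state file version " + version);
            }
        }
    }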



--
This message was sent by Atlassian Jira
(v8.20.10#820010)