[jira] [Assigned] (KAFKA-15156) Update cipherInformation correctly in DefaultChannelMetadataRegistry

2024-04-28 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15156:


Assignee: (was: Walter Hernandez)

> Update cipherInformation correctly in DefaultChannelMetadataRegistry
> 
>
> Key: KAFKA-15156
> URL: https://issues.apache.org/jira/browse/KAFKA-15156
> Project: Kafka
>  Issue Type: Bug
>Reporter: Divij Vaidya
>Priority: Minor
>  Labels: newbie
>
> At 
> [https://github.com/apache/kafka/blob/4a61b48d3dca81e28b57f10af6052f36c50a05e3/clients/src/main/java/org/apache/kafka/common/network/DefaultChannelMetadataRegistry.java#L25]
>  
> we do not end up assigning the new value of cipherInformation to the member 
> variable. 
> The code there should be the following, so that the cipher information is 
> actually updated:
> {noformat}
> if (cipherInformation == null) {
>     throw new IllegalArgumentException("cipherInformation cannot be null");
> }
> this.cipherInformation = cipherInformation;{noformat}
>  
>  
> While this class is only used in tests, we should still fix this. It's a 
> minor bug.
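For illustration, here is a minimal, self-contained sketch of the proposed fix. Only the null check and the unconditional assignment mirror the suggested change; the CipherInformation stand-in below is hypothetical and simplified, and the real Kafka types differ.

```java
// Simplified, stand-alone sketch of the proposed fix (not the actual Kafka code).
// The CipherInformation stand-in is hypothetical.
class CipherInformation {
    private final String cipher;

    CipherInformation(String cipher) {
        this.cipher = cipher;
    }

    String cipher() {
        return cipher;
    }
}

class DefaultChannelMetadataRegistry {
    private CipherInformation cipherInformation;

    void registerCipherInformation(final CipherInformation cipherInformation) {
        if (cipherInformation == null) {
            throw new IllegalArgumentException("cipherInformation cannot be null");
        }
        // The missing line: without this assignment, the member variable
        // silently keeps its previous value.
        this.cipherInformation = cipherInformation;
    }

    CipherInformation cipherInformation() {
        return this.cipherInformation;
    }
}
```

The point of the fix is that registering new cipher information must always overwrite the stored value, not just validate the argument.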



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15749) KRaft support in KafkaMetricReporterClusterIdTest

2024-04-24 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840439#comment-17840439
 ] 

Walter Hernandez commented on KAFKA-15749:
--

PR is officially stale: [https://github.com/apache/kafka/pull/15181]

[~anishlukk123] can I assign this to you?

Please review their PR, as well as the resolved Jira tickets with "KRaft 
support in" in their names, whose PRs serve as examples.

> KRaft support in KafkaMetricReporterClusterIdTest
> -
>
> Key: KAFKA-15749
> URL: https://issues.apache.org/jira/browse/KAFKA-15749
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Chirag Wadhwa
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in KafkaMetricReporterClusterIdTest in 
> core/src/test/scala/unit/kafka/server/KafkaMetricReporterClusterIdTest.scala 
> need to be updated to support KRaft
> 96 : def testClusterIdPresent(): Unit = {
> Scanned 119 lines. Found 0 KRaft tests out of 1 tests
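For orientation, the conversion pattern used by the resolved "KRaft support in" tickets is to run each test once per quorum mode. The Kafka tests themselves are in Scala and use JUnit parameterization; the stand-alone Java sketch below only illustrates the run-once-per-quorum idea, and every name in it is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical helper: runs the same test body once per quorum mode,
// mirroring how the test suites were parameterized over ZooKeeper and KRaft.
class QuorumParameterization {
    static final List<String> QUORUMS = List.of("zk", "kraft");

    static List<String> runForEachQuorum(Consumer<String> testBody) {
        List<String> executed = new ArrayList<>();
        for (String quorum : QUORUMS) {
            testBody.accept(quorum);  // the test body receives the quorum mode
            executed.add(quorum);
        }
        return executed;
    }
}
```

In the actual Scala suites, the equivalent is a parameterized test that takes a `quorum` argument so each case covers both cluster modes.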





[jira] [Assigned] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16567:


Assignee: (was: Walter Hernandez)

> Add New Stream Metrics based on KIP-869
> ---
>
> Key: KAFKA-16567
> URL: https://issues.apache.org/jira/browse/KAFKA-16567
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Walter Hernandez
>Priority: Blocker
>  Labels: kip
> Fix For: 4.0.0
>
>
> Add the following metrics to the state updater:
>  * restoring-active-tasks: count
>  * restoring-standby-tasks: count
>  * paused-active-tasks: count
>  * paused-standby-tasks: count
>  * idle-ratio: percentage
>  * restore-ratio: percentage
>  * checkpoint-ratio: percentage
>  * restore-records-total: count
>  * restore-records-rate: rate
>  * restore-call-rate: rate





[jira] [Assigned] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16336:


Assignee: (was: Walter Hernandez)

> Remove Deprecated metric standby-process-ratio
> --
>
> Key: KAFKA-16336
> URL: https://issues.apache.org/jira/browse/KAFKA-16336
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Metric "standby-process-ratio" was deprecated in 3.5 release via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility





[jira] [Assigned] (KAFKA-15156) Update cipherInformation correctly in DefaultChannelMetadataRegistry

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15156:


Assignee: Walter Hernandez

> Update cipherInformation correctly in DefaultChannelMetadataRegistry
> 
>
> Key: KAFKA-15156
> URL: https://issues.apache.org/jira/browse/KAFKA-15156
> Project: Kafka
>  Issue Type: Bug
>Reporter: Divij Vaidya
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: newbie
>
> At 
> [https://github.com/apache/kafka/blob/4a61b48d3dca81e28b57f10af6052f36c50a05e3/clients/src/main/java/org/apache/kafka/common/network/DefaultChannelMetadataRegistry.java#L25]
>  
> we do not end up assigning the new value of cipherInformation to the member 
> variable. 
> The code there should be the following, so that the cipher information is 
> actually updated:
> {noformat}
> if (cipherInformation == null) {
>     throw new IllegalArgumentException("cipherInformation cannot be null");
> }
> this.cipherInformation = cipherInformation;{noformat}
>  
>  
> While this class is only used in tests, we should still fix this. It's a 
> minor bug.





[jira] [Commented] (KAFKA-2111) Command Line Standardization - Add Help Arguments & List Required Fields

2024-04-22 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839609#comment-17839609
 ] 

Walter Hernandez commented on KAFKA-2111:
-

*[mimaison|https://github.com/mimaison]* commented [on Feb 
20|https://github.com/apache/kafka/pull/3605#issuecomment-1954006952]
|This PR is very old and the tools are being rewritten in Java, so I'll close 
this PR.|

The above comment was left on the linked PR for this ticket; the tools are 
indeed now being rewritten in Java.

In addition, the referenced KIP hasn't seen an update since 2015.

I believe it's safe to close this ticket for cleanup.

> Command Line Standardization - Add Help Arguments & List Required Fields
> 
>
> Key: KAFKA-2111
> URL: https://issues.apache.org/jira/browse/KAFKA-2111
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Matt Warhaftig
>Priority: Minor
>  Labels: newbie
>
> KIP-14 is the standardization of tool command line arguments.  As an offshoot 
> of that proposal there are standardization changes that don't need to be part 
> of the KIP since they are less invasive.  They are:
> - Properly format argument descriptions (into sentences) and add any missing 
> "REQUIRED" notes.
> - Add 'help' argument to any top-level tool scripts that were missing it.
> This JIRA is for tracking them.





[jira] [Assigned] (KAFKA-15715) KRaft support in UpdateFeaturesTest

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15715:


Assignee: Walter Hernandez

> KRaft support in UpdateFeaturesTest
> ---
>
> Key: KAFKA-15715
> URL: https://issues.apache.org/jira/browse/KAFKA-15715
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in UpdateFeaturesTest in 
> core/src/test/scala/unit/kafka/server/UpdateFeaturesTest.scala need to be 
> updated to support KRaft
> 176 : def testShouldFailRequestIfNotController(): Unit = {
> 210 : def testShouldFailRequestWhenDowngradeFlagIsNotSetDuringDowngrade(): 
> Unit = {
> 223 : def 
> testShouldFailRequestWhenDowngradeToHigherVersionLevelIsAttempted(): Unit = {
> 236 : def 
> testShouldFailRequestInServerWhenDowngradeFlagIsNotSetDuringDeletion(): Unit 
> = {
> 280 : def testShouldFailRequestDuringDeletionOfNonExistingFeature(): Unit = {
> 292 : def testShouldFailRequestWhenUpgradingToSameVersionLevel(): Unit = {
> 354 : def 
> testShouldFailRequestDuringBrokerMaxVersionLevelIncompatibilityForExistingFinalizedFeature():
>  Unit = {
> 368 : def 
> testShouldFailRequestDuringBrokerMaxVersionLevelIncompatibilityWithNoExistingFinalizedFeature():
>  Unit = {
> 381 : def testSuccessfulFeatureUpgradeAndWithNoExistingFinalizedFeatures(): 
> Unit = {
> 417 : def testSuccessfulFeatureUpgradeAndDowngrade(): Unit = {
> 459 : def testPartialSuccessDuringValidFeatureUpgradeAndInvalidDowngrade(): 
> Unit = {
> 509 : def testPartialSuccessDuringInvalidFeatureUpgradeAndValidDowngrade(): 
> Unit = {
> Scanned 567 lines. Found 0 KRaft tests out of 12 tests





[jira] [Resolved] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-15736.
--
Resolution: Done

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests





[jira] [Commented] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-22 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839604#comment-17839604
 ] 

Walter Hernandez commented on KAFKA-15736:
--

KRaft support appears to already be in place:

[https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala]

Going to close this ticket for cleanup.

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests





[jira] [Updated] (KAFKA-6689) Kafka not release .deleted file.

2024-04-22 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez updated KAFKA-6689:

Labels:   (was: newbie)

> Kafka not release .deleted file.
> 
>
> Key: KAFKA-6689
> URL: https://issues.apache.org/jira/browse/KAFKA-6689
> Project: Kafka
>  Issue Type: Bug
>  Components: config, controller, log
>Affects Versions: 0.10.1.1
> Environment: 3 Kafka Broker running on CentOS6.9(rungi3 VMs) 
>Reporter: A
>Priority: Critical
> Fix For: 0.10.1.1
>
> Attachments: Screenshot 2020-12-06 at 12.53.20 PM.png
>
>
> After Kafka cleaned up log .timeindex / .index files based on topic 
> retention, I can still see via lsof a lot of .index.deleted and 
> .timeindex.deleted files.
> We have 3 brokers on 3 VMs; it happens on only 2 of the brokers.
> [broker-01 ~]$ lsof -p 24324 | grep deleted | wc -l
> 28
> [broker-02 ~]$ lsof -p 12131 | grep deleted | wc -l
> 14599
> [broker-03 ~]$ lsof -p 3349 | grep deleted | wc -l
> 14523
>  
> Configuration on all 3 brokers is the same (roll log every hour, retention 
> time 11 hours).
>  * INFO KafkaConfig values:
>  advertised.host.name = null
>  advertised.listeners = PLAINTEXT://Broker-02:9092
>  advertised.port = null
>  authorizer.class.name =
>  auto.create.topics.enable = true
>  auto.leader.rebalance.enable = true
>  background.threads = 10
>  broker.id = 2
>  broker.id.generation.enable = true
>  broker.rack = null
>  compression.type = producer
>  connections.max.idle.ms = 60
>  controlled.shutdown.enable = true
>  controlled.shutdown.max.retries = 3
>  controlled.shutdown.retry.backoff.ms = 5000
>  controller.socket.timeout.ms = 3
>  default.replication.factor = 3
>  delete.topic.enable = true
>  fetch.purgatory.purge.interval.requests = 1000
>  group.max.session.timeout.ms = 30
>  group.min.session.timeout.ms = 6000
>  host.name =
>  inter.broker.protocol.version = 0.10.1-IV2
>  leader.imbalance.check.interval.seconds = 300
>  leader.imbalance.per.broker.percentage = 10
>  listeners = null
>  log.cleaner.backoff.ms = 15000
>  log.cleaner.dedupe.buffer.size = 134217728
>  log.cleaner.delete.retention.ms = 8640
>  log.cleaner.enable = true
>  log.cleaner.io.buffer.load.factor = 0.9
>  log.cleaner.io.buffer.size = 524288
>  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  log.cleaner.min.cleanable.ratio = 0.5
>  log.cleaner.min.compaction.lag.ms = 0
>  log.cleaner.threads = 1
>  log.cleanup.policy = [delete]
>  log.dir = /tmp/kafka-logs
>  log.dirs = /data/appdata/kafka/data
>  log.flush.interval.messages = 9223372036854775807
>  log.flush.interval.ms = null
>  log.flush.offset.checkpoint.interval.ms = 6
>  log.flush.scheduler.interval.ms = 9223372036854775807
>  log.index.interval.bytes = 4096
>  log.index.size.max.bytes = 10485760
>  log.message.format.version = 0.10.1-IV2
>  log.message.timestamp.difference.max.ms = 9223372036854775807
>  log.message.timestamp.type = CreateTime
>  log.preallocate = false
>  log.retention.bytes = -1
>  log.retention.check.interval.ms = 30
>  log.retention.hours = 11
>  log.retention.minutes = 660
>  log.retention.ms = 3960
>  log.roll.hours = 1
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  max.connections.per.ip = 2147483647
>  max.connections.per.ip.overrides =
>  message.max.bytes = 112
>  metric.reporters = []
>  metrics.num.samples = 2
>  metrics.sample.window.ms = 3
>  min.insync.replicas = 2
>  num.io.threads = 16
>  num.network.threads = 16
>  num.partitions = 10
>  num.recovery.threads.per.data.dir = 3
>  num.replica.fetchers = 1
>  offset.metadata.max.bytes = 4096
>  offsets.commit.required.acks = -1
>  offsets.commit.timeout.ms = 5000
>  offsets.load.buffer.size = 5242880
>  offsets.retention.check.interval.ms = 60
>  offsets.retention.minutes = 1440
>  offsets.topic.compression.codec = 0
>  offsets.topic.num.partitions = 50
>  offsets.topic.replication.factor = 3
>  offsets.topic.segment.bytes = 104857600
>  port = 9092
>  principal.builder.class = class 
> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>  producer.purgatory.purge.interval.requests = 1000
>  queued.max.requests = 3
>  quota.consumer.default = 9223372036854775807
>  quota.producer.default = 9223372036854775807
>  quota.window.num = 11
>  quota.window.size.seconds = 1
>  replica.fetch.backoff.ms = 1000
>  replica.fetch.max.bytes = 1048576
>  replica.fetch.min.bytes = 1
>  replica.fetch.response.max.bytes = 10485760
>  replica.fetch.wait.max.ms = 500
>  replica.high.watermark.checkpoint.interval.ms = 5000
>  replica.lag.time.max.ms = 1
>  replica.socket.receive.buffer.bytes = 65536
>  replica.socket.timeout.ms = 3
>  

[jira] [Commented] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-20 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839225#comment-17839225
 ] 

Walter Hernandez commented on KAFKA-16567:
--

I noticed that KIP-869 not only introduced these metrics, but also referred to 
the deprecation of another metric (KAFKA-16336).

Since KAFKA-16336 was a blocker, I assumed that this fell into that category, but 
I see that KAFKA-16336 is a subtask of a larger ticket (and is thus a blocker 
for the main task?).

What would be a more appropriate Priority for this?

> Add New Stream Metrics based on KIP-869
> ---
>
> Key: KAFKA-16567
> URL: https://issues.apache.org/jira/browse/KAFKA-16567
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Walter Hernandez
>Assignee: Walter Hernandez
>Priority: Blocker
>  Labels: kip
> Fix For: 4.0.0
>
>
> Add the following metrics to the state updater:
>  * restoring-active-tasks: count
>  * restoring-standby-tasks: count
>  * paused-active-tasks: count
>  * paused-standby-tasks: count
>  * idle-ratio: percentage
>  * restore-ratio: percentage
>  * checkpoint-ratio: percentage
>  * restore-records-total: count
>  * restore-records-rate: rate
>  * restore-call-rate: rate





[jira] [Comment Edited] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-20 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839225#comment-17839225
 ] 

Walter Hernandez edited comment on KAFKA-16567 at 4/20/24 11:30 AM:


I noticed that KIP-869 not only introduced these metrics, but also referred to 
the deprecation of another metric (KAFKA-16336).

Since KAFKA-16336 was a blocker, I assumed that this fell into that category, but 
I see that KAFKA-16336 is a subtask of a larger ticket (and is thus a blocker 
for the main task?).

What would be a more appropriate Priority for this?


was (Author: JIRAUSER305029):
I noticed that KIP-869 not only introduced these metrics, but also referred to 
the deprecation of another metric (KAFKA-16336).

Since KAFKA-16336 was a blocker, I assumed that this fell in that category, but 
I see that KAFKA-16336 is a subtasks of a larger ticket (and thus is a blocker 
for the main task?).

What would be a moreappropriate Priority for this?

> Add New Stream Metrics based on KIP-869
> ---
>
> Key: KAFKA-16567
> URL: https://issues.apache.org/jira/browse/KAFKA-16567
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Walter Hernandez
>Assignee: Walter Hernandez
>Priority: Blocker
>  Labels: kip
> Fix For: 4.0.0
>
>
> Add the following metrics to the state updater:
>  * restoring-active-tasks: count
>  * restoring-standby-tasks: count
>  * paused-active-tasks: count
>  * paused-standby-tasks: count
>  * idle-ratio: percentage
>  * restore-ratio: percentage
>  * checkpoint-ratio: percentage
>  * restore-records-total: count
>  * restore-records-rate: rate
>  * restore-call-rate: rate





[jira] [Assigned] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16567:


Assignee: Walter Hernandez

> Add New Stream Metrics based on KIP-869
> ---
>
> Key: KAFKA-16567
> URL: https://issues.apache.org/jira/browse/KAFKA-16567
> Project: Kafka
>  Issue Type: Task
>Reporter: Walter Hernandez
>Assignee: Walter Hernandez
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Add the following metrics to the state updater:
>  * restoring-active-tasks: count
>  * restoring-standby-tasks: count
>  * paused-active-tasks: count
>  * paused-standby-tasks: count
>  * idle-ratio: percentage
>  * restore-ratio: percentage
>  * checkpoint-ratio: percentage
>  * restore-records-total: count
>  * restore-records-rate: rate
>  * restore-call-rate: rate





[jira] [Created] (KAFKA-16567) Add New Stream Metrics based on KIP-869

2024-04-16 Thread Walter Hernandez (Jira)
Walter Hernandez created KAFKA-16567:


 Summary: Add New Stream Metrics based on KIP-869
 Key: KAFKA-16567
 URL: https://issues.apache.org/jira/browse/KAFKA-16567
 Project: Kafka
  Issue Type: Task
Reporter: Walter Hernandez
 Fix For: 4.0.0


Add the following metrics to the state updater:
 * restoring-active-tasks: count
 * restoring-standby-tasks: count
 * paused-active-tasks: count
 * paused-standby-tasks: count
 * idle-ratio: percentage
 * restore-ratio: percentage
 * checkpoint-ratio: percentage
 * restore-records-total: count
 * restore-records-rate: rate
 * restore-call-rate: rate





[jira] [Assigned] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16336:


Assignee: Walter Hernandez

> Remove Deprecated metric standby-process-ratio
> --
>
> Key: KAFKA-16336
> URL: https://issues.apache.org/jira/browse/KAFKA-16336
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Walter Hernandez
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Metric "standby-process-ratio" was deprecated in 3.5 release via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility





[jira] [Assigned] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16336:


Assignee: (was: Walter Hernandez)

> Remove Deprecated metric standby-process-ratio
> --
>
> Key: KAFKA-16336
> URL: https://issues.apache.org/jira/browse/KAFKA-16336
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Metric "standby-process-ratio" was deprecated in 3.5 release via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility





[jira] [Assigned] (KAFKA-16336) Remove Deprecated metric standby-process-ratio

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16336:


Assignee: Walter Hernandez

> Remove Deprecated metric standby-process-ratio
> --
>
> Key: KAFKA-16336
> URL: https://issues.apache.org/jira/browse/KAFKA-16336
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Walter Hernandez
>Priority: Blocker
> Fix For: 4.0.0
>
>
> Metric "standby-process-ratio" was deprecated in 3.5 release via 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-869%3A+Improve+Streams+State+Restoration+Visibility





[jira] [Comment Edited] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-04-16 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837762#comment-17837762
 ] 

Walter Hernandez edited comment on KAFKA-16263 at 4/16/24 3:25 PM:
---

Are you referring to:
 * 
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]}}
 * 
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]}}
 * 
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]}}
 ** An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
 * 
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]}}
 ** Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]
 thread abruptly terminates due to an uncaught exception.

 

There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 


was (Author: JIRAUSER305029):
Are you referring to:
 * 
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]}}
 * 
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]}}
 * 
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]}}
 ** An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
 * 
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]}}
 ** Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]

 

There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 

> Add Kafka Streams docs about available listeners/callback
> -
>
> Key: KAFKA-16263
> URL: https://issues.apache.org/jira/browse/KAFKA-16263
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Kuan Po Tseng
>Priority: Minor
>  Labels: beginner, newbie
>
> Kafka Streams allows to register all kind of listeners and callback (eg, 
> uncaught-exception-handler, restore-listeners, etc) but those are not in the 
> documentation.
> A good place might be 
> [https://kafka.apache.org/documentation/streams/developer-guide/running-app.html]
>  





[jira] [Commented] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-04-16 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837762#comment-17837762
 ] 

Walter Hernandez commented on KAFKA-16263:
--

Are you referring to:
 * 
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]}}
 * 
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]}}
 * 
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]}}
 ** An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
 * 
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]}}
 ** Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]

 

There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 

> Add Kafka Streams docs about available listeners/callback
> -
>
> Key: KAFKA-16263
> URL: https://issues.apache.org/jira/browse/KAFKA-16263
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Kuan Po Tseng
>Priority: Minor
>  Labels: beginner, newbie
>
> Kafka Streams allows to register all kind of listeners and callback (eg, 
> uncaught-exception-handler, restore-listeners, etc) but those are not in the 
> documentation.
> A good place might be 
> [https://kafka.apache.org/documentation/streams/developer-guide/running-app.html]
>  





[jira] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-04-16 Thread Walter Hernandez (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-16263 ]


Walter Hernandez deleted comment on KAFKA-16263:
--

was (Author: JIRAUSER305029):
Are you referring to:
{{void}}
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]([StateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateRestoreListener.html]
 globalStateRestoreListener)}}
Set the listener which is triggered whenever a 
[{{StateStore}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateStore.html]
 is being restored in order to resume processing.
{{void}}
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]([StandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StandbyUpdateListener.html]
 standbyListener)}}
Set the listener which is triggered whenever a standby task is updated
{{void}}
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]([KafkaStreams.StateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 listener)}}
An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)]([Thread.UncaughtExceptionHandler|https://docs.oracle.com/en/java/javase/18/docs/api/java.base/java/lang/Thread.UncaughtExceptionHandler.html]
 uncaughtExceptionHandler)}}
Deprecated.
Since 2.8.0.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]([StreamsUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/errors/StreamsUncaughtExceptionHandler.html]
 userStreamsUncaughtExceptionHandler)}}
Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]

There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 

> Add Kafka Streams docs about available listeners/callback
> -
>
> Key: KAFKA-16263
> URL: https://issues.apache.org/jira/browse/KAFKA-16263
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Kuan Po Tseng
>Priority: Minor
>  Labels: beginner, newbie
>
> Kafka Streams allows to register all kind of listeners and callback (eg, 
> uncaught-exception-handler, restore-listeners, etc) but those are not in the 
> documentation.
> A good place might be 
> [https://kafka.apache.org/documentation/streams/developer-guide/running-app.html]
>  





[jira] [Commented] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-04-16 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837760#comment-17837760
 ] 

Walter Hernandez commented on KAFKA-16263:
--

Are you referring to:

 
{{void}}
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]([StateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateRestoreListener.html]
 globalStateRestoreListener)}}
Set the listener which is triggered whenever a 
[{{StateStore}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateStore.html]
 is being restored in order to resume processing.
{{void}}
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]([StandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StandbyUpdateListener.html]
 standbyListener)}}
Set the listener which is triggered whenever a standby task is updated
{{void}}
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]([KafkaStreams.StateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 listener)}}
An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]([StreamsUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/errors/StreamsUncaughtExceptionHandler.html]
 userStreamsUncaughtExceptionHandler)}}
Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]
There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 

> Add Kafka Streams docs about available listeners/callback
> -
>
> Key: KAFKA-16263
> URL: https://issues.apache.org/jira/browse/KAFKA-16263
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Kuan Po Tseng
>Priority: Minor
>  Labels: beginner, newbie
>
> Kafka Streams allows to register all kind of listeners and callback (eg, 
> uncaught-exception-handler, restore-listeners, etc) but those are not in the 
> documentation.
> A good place might be 
> [https://kafka.apache.org/documentation/streams/developer-guide/running-app.html]
>  





[jira] [Comment Edited] (KAFKA-16263) Add Kafka Streams docs about available listeners/callback

2024-04-16 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837760#comment-17837760
 ] 

Walter Hernandez edited comment on KAFKA-16263 at 4/16/24 3:22 PM:
---

Are you referring to:
{{void}}
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]([StateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateRestoreListener.html]
 globalStateRestoreListener)}}
Set the listener which is triggered whenever a 
[{{StateStore}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateStore.html]
 is being restored in order to resume processing.
{{void}}
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]([StandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StandbyUpdateListener.html]
 standbyListener)}}
Set the listener which is triggered whenever a standby task is updated
{{void}}
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]([KafkaStreams.StateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 listener)}}
An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)]([Thread.UncaughtExceptionHandler|https://docs.oracle.com/en/java/javase/18/docs/api/java.base/java/lang/Thread.UncaughtExceptionHandler.html]
 uncaughtExceptionHandler)}}
Deprecated.
Since 2.8.0.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]([StreamsUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/errors/StreamsUncaughtExceptionHandler.html]
 userStreamsUncaughtExceptionHandler)}}
Set the handler invoked when an internal 
[{{stream}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/StreamsConfig.html#NUM_STREAM_THREADS_CONFIG]

There's also this really nice tutorial on the uncaught-exception-handler 
feature:
[https://github.com/bbejeck/streams-uncaught-exceptions-workshop]

 


was (Author: JIRAUSER305029):
Are you referring to:

 
{{void}}
{{[setGlobalStateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setGlobalStateRestoreListener(org.apache.kafka.streams.processor.StateRestoreListener)]([StateRestoreListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateRestoreListener.html]
 globalStateRestoreListener)}}
Set the listener which is triggered whenever a 
[{{StateStore}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StateStore.html]
 is being restored in order to resume processing.
{{void}}
{{[setStandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStandbyUpdateListener(org.apache.kafka.streams.processor.StandbyUpdateListener)]([StandbyUpdateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/processor/StandbyUpdateListener.html]
 standbyListener)}}
Set the listener which is triggered whenever a standby task is updated
{{void}}
{{[setStateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setStateListener(org.apache.kafka.streams.KafkaStreams.StateListener)]([KafkaStreams.StateListener|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 listener)}}
An app can set a single 
[{{KafkaStreams.StateListener}}|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.StateListener.html]
 so that the app is notified when state changes.
{{void}}
{{[setUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/KafkaStreams.html#setUncaughtExceptionHandler(org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler)]([StreamsUncaughtExceptionHandler|https://kafka.apache.org/37/javadoc/org/apache/kafka/streams/errors/StreamsUncaughtExceptionHandler.html]
 userStreamsUncaughtExceptionHandler)}}
Set the handler invoked when an internal 

[jira] [Commented] (KAFKA-15341) Enabling TS for a topic during rolling restart causes problems

2024-04-16 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837750#comment-17837750
 ] 

Walter Hernandez commented on KAFKA-15341:
--

[~ckamal] Great work on your merged PR! With it merged, is this issue not 
resolved?

> Enabling TS for a topic during rolling restart causes problems
> --
>
> Key: KAFKA-15341
> URL: https://issues.apache.org/jira/browse/KAFKA-15341
> Project: Kafka
>  Issue Type: Bug
>Reporter: Divij Vaidya
>Priority: Major
>  Labels: KIP-405
> Fix For: 3.8.0
>
>
> When we are in a rolling restart to enable TS at system level, some brokers 
> have TS enabled on them and some don't. We send an alter config call to 
> enable TS for a topic, it hits a broker which has TS enabled, this broker 
> forwards it to the controller and controller will send the config update to 
> all brokers. When another broker which doesn't have TS enabled (because it 
> hasn't undergone the restart yet) gets this config change, it "should" fail 
> to apply it. But failing at that point is too late, because alterConfig has 
> already succeeded: controller->broker config propagation is done asynchronously.
> With this JIRA, we want to have controller check if TS is enabled on all 
> brokers before applying alter config to turn on TS for a topic.
> Context: https://github.com/apache/kafka/pull/14176#discussion_r1291265129
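The controller-side guard proposed above can be sketched roughly as follows. This is a minimal illustration only; the function and data-structure names are assumptions, not the actual KRaft controller API:

```python
def can_enable_tiered_storage(broker_features):
    """Return True only if every registered broker reports tiered-storage support.

    `broker_features` maps broker id -> set of supported feature names
    (hypothetical shape; the real controller tracks this via broker registrations).
    """
    return all("tiered.storage" in features for features in broker_features.values())


def alter_topic_config(broker_features, topic_configs, topic, enable_ts):
    # Validate cluster-wide support *before* persisting the config change,
    # instead of letting a lagging broker fail asynchronously later.
    if enable_ts and not can_enable_tiered_storage(broker_features):
        raise ValueError("tiered storage is not enabled on all brokers yet")
    topic_configs[topic] = {"remote.storage.enable": enable_ts}
```

The key design point is that validation happens synchronously in the alterConfig path, so the caller gets the error instead of a silent async failure.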





[jira] [Assigned] (KAFKA-16417) When initializeResources throws an exception in TopicBasedRemoteLogMetadataManager.scala, initializationFailed needs to be set to true

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16417:


Assignee: zhaobo

> When initializeResources throws an exception in 
> TopicBasedRemoteLogMetadataManager.scala, initializationFailed needs to be 
> set to true
> --
>
> Key: KAFKA-16417
> URL: https://issues.apache.org/jira/browse/KAFKA-16417
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: zhaobo
>Assignee: zhaobo
>Priority: Major
>
> If initializing the producer/consumer fails, the broker cannot actually read 
> from and write to the remote storage system normally. We need to set 
> initializationFailed = true to surface this event.





[jira] [Assigned] (KAFKA-15615) Improve handling of fetching during metadata updates

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15615:


Assignee: appchemist

> Improve handling of fetching during metadata updates
> 
>
> Key: KAFKA-15615
> URL: https://issues.apache.org/jira/browse/KAFKA-15615
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: appchemist
>Priority: Major
>  Labels: consumer-threading-refactor, fetcher
> Fix For: 3.8.0
>
>
> [During a review of the new 
> fetcher|https://github.com/apache/kafka/pull/14406#discussion_r193941], 
> [~junrao] found what appears to be an opportunity for optimization.
> When a fetch response receives an error about partition leadership, fencing, 
> etc. a metadata refresh is triggered. However, it takes time for that refresh 
> to occur, and in the interim, it appears that the consumer will blindly 
> attempt to fetch data for the partition again, in kind of a "definition of 
> insanity" type of way. Ideally, the consumer would have a way to temporarily 
> ignore those partitions, in a way somewhat like the "pausing" approach so 
> that they are skipped until the metadata refresh response is fully processed.
> This affects both the existing KafkaConsumer and the new 
> PrototypeAsyncConsumer.
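The "temporarily ignore those partitions" idea described above can be sketched as a small tracker. Class and method names here are illustrative assumptions, not the consumer's real internal API:

```python
class PendingMetadataTracker:
    """Track partitions whose last fetch failed with a metadata-related error,
    and skip them until the next metadata refresh completes."""

    def __init__(self):
        self._awaiting_refresh = set()

    def on_fetch_error(self, partition):
        # e.g. NOT_LEADER_OR_FOLLOWER or FENCED_LEADER_EPOCH: a metadata
        # refresh has been triggered, so stop fetching this partition for now
        self._awaiting_refresh.add(partition)

    def on_metadata_refreshed(self):
        # Once the refresh response is fully processed, everything is
        # fetchable again
        self._awaiting_refresh.clear()

    def fetchable(self, assigned):
        # The fetcher only builds requests for partitions not awaiting refresh
        return [p for p in assigned if p not in self._awaiting_refresh]
```

This mirrors the "pausing" approach mentioned in the ticket, but scoped to the interval between the error and the completed refresh.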





[jira] [Assigned] (KAFKA-16554) Online downgrade triggering and group type conversion

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16554:


Assignee: Dongnuo Lyu

> Online downgrade triggering and group type conversion
> -
>
> Key: KAFKA-16554
> URL: https://issues.apache.org/jira/browse/KAFKA-16554
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
>






[jira] [Assigned] (KAFKA-16561) Disable `allow.auto.create.topics` in MirrorMaker2 Consumer Config

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16561:


Assignee: Yangkun Ai

> Disable `allow.auto.create.topics` in MirrorMaker2 Consumer Config
> --
>
> Key: KAFKA-16561
> URL: https://issues.apache.org/jira/browse/KAFKA-16561
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Yangkun Ai
>Assignee: Yangkun Ai
>Priority: Major
>
> While using MirrorMaker 2.0 (MM2), I noticed that the consumer used by the 
> connector does not disable the ALLOW_AUTO_CREATE_TOPICS_CONFIG option. This 
> leads to the possibility of a topic being immediately recreated if I attempt 
> to delete it from the source cluster while MirrorMaker 2.0 is running. I 
> believe that automatic creation of new topics in this scenario is 
> unreasonable, hence I think it is necessary to explicitly disable this option 
> in the code.





[jira] [Assigned] (KAFKA-16262) Add IQv2 to Kafka Streams documentation

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16262:


Assignee: Suprem Vanam

> Add IQv2 to Kafka Streams documentation
> ---
>
> Key: KAFKA-16262
> URL: https://issues.apache.org/jira/browse/KAFKA-16262
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Suprem Vanam
>Priority: Minor
>  Labels: beginner, newbie
>
> The new IQv2 API was added many releases ago. While it is still not feature 
> complete, we should add it to the docs 
> ([https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html])
>  to make users aware of the new API so they can start to try it out, report 
> issues, and provide feedback / feature requests.
> We might still state that IQv2 is not yet feature complete, but should change 
> the docs in a way that positions it as the "new API", and include code examples.





[jira] [Assigned] (KAFKA-16262) Add IQv2 to Kafka Streams documentation

2024-04-16 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16262:


Assignee: (was: Walter Hernandez)

> Add IQv2 to Kafka Streams documentation
> ---
>
> Key: KAFKA-16262
> URL: https://issues.apache.org/jira/browse/KAFKA-16262
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Priority: Minor
>  Labels: beginner, newbie
>
> The new IQv2 API was added many releases ago. While it is still not feature 
> complete, we should add it to the docs 
> ([https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html])
>  to make users aware of the new API so they can start to try it out, report 
> issues, and provide feedback / feature requests.
> We might still state that IQv2 is not yet feature complete, but should change 
> the docs in a way that positions it as the "new API", and include code examples.





[jira] [Commented] (KAFKA-15224) Automate version change to snapshot

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836954#comment-17836954
 ] 

Walter Hernandez commented on KAFKA-15224:
--

It looks like some great headway was made, but the branches have gone stale 
since.

Any update on this? I ask since I am looking for tickets to pick up, and this is 
part of a bigger improvement (KAFKA-15198).

> Automate version change to snapshot 
> 
>
> Key: KAFKA-15224
> URL: https://issues.apache.org/jira/browse/KAFKA-15224
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Tanay Karmarkar
>Priority: Minor
>
> We require changing to SNAPSHOT version as part of the release process [1]. 
> The specific manual steps are:
> Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
>  * 
>  ** docs/js/templateData.js
>  ** gradle.properties
>  ** kafka-merge-pr.py
>  ** streams/quickstart/java/pom.xml
>  ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
>  ** streams/quickstart/pom.xml
>  ** tests/kafkatest/__init__.py (note: this version name can't follow the 
> -SNAPSHOT convention due to python version naming restrictions, instead
> update it to 0.10.0.1.dev0)
>  ** tests/kafkatest/version.py
> The diff of the changes look like 
> [https://github.com/apache/kafka/commit/484a86feb562f645bdbec74b18f8a28395a686f7#diff-21a0ab11b8bbdab9930ad18d4bca2d943bbdf40d29d68ab8a96f765bd1f9]
>  
>  
> It would be nice if we could run a script to automatically do it. Note that 
> release.py (line 550) already does something similar where it replaces 
> SNAPSHOT with the actual version. We need to do the opposite here. We can 
> repurpose that code from release.py and extract it into a new script to perform 
> this operation.
> [1] [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
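The manual steps listed above can be sketched as a small script. The file list comes from the ticket; the function names and the exact replacement strategy are assumptions, not the real release.py code:

```python
from pathlib import Path

# Files named in the release process above; paths are relative to the repo root.
VERSION_FILES = [
    "docs/js/templateData.js",
    "gradle.properties",
    "kafka-merge-pr.py",
    "streams/quickstart/java/pom.xml",
    "streams/quickstart/java/src/main/resources/archetype-resources/pom.xml",
    "streams/quickstart/pom.xml",
    "tests/kafkatest/version.py",
]


def to_snapshot(text, released, next_version):
    """Replace a released version (e.g. 0.10.0.0) with the next -SNAPSHOT."""
    return text.replace(released, f"{next_version}-SNAPSHOT")


def to_dev(text, released, next_version):
    """Python files cannot use -SNAPSHOT; use PEP 440 style .dev0 instead."""
    return text.replace(released, f"{next_version}.dev0")


def bump(repo_root, released, next_version):
    """Rewrite every version-bearing file in place (hypothetical entry point)."""
    for rel in VERSION_FILES:
        path = Path(repo_root) / rel
        path.write_text(to_snapshot(path.read_text(), released, next_version))
    # The kafkatest package uses the .dev0 convention instead of -SNAPSHOT
    init = Path(repo_root) / "tests/kafkatest/__init__.py"
    init.write_text(to_dev(init.read_text(), released, next_version))
```

As the ticket suggests, the inverse substitution already in release.py could be refactored to share these helpers.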





[jira] [Commented] (KAFKA-15201) When git fails, script goes into a loop

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836953#comment-17836953
 ] 

Walter Hernandez commented on KAFKA-15201:
--

This PR was merged, and others are developing on forks that use this code.

I see that there is a test failure post-merge: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14645/4/pipeline]

Is this something to be alarmed about? If not, it seems like this should be 
resolved.

> When git fails, script goes into a loop
> ---
>
> Key: KAFKA-15201
> URL: https://issues.apache.org/jira/browse/KAFKA-15201
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Owen C.H. Leung
>Priority: Major
>
> When the git push to remote fails (let's say with unauthenticated exception), 
> then the script runs into a loop. It should not retry and fail gracefully 
> instead.
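The fail-gracefully behavior requested above can be sketched as a single-attempt push helper. This is an illustration, not the actual release script; the `runner` parameter is a hypothetical injection point for testing:

```python
import subprocess


def push_once(remote, branch, runner=subprocess.run):
    """Run `git push` exactly once and abort loudly on failure.

    Instead of retrying in a loop (which just repeats e.g. an
    authentication failure), surface stderr and raise.
    """
    result = runner(["git", "push", remote, branch],
                    capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            f"git push to {remote}/{branch} failed: {result.stderr.strip()}")
    return result
```

The calling script can then catch `RuntimeError` once, print the message, and exit non-zero rather than looping.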





[jira] [Assigned] (KAFKA-15201) When git fails, script goes into a loop

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15201:


Assignee: Owen C.H. Leung

> When git fails, script goes into a loop
> ---
>
> Key: KAFKA-15201
> URL: https://issues.apache.org/jira/browse/KAFKA-15201
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Owen C.H. Leung
>Priority: Major
>
> When the git push to remote fails (let's say with unauthenticated exception), 
> then the script runs into a loop. It should not retry and fail gracefully 
> instead.





[jira] [Resolved] (KAFKA-4560) Min / Max Partitions Fetch Records params

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-4560.
-
Resolution: Abandoned

> Min / Max Partitions Fetch Records params
> -
>
> Key: KAFKA-4560
> URL: https://issues.apache.org/jira/browse/KAFKA-4560
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Stephane Maarek
>Priority: Major
>  Labels: features, newbie
>
> There is currently a `max.partition.fetch.bytes` parameter to limit the total 
> size of the fetch call (also a min).
> Sometimes I'd like to control how many records altogether I'm getting at the 
> time and I'd like to see a `max.partition.fetch.records` (also a min).
> If both are specified the first condition that is met would complete the 
> fetch call. 
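The "first condition that is met completes the fetch" semantics can be simulated with a few lines. This is a sketch of the proposed behavior only; `max.partition.fetch.records` does not exist in the consumer, and the always-include-one-record rule mirrors how the byte limit avoids stalling on a large record:

```python
def take_until_limits(record_sizes, max_bytes=None, max_records=None):
    """Return how many records a fetch would include when a byte limit and a
    (proposed) record-count limit are both set: whichever is hit first wins.
    Always include at least one record so a single large record cannot
    stall the consumer."""
    taken, total = 0, 0
    for size in record_sizes:
        if max_records is not None and taken >= max_records:
            break
        if max_bytes is not None and taken > 0 and total + size > max_bytes:
            break
        taken += 1
        total += size
    return taken
```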





[jira] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-14 Thread Walter Hernandez (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-15736 ]


Walter Hernandez deleted comment on KAFKA-15736:
--

was (Author: JIRAUSER305029):
https://github.com/apache/kafka/pull/14295

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests





[jira] [Assigned] (KAFKA-16262) Add IQv2 to Kafka Streams documentation

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16262:


Assignee: Walter Hernandez

> Add IQv2 to Kafka Streams documentation
> ---
>
> Key: KAFKA-16262
> URL: https://issues.apache.org/jira/browse/KAFKA-16262
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: beginner, newbie
>
> The new IQv2 API was added many releases ago. While it is still not feature 
> complete, we should add it to the docs 
> ([https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html])
>  to make users aware of the new API so they can start to try it out, report 
> issues, and provide feedback / feature requests.
> We might still state that IQv2 is not yet feature complete, but we should change 
> the docs in a way that positions it as the "new API", and include code examples.
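A minimal sketch of the kind of example the docs could include, using the IQv2 entry points (`StateQueryRequest.inStore(...)`, `KeyQuery.withKey(...)`, `KafkaStreams#query(...)`); the store name and class name here are illustrative, and the fragment is not runnable on its own without a running Streams application and kafka-streams on the classpath:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.query.KeyQuery;
import org.apache.kafka.streams.query.StateQueryRequest;
import org.apache.kafka.streams.query.StateQueryResult;

public class Iqv2Sketch {
    // Look up a single key in a key-value store named "counts-store"
    // (hypothetical store name) via the IQv2 query API.
    static Long lookup(KafkaStreams streams, String key) {
        StateQueryRequest<Long> request =
            StateQueryRequest.<Long>inStore("counts-store")
                             .withQuery(KeyQuery.<String, Long>withKey(key));
        StateQueryResult<Long> result = streams.query(request);
        // With a key query, exactly one partition holds the answer.
        return result.getOnlyPartitionResult().getResult();
    }
}
```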





[jira] [Assigned] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-2499:
---

Assignee: (was: Walter Hernandez)

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.
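The fix the ticket suggests can be sketched as follows (a hypothetical helper, not code from the Kafka tree): fill each record payload with pseudo-random bytes so compression cannot collapse it the way an all-zero payload would.

```java
import java.util.Random;

public class PayloadSketch {
    // Build a payload of pseudo-random bytes. A fixed seed keeps runs
    // reproducible while still defeating trivial compression.
    static byte[] randomPayload(int size, long seed) {
        byte[] payload = new byte[size];
        new Random(seed).nextBytes(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] p = randomPayload(1024, 42L);
        System.out.println(p.length); // 1024
    }
}
```

Using a seeded `Random` keeps benchmark runs comparable; real sample messages drawn from production traffic would be even more representative.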





[jira] [Assigned] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15748:


Assignee: (was: Walter Hernandez)

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests
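For context, the usual conversion (as done in other Kafka integration tests) replaces `@Test` with a JUnit 5 parameterized test that runs the same body once per quorum mode ("zk" and "kraft"). A dependency-free sketch of that shape, where `runForQuorum` is an illustrative stand-in for the test framework, not a Kafka API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class QuorumSketch {
    // Mimics @ParameterizedTest over quorum modes: the same test body
    // executes once per mode, as the real harness would with the
    // matching cluster (ZooKeeper or KRaft) started for each run.
    static List<String> runForQuorum(Consumer<String> testBody) {
        List<String> ran = new ArrayList<>();
        for (String quorum : List.of("zk", "kraft")) {
            testBody.accept(quorum);
            ran.add(quorum);
        }
        return ran;
    }

    public static void main(String[] args) {
        runForQuorum(q ->
            System.out.println("testMetricsDuringTopicCreateDelete[" + q + "]"));
    }
}
```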





[jira] [Resolved] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-15748.
--
Resolution: Fixed

Changes can be found here:
[https://github.com/mannoopj/kafka/blob/6796517858cc2789b61b3419bfbcfc6199ccd43f/core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala]

All tests passed:
https://app.harness.io/ng/#/account/vpCkHKsDSxK9_KYfjCTMKA/ci/orgs/default/projects/TI_ML_Replays/pipelines/Test_Pipeline/executions/EUcKsJjtRYu8b1ei0rBFaA/pipeline?stage=CNacpu3LQT2iZqMKpnxHxQ=PiOPgL8gTlKXYFQJScvk8Q

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Commented] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836950#comment-17836950
 ] 

Walter Hernandez commented on KAFKA-15736:
--

https://github.com/apache/kafka/pull/14295

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests





[jira] [Commented] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836949#comment-17836949
 ] 

Walter Hernandez commented on KAFKA-15748:
--

I will verify that the merged PR referenced above does indeed resolve the issue 
with the specified unit test.

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Assigned] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15748:


Assignee: Walter Hernandez

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Resolved] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-2499.
-
Resolution: Invalid

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836947#comment-17836947
 ] 

Walter Hernandez commented on KAFKA-2499:
-

After researching, I found that KAFKA-6921 removed all Scala-related code and 
tests for the ProducerPerformance.* code.

The referenced tickets are about modifying the ProducerPerformance test 
payloads, but they are no longer relevant to any supported code base.

This leads me to close this ticket and to begin cleaning up older open tickets 
regarding this feature enhancement.

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Comment Edited] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836506#comment-17836506
 ] 

Walter Hernandez edited comment on KAFKA-2499 at 4/12/24 8:45 AM:
--

This is indeed duplicated by the referenced earlier ticket, where there is a 
patch available; however, the patch needs a rebase (goes back to v0.9.0.1).

In addition, the changes broke some system tests that were not resolved.

Since the referenced ticket has had no activity since 2018 and already has an 
associated PR, I would like to take on this effort for the latest version.


was (Author: JIRAUSER305029):
This is indeed duplicated by the referenced earlier ticket, where there is a 
patch available; however, the patch needs a rebase (goes back to v 1.0.0 ).

 

In addition, the changes broke some system tests that were not resolved.

 

With the referenced ticket not having any activity since 2018 and already has 
an associated PR, I would like to take on this effort for the latest version.

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836506#comment-17836506
 ] 

Walter Hernandez commented on KAFKA-2499:
-

This is indeed duplicated by the referenced earlier ticket, where there is a 
patch available; however, the patch needs a rebase (goes back to v1.0.0).

In addition, the changes broke some system tests that were not resolved.

Since the referenced ticket has had no activity since 2018 and already has an 
associated PR, I would like to take on this effort for the latest version.

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Updated] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez updated KAFKA-2499:

Issue Type: Improvement  (was: Bug)

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Assigned] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-2499:
---

Assignee: Walter Hernandez

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-2499 ]


Walter Hernandez deleted comment on KAFKA-2499:
-

was (Author: JIRAUSER305029):
This indeed duplicates the referenced earlier ticket, where there is a patch 
available.

However, this patch needs a rebase (goes back to v2.1.0).

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.





[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-12 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836503#comment-17836503
 ] 

Walter Hernandez commented on KAFKA-2499:
-

This indeed duplicates the referenced earlier ticket, where there is a patch 
available.

However, this patch needs a rebase (goes back to v2.1.0).

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (there are two of these: one used by the shell 
> script and one used by the system tests; both exhibit this problem) 
> creates messages from empty byte arrays.
> This is likely to produce unrealistically fast compression and hence 
> unrealistically fast results.
> Suggest using randomised bytes or more realistic sample messages.
> Thanks to Prabhjot Bharaj for reporting this.


