Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 commented on code in PR #15904:
URL: https://github.com/apache/kafka/pull/15904#discussion_r1599412941

## core/src/test/scala/unit/kafka/log/LogValidatorTest.scala:
@@ -488,8 +496,11 @@ class LogValidatorTest {
     }
     assertEquals(now + 1, validatingResults.maxTimestampMs,
       s"Max timestamp should be ${now + 1}")
-    assertEquals(2, validatingResults.shallowOffsetOfMaxTimestamp,
-      "Shallow offset of max timestamp should be 2")
+
+    // Both V2 and V1 has single branch in the record when compression is enable, and hence their shallow OffsetOfMaxTimestamp
+    // is the last offset of the single branch
+    assertEquals(1, records.batches().asScala.size)

Review Comment:
@vincent81jiang you are totally right. I have filed PR (https://github.com/apache/kafka/pull/15948) to fix that

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
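The "shallow offset of max timestamp" being debated above can be illustrated with a small toy model. This is hypothetical Java, not Kafka's actual MemoryRecords API; the names `ShallowOffsetDemo`, `Rec`, and `Batch` are invented for illustration. The idea: records are grouped into batches, and the shallow offset is the last offset of the batch containing the record with the maximum timestamp — so when compression collapses everything into one batch, it is simply that batch's last offset.

```java
// Toy model (NOT Kafka source): the shallow offset of max timestamp is the
// LAST offset of the batch that holds the record with the max timestamp.
import java.util.List;

public class ShallowOffsetDemo {
    public record Rec(long offset, long timestamp) {}

    public record Batch(List<Rec> records) {
        public long lastOffset() {
            return records.get(records.size() - 1).offset();
        }
        public long maxTimestamp() {
            return records.stream().mapToLong(Rec::timestamp).max().orElse(-1L);
        }
    }

    public static long shallowOffsetOfMaxTimestamp(List<Batch> batches) {
        Batch best = batches.get(0);
        for (Batch b : batches) {
            if (b.maxTimestamp() > best.maxTimestamp()) best = b;
        }
        return best.lastOffset();
    }

    public static void main(String[] args) {
        // Compressed case: all records land in ONE batch, so the shallow
        // offset is that batch's last offset (2), even though the record
        // with the max timestamp sits at offset 1.
        List<Batch> single = List.of(new Batch(List.of(
                new Rec(0, 100), new Rec(1, 101), new Rec(2, 99))));
        System.out.println(shallowOffsetOfMaxTimestamp(single)); // 2
    }
}
```

This is why the test above asserts `records.batches().asScala.size` is 1 first: once the single-batch premise holds, checking the batch's last offset is well-defined.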
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
junrao commented on code in PR #15904:
URL: https://github.com/apache/kafka/pull/15904#discussion_r1599139111

## core/src/test/scala/unit/kafka/log/LogValidatorTest.scala:
@@ -488,8 +496,11 @@ class LogValidatorTest {
     }
     assertEquals(now + 1, validatingResults.maxTimestampMs,
       s"Max timestamp should be ${now + 1}")
-    assertEquals(2, validatingResults.shallowOffsetOfMaxTimestamp,
-      "Shallow offset of max timestamp should be 2")
+
+    // Both V2 and V1 has single branch in the record when compression is enable, and hence their shallow OffsetOfMaxTimestamp
+    // is the last offset of the single branch
+    assertEquals(1, records.batches().asScala.size)

Review Comment:
@vincent81jiang : Thanks for the comment. The line at 455 seems to be wrong. What do you think @chia7712 ?
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
vincent81jiang commented on code in PR #15904:
URL: https://github.com/apache/kafka/pull/15904#discussion_r1598902252

## core/src/test/scala/unit/kafka/log/LogValidatorTest.scala:
@@ -488,8 +496,11 @@ class LogValidatorTest {
     }
     assertEquals(now + 1, validatingResults.maxTimestampMs,
       s"Max timestamp should be ${now + 1}")
-    assertEquals(2, validatingResults.shallowOffsetOfMaxTimestamp,
-      "Shallow offset of max timestamp should be 2")
+
+    // Both V2 and V1 has single branch in the record when compression is enable, and hence their shallow OffsetOfMaxTimestamp
+    // is the last offset of the single branch
+    assertEquals(1, records.batches().asScala.size)

Review Comment:
If I understand this test case correctly, there might still be a minor issue with "checkRecompression":
1. at line 455, `records` should be created with "CompressionType.NONE".
2. at line 502, the check should be `assertEquals(1, validatedRecords.batches().asScala.size)`
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 merged PR #15904:
URL: https://github.com/apache/kafka/pull/15904
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 commented on PR #15904:
URL: https://github.com/apache/kafka/pull/15904#issuecomment-2105442338

|test|jira|
|-|-|
|testReplicateFromLatest|https://issues.apache.org/jira/browse/KAFKA-16383|
|testTaskRequestWithOldStartMsGetsUpdated|https://issues.apache.org/jira/browse/KAFKA-16136|
|testIndexFileAlreadyExistOnDiskButNotInCache|https://issues.apache.org/jira/browse/KAFKA-16704|
|testMigrateTopicDeletions|https://issues.apache.org/jira/browse/KAFKA-16045|
|testReplicateSourceDefault|https://issues.apache.org/jira/browse/KAFKA-15292|
|testFenceMultipleBrokers|https://issues.apache.org/jira/browse/KAFKA-16634|
|testUnregisterBroker|https://issues.apache.org/jira/browse/KAFKA-13966|
|testSyncTopicConfigs|https://issues.apache.org/jira/browse/KAFKA-15945|
|testSeparateOffsetsTopic|https://issues.apache.org/jira/browse/KAFKA-14089|
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 commented on PR #15904:
URL: https://github.com/apache/kafka/pull/15904#issuecomment-2104903142

> Have the 30 test failures been triaged?

|test|jira|
|-|-|
|testSyncTopicConfigs|https://issues.apache.org/jira/browse/KAFKA-15945|
|testReplicateSourceDefault|https://issues.apache.org/jira/browse/KAFKA-15292|
|testProduceConsumeWithWildcardAcls|https://issues.apache.org/jira/browse/KAFKA-16697|
|testFenceMultipleBrokers|https://issues.apache.org/jira/browse/KAFKA-16634|
|testSeparateOffsetsTopic|https://issues.apache.org/jira/browse/KAFKA-14089|
|testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl|https://issues.apache.org/jira/browse/KAFKA-8250|
|testDescribeQuorumReplicationSuccessful|https://issues.apache.org/jira/browse/KAFKA-15104|
|testDescribeQuorumStatusSuccessful|https://issues.apache.org/jira/browse/KAFKA-16174|
|testTaskRequestWithOldStartMsGetsUpdated|https://issues.apache.org/jira/browse/KAFKA-16136|
|testConsumptionWithBrokerFailures|https://issues.apache.org/jira/browse/KAFKA-15146|
|testCoordinatorFailover|https://issues.apache.org/jira/browse/KAFKA-16024|
|testBrokerHeartbeatDuringMigration|https://issues.apache.org/jira/browse/KAFKA-15963|
|testAbortTransactionTimeout|https://issues.apache.org/jira/browse/KAFKA-15772|
|testMultiConsumerStickyAssignor|https://issues.apache.org/jira/browse/KAFKA-15934|
|testDynamicIpConnectionRateQuota|https://issues.apache.org/jira/browse/KAFKA-16698|
|testCreateClusterAndPerformReassignment|https://issues.apache.org/jira/browse/KAFKA-15103|
|shouldBootstrapTwoBrokersWithFollowerThrottle|https://issues.apache.org/jira/browse/KAFKA-4184|

Besides, the thread leaks should be fixed by #15886. Hence, I will rebase code to trigger QA again.
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 commented on PR #15904:
URL: https://github.com/apache/kafka/pull/15904#issuecomment-2103720759

@junrao thanks for reviewing this rough patch :(
Re: [PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
junrao commented on code in PR #15904:
URL: https://github.com/apache/kafka/pull/15904#discussion_r1595670550

## core/src/test/scala/unit/kafka/log/LogValidatorTest.scala:
@@ -414,7 +414,15 @@ class LogValidatorTest {
     assertEquals(now + 1, validatingResults.maxTimestampMs,
       s"Max timestamp should be ${now + 1}")
-    assertEquals(2, validatingResults.shallowOffsetOfMaxTimestamp)
+    // V2: Only one batch is in the records, so the shallow OffsetOfMaxTimestamp is the last offset of the single batch
+    // V1: 3 batches are in the records, so the shallow OffsetOfMaxTimestamp is the timestamp of branch-1

Review Comment:
Hmm, what is branch?

## core/src/main/scala/kafka/log/UnifiedLog.scala:
@@ -1177,6 +1177,8 @@ class UnifiedLog(@volatile var logStartOffset: Long,
     validBytesCount += batchSize
     val batchCompression = CompressionType.forId(batch.compressionType.id)
+    // V2: only one batch regardless of compression

Review Comment:
This is not very accurate. `analyzeAndValidateRecords()` is called in the follower append too, which could include more than one batch. It's just that sourceCompression is only used on the leader path, which only contains one batch currently.
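The second review comment's point — the validation loop must not assume a single batch, because follower appends can carry several even though the leader produce path currently hands it exactly one — can be sketched as a toy loop. This is illustrative Java with invented names (`AppendPathDemo`, `Batch`, `validBytes`), not Kafka's `analyzeAndValidateRecords()` itself.

```java
// Hypothetical sketch: a validation loop that iterates over however many
// batches arrive, rather than assuming the single-batch leader case.
import java.util.List;

public class AppendPathDemo {
    public record Batch(int sizeInBytes) {}

    // Accumulates valid bytes across ALL batches; makes no single-batch assumption.
    public static int validBytes(List<Batch> batches) {
        int total = 0;
        for (Batch b : batches) {
            total += b.sizeInBytes();
        }
        return total;
    }

    public static void main(String[] args) {
        List<Batch> leaderAppend = List.of(new Batch(64));                   // leader path: one batch today
        List<Batch> followerAppend = List.of(new Batch(64), new Batch(128)); // follower path: possibly several
        System.out.println(validBytes(leaderAppend));   // 64
        System.out.println(validBytes(followerAppend)); // 192
    }
}
```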
[PR] MINOR: fix LogValidatorTest#checkNonCompressed [kafka]
chia7712 opened a new pull request, #15904:
URL: https://github.com/apache/kafka/pull/15904

from: https://github.com/apache/kafka/pull/15621#discussion_r1592859538

This is a follow-up of #15621

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)