This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch test-catalog
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/test-catalog by this push:
new 8752167060f Update test catalog data for GHA workflow run 22211975171
8752167060f is described below
commit 8752167060f447bf5fbd1f644502a7b07c782320
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Feb 20 07:21:04 2026 +0000
Update test catalog data for GHA workflow run 22211975171
Commit: https://github.com/apache/kafka/commit/da0c20756ead60aa912e6ebf1cbea2e75252af50
GitHub Run: https://github.com/apache/kafka/actions/runs/22211975171
---
test-catalog/streams/tests.yaml | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/test-catalog/streams/tests.yaml b/test-catalog/streams/tests.yaml
index c8aae8effc3..0dedf454532 100644
--- a/test-catalog/streams/tests.yaml
+++ b/test-catalog/streams/tests.yaml
@@ -5328,6 +5328,26 @@ org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest:
- shouldRestoreToByteStoreForActiveTask
- shouldRestoreToByteStoreForStandbyTask
- shouldRollSegments
+org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreWithHeadersTest:
+- shouldBeAbleToWriteToReInitializedStore
+- shouldCreateWriteBatches
+- shouldFetchAllSegments
+- shouldFindValuesWithinRange
+- shouldGetAllSegments
+- shouldHandleTombstoneRecords
+- shouldLoadSegmentsWithOldStyleColonFormattedName
+- shouldLoadSegmentsWithOldStyleDateFormattedName
+- shouldMatchPositionAfterPut
+- shouldMeasureExpiredRecords
+- shouldNotThrowWhenRestoringOnMissingHeaders
+- shouldPutAndBackwardFetch
+- shouldPutAndFetch
+- shouldRemove
+- shouldRestoreRecordsAndConsistencyVectorMultipleTopics
+- shouldRestoreRecordsAndConsistencyVectorSingleTopic
+- shouldRestoreToByteStoreForActiveTask
+- shouldRestoreToByteStoreForStandbyTask
+- shouldRollSegments
org.apache.kafka.streams.state.internals.RocksDBTimestampedStoreTest:
- prefixScanShouldNotThrowConcurrentModificationException
- shouldAddValueProvidersWithStatisticsToInjectedMetricsRecorderWhenRecordingLevelDebug
@@ -5842,6 +5862,11 @@ org.apache.kafka.streams.state.internals.TimestampedSegmentTest:
- shouldCompareSegmentIdOnly
- shouldDeleteStateDirectoryOnDestroy
- shouldHashOnSegmentIdOnly
+org.apache.kafka.streams.state.internals.TimestampedSegmentWithHeadersTest:
+- shouldBeEqualIfIdIsEqual
+- shouldCompareSegmentIdOnly
+- shouldDeleteStateDirectoryOnDestroy
+- shouldHashOnSegmentIdOnly
org.apache.kafka.streams.state.internals.TimestampedSegmentsTest:
- futureEventsShouldNotCauseSegmentRoll
- shouldBaseSegmentIntervalOnRetentionAndNumSegments
@@ -5862,6 +5887,18 @@ org.apache.kafka.streams.state.internals.TimestampedSegmentsTest:
- shouldRollSegments
- shouldUpdateSegmentFileNameFromOldColonFormatToNewFormat
- shouldUpdateSegmentFileNameFromOldDateFormatToNewFormat
+org.apache.kafka.streams.state.internals.TimestampedSegmentsWithHeadersTest:
+- shouldCleanupSegmentsThatHaveExpired
+- shouldClearSegmentsOnClose
+- shouldCloseAllOpenSegments
+- shouldCreateSegments
+- shouldGetCorrectSegmentString
+- shouldGetSegmentForTimestamp
+- shouldGetSegmentIdsFromTimestamp
+- shouldGetSegmentNameFromId
+- shouldGetSegmentsWithinTimeRange
+- shouldNotCreateSegmentThatIsAlreadyExpired
+- shouldOpenExistingSegments
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilderTest:
- shouldDisableCachingWithRetainDuplicates
- shouldHaveCachingAndChangeLoggingWhenBothEnabled