[jira] [Resolved] (KAFKA-15434) Tiered Storage Quotas
Satish Duggana resolved KAFKA-15434.

Resolution: Duplicate (duplicate of https://issues.apache.org/jira/browse/KAFKA-15265)

Key: KAFKA-15434
URL: https://issues.apache.org/jira/browse/KAFKA-15434
Project: Kafka
Issue Type: Improvement
Reporter: Satish Duggana
Assignee: Abhijeet Kumar
Priority: Major

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-16887) document remote copy/fetch quotas metrics
Satish Duggana resolved KAFKA-16887.

Resolution: Fixed

Key: KAFKA-16887
URL: https://issues.apache.org/jira/browse/KAFKA-16887
Project: Kafka
Issue Type: Sub-task
Reporter: Luke Chen
Assignee: Ksolves India Limited
Priority: Major

Context: https://github.com/apache/kafka/pull/15820#discussion_r1625304008
[jira] [Resolved] (KAFKA-15420) Kafka Tiered Storage V1
Satish Duggana resolved KAFKA-15420.

Resolution: Fixed

Key: KAFKA-15420
URL: https://issues.apache.org/jira/browse/KAFKA-15420
Project: Kafka
Issue Type: Improvement
Affects Versions: 3.6.0
Reporter: Satish Duggana
Assignee: Satish Duggana
Priority: Major
Labels: KIP-405
Fix For: 3.8.0
[jira] [Created] (KAFKA-16977) remote log manager dynamic configs are not available after broker restart.
Satish Duggana created KAFKA-16977:

Summary: remote log manager dynamic configs are not available after broker restart
Key: KAFKA-16977
URL: https://issues.apache.org/jira/browse/KAFKA-16977
Project: Kafka
Issue Type: Bug
Reporter: Satish Duggana

The following remote log configs can be configured dynamically: remote.log.manager.copy.max.bytes.per.second, remote.log.manager.fetch.max.bytes.per.second, and remote.log.index.file.cache.total.size.bytes. If these values are configured dynamically, then after a broker restart the broker loads the static value from the config file instead of the dynamic value. Note that the issue occurs only when running the server with ZooKeeper.
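For illustration, here is a minimal Python sketch of the precedence rule the report implies: a persisted dynamic override should take effect over the static config-file value even after a restart. The names and byte values are invented for the example; this is not Kafka's implementation.

```python
# Hypothetical sketch of broker config precedence after restart.
# The buggy path described in KAFKA-16977 effectively skips the merge
# of persisted dynamic overrides on startup and keeps only the static values.

STATIC_DEFAULTS = {
    "remote.log.manager.copy.max.bytes.per.second": 104857600,
    "remote.log.manager.fetch.max.bytes.per.second": 104857600,
    "remote.log.index.file.cache.total.size.bytes": 1073741824,
}

def effective_config(static_file: dict, dynamic_store: dict) -> dict:
    """Dynamic overrides (persisted in ZooKeeper/KRaft metadata) take
    precedence over the values read from the static config file."""
    merged = dict(static_file)
    merged.update(dynamic_store)  # the step the buggy restart path misses
    return merged

# A broker that re-reads only the static file after restart loses the override:
dynamic = {"remote.log.manager.copy.max.bytes.per.second": 52428800}
cfg = effective_config(STATIC_DEFAULTS, dynamic)
print(cfg["remote.log.manager.copy.max.bytes.per.second"])  # 52428800
```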
[jira] [Created] (KAFKA-16976) Improve the dynamic config handling for RemoteLogManagerConfig when a broker is restarted.
Satish Duggana created KAFKA-16976:

Summary: Improve the dynamic config handling for RemoteLogManagerConfig when a broker is restarted
Key: KAFKA-16976
URL: https://issues.apache.org/jira/browse/KAFKA-16976
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana
Assignee: Kamal Chandraprakash
Fix For: 3.9.0

This is a followup on the discussion: https://github.com/apache/kafka/pull/16353#pullrequestreview-2121953295
[jira] [Resolved] (KAFKA-14877) refactor InMemoryLeaderEpochCheckpoint
Satish Duggana resolved KAFKA-14877.

Resolution: Invalid

Key: KAFKA-14877
URL: https://issues.apache.org/jira/browse/KAFKA-14877
Project: Kafka
Issue Type: Improvement
Reporter: Luke Chen
Priority: Minor
Fix For: 3.8.0

Follow up to this comment: https://github.com/apache/kafka/pull/13456#discussion_r1154306477
[jira] [Created] (KAFKA-16947) Kafka Tiered Storage V2
Satish Duggana created KAFKA-16947:

Summary: Kafka Tiered Storage V2
Key: KAFKA-16947
URL: https://issues.apache.org/jira/browse/KAFKA-16947
Project: Kafka
Issue Type: Improvement
Affects Versions: 3.6.0
Reporter: Satish Duggana
Assignee: Satish Duggana
Fix For: 3.8.0
[jira] [Resolved] (KAFKA-16890) Failing to build aux state on broker failover
Satish Duggana resolved KAFKA-16890.

Resolution: Fixed

Key: KAFKA-16890
URL: https://issues.apache.org/jira/browse/KAFKA-16890
Project: Kafka
Issue Type: Bug
Components: Tiered-Storage
Affects Versions: 3.7.0, 3.7.1
Reporter: Francois Visconte
Assignee: Kamal Chandraprakash
Priority: Major
Fix For: 3.8.0

We have clusters where we replace machines often, falling into a state where we keep having "Error building remote log auxiliary state for loadtest_topic-22" and the partition being under-replicated until the leader is manually restarted.

Looking into a specific case, here is what we observed in the __remote_log_metadata topic:

{code:java}
partition: 29, offset: 183593, value: RemoteLogSegmentMetadata{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=GZeRTXLMSNe2BQjRXkg6hQ}, startOffset=10823, endOffset=11536, brokerId=10013, maxTimestampMs=1715774588597, eventTimestampMs=1715781657604, segmentLeaderEpochs={125=10823, 126=10968, 128=11047, 130=11048, 131=11324, 133=11442, 134=11443, 135=11445, 136=11521, 137=11533, 139=11535}, segmentSizeInBytes=704895, customMetadata=Optional.empty, state=COPY_SEGMENT_STARTED}
partition: 29, offset: 183594, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=GZeRTXLMSNe2BQjRXkg6hQ}, customMetadata=Optional.empty, state=COPY_SEGMENT_FINISHED, eventTimestampMs=1715781658183, brokerId=10013}
partition: 29, offset: 183669, value: RemoteLogSegmentMetadata{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=L1TYzx0lQkagRIF86Kp0QQ}, startOffset=10823, endOffset=11544, brokerId=10008, maxTimestampMs=1715781445270, eventTimestampMs=1715782717593, segmentLeaderEpochs={125=10823, 126=10968, 128=11047, 130=11048, 131=11324, 133=11442, 134=11443, 135=11445, 136=11521, 137=11533, 139=11535, 140=11537, 142=11543}, segmentSizeInBytes=713088, customMetadata=Optional.empty, state=COPY_SEGMENT_STARTED}
partition: 29, offset: 183670, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=L1TYzx0lQkagRIF86Kp0QQ}, customMetadata=Optional.empty, state=COPY_SEGMENT_FINISHED, eventTimestampMs=1715782718370, brokerId=10008}
partition: 29, offset: 186215, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=L1TYzx0lQkagRIF86Kp0QQ}, customMetadata=Optional.empty, state=DELETE_SEGMENT_STARTED, eventTimestampMs=1715867874617, brokerId=10008}
partition: 29, offset: 186216, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=L1TYzx0lQkagRIF86Kp0QQ}, customMetadata=Optional.empty, state=DELETE_SEGMENT_FINISHED, eventTimestampMs=1715867874725, brokerId=10008}
partition: 29, offset: 186217, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=GZeRTXLMSNe2BQjRXkg6hQ}, customMetadata=Optional.empty, state=DELETE_SEGMENT_STARTED, eventTimestampMs=1715867874729, brokerId=10008}
partition: 29, offset: 186218, value: RemoteLogSegmentMetadataUpdate{remoteLogSegmentId=RemoteLogSegmentId{topicIdPartition=ClnIeN0MQsi_d4FAOFKaDA:loadtest_topic-22, id=GZeRTXLMSNe2BQjRXkg6hQ}, customMetadata=Optional.empty, state=DELETE_SEGMENT_FINISHED, eventTimestampMs=1715867874817, brokerId=10008}
{code}

It seems that at the time the leader is restarted (10013), a second copy of the same segment is tiered by the new leader (10008). Interestingly, the segment doesn't have the same end offset, which is concerning.

Then the follower sees the following error repeatedly until the leader is restarted:

{code:java}
[2024-05-17 20:46:42,133] DEBUG [ReplicaFetcher replicaId=10013, leaderId=10008, fetcherId=0] Handling errors in processFetchRequest for partitions HashSet(loadtest_topic-22) (kafka.server.ReplicaFetcherThread)
[2024-05-17 20:46:43,174] DEBUG [ReplicaFetcher replicaId=10013, leaderId=10008, fetcherId=0] Received error OFFSET_MOVED_TO_TIERED_STORAGE, at fetch offset: 11537, topic-partition: loadtest_topic-22 (kafka.server.ReplicaFetcherThread)
[2024-05-17 20:46:43,175] ERROR [ReplicaFetcher replicaId=10013, leaderId=10008, fetcherId=0] Error bu
{code}
[jira] [Created] (KAFKA-16161) Avoid creating remote log metadata snapshot file in partition data directory.
Satish Duggana created KAFKA-16161:

Summary: Avoid creating remote log metadata snapshot file in partition data directory
Key: KAFKA-16161
URL: https://issues.apache.org/jira/browse/KAFKA-16161
Project: Kafka
Issue Type: Improvement
Reporter: Satish Duggana

Avoid creating the remote log metadata snapshot file in a partition's data directory. It can be added once the snapshot-related functionality is enabled end to end.
[jira] [Created] (KAFKA-15864) Add more tests asserting the log-start-offset, local-log-start-offset, and HW/LSO/LEO in rolling over segments with tiered storage.
Satish Duggana created KAFKA-15864:

Summary: Add more tests asserting the log-start-offset, local-log-start-offset, and HW/LSO/LEO in rolling over segments with tiered storage
Key: KAFKA-15864
URL: https://issues.apache.org/jira/browse/KAFKA-15864
Project: Kafka
Issue Type: Improvement
Components: core
Reporter: Satish Duggana
Fix For: 3.7.0
[jira] [Created] (KAFKA-15857) Introduce LocalLogStartOffset and TieredOffset in OffsetSpec.
Satish Duggana created KAFKA-15857:

Summary: Introduce LocalLogStartOffset and TieredOffset in OffsetSpec
Key: KAFKA-15857
URL: https://issues.apache.org/jira/browse/KAFKA-15857
Project: Kafka
Issue Type: Improvement
Components: core
Reporter: Satish Duggana

Introduce EarliestLocalOffset and TieredOffset in OffsetSpec, which will help in finding the respective offsets when using AdminClient#listOffsets().

EarliestLocalOffset: the local log start offset of a topic partition.
TieredOffset: the highest offset up to which segments were copied to remote storage.

We can discuss the naming and semantics of these offset specs further.
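As a rough illustration of the proposed semantics (function and field names below are invented for the example, not Kafka API names), the two offsets can be derived from segment metadata like this:

```python
# Illustrative sketch only: how the two proposed offset specs relate to a
# partition's segments. Not Kafka code; dict-based segment records stand in
# for real segment metadata.

def earliest_local_offset(local_segments):
    """Local log start offset: base offset of the earliest segment still on local disk."""
    return min(s["base_offset"] for s in local_segments)

def tiered_offset(remote_segments):
    """Highest offset up to which segments were copied to remote storage."""
    return max(s["end_offset"] for s in remote_segments)

# Segments 0-999 were tiered; segments from offset 500 onward are still local
# (ranges overlap because local copies are deleted only after retention lapses).
remote = [{"end_offset": 499}, {"end_offset": 999}]
local = [{"base_offset": 500}, {"base_offset": 900}]
print(earliest_local_offset(local), tiered_offset(remote))  # 500 999
```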
[jira] [Created] (KAFKA-15612) Followup on whether the indexes need to be materialized before they are passed to RSM for writing them to tiered storage.
Satish Duggana created KAFKA-15612:

Summary: Followup on whether the indexes need to be materialized before they are passed to RSM for writing them to tiered storage
Key: KAFKA-15612
URL: https://issues.apache.org/jira/browse/KAFKA-15612
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana

Followup on the [PR comment|https://github.com/apache/kafka/pull/14529#discussion_r1355263700].
[jira] [Created] (KAFKA-15594) Add 3.6.0 to streams upgrade/compatibility tests
Satish Duggana created KAFKA-15594:

Summary: Add 3.6.0 to streams upgrade/compatibility tests
Key: KAFKA-15594
URL: https://issues.apache.org/jira/browse/KAFKA-15594
Project: Kafka
Issue Type: Sub-task
Reporter: Satish Duggana
[jira] [Created] (KAFKA-15593) Add 3.6.0 to broker/client upgrade/compatibility tests
Satish Duggana created KAFKA-15593:

Summary: Add 3.6.0 to broker/client upgrade/compatibility tests
Key: KAFKA-15593
URL: https://issues.apache.org/jira/browse/KAFKA-15593
Project: Kafka
Issue Type: Sub-task
Reporter: Satish Duggana
[jira] [Created] (KAFKA-15576) Add 3.2.0 to broker/client and streams upgrade/compatibility tests
Satish Duggana created KAFKA-15576:

Summary: Add 3.2.0 to broker/client and streams upgrade/compatibility tests
Key: KAFKA-15576
URL: https://issues.apache.org/jira/browse/KAFKA-15576
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana
Fix For: 3.7.0
[jira] [Resolved] (KAFKA-15535) Add documentation of "remote.log.index.file.cache.total.size.bytes" configuration property.
Satish Duggana resolved KAFKA-15535.

Resolution: Fixed

Key: KAFKA-15535
URL: https://issues.apache.org/jira/browse/KAFKA-15535
Project: Kafka
Issue Type: Task
Components: documentation
Reporter: Satish Duggana
Assignee: hudeqi
Priority: Major
Labels: tiered-storage
Fix For: 3.7.0

Add documentation of the "remote.log.index.file.cache.total.size.bytes" configuration property. Please double-check all the existing public tiered storage configurations.
[jira] [Created] (KAFKA-15535) Add documentation of "remote.log.index.file.cache.total.size.bytes" configuration property.
Satish Duggana created KAFKA-15535:

Summary: Add documentation of "remote.log.index.file.cache.total.size.bytes" configuration property
Key: KAFKA-15535
URL: https://issues.apache.org/jira/browse/KAFKA-15535
Project: Kafka
Issue Type: Task
Components: documentation
Reporter: Satish Duggana
Fix For: 3.7.0

Add documentation of the "remote.log.index.file.cache.total.size.bytes" configuration property. Please double-check all the existing public tiered storage configurations.
[jira] [Created] (KAFKA-15530) Add missing documentation of metrics introduced as part of KAFKA-15196
Satish Duggana created KAFKA-15530:

Summary: Add missing documentation of metrics introduced as part of KAFKA-15196
Key: KAFKA-15530
URL: https://issues.apache.org/jira/browse/KAFKA-15530
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana

This is a followup to the 3.6.0 RC2 verification email [thread|https://lists.apache.org/thread/js2nmq3ggn46qg122h4jg5p2fcq5hr2s]. Add the missing documentation of a few metrics added as part of the [change|https://github.com/apache/kafka/commit/2f71708955b293658cec3b27e9a5588d39c38d7e].
[jira] [Created] (KAFKA-15503) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1
Satish Duggana created KAFKA-15503:

Summary: CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1
Key: KAFKA-15503
URL: https://issues.apache.org/jira/browse/KAFKA-15503
Project: Kafka
Issue Type: Bug
Affects Versions: 2.7.0, 2.6.1, 3.4.1, 3.6.0, 3.5.1
Reporter: Rafael Rios Saavedra
Assignee: Divij Vaidya
Fix For: 2.8.0, 2.7.1, 2.6.2, 3.0.0

The CVE-2023-40167 and CVE-2023-36479 vulnerabilities affect Jetty version 9.4.51. For more information see:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-36479

Upgrading to Jetty version 9.4.52, 10.0.16, 11.0.16, or 12.0.1 should address these issues.
[jira] [Created] (KAFKA-15483) Update metrics documentation for the new metrics implemented as part of KIP-938
Satish Duggana created KAFKA-15483:

Summary: Update metrics documentation for the new metrics implemented as part of KIP-938
Key: KAFKA-15483
URL: https://issues.apache.org/jira/browse/KAFKA-15483
Project: Kafka
Issue Type: Task
Components: docs, documentation
Reporter: Satish Duggana
[jira] [Resolved] (KAFKA-15092) KafkaClusterTestKit in test jar depends on MockFaultHandler
Satish Duggana resolved KAFKA-15092.

Resolution: Invalid

Key: KAFKA-15092
URL: https://issues.apache.org/jira/browse/KAFKA-15092
Project: Kafka
Issue Type: Bug
Affects Versions: 3.5.0
Reporter: Gary Russell
Priority: Major

{noformat}
java.lang.NoClassDefFoundError: org/apache/kafka/server/fault/MockFaultHandler
	at kafka.testkit.KafkaClusterTestKit$SimpleFaultHandlerFactory.(KafkaClusterTestKit.java:119)
	at kafka.testkit.KafkaClusterTestKit$Builder.(KafkaClusterTestKit.java:143)
{noformat}

MockFaultHandler is missing from the test jar. The PR https://github.com/apache/kafka/pull/13375/files seems to work around it by adding the {code}server-common sourcesets.test.output{code} to the class path. The class needs to be available for third parties to create an embedded KRaft broker.
[jira] [Resolved] (KAFKA-15482) kafka.utils.TestUtils Depends on MockTime Which is Not in Any Jar
Satish Duggana resolved KAFKA-15482.

Resolution: Invalid

Key: KAFKA-15482
URL: https://issues.apache.org/jira/browse/KAFKA-15482
Project: Kafka
Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Gary Russell
Priority: Major

Commit 7eea2a3908fdcee1627c18827e6dcb5ed0089fdd moved it to server-commons, but it is not included in the jar.
[jira] [Resolved] (KAFKA-7739) Kafka Tiered Storage
Satish Duggana resolved KAFKA-7739.

Resolution: Fixed

Key: KAFKA-7739
URL: https://issues.apache.org/jira/browse/KAFKA-7739
Project: Kafka
Issue Type: New Feature
Components: core
Reporter: Harsha
Assignee: Satish Duggana
Priority: Major
Labels: needs-kip
Fix For: 3.6.0

KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage
[jira] [Resolved] (KAFKA-14993) Improve TransactionIndex instance handling while copying to and fetching from RSM.
Satish Duggana resolved KAFKA-14993.

Resolution: Fixed

Key: KAFKA-14993
URL: https://issues.apache.org/jira/browse/KAFKA-14993
Project: Kafka
Issue Type: Improvement
Components: core
Reporter: Satish Duggana
Assignee: Abhijeet Kumar
Priority: Blocker
Labels: tiered-storage
Fix For: 3.6.0

The RSM should throw a ResourceNotFoundException if it does not have a TransactionIndex. Currently, it expects an empty InputStream and creates an unnecessary file in the cache. This can be avoided by catching ResourceNotFoundException and not creating an instance. There are minor cleanups needed in RemoteIndexCache and other TransactionIndex usages. Also, update the LocalTieredStorage; see [this|https://github.com/apache/kafka/pull/13837#discussion_r1258917584] comment.

Note: please remember to update the javadoc in the RSM after the fix. See: https://github.com/apache/kafka/pull/14352
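A minimal sketch of the handling this ticket asks for: surface a missing transaction index as an exception and cache its absence, instead of materializing an empty index file. All names here (fetch_txn_index, rsm_lookup) are invented for illustration; this is not the RemoteIndexCache code.

```python
# Hypothetical sketch: cache "index absent" on ResourceNotFoundException
# rather than creating an empty index file in the cache.

class ResourceNotFoundException(Exception):
    """Raised by the (hypothetical) RSM lookup when no index exists."""

def fetch_txn_index(rsm_lookup, segment_id, cache):
    if segment_id in cache:
        return cache[segment_id]
    try:
        index = rsm_lookup(segment_id)
    except ResourceNotFoundException:
        index = None  # remember absence; no empty file is ever created
    cache[segment_id] = index
    return index

cache = {}
def rsm_lookup(seg):
    raise ResourceNotFoundException(seg)

print(fetch_txn_index(rsm_lookup, "seg-1", cache))  # None
```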
[jira] [Created] (KAFKA-15453) Enable `testFencingOnTransactionExpiration` in TransactionsWithTieredStoreTest
Satish Duggana created KAFKA-15453:

Summary: Enable `testFencingOnTransactionExpiration` in TransactionsWithTieredStoreTest
Key: KAFKA-15453
URL: https://issues.apache.org/jira/browse/KAFKA-15453
Project: Kafka
Issue Type: Test
Components: core
Reporter: Satish Duggana
Fix For: 3.7.0
[jira] [Resolved] (KAFKA-15352) Ensure consistency while deleting the remote log segments
Satish Duggana resolved KAFKA-15352.

Resolution: Fixed

Key: KAFKA-15352
URL: https://issues.apache.org/jira/browse/KAFKA-15352
Project: Kafka
Issue Type: Sub-task
Reporter: Kamal Chandraprakash
Assignee: Christo Lolov
Priority: Blocker
Fix For: 3.6.0

In KAFKA-14888, the remote log segments that breach the retention time/size are deleted before the log-start-offset is updated. If, in the middle of deletion, a consumer starts to read from the beginning of the topic, it will fail to read the messages and an UNKNOWN_SERVER_ERROR will be returned to the consumer. To ensure consistency, similar to local log segments, where the actual segments are deleted only after {{segment.delete.delay.ms}}, we should update the log-start-offset first before deleting the remote log segment. See [PR#13561|https://github.com/apache/kafka/pull/13561] and this [comment|https://github.com/apache/kafka/pull/13561#discussion_r1293086543] for more details.

Case 2: the old leader (now a follower) can delete a remote log segment in the middle of a leader election. We need to update the log-start-offset metadata for this case as well. See this comment: https://github.com/apache/kafka/pull/13561#discussion_r1293081560
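The ordering argued for above can be sketched as follows. This is an illustrative model, not Kafka's implementation: segments are (base, end) tuples and the names are invented for the example.

```python
# Sketch of the consistency fix: advance log-start-offset *before* physically
# deleting remote segments, so a consumer reading from the beginning is
# redirected past the range being deleted instead of hitting a missing segment.

class PartitionState:
    def __init__(self, log_start_offset, remote_segments):
        self.log_start_offset = log_start_offset
        self.remote_segments = remote_segments  # list of (base, end) tuples

def expire_segments(state: PartitionState, breached):
    # Step 1: move log-start-offset past every segment about to be deleted.
    new_start = max(end + 1 for (_, end) in breached)
    state.log_start_offset = max(state.log_start_offset, new_start)
    # Step 2: only now physically delete the remote segments.
    for seg in breached:
        state.remote_segments.remove(seg)

state = PartitionState(0, [(0, 99), (100, 199), (200, 299)])
expire_segments(state, [(0, 99), (100, 199)])
print(state.log_start_offset)  # 200
```

Doing step 1 first means a fetch at an offset below 200 gets an out-of-range answer (and resets to the new log start) rather than an internal error from a half-deleted segment.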
[jira] [Resolved] (KAFKA-15245) Improve Tiered Storage Metrics
Satish Duggana resolved KAFKA-15245.

Resolution: Fixed

Key: KAFKA-15245
URL: https://issues.apache.org/jira/browse/KAFKA-15245
Project: Kafka
Issue Type: Improvement
Reporter: Abhijeet Kumar
Assignee: Abhijeet Kumar
Priority: Major
Fix For: 3.7.0

Rename existing tiered storage metrics to remove ambiguity and add metrics for the RemoteIndexCache.
[jira] [Created] (KAFKA-15439) Add transaction tests enabled with tiered storage
Satish Duggana created KAFKA-15439:

Summary: Add transaction tests enabled with tiered storage
Key: KAFKA-15439
URL: https://issues.apache.org/jira/browse/KAFKA-15439
Project: Kafka
Issue Type: Test
Components: core
Reporter: Satish Duggana
Assignee: Kamal Chandraprakash
[jira] [Resolved] (KAFKA-15293) Update metrics doc to add tiered storage metrics
Satish Duggana resolved KAFKA-15293.

Resolution: Fixed

Key: KAFKA-15293
URL: https://issues.apache.org/jira/browse/KAFKA-15293
Project: Kafka
Issue Type: Sub-task
Components: documentation
Reporter: Abhijeet Kumar
Assignee: Abhijeet Kumar
Priority: Critical
Fix For: 3.6.0
[jira] [Created] (KAFKA-15434) Tiered Storage Quotas
Satish Duggana created KAFKA-15434:

Summary: Tiered Storage Quotas
Key: KAFKA-15434
URL: https://issues.apache.org/jira/browse/KAFKA-15434
Project: Kafka
Issue Type: Improvement
Reporter: Satish Duggana
Assignee: Abhijeet Kumar
[jira] [Created] (KAFKA-15433) Follower fetch from tiered offset
Satish Duggana created KAFKA-15433:

Summary: Follower fetch from tiered offset
Key: KAFKA-15433
URL: https://issues.apache.org/jira/browse/KAFKA-15433
Project: Kafka
Issue Type: Improvement
Reporter: Satish Duggana
Assignee: Abhijeet Kumar
[jira] [Resolved] (KAFKA-15260) RLM Task should wait until RLMM is initialized before copying segments to remote
Satish Duggana resolved KAFKA-15260.

Resolution: Fixed

Key: KAFKA-15260
URL: https://issues.apache.org/jira/browse/KAFKA-15260
Project: Kafka
Issue Type: Sub-task
Reporter: Abhijeet Kumar
Assignee: Abhijeet Kumar
Priority: Blocker
Fix For: 3.6.0

The RLM task uploads segments to remote storage for its leader partitions. After each upload it sends a COPY_SEGMENT_STARTED message to the topic-based RLMM (TBRLMM) topic and waits for the TBRLMM to consume the message before continuing. If the RLMM is not yet initialized, the TBRLMM may not consume the message within the stipulated time; the wait times out and the RLM task retries later. It may take a few minutes for the TBRLMM to initialize, during which the RLM task will keep timing out. Instead, the RLM task should wait until the RLMM is initialized before attempting to copy segments to remote storage.
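The proposed fix amounts to an initialization gate in front of the copy path. Here is a minimal Python sketch under assumed structure (the class and method names are invented; this is not the actual RemoteLogManager code):

```python
# Sketch: gate segment copies on RLMM initialization instead of letting each
# COPY_SEGMENT_STARTED publish time out and retry while the RLMM warms up.

import threading

class TopicBasedRLMM:
    def __init__(self):
        self._initialized = threading.Event()

    def mark_initialized(self):
        # Called once the metadata topic consumer has caught up.
        self._initialized.set()

    def ensure_initialized(self, timeout_s: float) -> bool:
        # The RLM task blocks here before its first copy attempt.
        return self._initialized.wait(timeout_s)

rlmm = TopicBasedRLMM()
threading.Timer(0.05, rlmm.mark_initialized).start()  # init finishes shortly
print(rlmm.ensure_initialized(timeout_s=5.0))  # True
```

A single blocking wait at startup replaces repeated per-upload timeouts, which is both quieter in the logs and avoids redundant publish attempts.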
[jira] [Resolved] (KAFKA-9564) Integration Test framework for Tiered Storage
Satish Duggana resolved KAFKA-9564.

Resolution: Fixed

Key: KAFKA-9564
URL: https://issues.apache.org/jira/browse/KAFKA-9564
Project: Kafka
Issue Type: Sub-task
Reporter: Alexandre Dupriez
Assignee: Alexandre Dupriez
Priority: Major
[jira] [Resolved] (KAFKA-9565) Implementation of Tiered Storage SPI to integrate with S3
Satish Duggana resolved KAFKA-9565.

Resolution: Won't Fix

Key: KAFKA-9565
URL: https://issues.apache.org/jira/browse/KAFKA-9565
Project: Kafka
Issue Type: Sub-task
Reporter: Alexandre Dupriez
Assignee: Ivan Yurchenko
Priority: Major
[jira] [Resolved] (KAFKA-12458) Implementation of Tiered Storage Integration with Azure Storage (ADLS + Blob Storage)
Satish Duggana resolved KAFKA-12458.

Resolution: Won't Do

Key: KAFKA-12458
URL: https://issues.apache.org/jira/browse/KAFKA-12458
Project: Kafka
Issue Type: Sub-task
Reporter: Israel Ekpo
Assignee: Israel Ekpo
Priority: Major

Task to cover integration support for Azure Storage:
* Azure Blob Storage
* Azure Data Lake Store

Will split the task up later into distinct tracks and components.
[jira] [Created] (KAFKA-15420) Kafka Tiered Storage V1
Satish Duggana created KAFKA-15420:

Summary: Kafka Tiered Storage V1
Key: KAFKA-15420
URL: https://issues.apache.org/jira/browse/KAFKA-15420
Project: Kafka
Issue Type: Improvement
Reporter: Satish Duggana
Assignee: Satish Duggana
Fix For: 3.7.0
[jira] [Resolved] (KAFKA-13097) Handle the requests gracefully to publish the events in TopicBasedRemoteLogMetadataManager when it is not yet initialized.
Satish Duggana resolved KAFKA-13097.

Resolution: Invalid

Key: KAFKA-13097
URL: https://issues.apache.org/jira/browse/KAFKA-13097
Project: Kafka
Issue Type: Sub-task
Reporter: Satish Duggana
Assignee: Satish Duggana
Priority: Major
[jira] [Created] (KAFKA-15388) Handle topics that were having compaction as retention earlier are changed to delete only retention policy and onboarded to tiered storage.
Satish Duggana created KAFKA-15388:

Summary: Handle topics that were having compaction as retention earlier are changed to delete only retention policy and onboarded to tiered storage.
Key: KAFKA-15388
URL: https://issues.apache.org/jira/browse/KAFKA-15388
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana
[jira] [Resolved] (KAFKA-15386) Update log-start-offset in RLM task for a leader partition for the earliest leader epoch of the current leader's leader epoch lineage available in the remote storage.
Satish Duggana resolved KAFKA-15386.

Resolution: Duplicate

Key: KAFKA-15386
URL: https://issues.apache.org/jira/browse/KAFKA-15386
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana
Assignee: Kamal Chandraprakash
Priority: Major

Update the log-start-offset in the RLM task for a leader partition, for the earliest leader epoch of the current leader's leader-epoch lineage available in remote storage.
[jira] [Created] (KAFKA-15386) Update log-start-offset in RLM task for a leader partition for the earliest leader epoch of the current leader's leader epoch lineage available in the remote storage.
Satish Duggana created KAFKA-15386:

Summary: Update log-start-offset in RLM task for a leader partition for the earliest leader epoch of the current leader's leader epoch lineage available in the remote storage.
Key: KAFKA-15386
URL: https://issues.apache.org/jira/browse/KAFKA-15386
Project: Kafka
Issue Type: Task
Reporter: Satish Duggana
Assignee: Kamal Chandraprakash

Update the log-start-offset in the RLM task for a leader partition, for the earliest leader epoch of the current leader's leader-epoch lineage available in remote storage.
[jira] [Created] (KAFKA-15376) Revisit removing data earlier to the current leader for topics enabled with tiered storage.
Satish Duggana created KAFKA-15376:

Summary: Revisit removing data earlier to the current leader for topics enabled with tiered storage.
Key: KAFKA-15376
URL: https://issues.apache.org/jira/browse/KAFKA-15376
Project: Kafka
Issue Type: Task
Components: core
Reporter: Satish Duggana

Followup on the discussion thread: https://github.com/apache/kafka/pull/13561#discussion_r1288778006
[jira] [Created] (KAFKA-15313) Delete remote log segments partition asynchronously when a partition is deleted.
Satish Duggana created KAFKA-15313:

Summary: Delete remote log segments partition asynchronously when a partition is deleted.
Key: KAFKA-15313
URL: https://issues.apache.org/jira/browse/KAFKA-15313
Project: Kafka
Issue Type: Task
Components: core
Reporter: Satish Duggana
Assignee: Abhijeet Kumar

KIP-405 already covers the approach to delete remote log segments asynchronously through the controller and RLMM layers.
[jira] [Created] (KAFKA-15300) Include remote log size in the complete log size, and add local log size and remote log size separately in the kafka-log-dirs tool.
Satish Duggana created KAFKA-15300: -- Summary: Include remote log size in the complete log size, and add local log size and remote log size separately in the kafka-log-dirs tool. Key: KAFKA-15300 URL: https://issues.apache.org/jira/browse/KAFKA-15300 Project: Kafka Issue Type: Task Components: core Reporter: Satish Duggana Include remote log size in the complete log size, and add local log size and remote log size separately in the kafka-log-dirs tool. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15285) https://github.com/apache/kafka/pull/13990#issuecomment-1659256150
Satish Duggana created KAFKA-15285: -- Summary: https://github.com/apache/kafka/pull/13990#issuecomment-1659256150 Key: KAFKA-15285 URL: https://issues.apache.org/jira/browse/KAFKA-15285 Project: Kafka Issue Type: Task Reporter: Satish Duggana The storage module tests have been failing intermittently for the last couple of weeks, as mentioned in the PR [thread|https://github.com/apache/kafka/pull/13990#issuecomment-1659256150]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15265) Remote copy/fetch quotas for tiered storage.
Satish Duggana created KAFKA-15265: -- Summary: Remote copy/fetch quotas for tiered storage. Key: KAFKA-15265 URL: https://issues.apache.org/jira/browse/KAFKA-15265 Project: Kafka Issue Type: Improvement Components: core Reporter: Satish Duggana Assignee: Abhijeet Kumar -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15241) Compute tiered offset by keeping the respective epochs in scope.
Satish Duggana created KAFKA-15241: -- Summary: Compute tiered offset by keeping the respective epochs in scope. Key: KAFKA-15241 URL: https://issues.apache.org/jira/browse/KAFKA-15241 Project: Kafka Issue Type: Improvement Components: core Affects Versions: 3.6.0 Reporter: Satish Duggana Assignee: Kamal Chandraprakash This is a follow-up on the discussion [thread|https://github.com/apache/kafka/pull/14004#discussion_r1268911909]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15169) Add a test to make sure the remote index file is overwritten for the earlier existing(corrupted) files.
Satish Duggana created KAFKA-15169: -- Summary: Add a test to make sure the remote index file is overwritten for the earlier existing(corrupted) files. Key: KAFKA-15169 URL: https://issues.apache.org/jira/browse/KAFKA-15169 Project: Kafka Issue Type: Test Reporter: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15094) Add RemoteIndexCache metrics like misses/evictions/load-failures.
Satish Duggana created KAFKA-15094: -- Summary: Add RemoteIndexCache metrics like misses/evictions/load-failures. Key: KAFKA-15094 URL: https://issues.apache.org/jira/browse/KAFKA-15094 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Add metrics like hits/misses/evictions/load-failures for RemoteIndexCache. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-15047) Handle rolling segments when the active segment's retention is breached with tiered storage enabled.
Satish Duggana created KAFKA-15047: -- Summary: Handle rolling segments when the active segment's retention is breached with tiered storage enabled. Key: KAFKA-15047 URL: https://issues.apache.org/jira/browse/KAFKA-15047 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Kamal Chandraprakash -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (KAFKA-12384) Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch
[ https://issues.apache.org/jira/browse/KAFKA-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana reopened KAFKA-12384: > Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch > - > > Key: KAFKA-12384 > URL: https://issues.apache.org/jira/browse/KAFKA-12384 > Project: Kafka > Issue Type: Test > Components: core, unit tests >Reporter: Matthias J. Sax >Assignee: Chia-Ping Tsai >Priority: Critical > Labels: flaky-test > Fix For: 3.0.0 > > > {quote}org.opentest4j.AssertionFailedError: expected: <(0,0)> but was: > <(-1,-1)> at > org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at > org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62) at > org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182) at > org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177) at > org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1124) at > kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:172){quote} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14993) Improve TransactionIndex instance handling while copying to and fetching from RSM.
Satish Duggana created KAFKA-14993: -- Summary: Improve TransactionIndex instance handling while copying to and fetching from RSM. Key: KAFKA-14993 URL: https://issues.apache.org/jira/browse/KAFKA-14993 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Kamal Chandraprakash -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14915) Option to consume multiple partitions that have their data in remote storage for the target offsets.
Satish Duggana created KAFKA-14915: -- Summary: Option to consume multiple partitions that have their data in remote storage for the target offsets. Key: KAFKA-14915 URL: https://issues.apache.org/jira/browse/KAFKA-14915 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14912) Introduce a configuration for remote index cache size, preferably a dynamic config.
Satish Duggana created KAFKA-14912: -- Summary: Introduce a configuration for remote index cache size, preferably a dynamic config. Key: KAFKA-14912 URL: https://issues.apache.org/jira/browse/KAFKA-14912 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Kamal Chandraprakash -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14726) Move/rewrite LogReadInfo, LogOffsetSnapshot, and LogStartOffsetIncrementReason to storage module.
Satish Duggana created KAFKA-14726: -- Summary: Move/rewrite LogReadInfo, LogOffsetSnapshot, and LogStartOffsetIncrementReason to storage module. Key: KAFKA-14726 URL: https://issues.apache.org/jira/browse/KAFKA-14726 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Satish Duggana Move/rewrite LogReadInfo, LogOffsetSnapshot, and LogStartOffsetIncrementReason to storage module. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14714) Move/Rewrite RollParams, LogAppendInfo, and LeaderHwChange to storage module.
Satish Duggana created KAFKA-14714: -- Summary: Move/Rewrite RollParams, LogAppendInfo, and LeaderHwChange to storage module. Key: KAFKA-14714 URL: https://issues.apache.org/jira/browse/KAFKA-14714 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14708) Remove kafka.examples.Consumer dependency on ShutdownableThread
Satish Duggana created KAFKA-14708: -- Summary: Remove kafka.examples.Consumer dependency on ShutdownableThread Key: KAFKA-14708 URL: https://issues.apache.org/jira/browse/KAFKA-14708 Project: Kafka Issue Type: Task Reporter: Satish Duggana Remove "kafka.examples.Consumer" dependency on ShutdownableThread. The "examples" module should depend only on public APIs, not on server common/internal components. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14706) Move ShutdownableThread to server-commons module.
Satish Duggana created KAFKA-14706: -- Summary: Move ShutdownableThread to server-commons module. Key: KAFKA-14706 URL: https://issues.apache.org/jira/browse/KAFKA-14706 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14688) Move org.apache.kafka.server.log.internals to org.apache.kafka.storage.internals.log
Satish Duggana created KAFKA-14688: -- Summary: Move org.apache.kafka.server.log.internals to org.apache.kafka.storage.internals.log Key: KAFKA-14688 URL: https://issues.apache.org/jira/browse/KAFKA-14688 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Fix For: 3.5.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14625) CheckpointFile read and write API consistency
Satish Duggana created KAFKA-14625: -- Summary: CheckpointFile read and write API consistency Key: KAFKA-14625 URL: https://issues.apache.org/jira/browse/KAFKA-14625 Project: Kafka Issue Type: Improvement Components: core Reporter: Satish Duggana `CheckpointFile` has the read and write APIs below: write expects a Collection of items, but read returns a List of elements. It is worth reviewing these APIs and their usages to see whether they can be made consistent without introducing extra collection conversions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
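To make the asymmetry concrete, here is a minimal, hypothetical sketch of a symmetric read/write pair using `List` on both sides. The interface and class names are illustrative only, not Kafka's actual `CheckpointFile` API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a checkpoint API where write() and read() use the
// same collection type, so callers need no conversions between the two.
interface ConsistentCheckpoint<T> {
    void write(List<T> entries); // symmetric with read()
    List<T> read();
}

// In-memory stand-in for a file-backed checkpoint, just to show the shape.
class InMemoryCheckpoint<T> implements ConsistentCheckpoint<T> {
    private final List<T> stored = new ArrayList<>();

    @Override
    public void write(List<T> entries) {
        stored.clear();
        stored.addAll(entries); // a real implementation would persist to disk
    }

    @Override
    public List<T> read() {
        return new ArrayList<>(stored); // defensive copy for callers
    }
}
```

With both sides on `List`, a write followed by a read round-trips without any Collection-to-List conversion at call sites.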
[jira] [Resolved] (KAFKA-14613) Move BrokerReconfigurable/KafkaConfig to server-common module.
[ https://issues.apache.org/jira/browse/KAFKA-14613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved KAFKA-14613. Resolution: Won't Do > Move BrokerReconfigurable/KafkaConfig to server-common module. > -- > > Key: KAFKA-14613 > URL: https://issues.apache.org/jira/browse/KAFKA-14613 > Project: Kafka > Issue Type: Sub-task >Reporter: Satish Duggana >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14613) Move BrokerReconfigurable/KafkaConfig to server-common module.
Satish Duggana created KAFKA-14613: -- Summary: Move BrokerReconfigurable/KafkaConfig to server-common module. Key: KAFKA-14613 URL: https://issues.apache.org/jira/browse/KAFKA-14613 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14603) Move KafkaMetricsGroup to server-common module.
Satish Duggana created KAFKA-14603: -- Summary: Move KafkaMetricsGroup to server-common module. Key: KAFKA-14603 URL: https://issues.apache.org/jira/browse/KAFKA-14603 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14602) offsetDelta in BatchMetadata is an int but the values are computed as difference of offsets which are longs.
Satish Duggana created KAFKA-14602: -- Summary: offsetDelta in BatchMetadata is an int but the values are computed as difference of offsets which are longs. Key: KAFKA-14602 URL: https://issues.apache.org/jira/browse/KAFKA-14602 Project: Kafka Issue Type: Bug Components: core Reporter: Satish Duggana This is a follow-up of the discussion in https://github.com/apache/kafka/pull/13043#discussion_r1063071578 offsetDelta in BatchMetadata is an int. Because of this, ProducerAppendInfo may set a value that overflows. Ideally, this data type should be long instead of int. -- This message was sent by Atlassian Jira (v8.20.10#820010)
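A minimal sketch of the overflow (class and method names here are illustrative, not the actual `BatchMetadata` code): narrowing a long offset difference to an int silently wraps once the delta exceeds Integer.MAX_VALUE.

```java
// Illustrative only: mirrors the shape of the bug, not Kafka's actual code.
public class OffsetDeltaDemo {
    // Buggy shape: long difference narrowed to int, which silently wraps.
    static int unsafeDelta(long baseOffset, long lastOffset) {
        return (int) (lastOffset - baseOffset);
    }

    // Safer shape: keep the delta as a long, as the issue suggests.
    static long safeDelta(long baseOffset, long lastOffset) {
        return lastOffset - baseOffset;
    }

    public static void main(String[] args) {
        long base = 0L;
        long last = 3_000_000_000L; // delta exceeds Integer.MAX_VALUE
        System.out.println(unsafeDelta(base, last)); // wraps to a negative value
        System.out.println(safeDelta(base, last));   // 3000000000
    }
}
```

Java performs this narrowing conversion without any runtime error, which is why the overflow can go unnoticed until a consumer sees a nonsensical negative delta.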
[jira] [Created] (KAFKA-14574) Add doc generation as part of the CI build process
Satish Duggana created KAFKA-14574: -- Summary: Add doc generation as part of the CI build process Key: KAFKA-14574 URL: https://issues.apache.org/jira/browse/KAFKA-14574 Project: Kafka Issue Type: Improvement Reporter: Satish Duggana Add doc generation as part of the CI build process so that any doc generation errors are caught during the build. https://github.com/apache/kafka/pull/13079#issuecomment-1371663717 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14558) Move LastRecord, TxnMetadata, BatchMetadata, ProducerStateEntry, and ProducerAppendInfo to the storage module.
Satish Duggana created KAFKA-14558: -- Summary: Move LastRecord, TxnMetadata, BatchMetadata, ProducerStateEntry, and ProducerAppendInfo to the storage module. Key: KAFKA-14558 URL: https://issues.apache.org/jira/browse/KAFKA-14558 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Fix For: 3.5.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14551) Move LeaderEpochFileCache to storage module
Satish Duggana created KAFKA-14551: -- Summary: Move LeaderEpochFileCache to storage module Key: KAFKA-14551 URL: https://issues.apache.org/jira/browse/KAFKA-14551 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Move LeaderEpochFileCache and its dependencies to storage module. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14550) MoveSnapshotFile and CorruptSnapshotException to storage module
Satish Duggana created KAFKA-14550: -- Summary: MoveSnapshotFile and CorruptSnapshotException to storage module Key: KAFKA-14550 URL: https://issues.apache.org/jira/browse/KAFKA-14550 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-9990) Supporting transactions in tiered storage
[ https://issues.apache.org/jira/browse/KAFKA-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved KAFKA-9990. --- Resolution: Fixed > Supporting transactions in tiered storage > - > > Key: KAFKA-9990 > URL: https://issues.apache.org/jira/browse/KAFKA-9990 > Project: Kafka > Issue Type: Sub-task >Reporter: Satish Duggana >Assignee: Satish Duggana >Priority: Major > Fix For: 3.5.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14467) Add a test to validate the replica state after processing the OFFSET_MOVED_TO_TIERED_STORAGE error, especially for the transactional state
Satish Duggana created KAFKA-14467: -- Summary: Add a test to validate the replica state after processing the OFFSET_MOVED_TO_TIERED_STORAGE error, especially for the transactional state Key: KAFKA-14467 URL: https://issues.apache.org/jira/browse/KAFKA-14467 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Satish Duggana https://github.com/apache/kafka/pull/11390#pullrequestreview-1210993072 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14466) Refactor ClassLoaderAwareRemoteStorageManager.scala to ClassLoaderAwareRemoteStorageManager.java and move it to storage module.
Satish Duggana created KAFKA-14466: -- Summary: Refactor ClassLoaderAwareRemoteStorageManager.scala to ClassLoaderAwareRemoteStorageManager.java and move it to storage module. Key: KAFKA-14466 URL: https://issues.apache.org/jira/browse/KAFKA-14466 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana https://github.com/apache/kafka/pull/11390#discussion_r1043982906 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-13560) Load indexes and data in async manner in the critical path of replica fetcher threads.
Satish Duggana created KAFKA-13560: -- Summary: Load indexes and data in async manner in the critical path of replica fetcher threads. Key: KAFKA-13560 URL: https://issues.apache.org/jira/browse/KAFKA-13560 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Fix For: 3.2.0 https://github.com/apache/kafka/pull/11390#discussion_r762366976 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13369) Follower fetch protocol enhancements for tiered storage.
Satish Duggana created KAFKA-13369: -- Summary: Follower fetch protocol enhancements for tiered storage. Key: KAFKA-13369 URL: https://issues.apache.org/jira/browse/KAFKA-13369 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13355) Shut down broker eventually when unrecoverable exceptions like IOException are encountered in RLMM.
Satish Duggana created KAFKA-13355: -- Summary: Shut down broker eventually when unrecoverable exceptions like IOException are encountered in RLMM. Key: KAFKA-13355 URL: https://issues.apache.org/jira/browse/KAFKA-13355 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Add a mechanism to catch unrecoverable exceptions like IOException from RLMM and shut down the broker, as is done in the log layer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-9569) RemoteStorageManager implementation for HDFS storage.
[ https://issues.apache.org/jira/browse/KAFKA-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved KAFKA-9569. --- Resolution: Fixed > RemoteStorageManager implementation for HDFS storage. > - > > Key: KAFKA-9569 > URL: https://issues.apache.org/jira/browse/KAFKA-9569 > Project: Kafka > Issue Type: Sub-task > Components: core >Reporter: Satish Duggana >Assignee: Ying Zheng >Priority: Major > > This is about implementing `RemoteStorageManager` for HDFS to verify the > proposed SPIs are sufficient. It looks like the existing RSM interface should > be sufficient. If needed, we will discuss any required changes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13097) Handle the requests gracefully to publish the events in TopicBasedRemoteLogMetadataManager when it is not yet initialized.
Satish Duggana created KAFKA-13097: -- Summary: Handle the requests gracefully to publish the events in TopicBasedRemoteLogMetadataManager when it is not yet initialized. Key: KAFKA-13097 URL: https://issues.apache.org/jira/browse/KAFKA-13097 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-12970) Make tiered storage related schemas adopt flexible versions feature.
[ https://issues.apache.org/jira/browse/KAFKA-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved KAFKA-12970. Resolution: Fixed This is already addressed as mentioned in the comment: https://issues.apache.org/jira/browse/KAFKA-12970?focusedCommentId=17365231&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17365231 > Make tiered storage related schemas adopt flexible versions feature. > - > > Key: KAFKA-12970 > URL: https://issues.apache.org/jira/browse/KAFKA-12970 > Project: Kafka > Issue Type: Sub-task >Reporter: Satish Duggana >Assignee: Satish Duggana >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12988) RLMM add/updateRemoteLogSegmentMetadata and putRemotePartitionDeleteMetadata to be changed with asynchronous APIs.
Satish Duggana created KAFKA-12988: -- Summary: RLMM add/updateRemoteLogSegmentMetadata and putRemotePartitionDeleteMetadata to be changed with asynchronous APIs. Key: KAFKA-12988 URL: https://issues.apache.org/jira/browse/KAFKA-12988 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12970) Make tiered storage related schemas adopt flexible versions feature.
Satish Duggana created KAFKA-12970: -- Summary: Make tiered storage related schemas adopt flexible versions feature. Key: KAFKA-12970 URL: https://issues.apache.org/jira/browse/KAFKA-12970 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12969) Add cluster or broker level config for topic level tiered storage configs.
Satish Duggana created KAFKA-12969: -- Summary: Add cluster or broker level config for topic level tiered storage configs. Key: KAFKA-12969 URL: https://issues.apache.org/jira/browse/KAFKA-12969 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12816) Add tier storage configs.
Satish Duggana created KAFKA-12816: -- Summary: Add tier storage configs. Key: KAFKA-12816 URL: https://issues.apache.org/jira/browse/KAFKA-12816 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Add all the tiered storage related configurations, including those for the remote log manager, remote storage manager, and remote log metadata manager. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12802) Add a file based cache for consumed remote log metadata for each partition to avoid consuming again in case of broker restarts.
Satish Duggana created KAFKA-12802: -- Summary: Add a file based cache for consumed remote log metadata for each partition to avoid consuming again in case of broker restarts. Key: KAFKA-12802 URL: https://issues.apache.org/jira/browse/KAFKA-12802 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Fix For: 3.0.0 Add a file based cache for consumed remote log metadata for each partition to avoid consuming again in case of broker restarts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12758) Create a new `server-common` module and move ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and BytesApiMessageSerde to that module.
Satish Duggana created KAFKA-12758: -- Summary: Create a new `server-common` module and move ApiMessageAndVersion, RecordSerde, AbstractApiMessageSerde, and BytesApiMessageSerde to that module. Key: KAFKA-12758 URL: https://issues.apache.org/jira/browse/KAFKA-12758 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12757) Move server related common classes into a separate `server-common` module.
Satish Duggana created KAFKA-12757: -- Summary: Move server related common classes into a separate `server-common` module. Key: KAFKA-12757 URL: https://issues.apache.org/jira/browse/KAFKA-12757 Project: Kafka Issue Type: Improvement Reporter: Satish Duggana Move server related common classes into a separate `server-common` module. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12641) Clear RemoteLogLeaderEpochState entry when it becomes empty.
Satish Duggana created KAFKA-12641: -- Summary: Clear RemoteLogLeaderEpochState entry when it becomes empty. Key: KAFKA-12641 URL: https://issues.apache.org/jira/browse/KAFKA-12641 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana https://github.com/apache/kafka/pull/10218#discussion_r609895193 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12429) Serdes for all message types in internal topic which is used in default implementation for RLMM.
Satish Duggana created KAFKA-12429: -- Summary: Serdes for all message types in internal topic which is used in default implementation for RLMM. Key: KAFKA-12429 URL: https://issues.apache.org/jira/browse/KAFKA-12429 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Satish Duggana The RLMM default implementation is based on storing all the metadata in an internal topic. We need serdes and the format of the message types that are stored in the topic. You can see more details in the [KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-MessageFormat] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12368) In-memory implementation of RSM and RLMM.
Satish Duggana created KAFKA-12368: -- Summary: In-memory implementation of RSM and RLMM. Key: KAFKA-12368 URL: https://issues.apache.org/jira/browse/KAFKA-12368 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9990) Supporting transactions in tiered storage
Satish Duggana created KAFKA-9990: - Summary: Supporting transactions in tiered storage Key: KAFKA-9990 URL: https://issues.apache.org/jira/browse/KAFKA-9990 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9579) RLM fetch implementation by adding respective purgatory
Satish Duggana created KAFKA-9579: - Summary: RLM fetch implementation by adding respective purgatory Key: KAFKA-9579 URL: https://issues.apache.org/jira/browse/KAFKA-9579 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Ying Zheng -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9569) RSM implementation for HDFS storage.
Satish Duggana created KAFKA-9569: - Summary: RSM implementation for HDFS storage. Key: KAFKA-9569 URL: https://issues.apache.org/jira/browse/KAFKA-9569 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana Assignee: Ying Zheng This is about implementing `RemoteStorageManager` for HDFS to verify the proposed SPIs are sufficient. It looks like the existing RSM interface should be sufficient. If needed, we will discuss any required changes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-9554) Define the SPI for Tiered Storage framework
[ https://issues.apache.org/jira/browse/KAFKA-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved KAFKA-9554. --- Resolution: Duplicate Duplicate of https://issues.apache.org/jira/browse/KAFKA-9548 > Define the SPI for Tiered Storage framework > --- > > Key: KAFKA-9554 > URL: https://issues.apache.org/jira/browse/KAFKA-9554 > Project: Kafka > Issue Type: Sub-task > Components: clients, core >Reporter: Alexandre Dupriez >Assignee: Alexandre Dupriez >Priority: Major > > The goal of this task is to define the SPI (service provider interfaces) > which will be used by vendors to implement plug-ins to communicate with > specific storage system. > Done means: > * Package with interfaces and key objects available and published for review. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9550) RemoteLogManager implementation
Satish Duggana created KAFKA-9550: - Summary: RemoteLogManager implementation Key: KAFKA-9550 URL: https://issues.apache.org/jira/browse/KAFKA-9550 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Satish Duggana Implementation of RLM as mentioned in the HLD section of KIP-405 [https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP-405:KafkaTieredStorage-High-leveldesign] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9549) Local storage implementations for RSM and RLMM which can be used in tests.
Satish Duggana created KAFKA-9549: - Summary: Local storage implementations for RSM and RLMM which can be used in tests. Key: KAFKA-9549 URL: https://issues.apache.org/jira/browse/KAFKA-9549 Project: Kafka Issue Type: Sub-task Reporter: Satish Duggana Assignee: Alexandre Dupriez -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9548) RemoteStorageManager and RemoteLogMetadataManager interfaces.
Satish Duggana created KAFKA-9548: - Summary: RemoteStorageManager and RemoteLogMetadataManager interfaces. Key: KAFKA-9548 URL: https://issues.apache.org/jira/browse/KAFKA-9548 Project: Kafka Issue Type: Sub-task Components: core Reporter: Satish Duggana -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-8733) Offline partitions occur when leader's disk is slow in reads while responding to follower fetch requests.
Satish Duggana created KAFKA-8733: - Summary: Offline partitions occur when leader's disk is slow in reads while responding to follower fetch requests. Key: KAFKA-8733 URL: https://issues.apache.org/jira/browse/KAFKA-8733 Project: Kafka Issue Type: Bug Components: core Affects Versions: 1.1.2, 2.4.0 Reporter: Satish Duggana Assignee: Satish Duggana We found the offline partitions issue multiple times on some of the hosts in our clusters. After going through the broker logs and the hosts' disk stats, it looks like this issue occurs whenever the read/write operations take more time on that disk. In a particular case where the read time exceeds replica.lag.time.max.ms, follower replicas go out of sync because their earlier fetch requests are stuck reading the local log and their fetch status is not yet updated, as shown in the `ReplicaManager` code below. If reading data from the log takes longer than replica.lag.time.max.ms, all the replicas go out of sync and the partition becomes offline if min.insync.replicas > 1 and unclean leader election is disabled.
{code:java}
def readFromLog(): Seq[(TopicPartition, LogReadResult)] = {
  val result = readFromLocalLog( // this call took more than `replica.lag.time.max.ms`
    replicaId = replicaId,
    fetchOnlyFromLeader = fetchOnlyFromLeader,
    readOnlyCommitted = fetchOnlyCommitted,
    fetchMaxBytes = fetchMaxBytes,
    hardMaxBytesLimit = hardMaxBytesLimit,
    readPartitionInfo = fetchInfos,
    quota = quota,
    isolationLevel = isolationLevel)
  if (isFromFollower)
    // fetch time gets updated here, but maybeShrinkIsr may have already run
    // and removed the replica from the ISR
    updateFollowerLogReadResults(replicaId, result)
  else
    result
}
val logReadResults = readFromLog()
{code}
I will raise a KIP describing options on how to handle this scenario. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (KAFKA-7742) DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API.
Satish Duggana created KAFKA-7742: - Summary: DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API. Key: KAFKA-7742 URL: https://issues.apache.org/jira/browse/KAFKA-7742 Project: Kafka Issue Type: Bug Components: security Reporter: Satish Duggana Assignee: Satish Duggana DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using `removeToken(String tokenId)`[1] API. [1] https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java#L84 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
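A minimal sketch of the bug's shape, assuming a primary map keyed by token id plus a secondary index keyed by HMAC. The class and field names here are illustrative, not the actual DelegationTokenCache internals: the point is that removal by token id must also clear the secondary index, or stale entries accumulate.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not Kafka's actual DelegationTokenCache.
class TokenCacheSketch {
    private final Map<String, String> hmacByTokenId = new HashMap<>(); // tokenId -> hmac
    private final Map<String, String> tokenIdByHmac = new HashMap<>(); // hmac -> tokenId

    void addToken(String tokenId, String hmac) {
        hmacByTokenId.put(tokenId, hmac);
        tokenIdByHmac.put(hmac, tokenId);
    }

    // The fix the issue calls for: removing by tokenId also clears the
    // HMAC-keyed entry, instead of leaving it behind in the second map.
    void removeToken(String tokenId) {
        String hmac = hmacByTokenId.remove(tokenId);
        if (hmac != null) {
            tokenIdByHmac.remove(hmac);
        }
    }

    boolean containsHmac(String hmac) {
        return tokenIdByHmac.containsKey(hmac);
    }
}
```

`Map.remove` returns the previous value (or null), so the HMAC needed to clean up the secondary index falls out of the primary removal for free.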
[jira] [Created] (KAFKA-7219) Add topic/partition level metrics.
Satish Duggana created KAFKA-7219: - Summary: Add topic/partition level metrics. Key: KAFKA-7219 URL: https://issues.apache.org/jira/browse/KAFKA-7219 Project: Kafka Issue Type: Improvement Components: metrics Reporter: Satish Duggana Assignee: Satish Duggana Currently, Kafka generates different metrics for topics on a broker:
- MessagesInPerSec
- BytesInPerSec
- BytesOutPerSec
- BytesRejectedPerSec
- ReplicationBytesInPerSec
- ReplicationBytesOutPerSec
- FailedProduceRequestsPerSec
- FailedFetchRequestsPerSec
- TotalProduceRequestsPerSec
- TotalFetchRequestsPerSec
- FetchMessageConversionsPerSec
- ProduceMessageConversionsPerSec
Add metrics for individual partitions instead of having them only at the topic level. Some of these partition-level metrics would let monitoring applications track individual topic/partitions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7216) Exception while running kafka-acls.sh from 1.0 env on target Kafka env with 1.1.1
Satish Duggana created KAFKA-7216: - Summary: Exception while running kafka-acls.sh from 1.0 env on target Kafka env with 1.1.1 Key: KAFKA-7216 URL: https://issues.apache.org/jira/browse/KAFKA-7216 Project: Kafka Issue Type: Bug Affects Versions: 1.1.1, 1.1.0 Reporter: Satish Duggana When `kafka-acls.sh` is run with SimpleAclAuthorizer against a target Kafka cluster on version 1.1.1, it throws the below error.
{code:java}
kafka.common.KafkaException: DelegationToken not a valid resourceType name. The valid names are Topic,Group,Cluster,TransactionalId
	at kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
	at kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
	at scala.Option.getOrElse(Option.scala:121)
	at kafka.security.auth.ResourceType$.fromString(ResourceType.scala:56)
	at kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:233)
	at kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:232)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply$mcV$sp(SimpleAclAuthorizer.scala:232)
	at kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
	at kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:216)
	at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:224)
	at kafka.security.auth.SimpleAclAuthorizer.loadCache(SimpleAclAuthorizer.scala:230)
	at kafka.security.auth.SimpleAclAuthorizer.configure(SimpleAclAuthorizer.scala:114)
	at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:83)
	at kafka.admin.AclCommand$.addAcl(AclCommand.scala:93)
	at kafka.admin.AclCommand$.main(AclCommand.scala:53)
	at kafka.admin.AclCommand.main(AclCommand.scala)
{code}
This happens because the tool reads all the resource types registered under the ZooKeeper path and throws an error when the `DelegationToken` resource is not defined in the `ResourceType` of the client's Kafka version (which is earlier than 1.1.x). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
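The parsing behavior in the stack trace can be illustrated with a toy enum; this `ResourceKind` is a stand-in for illustration, not Kafka's `kafka.security.auth.ResourceType`. A strict `fromString` throws on any name added in a newer broker version, while a lenient variant would let older tooling skip entries it does not understand and still load the rest of the ACL cache.

```java
import java.util.Optional;

// Hypothetical stand-in enum; "DelegationToken" was added in a newer version
// and is deliberately absent here, as it was for pre-1.1 clients.
public class ResourceTypeSketch {
    enum ResourceKind { TOPIC, GROUP, CLUSTER, TRANSACTIONAL_ID }

    // Strict parse, mirroring the behavior in the stack trace: unknown -> throw.
    static ResourceKind fromString(String name) {
        for (ResourceKind k : ResourceKind.values()) {
            if (k.name().replace("_", "").equalsIgnoreCase(name)) {
                return k;
            }
        }
        throw new IllegalArgumentException(name + " not a valid resourceType name.");
    }

    // Tolerant parse: names from newer versions come back empty instead of
    // aborting, so old tooling can ignore resource types it does not know.
    static Optional<ResourceKind> fromStringLenient(String name) {
        try {
            return Optional.of(fromString(name));
        } catch (IllegalArgumentException e) {
            return Optional.empty();
        }
    }
}
```

Whether skipping unknown resource types is acceptable depends on the tool; for a read-and-cache path like the one in the trace, ignoring unknown names is usually safer than failing the whole command.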
[jira] [Commented] (KAFKA-4741) Memory leak in RecordAccumulator.append
[ https://issues.apache.org/jira/browse/KAFKA-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858059#comment-15858059 ] Satish Duggana commented on KAFKA-4741: --- [~ijuma] Thanks for merging the PR. Can you set me as the assignee for this JIRA? It did not allow me to do that.
> Memory leak in RecordAccumulator.append
> ---
>
> Key: KAFKA-4741
> URL: https://issues.apache.org/jira/browse/KAFKA-4741
> Project: Kafka
> Issue Type: Bug
> Components: clients
> Reporter: Satish Duggana
> Fix For: 0.10.3.0
>
> RecordAccumulator creates a `ByteBuffer` from the free memory pool. This buffer should be deallocated when the invocation encounters or throws an exception.
> I added TODO comments in the code below marking the places where that buffer should be deallocated.
> {code:title=RecordAccumulator.java|borderStyle=solid}
> ByteBuffer buffer = free.allocate(size, maxTimeToBlock);
> synchronized (dq) {
>     // Need to check if producer is closed again after grabbing the dequeue lock.
>     if (closed)
>         // todo buffer should be cleared.
>         throw new IllegalStateException("Cannot send after the producer is closed.");
>     // todo buffer should be cleared up when tryAppend throws an Exception
>     RecordAppendResult appendResult = tryAppend(timestamp, key, value, callback, dq);
>     if (appendResult != null) {
>         // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
>         free.deallocate(buffer);
>         return appendResult;
>     }
> {code}
> I will raise a PR for this soon.
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (KAFKA-4741) Memory leak in RecordAccumulator#append
Satish Duggana created KAFKA-4741: - Summary: Memory leak in RecordAccumulator#append Key: KAFKA-4741 URL: https://issues.apache.org/jira/browse/KAFKA-4741 Project: Kafka Issue Type: Bug Components: clients Reporter: Satish Duggana RecordAccumulator creates a `ByteBuffer` from the free memory pool. This buffer should be deallocated when the invocation encounters or throws an exception. I added TODO comments in the code below marking the places where that buffer should be deallocated.
{code:title=RecordAccumulator.java|borderStyle=solid}
ByteBuffer buffer = free.allocate(size, maxTimeToBlock);
synchronized (dq) {
    // Need to check if producer is closed again after grabbing the dequeue lock.
    if (closed)
        // todo buffer should be cleared.
        throw new IllegalStateException("Cannot send after the producer is closed.");
    // todo buffer should be cleared up when tryAppend throws an Exception
    RecordAppendResult appendResult = tryAppend(timestamp, key, value, callback, dq);
    if (appendResult != null) {
        // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
        free.deallocate(buffer);
        return appendResult;
    }
{code}
I will raise a PR for this soon. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
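One way to make every exit path leak-safe is a try/finally hand-off pattern, sketched below. The `BufferPool` here is a toy stand-in for the producer's free-memory pool, not Kafka's class, and `append` elides the real batching logic: allocate, and unless the buffer was successfully handed to a batch, deallocate in `finally`, which also covers exception paths like the closed-producer check.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the leak-safe allocation pattern; not Kafka code.
public class AppendSketch {
    static class BufferPool {
        int outstanding = 0; // buffers allocated but not yet returned
        ByteBuffer allocate(int size) { outstanding++; return ByteBuffer.allocate(size); }
        void deallocate(ByteBuffer b) { outstanding--; }
    }

    final BufferPool free = new BufferPool();
    volatile boolean closed = false;

    void append(int size) {
        ByteBuffer buffer = free.allocate(size);
        boolean handedOff = false; // set true only when a batch takes ownership
        try {
            if (closed) {
                throw new IllegalStateException("Cannot send after the producer is closed.");
            }
            // ... tryAppend(buffer) would go here; on success: handedOff = true ...
        } finally {
            if (!handedOff) {
                free.deallocate(buffer); // runs on normal and exception exits alike
            }
        }
    }
}
```

Compared with sprinkling `deallocate` calls before each `throw`, the single `finally` block cannot be missed when a new early return or a new exception source (such as `tryAppend`) is added later.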
[jira] [Updated] (KAFKA-4741) Memory leak in RecordAccumulator.append
[ https://issues.apache.org/jira/browse/KAFKA-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana updated KAFKA-4741: -- Summary: Memory leak in RecordAccumulator.append (was: Memory leak in RecordAccumulator#append)
> Memory leak in RecordAccumulator.append
> ---
>
> Key: KAFKA-4741
> URL: https://issues.apache.org/jira/browse/KAFKA-4741
> Project: Kafka
> Issue Type: Bug
> Components: clients
> Reporter: Satish Duggana
>
> RecordAccumulator creates a `ByteBuffer` from the free memory pool. This buffer should be deallocated when the invocation encounters or throws an exception.
> I added TODO comments in the code below marking the places where that buffer should be deallocated.
> {code:title=RecordAccumulator.java|borderStyle=solid}
> ByteBuffer buffer = free.allocate(size, maxTimeToBlock);
> synchronized (dq) {
>     // Need to check if producer is closed again after grabbing the dequeue lock.
>     if (closed)
>         // todo buffer should be cleared.
>         throw new IllegalStateException("Cannot send after the producer is closed.");
>     // todo buffer should be cleared up when tryAppend throws an Exception
>     RecordAppendResult appendResult = tryAppend(timestamp, key, value, callback, dq);
>     if (appendResult != null) {
>         // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
>         free.deallocate(buffer);
>         return appendResult;
>     }
> {code}
> I will raise a PR for this soon.
-- This message was sent by Atlassian JIRA (v6.3.15#6346)