[jira] [Updated] (KAFKA-16607) Have metrics implementation include the new state
[ https://issues.apache.org/jira/browse/KAFKA-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-16607:
    Summary: Have metrics implementation include the new state  (was: Update the KIP and metrics implementation to include the new state)

> Have metrics implementation include the new state
>
> Key: KAFKA-16607
> URL: https://issues.apache.org/jira/browse/KAFKA-16607
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Priority: Major
>
> KafkaRaftMetrics exposes a current-state metric that needs to be updated to include the prospective state.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Assigned] (KAFKA-17641) Update vote RPC with new pre-vote field
[ https://issues.apache.org/jira/browse/KAFKA-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-17641:
    Assignee: Alyssa Huang

> Update vote RPC with new pre-vote field
>
> Key: KAFKA-17641
> URL: https://issues.apache.org/jira/browse/KAFKA-17641
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Major
>
> Brush off [https://github.com/apache/kafka/pull/15231/files] and handle v2 (addition of the pre-vote field) of the vote RPC. Handle receiving pre-vote = true. Include KafkaRaftClientPreVoteTest.
[jira] [Created] (KAFKA-17675) Add tests to RaftEventSimulationTest
Alyssa Huang created KAFKA-17675:

Summary: Add tests to RaftEventSimulationTest
Key: KAFKA-17675
URL: https://issues.apache.org/jira/browse/KAFKA-17675
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang
[jira] [Updated] (KAFKA-17642) Add ProspectiveState and ProspectiveStateTest
[ https://issues.apache.org/jira/browse/KAFKA-17642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-17642:
    Description: (Most likely needs to be grouped with https://issues.apache.org/jira/browse/KAFKA-17643) The transition to ProspectiveState does not need to be included in this task  (was: The transition to ProspectiveState does not need to be included in this task)

> Add ProspectiveState and ProspectiveStateTest
>
> Key: KAFKA-17642
> URL: https://issues.apache.org/jira/browse/KAFKA-17642
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Alyssa Huang
> Priority: Major
>
> (Most likely needs to be grouped with https://issues.apache.org/jira/browse/KAFKA-17643) The transition to ProspectiveState does not need to be included in this task.
[jira] [Updated] (KAFKA-16607) Update the KIP and metrics implementation to include the new state
[ https://issues.apache.org/jira/browse/KAFKA-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-16607:
    Description: KafkaRaftMetrics exposes a current-state metric that needs to be updated to include the prospective state.  (was: KafkaRaftMetrics exposes a current-state metrics that needs to be updated to include the prospective state.)

> Update the KIP and metrics implementation to include the new state
>
> Key: KAFKA-16607
> URL: https://issues.apache.org/jira/browse/KAFKA-16607
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Priority: Major
>
> KafkaRaftMetrics exposes a current-state metric that needs to be updated to include the prospective state.
[jira] [Updated] (KAFKA-17641) Update vote RPC with new pre-vote field
[ https://issues.apache.org/jira/browse/KAFKA-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-17641:
    Description: Brush off [https://github.com/apache/kafka/pull/15231/files] and handle v2 (addition of pre-vote field) of the vote RPC. Handle receiving pre-vote = true. Include KafkaRaftClientPreVoteTest  (was: Brush off [https://github.com/apache/kafka/pull/15231/files] and handle v2 (addition of pre-vote field) of the vote RPC. Setting pre-vote = true will not be supported, but the new field will be sent. Include KafkaRaftClientPreVoteTest)

> Update vote RPC with new pre-vote field
>
> Key: KAFKA-17641
> URL: https://issues.apache.org/jira/browse/KAFKA-17641
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Alyssa Huang
> Priority: Major
>
> Brush off [https://github.com/apache/kafka/pull/15231/files] and handle v2 (addition of the pre-vote field) of the vote RPC. Handle receiving pre-vote = true. Include KafkaRaftClientPreVoteTest.
[jira] [Created] (KAFKA-17644) TLA+ spec modifications
Alyssa Huang created KAFKA-17644:

Summary: TLA+ spec modifications
Key: KAFKA-17644
URL: https://issues.apache.org/jira/browse/KAFKA-17644
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

[~vanlightly] was helping validate pre-vote via his TLA+ spec earlier; confirm the implementation matches the modeled behavior and vice versa.
[jira] [Created] (KAFKA-17643) Response handling for pre-vote set to True
Alyssa Huang created KAFKA-17643:

Summary: Response handling for pre-vote set to True
Key: KAFKA-17643
URL: https://issues.apache.org/jira/browse/KAFKA-17643
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

Includes epoch state transitions to Prospective, and additions to QuorumStateTest, KafkaRaftMetricsTest, and KafkaRaftClientTest.
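As an illustration of the pre-vote idea the tickets above implement, here is a minimal hedged sketch (not the actual KafkaRaftClient code; `VoteRequest`, `grantVote`, and all parameters are hypothetical names): a pre-vote is granted on the same log-freshness check as a real vote, but it does not consume the voter's single vote for the epoch, so it is safe to grant even after having voted.

```java
public class PreVote {
    // Hypothetical, simplified view of a vote request (KIP-996 adds a pre-vote flag).
    public record VoteRequest(boolean preVote, int epoch, long lastOffset) {}

    public static boolean grantVote(VoteRequest req, int localEpoch, long localLastOffset, boolean alreadyVoted) {
        // The candidate's log must be at least as up to date as ours,
        // and the request must not be from a stale epoch.
        boolean upToDate = req.lastOffset() >= localLastOffset && req.epoch() >= localEpoch;
        if (req.preVote()) {
            // Pre-vote: no state change, so "already voted" does not block it.
            return upToDate;
        }
        // Real vote: only one vote per epoch.
        return upToDate && !alreadyVoted;
    }
}
```

The design point the sketch captures is that pre-vote is a read-only probe: granting one never persists a votedKey or bumps the epoch.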
[jira] [Created] (KAFKA-17642) Add ProspectiveState and ProspectiveStateTest
Alyssa Huang created KAFKA-17642:

Summary: Add ProspectiveState and ProspectiveStateTest
Key: KAFKA-17642
URL: https://issues.apache.org/jira/browse/KAFKA-17642
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

The transition to ProspectiveState does not need to be included in this task.
[jira] [Created] (KAFKA-17641) Update vote RPC with new pre-vote field
Alyssa Huang created KAFKA-17641:

Summary: Update vote RPC with new pre-vote field
Key: KAFKA-17641
URL: https://issues.apache.org/jira/browse/KAFKA-17641
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

Brush off [https://github.com/apache/kafka/pull/15231/files] and handle v2 (addition of the pre-vote field) of the vote RPC. Setting pre-vote = true will not be supported, but the new field will be sent. Include KafkaRaftClientPreVoteTest.
[jira] [Created] (KAFKA-17604) Describe quorum output missing added voters endpoints
Alyssa Huang created KAFKA-17604:

Summary: Describe quorum output missing added voters endpoints
Key: KAFKA-17604
URL: https://issues.apache.org/jira/browse/KAFKA-17604
Project: Kafka
Issue Type: Bug
Reporter: Alyssa Huang

Describe quorum output will miss the endpoints of voters which were added via add_raft_voter. This is due to a bug in LeaderState's updateVoterAndObserverStates, which pulls replica state from the observer states map (which does not include endpoints). The fix is to populate endpoints from the lastVoterSet passed into the method.
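A minimal sketch of the fix described above, with hypothetical names (`ReplicaState`, `promoteObserver`; the real LeaderState code differs): when a replica's state is carried over from the observer map, which never recorded an endpoint, the endpoint must be filled in from the latest voter set instead.

```java
import java.util.Map;

public class VoterEndpoints {
    // Hypothetical replica state: observers track offsets but not endpoints.
    public record ReplicaState(long endOffset, String endpoint) {}

    // Take the end offset from the observer state (if any), but the endpoint
    // from the last committed voter set, since observer states lack endpoints.
    public static ReplicaState promoteObserver(
            int nodeId,
            Map<Integer, ReplicaState> observers,
            Map<Integer, String> lastVoterSetEndpoints) {
        ReplicaState observed = observers.getOrDefault(nodeId, new ReplicaState(-1L, null));
        return new ReplicaState(observed.endOffset(), lastVoterSetEndpoints.get(nodeId));
    }
}
```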
[jira] [Assigned] (KAFKA-17604) Describe quorum output missing added voters endpoints
[ https://issues.apache.org/jira/browse/KAFKA-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-17604:
    Assignee: Alyssa Huang

> Describe quorum output missing added voters endpoints
>
> Key: KAFKA-17604
> URL: https://issues.apache.org/jira/browse/KAFKA-17604
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Minor
>
> Describe quorum output will miss the endpoints of voters which were added via add_raft_voter. This is due to a bug in LeaderState's updateVoterAndObserverStates, which pulls replica state from the observer states map (which does not include endpoints). The fix is to populate endpoints from the lastVoterSet passed into the method.
[jira] [Updated] (KAFKA-17282) pollUnattachedAsObserver should not consider electionTimeoutMs
[ https://issues.apache.org/jira/browse/KAFKA-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-17282:
    Description: After electionTimeout expiration, unattached observers will poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).  (was: After electionTimeout expiration unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).)

> pollUnattachedAsObserver should not consider electionTimeoutMs
>
> Key: KAFKA-17282
> URL: https://issues.apache.org/jira/browse/KAFKA-17282
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Major
>
> After electionTimeout expiration, unattached observers will poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).
[jira] [Updated] (KAFKA-17282) pollUnattachedAsObserver should not consider electionTimeoutMs
[ https://issues.apache.org/jira/browse/KAFKA-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-17282:
    Priority: Minor  (was: Major)

> pollUnattachedAsObserver should not consider electionTimeoutMs
>
> Key: KAFKA-17282
> URL: https://issues.apache.org/jira/browse/KAFKA-17282
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Minor
>
> After electionTimeout expiration, unattached observers will poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).
[jira] [Updated] (KAFKA-17282) pollUnattachedAsObserver should not consider electionTimeoutMs
[ https://issues.apache.org/jira/browse/KAFKA-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-17282:
    Description: After electionTimeout expiration unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).  (was: After electionTimeout expiration unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't be able to accept any inbound messages). Our tests don't replicate this behavior currently due to how we've mocked the message queue.)

> pollUnattachedAsObserver should not consider electionTimeoutMs
>
> Key: KAFKA-17282
> URL: https://issues.apache.org/jira/browse/KAFKA-17282
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Major
>
> After electionTimeout expiration, unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't wait for inbound messages anymore).
[jira] [Created] (KAFKA-17282) pollUnattachedAsObserver should not consider electionTimeoutMs
Alyssa Huang created KAFKA-17282:

Summary: pollUnattachedAsObserver should not consider electionTimeoutMs
Key: KAFKA-17282
URL: https://issues.apache.org/jira/browse/KAFKA-17282
Project: Kafka
Issue Type: Bug
Reporter: Alyssa Huang

After electionTimeout expiration, unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't be able to accept any inbound messages). Our tests don't replicate this behavior currently due to how we've mocked the message queue.
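To make the bug concrete, here is a hedged sketch (hypothetical `pollWaitMs` helper and parameters; not the actual pollUnattachedAsObserver code): once the election timeout has expired, an observer that still clamps its poll wait to the remaining election time waits 0 ms and busy-polls, whereas it should wait on the fetch-related deadline only.

```java
public class ObserverPollTimeout {
    // How long the raft IO thread should block waiting for inbound messages.
    // Voters race against the election timeout; observers never campaign, so
    // an (expired) election timeout must not drive their wait down to zero.
    public static long pollWaitMs(boolean isVoter, long electionRemainingMs, long fetchRemainingMs) {
        return isVoter ? Math.min(electionRemainingMs, fetchRemainingMs) : fetchRemainingMs;
    }
}
```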
[jira] [Assigned] (KAFKA-17282) pollUnattachedAsObserver should not consider electionTimeoutMs
[ https://issues.apache.org/jira/browse/KAFKA-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-17282:
    Assignee: Alyssa Huang

> pollUnattachedAsObserver should not consider electionTimeoutMs
>
> Key: KAFKA-17282
> URL: https://issues.apache.org/jira/browse/KAFKA-17282
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Major
>
> After electionTimeout expiration, unattached observers will send fetch requests every {{QUORUM_REQUEST_TIMEOUT_MS_CONFIG}} (which can be longer than we want), and poll the message queue for 0 seconds (meaning we won't be able to accept any inbound messages). Our tests don't replicate this behavior currently due to how we've mocked the message queue.
[jira] [Created] (KAFKA-17243) MetadataQuorumCommand describe to include CommittedVoters
Alyssa Huang created KAFKA-17243:

Summary: MetadataQuorumCommand describe to include CommittedVoters
Key: KAFKA-17243
URL: https://issues.apache.org/jira/browse/KAFKA-17243
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

kafka-metadata-quorum describe --status output should include CommittedVoters information, formatted similarly to Voters and Observers.
[jira] [Commented] (KAFKA-16521) kafka-metadata-quorum describe changes for KIP-853
[ https://issues.apache.org/jira/browse/KAFKA-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869712#comment-17869712 ]

Alyssa Huang commented on KAFKA-16521:

Hey [~nizhikov], the 3.9.0 release branch was cut and an exception was made for cherry-picking in this item, so we need this in ASAP. I'm free this week, so I'll start working on it. If your patch is ready though, just let me know and I can help review instead.

> kafka-metadata-quorum describe changes for KIP-853
>
> Key: KAFKA-16521
> URL: https://issues.apache.org/jira/browse/KAFKA-16521
> Project: Kafka
> Issue Type: Sub-task
> Components: tools
> Reporter: José Armando García Sancio
> Assignee: Nikolay Izhikov
> Priority: Major
>
> # [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-describe--status]
> # [https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-describe--replication]
[jira] [Created] (KAFKA-17206) Use v1 of LeaderChangeMessage when kraft.version is 1
Alyssa Huang created KAFKA-17206:

Summary: Use v1 of LeaderChangeMessage when kraft.version is 1
Key: KAFKA-17206
URL: https://issues.apache.org/jira/browse/KAFKA-17206
Project: Kafka
Issue Type: Task
Reporter: Alyssa Huang

[https://github.com/apache/kafka/pull/16668] introduced v1 of LCM but still uses v0 of the schema.
[jira] [Assigned] (KAFKA-16953) Properly implement the sending of DescribeQuorumResponse
[ https://issues.apache.org/jira/browse/KAFKA-16953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-16953:
    Assignee: Alyssa Huang

> Properly implement the sending of DescribeQuorumResponse
>
> Key: KAFKA-16953
> URL: https://issues.apache.org/jira/browse/KAFKA-16953
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Assignee: Alyssa Huang
> Priority: Major
> Fix For: 3.9.0
>
> The current implementation doesn't accurately implement the different versions of the response. I removed the buggy code in [https://github.com/apache/kafka/pull/16454]. This needs to get reimplemented properly.
[jira] [Created] (KAFKA-16927) Adding tests for restarting followers receiving leader endpoint correctly
Alyssa Huang created KAFKA-16927:

Summary: Adding tests for restarting followers receiving leader endpoint correctly
Key: KAFKA-16927
URL: https://issues.apache.org/jira/browse/KAFKA-16927
Project: Kafka
Issue Type: Sub-task
Components: kraft
Reporter: Alyssa Huang

We'll need to test that restarting followers are populated with the correct leader endpoint after receiving BeginQuorumEpochRequest. Depends on KAFKA-16536 and the voter RPC changes being done.
[jira] [Updated] (KAFKA-16926) Optimize BeginQuorumEpoch heartbeat
[ https://issues.apache.org/jira/browse/KAFKA-16926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-16926:
    Component/s: kraft

> Optimize BeginQuorumEpoch heartbeat
>
> Key: KAFKA-16926
> URL: https://issues.apache.org/jira/browse/KAFKA-16926
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: Alyssa Huang
> Priority: Minor
>
> Instead of sending out BeginQuorum requests to every voter on a cadence, we can save on some requests by only sending to those which have not fetched within the fetch timeout.
> Split from KAFKA-16536
[jira] [Updated] (KAFKA-16926) Optimize BeginQuorumEpoch heartbeat
[ https://issues.apache.org/jira/browse/KAFKA-16926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-16926:
    Description: Instead of sending out BeginQuorum requests to every voter on a cadence, we can save on some requests by only sending to those which have not fetched within the fetch timeout. Split from KAFKA-16536  (was: Instead of sending out BeginQuorum requests to every voter on a cadence, we can save on some requests by only sending to those which have not fetched within the fetch timeout.)

> Optimize BeginQuorumEpoch heartbeat
>
> Key: KAFKA-16926
> URL: https://issues.apache.org/jira/browse/KAFKA-16926
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Alyssa Huang
> Priority: Minor
>
> Instead of sending out BeginQuorum requests to every voter on a cadence, we can save on some requests by only sending to those which have not fetched within the fetch timeout.
> Split from KAFKA-16536
[jira] [Created] (KAFKA-16926) Optimize BeginQuorumEpoch heartbeat
Alyssa Huang created KAFKA-16926:

Summary: Optimize BeginQuorumEpoch heartbeat
Key: KAFKA-16926
URL: https://issues.apache.org/jira/browse/KAFKA-16926
Project: Kafka
Issue Type: Sub-task
Reporter: Alyssa Huang

Instead of sending out BeginQuorum requests to every voter on a cadence, we can save on some requests by only sending to those which have not fetched within the fetch timeout.
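The optimization above can be sketched as a simple filter; this is a hedged illustration with hypothetical names (`VoterState`, `votersNeedingBeginQuorum`), not the actual LeaderState code: only voters whose last fetch is older than the fetch timeout get a BeginQuorumEpoch request, since recently fetching voters evidently already know the leader.

```java
import java.util.List;
import java.util.stream.Collectors;

public class BeginQuorumHeartbeat {
    // Hypothetical view of a voter's fetch activity.
    public record VoterState(int nodeId, long lastFetchTimeMs) {}

    // Return the ids of voters that have not fetched within the fetch timeout;
    // only these need a BeginQuorumEpoch heartbeat.
    public static List<Integer> votersNeedingBeginQuorum(
            List<VoterState> voters, long nowMs, long fetchTimeoutMs) {
        return voters.stream()
                .filter(v -> nowMs - v.lastFetchTimeMs() >= fetchTimeoutMs)
                .map(VoterState::nodeId)
                .collect(Collectors.toList());
    }
}
```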
[jira] [Commented] (KAFKA-16530) Fix high-watermark calculation to not assume the leader is in the voter set
[ https://issues.apache.org/jira/browse/KAFKA-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849342#comment-17849342 ]

Alyssa Huang commented on KAFKA-16530:

[~jsancio]
> This is not correct because this will also increase the HWM when it shouldn't. Only the LEO of voters should be counted when computing if the HWM has increased.

Hm, was there a certain scenario you were thinking of for this? In what case wouldn't we want the HWM to consider the leader as well? For what it's worth, I don't think this solution (by itself at least) is on the table, since it would not solve the case where a follower is removed from the quorum and can potentially decrease the HWM. Just want to make sure I understand your comment.

> Fix high-watermark calculation to not assume the leader is in the voter set
>
> Key: KAFKA-16530
> URL: https://issues.apache.org/jira/browse/KAFKA-16530
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Assignee: Alyssa Huang
> Priority: Major
> Fix For: 3.8.0
>
> When the leader is being removed from the voter set, the leader may not be in the voter set. This means that kraft should not assume that the leader is part of the high-watermark calculation.
[jira] [Updated] (KAFKA-16833) Cluster missing topicIds from equals and hashCode, PartitionInfo missing equals and hashCode
[ https://issues.apache.org/jira/browse/KAFKA-16833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang updated KAFKA-16833:
    Summary: Cluster missing topicIds from equals and hashCode, PartitionInfo missing equals and hashCode  (was: PartitionInfo missing equals and hashCode methods)

> Cluster missing topicIds from equals and hashCode, PartitionInfo missing equals and hashCode
>
> Key: KAFKA-16833
> URL: https://issues.apache.org/jira/browse/KAFKA-16833
> Project: Kafka
> Issue Type: Bug
> Reporter: Alyssa Huang
> Priority: Major
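The bug class this ticket describes is generic: if equals/hashCode omit a field, two objects differing only in that field compare equal. A minimal stand-in (hypothetical `ClusterView` class; not Kafka's actual Cluster) showing topicIds included in both methods, as the fix requires:

```java
import java.util.Map;
import java.util.Objects;

public class ClusterView {
    // Map of topic name to topic id; must participate in equals and hashCode,
    // otherwise views differing only in topic ids are treated as identical.
    private final Map<String, String> topicIds;

    public ClusterView(Map<String, String> topicIds) {
        this.topicIds = topicIds;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ClusterView other)) return false;
        return Objects.equals(topicIds, other.topicIds);
    }

    @Override
    public int hashCode() {
        return Objects.hash(topicIds);
    }
}
```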
[jira] [Created] (KAFKA-16833) PartitionInfo missing equals and hashCode methods
Alyssa Huang created KAFKA-16833:

Summary: PartitionInfo missing equals and hashCode methods
Key: KAFKA-16833
URL: https://issues.apache.org/jira/browse/KAFKA-16833
Project: Kafka
Issue Type: Bug
Reporter: Alyssa Huang
[jira] [Commented] (KAFKA-16530) Fix high-watermark calculation to not assume the leader is in the voter set
[ https://issues.apache.org/jira/browse/KAFKA-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849121#comment-17849121 ]

Alyssa Huang commented on KAFKA-16530:

In the case where the leader is removed from the voter set and tries to update its log end offset (`updateLocalState`), because of a new removeNode record for instance, it will first update its own ReplicaState (`getOrCreateReplicaState`), which will return a _new_ observer state if its id is no longer in the `voterStates` map. The endOffset will be updated, and then we'll consider whether the high watermark can be updated (`maybeUpdateHighWatermark`). When updating the high watermark, we only look at the `voterStates` map, which means we won't count the leader's offset as part of the HW calculation. This _does_ mean it's possible for the HW to drop, though. Here's a scenario:

{code:java}
# Before node 1 removal, voterStates contains Nodes 1, 2, 3
Node 1: Leader, LEO 100
Node 2: Follower, LEO 90 <- HW
Node 3: Follower, LEO 85

# Leader processes removeNode record, voterStates contains Nodes 2, 3
Node 1: Leader, LEO 101
Node 2: Follower, LEO 90
Node 3: Follower, LEO 85 <- new HW
{code}

We want to make sure the HW does not decrement in this scenario. Perhaps we could revise `maybeUpdateHighWatermark` to continue to factor the leader's offset into the HW calculation, regardless of whether it is in the voter set or not, e.g.

{code:java}
private boolean maybeUpdateHighWatermark() {
    // Find the largest offset which is replicated to a majority of replicas (the leader counts)
-   List<ReplicaState> followersByDescendingFetchOffset = followersByDescendingFetchOffset();
+   List<ReplicaState> followersAndLeaderByDescFetchOffset = followersAndLeaderByDescFetchOffset();
-   int indexOfHw = voterStates.size() / 2;
+   int indexOfHw = followersAndLeaderByDescFetchOffset.size() / 2;
-   Optional<LogOffsetMetadata> highWatermarkUpdateOpt = followersByDescendingFetchOffset.get(indexOfHw).endOffset;
+   Optional<LogOffsetMetadata> highWatermarkUpdateOpt = followersAndLeaderByDescFetchOffset.get(indexOfHw).endOffset;
{code}

However, this does not cover the case when a follower is being removed from the voter set:

{code:java}
# Before node 2 removal, voterStates contains Nodes 1, 2, 3
Node 1: Leader, LEO 100
Node 2: Follower, LEO 90 <- HW
Node 3: Follower, LEO 85

# Leader processes removeNode record, voterStates contains Nodes 1, 3
Node 1: Leader, LEO 101
Node 2: Follower, LEO 90
Node 3: Follower, LEO 85 <- new HW
{code}

> Fix high-watermark calculation to not assume the leader is in the voter set
>
> Key: KAFKA-16530
> URL: https://issues.apache.org/jira/browse/KAFKA-16530
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Assignee: Alyssa Huang
> Priority: Major
> Fix For: 3.8.0
>
> When the leader is being removed from the voter set, the leader may not be in the voter set. This means that kraft should not assume that the leader is part of the high-watermark calculation.
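The majority calculation discussed in the comment above can be checked with a small self-contained sketch (hypothetical `majorityReplicatedOffset` helper, not the actual LeaderState code): sort the end offsets descending, count the leader's LEO alongside the voters', and take the offset at index size/2, which a majority of replicas have reached. With the first scenario's numbers, counting the leader keeps the HW at 90 instead of letting it drop to 85.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HighWatermark {
    // Compute the largest offset replicated to a majority of replicas.
    // The leader's LEO is always counted, even if the leader has been
    // removed from the voter set.
    public static long majorityReplicatedOffset(List<Long> voterEndOffsets, long leaderEndOffset) {
        List<Long> offsets = new ArrayList<>(voterEndOffsets);
        offsets.add(leaderEndOffset);
        // Descending order: the entry at index size/2 is covered by a majority.
        offsets.sort(Collections.reverseOrder());
        return offsets.get(offsets.size() / 2);
    }
}
```

As the comment notes, this only patches the leader-removal case; a follower's removal can still shrink the majority set and lower the computed offset.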
[jira] [Assigned] (KAFKA-16530) Fix high-watermark calculation to not assume the leader is in the voter set
[ https://issues.apache.org/jira/browse/KAFKA-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-16530:
    Assignee: Alyssa Huang  (was: José Armando García Sancio)

> Fix high-watermark calculation to not assume the leader is in the voter set
>
> Key: KAFKA-16530
> URL: https://issues.apache.org/jira/browse/KAFKA-16530
> Project: Kafka
> Issue Type: Sub-task
> Components: kraft
> Reporter: José Armando García Sancio
> Assignee: Alyssa Huang
> Priority: Major
> Fix For: 3.8.0
>
> When the leader is being removed from the voter set, the leader may not be in the voter set. This means that kraft should not assume that the leader is part of the high-watermark calculation.
[jira] [Commented] (KAFKA-16655) deflake ZKMigrationIntegrationTest.testDualWrite
[ https://issues.apache.org/jira/browse/KAFKA-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842820#comment-17842820 ]

Alyssa Huang commented on KAFKA-16655:

[https://github.com/apache/kafka/pull/15845/files]

> deflake ZKMigrationIntegrationTest.testDualWrite
>
> Key: KAFKA-16655
> URL: https://issues.apache.org/jira/browse/KAFKA-16655
> Project: Kafka
> Issue Type: Improvement
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Minor
>
> {code:java}
> Failed to map supported failure 'org.opentest4j.AssertionFailedError: expected: not equal but was: <0>' with mapper 'org.gradle.api.internal.tasks.testing.failure.mappers.OpenTestAssertionFailedMapper@59b5251d': Cannot invoke "Object.getClass()" because "obj" is null
>
> > Task :core:test
> kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ClusterInstance)[8] failed, log available in /Users/ahuang/ce-kafka/core/build/reports/testOutput/kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ClusterInstance)[8].test.stdout
>
> Gradle Test Run :core:test > Gradle Test Executor 8 > ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite [8] Type=ZK, MetadataVersion=3.8-IV0, Security=PLAINTEXT FAILED
>     org.opentest4j.AssertionFailedError: expected: not equal but was: <0>
>         at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:152)
>         at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>         at app//org.junit.jupiter.api.AssertNotEquals.failEqual(AssertNotEquals.java:277)
>         at app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:119)
>         at app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:111)
>         at app//org.junit.jupiter.api.Assertions.assertNotEquals(Assertions.java:2121)
>         at app//kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ZkMigrationIntegrationTest.scala:995)
> {code}
> This test occasionally fails due to stale broker epoch exceptions, which in turn cause allocate producer ids to fail.
> Also fixes {{sendAllocateProducerIds}} erroneously returning 0 as the `producerIdStart` in error cases (because `onComplete` only accounts for timeouts and ignores any other error code):
> {code:java}
> [2024-04-12 18:45:08,820] INFO [ControllerServer id=3000] allocateProducerIds: event failed with StaleBrokerEpochException in 19 microseconds. (org.apache.kafka.controller.QuorumController:765)
> {code}
[jira] [Created] (KAFKA-16655) deflake ZKMigrationIntegrationTest.testDualWrite
Alyssa Huang created KAFKA-16655:

Summary: deflake ZKMigrationIntegrationTest.testDualWrite
Key: KAFKA-16655
URL: https://issues.apache.org/jira/browse/KAFKA-16655
Project: Kafka
Issue Type: Improvement
Reporter: Alyssa Huang
Assignee: Alyssa Huang

{code:java}
Failed to map supported failure 'org.opentest4j.AssertionFailedError: expected: not equal but was: <0>' with mapper 'org.gradle.api.internal.tasks.testing.failure.mappers.OpenTestAssertionFailedMapper@59b5251d': Cannot invoke "Object.getClass()" because "obj" is null

> Task :core:test
kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ClusterInstance)[8] failed, log available in /Users/ahuang/ce-kafka/core/build/reports/testOutput/kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ClusterInstance)[8].test.stdout

Gradle Test Run :core:test > Gradle Test Executor 8 > ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite [8] Type=ZK, MetadataVersion=3.8-IV0, Security=PLAINTEXT FAILED
    org.opentest4j.AssertionFailedError: expected: not equal but was: <0>
        at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:152)
        at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
        at app//org.junit.jupiter.api.AssertNotEquals.failEqual(AssertNotEquals.java:277)
        at app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:119)
        at app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:111)
        at app//org.junit.jupiter.api.Assertions.assertNotEquals(Assertions.java:2121)
        at app//kafka.zk.ZkMigrationIntegrationTest.testDualWrite(ZkMigrationIntegrationTest.scala:995)
{code}

This test occasionally fails due to stale broker epoch exceptions, which in turn cause allocate producer ids to fail.

Also fixes {{sendAllocateProducerIds}} erroneously returning 0 as the `producerIdStart` in error cases (because `onComplete` only accounts for timeouts and ignores any other error code):

{code:java}
[2024-04-12 18:45:08,820] INFO [ControllerServer id=3000] allocateProducerIds: event failed with StaleBrokerEpochException in 19 microseconds. (org.apache.kafka.controller.QuorumController:765)
{code}
[jira] [Assigned] (KAFKA-16164) Pre-Vote
[ https://issues.apache.org/jira/browse/KAFKA-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alyssa Huang reassigned KAFKA-16164:
    Assignee: Alyssa Huang

> Pre-Vote
>
> Key: KAFKA-16164
> URL: https://issues.apache.org/jira/browse/KAFKA-16164
> Project: Kafka
> Issue Type: Improvement
> Reporter: Alyssa Huang
> Assignee: Alyssa Huang
> Priority: Major
>
> Implementing pre-vote as described in https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote
[jira] [Created] (KAFKA-16616) refactor mergeWith in MetadataSnapshot
Alyssa Huang created KAFKA-16616:

Summary: refactor mergeWith in MetadataSnapshot
Key: KAFKA-16616
URL: https://issues.apache.org/jira/browse/KAFKA-16616
Project: Kafka
Issue Type: Improvement
Affects Versions: 3.7.0
Reporter: Alyssa Huang

Right now we keep track of topic ids and partition metadata to add/update separately in mergeWith (i.e. two maps passed as arguments). This means we iterate over topic metadata twice, which could be costly when we're dealing with a large number of updates. `updatePartitionLeadership`, which calls `mergeWith`, does something similar (it generates the map of topic ids to update in a loop separate from the list of partition metadata to update) and should be refactored as well.
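The refactor direction above can be sketched as bundling both pieces of per-topic state into one record so the merge walks the updates once; this is a hedged illustration with hypothetical types (`TopicUpdate`, `mergeWith`), not the actual MetadataSnapshot API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SnapshotMerge {
    // Hypothetical per-topic update carrying the topic id together with its
    // partition metadata, instead of two parallel maps.
    public record TopicUpdate(String topicId, List<Integer> partitionLeaders) {}

    // One pass over `updates` applies both the topic ids and the partition
    // metadata, rather than iterating the topic metadata twice.
    public static Map<String, TopicUpdate> mergeWith(
            Map<String, TopicUpdate> current, Map<String, TopicUpdate> updates) {
        Map<String, TopicUpdate> merged = new HashMap<>(current);
        merged.putAll(updates);
        return merged;
    }
}
```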
[jira] [Commented] (KAFKA-16447) Fix failed ReplicaManagerTest
[ https://issues.apache.org/jira/browse/KAFKA-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17832332#comment-17832332 ] Alyssa Huang commented on KAFKA-16447: -- [~yangpoan] go for it, sorry for breaking the build. I'll submit a PR tomorrow if nothing is opened by then > Fix failed ReplicaManagerTest > - > > Key: KAFKA-16447 > URL: https://issues.apache.org/jira/browse/KAFKA-16447 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Nikolay Izhikov >Priority: Major > > see comment: https://github.com/apache/kafka/pull/15373/files#r1544335647 for > root cause -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-16427) KafkaConsumer#position() does not respect timeout when group protocol is CONSUMER
Alyssa Huang created KAFKA-16427: Summary: KafkaConsumer#position() does not respect timeout when group protocol is CONSUMER Key: KAFKA-16427 URL: https://issues.apache.org/jira/browse/KAFKA-16427 Project: Kafka Issue Type: Bug Affects Versions: 3.7.0 Reporter: Alyssa Huang When `long position(TopicPartition partition, final Duration timeout);` is called on an unknown topic partition (and auto creation is disabled), the method fails to adhere to the timeout supplied, e.g. the following warning is logged continuously as metadata fetches are retried: [2024-03-26 11:03:48,589] WARN [Consumer clientId=ConsumerTestConsumer, groupId=my-test] Error while fetching metadata with correlation id 200 : \{nonexistingTopic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1313) -- This message was sent by Atlassian Jira (v8.20.10#820010)
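The contract the bug report expects can be sketched as a deadline loop (the helper below is hypothetical, not the consumer's real internals): metadata retries stop once the caller-supplied Duration elapses, rather than looping forever on an unresolvable partition.

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;

public class PositionTimeoutSketch {
    // Hypothetical stand-in for one metadata round trip; for an unknown
    // partition with auto-creation disabled it never yields a position.
    static Long tryFetchPosition() {
        return null;
    }

    // The behavior position(tp, timeout) should have: give up at the deadline.
    static long positionWithDeadline(Duration timeout) throws TimeoutException {
        long deadlineNs = System.nanoTime() + timeout.toNanos();
        do {
            Long pos = tryFetchPosition();
            if (pos != null) {
                return pos;
            }
        } while (System.nanoTime() < deadlineNs);
        throw new TimeoutException("position() could not be determined within " + timeout);
    }
}
```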
[jira] [Created] (KAFKA-16164) Pre-Vote
Alyssa Huang created KAFKA-16164: Summary: Pre-Vote Key: KAFKA-16164 URL: https://issues.apache.org/jira/browse/KAFKA-16164 Project: Kafka Issue Type: Improvement Reporter: Alyssa Huang Implementing pre-vote as described in https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-15137) Don't log the entire request in KRaftControllerChannelManager
[ https://issues.apache.org/jira/browse/KAFKA-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17741123#comment-17741123 ] Alyssa Huang commented on KAFKA-15137: -- I'll do this in the next few days! > Don't log the entire request in KRaftControllerChannelManager > - > > Key: KAFKA-15137 > URL: https://issues.apache.org/jira/browse/KAFKA-15137 > Project: Kafka > Issue Type: Bug >Affects Versions: 3.5.0, 3.6.0 >Reporter: David Arthur >Assignee: Alyssa Huang >Priority: Major > Fix For: 3.5.1 > > > While debugging some junit tests, I noticed some really long log lines in > KRaftControllerChannelManager. When the broker is down, we log a WARN that > includes the entire UpdateMetadataRequest or LeaderAndIsrRequest. For large > clusters, these can be really large requests, so this could potentially cause > excessive output in the log4j logs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14992) Add partition limit or improve error msg for adminzkclient
Alyssa Huang created KAFKA-14992: Summary: Add partition limit or improve error msg for adminzkclient Key: KAFKA-14992 URL: https://issues.apache.org/jira/browse/KAFKA-14992 Project: Kafka Issue Type: Improvement Reporter: Alyssa Huang Create-topic requests with a large number of partitions will yield an exception that's technically unrelated and confusing: `partitions should be a consecutive 0-based integer sequence`. What is really happening is that we exceed maxInt at this line [https://github.com/apache/kafka/blame/trunk/core/src/main/scala/kafka/zk/AdminZkClient.scala#L154], which causes the following check to fail. We should account for this case better. -- This message was sent by Atlassian Jira (v8.20.10#820010)
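One way the overflow could be surfaced with a clear message, assuming the maxInt breach comes from an int multiplication during assignment validation (a guess for illustration; the class and method here are hypothetical and the real fix would live in AdminZkClient): `Math.multiplyExact` throws on int overflow instead of silently wrapping into a nonsense value that trips the unrelated sequence check.

```java
public class PartitionCountGuard {
    // Hypothetical up-front guard: fail loudly on overflow instead of letting
    // a wrapped int value propagate into a confusing downstream error.
    static int totalReplicas(int numPartitions, int replicationFactor) {
        try {
            return Math.multiplyExact(numPartitions, replicationFactor);
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException(
                "numPartitions (" + numPartitions + ") x replicationFactor ("
                + replicationFactor + ") exceeds Integer.MAX_VALUE; "
                + "choose a smaller partition count");
        }
    }
}
```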
[jira] [Assigned] (KAFKA-14436) Initialize KRaft with arbitrary epoch
[ https://issues.apache.org/jira/browse/KAFKA-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alyssa Huang reassigned KAFKA-14436: Assignee: Alyssa Huang > Initialize KRaft with arbitrary epoch > - > > Key: KAFKA-14436 > URL: https://issues.apache.org/jira/browse/KAFKA-14436 > Project: Kafka > Issue Type: Sub-task >Reporter: David Arthur >Assignee: Alyssa Huang >Priority: Major > > For the ZK migration, we need to be able to initialize Raft with an > arbitrarily high epoch (within the size limit). This is because during the > migration, we want to write the Raft epoch as the controller epoch in ZK. We > require that epochs in /controller_epoch are monotonic in order for brokers > to behave normally. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-14153) UnknownTopicOrPartitionException should include the topic/partition in the returned exception message
[ https://issues.apache.org/jira/browse/KAFKA-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alyssa Huang updated KAFKA-14153: - Description: Exception would be more useful if it included the topic or partition that was not found. Message right now is just `This server does not host this topic-partition.` Background: [https://github.com/apache/kafka/pull/12479#discussion_r938988993] was: Exception would be more useful if it included the topic or partition that was not found. Background: https://github.com/apache/kafka/pull/12479#discussion_r938988993 > UnknownTopicOrPartitionException should include the topic/partition in the > returned exception message > - > > Key: KAFKA-14153 > URL: https://issues.apache.org/jira/browse/KAFKA-14153 > Project: Kafka > Issue Type: Improvement >Reporter: Alyssa Huang >Priority: Minor > > Exception would be more useful if it included the topic or partition that was > not found. Message right now is just > `This server does not host this topic-partition.` > Background: [https://github.com/apache/kafka/pull/12479#discussion_r938988993] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14153) UnknownTopicOrPartitionException should include the topic/partition in the returned exception message
Alyssa Huang created KAFKA-14153: Summary: UnknownTopicOrPartitionException should include the topic/partition in the returned exception message Key: KAFKA-14153 URL: https://issues.apache.org/jira/browse/KAFKA-14153 Project: Kafka Issue Type: Improvement Reporter: Alyssa Huang Exception would be more useful if it included the topic or partition that was not found. Background: https://github.com/apache/kafka/pull/12479#discussion_r938988993 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-13867) Improve JavaDoc for MetadataVersion.java
[ https://issues.apache.org/jira/browse/KAFKA-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17530971#comment-17530971 ] Alyssa Huang commented on KAFKA-13867: -- ^ just confirming that you mean `MetadataVersion#ibpVersion`? > Improve JavaDoc for MetadataVersion.java > > > Key: KAFKA-13867 > URL: https://issues.apache.org/jira/browse/KAFKA-13867 > Project: Kafka > Issue Type: Improvement >Reporter: Colin McCabe >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13665) Null pointer exception testPollReturnsNoRecords
Alyssa Huang created KAFKA-13665: Summary: Null pointer exception testPollReturnsNoRecords Key: KAFKA-13665 URL: https://issues.apache.org/jira/browse/KAFKA-13665 Project: Kafka Issue Type: Test Reporter: Alyssa Huang Seeing java.lang.NullPointerException from testPollReturnsNoRecords[0] – org.apache.kafka.connect.runtime.WorkerSourceTaskTest. https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-11732/2/tests/ {code:java} [2022-02-11 00:17:58,495] INFO Kafka startTimeMs: 1644538678494 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-02-11 00:17:58,495] WARN Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser:68)javax.management.InstanceAlreadyExistsException: kafka.connect:type=app-info,id=noop-worker at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64) at org.apache.kafka.connect.runtime.ConnectMetrics.(ConnectMetrics.java:99) at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest.(RetryWithToleranceOperatorTest.java:93) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at javassist.runtime.Desc.getClassObject(Desc.java:72) at javassist.runtime.Desc.getClazz(Desc.java:81)at org.apache.kafka.connect.runtime.WorkerSourceTaskTest.createWorkerTask(WorkerSourceTaskTest.java:240) at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testHeadersWithCustomConverter(WorkerSourceTaskTest.java:1001) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner.run(ParentRunner.java:413)at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.eval...[truncated 3038049 chars]... ssl.cipher.suites = nullssl.client.auth = none ssl.enabled.protocols = [TLSv1.2] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = nullssl.keystore.password = nullssl.keystore.type = JKS ssl.protocol = TLSv1.2 ssl.provider = null ssl.secure.random.implementation = null ssl
[jira] [Commented] (KAFKA-10558) Fetch Session Cache Performance Improvement
[ https://issues.apache.org/jira/browse/KAFKA-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204936#comment-17204936 ] Alyssa Huang commented on KAFKA-10558: -- [https://github.com/apache/kafka/pull/9340] > Fetch Session Cache Performance Improvement > --- > > Key: KAFKA-10558 > URL: https://issues.apache.org/jira/browse/KAFKA-10558 > Project: Kafka > Issue Type: Improvement > Components: core >Reporter: Alyssa Huang >Priority: Major > > Make kafka.server.FetchSessionCache implementation faster to help mitigate > high lock contention as detailed in KAFKA-9401, and to allow for increased > cache sizes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (KAFKA-10558) Fetch Session Cache Performance Improvement
[ https://issues.apache.org/jira/browse/KAFKA-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alyssa Huang updated KAFKA-10558: - External issue URL: (was: https://github.com/apache/kafka/pull/9340) > Fetch Session Cache Performance Improvement > --- > > Key: KAFKA-10558 > URL: https://issues.apache.org/jira/browse/KAFKA-10558 > Project: Kafka > Issue Type: Improvement > Components: core >Reporter: Alyssa Huang >Priority: Major > > Make kafka.server.FetchSessionCache implementation faster to help mitigate > high lock contention as detailed in KAFKA-9401, and to allow for increased > cache sizes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (KAFKA-10558) Fetch Session Cache Performance Improvement
[ https://issues.apache.org/jira/browse/KAFKA-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alyssa Huang updated KAFKA-10558: - External issue URL: https://github.com/apache/kafka/pull/9340 > Fetch Session Cache Performance Improvement > --- > > Key: KAFKA-10558 > URL: https://issues.apache.org/jira/browse/KAFKA-10558 > Project: Kafka > Issue Type: Improvement > Components: core >Reporter: Alyssa Huang >Priority: Major > > Make kafka.server.FetchSessionCache implementation faster to help mitigate > high lock contention as detailed in KAFKA-9401, and to allow for increased > cache sizes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10558) Fetch Session Cache Performance Improvement
Alyssa Huang created KAFKA-10558: Summary: Fetch Session Cache Performance Improvement Key: KAFKA-10558 URL: https://issues.apache.org/jira/browse/KAFKA-10558 Project: Kafka Issue Type: Improvement Components: core Reporter: Alyssa Huang Make kafka.server.FetchSessionCache implementation faster to help mitigate high lock contention as detailed in KAFKA-9401, and to allow for increased cache sizes. -- This message was sent by Atlassian Jira (v8.3.4#803005)