Re: [PR] MINOR: Fix thread leak in AuthorizerIntegrationTest [kafka]

2023-12-18 Thread via GitHub


dajac merged PR #15006:
URL: https://github.com/apache/kafka/pull/15006





Re: [PR] MINOR: Fix thread leak in AuthorizerIntegrationTest [kafka]

2023-12-18 Thread via GitHub


dajac commented on PR #15006:
URL: https://github.com/apache/kafka/pull/15006#issuecomment-1862271647

   Failed tests are not related. Merging to trunk and 3.7.





Re: [PR] KAFKA-16002: Implement RemoteCopyLagSegments, RemoteDeleteLagBytes and RemoteDeleteLagSegments [kafka]

2023-12-18 Thread via GitHub


clolov commented on code in PR #15005:
URL: https://github.com/apache/kafka/pull/15005#discussion_r1431012760


##
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##
@@ -1071,15 +1085,38 @@ void cleanupExpiredRemoteLogSegments() throws RemoteStorageException, ExecutionE
                 Iterator epochsToClean = remoteLeaderEpochs.stream()
                         .filter(remoteEpoch -> remoteEpoch < earliestEpochEntry.epoch)
                         .iterator();
+
+                List listOfSegmentsToBeCleaned = new ArrayList<>();
+
                 while (epochsToClean.hasNext()) {
                     int epoch = epochsToClean.next();
                     Iterator segmentsToBeCleaned = remoteLogMetadataManager.listRemoteLogSegments(topicIdPartition, epoch);
                     while (segmentsToBeCleaned.hasNext()) {
-                        if (isCancelled() || !isLeader()) {
-                            return;
+                        if (!isCancelled() && isLeader()) {
+                            RemoteLogSegmentMetadata nextSegmentMetadata = segmentsToBeCleaned.next();
+                            sizeOfDeletableSegmentsBytes += nextSegmentMetadata.segmentSizeInBytes();
+                            listOfSegmentsToBeCleaned.add(nextSegmentMetadata);
                         }
+                    }
+                }
+
+                segmentsLeftToDelete += listOfSegmentsToBeCleaned.size();
+                brokerTopicMetrics.recordRemoteDeleteLagBytes(partition, sizeOfDeletableSegmentsBytes);
+                brokerTopicMetrics.recordRemoteDeleteLagSegments(partition, segmentsLeftToDelete);
+                for (RemoteLogSegmentMetadata segmentMetadata : listOfSegmentsToBeCleaned) {
+                    if (!isCancelled() && isLeader()) {
                         // No need to update the log-start-offset even though the segment is deleted as these epochs/offsets are earlier to that value.
-                        remoteLogRetentionHandler.deleteLogSegmentsDueToLeaderEpochCacheTruncation(earliestEpochEntry, segmentsToBeCleaned.next());
+                        if (remoteLogRetentionHandler.deleteLogSegmentsDueToLeaderEpochCacheTruncation(earliestEpochEntry, segmentMetadata)) {
+                            sizeOfDeletableSegmentsBytes -= segmentMetadata.segmentSizeInBytes();
+                            segmentsLeftToDelete--;
+                            brokerTopicMetrics.recordRemoteDeleteLagBytes(partition, sizeOfDeletableSegmentsBytes);
+                            brokerTopicMetrics.recordRemoteDeleteLagSegments(partition, segmentsLeftToDelete);
+                        }
+                    } else {

Review Comment:
   In my head, we want to stop reporting the lag for partitions which the current replica is no longer the leader for. In other words, there are two conditions under which we want to remove the lag from the cumulative total: either we have successfully deleted the segment, or we are no longer the leader for those partitions. The else condition satisfies the second.
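   One possible shape for that else branch, reusing the variable names from the diff above. This is only a sketch of the reviewer's suggestion, not the actual change in the PR:
   
   ```java
   } else {
       // The replica is no longer the leader (or the task was cancelled), so the
       // remaining segments will not be deleted here. Remove their contribution
       // from the cumulative lag so it is no longer reported for this partition.
       sizeOfDeletableSegmentsBytes -= segmentMetadata.segmentSizeInBytes();
       segmentsLeftToDelete--;
       brokerTopicMetrics.recordRemoteDeleteLagBytes(partition, sizeOfDeletableSegmentsBytes);
       brokerTopicMetrics.recordRemoteDeleteLagSegments(partition, segmentsLeftToDelete);
   }
   ```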






[jira] [Resolved] (KAFKA-15158) Add metrics for RemoteDeleteRequestsPerSec, RemoteDeleteErrorsPerSec, BuildRemoteLogAuxStateRequestsPerSec, BuildRemoteLogAuxStateErrorsPerSec

2023-12-18 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15158.
---
Resolution: Fixed

> Add metrics for RemoteDeleteRequestsPerSec, RemoteDeleteErrorsPerSec, 
> BuildRemoteLogAuxStateRequestsPerSec, BuildRemoteLogAuxStateErrorsPerSec
> --
>
> Key: KAFKA-15158
> URL: https://issues.apache.org/jira/browse/KAFKA-15158
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Gantigmaa Selenge
>Priority: Major
> Fix For: 3.7.0
>
>
> Add the following metrics for better observability into the RemoteLog related 
> activities inside the broker.
> 1. RemoteWriteRequestsPerSec
> 2. RemoteDeleteRequestsPerSec
> 3. BuildRemoteLogAuxStateRequestsPerSec
>  
> These metrics will be calculated at topic level (we can add them at 
> brokerTopicStats)
> -*RemoteWriteRequestsPerSec* will be marked on every call to RemoteLogManager#copyLogSegmentsToRemote()- already covered by KAFKA-14953
>  
> *RemoteDeleteRequestsPerSec* will be marked on every call to 
> RemoteLogManager#cleanupExpiredRemoteLogSegments(). This method is introduced 
> in [https://github.com/apache/kafka/pull/13561] 
> *BuildRemoteLogAuxStateRequestsPerSec* will be marked on every call to 
> ReplicaFetcherTierStateMachine#buildRemoteLogAuxState()
>  
> (Note: For all the above, add Error metrics as well such as 
> RemoteDeleteErrorPerSec)
> (Note: This requires a change in KIP-405 and hence, must be approved by KIP 
> author [~satishd] )
>  
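As a rough illustration of how such a per-topic rate metric is typically created and marked with the Yammer metrics library the broker relies on. The group/type/scope strings below are assumptions, not the exact BrokerTopicMetrics names:

{code:java}
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.core.Meter;
import com.yammer.metrics.core.MetricName;
import com.yammer.metrics.core.MetricsRegistry;

public class RemoteDeleteRateExample {
    public static void main(String[] args) {
        MetricsRegistry registry = new MetricsRegistry();
        // Per-topic meter; the scope identifies the topic.
        Meter remoteDeleteRequestRate = registry.newMeter(
                new MetricName("kafka.server", "BrokerTopicMetrics",
                        "RemoteDeleteRequestsPerSec", "topic.my-topic"),
                "requests", TimeUnit.SECONDS);

        // Marked on every remote delete request, e.g. from
        // RemoteLogManager#cleanupExpiredRemoteLogSegments() as described above.
        remoteDeleteRequestRate.mark();
    }
}
{code}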





Re: [PR] KAFKA-15158: Add metrics for RemoteDelete and BuildRemoteLogAuxState [kafka]

2023-12-18 Thread via GitHub


showuon merged PR #14375:
URL: https://github.com/apache/kafka/pull/14375





Re: [PR] KAFKA-15158: Add metrics for RemoteRequestsPerSec [kafka]

2023-12-18 Thread via GitHub


showuon commented on PR #14375:
URL: https://github.com/apache/kafka/pull/14375#issuecomment-1862219804

   Failed tests are unrelated.





Re: [PR] MINOR: Use random cluster id for jsa file preparation [kafka]

2023-12-18 Thread via GitHub


omkreddy merged PR #15041:
URL: https://github.com/apache/kafka/pull/15041





Re: [PR] KAFKA-16013: Added integration test for ExpiresPerSec metric [kafka]

2023-12-18 Thread via GitHub


showuon commented on PR #15015:
URL: https://github.com/apache/kafka/pull/15015#issuecomment-1862142091

   > @showuon Some of the existing tests in `DelayedRemoteFetchTest` are failing as the followerFetchParams are passed with a valid replicaId. I took the liberty of fixing the existing tests in `DelayedRemoteFetchTest` and also added a UT with fetchParams containing a valid replica-id, which fails while creating the `DelayedRemoteFetch` instance.
   
   Thanks @satishd !





Re: [PR] KAFKA-16013: Added integration test for ExpiresPerSec metric [kafka]

2023-12-18 Thread via GitHub


satishd commented on PR #15015:
URL: https://github.com/apache/kafka/pull/15015#issuecomment-1862116069

   @showuon Some of the existing tests in `DelayedRemoteFetchTest` are failing as the followerFetchParams are passed with a valid replicaId. I took the liberty of fixing the existing tests in `DelayedRemoteFetchTest` and also added a UT with fetchParams containing a valid replica-id, which fails while creating the `DelayedRemoteFetch` instance.





Re: [PR] KAFKA-16012: Ensure new leader information merged correctly with the current metadata [kafka]

2023-12-18 Thread via GitHub


philipnee commented on PR #15023:
URL: https://github.com/apache/kafka/pull/15023#issuecomment-1862113100

   Hey @jolshan - Thanks again for the review.  Addressed your comments in the 
latest commits. The only client-related failure is 
`testExpandingTopicSubscriptions`, a known flaky test.  I will disable it in a 
different PR.  





Re: [PR] KAFKA-15696: Refactor closing consumer [kafka]

2023-12-18 Thread via GitHub


philipnee commented on PR #14937:
URL: https://github.com/apache/kafka/pull/14937#issuecomment-1862111535

   Hey @lucasbru - I think your patch fixed the long-running/OOM issue with the async consumer test. The tests finished within 3.5 hours in this commit: [1786208](https://github.com/apache/kafka/pull/14937/commits/17862086b63864a8ad685f822e565da8626e8ea3). However, there are still quite a few flaky tests in PlaintextConsumerTest, namely:
   `testExpandingTopicSubscriptions`
   `testShrinkingTopicSubscriptions`
   `testFetchOutOfRangeOffsetResetConfigLatest`
   
   They are observed sporadically in other builds I've seen, so I'm disabling them in the latest commit: [11a3ae6](https://github.com/apache/kafka/pull/14937/commits/11a3ae633ac551efcf342aa9cb32bd57a1b44314)





Re: [PR] KAFKA-15696: Refactor closing consumer [kafka]

2023-12-18 Thread via GitHub


philipnee commented on PR #14937:
URL: https://github.com/apache/kafka/pull/14937#issuecomment-1862105547

   Hey @lucasbru - Seems like your fix resolved the long-running/OOM issue with the test. The `testExpandingTopicSubscriptions` test is on the flaky side; I've observed that in other PRs.





[jira] [Assigned] (KAFKA-14511) Extend AlterIncrementalConfigs API to support group config

2023-12-18 Thread Lan Ding (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lan Ding reassigned KAFKA-14511:


Assignee: Lan Ding

> Extend AlterIncrementalConfigs API to support group config
> --
>
> Key: KAFKA-14511
> URL: https://issues.apache.org/jira/browse/KAFKA-14511
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Lan Ding
>Priority: Major
>






Re: [PR] KAFKA-14401: Fail kafka log read end requests if underneath work thread is dead [kafka]

2023-12-18 Thread via GitHub


github-actions[bot] commented on PR #14372:
URL: https://github.com/apache/kafka/pull/14372#issuecomment-1862052272

   This PR is being marked as stale since it has not had any activity in 90 days. If you would like to keep this PR alive, please ask a committer for review. If the PR has merge conflicts, please update it with the latest from trunk (or the appropriate release branch). If this PR is no longer valid or desired, please feel free to close it. If no activity occurs in the next 30 days, it will be automatically closed.





[jira] [Updated] (KAFKA-15803) Update last seen epoch during commit

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15803:
--
Fix Version/s: 3.7.0

> Update last seen epoch during commit
> 
>
> Key: KAFKA-15803
> URL: https://issues.apache.org/jira/browse/KAFKA-15803
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Lucas Brutschy
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> At the time we implemented commitAsync in the prototypeAsyncConsumer, 
> metadata was not there. The ask here is to investigate if we need to add the 
> following function to the commit code:
>  
> private void updateLastSeenEpochIfNewer(TopicPartition topicPartition, OffsetAndMetadata offsetAndMetadata) {
>     if (offsetAndMetadata != null)
>         offsetAndMetadata.leaderEpoch().ifPresent(epoch -> metadata.updateLastSeenEpochIfNewer(topicPartition, epoch));
> }





[jira] [Updated] (KAFKA-15278) Implement client support for KIP-848 ConsumerGroupHeartbeat protocol RPC

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15278:
--
Fix Version/s: 3.7.0

> Implement client support for KIP-848 ConsumerGroupHeartbeat protocol RPC
> 
>
> Key: KAFKA-15278
> URL: https://issues.apache.org/jira/browse/KAFKA-15278
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The necessary Java code that represents the {{ConsumerGroupHeartbeatRequest}} 
> and {{ConsumerGroupHeartbeatResponse}} is already present in the codebase. 
> It is assumed that the scaffolding for the other two will come along in time.
>  * Implement {{ConsumerGroupRequestManager}}
>  * Ensure that {{DefaultBackgroundThread}} correctly calculates I/O timeouts 
> so that the heartbeat occurs within the {{group.consumer.session.timeout.ms}} 
> interval regardless of other {{RequestManager}} instance activity
>  * Ensure error is handled correctly
>  * Ensure MembershipStateManager is updated on both successful and failure 
> cases, and the state machine is transitioned to the correct state.
> This task is part of the work to implement support for the new KIP-848 
> consumer group protocol.





[jira] [Updated] (KAFKA-15544) Enable existing client integration tests for new protocol

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15544:
--
Fix Version/s: 3.7.0

> Enable existing client integration tests for new protocol 
> --
>
> Key: KAFKA-15544
> URL: https://issues.apache.org/jira/browse/KAFKA-15544
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Andrew Schofield
>Priority: Blocker
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> Enable & validate integration tests defined in `PlaintextConsumerTest` that 
> relate to the consumer group functionality work with the new consumer group 
> protocol. The majority of these tests should work with both consumer group 
> protocols, so parameterization of the tests seems like the most appropriate 
> way forward.





[jira] [Updated] (KAFKA-15280) Implement client support for selecting KIP-848 server-side assignor

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15280:
--
Fix Version/s: 3.7.0

> Implement client support for selecting KIP-848 server-side assignor
> ---
>
> Key: KAFKA-15280
> URL: https://issues.apache.org/jira/browse/KAFKA-15280
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Lucas Brutschy
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> This includes:
>  * Validate the client’s configuration for server-side assignor selection 
> defined in config `group.remote.assignor`
>  * Include the assignor taken from config in the {{ConsumerGroupHeartbeat}} 
> request, in the `ServerAssignor` field 
>  * Properly handle UNSUPPORTED_ASSIGNOR errors that may be returned in the HB 
> response if the server does not support the assignor defined by the consumer. 
> This task is part of the work to implement support for the new KIP-848 
> consumer group protocol.





[jira] [Updated] (KAFKA-15890) Consumer.poll with long timeout unaware of assigned partitions

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15890:
--
Fix Version/s: 3.7.0

> Consumer.poll with long timeout unaware of assigned partitions
> --
>
> Key: KAFKA-15890
> URL: https://issues.apache.org/jira/browse/KAFKA-15890
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> Various problems found testing `kafka-console-consumer.sh` with the new 
> consumer, including NPEs, never-ending reconciliation states, and failures to 
> fetch records.





[jira] [Updated] (KAFKA-15833) Restrict Consumer API to be used from one thread

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15833:
--
Fix Version/s: 3.7.0

> Restrict Consumer API to be used from one thread
> 
>
> Key: KAFKA-15833
> URL: https://issues.apache.org/jira/browse/KAFKA-15833
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lucas Brutschy
>Assignee: Lucas Brutschy
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The legacy consumer restricts the API to be used from one thread only. This 
> is not enforced in the new consumer. To avoid inconsistencies in the 
> behavior, we should enforce the same restriction in the new consumer.





[jira] [Updated] (KAFKA-15164) Extract reusable logic from OffsetsForLeaderEpochClient

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15164:
--
Fix Version/s: 3.7.0

> Extract reusable logic from OffsetsForLeaderEpochClient
> ---
>
> Key: KAFKA-15164
> URL: https://issues.apache.org/jira/browse/KAFKA-15164
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The OffsetsForLeaderEpochClient class is used for making asynchronous 
> requests to the OffsetsForLeaderEpoch API. It encapsulates the logic for:
>  * preparing the requests
>  * sending them over the network using the network client
>  * handling the response
> The new KafkaConsumer implementation, based on a new threading model, 
> requires the same logic for preparing the requests and handling the 
> responses, with different behaviour for how the request is actually sent.
> This task includes refactoring OffsetsForLeaderEpochClient by extracting out 
> the logic for preparing the requests and handling the responses. No changes 
> in the existing logic, just making the functionality available to be reused.





[jira] [Updated] (KAFKA-15317) Fix for async consumer access to committed offsets with multiple consumers

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15317:
--
Fix Version/s: 3.7.0

> Fix for async consumer access to committed offsets with multiple consumers
> --
>
> Key: KAFKA-15317
> URL: https://issues.apache.org/jira/browse/KAFKA-15317
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> Access to the committed offsets via a call to the _committed_ API works as 
> expected for a single async consumer, but it sometimes fails with a timeout 
> when trying to retrieve the committed offsets with another consumer in the 
> same group (test testConsumeFromCommittedOffsets in BaseAsynConsumerTest).





[jira] [Updated] (KAFKA-15562) Ensure fetch offset and commit offset handler handles both timeout and various error types

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15562:
--
Fix Version/s: 3.7.0

> Ensure fetch offset and commit offset handler handles both timeout and 
> various error types
> --
>
> Key: KAFKA-15562
> URL: https://issues.apache.org/jira/browse/KAFKA-15562
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> Both fetchOffsetRequest and commitOffsetRequest handlers don't have 
> sufficient logic to handle timeout exceptions.
>  
> CommitOffsetRequest handler also doesn't handle various server errors such 
> as coordinator not found. We need to handle:
> If the exception is non-null:
>  - handle RetriableError that respects requestTimeoutMs
>  - handle NonRetriableError
>  
> If the response contains error, ensure to:
>  - mark coordinator unknown if needed
>  - retry if needed
>  - fail the request





[jira] [Updated] (KAFKA-15913) Migrate AsyncKafkaConsumerTest away from ConsumerTestBuilder

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15913:
--
Fix Version/s: 3.7.0

> Migrate AsyncKafkaConsumerTest away from ConsumerTestBuilder
> 
>
> Key: KAFKA-15913
> URL: https://issues.apache.org/jira/browse/KAFKA-15913
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Lucas Brutschy
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> ConsumerTestBuilder is meant to be a unit testing utility; however, we seem 
> to use Mockito#spy quite liberally. This is not the right testing strategy 
> because we basically turn unit testing into integration testing.
>  
> While the current unit tests run fine, we should probably use Mockito#mock by 
> default and test each dependency independently.
>  
> The ask here is:
>  # Make mock(class) the default
>  # Provide a more flexible interface for the testBuilder to allow users to 
> configure spy or mock, or let users pass in their own mocks.





[jira] [Updated] (KAFKA-15573) Implement auto-commit on partition assignment revocation

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15573:
--
Fix Version/s: 3.7.0

> Implement auto-commit on partition assignment revocation
> 
>
> Key: KAFKA-15573
> URL: https://issues.apache.org/jira/browse/KAFKA-15573
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> When the group member's assignment changes and partitions are revoked and 
> auto-commit is enabled, we need to ensure that the commit request manager is 
> invoked to queue up the commits.





[jira] [Updated] (KAFKA-15679) Client support for new consumer configs

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15679:
--
Fix Version/s: 3.7.0

> Client support for new consumer configs
> ---
>
> Key: KAFKA-15679
> URL: https://issues.apache.org/jira/browse/KAFKA-15679
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The new consumer should support the new configs introduced by KIP-848:
> ||Config||Type||Default||Description||
> |group.protocol|enum|generic|A flag which indicates if the new protocol should be used or not. It could be: generic or consumer.|
> |group.remote.assignor|string|null|The server-side assignor to use. It cannot be used in conjunction with group.local.assignor. {{null}} means that the choice of the assignor is left to the group coordinator.|
> The protocol introduces a 3rd property for client-side (local) assignors, but 
> that will be introduced later on.
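A minimal sketch of how these configs could be set on a consumer. The property names come from the table above; the assignor value and the other settings are illustrative only:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Kip848ConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Opt into the new KIP-848 protocol instead of the classic (generic) one.
        props.put("group.protocol", "consumer");
        // Optional: pick a server-side assignor; leaving it unset lets the
        // group coordinator decide.
        props.put("group.remote.assignor", "uniform");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe/poll as usual
        }
    }
}
{code}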





[jira] [Updated] (KAFKA-15840) Correct initialization of ConsumerGroupHeartbeat by client

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15840:
--
Fix Version/s: 3.7.0

> Correct initialization of ConsumerGroupHeartbeat by client
> --
>
> Key: KAFKA-15840
> URL: https://issues.apache.org/jira/browse/KAFKA-15840
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The new consumer using the KIP-848 protocol currently leaves the 
> TopicPartitions set to null for the ConsumerGroupHeartbeat request, even when 
> the MemberEpoch is zero. This violates the KIP which expects the list to be 
> empty (but not null).





[jira] [Updated] (KAFKA-15865) Ensure consumer.poll() execute autocommit callback

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15865:
--
Fix Version/s: 3.7.0

> Ensure consumer.poll() execute autocommit callback
> --
>
> Key: KAFKA-15865
> URL: https://issues.apache.org/jira/browse/KAFKA-15865
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Lucas Brutschy
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> When the network thread completes autocommits, we need to send a 
> message/event to the application to notify the thread to execute the 
> callback.  In KAFKA-15327, the network thread sends a 
> AutoCommitCompletionBackgroundEvent to the polling thread.  The polling 
> thread should trigger the OffsetCommitCallback upon receiving it.





[jira] [Updated] (KAFKA-15275) Implement consumer group membership state machine

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15275:
--
Fix Version/s: 3.7.0

> Implement consumer group membership state machine
> -
>
> Key: KAFKA-15275
> URL: https://issues.apache.org/jira/browse/KAFKA-15275
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support
> Fix For: 3.7.0
>
>
> Provide the Java client support for the consumer group member state machine, 
> including:
>  * Define the states of the client member, based on the heartbeat 
> {{ConsumerGroupHeartbeat}} data structure & state transitions
>  * Determine the valid transitions between those states
>  * Provide functions to update state on successful and failed HB responses
> The state machine won't do any error handling, it's just responsible for 
> doing the appropriate state transitions and keeping the member info (id, 
> epoch, assignment)
> This task is part of the work to implement support for the new KIP-848 
> consumer group protocol.
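A rough sketch of what such a member state machine could look like. The state names here are hypothetical illustrations derived from the description, not the actual enum used by the client:

{code:java}
// Hypothetical member states; transitions are driven by successful and
// failed ConsumerGroupHeartbeat responses, as the ticket describes.
enum MemberState {
    UNSUBSCRIBED,   // not part of a group yet, no member id or epoch
    JOINING,        // first heartbeat sent, waiting for member id/epoch
    RECONCILING,    // target assignment received, reconciliation in progress
    STABLE,         // assignment acknowledged, heartbeating at the regular interval
    FATAL           // unrecoverable error returned in a heartbeat response
}
{code}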





[jira] [Updated] (KAFKA-15184) New consumer internals refactoring and clean up

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15184:
--
Fix Version/s: 3.7.0

> New consumer internals refactoring and clean up
> ---
>
> Key: KAFKA-15184
> URL: https://issues.apache.org/jira/browse/KAFKA-15184
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> Minor refactoring of the new consumer internals including introduction of the 
> {{RequestManagers}} class to hold references to the {{RequestManager}} 
> instances.





[jira] [Updated] (KAFKA-15842) Correct handling of KafkaConsumer.committed for new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15842:
--
Fix Version/s: 3.7.0

> Correct handling of KafkaConsumer.committed for new consumer
> 
>
> Key: KAFKA-15842
> URL: https://issues.apache.org/jira/browse/KAFKA-15842
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> KafkaConsumer.committed throws TimeoutException when there is no response. 
> The new consumer currently returns null. Change the new consumer to 
> behave like the old consumer.
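A sketch of the old-consumer behavior that the new consumer should match. The method wrapper, partition, and timeout values are illustrative:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

static Map<TopicPartition, OffsetAndMetadata> committedOffsets(Consumer<String, String> consumer, TopicPartition tp) {
    try {
        // The old consumer throws TimeoutException when no response arrives in time;
        // per this ticket the new consumer should do the same instead of returning null.
        return consumer.committed(Collections.singleton(tp), Duration.ofSeconds(5));
    } catch (TimeoutException e) {
        // handle or propagate the timeout
        throw e;
    }
}
{code}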





[jira] [Updated] (KAFKA-15554) Update client state machine to align with protocol sending one assignment at a time

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15554:
--
Fix Version/s: 3.7.0

> Update client state machine to align with protocol sending one assignment at 
> a time 
> 
>
> Key: KAFKA-15554
> URL: https://issues.apache.org/jira/browse/KAFKA-15554
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Blocker
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The new consumer group protocol will send one assignment to the member and 
> wait for an ack or a timeout before sending any other assignment. 
> The client state machine will be updated so that it takes on a new target 
> assignment only if there is no other in process. If a new target assignment 
> is received in that case, the member should fail with a fatal error, as this 
> is not expected in the protocol.





[jira] [Updated] (KAFKA-15533) Ensure HeartbeatRequestManager only send out some fields once

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15533:
--
Fix Version/s: 3.7.0

> Ensure HeartbeatRequestManager only send out some fields once
> -
>
> Key: KAFKA-15533
> URL: https://issues.apache.org/jira/browse/KAFKA-15533
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Minor
>  Labels: kip-848, kip-848-client-support
> Fix For: 3.7.0
>
>
> We want to ensure ConsumerGroupHeartbeatRequest is as lightweight as 
> possible, so many of its fields don't need to be re-sent. An example is 
> rebalanceTimeoutMs; currently we have the following code:
>  
> {code:java}
> ConsumerGroupHeartbeatRequestData data = new ConsumerGroupHeartbeatRequestData()
>     .setGroupId(membershipManager.groupId())
>     .setMemberEpoch(membershipManager.memberEpoch())
>     .setMemberId(membershipManager.memberId())
>     .setRebalanceTimeoutMs(rebalanceTimeoutMs);
> {code}
>  
> We should encapsulate these once-used fields into a class such as 
> HeartbeatMetadataBuilder, which should maintain state indicating whether a 
> certain field still needs to be sent or not.
>  
> Note that currently only 3 fields are mandatory in the request:
>  * groupId
>  * memberEpoch
>  * memberId
> Note that on retriable errors and network errors (e.g. timeout), a full 
> request should be sent to the broker.
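A minimal sketch of the HeartbeatMetadataBuilder idea described above. The class itself is hypothetical; only the request setters shown in the snippet are taken from the existing code:

{code:java}
import org.apache.kafka.common.message.ConsumerGroupHeartbeatRequestData;

class HeartbeatMetadataBuilder {
    private final int rebalanceTimeoutMs;
    private boolean rebalanceTimeoutSent = false;

    HeartbeatMetadataBuilder(int rebalanceTimeoutMs) {
        this.rebalanceTimeoutMs = rebalanceTimeoutMs;
    }

    ConsumerGroupHeartbeatRequestData build(String groupId, String memberId, int memberEpoch) {
        // The three mandatory fields are always sent.
        ConsumerGroupHeartbeatRequestData data = new ConsumerGroupHeartbeatRequestData()
                .setGroupId(groupId)
                .setMemberEpoch(memberEpoch)
                .setMemberId(memberId);
        // Once-used fields are only included the first time (or after a reset).
        if (!rebalanceTimeoutSent) {
            data.setRebalanceTimeoutMs(rebalanceTimeoutMs);
            rebalanceTimeoutSent = true;
        }
        return data;
    }

    // On retriable or network errors, call this so the next heartbeat is a full request.
    void reset() {
        rebalanceTimeoutSent = false;
    }
}
{code}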





[jira] [Updated] (KAFKA-15631) Do not send new heartbeat request while another one in-flight

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15631:
--
Fix Version/s: 3.7.0

> Do not send new heartbeat request while another one in-flight
> -
>
> Key: KAFKA-15631
> URL: https://issues.apache.org/jira/browse/KAFKA-15631
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> The client consumer should not send a new heartbeat request while a previous 
> one is in-flight. If a HB is in-flight, we should wait for a response or a 
> timeout before sending the next one.





[jira] [Updated] (KAFKA-15932) Flaky test - PlaintextConsumerTest.testSeek("kraft+kip-848","consumer")

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15932:
--
Fix Version/s: 3.7.0

> Flaky test - PlaintextConsumerTest.testSeek("kraft+kip-848","consumer")
> ---
>
> Key: KAFKA-15932
> URL: https://issues.apache.org/jira/browse/KAFKA-15932
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, flaky-test, kip-848, 
> kip-848-client-support
> Fix For: 3.7.0
>
>
> Intermittently failing test for the new consumer.
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14859/1/tests/
> ```Error
> org.opentest4j.AssertionFailedError: Timed out before consuming expected 1 
> records. The number consumed was 0.
> Stacktrace
> org.opentest4j.AssertionFailedError: Timed out before consuming expected 1 
> records. The number consumed was 0.
>   at 
> app//org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38)
>   at app//org.junit.jupiter.api.Assertions.fail(Assertions.java:134)
>   at 
> app//kafka.api.AbstractConsumerTest.consumeRecords(AbstractConsumerTest.scala:161)
>   at 
> app//kafka.api.AbstractConsumerTest.consumeAndVerifyRecords(AbstractConsumerTest.scala:128)
>   at 
> app//kafka.api.PlaintextConsumerTest.testSeek(PlaintextConsumerTest.scala:616)
>   at 
> java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>  Method)
>   at 
> java.base@11.0.16.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base@11.0.16.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@11.0.16.1/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at 
> app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
>   at 
> app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:214)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:139)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>   at 
> app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>   at 
> app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>   at 

[jira] [Updated] (KAFKA-15592) Member does not need to always try to join a group when a groupId is configured

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15592:
--
Fix Version/s: 3.7.0

> Member does not need to always try to join a group when a groupId is 
> configured
> ---
>
> Key: KAFKA-15592
> URL: https://issues.apache.org/jira/browse/KAFKA-15592
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> Currently, instantiating a membershipManager means the member will always 
> seek to join a group unless it has failed fatally.  However, this is not 
> always the case because the member should be able to join and leave a group 
> any time during its life cycle. Maybe we should include an "inactive" state 
> in the state machine indicating the member does not want to be in a rebalance 
> group.





[jira] [Updated] (KAFKA-15316) CommitRequestManager not calling RequestState callbacks

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15316:
--
Fix Version/s: 3.7.0

> CommitRequestManager not calling RequestState callbacks 
> 
>
> Key: KAFKA-15316
> URL: https://issues.apache.org/jira/browse/KAFKA-15316
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> CommitRequestManager is not triggering the RequestState callbacks that update 
> {_}lastReceivedMs{_}, affecting the _canSendRequest_ verification of the 
> RequestState





[jira] [Updated] (KAFKA-15188) Implement more of the remaining PrototypeAsyncConsumer APIs

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15188:
--
Fix Version/s: 3.7.0

> Implement more of the remaining PrototypeAsyncConsumer APIs
> ---
>
> Key: KAFKA-15188
> URL: https://issues.apache.org/jira/browse/KAFKA-15188
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> There are several {{Consumer}} APIs that only touch the {{ConsumerMetadata}} 
> and/or {{SubscriptionState}} classes; they do not perform network I/O or 
> otherwise block. These can be implemented without needing {{RequestManager}} 
> updates and include the following APIs:
>  - {{committed}}
>  - {{currentLag}}
>  - {{metrics}}
>  - {{pause}}
>  - {{paused}}
>  - {{position}}
>  - {{resume}}
>  - {{seek}}
>  - {{seekToBeginning}}
>  - {{seekToEnd}}
>  - {{subscribe}}





[jira] [Updated] (KAFKA-15543) Send HB request right after reconciliation completes

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15543:
--
Fix Version/s: 3.7.0

> Send HB request right after reconciliation completes
> 
>
> Key: KAFKA-15543
> URL: https://issues.apache.org/jira/browse/KAFKA-15543
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Blocker
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> HeartbeatRequest manager should send HB request outside of the interval, 
> right after the reconciliation process completes.





[jira] [Updated] (KAFKA-15270) Integration tests for AsyncConsumer simple consume case

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15270:
--
Fix Version/s: 3.7.0

> Integration tests for AsyncConsumer simple consume case
> ---
>
> Key: KAFKA-15270
> URL: https://issues.apache.org/jira/browse/KAFKA-15270
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> This task involves writing integration tests for covering the simple consume 
> functionality of the AsyncConsumer. This should include validation of the 
> assign, fetch and positions logic.
> Not covering any committed offset functionality as part of this task. 
> Integration tests should have a similar form as the existing 
> PlaintextConsumerTest, but scoped to the simple consume flow. 
>   





[jira] [Updated] (KAFKA-14675) Extract metadata-related tasks from Fetcher into MetadataFetcher

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-14675:
--
Fix Version/s: 3.7.0

> Extract metadata-related tasks from Fetcher into MetadataFetcher
> 
>
> Key: KAFKA-14675
> URL: https://issues.apache.org/jira/browse/KAFKA-14675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Affects Versions: 3.5.0
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The {{Fetcher}} class is used internally by the {{KafkaConsumer}} to fetch 
> records from the brokers. There is ongoing work to create a new consumer 
> implementation with a significantly refactored threading model. The threading 
> refactor work requires a similarly refactored {{{}Fetcher{}}}.
> This task covers the work to extract from {{Fetcher}} the APIs that are 
> related to metadata operations into a new class named 
> {{{}MetadataFetcher{}}}. This will allow the refactoring of {{Fetcher}} and 
> {{MetadataFetcher}} for the new consumer.





[jira] [Updated] (KAFKA-15550) OffsetsForTimes validation for negative timestamps in new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15550:
--
Fix Version/s: 3.7.0

> OffsetsForTimes validation for negative timestamps in new consumer
> --
>
> Key: KAFKA-15550
> URL: https://issues.apache.org/jira/browse/KAFKA-15550
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> OffsetsForTimes api call should fail with _IllegalArgumentException_ if 
> negative timestamps are provided as arguments. This will effectively exclude 
> earliest and latest offsets as target times, keeping the current behaviour of 
> the KafkaConsumer.
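A minimal sketch of the validation described above; the helper name and placement are illustrative:

{code:java}
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

static void validateTimestamps(Map<TopicPartition, Long> timestampsToSearch) {
    for (Map.Entry<TopicPartition, Long> entry : timestampsToSearch.entrySet()) {
        if (entry.getValue() != null && entry.getValue() < 0) {
            throw new IllegalArgumentException("The target time for partition " + entry.getKey()
                    + " is " + entry.getValue() + ". The target time cannot be negative.");
        }
    }
}
{code}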





[jira] [Updated] (KAFKA-15539) Client should stop fetching while partitions being revoked

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15539:
--
Fix Version/s: 3.7.0

> Client should stop fetching while partitions being revoked
> --
>
> Key: KAFKA-15539
> URL: https://issues.apache.org/jira/browse/KAFKA-15539
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-preview
> Fix For: 3.7.0
>
>
> When partitions are being revoked (client received revocation on heartbeat 
> and is in the process of invoking the callback), we need to make sure we do 
> not fetch from those partitions anymore:
>  * no new fetches should be sent out for the partitions being revoked
>  * no fetch responses should be handled for those partitions (case where a 
> fetch was already in-flight when the partition revocation started).
> This does not seem to be handled in the current KafkaConsumer and the old 
> consumer protocol (only for the EAGER protocol). 
> Consider re-using the existing pendingRevocation logic that already exist in 
> the subscriptionState & used from the fetcher to determine if a partition is 
> fetchable. 





[jira] [Updated] (KAFKA-15651) Investigate auto commit guarantees during Consumer.assign()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15651:
--
Fix Version/s: 3.7.0

> Investigate auto commit guarantees during Consumer.assign()
> ---
>
> Key: KAFKA-15651
> URL: https://issues.apache.org/jira/browse/KAFKA-15651
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> In the {{assign()}} method implementation, both {{KafkaConsumer}} and 
> {{PrototypeAsyncConsumer}} commit offsets asynchronously. Is this 
> intentional? [~junrao] asks in a [recent PR 
> review|https://github.com/apache/kafka/pull/14406/files/193af8230d0c61853d764cbbe29bca2fc6361af9#r1349023459]:
> {quote}Do we guarantee that the new owner of the unsubscribed partitions 
> could pick up the latest committed offset?
> {quote}
> Let's confirm whether the asynchronous approach is acceptable and correct. If 
> it is, great, let's enhance the documentation to briefly explain why. If it 
> is not, let's correct the behavior if it's within the API semantic 
> expectations.





[jira] [Updated] (KAFKA-15540) Handle heartbeat and revocation when consumer leaves group

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15540:
--
Fix Version/s: 3.7.0

> Handle heartbeat and revocation when consumer leaves group
> --
>
> Key: KAFKA-15540
> URL: https://issues.apache.org/jira/browse/KAFKA-15540
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> When a consumer intentionally leaves a group we should:
>  * release assignment (revoke partitions)
>  * send a last Heartbeat request with epoch -1 (or -2 if static member)
> Note that the revocation involves stopping fetching, committing offsets if 
> auto-commit is enabled, and invoking the onPartitionsRevoked callback.
>  
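A sketch of the final "leave" heartbeat described above. The wrapper method and variable names are assumptions; the epoch values come from the description:

{code:java}
import org.apache.kafka.common.message.ConsumerGroupHeartbeatRequestData;

ConsumerGroupHeartbeatRequestData buildLeaveGroupRequest(String groupId, String memberId, boolean isStaticMember) {
    int leaveEpoch = isStaticMember ? -2 : -1;
    return new ConsumerGroupHeartbeatRequestData()
            .setGroupId(groupId)
            .setMemberId(memberId)
            .setMemberEpoch(leaveEpoch);
}
{code}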





[jira] [Updated] (KAFKA-15887) Autocommit during close consistently fails with exception in background thread

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15887:
--
Fix Version/s: 3.7.0

> Autocommit during close consistently fails with exception in background thread
> --
>
> Key: KAFKA-15887
> URL: https://issues.apache.org/jira/browse/KAFKA-15887
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lucas Brutschy
>Assignee: Philip Nee
>Priority: Blocker
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> when I run {{AsyncKafkaConsumerTest}} I get this every time I call close:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index: 0
>   at java.base/java.util.Collections$EmptyList.get(Collections.java:4483)
>   at 
> java.base/java.util.Collections$UnmodifiableList.get(Collections.java:1310)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.findCoordinatorSync(ConsumerNetworkThread.java:302)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.ensureCoordinatorReady(ConsumerNetworkThread.java:288)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.maybeAutoCommitAndLeaveGroup(ConsumerNetworkThread.java:276)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.cleanup(ConsumerNetworkThread.java:257)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkThread.run(ConsumerNetworkThread.java:101)
> {code}





[jira] [Updated] (KAFKA-15438) Review exception caching logic used for reset/validate positions in async consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15438:
--
Fix Version/s: 3.7.0

> Review exception caching logic used for reset/validate positions in async 
> consumer
> --
>
> Key: KAFKA-15438
> URL: https://issues.apache.org/jira/browse/KAFKA-15438
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Minor
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The refactored async consumer reuses part of the core logic required for 
> resetting and validating positions. That currently works on the principle of 
> async requests, that reset/validate positions when responses are received. If 
> the responses include errors, or if a validation verification fails (ex. log 
> truncation detected), exceptions are saved in-memory, to be thrown on the 
> next call to the reset/validate. Note that these functionalities are 
> periodically called as part of the poll loop to update fetch positions before 
> fetching records.
>  
> As an initial implementation, the async consumer reuses this same caching 
> logic, as it has the async nature required. Keeping this caching logic ensures 
> that we maintain the timing of the exceptions thrown for reset/validate 
> (they are currently not thrown when discovered; instead they are thrown on 
> the next call to reset/validate). This task aims at reviewing the 
> implications of changing this behaviour and relying on the completion of the 
> Reset and Validate events instead to propagate the errors found. Note that 
> this would happen closely intertwined with the continued poll loop, which may 
> have already issued a new reset/validate.





[jira] [Updated] (KAFKA-15694) New integration tests to have full coverage for preview

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15694:
--
Fix Version/s: 3.7.0

> New integration tests to have full coverage for preview
> ---
>
> Key: KAFKA-15694
> URL: https://issues.apache.org/jira/browse/KAFKA-15694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Minor
>  Labels: kip-848, kip-848-client-support, kip-848-preview
> Fix For: 3.7.0
>
>
> These are to cover bugs discovered during PR reviews but not caught by tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15174) Ensure the correct thread is executing the callbacks

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15174:
--
Fix Version/s: 3.7.0

> Ensure the correct thread is executing the callbacks
> 
>
> Key: KAFKA-15174
> URL: https://issues.apache.org/jira/browse/KAFKA-15174
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> We need to add assertion tests to ensure the correct thread is executing the 
> offset commit callbacks and rebalance callbacks.
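A minimal sketch of what such an assertion could look like; the callback wiring and names here are hypothetical, not the actual test code:

{code:java}
import java.util.concurrent.atomic.AtomicReference;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertSame;

// Hypothetical sketch: record the thread that runs a callback and assert it is the
// application thread that called poll(), not the background/network thread.
public class CallbackThreadAssertionSketch {

    @Test
    void callbackRunsOnApplicationThread() {
        AtomicReference<Thread> callbackThread = new AtomicReference<>();

        // Stand-in for an offset commit or rebalance callback.
        Runnable callback = () -> callbackThread.set(Thread.currentThread());

        Thread applicationThread = Thread.currentThread();
        callback.run(); // in a real test this would be driven through Consumer.poll()

        assertSame(applicationThread, callbackThread.get(),
            "callback must execute on the application thread");
    }
}
{code}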



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14247) Implement EventHandler interface and DefaultEventHandler for Consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-14247:
--
Fix Version/s: 3.7.0

> Implement EventHandler interface and DefaultEventHandler for Consumer
> -
>
> Key: KAFKA-14247
> URL: https://issues.apache.org/jira/browse/KAFKA-14247
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The background thread runs inside the DefaultEventHandler to consume 
> events from the ApplicationEventQueue and produce events to the 
> BackgroundEventQueue.
> The background thread runnable consists of a loop that tries to poll events 
> from the ApplicationEventQueue, processes the events if there are any, and 
> polls the networkClient.
> In this implementation, the DefaultEventHandler spawns a thread that runs the 
> BackgroundThreadRunnable.  The runnable, as of the current PR, does the 
> following things:
>  # Initialize the networkClient
>  # Poll ApplicationEvent from the queue if there's any
>  # Process the event
>  # Poll the networkClient
> PR: https://github.com/apache/kafka/pull/12672
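A hedged skeleton of the loop described above; all class and method names are illustrative stand-ins, not the actual DefaultEventHandler internals:

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative skeleton: drain application events, process them, then poll the network layer.
public class BackgroundLoopSketch implements Runnable {

    interface ApplicationEvent { }

    interface NetworkLayer {
        void poll(long timeoutMs);
    }

    private final BlockingQueue<ApplicationEvent> applicationEvents = new LinkedBlockingQueue<>();
    private final NetworkLayer network;
    private volatile boolean running = true;

    BackgroundLoopSketch(NetworkLayer network) {
        this.network = network;
    }

    public void add(ApplicationEvent event) {
        applicationEvents.add(event);
    }

    @Override
    public void run() {
        while (running) {
            ApplicationEvent event;
            while ((event = applicationEvents.poll()) != null) {
                process(event);        // process each pending application event
            }
            network.poll(100);         // then poll the network client
        }
    }

    private void process(ApplicationEvent event) {
        // In the real code this would dispatch to the appropriate request manager.
    }

    public void close() {
        running = false;
    }
}
{code}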



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15555) Ensure wakeups are handled correctly in PrototypeAsyncConsumer.poll()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15555:
--
Fix Version/s: 3.7.0

> Ensure wakeups are handled correctly in PrototypeAsyncConsumer.poll()
> -
>
> Key: KAFKA-15555
> URL: https://issues.apache.org/jira/browse/KAFKA-15555
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Bruno Cadonna
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> The implementation of the {{poll()}} method in {{PrototypeAsyncConsumer}} 
> does not disable wakeups in the same manner that {{KafkaConsumer}} does. 
> Investigate how to make the new implementation consistent with the 
> functionality of the existing implementation.
> There was a comment in the code that I plan to remove, but I will leave it 
> here for reference:
> {quote}// TODO: Once we implement poll(), clear wakeupTrigger in a finally 
> block: wakeupTrigger.clearActiveTask();{quote}
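A minimal sketch of the try/finally shape hinted at by the quoted TODO; apart from the wakeupTrigger/clearActiveTask names taken from that comment, everything here is hypothetical:

{code:java}
import java.time.Duration;

// Illustrative only: register an active task for the duration of poll() and always clear it.
public class PollWakeupSketch {

    static class WakeupTrigger {
        void setActiveTask() { /* remember the operation that wakeup() should interrupt */ }
        void clearActiveTask() { /* forget it so a later wakeup() is not mis-attributed */ }
    }

    private final WakeupTrigger wakeupTrigger = new WakeupTrigger();

    public void poll(Duration timeout) {
        wakeupTrigger.setActiveTask();
        try {
            // fetch records, process background events, etc.
        } finally {
            wakeupTrigger.clearActiveTask();
        }
    }
}
{code}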



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14252) Create background thread skeleton for new Consumer threading model

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-14252:
--
Fix Version/s: 3.7.0

> Create background thread skeleton for new Consumer threading model
> --
>
> Key: KAFKA-14252
> URL: https://issues.apache.org/jira/browse/KAFKA-14252
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The event handler internally instantiates a background thread to consume 
> ApplicationEvents and produce BackgroundEvents.  In this ticket, we will 
> create a skeleton of the background thread.  We will incrementally add 
> implementation in the future.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15534) Propagate client response time when timeout to the request handler

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15534:
--
Fix Version/s: 3.7.0

> Propagate client response time when timeout to the request handler
> --
>
> Key: KAFKA-15534
> URL: https://issues.apache.org/jira/browse/KAFKA-15534
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> Currently, we don't have a good way to propagate the response time to the 
> handler when a timeout is thrown.
> {code:java}
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms.")); 
> {code}
> The current request manager invokes a system call to retrieve the response 
> time, which is not ideal because it is already available in the network client.
> This is an example of the coordinator request manager:
> {code:java}
> unsentRequest.future().whenComplete((clientResponse, throwable) -> {
> long responseTimeMs = time.milliseconds();
> if (clientResponse != null) {
> FindCoordinatorResponse response = (FindCoordinatorResponse) 
> clientResponse.responseBody();
> onResponse(responseTimeMs, response);
> } else {
> onFailedResponse(responseTimeMs, throwable);
> }
> }); {code}
> But in the networkClientDelegate, we should utilize the currentTimeMs in 
> trySend to avoid calling time.milliseconds():
> {code:java}
> private void trySend(final long currentTimeMs) {
> ...
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms."));
> continue;
> }
> } {code}
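A minimal sketch of the suggested change, assuming a handler callback that accepts the response time; the types here are illustrative, not the actual NetworkClientDelegate API:

{code:java}
// Hypothetical types: the timestamp already passed into trySend(...) travels with the
// failure instead of being re-read from the clock via time.milliseconds().
public class TimeoutPropagationSketch {

    interface RequestHandler {
        void onFailure(long responseTimeMs, RuntimeException cause);
    }

    static class UnsentRequest {
        final RequestHandler handler;
        final long timeoutMs;

        UnsentRequest(RequestHandler handler, long timeoutMs) {
            this.handler = handler;
            this.timeoutMs = timeoutMs;
        }
    }

    // currentTimeMs is the timestamp the caller already has; no extra clock read needed.
    void failExpiredRequest(UnsentRequest unsent, long currentTimeMs) {
        unsent.handler.onFailure(currentTimeMs, new RuntimeException(
            "Failed to send request after " + unsent.timeoutMs + " ms."));
    }
}
{code}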



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15281) Implement the groupMetadata Consumer API

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15281:
--
Fix Version/s: 3.7.0

> Implement the groupMetadata Consumer API
> 
>
> Key: KAFKA-15281
> URL: https://issues.apache.org/jira/browse/KAFKA-15281
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Bruno Cadonna
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
> Fix For: 3.7.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> The threading refactor project needs to implement the {{groupMetadata()}} API 
> call once support for the KIP-848 protocol is implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15531) Ensure coordinator node is removed upon disconnection exception

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15531:
--
Fix Version/s: 3.7.0

> Ensure coordinator node is removed upon disconnection exception
> ---
>
> Key: KAFKA-15531
> URL: https://issues.apache.org/jira/browse/KAFKA-15531
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> In the async consumer, the coordinator isn't being removed when receiving the 
> following exception:
>  
> {code:java}
> if (e instanceof DisconnectException) {
>   markCoordinatorUnknown(true, e.getMessage());
> }{code}
>  
> This should happen for all requests going to the coordinator node:
> 1. heartbeat 2. offset fetch/commit 
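An illustrative sketch of the handling described above; DisconnectException is stubbed locally and the method names are hypothetical, not the actual request-manager code:

{code:java}
// Illustrative only: on a disconnect-style failure from any coordinator-bound request
// (heartbeat, offset fetch/commit), forget the coordinator so it is re-discovered.
public class CoordinatorDisconnectSketch {

    // Local stand-in for org.apache.kafka.common.errors.DisconnectException.
    static class DisconnectException extends RuntimeException {
        DisconnectException(String message) {
            super(message);
        }
    }

    private volatile String coordinatorNode; // null means "unknown"

    void onCoordinatorRequestFailure(Throwable error) {
        if (error instanceof DisconnectException) {
            markCoordinatorUnknown(error.getMessage());
        }
    }

    private void markCoordinatorUnknown(String reason) {
        // Dropping the node means a FindCoordinator request is sent on the next poll.
        coordinatorNode = null;
    }
}
{code}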



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14246) Update threading model for Consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-14246:
--
Fix Version/s: 3.8.0

> Update threading model for Consumer
> ---
>
> Key: KAFKA-14246
> URL: https://issues.apache.org/jira/browse/KAFKA-14246
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor
> Fix For: 3.8.0
>
>
> Hi community,
>  
> We are refactoring the current KafkaConsumer and making it more asynchronous. 
>  This is the master Jira to track the project's progress; subtasks will be 
> linked to this ticket.  Please review the design document and feel free to 
> use this thread for discussion. 
>  
> The design document is here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/Proposal%3A+Consumer+Threading+Model+Refactor]
>  
> The original email thread is here: 
> [https://lists.apache.org/thread/13jvwzkzmb8c6t7drs4oj2kgkjzcn52l]
>  
> I will continue to update the 1pager as reviews and comments come.
>  
> Thanks, 
> P



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15775) Implement listTopics() and partitionFor() for the AsyncKafkaConsumer

2023-12-18 Thread Kirk True (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798412#comment-17798412
 ] 

Kirk True commented on KAFKA-15775:
---

[~schofielaj]—please update the status of this Jira since this is merged. 
Thanks!

> Implement listTopics() and partitionFor() for the AsyncKafkaConsumer
> 
>
> Key: KAFKA-15775
> URL: https://issues.apache.org/jira/browse/KAFKA-15775
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> {code:java}
> @Override
> public List<PartitionInfo> partitionsFor(String topic) {
> return partitionsFor(topic, Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public List<PartitionInfo> partitionsFor(String topic, Duration timeout) {
> throw new KafkaException("method not implemented");
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics() {
> return listTopics(Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics(Duration timeout) {
> throw new KafkaException("method not implemented");
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15775) Implement listTopics() and partitionFor() for the AsyncKafkaConsumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15775:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: kip-848 kip-848-client-support kip-848-e2e 
kip-848-preview)

> Implement listTopics() and partitionFor() for the AsyncKafkaConsumer
> 
>
> Key: KAFKA-15775
> URL: https://issues.apache.org/jira/browse/KAFKA-15775
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> {code:java}
> @Override
> public List<PartitionInfo> partitionsFor(String topic) {
> return partitionsFor(topic, Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public List<PartitionInfo> partitionsFor(String topic, Duration timeout) {
> throw new KafkaException("method not implemented");
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics() {
> return listTopics(Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics(Duration timeout) {
> throw new KafkaException("method not implemented");
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16031) Enabling testApplyDeltaShouldHandleReplicaAssignedToOnlineDirectory for tiered storage after supporting JBOD

2023-12-18 Thread Luke Chen (Jira)
Luke Chen created KAFKA-16031:
-

 Summary: Enabling 
testApplyDeltaShouldHandleReplicaAssignedToOnlineDirectory for tiered storage 
after supporting JBOD
 Key: KAFKA-16031
 URL: https://issues.apache.org/jira/browse/KAFKA-16031
 Project: Kafka
  Issue Type: Test
  Components: Tiered-Storage
Reporter: Luke Chen


Currently, tiered storage doesn't support JBOD (multiple log dirs). The test  
testApplyDeltaShouldHandleReplicaAssignedToOnlineDirectory requires multiple 
log dirs to run it. We should enable it for tiered storage after supporting 
JBOD in tiered storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15321) Document consumer group member state machine

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15321:
--
Component/s: documentation

> Document consumer group member state machine
> 
>
> Key: KAFKA-15321
> URL: https://issues.apache.org/jira/browse/KAFKA-15321
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, documentation
>Reporter: Kirk True
>Assignee: Lianet Magrans
>Priority: Minor
>  Labels: kip-848, kip-848-client-support
>
> We need to first document the new consumer group member state machine. What 
> are the different states and what are the transitions?
> See [~pnee]'s notes: 
> [https://cwiki.apache.org/confluence/display/KAFKA/Consumer+threading+refactor+design]
> *_Don’t forget to include diagrams for clarity!_*
> This should be documented on the AK wiki.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15691) Upgrade existing and add new system tests to use new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15691:
--
Component/s: system tests

> Upgrade existing and add new system tests to use new consumer
> -
>
> Key: KAFKA-15691
> URL: https://issues.apache.org/jira/browse/KAFKA-15691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, system tests
>Reporter: Kirk True
>Priority: Minor
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15692) New integration tests to ensure full coverage

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15692:
--
Component/s: unit tests

> New integration tests to ensure full coverage
> -
>
> Key: KAFKA-15692
> URL: https://issues.apache.org/jira/browse/KAFKA-15692
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Kirk True
>Priority: Minor
>  Labels: kip-848, kip-848-client-support
>
> These are to cover bugs discovered during PR reviews but not caught by tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15320) Document event queueing patterns

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15320:
--
Component/s: documentation

> Document event queueing patterns
> 
>
> Key: KAFKA-15320
> URL: https://issues.apache.org/jira/browse/KAFKA-15320
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, documentation
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor
>
> We need to first document the event enqueuing patterns in the 
> PrototypeAsyncConsumer. As part of this task, determine if it’s 
> necessary/beneficial to _conditionally_ add events and/or coalesce any 
> duplicate events in the queue.
> _Don’t forget to include diagrams for clarity!_
> This should be documented on the AK wiki.
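As one possible shape for the coalescing idea mentioned above, a small sketch that keeps at most one pending event per event type; this is an assumption for illustration, not the actual queue implementation:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: repeated enqueues of the same kind of event coalesce into one entry.
public class CoalescingEventQueue<E> {

    private final Map<Class<?>, E> pending = new ConcurrentHashMap<>();

    // Returns true if the event was queued, false if one of the same type was already pending.
    public boolean addIfAbsent(E event) {
        return pending.putIfAbsent(event.getClass(), event) == null;
    }

    // Drains whatever is pending; newly added events stay queued for the next drain.
    public Collection<E> drain() {
        Collection<E> drained = new ArrayList<>();
        for (Class<?> type : pending.keySet()) {
            E event = pending.remove(type);
            if (event != null) {
                drained.add(event);
            }
        }
        return drained;
    }
}
{code}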



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15999) Migrate HeartbeatRequestManagerTest away from ConsumerTestBuilder

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15999:
--
Component/s: unit tests

> Migrate HeartbeatRequestManagerTest away from ConsumerTestBuilder
> -
>
> Key: KAFKA-15999
> URL: https://issues.apache.org/jira/browse/KAFKA-15999
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Lucas Brutschy
>Priority: Major
>  Labels: consumer-threading-refactor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16000) Migrate MembershipManagerImplTest away from ConsumerTestBuilder

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16000:
--
Summary: Migrate MembershipManagerImplTest away from ConsumerTestBuilder  
(was: Migrate MembershipManagerImpl away from ConsumerTestBuilder)

> Migrate MembershipManagerImplTest away from ConsumerTestBuilder
> ---
>
> Key: KAFKA-16000
> URL: https://issues.apache.org/jira/browse/KAFKA-16000
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Lucas Brutschy
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16000) Migrate MembershipManagerImpl away from ConsumerTestBuilder

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16000:
--
Component/s: unit tests

> Migrate MembershipManagerImpl away from ConsumerTestBuilder
> ---
>
> Key: KAFKA-16000
> URL: https://issues.apache.org/jira/browse/KAFKA-16000
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Lucas Brutschy
>Assignee: Kirk True
>Priority: Major
>  Labels: consumer-threading-refactor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16024) SaslPlaintextConsumerTest#testCoordinatorFailover is flaky

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16024:
--
Labels: consumer-threading-refactor flaky-test  (was: 
consumer-threading-refactor)

> SaslPlaintextConsumerTest#testCoordinatorFailover is flaky
> --
>
> Key: KAFKA-16024
> URL: https://issues.apache.org/jira/browse/KAFKA-16024
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, flaky-test
>
> The test is flaky with the async consumer as we are observing
>  
> {code:java}
> org.opentest4j.AssertionFailedError: Failed to observe commit callback before 
> timeout{code}
> I was not able to replicate this on my local machine easily.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16024) SaslPlaintextConsumerTest#testCoordinatorFailover is flaky

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16024:
--
Component/s: unit tests

> SaslPlaintextConsumerTest#testCoordinatorFailover is flaky
> --
>
> Key: KAFKA-16024
> URL: https://issues.apache.org/jira/browse/KAFKA-16024
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer, unit tests
>Reporter: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, flaky-test
>
> The test is flaky with the async consumer as we are observing
>  
> {code:java}
> org.opentest4j.AssertionFailedError: Failed to observe commit callback before 
> timeout{code}
> I was not able to replicate this on my local machine easily.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16026) AsyncConsumer does not send a poll event to the background thread

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True reassigned KAFKA-16026:
-

Assignee: Philip Nee

> AsyncConsumer does not send a poll event to the background thread
> -
>
> Key: KAFKA-16026
> URL: https://issues.apache.org/jira/browse/KAFKA-16026
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Blocker
>  Labels: consumer-threading-refactor
>
> consumer poll does not send a poll event to the background thread to:
>  # trigger autocommit
>  # reset max poll interval timer
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16026) AsyncConsumer does not send a poll event to the background thread

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16026:
--
Fix Version/s: 3.7.0

> AsyncConsumer does not send a poll event to the background thread
> -
>
> Key: KAFKA-16026
> URL: https://issues.apache.org/jira/browse/KAFKA-16026
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Blocker
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> consumer poll does not send a poll event to the background thread to:
>  # trigger autocommit
>  # reset max poll interval timer
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15818) Implement max poll interval

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15818:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-e2e 
kip-848-preview  (was: consumer consumer-threading-refactor 
kip-848-client-support kip-848-e2e kip-848-preview)

> Implement max poll interval
> ---
>
> Key: KAFKA-15818
> URL: https://issues.apache.org/jira/browse/KAFKA-15818
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
>
> The consumer needs to be polled at a cadence shorter than 
> max.poll.interval.ms; otherwise the consumer should try to leave the group.  
> Currently, we send an acknowledgment event to the network thread per poll.  
> The event only triggers an update of the auto-commit state; we need to 
> implement updating the poll timer so that the consumer can leave the group 
> when the timer expires. 
>  
> The current logic looks like this:
> {code:java}
>  if (heartbeat.pollTimeoutExpired(now)) {
> // the poll timeout has expired, which means that the foreground thread 
> has stalled
> // in between calls to poll().
> log.warn("consumer poll timeout has expired. This means the time between 
> subsequent calls to poll() " +
> "was longer than the configured max.poll.interval.ms, which typically 
> implies that " +
> "the poll loop is spending too much time processing messages. You can 
> address this " +
> "either by increasing max.poll.interval.ms or by reducing the maximum 
> size of batches " +
> "returned in poll() with max.poll.records.");
> maybeLeaveGroup("consumer poll timeout has expired.");
> } {code}
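A minimal sketch of what resetting and checking the poll timer could look like in the new consumer; the names and wiring are hypothetical:

{code:java}
// Hypothetical sketch: each poll event resets the timer; when the timer expires the member
// proactively leaves the group, mirroring the legacy logic quoted above.
public class PollIntervalSketch {

    private final long maxPollIntervalMs;
    private long lastPollMs;

    PollIntervalSketch(long maxPollIntervalMs, long nowMs) {
        this.maxPollIntervalMs = maxPollIntervalMs;
        this.lastPollMs = nowMs;
    }

    // Called when the background thread receives the per-poll event from the application thread.
    public void onPollEvent(long nowMs) {
        lastPollMs = nowMs;
    }

    // Called on every background-thread iteration; leaves the group once the timer expires.
    public void maybeLeaveGroup(long nowMs, Runnable leaveGroup) {
        if (nowMs - lastPollMs > maxPollIntervalMs) {
            leaveGroup.run();
        }
    }
}
{code}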



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15696) Revoke partitions on Consumer.close()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15696:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-e2e kip-848-preview)

> Revoke partitions on Consumer.close()
> -
>
> Key: KAFKA-15696
> URL: https://issues.apache.org/jira/browse/KAFKA-15696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> Upon closing the {{Consumer}} we need to revoke the assignment. This 
> involves stopping fetching, committing offsets if auto-commit is enabled, and 
> invoking the onPartitionsRevoked callback. There is a mechanism introduced in PR 
> [14406|https://github.com/apache/kafka/pull/14406] that allows for performing 
> network I/O on shutdown. The new method 
> {{ConsumerNetworkThread.runAtClose()}} will be executed when 
> {{Consumer.close()}} is invoked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15696) Revoke partitions on Consumer.close()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15696:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-e2e 
kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-client-support kip-848-e2e kip-848-preview)

> Revoke partitions on Consumer.close()
> -
>
> Key: KAFKA-15696
> URL: https://issues.apache.org/jira/browse/KAFKA-15696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
>
> Upon closing the {{Consumer}} we need to revoke the assignment. This 
> involves stopping fetching, committing offsets if auto-commit is enabled, and 
> invoking the onPartitionsRevoked callback. There is a mechanism introduced in PR 
> [14406|https://github.com/apache/kafka/pull/14406] that allows for performing 
> network I/O on shutdown. The new method 
> {{ConsumerNetworkThread.runAtClose()}} will be executed when 
> {{Consumer.close()}} is invoked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15550) OffsetsForTimes validation for negative timestamps in new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15550:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-preview  (was: consumer-threading-refactor kip-848 kip-848-preview)

> OffsetsForTimes validation for negative timestamps in new consumer
> --
>
> Key: KAFKA-15550
> URL: https://issues.apache.org/jira/browse/KAFKA-15550
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-preview
>
> The OffsetsForTimes API call should fail with _IllegalArgumentException_ if 
> negative timestamps are provided as arguments. This will effectively exclude 
> earliest and latest offsets as target times, keeping the current behaviour of 
> the KafkaConsumer.
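A minimal sketch of the validation described, assuming the timestamps arrive as a map of partition to target time; this is not the actual consumer code:

{code:java}
import java.util.Map;

// Minimal sketch: reject negative target timestamps before building the request.
public class OffsetsForTimesValidationSketch {

    public static <K> void validateTimestamps(Map<K, Long> timestampsToSearch) {
        for (Map.Entry<K, Long> entry : timestampsToSearch.entrySet()) {
            if (entry.getValue() == null || entry.getValue() < 0) {
                throw new IllegalArgumentException("The target time for partition " + entry.getKey()
                    + " is " + entry.getValue() + ". The target time cannot be negative.");
            }
        }
    }
}
{code}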



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15281) Implement the groupMetadata Consumer API

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15281:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-preview  
(was: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-preview)

> Implement the groupMetadata Consumer API
> 
>
> Key: KAFKA-15281
> URL: https://issues.apache.org/jira/browse/KAFKA-15281
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Bruno Cadonna
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> The threading refactor project needs to implement the {{groupMetadata()}} API 
> call once support for the KIP-848 protocol is implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15548) Send GroupConsumerHeartbeatRequest on Consumer.close()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15548:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-e2e 
kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-client-support kip-848-e2e kip-848-preview)

> Send GroupConsumerHeartbeatRequest on Consumer.close()
> --
>
> Key: KAFKA-15548
> URL: https://issues.apache.org/jira/browse/KAFKA-15548
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
>
> Upon closing of the {{Consumer}} we need to send the last 
> GroupConsumerHeartbeatRequest with epoch = -1 to leave the group (or -2 if 
> static member). There is a mechanism introduced in PR 
> [14406|https://github.com/apache/kafka/pull/14406] that allows for performing 
> network I/O on shutdown. The new method 
> {{ConsumerNetworkThread.runAtClose()}} will be executed when 
> {{Consumer.close()}} is invoked.
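A tiny sketch encoding the epoch rule stated above (-1 for a dynamic member, -2 for a static member); the constant names are illustrative:

{code:java}
// Illustrative constants for the final heartbeat's member epoch on leave.
public class LeaveGroupEpochSketch {

    static final int LEAVE_GROUP_MEMBER_EPOCH = -1;        // dynamic member leaving
    static final int LEAVE_GROUP_STATIC_MEMBER_EPOCH = -2; // static member leaving

    static int leaveEpoch(boolean isStaticMember) {
        return isStaticMember ? LEAVE_GROUP_STATIC_MEMBER_EPOCH : LEAVE_GROUP_MEMBER_EPOCH;
    }
}
{code}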



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15548) Send GroupConsumerHeartbeatRequest on Consumer.close()

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15548:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-e2e kip-848-preview)

> Send GroupConsumerHeartbeatRequest on Consumer.close()
> --
>
> Key: KAFKA-15548
> URL: https://issues.apache.org/jira/browse/KAFKA-15548
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> Upon closing of the {{Consumer}} we need to send the last 
> GroupConsumerHeartbeatRequest with epoch = -1 to leave the group (or -2 if 
> static member). There is a mechanism introduced in PR 
> [14406|https://github.com/apache/kafka/pull/14406] that allows for performing 
> network I/O on shutdown. The new method 
> {{ConsumerNetworkThread.runAtClose()}} will be executed when 
> {{Consumer.close()}} is invoked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15534) Propagate client response time when timeout to the request handler

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15534:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-e2e 
kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-client-support kip-848-e2e kip-848-preview)

> Propagate client response time when timeout to the request handler
> --
>
> Key: KAFKA-15534
> URL: https://issues.apache.org/jira/browse/KAFKA-15534
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
>
> Currently, we don't have a good way to propagate the response time to the 
> handler when a timeout is thrown.
> {code:java}
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms.")); 
> {code}
> The current request manager invokes a system call to retrieve the response 
> time, which is not ideal because it is already available in the network client.
> This is an example of the coordinator request manager:
> {code:java}
> unsentRequest.future().whenComplete((clientResponse, throwable) -> {
> long responseTimeMs = time.milliseconds();
> if (clientResponse != null) {
> FindCoordinatorResponse response = (FindCoordinatorResponse) 
> clientResponse.responseBody();
> onResponse(responseTimeMs, response);
> } else {
> onFailedResponse(responseTimeMs, throwable);
> }
> }); {code}
> But in the networkClientDelegate, we should utilize the currentTimeMs in 
> trySend to avoid calling time.milliseconds():
> {code:java}
> private void trySend(final long currentTimeMs) {
> ...
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms."));
> continue;
> }
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15277) Design & implement support for internal Consumer delegates

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15277:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-e2e 
kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-client-support kip-848-e2e kip-848-preview)

> Design & implement support for internal Consumer delegates
> --
>
> Key: KAFKA-15277
> URL: https://issues.apache.org/jira/browse/KAFKA-15277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> The consumer refactoring project introduced another {{Consumer}} 
> implementation, creating two different, coexisting implementations of the 
> {{Consumer}} interface:
>  * {{KafkaConsumer}} (AKA "existing", "legacy" consumer)
>  * {{PrototypeAsyncConsumer}} (AKA "new", "refactored" consumer)
> The goal of this task is to refactor the code via the delegation pattern so 
> that we can keep a top-level {{KafkaConsumer}} but then delegate to another 
> implementation under the covers. There will be two delegates at first:
>  * {{LegacyKafkaConsumer}}
>  * {{AsyncKafkaConsumer}}
> {{LegacyKafkaConsumer}} will essentially be a renamed {{{}KafkaConsumer{}}}. That 
> implementation handles the existing group protocol. {{AsyncKafkaConsumer}} is 
> renamed from {{PrototypeAsyncConsumer}} and will implement the new consumer 
> group protocol from KIP-848. Both of those implementations will live in the 
> "internals" sub-package to discourage their use.
> This task is part of the work to implement support for the new KIP-848 
> consumer group protocol.
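A simplified sketch of the delegation shape described above, with the Consumer API reduced to two methods for brevity; the names are illustrative, not the actual delegate interfaces:

{code:java}
import java.time.Duration;

// Simplified sketch: the public consumer keeps its API surface and forwards every call
// to one of two internal delegates chosen at creation time.
public class DelegatingConsumerSketch {

    interface ConsumerDelegate {
        void poll(Duration timeout);
        void close();
    }

    static class LegacyDelegate implements ConsumerDelegate {
        public void poll(Duration timeout) { /* classic group protocol */ }
        public void close() { }
    }

    static class AsyncDelegate implements ConsumerDelegate {
        public void poll(Duration timeout) { /* KIP-848 protocol */ }
        public void close() { }
    }

    private final ConsumerDelegate delegate;

    DelegatingConsumerSketch(boolean useNewProtocol) {
        this.delegate = useNewProtocol ? new AsyncDelegate() : new LegacyDelegate();
    }

    public void poll(Duration timeout) {
        delegate.poll(timeout);
    }

    public void close() {
        delegate.close();
    }
}
{code}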



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15284) Implement ConsumerGroupProtocolVersionResolver to determine consumer group protocol

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15284:
--
Labels: consumer-threading-refactor kip-848-client-support  (was: 
consumer-threading-refactor kip-848 kip-848-client-support)

> Implement ConsumerGroupProtocolVersionResolver to determine consumer group 
> protocol
> ---
>
> Key: KAFKA-15284
> URL: https://issues.apache.org/jira/browse/KAFKA-15284
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848-client-support
>
> At client initialization, we need to determine which of the 
> {{ConsumerDelegate}} implementations to use:
>  # {{LegacyKafkaConsumerDelegate}}
>  # {{AsyncKafkaConsumerDelegate}}
> There are conditions defined by KIP-848 that determine client eligibility to 
> use the new protocol. This will be modeled by the—deep 
> breath—{{{}ConsumerGroupProtocolVersionResolver{}}}.
> Known tasks:
>  * Determine at what point in the {{Consumer}} initialization the network 
> communication should happen
>  * Determine what RPCs to invoke in order to determine eligibility (API 
> versions, IBP version, etc.)
>  * Implement the network client lifecycle (startup, communication, shutdown, 
> etc.)
>  * Determine the fallback path in case the client is not eligible to use the 
> protocol



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15550) OffsetsForTimes validation for negative timestamps in new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15550:
--
Labels: consumer-threading-refactor kip-848-client-support kip-848-preview  
(was: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-preview)

> OffsetsForTimes validation for negative timestamps in new consumer
> --
>
> Key: KAFKA-15550
> URL: https://issues.apache.org/jira/browse/KAFKA-15550
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-client-support, 
> kip-848-preview
>
> The OffsetsForTimes API call should fail with _IllegalArgumentException_ if 
> negative timestamps are provided as arguments. This will effectively exclude 
> earliest and latest offsets as target times, keeping the current behaviour of 
> the KafkaConsumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15534) Propagate client response time when timeout to the request handler

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15534:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-e2e kip-848-preview)

> Propagate client response time when timeout to the request handler
> --
>
> Key: KAFKA-15534
> URL: https://issues.apache.org/jira/browse/KAFKA-15534
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> Currently, we don't have a good way to propagate the response time to the 
> handler when a timeout is thrown.
> {code:java}
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms.")); 
> {code}
> The current request manager invokes a system call to retrieve the response 
> time, which is not ideal because it is already available in the network client.
> This is an example of the coordinator request manager:
> {code:java}
> unsentRequest.future().whenComplete((clientResponse, throwable) -> {
> long responseTimeMs = time.milliseconds();
> if (clientResponse != null) {
> FindCoordinatorResponse response = (FindCoordinatorResponse) 
> clientResponse.responseBody();
> onResponse(responseTimeMs, response);
> } else {
> onFailedResponse(responseTimeMs, throwable);
> }
> }); {code}
> But in the networkClientDelegate, we should utilize the currentTimeMs in 
> trySend to avoid calling time.milliseconds():
> {code:java}
> private void trySend(final long currentTimeMs) {
> ...
> unsent.handler.onFailure(new TimeoutException(
> "Failed to send request after " + unsent.timer.timeoutMs() + " ms."));
> continue;
> }
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15277) Design & implement support for internal Consumer delegates

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15277:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: consumer-threading-refactor kip-848 
kip-848-e2e kip-848-preview)

> Design & implement support for internal Consumer delegates
> --
>
> Key: KAFKA-15277
> URL: https://issues.apache.org/jira/browse/KAFKA-15277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Blocker
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
> Fix For: 3.7.0
>
>
> The consumer refactoring project introduced another {{Consumer}} 
> implementation, creating two different, coexisting implementations of the 
> {{Consumer}} interface:
>  * {{KafkaConsumer}} (AKA "existing", "legacy" consumer)
>  * {{PrototypeAsyncConsumer}} (AKA "new", "refactored" consumer)
> The goal of this task is to refactor the code via the delegation pattern so 
> that we can keep a top-level {{KafkaConsumer}} but then delegate to another 
> implementation under the covers. There will be two delegates at first:
>  * {{LegacyKafkaConsumer}}
>  * {{AsyncKafkaConsumer}}
> {{LegacyKafkaConsumer}} will essentially be a renamed {{{}KafkaConsumer{}}}. That 
> implementation handles the existing group protocol. {{AsyncKafkaConsumer}} is 
> renamed from {{PrototypeAsyncConsumer}} and will implement the new consumer 
> group protocol from KIP-848. Both of those implementations will live in the 
> "internals" sub-package to discourage their use.
> This task is part of the work to implement support for the new KIP-848 
> consumer group protocol.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15986) New consumer group protocol integration test failures

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15986:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support  (was: 
CTR consumer-threading-refactor kip-848 kip-848-client-support)

> New consumer group protocol integration test failures
> -
>
> Key: KAFKA-15986
> URL: https://issues.apache.org/jira/browse/KAFKA-15986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support
> Fix For: 3.7.0
>
>
> A recent change in `AsyncKafkaConsumer.updateFetchPositions` has made 
> fetching fail without returning records in some situations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15991) Flaky new consumer test testGroupIdNotNullAndValid

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15991:
--
Labels: consumer-threading-refactor flaky-test kip-848 
kip-848-client-support  (was: consumer-threading-refactor flaky-test)

> Flaky new consumer test testGroupIdNotNullAndValid
> --
>
> Key: KAFKA-15991
> URL: https://issues.apache.org/jira/browse/KAFKA-15991
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer, unit tests
>Reporter: Lianet Magrans
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: consumer-threading-refactor, flaky-test, kip-848, 
> kip-848-client-support
>
> Fails locally when running it in a loop with its latest changes from 
> [https://github.com/apache/kafka/commit/6df192b6cb1397a6e6173835bbbd8a3acb7e3988.]
>  It failed the build, so it was temporarily disabled.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16004) Review new consumer inflight offset commit logic

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-16004:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support  (was: 
kip-848-client-support)

> Review new consumer inflight offset commit logic
> 
>
> Key: KAFKA-16004
> URL: https://issues.apache.org/jira/browse/KAFKA-16004
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support
>
> New consumer logic for committing offsets handles inflight requests, to 
> validate that no commit requests are sent if a previous one hasn't received a 
> response. Review how that logic is currently applied to both, sync and async 
> commits and validate against the legacy coordinator, who seems to apply it 
> only for async commits.
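A minimal sketch of the inflight guard described above, assuming the commit path exposes a future completed when the response arrives; this is illustrative, not the actual request-manager code:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Hypothetical guard: a new commit request is only sent once the previous one has completed.
public class InflightCommitGuard {

    private CompletableFuture<Void> inflight = CompletableFuture.completedFuture(null);

    // `sender` issues the request and returns a future completed when the response arrives.
    public synchronized CompletableFuture<Void> maybeSendCommit(Supplier<CompletableFuture<Void>> sender) {
        if (!inflight.isDone()) {
            return inflight; // a previous commit is still outstanding; do not send another
        }
        inflight = sender.get();
        return inflight;
    }
}
{code}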



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15991) Flaky new consumer test testGroupIdNotNullAndValid

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15991:
--
Labels: consumer-threading-refactor flaky-test  (was: flaky-test)

> Flaky new consumer test testGroupIdNotNullAndValid
> --
>
> Key: KAFKA-15991
> URL: https://issues.apache.org/jira/browse/KAFKA-15991
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer, unit tests
>Reporter: Lianet Magrans
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: consumer-threading-refactor, flaky-test
>
> Fails locally when running it in a loop with its latest changes from 
> [https://github.com/apache/kafka/commit/6df192b6cb1397a6e6173835bbbd8a3acb7e3988.]
>  It failed the build, so it was temporarily disabled.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15842) Correct handling of KafkaConsumer.committed for new consumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15842:
--
Labels: consumer-threading-refactor kip-848 kip-848-client-support 
kip-848-e2e kip-848-preview  (was: kip-848-client-support kip-848-e2e 
kip-848-preview)

> Correct handling of KafkaConsumer.committed for new consumer
> 
>
> Key: KAFKA-15842
> URL: https://issues.apache.org/jira/browse/KAFKA-15842
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Minor
>  Labels: consumer-threading-refactor, kip-848, 
> kip-848-client-support, kip-848-e2e, kip-848-preview
>
> KafkaConsumer.committed throws a TimeoutException when there is no response. 
> The new consumer currently returns null. Change the new consumer to 
> behave like the old consumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15954) Review minimal effort approach on consumer last heartbeat on unsubscribe

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15954:
--
Labels: kip-848 kip-848-client-support  (was: kip-848-client-support)

> Review minimal effort approach on consumer last heartbeat on unsubscribe
> 
>
> Key: KAFKA-15954
> URL: https://issues.apache.org/jira/browse/KAFKA-15954
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support
>
> Currently the legacy and new consumers follow a minimal-effort approach when 
> sending a leave group (legacy) or last heartbeat request (new consumer). The 
> request is sent without waiting for or handling any response. This behaviour 
> applies when the consumer is being closed or when it unsubscribes.
> For the case when the consumer is being closed (which is a "terminal" 
> state), it makes sense to just follow a minimal-effort approach for 
> "properly" leaving the group. But for the case of unsubscribe, it might make 
> sense to put a little more effort into making sure that the last heartbeat 
> is sent and received by the broker. Note that unsubscribe could be a 
> temporary state, where the consumer might want to re-join the group at any 
> time. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15991) Flaky new consumer test testGroupIdNotNullAndValid

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15991:
--
Component/s: unit tests

> Flaky new consumer test testGroupIdNotNullAndValid
> --
>
> Key: KAFKA-15991
> URL: https://issues.apache.org/jira/browse/KAFKA-15991
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer, unit tests
>Reporter: Lianet Magrans
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: flaky-test
>
> Fails locally when running it in a loop with its latest changes from 
> [https://github.com/apache/kafka/commit/6df192b6cb1397a6e6173835bbbd8a3acb7e3988.]
>  It failed the build, so it was temporarily disabled.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15847) Allow to resolve client metadata for specific topics

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15847:
--
Labels:   (was: kip-848-client-support)

> Allow to resolve client metadata for specific topics
> 
>
> Key: KAFKA-15847
> URL: https://issues.apache.org/jira/browse/KAFKA-15847
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Major
>
> Currently, metadata updates requested through the Metadata object fetch 
> metadata for all topics. Consider allowing the partial updates that are 
> already expressed as an intention in the Metadata class but not fully 
> supported. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15847) Allow to resolve client metadata for specific topics

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15847:
--
Component/s: consumer

> Allow to resolve client metadata for specific topics
> 
>
> Key: KAFKA-15847
> URL: https://issues.apache.org/jira/browse/KAFKA-15847
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Major
>  Labels: kip-848-client-support
>
> Currently, metadata updates requested through the Metadata object fetch 
> metadata for all topics. Consider allowing the partial updates that are 
> already expressed as an intention in the Metadata class but not fully 
> supported. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15835) Group commit/callbacks triggering logic

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15835:
--
Labels: kip-848 kip-848-client-support kip-848-e2e kip-848-preview  (was: 
kip-848-client-support kip-848-e2e kip-848-preview)

> Group commit/callbacks triggering logic
> ---
>
> Key: KAFKA-15835
> URL: https://issues.apache.org/jira/browse/KAFKA-15835
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> The new consumer reconciliation logic triggers a commit request, revocation 
> callback and assignment callbacks sequentially to ensure that they are 
> executed in that order. This means that we could require multiple iterations 
> of the poll loop to complete reconciling an assignment. 
> We could consider triggering them all together, to be executed in the same 
> poll iteration, while still making sure that they are executed in the right 
> order. Note that the sequence sometimes should not block on failures (e.g. if 
> the commit fails, revocation proceeds anyway), and other times it does block 
> (if revocation callbacks fail, onPartitionsAssigned is not called).
> As part of this task, review the time boundaries for the commit request 
> issued when the assignment changes. It will be effectively time bounded by 
> the rebalance timeout enforced by the broker, so initial approach is to use 
> the same rebalance timeout as boundary on the client. 
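A hedged sketch of chaining the three steps in one poll iteration while preserving the stated ordering and failure semantics; the futures-based wiring is an assumption, not the actual implementation:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Hypothetical wiring: trigger commit, revocation callback and assignment callback together
// while keeping the order. A failed commit still proceeds to revocation; a failed revocation
// callback stops the assignment callback from running.
public class ReconciliationChainSketch {

    public static CompletableFuture<Void> reconcile(
            Supplier<CompletableFuture<Void>> commit,
            Supplier<CompletableFuture<Void>> revocationCallback,
            Supplier<CompletableFuture<Void>> assignmentCallback) {

        return commit.get()
            // Swallow any commit failure so revocation proceeds anyway.
            .handle((ok, commitError) -> (Void) null)
            .thenCompose(ignored -> revocationCallback.get())
            // Runs only if the revocation callback completed without error.
            .thenCompose(ignored -> assignmentCallback.get());
    }
}
{code}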



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15775) Implement listTopics() and partitionFor() for the AsyncKafkaConsumer

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15775:
--
Labels: kip-848 kip-848-client-support kip-848-e2e kip-848-preview  (was: 
kip-848-client-support kip-848-e2e kip-848-preview)

> Implement listTopics() and partitionFor() for the AsyncKafkaConsumer
> 
>
> Key: KAFKA-15775
> URL: https://issues.apache.org/jira/browse/KAFKA-15775
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> {code:java}
> @Override
> public List<PartitionInfo> partitionsFor(String topic) {
> return partitionsFor(topic, Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public List<PartitionInfo> partitionsFor(String topic, Duration timeout) {
> throw new KafkaException("method not implemented");
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics() {
> return listTopics(Duration.ofMillis(defaultApiTimeoutMs));
> }
> @Override
> public Map<String, List<PartitionInfo>> listTopics(Duration timeout) {
> throw new KafkaException("method not implemented");
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15840) Correct initialization of ConsumerGroupHeartbeat by client

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15840:
--
Labels: kip-848 kip-848-client-support kip-848-e2e kip-848-preview  (was: 
kip-848-client-support kip-848-e2e kip-848-preview)

> Correct initialization of ConsumerGroupHeartbeat by client
> --
>
> Key: KAFKA-15840
> URL: https://issues.apache.org/jira/browse/KAFKA-15840
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> The new consumer using the KIP-848 protocol currently leaves the 
> TopicPartitions set to null for the ConsumerGroupHeartbeat request, even when 
> the MemberEpoch is zero. This violates the KIP, which expects the list to be 
> empty (but not null).
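
A short sketch of the fix described above. The setter names follow the 
generated ConsumerGroupHeartbeatRequestData class from the KIP-848 schema; 
treat the exact accessors as an assumption rather than the actual patch, and 
note this is a fragment, not a complete class:

{code:java}
// Sketch only: on the joining heartbeat (member epoch 0) send an empty
// owned-partitions list instead of leaving the field null.
private ConsumerGroupHeartbeatRequestData buildHeartbeatData(
        String groupId,
        String memberId,
        int memberEpoch,
        List<ConsumerGroupHeartbeatRequestData.TopicPartitions> ownedPartitions) {

    ConsumerGroupHeartbeatRequestData data = new ConsumerGroupHeartbeatRequestData()
            .setGroupId(groupId)
            .setMemberId(memberId)
            .setMemberEpoch(memberEpoch);

    // KIP-848 expects an empty (non-null) list when joining with epoch 0.
    data.setTopicPartitions(memberEpoch == 0 ? Collections.emptyList() : ownedPartitions);
    return data;
}
{code}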



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15967) Fix revocation in reconciliation logic

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15967:
--
Component/s: clients
 consumer
 Labels: kip-848 kip-848-client-support  (was: )

> Fix revocation in reconciliation logic
> -
>
> Key: KAFKA-15967
> URL: https://issues.apache.org/jira/browse/KAFKA-15967
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lucas Brutschy
>Priority: Major
>  Labels: kip-848, kip-848-client-support
>
> There appears to be a problem in the reconciliation logic.
> We get 6 partitions from an HB and add them to {{assignmentReadyToReconcile}}. 
> On the next HB we get only 4 partitions (2 are revoked) and also add them to 
> {{assignmentReadyToReconcile}}, but the 2 partitions that were supposed to be 
> removed from the assignment are never removed, because they are still in 
> {{assignmentReadyToReconcile}}.
> This was discovered during integration testing of 
> [https://github.com/apache/kafka/pull/14878] - part of the test 
> testRemoteAssignorRange was disabled and should be re-enabled once this is 
> fixed.
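
An illustrative sketch of the fix direction: treat each heartbeat's assignment 
as the full target and derive revocations from it, rather than only 
accumulating partitions into a ready-to-reconcile set that is never pruned. 
The names below are made up for the example:

{code:java}
import java.util.HashSet;
import java.util.Set;

public class RevocationSketch {

    public static void main(String[] args) {
        // Currently owned partitions (after the first HB assigned 6).
        Set<String> currentAssignment =
            new HashSet<>(Set.of("t-0", "t-1", "t-2", "t-3", "t-4", "t-5"));
        // Target assignment from the latest HB (2 partitions revoked).
        Set<String> targetFromHeartbeat = Set.of("t-0", "t-1", "t-2", "t-3");

        // Revoked = currently owned but absent from the latest target.
        Set<String> revoked = new HashSet<>(currentAssignment);
        revoked.removeAll(targetFromHeartbeat);

        // Added = in the latest target but not currently owned.
        Set<String> added = new HashSet<>(targetFromHeartbeat);
        added.removeAll(currentAssignment);

        System.out.println("revoked=" + revoked + " added=" + added);
    }
}
{code}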



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15953) Refactor polling delays

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15953:
--
Component/s: consumer
 Labels: kip-848 kip-848-client-support  (was: )

> Refactor polling delays
> ---
>
> Key: KAFKA-15953
> URL: https://issues.apache.org/jira/browse/KAFKA-15953
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: kip-848, kip-848-client-support
> Fix For: 3.7.0
>
>
> This is a follow-on task for 
> https://issues.apache.org/jira/browse/KAFKA-15890. The idea is to reduce the 
> interaction between the application thread and the request managers that was 
> introduced in that earlier JIRA's patch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15846) Review consumer leave group request best effort

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15846:
--
Labels: kip-848 kip-848-client-support  (was: kip-848-client-support)

> Review consumer leave group request best effort
> ---
>
> Key: KAFKA-15846
> URL: https://issues.apache.org/jira/browse/KAFKA-15846
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support
>
> The new consumer sends out a leave group request with a best-effort approach. 
> It transitions to LEAVING to indicate to the HB manager that the request must 
> be sent, but it does not do any response handling or retrying (note that the 
> response is still handled like any other HB response). After the first HB 
> manager poll iteration while on LEAVING, the consumer transitions into 
> UNSUBSCRIBED (whether or not the request was actually sent out, e.g. because 
> the coordinator is not known). Review whether this is a good enough effort to 
> send the request.
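
A minimal sketch of the behaviour described above; the enum, field and method 
names are illustrative, not the real MembershipManager API:

{code:java}
// Illustrative state machine for the best-effort leave: the member moves to
// LEAVING, the HB manager tries once to send the leave heartbeat on its next
// poll, and the member moves on regardless of whether the send happened.
public class LeaveGroupSketch {

    enum MemberState { STABLE, LEAVING, UNSUBSCRIBED }

    private MemberState state = MemberState.STABLE;
    private boolean coordinatorKnown = false; // if false, the leave HB cannot be sent

    void unsubscribe() {
        state = MemberState.LEAVING; // signals the HB manager that a leave request is due
    }

    // One heartbeat-manager poll iteration.
    void pollHeartbeatManager() {
        if (state == MemberState.LEAVING) {
            if (coordinatorKnown) {
                sendLeaveHeartbeat(); // fire-and-forget: no retry if it fails
            }
            // Best effort: transition even if the request never went out.
            state = MemberState.UNSUBSCRIBED;
        }
    }

    private void sendLeaveHeartbeat() {
        // e.g. a heartbeat signalling that the member is leaving the group
    }
}
{code}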



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15839) Improve TopicIdPartition integration in consumer membershipManager

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15839:
--
Labels: kip-848 kip-848-client-support kip-848-e2e kip-848-preview  (was: 
kip-848-client-support kip-848-e2e kip-848-preview)

> Improve TopicIdPartition integration in consumer membershipManager
> --
>
> Key: KAFKA-15839
> URL: https://issues.apache.org/jira/browse/KAFKA-15839
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> TopicIdPartition is currently used in the reconciliation path. It could be 
> used more widely, falling back to plain topicPartitions only where necessary 
> for the callbacks and for interaction with the subscription state, which does 
> not fully support topic IDs yet.
> Ensure that we properly handle topic re-creation (same name, different topic 
> IDs) in the reconciliation process (assignment cache, same-assignment 
> comparison, etc.).
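
A small sketch of why topic-ID-based comparison matters for re-creation: a 
re-created topic keeps its name but gets a new ID, so an ID-based check treats 
it as a different topic where a name-based check would not. The 
TopicIdPartition record below is a simplified stand-in for Kafka's class:

{code:java}
import java.util.Objects;
import java.util.UUID;

public class TopicIdComparisonSketch {

    // Simplified stand-in for org.apache.kafka.common.TopicIdPartition.
    record TopicIdPartition(UUID topicId, String topic, int partition) { }

    public static void main(String[] args) {
        UUID oldId = UUID.randomUUID();
        UUID newId = UUID.randomUUID(); // topic deleted and re-created under the same name

        TopicIdPartition cached = new TopicIdPartition(oldId, "orders", 0);
        TopicIdPartition fromHeartbeat = new TopicIdPartition(newId, "orders", 0);

        // Name-based comparison wrongly concludes "same assignment".
        boolean sameByName = cached.topic().equals(fromHeartbeat.topic())
                && cached.partition() == fromHeartbeat.partition();

        // ID-based comparison detects the re-creation.
        boolean sameById = Objects.equals(cached.topicId(), fromHeartbeat.topicId())
                && cached.partition() == fromHeartbeat.partition();

        System.out.println("sameByName=" + sameByName + ", sameById=" + sameById);
    }
}
{code}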



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15679) Client support for new consumer configs

2023-12-18 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated KAFKA-15679:
--
Labels: kip-848 kip-848-client-support kip-848-e2e kip-848-preview  (was: 
kip-848-client-support kip-848-e2e kip-848-preview)

> Client support for new consumer configs
> ---
>
> Key: KAFKA-15679
> URL: https://issues.apache.org/jira/browse/KAFKA-15679
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Philip Nee
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
>
> The new consumer should support the new configs introduced by KIP-848:
> |group.protocol|enum|generic|Indicates which rebalance protocol the consumer 
> should use: generic (the classic protocol) or consumer (the new protocol).|
> |group.remote.assignor|string|null|The server-side assignor to use. It cannot 
> be used in conjunction with group.local.assignor. {{null}} means that the 
> choice of the assignor is left to the group coordinator.|
> KIP-848 also introduces a third property for client-side (local) assignors, 
> but that will be introduced later on. 
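
A usage example of how a client might opt into the new protocol once these 
configs are supported. The config keys come from KIP-848; the "uniform" 
assignor name is only an illustration of one possible server-side assignor 
choice, not a recommendation:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NewProtocolConsumerExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // New KIP-848 configs: opt into the consumer protocol and (optionally)
        // name a server-side assignor; leaving group.remote.assignor unset lets
        // the group coordinator choose.
        props.put("group.protocol", "consumer");
        props.put("group.remote.assignor", "uniform"); // illustrative assignor name

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe/poll as usual
        }
    }
}
{code}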



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

