kirktrue commented on code in PR #19914:
URL: https://github.com/apache/kafka/pull/19914#discussion_r2206031972


##########
clients/clients-integration-tests/src/test/java/org/apache/kafka/clients/consumer/PlaintextConsumerCommitTest.java:
##########
@@ -452,6 +452,34 @@ private void testPositionAndCommit(GroupProtocol groupProtocol) throws Interrupt
         }
     }
 
+    /**
+     * This tests closing the consumer while a commit request has already been sent.
+     * During close, the consumer can no longer find the coordinator.
+     */
+    @ClusterTest
+    public void testCommitAsyncFailsWhenCoordinatorUnavailableDuringClose() throws InterruptedException {
+        try (Producer<byte[], byte[]> producer = cluster.producer();
+             var consumer = createConsumer(GroupProtocol.CONSUMER, false)
+        ) {
+            sendRecords(producer, tp, 3, System.currentTimeMillis());
+            consumer.assign(List.of(tp));
+
+            var callback = new CountConsumerCommitCallback();
+
+            // Shut down the coordinator before committing so that the commit will fail to find the coordinator.
+            cluster.brokerIds().forEach(cluster::shutdownBroker);
+
+            consumer.poll(Duration.ofMillis(500));
+            consumer.commitAsync(Map.of(tp1, new OffsetAndMetadata(1L)), callback);
+            consumer.close(CloseOptions.timeout(Duration.ofMillis(500)));

Review Comment:
   Is it possible for `close()` to take more than 500 milliseconds? Should we time the method call and add an assert to ensure it takes no more than, say, 1000 milliseconds?

##########
clients/clients-integration-tests/src/test/java/org/apache/kafka/clients/consumer/PlaintextConsumerCommitTest.java:
##########
@@ -452,6 +452,34 @@ private void testPositionAndCommit(GroupProtocol groupProtocol) throws Interrupt
         }
     }
 
+    /**
+     * This tests closing the consumer while a commit request has already been sent.
+     * During close, the consumer can no longer find the coordinator.
+     */
+    @ClusterTest
+    public void testCommitAsyncFailsWhenCoordinatorUnavailableDuringClose() throws InterruptedException {
+        try (Producer<byte[], byte[]> producer = cluster.producer();
+             var consumer = createConsumer(GroupProtocol.CONSUMER, false)
+        ) {
+            sendRecords(producer, tp, 3, System.currentTimeMillis());
+            consumer.assign(List.of(tp));
+
+            var callback = new CountConsumerCommitCallback();
+
+            // Shut down the coordinator before committing so that the commit will fail to find the coordinator.
+            cluster.brokerIds().forEach(cluster::shutdownBroker);

Review Comment:
   I wonder if we need to have some sort of `waitFor()` check after this to ensure the brokers are down.
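
To make the first suggestion concrete, a timing assertion around `close()` could look roughly like the sketch below. This is only an illustration of the reviewer's idea, not code from the PR: the `assertCompletesWithin` helper is hypothetical, and the 1000 ms ceiling is simply the value floated in the comment.

    import java.time.Duration;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical helper: runs an action and asserts it returns within the given bound.
    final class TimedAssertions {
        static void assertCompletesWithin(Duration bound, Runnable action) {
            long startMs = System.currentTimeMillis();
            action.run();
            long elapsedMs = System.currentTimeMillis() - startMs;
            assertTrue(elapsedMs <= bound.toMillis(),
                "expected completion within " + bound.toMillis() + " ms but took " + elapsedMs + " ms");
        }
    }

In the test this would wrap the existing call, e.g. `assertCompletesWithin(Duration.ofMillis(1000), () -> consumer.close(CloseOptions.timeout(Duration.ofMillis(500))));`.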
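
For the second suggestion, Kafka's test utilities provide `org.apache.kafka.test.TestUtils.waitForCondition`, which could sit right after the shutdown loop. The sketch below is an assumption about how that might look; the `brokersAreDown(cluster)` predicate is hypothetical and would need to be backed by whatever broker-liveness check `ClusterInstance` actually exposes, and the 15-second bound is an arbitrary choice for the sketch.

    // Rough sketch (not from the PR): wait until the brokers are really down before polling/committing.
    cluster.brokerIds().forEach(cluster::shutdownBroker);
    org.apache.kafka.test.TestUtils.waitForCondition(
        () -> brokersAreDown(cluster),   // hypothetical predicate; swap in a real broker-liveness check
        15_000L,                         // max wait in milliseconds
        "brokers were still reachable after shutdownBroker()");

`waitForCondition` throws `InterruptedException`, which the test method already declares, so no further signature change would be needed.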
########## clients/clients-integration-tests/src/test/java/org/apache/kafka/clients/consumer/PlaintextConsumerCommitTest.java: ########## @@ -452,6 +452,34 @@ private void testPositionAndCommit(GroupProtocol groupProtocol) throws Interrupt } } + /** + * This is testing when closing the consumer but commit request has already been sent. + * During the closing, the consumer won't find the coordinator anymore. + */ + @ClusterTest + public void testCommitAsyncFailsWhenCoordinatorUnavailableDuringClose() throws InterruptedException { + try (Producer<byte[], byte[]> producer = cluster.producer(); + var consumer = createConsumer(GroupProtocol.CONSUMER, false) + ) { + sendRecords(producer, tp, 3, System.currentTimeMillis()); + consumer.assign(List.of(tp)); + + var callback = new CountConsumerCommitCallback(); + + // Close the coordinator before committing because otherwise the commit will fail to find the coordinator. + cluster.brokerIds().forEach(cluster::shutdownBroker); + + consumer.poll(Duration.ofMillis(500)); + consumer.commitAsync(Map.of(tp1, new OffsetAndMetadata(1L)), callback); + consumer.close(CloseOptions.timeout(Duration.ofMillis(500))); Review Comment: Is it possible for `close()` to take more than 500 milliseconds? Should we time the method call and add an assert to ensure it's not more than, say, 1000 milliseconds? ########## clients/clients-integration-tests/src/test/java/org/apache/kafka/clients/consumer/PlaintextConsumerCommitTest.java: ########## @@ -452,6 +452,34 @@ private void testPositionAndCommit(GroupProtocol groupProtocol) throws Interrupt } } + /** + * This is testing when closing the consumer but commit request has already been sent. + * During the closing, the consumer won't find the coordinator anymore. + */ + @ClusterTest + public void testCommitAsyncFailsWhenCoordinatorUnavailableDuringClose() throws InterruptedException { + try (Producer<byte[], byte[]> producer = cluster.producer(); + var consumer = createConsumer(GroupProtocol.CONSUMER, false) + ) { + sendRecords(producer, tp, 3, System.currentTimeMillis()); + consumer.assign(List.of(tp)); + + var callback = new CountConsumerCommitCallback(); + + // Close the coordinator before committing because otherwise the commit will fail to find the coordinator. + cluster.brokerIds().forEach(cluster::shutdownBroker); Review Comment: I wonder if we need to have some sort of `wiatFor()` check after this to ensure the brokers are down. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org