Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1817

2023-05-03 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #122

2023-05-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 437465 lines...]
[2023-05-04T02:49:46.372Z] > Task :connect:api:testJar
[2023-05-04T02:49:46.372Z] > Task :connect:api:testSrcJar
[2023-05-04T02:49:46.372Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2023-05-04T02:49:46.372Z] > Task :connect:api:publishToMavenLocal
[2023-05-04T02:49:47.315Z] 
[2023-05-04T02:49:47.315Z] > Task :clients:javadoc
[2023-05-04T02:49:47.315Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/OAuthBearerLoginCallbackHandler.java:151:
 warning - Tag @link: reference not found: 
[2023-05-04T02:49:49.956Z] 
[2023-05-04T02:49:49.956Z] > Task :streams:javadoc
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2023-05-04T02:49:49.956Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:109:
 warning - Tag @link: reference not found: this#getResult()
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureReason()
[2023-05-04T02:49:49.956Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2023-05-04T02:49:49.956Z] 
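[2023-05-04T02:49:49.956Z] Note: these warnings come from malformed {@link} targets in the javadoc; the usual fixes look like the following sketch (illustrative only, not the actual patches applied to Kafka):

    /**
     * Wrong: {@link this#getResult()}   -- "this" is not a valid javadoc reference;
     * Right: {@link #getResult()}       -- a member of the same class needs only '#'.
     *
     * Wrong: {@link org.apache.kafka.streams.StreamsBuilder()}  -- missing '#';
     * Right: {@link org.apache.kafka.streams.StreamsBuilder#StreamsBuilder()}
     *
     * "reference not found" (e.g. DefaultPartitioner) usually means the target
     * class is not on the javadoc classpath or was removed; the fix is to fully
     * qualify the reference or drop the link.
     */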

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1816

2023-05-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 471161 lines...]
[2023-05-04T01:17:09.428Z] > Task :storage:api:testClasses UP-TO-DATE
[2023-05-04T01:17:09.428Z] > Task :raft:compileTestJava UP-TO-DATE
[2023-05-04T01:17:09.428Z] > Task :raft:testClasses UP-TO-DATE
[2023-05-04T01:17:09.428Z] > Task :connect:json:testJar
[2023-05-04T01:17:09.428Z] > Task :connect:json:testSrcJar
[2023-05-04T01:17:09.428Z] > Task :group-coordinator:compileTestJava UP-TO-DATE
[2023-05-04T01:17:09.428Z] > Task :group-coordinator:testClasses UP-TO-DATE
[2023-05-04T01:17:10.377Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2023-05-04T01:17:10.377Z] > Task :metadata:compileTestJava UP-TO-DATE
[2023-05-04T01:17:10.377Z] > Task :metadata:testClasses UP-TO-DATE
[2023-05-04T01:17:10.377Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2023-05-04T01:17:12.328Z] 
[2023-05-04T01:17:12.328Z] > Task :connect:api:javadoc
[2023-05-04T01:17:12.328Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/connect/api/src/main/java/org/apache/kafka/connect/source/SourceRecord.java:44:
 warning - Tag @link: reference not found: org.apache.kafka.connect.data
[2023-05-04T01:17:14.105Z] 1 warning
[2023-05-04T01:17:15.055Z] 
[2023-05-04T01:17:15.055Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task :connect:api:jar UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2023-05-04T01:17:15.055Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task :connect:json:jar UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2023-05-04T01:17:15.055Z] > Task :connect:api:javadocJar
[2023-05-04T01:17:15.055Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task :connect:api:testClasses UP-TO-DATE
[2023-05-04T01:17:15.055Z] > Task :connect:api:testJar
[2023-05-04T01:17:15.055Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2023-05-04T01:17:15.055Z] > Task :connect:json:publishToMavenLocal
[2023-05-04T01:17:15.055Z] > Task :connect:api:testSrcJar
[2023-05-04T01:17:15.055Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2023-05-04T01:17:15.055Z] > Task :connect:api:publishToMavenLocal
[2023-05-04T01:17:18.679Z] > Task :streams:javadoc
[2023-05-04T01:17:18.679Z] > Task :streams:javadocJar
[2023-05-04T01:17:18.679Z] > Task :streams:processTestResources UP-TO-DATE
[2023-05-04T01:17:19.631Z] 
[2023-05-04T01:17:19.631Z] > Task :clients:javadoc
[2023-05-04T01:17:19.631Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API
[2023-05-04T01:17:19.631Z] 
[2023-05-04T01:17:19.631Z]  This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
[2023-05-04T01:17:19.631Z]  The type field in both files must match and must 
not change. The type field
[2023-05-04T01:17:19.631Z]  is used both for passing ScramCredentialUpsertion 
and for the internal
[2023-05-04T01:17:19.631Z]  UserScramCredentialRecord. Do not change the type 
field."
[2023-05-04T01:17:20.582Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-05-04T01:17:22.361Z] 2 warnings
[2023-05-04T01:17:23.312Z] 
[2023-05-04T01:17:23.312Z] > Task :clients:javadocJar
[2023-05-04T01:17:24.263Z] > Task :clients:srcJar
[2023-05-04T01:17:25.216Z] > Task :clients:testJar
[2023-05-04T01:17:25.216Z] > Task :clients:testSrcJar
[2023-05-04T01:17:25.216Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2023-05-04T01:17:25.216Z] > Task :clients:publishToMavenLocal
[2023-05-04T01:17:39.274Z] > Task :core:compileScala
[2023-05-04T01:19:28.472Z] > Task :core:classes
[2023-05-04T01:19:28.472Z] > Task :core:compileTestJava NO-SOURCE
[2023-05-04T01:19:50.643Z] > Task :core:compileTestScala
[2023-05-04T01:21:23.855Z] > Task :core:testClasses
[2023-05-04T01:21:23.855Z] > Task :streams:compileTestJava UP-TO-DATE
[2023-05-04T01:21:23.855Z] > Task :streams:testClasses UP-TO-DATE
[2023-05-04T01:21:23.855Z] > Task :streams:testJar
[2023-05-04T01:21:23.855Z] > Task :streams:testSrcJar
[2023-05-04T01:21:23.855Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2023-05-04T01:21:23.855Z] > Task :streams:publishToMavenLocal
[2023-05-04T01:21:23.855Z] 
[2023-05-04T01:21:23.855Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 9.0.

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.4 #121

2023-05-03 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1815

2023-05-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 567523 lines...]
[2023-05-03T20:51:06.276Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > 
shouldSetChangelogTopicProperties PASSED
[2023-05-03T20:51:06.276Z] 
[2023-05-03T20:51:06.276Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > shouldRestore 
STARTED
[2023-05-03T20:52:00.204Z] 
[2023-05-03T20:52:00.204Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > shouldRestore PASSED
[2023-05-03T20:52:00.204Z] 
[2023-05-03T20:52:00.204Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > 
shouldPutGetAndDelete STARTED
[2023-05-03T20:52:00.204Z] 
[2023-05-03T20:52:00.204Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > 
shouldPutGetAndDelete PASSED
[2023-05-03T20:52:00.204Z] 
[2023-05-03T20:52:00.204Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned STARTED
[2023-05-03T20:52:44.525Z] 
[2023-05-03T20:52:44.525Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned PASSED
[2023-05-03T20:52:46.581Z] 
[2023-05-03T20:52:46.581Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted STARTED
[2023-05-03T20:52:53.088Z] 
[2023-05-03T20:52:53.088Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted PASSED
[2023-05-03T20:52:54.048Z] OpenJDK 64-Bit Server VM warning: Sharing is only 
supported for boot loader classes because bootstrap classpath has been appended
[2023-05-03T20:52:54.990Z] 
[2023-05-03T20:52:54.990Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers STARTED
[2023-05-03T20:53:18.794Z] 
[2023-05-03T20:53:18.794Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargeNumConsumers PASSED
[2023-05-03T20:53:18.794Z] 
[2023-05-03T20:53:18.794Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED
[2023-05-03T20:53:32.271Z] 
[2023-05-03T20:53:32.271Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED
[2023-05-03T20:53:32.271Z] 
[2023-05-03T20:53:32.271Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED
[2023-05-03T20:53:32.271Z] 
[2023-05-03T20:53:32.271Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED
[2023-05-03T20:53:32.271Z] 
[2023-05-03T20:53:32.271Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys STARTED
[2023-05-03T20:53:40.611Z] 
[2023-05-03T20:53:40.611Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyStandbys PASSED
[2023-05-03T20:53:40.611Z] 
[2023-05-03T20:53:40.611Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2023-05-03T20:53:40.611Z] 
[2023-05-03T20:53:40.611Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED
[2023-05-03T20:53:40.611Z] 
[2023-05-03T20:53:40.611Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2023-05-03T20:53:41.563Z] 
[2023-05-03T20:53:41.563Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2023-05-03T20:53:41.563Z] 
[2023-05-03T20:53:41.563Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 178 > StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2023-05-03T20:54:12.195Z] 
[2023-05-03T20:54:12.195Z] Gradle Test Run 

[jira] [Created] (KAFKA-14966) Extract reusable common logic from OffsetFetcher

2023-05-03 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-14966:
--

 Summary: Extract reusable common logic from OffsetFetcher
 Key: KAFKA-14966
 URL: https://issues.apache.org/jira/browse/KAFKA-14966
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Lianet Magrans


The OffsetFetcher is used internally by the KafkaConsumer to fetch offsets 
and to validate and reset positions.

For the new consumer based on a refactored threading model, similar 
functionality will be needed by the ListOffsetsRequestManager component. 

This task aims at identifying and extracting the OffsetFetcher functionality 
that can be reused by the new consumer implementation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.4 #120

2023-05-03 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14965) Introduce ListOffsetsRequestManager to integrate ListOffsetsRequests into new consumer threading refactor

2023-05-03 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-14965:
--

 Summary: Introduce ListOffsetsRequestManager to integrate 
ListOffsetsRequests into new consumer threading refactor
 Key: KAFKA-14965
 URL: https://issues.apache.org/jira/browse/KAFKA-14965
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Reporter: Lianet Magrans


This task introduces new functionality for handling ListOffsetsRequests for the 
new consumer implementation, as part of the ongoing work on the consumer 
threading model refactor.

It introduces a new class named {{ListOffsetsRequestManager}}, responsible for 
handling ListOffsets requests performed by the consumer to expose functionality 
like beginningOffsets, endOffsets and offsetsForTimes.

The {{OffsetFetcher}} class is used internally by the {{KafkaConsumer}} to list 
offsets, so this task will be based on a refactored {{OffsetFetcher}}, reusing 
the fetching logic as much as possible.
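As a rough illustration only (the class name comes from this ticket; every 
method shape below is an assumption, not the final API), the manager might 
expose asynchronous variants of the offset-lookup operations:
{code:java}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

// Hypothetical sketch; the real implementation lives in the consumer
// internals and is shaped by the threading-refactor design, not this outline.
public class ListOffsetsRequestManagerSketch {

    // Would back Consumer#beginningOffsets: earliest offset per partition.
    public CompletableFuture<Map<TopicPartition, Long>> beginningOffsets(
            Collection<TopicPartition> partitions) {
        return new CompletableFuture<>(); // request building and network IO omitted
    }

    // Would back Consumer#offsetsForTimes: first offset with timestamp >= target.
    public CompletableFuture<Map<TopicPartition, OffsetAndTimestamp>> offsetsForTimes(
            Map<TopicPartition, Long> timestampsToSearch) {
        return new CompletableFuture<>(); // request building and network IO omitted
    }
}
{code}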



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14964) ClientQuotaMetadataManager should not suppress exceptions

2023-05-03 Thread David Arthur (Jira)
David Arthur created KAFKA-14964:


 Summary: ClientQuotaMetadataManager should not suppress exceptions
 Key: KAFKA-14964
 URL: https://issues.apache.org/jira/browse/KAFKA-14964
 Project: Kafka
  Issue Type: Bug
Reporter: David Arthur


As MetadataLoader calls each MetadataPublisher upon receiving new records from 
the controller, it surrounds the call with a try-catch block in order to pass 
exceptions to a FaultHandler. The FaultHandler used by MetadataLoader is 
essential for us to learn about metadata errors on the broker since it 
increments the metadata loader error JMX metric.

ClientQuotaMetadataManager is in the update path for ClientQuota metadata 
updates and is currently catching exceptions itself. This means validation 
errors (such as invalid quotas) will never reach the FaultHandler, and the JMX 
metric will not be incremented.
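
A minimal sketch of the pattern being described (all names simplified; this is 
not the actual MetadataLoader code):
{code:java}
// Simplified sketch, not the actual Kafka classes.
interface FaultHandler {
    // Among other things, increments the metadata loader error JMX metric.
    void handleFault(String message, Throwable cause);
}

interface MetadataPublisher {
    void publish(Object delta);
}

class MetadataLoaderSketch {
    private final FaultHandler faultHandler;

    MetadataLoaderSketch(FaultHandler faultHandler) {
        this.faultHandler = faultHandler;
    }

    void notifyPublisher(MetadataPublisher publisher, Object delta) {
        try {
            publisher.publish(delta);
        } catch (Throwable t) {
            // This handler only fires if the publisher lets exceptions
            // propagate; a publisher that catches internally (as
            // ClientQuotaMetadataManager does) hides the fault.
            faultHandler.handleFault("error publishing metadata update", t);
        }
    }
}
{code}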



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka-site] bbejeck merged pull request #509: Update podcast logo

2023-05-03 Thread via GitHub


bbejeck merged PR #509:
URL: https://github.com/apache/kafka-site/pull/509


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14963) Incorrect partition count metrics for kraft controllers

2023-05-03 Thread Jira
José Armando García Sancio created KAFKA-14963:
--

 Summary: Incorrect partition count metrics for kraft controllers
 Key: KAFKA-14963
 URL: https://issues.apache.org/jira/browse/KAFKA-14963
 Project: Kafka
  Issue Type: Bug
  Components: controller, kraft
Affects Versions: 3.4.0
Reporter: José Armando García Sancio
Assignee: José Armando García Sancio
 Fix For: 3.4.1


It is possible for the KRaft controller to report more partitions than actually 
exist in the cluster, as demonstrated by the following test, which fails 
against 3.4.0:
{code:java}
@Test
public void testPartitionCountDecreased() {
    ControllerMetrics metrics = new MockControllerMetrics();
    ControllerMetricsManager manager = new ControllerMetricsManager(metrics);
    Uuid createTopicId = Uuid.randomUuid();
    Uuid createPartitionTopicId = new Uuid(
        createTopicId.getMostSignificantBits(),
        createTopicId.getLeastSignificantBits()
    );
    Uuid removeTopicId = new Uuid(
        createTopicId.getMostSignificantBits(),
        createTopicId.getLeastSignificantBits()
    );
    manager.replay(topicRecord("test", createTopicId));
    manager.replay(partitionRecord(createPartitionTopicId, 0, 0, Arrays.asList(0, 1, 2)));
    manager.replay(partitionRecord(createPartitionTopicId, 1, 0, Arrays.asList(0, 1, 2)));
    manager.replay(removeTopicRecord(removeTopicId));
    assertEquals(0, metrics.globalPartitionCount());
}
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14016) Revoke more partitions than expected in Cooperative rebalance

2023-05-03 Thread Philip Nee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Nee resolved KAFKA-14016.

Fix Version/s: 3.5.0
   3.4.1
 Assignee: Philip Nee
   Resolution: Fixed

> Revoke more partitions than expected in Cooperative rebalance
> -
>
> Key: KAFKA-14016
> URL: https://issues.apache.org/jira/browse/KAFKA-14016
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.3.0
>Reporter: Shawn Wang
>Assignee: Philip Nee
>Priority: Major
>  Labels: new-rebalance-should-fix
> Fix For: 3.5.0, 3.4.1
>
> Attachments: CooperativeStickyAssignorBugReproduction.java
>
>
> In https://issues.apache.org/jira/browse/KAFKA-13419 we found that some 
> consumers didn't reset generation and state after sync group failed with a 
> REBALANCE_IN_PROGRESS error.
> So we fixed it by resetting the generationId (but not the memberId) when sync 
> group fails with REBALANCE_IN_PROGRESS.
> But this change missed part of the reset, so another change made in 
> https://issues.apache.org/jira/browse/KAFKA-13891 makes this work.
> After applying this change, we found that sometimes a consumer will revoke 
> almost 2/3 of its partitions with cooperative rebalancing enabled, because if 
> a consumer does a very quick re-join, other consumers will get 
> REBALANCE_IN_PROGRESS in syncGroup and revoke their partitions before 
> re-joining. Example:
>  # consumers A1-A10 (ten consumers) joined and synced the group successfully 
> with generation 1
>  # new consumer B1 joined and started a rebalance
>  # all consumers joined successfully, and then A1 needed to revoke a 
> partition to transfer it to B1
>  # A1 did a very quick syncGroup and re-join, because it had revoked a 
> partition
>  # A2-A10 didn't send syncGroup before A1 re-joined, so after they sent 
> syncGroup they got REBALANCE_IN_PROGRESS
>  # A2-A10 then revoked their partitions and re-joined
> So in this rebalance almost every partition was revoked, which largely 
> defeats the benefit of cooperative rebalancing.
> I think instead of "{*}resetStateAndRejoin{*} when 
> *RebalanceInProgressException* errors happen in {*}sync group{*}" we need 
> another way to fix it.
> Here is my proposal:
>  # revert the change in https://issues.apache.org/jira/browse/KAFKA-13891
>  # in the server-side Coordinator's handleSyncGroup, when the generationId 
> has been checked and the group state is PreparingRebalance, we can send the 
> assignment along with the error code REBALANCE_IN_PROGRESS (I think it's safe 
> since we verified the generation first)
>  # when the client gets the REBALANCE_IN_PROGRESS error, try to apply the 
> assignment first and then set rejoinNeeded = true to make it re-join 
> immediately



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13891) sync group failing with rebalanceInProgress error causes many rounds of rebalance in cooperative mode

2023-05-03 Thread Philip Nee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Nee resolved KAFKA-13891.

Fix Version/s: 3.5.0
   3.4.1
   (was: 3.6.0)
   Resolution: Fixed

> sync group failing with rebalanceInProgress error causes many rounds of 
> rebalance in cooperative mode
> ---
>
> Key: KAFKA-13891
> URL: https://issues.apache.org/jira/browse/KAFKA-13891
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.0
>Reporter: Shawn Wang
>Assignee: Philip Nee
>Priority: Major
> Fix For: 3.5.0, 3.4.1
>
>
> This issue was first found in 
> [KAFKA-13419|https://issues.apache.org/jira/browse/KAFKA-13419]
> But the previous PR forgot to reset the generation when sync group failed 
> with the rebalanceInProgress error. So the previous bug still exists and it 
> may cause the consumer to rebalance many rounds before finally stabilizing.
> Here's an example ({*}bold is added{*}):
>  # consumer A joined and synced the group successfully with generation 1 
> *(with ownedPartitions P1/P2)*
>  # a new rebalance started with generation 2; consumer A joined successfully 
> but, somehow, didn't send out its sync group immediately
>  # the other consumers completed sync group successfully in generation 2, 
> except consumer A
>  # after consumer A sent out its sync group, a new rebalance started with 
> generation 3, so consumer A got a REBALANCE_IN_PROGRESS error in the sync 
> group response
>  # on receiving REBALANCE_IN_PROGRESS, we re-join the group with generation 
> 3, but with the assignment (ownedPartitions) from generation 1
>  # so now we have out-of-date ownedPartitions sent, with unexpected results
>  # *after the generation-3 rebalance, consumer A got partitions P3/P4; the 
> ownedPartitions were ignored because of the old generation*
>  # *consumer A revoked P1/P2 and re-joined to start a new round of rebalance*
>  # *if some other consumer C fails to syncGroup before consumer A's 
> joinGroup, the same issue happens again, resulting in many rounds of 
> rebalance before stabilizing*
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14962) Whitespace in ACL configuration causes Kafka startup to fail

2023-05-03 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14962:


 Summary: Whitespace in ACL configuration causes Kafka startup to 
fail
 Key: KAFKA-14962
 URL: https://issues.apache.org/jira/browse/KAFKA-14962
 Project: Kafka
  Issue Type: Bug
Reporter: Divij Vaidya
Assignee: Divij Vaidya
 Fix For: 3.6.0


Kafka's startup can fail if there is trailing or leading whitespace in a 
configuration value. This fix makes Kafka more tolerant of cases where a user 
accidentally adds trailing or leading whitespace to an ACL configuration value.
{code:java}
ERROR [KafkaServer id=3] Fatal error during KafkaServer startup. Prepare to 
shutdown (kafka.server.KafkaServer)

java.lang.IllegalArgumentException: For input string: "true "

at scala.collection.StringOps$.toBooleanImpl$extension(StringOps.scala:943)

at 
kafka.security.authorizer.AclAuthorizer.$anonfun$configure$4(AclAuthorizer.scala:153)
 {code}
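
A minimal sketch of the kind of defensive parsing involved (illustrative only; 
the actual change is in the AclAuthorizer configuration path):
{code:java}
import java.util.Map;

// Illustrative sketch: trim whitespace before parsing a boolean config value,
// so "true " (with a trailing space) no longer fails startup.
final class BooleanConfigParsing {
    static boolean parseBoolean(Map<String, ?> configs, String key, boolean defaultValue) {
        Object raw = configs.get(key);
        if (raw == null) {
            return defaultValue;
        }
        String value = raw.toString().trim(); // tolerate leading/trailing whitespace
        if (value.equalsIgnoreCase("true")) {
            return true;
        }
        if (value.equalsIgnoreCase("false")) {
            return false;
        }
        throw new IllegalArgumentException("For input string: \"" + value + "\"");
    }
}
{code}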



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka-site] showuon commented on pull request #508: MINOR: add Luke's key

2023-05-03 Thread via GitHub


showuon commented on PR #508:
URL: https://github.com/apache/kafka-site/pull/508#issuecomment-1532902203

   @mimaison , please take a look. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] showuon opened a new pull request, #508: MINOR: add Luke's key

2023-05-03 Thread via GitHub


showuon opened a new pull request, #508:
URL: https://github.com/apache/kafka-site/pull/508

   add Luke's key for running release


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14961) DefaultBackgroundThreadTest.testStartupAndTearDown test is flaky

2023-05-03 Thread Manyanda Chitimbo (Jira)
Manyanda Chitimbo created KAFKA-14961:
-

 Summary: DefaultBackgroundThreadTest.testStartupAndTearDown test is flaky
 Key: KAFKA-14961
 URL: https://issues.apache.org/jira/browse/KAFKA-14961
 Project: Kafka
  Issue Type: Test
Reporter: Manyanda Chitimbo
Assignee: Manyanda Chitimbo


When running the test suite locally I noticed the following error
{code:java}
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
at 
app//org.apache.kafka.clients.consumer.internals.DefaultBackgroundThreadTest.testStartupAndTearDown(DefaultBackgroundThreadTest.java:95)
 {code}
which happened only once and I couldn't reproduce it again.

I further noticed some NPEs in the debug logs, in the form of
{code:java}
 ERROR The background thread failed due to unexpected error 
(org.apache.kafka.clients.consumer.internals.DefaultBackgroundThread:166)
java.lang.NullPointerException
    at 
org.apache.kafka.clients.consumer.internals.DefaultBackgroundThread.handlePollResult(DefaultBackgroundThread.java:200)
    at 
java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at 
java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at 
java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
    at 
java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1675)
    at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
    at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at 
java.base/java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:553)
    at 
org.apache.kafka.clients.consumer.internals.DefaultBackgroundThread.runOnce(DefaultBackgroundThread.java:187)
    at 
org.apache.kafka.clients.consumer.internals.DefaultBackgroundThread.run(DefaultBackgroundThread.java:159)
 {code}
which is due to missing stubs.
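
For context, a missing Mockito stub means an unstubbed call on a mock returns 
null (for non-collection return types), and the code under test then 
dereferences it. Roughly (names are stand-ins, not the actual test code):
{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.ArrayList;
import java.util.List;

class MissingStubSketch {

    static class PollResult {
        final List<String> unsentRequests = new ArrayList<>();
    }

    interface RequestManager {
        PollResult poll();
    }

    void example() {
        RequestManager manager = mock(RequestManager.class);
        // Without this stub, manager.poll() returns null and the background
        // thread NPEs when it reads the result, as in the stack trace above:
        when(manager.poll()).thenReturn(new PollResult());
    }
}
{code}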



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-892: Transactional Semantics for StateStores

2023-05-03 Thread Nick Telford
Hi Bruno,

Thanks for reviewing my proposal.

1.
The main reason I added it was because it was easy to do. If we see no
value in it, I can remove it.

2.
Global StateStores can have multiple partitions in their input topics
(which function as their changelogs), so they would have more than one
partition.

3.
That's a good point. At present, the only method it adds is
isolationLevel(), which is likely not necessary outside of StateStores.
It *does* provide slightly different guarantees in the documentation to
several of the methods (hence the overrides). I'm not sure if this is
enough to warrant a new interface though.
I think the question that remains is whether this interface makes it easier
to implement custom transactional StateStores than if we were to remove it?
Probably not.

4.
The main motivation for the Atomic Checkpointing is actually performance.
My team has been testing out an implementation of this KIP without it, and
we had problems with RocksDB doing *much* more compaction, due to the
significantly increased flush rate. It was enough of a problem that (for
the time being), we had to revert back to Kafka Streams proper.
I think the best way to solve this, as you say, is to keep the .checkpoint
files *in addition* to the offsets being stored within the store itself.
Essentially, when closing StateStores, we force a memtable flush, then call
getCommittedOffsets and write those out to the .checkpoint file. That would
ensure the metadata is available to the StreamsPartitionAssignor for all
closed stores.
If there's a crash (no clean close), then we won't be able to guarantee
which offsets were flushed to disk by RocksDB, so we'd need to open (init()),
read offsets, and then close() those stores. But since this is the
exception, and will only occur once (provided it doesn't crash every
time!), I think the performance impact here would be acceptable.
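
For concreteness, a rough sketch of that close path (getCommittedOffsets comes 
from the KIP discussion; everything else here is my shorthand, not proposed 
API):

    import java.io.IOException;
    import java.util.Map;

    import org.apache.kafka.common.TopicPartition;

    // Sketch only: persist committed offsets at clean close so the
    // StreamsPartitionAssignor can read them without opening the store.
    class CheckpointOnCloseSketch {

        interface TransactionalStore {
            void flush();                                     // forces the memtable flush
            Map<TopicPartition, Long> getCommittedOffsets();  // changelog partition -> offset
            void close();
        }

        interface CheckpointFile {
            void write(Map<TopicPartition, Long> offsets) throws IOException;
        }

        void closeCleanly(TransactionalStore store, CheckpointFile checkpoint)
                throws IOException {
            store.flush();
            checkpoint.write(store.getCommittedOffsets());
            store.close();
        }
    }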

Thanks for the feedback, please let me know if you have any more comments
or questions!

I'm currently working on rebasing against trunk. This involves adding
support for transactionality to VersionedStateStores. I will probably need
to revise my implementation for transactional "segmented" stores, both to
accommodate VersionedStateStore, and to clean up some other stuff.

Regards,
Nick


On Tue, 2 May 2023 at 13:45, Bruno Cadonna  wrote:

> Hi Nick,
>
> Thanks for the updates!
>
> I have a couple of questions/comments.
>
> 1.
> Why do you propose a configuration that involves max. bytes and max.
> records? I think we are mainly concerned about memory consumption because
> we want to limit the off-heap memory used. I cannot think of a case
> where one would want to set the max. number of records.
>
>
> 2.
> Why does
>
>   default void commit(final Map<TopicPartition, Long> changelogOffsets) {
>   flush();
>   }
>
> take a map of partitions to changelog offsets?
> The mapping between state stores to partitions is a 1:1 relationship.
> Passing in a single changelog offset should suffice.
>
>
> 3.
> Why do we need the Transaction interface? It should be possible to hide
> beginning and committing a transaction within the state store
> implementation, so that from outside the state store, it does not matter
> whether the state store is transactional or not. What would be the
> advantage of using the Transaction interface?
>
>
> 4.
> Regarding checkpointing offsets, I think we should keep the checkpoint
> file in any case for the reason you mentioned about rebalancing. Even if
> that would not be an issue, I would propose to move the change to offset
> management to a new KIP and to not add more complexity than needed to
> this one. I would not be too concerned about the consistency violation
> you mention. As far as I understand, with transactional state stores
> Streams would write the checkpoint file during every commit even under
> EOS. In the failure case you describe, Streams would restore the state
> stores from the offsets found in the checkpoint file written during the
> penultimate commit instead of during the last commit. Basically, Streams
> would overwrite the records written to the state store between the last
> two commits with the same records read from the changelogs. While I
> understand that this is wasteful, it is -- at the same time --
> acceptable and most importantly it does not break EOS.
>
> Best,
> Bruno
>
>
> On 27.04.23 12:34, Nick Telford wrote:
> > Hi everyone,
> >
> > I find myself (again) considering removing the offset management from
> > StateStores, and keeping the old checkpoint file system. The reason is
> that
> > the StreamPartitionAssignor directly reads checkpoint files in order to
> > determine which instance has the most up-to-date copy of the local state.
> > If we move offsets into the StateStore itself, then we will need to open,
> > initialize, read offsets and then close each StateStore (that is not
> > already assigned and open) for which we have *any* local state, on every
> > rebalance.
> >
> > Generally, I don't think 

An important fix waiting to be merged

2023-05-03 Thread Sandeep Biswas
Hi,
Could you please review the pull request (
https://github.com/apache/kafka/compare/trunk...MPeli:kafka:KAFKA-1194) and
merge it if everything looks alright.
Kafka on Windows has been plagued by this issue for years, which makes it
unsuitable for production deployments. We are a large MNC and need to
provide a 100% Windows-based solution to a few of our customers.
If this fix works, it could be a massive help to all Windows-based
customers who can use Kafka's capabilities. Please do the needful.

Thanks,
Sandeep


Re: Adding reviewers with Github actions

2023-05-03 Thread Viktor Somogyi-Vass
Yes, perhaps this can be used in the GitHub action too; I think this is a
very useful tool. Sadly I couldn't get to the GitHub action yet, but
hopefully I will get there soon.

On Fri, Apr 28, 2023 at 8:48 AM David Jacot 
wrote:

> Thanks, David. This is a nice addition!
>
> Coming back to the original proposal of using github actions, it may be
> possible to run David's script automatically. For instance, we could
> trigger an action which pulls the folks who have approved the PR and feed
> the script when a comment with `reviewers` is posted. Then the action would
> post a comment with the "Reviewers: ". This way, we could do
> everything from within the PR.
>
> Cheers,
> David
>
> On Thu, Apr 27, 2023 at 8:35 PM David Arthur
>  wrote:
>
> > I just merged the "reviewers" script I wrote a while ago:
> > https://github.com/apache/kafka/pull/11096
> >
> > It works by finding previous occurrences of "Reviewers: ...", so it only
> > works for people who have reviewed something before. I do suspect this is
> > largely the common case.
> >
> > E.g., searching for "Ismael" gives:
> >
> > Possible matches (in order of most recent):
> > [1] Ismael Juma ism...@juma.me.uk (1514)
> > [2] Ismael Juma ij...@apache.org (3)
> > [3] Ismael Juma mli...@juma.me.uk (4)
> > [4] Ismael Juma ism...@confluent.io (19)
> > [5] Ismael Juma git...@juma.me.uk (7)
> >
> > it shows them in order of most recent occurrence, along with the number of
> > occurrences. Now that it's merged, it should be easier for folks to try it
> > out.
> >
> > Cheers,
> > David
> >
> > On Thu, Apr 20, 2023 at 1:02 PM Justine Olshan
> > 
> > wrote:
> >
> > > I've tried the script, but it's not quite complete.
> > > I've had issues finding folks -- if they haven't reviewed in Kafka, we
> > > cannot find an email for them. I also had some issues with finding folks
> > > who had reviewed before.
> > >
> > > Right now, my strategy is to use GitHub to search previous commits for
> > > folks' emails, but that isn't the most optimal solution -- especially if
> > > the reviewer has no public email.
> > > I do think it is useful to have in the commit though, so if anyone has
> > > some ideas on how to improve, I'd be happy to hear.
> > >
> > > Justine
> > >
> > > On Wed, Apr 19, 2023 at 6:53 AM Ismael Juma  wrote:
> > >
> > > > It's a lot more convenient to have it in the commit than having to
> > > > follow links, etc.
> > > >
> > > > David Arthur also wrote a script to help with this step, I believe.
> > > >
> > > > Ismael
> > > >
> > > > On Tue, Apr 18, 2023, 9:29 AM Divij Vaidya 
> > > > wrote:
> > > >
> > > > > Do we even need a manual attribution for a reviewer in the commit
> > > > > message? GitHub automatically marks the folks as "reviewers" who have
> > > > > used the "review-changes" button on the top left corner and left
> > > > > feedback. GitHub also has searchability for such reviews done by a
> > > > > particular person using the following link:
> > > > >
> > > > > https://github.com/search?q=is%3Apr+reviewed-by%3A<username>+repo%3Aapache%2Fkafka+repo%3Aapache%2Fkafka-site&type=issues
> > > > >
> > > > > (replace <username> with the GitHub username)
> > > > >
> > > > > --
> > > > > Divij Vaidya
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Apr 18, 2023 at 4:09 PM Viktor Somogyi-Vass
> > > > >  wrote:
> > > > >
> > > > > > I'm not that familiar with Actions either, it just seemed like a
> > > > > > tool for this purpose. :)
> > > > > > I did some digging and what I have in mind is that on pull request
> > > > > > review it can trigger a workflow:
> > > > > >
> > > > > > https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_review
> > > > > >
> > > > > > We could in theory use the GitHub CLI to edit the description of
> > > > > > the PR when someone gives a review (or we could perhaps enable this
> > > > > > to simply comment too):
> > > > > >
> > > > > > https://docs.github.com/en/actions/using-workflows/using-github-cli-in-workflows
> > > > > >
> > > > > > So the action definition would look something like this below.
> > > > > > Note that the "run" part is very basic, it's just here for the
> > > > > > idea. We'll probably need a shell script instead of that line to
> > > > > > format it better. But the point is that it edits the PR and adds
> > > > > > the reviewer:
> > > > > >
> > > > > > name: Add reviewer
> > > > > > on:
> > > > > >   issues:
> > > > > >     types:
> > > > > >       - pull_request_review
> > > > > > jobs:
> > > > > >   comment:
> > > > > >     runs-on: ubuntu-latest
> > > > > >     steps:
> > > > > >       - run: gh pr edit $PR_ID --title "$PR_TITLE" --body "$PR_BODY\n\nReviewers: $SENDER"
> > > > > >         env:
> > > > > >           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
> > > > > >           PR_ID: ${{ github.event.pull_request.id }}
> > > > 

Re: Kafka client needs KAFKA-10337 to cover async commit use case

2023-05-03 Thread Erik van Oosten

Hi Philip,


Firstly, could you explain the situation
in which you would prefer to invoke commitAsync over commitSync in the
rebalance listener?


Of course!

Short answer: we prefer commitAsync because we want to handle multiple 
partitions concurrently using the ZIO runtime.


Long answer: this is in the context of zio-kafka. In zio-kafka the user 
writes code for a stream that processes data and does commits. In the 
library we intercept those commits and pass them to the KafkaConsumer. 
We also keep track of the offsets of handed-out records. Together this 
information allows us to track when a stream has finished processing a 
partition and when it is safe to start the rebalance.


All of this happens concurrently and asynchronously using the ZIO 
runtime. When calling commit inside the onPartitionRevoked callback the 
library does a thread-id check; we can only call the KafkaConsumer from 
the same thread that invoked us. This is unfortunate because it forces 
us to spin up a specialized single-threaded ZIO runtime inside the 
callback method. Though this runtime can run blocking methods like 
commitSync, it will need careful programming since all other tasks need 
to wait.
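
(For reference, the check in question is the consumer's single-thread guard; 
conceptually it looks something like this simplified sketch, not the exact 
KafkaConsumer code:

    import java.util.ConcurrentModificationException;
    import java.util.concurrent.atomic.AtomicLong;

    // Simplified sketch of the guard: every public consumer method acquires
    // this lightweight "lock" and fails fast if another thread is inside.
    class ThreadGuardSketch {
        private static final long NO_CURRENT_THREAD = -1L;
        private final AtomicLong currentThread = new AtomicLong(NO_CURRENT_THREAD);

        void acquire() {
            long threadId = Thread.currentThread().getId();
            if (threadId != currentThread.get()
                    && !currentThread.compareAndSet(NO_CURRENT_THREAD, threadId)) {
                throw new ConcurrentModificationException(
                    "KafkaConsumer is not safe for multi-threaded access");
            }
        }

        void release() {
            currentThread.set(NO_CURRENT_THREAD);
        }
    }

Inside a rebalance callback the polling thread already holds the guard, so a 
call from any other thread fails.)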


(BTW, it would be great if there is an option to disable the thread-id 
check. It has more use cases, see for example KAFKA-7143.)



is it your concern that we
currently don't have a way to invoke the callback, and the user won't be able
to correctly handle these failed/successful async commits?


Yes, that is correct.

Sorry - I dug a bit into the old PR. Seems like the issue is a broken 
contract: commitSync won't wait for the previous async commits 
to complete if it tries to commit an empty offset map.

Indeed! I am assuming the same is true for commitAsync. The important 
thing is that we need something to get those callbacks. I would prefer 
commitAsync but if only commitSync gets fixed we can use that as well. 
If there is another method completely for this task, that would be good 
as well.


Kind regards,
Erik.


Philip Nee wrote on 2023-05-02 21:49:

Hey Erik,

Just a couple of questions for you: firstly, could you explain the situation 
in which you would prefer to invoke commitAsync over commitSync in the 
rebalance listener? Typically we would use the synchronous method to 
ensure the commits are completed before moving on with the rebalancing, 
which leads to my second comment/question: is it your concern that we 
currently don't have a way to invoke the callback, and the user won't be 
able to correctly handle these failed/successful async commits?

Thanks,
P

On Tue, May 2, 2023 at 12:22 PM Erik van Oosten
 wrote:


Dear developers of the Kafka java client,

It seems I have found a feature gap in the Kafka java client.
KAFKA-10337 and its associated pull request on Github (from 2020!) 
would

solve this, but it was closed without merging. We would love to see it
being reconsidered for merging. This mail has the arguments for doing 
so.


The javadoc of `ConsumerRebalanceListener` method 
`onPartitionsRevoked`
recommends you commit all offsets within the method, thereby holding 
up
the rebalance until those commits are done. The (perceived) feature 
gap

is when the user is trying to do async commits from the rebalance
listener; there is nothing available to trigger the callbacks of
completed commits. Without these callbacks, there is no way to know when

it is safe to return from onPartitionsRevoked. (We cannot call `poll`
because the rebalance listener is already called from inside a poll.)

Calling `commitAsync` with an empty offsets parameter seems a perfect
candidate for triggering callbacks of earlier commits. Unfortunately,
commitAsync doesn't behave that way. This is fixed by the mentioned pull
request.
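
To make the desired usage concrete, here is a sketch of what the rebalance 
listener could do once the fix is in (assuming, per the pull request, that an 
empty-offsets commitAsync triggers the callbacks of earlier in-flight commits):

    import java.util.Collection;
    import java.util.Collections;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    // Sketch: drain callbacks of earlier async commits during revocation.
    class DrainingRebalanceListener implements ConsumerRebalanceListener {
        private final Consumer<?, ?> consumer;

        DrainingRebalanceListener(Consumer<?, ?> consumer) {
            this.consumer = consumer;
        }

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Commits nothing itself, but (with the fix) forces completion
            // callbacks of previously issued async commits to run first.
            consumer.commitAsync(Collections.emptyMap(), (offsets, exception) -> {
                // earlier commits have been processed once this fires
            });
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        }
    }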

The pull request conversation has a comment saying that calling 
`commit`

with an empty offsets parameter is not something that should happen. I
found this a strange thing to say. First of all, the method does have
special handling for this situation, negating the comment outright. In
addition, this special handling violates the contract of the method (as
specified in the javadoc section about ordering). Therefore, this pull
request has 2 advantages:

 1. KafkaConsumer.commitAsync will be more in line with its javadoc,
 2. the feature gap is gone.

Of course, it might be that I missed something and that there are 
other

ways to trigger the commit callbacks. I would be very happy to hear
about that because it means I do not have to wait for a release cycle.

If you agree these arguments are sound, I would be happy to make the
pull request mergable again.

Curious to your thoughts and kind regards,
 Erik.


--
Erik van Oosten
e.vanoos...@grons.nl
https://day-to-day-stuff.blogspot.com
Committer of zio-kafka: https://github.com/zio/zio-kafka