yashmayya commented on PR #13646: URL: https://github.com/apache/kafka/pull/13646#issuecomment-1527292922
Thanks @sambhav-jain-16

> What I suspect is happening is that when the method is initially storing the end offsets of the partitions, the connector hasn't produced 100 records till then and therefore the method doesn't consume fully even though messages are being produced by the connector.

I'm not sure how this is possible, given that we wait for `MINIMUM_MESSAGES` records to be committed first (i.e. `SourceTask::commitRecord` is called `MINIMUM_MESSAGES` times) before reading the end offsets:

https://github.com/apache/kafka/blob/c6ad151ac3bac0d8d1d6985d230eacaa170b8984/connect/runtime/src/test/java/org/apache/kafka/connect/integration/ExactlyOnceSourceIntegrationTest.java#L399-L410

Note that records are only "committed" after the producer transaction is committed successfully:

https://github.com/apache/kafka/blob/c6ad151ac3bac0d8d1d6985d230eacaa170b8984/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ExactlyOnceWorkerSourceTask.java#L302-L332
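To make the ordering argument concrete, here is a minimal, hypothetical sketch (not the actual test or runtime code; the class, method names, and topic name below are illustrative) of the invariant the test relies on: end offsets are read only *after* at least `MINIMUM_MESSAGES` commit callbacks have fired, and each callback fires only once the corresponding producer transaction has committed.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CommitThenReadSketch {
    static final int MINIMUM_MESSAGES = 100;
    // Counts down once per committed record, mirroring SourceTask::commitRecord
    // being invoked only after a successful transaction commit.
    static final CountDownLatch committed = new CountDownLatch(MINIMUM_MESSAGES);

    // Stand-in for the connector's commitRecord callback.
    static void commitRecord() {
        committed.countDown();
    }

    // Stand-in for the test reading the topic's end offsets: it blocks until
    // MINIMUM_MESSAGES records have been committed, so the captured end offset
    // cannot precede those records.
    static Map<String, Long> readEndOffsets() throws InterruptedException {
        if (!committed.await(10, TimeUnit.SECONDS)) {
            throw new IllegalStateException("timed out waiting for commits");
        }
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("test-topic-0", (long) MINIMUM_MESSAGES); // illustrative value
        return offsets;
    }

    public static void main(String[] args) throws Exception {
        Thread connector = new Thread(() -> {
            for (int i = 0; i < MINIMUM_MESSAGES; i++) {
                commitRecord(); // fires only after each transaction commits
            }
        });
        connector.start();
        Map<String, Long> endOffsets = readEndOffsets();
        connector.join();
        System.out.println("end offset = " + endOffsets.get("test-topic-0"));
    }
}
```

Under this ordering, the end offsets captured by the test already cover the first `MINIMUM_MESSAGES` committed records, which is why the "end offsets stored too early" explanation doesn't seem to fit.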