fvaleri commented on code in PR #13514:
URL: https://github.com/apache/kafka/pull/13514#discussion_r1177496559


##########
examples/src/main/java/kafka/examples/Consumer.java:
##########
@@ -76,12 +79,17 @@ public Consumer(String threadName,
     public void run() {
         // the consumer instance is NOT thread safe
         try (KafkaConsumer<Integer, String> consumer = createKafkaConsumer()) {
+            // subscribes to a list of topics to get dynamically assigned partitions
+            // this class implements the rebalance listener that we pass here to be notified of such events
             consumer.subscribe(singleton(topic), this);
             Utils.printOut("Subscribed to %s", topic);
             while (!closed && remainingRecords > 0) {
                 try {
-                    // next poll must be called within session.timeout.ms to avoid rebalance
-                    ConsumerRecords<Integer, String> records = consumer.poll(Duration.ofSeconds(1));
+                    // if required, poll updates partition assignment and invokes the configured rebalance listener
+                    // then tries to fetch records sequentially using the last committed offset or auto.offset.reset policy
+                    // returns immediately if there are records or times out returning an empty record set
+                    // the next poll must be called within session.timeout.ms to avoid group rebalance
+                    ConsumerRecords<Integer, String> records = consumer.poll(Duration.ofSeconds(10));

Review Comment:
   I can revert that, considering that the examples are supposed to be run on 
localhost and payloads are very small.
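   For context, a minimal, self-contained sketch of the subscribe/poll pattern the new comments describe; the topic, group id, and bootstrap address below are placeholders, not values from the example:

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import static java.util.Collections.singleton;

public class PollLoopSketch implements ConsumerRebalanceListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sketch-group");            // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribing with a listener lets poll() notify us when partitions are revoked or assigned
            consumer.subscribe(singleton("sketch-topic"), new PollLoopSketch()); // hypothetical topic
            for (int i = 0; i < 10; i++) {
                // poll() first completes any pending group rebalance (invoking the listener),
                // then fetches records starting from the last committed offset (or per auto.offset.reset),
                // returning early when data is available or an empty batch after the timeout
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("%s-%d@%d%n", r.topic(), r.partition(), r.offset()));
            }
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println("Revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("Assigned: " + partitions);
    }
}
```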



##########
examples/src/main/java/kafka/examples/Consumer.java:
##########
@@ -91,9 +99,13 @@ public void run() {
                     // we can't recover from these exceptions
                     Utils.printErr(e.getMessage());
                     shutdown();
+                } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) {
+                    // invalid or no offset found without auto.reset.policy
+                    Utils.printOut("Invalid or no offset found, using latest");
+                    consumer.seekToEnd(emptyList());

Review Comment:
   I think this is correct. The javadoc says: "If no partitions are provided, 
seek to the final offset for all of the currently assigned partitions."
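   A tiny sketch of that javadoc behavior (the helper name is made up):

```java
import org.apache.kafka.clients.consumer.Consumer;

import static java.util.Collections.emptyList;

class SeekToEndSketch {
    // moves the position of every partition the consumer currently owns to the log end
    static void resetAllAssignedToLatest(Consumer<?, ?> consumer) {
        // an empty collection is documented to mean "all currently assigned partitions",
        // so this is equivalent to consumer.seekToEnd(consumer.assignment())
        consumer.seekToEnd(emptyList());
    }
}
```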



##########
examples/src/main/java/kafka/examples/Consumer.java:
##########
@@ -91,9 +99,13 @@ public void run() {
                     // we can't recover from these exceptions
                     Utils.printErr(e.getMessage());
                     shutdown();
+                } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) {
+                    // invalid or no offset found without auto.reset.policy
+                    Utils.printOut("Invalid or no offset found, using latest");
+                    consumer.seekToEnd(emptyList());
+                    consumer.commitSync();

Review Comment:
   In the exactly-once demo (auto commit disabled), what happens if you seek to 
the end and there are no transactions to process in the following cycles? I think 
you would seek again after every consumer restart, until some transaction is 
processed and its offsets are committed. I know this can't happen in this demo, 
but it could in theory, so I think this commitSync() is correct.
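   To make that restart scenario concrete, a sketch of the recovery pattern being discussed (class and method names are illustrative, assuming enable.auto.commit=false):

```java
import java.time.Duration;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;

import static java.util.Collections.emptyList;

class SeekAndCommitSketch {
    static <K, V> ConsumerRecords<K, V> pollOrResetToLatest(Consumer<K, V> consumer) {
        try {
            return consumer.poll(Duration.ofSeconds(1));
        } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) {
            // reset every assigned partition to the log end
            consumer.seekToEnd(emptyList());
            // committing right after the seek persists the new position (as discussed above),
            // so a restarted consumer resumes from it instead of hitting the exception and seeking again
            consumer.commitSync();
            return ConsumerRecords.empty();
        }
    }
}
```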


