kfaraz commented on code in PR #17735:
URL: https://github.com/apache/druid/pull/17735#discussion_r1962115071


##########
extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/supervisor/KafkaSupervisor.java:
##########
@@ -380,6 +383,94 @@ public LagStats computeLagStats()
     return computeLags(partitionRecordLag);
   }
 
+  /**
+   * This method is similar to updatePartitionLagFromStream
+   * but also determines time lag. Once this method has been
+   * tested, we can remove the older one.
+   */
+  protected void updatePartitionTimeAndRecordLagFromStream()
+  {
+    final Map<KafkaTopicPartition, Long> highestCurrentOffsets = getHighestCurrentOffsets();
+
+    getRecordSupplierLock().lock();
+    try {
+      Set<KafkaTopicPartition> partitionIds;
+      try {
+        partitionIds = recordSupplier.getPartitionIds(getIoConfig().getStream());
+      }
+      catch (Exception e) {
+        log.warn("Could not fetch partitions for topic/stream [%s]", getIoConfig().getStream());
+        throw new StreamException(e);
+      }
+
+      for (Map.Entry<KafkaTopicPartition, Long> entry : highestCurrentOffsets.entrySet()) {
+        if (partitionIds.contains(entry.getKey()) && highestCurrentOffsets.get(entry.getKey()) != null) {
+          // since we need to consider the last arrived record at that sequence do a `-1`
+          recordSupplier.seek(new StreamPartition<>(getIoConfig().getStream(), entry.getKey()), entry.getValue() - 1);
+        }
+      }
+
+      final Map<KafkaTopicPartition, Long> lastIngestedTimestamps =
+          CollectionUtils.mapValues(getRecordPerPartitionAtCurrentOffset(partitionIds),
+                                    OrderedPartitionableRecord::getTimestamp
+          );
+
+      Set<StreamPartition<KafkaTopicPartition>> partitions = partitionIds
+          .stream()
+          .map(e -> new StreamPartition<>(getIoConfig().getStream(), e))
+          .collect(Collectors.toSet());
+
+      recordSupplier.seekToLatest(partitions);
+
+      // this method isn't actually computing the lag, just fetching the latest offsets from the stream. This is
+      // because we currently only have record lag for kafka, which can be lazily computed by subtracting the highest
+      // task offsets from the latest offsets from the stream when it is needed
+      latestSequenceFromStream =
+          partitions.stream().collect(Collectors.toMap(StreamPartition::getPartitionId, p -> recordSupplier.getPosition(p)));
+
+      // .position() gives next value to read, and we need seek by -2 to get the current record in next poll()
+      for (Map.Entry<KafkaTopicPartition, Long> entry : latestSequenceFromStream.entrySet()) {
+        recordSupplier.seek(new StreamPartition<>(getIoConfig().getStream(), entry.getKey()), entry.getValue() - 2);

Review Comment:
   I think this is slightly incorrect.
   
   In the case you describe above, if the latest offset is n, `seekToEnd()` would have the same effect as `seek(n)`.
   
   (This is because the latest offset is always defined as the offset of the next record to read,
   i.e., for an empty partition the latest offset is 0, not -1.)
   
   The `seekToEnd()` method already positions the consumer at the end of the partition, i.e., _after_ the last record.
   https://www.baeldung.com/kafka-consumer-seek#3-seek-by-end
   
   So `getPosition()` after this would return n, because n is the offset of the next record to read.
   The offset of the last record is (n - 1), so we need to do `seek(n - 1)`.
   
   You could try to test it out either via a unit test or some cluster testing.
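   
   For reference, a rough sketch of the adjustment being suggested, reusing only the `recordSupplier` calls already present in the diff (the `> 0` guard for empty partitions is an extra assumption, not something this diff or thread settles):
   
   ```java
   // Position the consumer at the end of each partition; getPosition() then
   // returns n, the offset of the *next* record that would be read.
   recordSupplier.seekToLatest(partitions);
   
   latestSequenceFromStream = partitions
       .stream()
       .collect(Collectors.toMap(StreamPartition::getPartitionId, recordSupplier::getPosition));
   
   // The last record lives at (n - 1), so seek there rather than (n - 2).
   // Skip empty partitions, where n is 0 and (n - 1) would not be a valid offset.
   for (Map.Entry<KafkaTopicPartition, Long> entry : latestSequenceFromStream.entrySet()) {
     if (entry.getValue() > 0) {
       recordSupplier.seek(
           new StreamPartition<>(getIoConfig().getStream(), entry.getKey()),
           entry.getValue() - 1
       );
     }
   }
   ```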



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
