satishd commented on code in PR #13947:
URL: https://github.com/apache/kafka/pull/13947#discussion_r1289750794


##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -343,21 +345,78 @@ public void onLeadershipChange(Set<Partition> partitionsBecomeLeader,
     /**
      * Deletes the internal topic partition info if delete flag is set as true.
      *
-     * @param topicPartition topic partition to be stopped.
+     * @param topicPartitions topic partitions that needs to be stopped.
      * @param delete         flag to indicate whether the given topic partitions to be deleted or not.
      */
-    public void stopPartitions(TopicPartition topicPartition, boolean delete) {
+    public void stopPartitions(Set<TopicPartition> topicPartitions,
+                               boolean delete,
+                               BiConsumer<TopicPartition, Throwable> errorHandler) {
+        LOGGER.debug("Stopping {} partitions, delete: {}", topicPartitions.size(), delete);
+        Set<TopicIdPartition> topicIdPartitions = topicPartitions.stream()
+                .filter(topicIdByPartitionMap::containsKey)
+                .map(tp -> new TopicIdPartition(topicIdByPartitionMap.get(tp), tp))
+                .collect(Collectors.toSet());
+
+        topicIdPartitions.forEach(tpId -> {
+            try {
+                RLMTaskWithFuture task = leaderOrFollowerTasks.remove(tpId);
+                if (task != null) {
+                    LOGGER.info("Cancelling the RLM task for tpId: {}", tpId);
+                    task.cancel();
+                }
+                if (delete) {
+                    LOGGER.info("Deleting the remote log segments task for partition: {}", tpId);
+                    deleteRemoteLogPartition(tpId);
+                }
+            } catch (Exception ex) {
+                errorHandler.accept(tpId.topicPartition(), ex);
+            }
+        });
+
         if (delete) {
-            // Delete from internal datastructures only if it is to be deleted.
-            Uuid topicIdPartition = topicPartitionIds.remove(topicPartition);
-            LOGGER.debug("Removed partition: {} from topicPartitionIds", topicIdPartition);
+            // NOTE: this#stopPartitions method is called when Replica state changes to Offline and ReplicaDeletionStarted
+            remoteLogMetadataManager.onStopPartitions(topicIdPartitions);
+            topicPartitions.forEach(topicIdByPartitionMap::remove);
+        }
+    }
+
+    private void deleteRemoteLogPartition(TopicIdPartition partition) throws RemoteStorageException, ExecutionException, InterruptedException {
+        List<RemoteLogSegmentMetadata> metadataList = new ArrayList<>();
+        remoteLogMetadataManager.listRemoteLogSegments(partition).forEachRemaining(metadataList::add);
+
+        List<RemoteLogSegmentMetadataUpdate> deleteSegmentStartedEvents = metadataList.stream()
+                .map(metadata ->
+                        new RemoteLogSegmentMetadataUpdate(metadata.remoteLogSegmentId(), time.milliseconds(),
+                                RemoteLogSegmentState.DELETE_SEGMENT_STARTED, brokerId))
+                .collect(Collectors.toList());
+        publishEvents(deleteSegmentStartedEvents).get();
+
+        // KAFKA-15166: Instead of deleting the segment one by one. If the underlying RemoteStorageManager supports deleting
+        // a partition, then we can call that API instead.

Review Comment:
   This is not needed once we address removing the remote log segments in an asynchronous manner through the controller and RLMM, as mentioned in KIP-405.
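The new `stopPartitions` first resolves each `TopicPartition` to a `TopicIdPartition`, silently skipping partitions with no registered topic id. A minimal, self-contained sketch of that filter-then-map step (the nested record types here are hypothetical stand-ins for Kafka's real classes):

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;

public class StopPartitionsSketch {
    // Hypothetical stand-ins for Kafka's TopicPartition/TopicIdPartition.
    record TopicPartition(String topic, int partition) {}
    record TopicIdPartition(UUID topicId, TopicPartition topicPartition) {}

    // Same filter-then-map pattern as the PR: a partition with no entry in
    // topicIdByPartitionMap is dropped rather than failing the whole call.
    static Set<TopicIdPartition> resolve(Set<TopicPartition> topicPartitions,
                                         Map<TopicPartition, UUID> topicIdByPartitionMap) {
        return topicPartitions.stream()
                .filter(topicIdByPartitionMap::containsKey)
                .map(tp -> new TopicIdPartition(topicIdByPartitionMap.get(tp), tp))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        TopicPartition known = new TopicPartition("logs", 0);
        TopicPartition unknown = new TopicPartition("metrics", 1);
        // Only "logs-0" has a registered topic id, so only it resolves.
        Map<TopicPartition, UUID> ids = Map.of(known, UUID.randomUUID());
        Set<TopicIdPartition> resolved = resolve(Set.of(known, unknown), ids);
        System.out.println(resolved.size()); // prints 1
    }
}
```

One consequence of this pattern worth noting in review: a partition that was never registered (or already removed from the map) is skipped without invoking the error handler.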



##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -343,21 +345,78 @@ public void onLeadershipChange(Set<Partition> partitionsBecomeLeader,
     /**
      * Deletes the internal topic partition info if delete flag is set as true.
      *
-     * @param topicPartition topic partition to be stopped.
+     * @param topicPartitions topic partitions that needs to be stopped.
      * @param delete         flag to indicate whether the given topic partitions to be deleted or not.
      */
-    public void stopPartitions(TopicPartition topicPartition, boolean delete) {
+    public void stopPartitions(Set<TopicPartition> topicPartitions,
+                               boolean delete,
+                               BiConsumer<TopicPartition, Throwable> errorHandler) {
+        LOGGER.debug("Stopping {} partitions, delete: {}", topicPartitions.size(), delete);
+        Set<TopicIdPartition> topicIdPartitions = topicPartitions.stream()
+                .filter(topicIdByPartitionMap::containsKey)
+                .map(tp -> new TopicIdPartition(topicIdByPartitionMap.get(tp), tp))
+                .collect(Collectors.toSet());
+
+        topicIdPartitions.forEach(tpId -> {
+            try {
+                RLMTaskWithFuture task = leaderOrFollowerTasks.remove(tpId);
+                if (task != null) {
+                    LOGGER.info("Cancelling the RLM task for tpId: {}", tpId);
+                    task.cancel();
+                }
+                if (delete) {
+                    LOGGER.info("Deleting the remote log segments task for partition: {}", tpId);
+                    deleteRemoteLogPartition(tpId);
+                }
+            } catch (Exception ex) {
+                errorHandler.accept(tpId.topicPartition(), ex);
+            }
+        });
+
         if (delete) {
-            // Delete from internal datastructures only if it is to be deleted.
-            Uuid topicIdPartition = topicPartitionIds.remove(topicPartition);
-            LOGGER.debug("Removed partition: {} from topicPartitionIds", topicIdPartition);
+            // NOTE: this#stopPartitions method is called when Replica state changes to Offline and ReplicaDeletionStarted
+            remoteLogMetadataManager.onStopPartitions(topicIdPartitions);
+            topicPartitions.forEach(topicIdByPartitionMap::remove);
+        }
+    }
+
+    private void deleteRemoteLogPartition(TopicIdPartition partition) throws RemoteStorageException, ExecutionException, InterruptedException {
+        List<RemoteLogSegmentMetadata> metadataList = new ArrayList<>();
+        remoteLogMetadataManager.listRemoteLogSegments(partition).forEachRemaining(metadataList::add);
+
+        List<RemoteLogSegmentMetadataUpdate> deleteSegmentStartedEvents = metadataList.stream()
+                .map(metadata ->
+                        new RemoteLogSegmentMetadataUpdate(metadata.remoteLogSegmentId(), time.milliseconds(),
+                                RemoteLogSegmentState.DELETE_SEGMENT_STARTED, brokerId))
+                .collect(Collectors.toList());
+        publishEvents(deleteSegmentStartedEvents).get();
+
+        // KAFKA-15166: Instead of deleting the segment one by one. If the underlying RemoteStorageManager supports deleting
+        // a partition, then we can call that API instead.
+        for (RemoteLogSegmentMetadata metadata: metadataList) {
+            remoteLogStorageManager.deleteLogSegmentData(metadata);

Review Comment:
   Note: this call is idempotent; it does not throw an exception when there is no segment for the given metadata.
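The `BiConsumer<TopicPartition, Throwable>` parameter makes the failure contract explicit: an exception while stopping or deleting one partition is reported to the caller and does not abort the remaining partitions. A small sketch of that contract (the string partitions and failing branch are hypothetical, standing in for the real cancel/delete work):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

public class ErrorHandlerSketch {
    // Processes every partition; failures are recorded, not rethrown.
    static Map<String, Throwable> stopAll(List<String> partitions) {
        Map<String, Throwable> errors = new LinkedHashMap<>();
        BiConsumer<String, Throwable> errorHandler = errors::put;
        for (String tp : partitions) {
            try {
                // Stand-in for cancel-task + delete-segments work;
                // partitions named "bad-*" simulate a remote-storage failure.
                if (tp.startsWith("bad")) {
                    throw new IllegalStateException("delete failed for " + tp);
                }
            } catch (Exception ex) {
                errorHandler.accept(tp, ex); // record the failure, keep going
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        Map<String, Throwable> errors = stopAll(List.of("logs-0", "bad-1", "logs-2"));
        System.out.println(errors.keySet()); // prints [bad-1]
    }
}
```

With this shape, a caller such as the replica-state machine can decide per partition whether a failure is retriable instead of losing the whole batch to one bad partition.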



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
