apoorvmittal10 commented on code in PR #16842:
URL: https://github.com/apache/kafka/pull/16842#discussion_r1811299985


##########
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##########
@@ -594,7 +602,8 @@ private SharePartition getOrCreateSharePartition(SharePartitionKey sharePartitio
     private void maybeCompleteInitializationWithException(
             SharePartitionKey sharePartitionKey,
             CompletableFuture<Map<TopicIdPartition, PartitionData>> future,
-            Throwable throwable) {
+            Throwable throwable
+    ) {

Review Comment:
   Reverted.



##########
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##########
@@ -603,22 +612,43 @@ private void maybeCompleteInitializationWithException(
             return;
         }
 
+        // Remove the partition from the cache as it's failed to initialize.
+        partitionCacheMap.computeIfPresent(sharePartitionKey, (k, v) -> null);
+        // The partition initialization failed, so complete the request with the exception.
+        // The broker should not be in this state; log the error on the broker, surface it
+        // to the client, and investigate the root cause of the error.
+        log.error("Error initializing share partition with key {}", sharePartitionKey, throwable);
+        future.completeExceptionally(throwable);
+    }
+
+    private void handleSharePartitionException(
+        SharePartitionKey sharePartitionKey,
+        Throwable throwable
+    ) {
         if (throwable instanceof NotLeaderOrFollowerException || throwable instanceof FencedStateEpochException) {
             log.info("The share partition with key {} is fenced: {}", sharePartitionKey, throwable.getMessage());
             // The share partition is fenced, hence remove the partition from the map and let the
             // client retry. But surface the error to the client so the client might take some
             // action, i.e. re-fetch the metadata and retry the fetch on the new leader.
-            partitionCacheMap.remove(sharePartitionKey);
-            future.completeExceptionally(throwable);
-            return;
+            partitionCacheMap.computeIfPresent(sharePartitionKey, (k, v) -> null);
         }
+    }
 
-        // The partition initialization failed, so complete the request with the exception.
-        // The server should not be in this state, so log the error on broker and surface the same
-        // to the client. As of now this state is in-recoverable for the broker, and we should
-        // investigate the root cause of the error.
-        log.error("Error initializing share partition with key {}", sharePartitionKey, throwable);
-        future.completeExceptionally(throwable);
+    // TODO: Should the return be -1 or throw an exception?
+    private int getLeaderEpoch(TopicPartition tp) {
+        Either<Errors, Partition> partitionOrError = replicaManager.getPartitionOrError(tp);
+        if (partitionOrError.isLeft()) {
+          log.error("Failed to get partition leader for topic partition: {}-{} due to error: {}",
+              tp.topic(), tp.partition(), partitionOrError.left().get());
+          return -1;

Review Comment:
   Done.
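Side note on the `computeIfPresent(key, (k, v) -> null)` idiom the diff adopts in place of `remove`: returning `null` from the remapping function deletes the entry, and on `ConcurrentHashMap` the check-and-remove runs atomically for that key. A minimal standalone sketch (the class name and cache contents here are illustrative, not from the PR):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfPresentRemoval {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> partitionCache = new ConcurrentHashMap<>();
        partitionCache.put("topicId:0", "sharePartition-state");

        // Returning null from the remapping function removes the mapping.
        // For ConcurrentHashMap the check-and-remove is atomic per key, so a
        // concurrent writer cannot slip in between the lookup and the removal.
        partitionCache.computeIfPresent("topicId:0", (key, value) -> null);
        System.out.println(partitionCache.containsKey("topicId:0")); // prints false

        // An absent key is a no-op: nothing is inserted and nothing throws.
        partitionCache.computeIfPresent("missing-key", (key, value) -> null);
        System.out.println(partitionCache.size()); // prints 0
    }
}
```

Plain `remove(key)` would also delete the entry, but `computeIfPresent` lets the cleanup compose with other per-key logic under the map's own lock for that bin.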



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
