lhotari commented on code in PR #24366:
URL: https://github.com/apache/pulsar/pull/24366#discussion_r2115897511
##########
pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentTopic.java:
##########
@@ -1510,6 +1511,28 @@ private CompletableFuture<Void> delete(boolean failIfHasSubscriptions,
         }
         fenceTopicToCloseOrDelete(); // Avoid clients reconnections while deleting
+        // Wait for compaction to finish to avoid the deadlock described below:
+        // 1. thread-compaction: creates a raw reader.
+        // 2. thread-topic-deleting: marks the topic as deleting and disconnects clients, which includes the
+        //    compaction reader.
+        // 3. thread-compaction: the raw reader attempts to reconnect due to the disconnection in step "2".
+        // 4. thread-topic-deleting: unsubscribes all subscriptions, which include "__compaction".
+        // 5. thread-topic-deleting: deleting the "__compaction" cursor waits for the in-progress compaction task.
+        // 6. thread-compaction: the raw reader can no longer connect because the topic is marked as
+        //    "fenced && isDeleting".
+        // 7. deadlock: "thread-topic-deleting" waits for the compaction task to finish while "thread-compaction"
+        //    keeps reconnecting.
+        // To solve the issue, let "thread-topic-deleting" release the fencing/deleting state and wait for the
+        // compaction task to finish. Since "thread-topic-deleting" never releases "lock.writeLock", the broker
+        // will not trigger a new compaction task.
+        if (currentCompaction != null && !currentCompaction.isDone()) {
+            unfenceTopicToResume();
+            return currentCompaction
+                    .thenCompose(__ -> delete(failIfHasSubscriptions, failIfHasBacklogs, closeIfClientsConnected))
Review Comment:
Since the `delete` call attempts to acquire the write lock again, wouldn't that deadlock when the compaction completes?
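
To make the concern concrete, here is a minimal, self-contained sketch (not Pulsar's actual code; the class and `deleteAgain` helper are made up for illustration) of the generic hazard: thread A holds a `ReentrantReadWriteLock` write lock and waits on a future whose continuation tries to acquire the same write lock on the completing thread B. Whether the retried `delete` actually hits this depends on whether `lock.writeLock` is still held when `currentCompaction` completes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: stands in for the delete/compaction interaction, not Pulsar code.
public class WriteLockCallbackDeadlock {
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Stand-in for the chained delete(...) call that re-acquires the write lock.
    private static CompletableFuture<Void> deleteAgain() {
        lock.writeLock().lock();   // blocks while another thread holds the write lock
        try {
            return CompletableFuture.completedFuture(null);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        CompletableFuture<Void> compaction = new CompletableFuture<>();

        lock.writeLock().lock();   // "thread-topic-deleting" takes the write lock
        try {
            // The continuation runs on whichever thread completes `compaction`
            // (the "thread-compaction" thread below), not on this thread.
            CompletableFuture<Void> retry = compaction.thenCompose(ignore -> deleteAgain());

            // Complete the "compaction" from another thread, as a compaction executor would.
            new Thread(() -> compaction.complete(null), "thread-compaction").start();

            // Waiting for `retry` here, while still holding the write lock, never
            // succeeds: deleteAgain() is stuck acquiring the lock on "thread-compaction".
            boolean finished = retry.orTimeout(2, TimeUnit.SECONDS)
                    .handle((v, e) -> e == null)
                    .join();
            System.out.println("retry finished: " + finished);   // prints "retry finished: false"
        } finally {
            lock.writeLock().unlock();   // only now can "thread-compaction" proceed
        }
    }
}
```

If the original `delete` has already released `lock.writeLock` by the time the continuation runs, the re-acquisition succeeds and there is no deadlock, so the answer hinges on where the write lock is released relative to the `thenCompose` retry.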