[ https://issues.apache.org/jira/browse/YARN-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17890515#comment-17890515 ]
Syed Shameerur Rahman commented on YARN-11191:
----------------------------------------------
[~slfan1989] I see that the patch/PR is merged. Why is this still open?
> Global Scheduler refreshQueue cause deadLock
> ---------------------------------------------
>
> Key: YARN-11191
> URL: https://issues.apache.org/jira/browse/YARN-11191
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler
> Affects Versions: 2.9.0, 3.0.0, 3.1.0, 2.10.0, 3.2.0, 3.3.0
> Reporter: ben yang
> Assignee: Tamas Domok
> Priority: Major
> Labels: pull-request-available
> Attachments: 1.jstack, Lock holding status.png, YARN-11191.001.patch
>
>
> This is a potential bug that may impact any cluster with preemption enabled.
> In our current version with preemption enabled, the CapacityScheduler calls
> the PreemptionManager's refreshQueue method when it refreshes its queues.
> That path holds the PreemptionManager write lock and requires the csqueue
> read lock. Meanwhile, ParentQueue.canAssignToThisQueue holds the csqueue
> read lock and requires the PreemptionManager read lock.
> A deadlock is possible here because of how the read lock behaves under the
> non-fair policy: when the lock is already held by a reader and the first
> request in the lock's wait queue is a write request, other read requests
> cannot acquire the lock.
> So the potential deadlock is:
> {code:java}
> CapacityScheduler.refreshQueue:  hold: PreemptionManager.writeLock
>                                  require: csqueue.readLock
> CapacityScheduler.schedule:      hold: csqueue.readLock
>                                  require: PreemptionManager.readLock
> other thread (completeContainer, releaseResource, etc.):
>                                  require: csqueue.writeLock
> {code}
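> The key ingredient is ReentrantReadWriteLock's behaviour under its default
> non-fair policy: once a write request is queued, new read requests no longer
> barge in, even while another reader still holds the lock. Below is a minimal,
> self-contained sketch of that behaviour only; the thread and class names are
> illustrative and not the scheduler's actual code.
> {code:java}
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> // Illustrative only: with a non-fair ReentrantReadWriteLock, a new read
> // request blocks once a write request is already queued, even though
> // another reader still holds the lock.
> public class ReaderBlockedBehindWriterDemo {
>     public static void main(String[] args) throws Exception {
>         ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); // non-fair by default
>
>         // Like CapacityScheduler.schedule holding csqueue.readLock for a while.
>         Thread firstReader = new Thread(() -> {
>             lock.readLock().lock();
>             try {
>                 Thread.sleep(5000);
>             } catch (InterruptedException ignored) {
>             } finally {
>                 lock.readLock().unlock();
>             }
>         });
>
>         // Like completeContainer/releaseResource waiting for csqueue.writeLock.
>         Thread writer = new Thread(() -> {
>             lock.writeLock().lock();
>             try {
>                 System.out.println("writer acquired");
>             } finally {
>                 lock.writeLock().unlock();
>             }
>         });
>
>         // Like refreshQueue asking for csqueue.readLock: it queues behind the
>         // writer instead of sharing the lock with the first reader.
>         Thread secondReader = new Thread(() -> {
>             lock.readLock().lock();
>             try {
>                 System.out.println("second reader acquired");
>             } finally {
>                 lock.readLock().unlock();
>             }
>         });
>
>         firstReader.start();
>         Thread.sleep(200);   // let the first reader acquire the read lock
>         writer.start();
>         Thread.sleep(200);   // let the writer queue up behind it
>         secondReader.start();
>
>         // Neither message prints until the first reader releases the lock
>         // after ~5 seconds; the second reader cannot jump the queued writer.
>         firstReader.join();
>         writer.join();
>         secondReader.join();
>     }
> }
> {code}
> In the scheduler, the first reader is itself waiting (via the PreemptionManager
> locks) on the blocked second reader, so the wait never ends and the cycle above
> becomes a real deadlock.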
> The jstack logs captured at the time are attached (1.jstack).