[ https://issues.apache.org/jira/browse/YARN-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154071#comment-17154071 ]

Masatake Iwasaki commented on YARN-10347:
-----------------------------------------

Thanks, [~ayushtkn]. I'm running {{mvn test}} in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager locally. I will commit the patch after checking the result.

Since I can reproduce the Docker failure by running ./start-build-env.sh, I filed HADOOP-17120.
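
For reference, here is a minimal, self-contained sketch of the failure mode described in the issue below (class and thread names are illustrative, not taken from the actual CapacityScheduler code): acquiring the write lock of a {{ReentrantReadWriteLock}} twice while releasing it only once leaves the hold count at one, so the lock stays owned after the method returns and readers such as the {{submitApplication}} handlers park indefinitely.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DoubleLockDemo {
    private static final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Models the buggy reinitialize(): the write lock is acquired twice
    // (reentrantly) but released only once, so the calling thread still
    // owns the write lock after this method returns.
    static void reinitialize() {
        rwLock.writeLock().lock();       // first acquisition
        rwLock.writeLock().lock();       // second, reentrant acquisition
        try {
            // ... reload scheduler configuration ...
        } finally {
            rwLock.writeLock().unlock(); // releases only one of the two holds
        }
    }

    public static void main(String[] args) throws InterruptedException {
        reinitialize();
        // Models an IPC handler thread calling submitApplication(): it
        // parks on the read lock, exactly as in the thread dump below.
        Thread handler = new Thread(() -> {
            rwLock.readLock().lock();    // blocks: write lock is still held
            try {
                System.out.println("acquired read lock");
            } finally {
                rwLock.readLock().unlock();
            }
        }, "IPC Server handler");
        handler.start();
        handler.join(2000);
        System.out.println("handler state: " + handler.getState()); // WAITING
    }
}
{code}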

> Fix double locking in CapacityScheduler#reinitialize in branch-3.1
> ------------------------------------------------------------------
>
>                 Key: YARN-10347
>                 URL: https://issues.apache.org/jira/browse/YARN-10347
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler
>    Affects Versions: 3.1.4
>            Reporter: Masatake Iwasaki
>            Assignee: Masatake Iwasaki
>            Priority: Critical
>         Attachments: YARN-10347-branch-3.1.001.patch
>
>
> Double locking blocks other threads in the ResourceManager that are waiting for the lock.
> I found the issue while testing hadoop-3.1.4-RC2 on a deployment with RM-HA enabled. The ResourceManager blocks in {{submitApplication}}, waiting for the lock, when I run example MR applications.
> {noformat}
> "IPC Server handler 45 on default port 8032" #211 daemon prio=5 os_prio=0 tid=0x00007f0e45a40200 nid=0x418 waiting on condition [0x00007f0e14abe000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for  <0x0000000085d56510> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkAndGetApplicationPriority(CapacityScheduler.java:2521)
>         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:417)
>         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:342)
>         at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:678)
>         at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
>         at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
> {noformat}
>  
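
The usual remedy for this class of bug is to pair each lock acquisition with exactly one release in a {{finally}} block. The following is only a sketch of that general discipline, not the contents of the attached YARN-10347-branch-3.1.001.patch:

{code:java}
// Illustrative only: each writeLock.lock() is matched by exactly one
// writeLock.unlock(), and the unlock runs even if reinitialization throws.
void reinitialize(Configuration newConf) {
    writeLock.lock();        // acquired once
    try {
        // ... validate and apply the new configuration ...
    } finally {
        writeLock.unlock();  // released once, on every code path
    }
}
{code}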



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
