[ https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550119#comment-14550119 ]
Hadoop QA commented on YARN-3126:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 5m 19s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 31s | There were no new javac warning messages. |
| {color:green}+1{color} | release audit | 0m 20s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 44s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 31s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 1m 16s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warning. |
| {color:red}-1{color} | yarn tests | 60m 19s | Tests failed in hadoop-yarn-server-resourcemanager. |
| | | | 77m 34s | | |

|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-resourcemanager |
| | Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.isHDFS; locked 66% of time. Unsynchronized access at FileSystemRMStateStore.java:[line 156] |
| Timed out tests | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12733746/resourcelimit-test.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 93972a3 |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/7993/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/7993/console |

This message was automatically generated.

> FairScheduler: queue's usedResource is always more than the maxResource limit
> -----------------------------------------------------------------------------
>
> Key: YARN-3126
> URL: https://issues.apache.org/jira/browse/YARN-3126
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.3.0
> Environment: hadoop2.3.0, fair scheduler, spark 1.1.0
> Reporter: Xia Hu
> Labels: BB2015-05-TBR, assignContainer, fairscheduler, resources
> Fix For: trunk-win
> Attachments: resourcelimit-02.patch, resourcelimit-test.patch, resourcelimit.patch
>
> When submitting a Spark application (in both spark-on-yarn-cluster and
> spark-on-yarn-client mode), the queue's usedResources assigned by the
> FairScheduler can always grow beyond the queue's maxResources limit.
> Reading the FairScheduler code, I believe this happens because the
> requested resources are not checked against the queue limit when a
> container is assigned. Here is the detail:
> 1. Choose a queue. In this step, assignContainerPreCheck verifies that the
> queue's usedResource is not already bigger than its max.
> 2. Then choose an application in that queue.
> 3. Then choose a container. Here is the problem: there is no check whether
> this container would push the queue's resources over its max limit. If a
> queue's usedResource is 13G and the maxResource limit is 16G, a container
> asking for 4G may still be assigned successfully.
> This problem always shows up with Spark applications, because different
> applications can request containers of different sizes.
> By the way, I have already applied the patch from YARN-2083.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
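The missing check described in step 3 of the report above can be sketched as follows. This is a minimal illustration only: `ResourceSketch` and `fitsWithinMax` are hypothetical stand-ins, not the actual `org.apache.hadoop.yarn` resource classes or the FairScheduler API.

```java
// Hypothetical sketch of the pre-assignment check the report says is missing.
// ResourceSketch and fitsWithinMax are illustrative names, not Hadoop classes.
public class ResourceSketch {
    final int memoryGb;

    ResourceSketch(int memoryGb) {
        this.memoryGb = memoryGb;
    }

    /**
     * True only if assigning a container of size {@code request} keeps the
     * queue's usage at or under its maxResources limit. The reported bug is
     * that the scheduler checked used <= max before picking a container
     * (assignContainerPreCheck), but never checked used + request <= max.
     */
    static boolean fitsWithinMax(ResourceSketch used, ResourceSketch request,
                                 ResourceSketch max) {
        return used.memoryGb + request.memoryGb <= max.memoryGb;
    }

    public static void main(String[] args) {
        ResourceSketch used = new ResourceSketch(13);    // queue already uses 13G
        ResourceSketch max = new ResourceSketch(16);     // queue limit is 16G
        ResourceSketch request = new ResourceSketch(4);  // container asks for 4G

        // 13G + 4G = 17G > 16G, so the assignment should be rejected.
        System.out.println(fitsWithinMax(used, request, max)); // prints "false"
    }
}
```

With this guard in place, the 13G/16G/4G scenario from the report would be rejected instead of letting usedResources climb to 17G; a 3G request (13G + 3G = 16G) would still be allowed.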