[ https://issues.apache.org/jira/browse/YARN-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868628#comment-16868628 ]
Hadoop QA commented on YARN-9209:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 31s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 20m 22s | trunk passed |
| +1 | compile | 0m 58s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 14m 22s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 24s | trunk passed |
| +1 | javadoc | 0m 34s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 51s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| +1 | checkstyle | 0m 30s | the patch passed |
| +1 | mvnsite | 0m 49s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 10s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 25s | the patch passed |
| +1 | javadoc | 0m 28s | the patch passed |
|| Other Tests ||
| -1 | unit | 86m 34s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 144m 30s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |

|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9209 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972335/YARN-9209.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7347c11e028e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e02eb24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/24294/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/24294/testReport/ |
| Max. process+thread count | 878 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/24294/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
> When nodePartition is not set in Placement Constraints, containers are
> allocated only in default partition
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-9209
>                 URL: https://issues.apache.org/jira/browse/YARN-9209
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler, scheduler
>    Affects Versions: 3.1.0
>            Reporter: Tarun Parimi
>            Assignee: Tarun Parimi
>            Priority: Major
>         Attachments: YARN-9209.001.patch, YARN-9209.002.patch, YARN-9209.003.patch
>
>
> When an application sets a placement constraint without specifying a nodePartition, the default partition is always chosen as the constraint when allocating containers. This can be a problem when an application is submitted to a queue which doesn't have enough capacity available on the default partition.
> This is a common scenario when node labels are configured for a particular queue. The below sample sleeper service cannot get even a single container allocated when it is submitted to a "labeled_queue", even though enough capacity is available on the label/partition configured for the queue. Only the AM container runs.
> {code:java}
> {
>   "name": "sleeper-service",
>   "version": "1.0.0",
>   "queue": "labeled_queue",
>   "components": [
>     {
>       "name": "sleeper",
>       "number_of_containers": 2,
>       "launch_command": "sleep 90000",
>       "resource": {
>         "cpus": 1,
>         "memory": "4096"
>       },
>       "placement_policy": {
>         "constraints": [
>           {
>             "type": "ANTI_AFFINITY",
>             "scope": "NODE",
>             "target_tags": [
>               "sleeper"
>             ]
>           }
>         ]
>       }
>     }
>   ]
> }
> {code}
> It runs fine if I specify the node_partition explicitly in the constraints like below.
> {code:java} > { > "name": "sleeper-service", > "version": "1.0.0", > "queue": "labeled_queue", > "components": [ > { > "name": "sleeper", > "number_of_containers": 2, > "launch_command": "sleep 90000", > "resource": { > "cpus": 1, > "memory": "4096" > }, > "placement_policy": { > "constraints": [ > { > "type": "ANTI_AFFINITY", > "scope": "NODE", > "target_tags": [ > "sleeper" > ], > "node_partitions": [ > "label" > ] > } > ] > } > } > ] > } > {code} > The problem seems to be because only the default partition "" is considered > when node_partition constraint is not specified as seen in below RM log. > {code:java} > 2019-01-17 16:51:59,921 INFO placement.SingleConstraintAppPlacementAllocator > (SingleConstraintAppPlacementAllocator.java:validateAndSetSchedulingRequest(367)) > - Successfully added SchedulingRequest to > app=appattempt_1547734161165_0010_000001 targetAllocationTags=[sleeper]. > nodePartition= > {code} > However, I think it makes more sense to consider "*" or the > {{default-node-label-expression}} of the queue if configured, when no > node_partition is specified in the placement constraint. Since not specifying > any node_partition should ideally mean we don't enforce placement constraints > on any node_partition. However we are enforcing the default partition instead > now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) --------------------------------------------------------------------- To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org