[ https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15828436#comment-15828436 ]
Hadoop QA commented on YARN-3884:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 4 new + 270 unchanged - 1 fixed = 274 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 41s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 13s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-3884 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12848089/YARN-3884.0006.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux a3602e731a49 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9130af3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14687/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/14687/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14687/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14687/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> ------------------------------------------------------------------------------
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
> Issue Type: Bug
> Components: resourcemanager
> Environment: Suse11 Sp3
> Reporter: Bibin A Chundatt
> Assignee: Bibin A Chundatt
> Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, YARN-3884.0006.patch
>
> Setup
> ===============
> 1 NM 3072 16 cores each
>
> Steps to reproduce
> ===============
> 1. Submit apps to Queue 1 with 512 MB, 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case.
> {code}
> 2015-07-02 20:45:31,169 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e24_1435849994778_0002_01_000013 Container Transitioned from NEW to RESERVED
> 2015-07-02 20:45:31,170 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container application=application_1435849994778_0002 resource=<memory:512, vCores:5> queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:2560, vCores:21>, usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 used=<memory:2560, vCores:21> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:31,170 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:3072, vCores:26>, usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, numContainers=6
> 2015-07-02 20:45:31,170 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.96875 absoluteUsedCapacity=0.96875 used=<memory:5632, vCores:31> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e24_1435849994778_0001_01_000014 Container Transitioned from NEW to ALLOCATED
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1435849994778_0001 CONTAINERID=container_e24_1435849994778_0001_01_000014
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_e24_1435849994778_0001_01_000014 of capacity <memory:512, vCores:1> on host host-10-19-92-117:64318, which has 6 containers, <memory:3072, vCores:14> used and <memory:0, vCores:2> available after allocation
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1435849994778_0001_000001 container=Container: [ContainerId: container_e24_1435849994778_0001_01_000014, NodeId: host-10-19-92-117:64318, NodeHttpAddress: host-10-19-92-117:65321, Resource: <memory:512, vCores:1>, Priority: 20, Token: null, ] queue=default: capacity=0.2, absoluteCapacity=0.2, usedResources=<memory:2560, vCores:5>, usedCapacity=2.0846906, absoluteUsedCapacity=0.41666666, numApps=1, numContainers=5 clusterResource=<memory:6144, vCores:32>
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=0.2, absoluteCapacity=0.2, usedResources=<memory:3072, vCores:6>, usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 used=<memory:6144, vCores:32> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:32,143 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e24_1435849994778_0001_01_000014 Container Transitioned from ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Trying to fulfill reservation for application application_1435849994778_0002 on node: host-10-19-92-143:64318
> 2015-07-02 20:45:32,174 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container application=application_1435849994778_0002 resource=<memory:512, vCores:5> queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:3072, vCores:26>, usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, numContainers=6 usedCapacity=2.0317461 absoluteUsedCapacity=0.8125 used=<memory:3072, vCores:26> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:32,174 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Skipping scheduling since node host-10-19-92-143:64318 is reserved by application appattempt_1435849994778_0002_000001
> 2015-07-02 20:45:32,213 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e24_1435849994778_0001_01_000014 Container Transitioned from ACQUIRED to RUNNING
> 2015-07-02 20:45:32,213 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
> 2015-07-02 20:45:33,178 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Trying to fulfill reservation for application application_1435849994778_0002 on node: host-10-19-92-143:64318
> 2015-07-02 20:45:33,178 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container application=application_1435849994778_0002 resource=<memory:512, vCores:5> queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:3072, vCores:26>, usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, numContainers=6 usedCapacity=2.0317461 absoluteUsedCapacity=0.8125 used=<memory:3072, vCores:26> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:33,178 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Skipping scheduling since node host-10-19-92-143:64318 is reserved by application appattempt_1435849994778_0002_000001
> 2015-07-02 20:45:33,704 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Application application_1435849994778_0002 unreserved on node host: host-10-19-92-143:64318 #containers=5 available=<memory:512, vCores:3> used=<memory:2560, vCores:13>, currently has 0 at priority 20; currentReservation <memory:0, vCores:0>
> 2015-07-02 20:45:33,704 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: QueueA used=<memory:2560, vCores:21> numContainers=5 user=dsperf user-resources=<memory:2560, vCores:21>
> 2015-07-02 20:45:33,710 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_e24_1435849994778_0002_01_000013, NodeId: host-10-19-92-143:64318, NodeHttpAddress: host-10-19-92-143:65321, Resource: <memory:512, vCores:5>, Priority: 20, Token: null, ] queue=QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:2560, vCores:21>, usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, numContainers=5 cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:33,710 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.9166667 absoluteUsedCapacity=0.9166667 used=<memory:5632, vCores:27> cluster=<memory:6144, vCores:32>
> 2015-07-02 20:45:33,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.QueueA stats: QueueA: capacity=0.4, absoluteCapacity=0.4, usedResources=<memory:2560, vCores:21>, usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, numContainers=5
> 2015-07-02 20:45:33,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1435849994778_0002_000001 released container container_e24_1435849994778_0002_01_000013 on node: host: host-10-19-92-143:64318 #containers=5 available=<memory:512, vCores:3> used=<memory:2560, vCores:13> with event: KILL
> {code}
>
> *Impact:*
> In the application history server the status gets updated to -1000 (INVALID), but the end time is not updated, so the Elapsed Time keeps changing.
> Please check the attached snapshots.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
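The ever-changing Elapsed Time described above is what you would expect when a finish time is never recorded: history UIs conventionally compute elapsed time as `finishTime - startTime` when the container has finished, and fall back to `now - startTime` while it has not, so a container killed straight out of RESERVED without the finish bookkeeping looks perpetually "running". A minimal sketch of that arithmetic (hypothetical class and field names for illustration, not the actual RMContainerImpl code):

```java
// Sketch of the symptom: a container whose KILL-from-RESERVED transition
// skipped the finish bookkeeping has finishTime == 0, so each reader
// computes a different, ever-growing elapsed time.
public class ElapsedTimeSketch {

    // Hypothetical stand-in for a container record in the history store.
    static class ContainerRecord {
        long startTime;
        long finishTime; // 0 means "never recorded"

        long elapsed(long now) {
            // Common UI convention: use finishTime once set,
            // otherwise fall back to the current time.
            return (finishTime > 0 ? finishTime : now) - startTime;
        }
    }

    public static void main(String[] args) {
        ContainerRecord buggy = new ContainerRecord();
        buggy.startTime = 1000L;
        // Bug path: finishTime never set on the RESERVED -> KILL transition.
        System.out.println(buggy.elapsed(5000L)); // 4000
        System.out.println(buggy.elapsed(9000L)); // 8000 -- grows on each refresh

        ContainerRecord fixed = new ContainerRecord();
        fixed.startTime = 1000L;
        fixed.finishTime = 2000L; // fix: record finish time when the container is killed
        System.out.println(fixed.elapsed(5000L)); // 1000
        System.out.println(fixed.elapsed(9000L)); // 1000 -- stable
    }
}
```

This is also why the status showing as -1000 (INVALID) alone is not enough to diagnose the problem from the UI: the status field was written, but the finish-time field was not, and only the latter drives the elapsed-time column.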