[jira] [Commented] (YARN-5199) Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs
[ https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317903#comment-15317903 ] Hadoop QA commented on YARN-5199:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 13s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 48s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 40s {color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808556/YARN-5199.3.patch |
| JIRA Issue | YARN-5199 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux ac5c1a1dbba0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3a154f7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11866/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
| Console output |
[jira] [Commented] (YARN-5118) Tests fail with localizer port bind exception.
[ https://issues.apache.org/jira/browse/YARN-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317893#comment-15317893 ] Rohith Sharma K S commented on YARN-5118: +1 lgtm > Tests fail with localizer port bind exception. > --- > > Key: YARN-5118 > URL: https://issues.apache.org/jira/browse/YARN-5118 > Project: Hadoop YARN > Issue Type: Test > Components: test > Reporter: Brahma Reddy Battula > Assignee: Brahma Reddy Battula > Attachments: YARN-5118.patch > > > The following tests fail with a localizer port bind exception. > {noformat} > TestQueuingContainerManager > TestEventFlow > TestNodeStatusUpdaterForLabels > TestLogAggregationService > {noformat} > See the following for more details: > https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1473/testReport/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5118) Tests fail with localizer port bind exception.
[ https://issues.apache.org/jira/browse/YARN-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5118: Issue Type: Test (was: Bug) > Tests fail with localizer port bind exception. > --- > > Key: YARN-5118 > URL: https://issues.apache.org/jira/browse/YARN-5118 > Project: Hadoop YARN > Issue Type: Test > Components: test > Reporter: Brahma Reddy Battula > Assignee: Brahma Reddy Battula > Attachments: YARN-5118.patch > > > The following tests fail with a localizer port bind exception. > {noformat} > TestQueuingContainerManager > TestEventFlow > TestNodeStatusUpdaterForLabels > TestLogAggregationService > {noformat} > See the following for more details: > https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1473/testReport/
[jira] [Updated] (YARN-5199) Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs
[ https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5199: Attachment: YARN-5199.3.patch Fix the checkstyle issue. > Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs > > > Key: YARN-5199 > URL: https://issues.apache.org/jira/browse/YARN-5199 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Xuan Gong > Assignee: Xuan Gong > Attachments: YARN-5199.1.patch, YARN-5199.2.patch, YARN-5199.3.patch > >
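The leak addressed here is the common pattern of a stream opened inside a web-service response body that is never closed on the error path. Below is a minimal standalone sketch of the fix using try-with-resources; the class and method names are hypothetical illustrations, not the actual YARN code.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;

public class StreamingLogSketch {
    // Copy a container log file to the response stream. try-with-resources
    // guarantees the FileInputStream is closed even if write() throws,
    // which is the kind of leak this JIRA fixes.
    static void writeLog(String path, OutputStream out) throws IOException {
        byte[] buf = new byte[65536];
        try (FileInputStream in = new FileInputStream(path)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } // in.close() runs here on every path, including exceptions
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("container", ".log");
        Files.write(f.toPath(), "hello log\n".getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeLog(f.getAbsolutePath(), out);
        System.out.println(out.toString().trim());
    }
}
```

Pre-Java-7 code (or code returning a lazily consumed StreamingOutput) has to do the equivalent with an explicit try/finally, which is easy to forget on early-return or exception paths.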
[jira] [Commented] (YARN-4525) Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317830#comment-15317830 ] Hudson commented on YARN-4525: SUCCESS: Integrated in Hadoop-trunk-Commit #9918 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9918/]) YARN-4525. Fix bug in RLESparseResourceAllocation.getRangeOverlapping(). (arun suresh: rev 3a154f75ed85d864b3ffd35818992418f2b6aa59) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestRLESparseResourceAllocation.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/RLESparseResourceAllocation.java > Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Ishai Menache > Assignee: Ishai Menache > Fix For: 2.8.0 > > Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the RLESparseResourceAllocation object is a result of a merge operation, the underlying map is a "view" within some range. If 'end' is outside that range, headMap(..) throws an uncaught exception.
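The corner case described above is standard NavigableMap behavior and can be reproduced with a plain TreeMap, independent of the YARN code: headMap on a range view throws IllegalArgumentException when the requested key lies outside the view's bounds.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class HeadMapCornerCase {
    public static void main(String[] args) {
        TreeMap<Long, Integer> alloc = new TreeMap<>();
        alloc.put(0L, 1);
        alloc.put(10L, 2);
        alloc.put(20L, 3);

        // A "view" restricted to [0, 10], as a merge operation might produce.
        NavigableMap<Long, Integer> view = alloc.subMap(0L, true, 10L, true);

        // Key inside the view's range: fine. Only key 0 is strictly below 10.
        System.out.println(view.headMap(10L).size());

        // 'end' outside the view's range: IllegalArgumentException, not an
        // empty or clamped result -- this is what the patch guards against.
        try {
            view.headMap(15L);
        } catch (IllegalArgumentException e) {
            System.out.println("headMap threw: IllegalArgumentException");
        }
    }
}
```

The fix therefore has to clamp the query bounds to the view's range (or catch the exception) before calling headMap on a map that may be a submap view.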
[jira] [Updated] (YARN-4525) Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-4525: Summary: Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...) (was: Bug in RLESparseResourceAllocation.getRangeOverlapping(...)) > Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Ishai Menache > Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the RLESparseResourceAllocation object is a result of a merge operation, the underlying map is a "view" within some range. If 'end' is outside that range, headMap(..) throws an uncaught exception.
[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: Fix NPE in corner case
[ https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317805#comment-15317805 ] Hudson commented on YARN-5185: SUCCESS: Integrated in Hadoop-trunk-Commit #9917 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9917/]) YARN-5185. StageAllocaterGreedyRLE: Fix NPE in corner case. (Carlo (arun suresh: rev 7a9b7372a1a917c7b5e1beca7e13c0419e3dbfef) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/StageAllocatorGreedyRLE.java > StageAllocaterGreedyRLE: Fix NPE in corner case > > > Key: YARN-5185 > URL: https://issues.apache.org/jira/browse/YARN-5185 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager > Reporter: Carlo Curino > Assignee: Carlo Curino > Fix For: 2.8.0 > > Attachments: YARN-5185.1.patch > > > If the plan has only one interval and the reservation overlaps it exactly, partialMap.higherKey() will return null, which we should guard against.
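The NPE scenario above comes from NavigableMap.higherKey, which returns null when no strictly greater key exists, e.g. when the plan has a single interval that the reservation matches exactly. A minimal standalone illustration of the guard (variable names are illustrative, not the actual StageAllocatorGreedyRLE code):

```java
import java.util.TreeMap;

public class HigherKeyGuard {
    public static void main(String[] args) {
        // A plan with a single interval starting at t=0.
        TreeMap<Long, Integer> partialMap = new TreeMap<>();
        partialMap.put(0L, 100);

        long reservationStart = 0L;

        // higherKey returns null when there is no strictly greater key;
        // unboxing or dereferencing it unguarded is the NPE being fixed.
        Long next = partialMap.higherKey(reservationStart);
        long intervalEnd = (next != null) ? next : Long.MAX_VALUE; // the guard
        System.out.println(intervalEnd == Long.MAX_VALUE ? "guarded null" : "next=" + next);
    }
}
```

The same null-return convention applies to lowerKey, floorKey, and ceilingKey, so any code iterating interval boundaries this way needs the same check.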
[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317802#comment-15317802 ] Arun Suresh commented on YARN-4525: Pretty straightforward patch. Looks good to me, and nice test case. +1, I shall commit this shortly. > Bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Ishai Menache > Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the RLESparseResourceAllocation object is a result of a merge operation, the underlying map is a "view" within some range. If 'end' is outside that range, headMap(..) throws an uncaught exception.
[jira] [Updated] (YARN-5185) StageAllocaterGreedyRLE: Fix NPE in corner case
[ https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-5185: Fix Version/s: 2.8.0 > StageAllocaterGreedyRLE: Fix NPE in corner case > > > Key: YARN-5185 > URL: https://issues.apache.org/jira/browse/YARN-5185 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager > Reporter: Carlo Curino > Assignee: Carlo Curino > Fix For: 2.8.0 > > Attachments: YARN-5185.1.patch > > > If the plan has only one interval and the reservation overlaps it exactly, partialMap.higherKey() will return null, which we should guard against.
[jira] [Updated] (YARN-5185) StageAllocaterGreedyRLE: Fix NPE in corner case
[ https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-5185: Summary: StageAllocaterGreedyRLE: Fix NPE in corner case (was: StageAllocaterGreedyRLE: NPE in corner case) > StageAllocaterGreedyRLE: Fix NPE in corner case > > > Key: YARN-5185 > URL: https://issues.apache.org/jira/browse/YARN-5185 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager > Reporter: Carlo Curino > Assignee: Carlo Curino > Attachments: YARN-5185.1.patch > > > If the plan has only one interval and the reservation overlaps it exactly, partialMap.higherKey() will return null, which we should guard against.
[jira] [Commented] (YARN-3426) Add jdiff support to YARN
[ https://issues.apache.org/jira/browse/YARN-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317759#comment-15317759 ] Wangda Tan commented on YARN-3426: The latest patch looks good; I tried this on trunk and branch-2.8, both work, and the jdiff site file can be generated. The Findbugs warnings should not be related, since this is a doc-only change; I just triggered a Jenkins run to verify. Will commit once Jenkins gets back. > Add jdiff support to YARN > - > > Key: YARN-3426 > URL: https://issues.apache.org/jira/browse/YARN-3426 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Li Lu > Assignee: Li Lu > Priority: Blocker > Labels: BB2015-05-TBR > Attachments: YARN-3426-040615-1.patch, YARN-3426-040615.patch, YARN-3426-040715.patch, YARN-3426-040815.patch, YARN-3426-05-12-2016.txt > > > Maybe we'd like to extend our current jdiff tool for hadoop-common and hdfs to YARN as well.
[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats
[ https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317740#comment-15317740 ] Naganarasimha G R commented on YARN-4308: Hi [~sunilg], Overall the changes look good, but we could have avoided {{MockCPUResourceCalculatorProcessTree}} and just used {{Mockito.when}} along with {{Mockito.thenReturn(value1, value2, ...)}}. Thoughts? > ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats > > > Key: YARN-4308 > URL: https://issues.apache.org/jira/browse/YARN-4308 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager > Affects Versions: 2.7.1 > Reporter: Sunil G > Assignee: Sunil G > Attachments: 0001-YARN-4308.patch, 0002-YARN-4308.patch, 0003-YARN-4308.patch, 0004-YARN-4308.patch, 0005-YARN-4308.patch, 0006-YARN-4308.patch, 0007-YARN-4308.patch, 0008-YARN-4308.patch, 0009-YARN-4308.patch > > > NodeManager reports ContainerAggregated CPU resource utilization as a negative value in the first few heartbeat cycles. I added a new debug print and received the values below from heartbeats. > {noformat} > INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 > INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource Utilization : CpuTrackerUsagePercent : 198.94598 > {noformat} > It's better to send 0 as the CPU usage rather than a negative value in heartbeats, even though it happens only in the first few heartbeats.
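Two ideas are in play in this thread: Mockito's consecutive stubbing (when(x).thenReturn(v1, v2) yields v1 on the first call and v2 afterwards) and the fix itself of clamping a negative first-heartbeat reading to 0. The sketch below hand-rolls the consecutive-value stub in plain Java so it runs without a Mockito dependency; the class and method names are illustrative, not the actual YARN code.

```java
import java.util.Arrays;
import java.util.Iterator;

public class CpuClampSketch {
    // Stand-in for a CPU tracker that returns a fixed sequence of readings,
    // mimicking the behavior of Mockito's when(...).thenReturn(v1, v2, ...).
    static class StubCpuTracker {
        private final Iterator<Float> readings;
        StubCpuTracker(Float... values) {
            this.readings = Arrays.asList(values).iterator();
        }
        float getCpuUsagePercent() {
            return readings.next();
        }
    }

    // The fix under discussion: report 0 instead of a negative usage value.
    static float clampedUsage(StubCpuTracker tracker) {
        return Math.max(0f, tracker.getCpuUsagePercent());
    }

    public static void main(String[] args) {
        // First heartbeat reads -1.0 (tracker not warmed up yet); later
        // heartbeats return real values, as in the log excerpt above.
        StubCpuTracker tracker = new StubCpuTracker(-1.0f, 198.9f);
        System.out.println(clampedUsage(tracker));
        System.out.println(clampedUsage(tracker));
    }
}
```

With Mockito the stub collapses to one line, which is the reviewer's point: a dedicated mock subclass is unnecessary when consecutive stubbing expresses the same sequence.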
[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317718#comment-15317718 ] Hadoop QA commented on YARN-5124:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 23 new + 157 unchanged - 32 fixed = 180 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 5 new + 156 unchanged - 0 fixed = 161 total (was 156) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 4s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 6s {color} | {color:black} {color} |

|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| | org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl$ResourceReverseMemoryThenCpuComparator implements Comparator but not Serializable At AMRMClientImpl.java:Serializable At AMRMClientImpl.java:[lines 123-142] |
| Failed junit tests | hadoop.yarn.client.TestGetGroups |
| | hadoop.yarn.client.api.impl.TestAMRMProxy |
| Timed out junit tests | org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling |
| | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
| | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
| | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
| | org.apache.hadoop.yarn.client.api.impl.TestNMClient |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808520/YARN-5124.010.patch |
| JIRA Issue | YARN-5124
[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317657#comment-15317657 ] Hadoop QA commented on YARN-4525:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 53s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 48s {color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| | hadoop.yarn.server.resourcemanager.TestRMAdminService |
| | hadoop.yarn.server.resourcemanager.TestAMAuthorization |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808522/YARN-4525.2.patch |
| JIRA Issue | YARN-4525 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 4fb0bc44d7b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6de9213 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/11864/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/11864/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11864/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/11864/console |
| Powered by |
[jira] [Created] (YARN-5205) yarn logs for live applications does not provide log files which may have already been aggregated
Siddharth Seth created YARN-5205: Summary: yarn logs for live applications does not provide log files which may have already been aggregated Key: YARN-5205 URL: https://issues.apache.org/jira/browse/YARN-5205 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.9.0 Reporter: Siddharth Seth With periodic aggregation enabled, logs which have been partially aggregated are not always displayed by the yarn logs command. If a file exists in the log dir for a container, all previously aggregated files with the same name, along with the current file, will be part of the yarn logs output. Files which have been previously aggregated, but for which a file with the same name does not exist in the container log dir, do not show up in the output. After the app completes, all logs are available. cc [~xgong]
[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces
[ https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317605#comment-15317605 ] Hadoop QA commented on YARN-1942:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 36 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 2s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 11s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 1s {color} | {color:red} root: The patch generated 111 new + 2943 unchanged - 33 fixed = 3054 total (was 2976) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 12 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 2 new + 4579 unchanged - 0 fixed = 4581 total (was 4579) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 1s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 22s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 50s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 16s {color} | {color:red} hadoop-yarn-server-timeline-pluginstorage in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 14s {color} | {color:red}
[jira] [Updated] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API
[ https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5203: - Assignee: Ellen Hui > Return ResourceRequest JAXB object in ResourceManager Cluster Applications > REST API > --- > > Key: YARN-5203 > URL: https://issues.apache.org/jira/browse/YARN-5203 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Subru Krishnan >Assignee: Ellen Hui > > The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} > as String rather than a JAXB object. This prevents downstream tools like > Federation Router (YARN-3659) that depend on the REST API to unmarshall the > {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version > of the {{ResourceRequest}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
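The motivation above can be sketched outside YARN: when a REST payload carries the request as one pre-rendered String, a downstream tool that unmarshalls the parent object gets opaque text it must scrape by hand, whereas a typed child element deserializes into fields it can read directly. The class and field names below are hypothetical stand-ins, not the actual AppInfo/ResourceRequest shapes:

```java
public class TypedVsStringPayload {

    // Hypothetical typed request, standing in for a JAXB ResourceRequest bean.
    static class ResourceRequestInfo {
        final int priority;
        final long memoryMb;
        ResourceRequestInfo(int priority, long memoryMb) {
            this.priority = priority;
            this.memoryMb = memoryMb;
        }
    }

    // Today's situation: the request arrives pre-rendered as text, so a
    // downstream tool has to parse fields back out of the string (fragile).
    static long memoryFromString(String rendered) {
        int start = rendered.indexOf("memory:") + "memory:".length();
        int end = rendered.indexOf('}', start);
        return Long.parseLong(rendered.substring(start, end).trim());
    }

    public static void main(String[] args) {
        ResourceRequestInfo typed = new ResourceRequestInfo(0, 1024);
        String rendered = "{priority: 0, memory: 1024}";

        // String form: recoverable only via string surgery...
        long scraped = memoryFromString(rendered);
        // ...typed form: a direct field access after unmarshalling.
        long direct = typed.memoryMb;
        System.out.println(scraped == direct);
    }
}
```

With a JAXB object in the payload, the parsing step disappears entirely; the framework populates the typed fields during unmarshalling.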
[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM
[ https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317600#comment-15317600 ] Hadoop QA commented on YARN-5176: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 152 unchanged - 4 fixed = 153 total (was 156) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 53s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 49s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808516/YARN-5176.002.patch | | JIRA Issue | YARN-5176 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0f80fd91725c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6de9213 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/11862/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/11862/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/11862/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11862/testReport/ | | modules | C:
[jira] [Updated] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-4525: --- Attachment: YARN-4525.2.patch > Bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ishai Menache >Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the > RLESparseResourceAllocation object is a result of a merge operation, the > underlying map is a "view" within some range. If 'end' is outside that > range, headMap(..) throws an uncaught exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317590#comment-15317590 ] Carlo Curino commented on YARN-4525: fixed checkstyle... > Bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ishai Menache >Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the > RLESparseResourceAllocation object is a result of a merge operation, the > underlying map is a "view" within some range. If 'end' is outside that > range, headMap(..) throws an uncaught exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
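The corner case described above can be reproduced with a plain TreeMap: a sub-range view throws IllegalArgumentException from headMap when the requested bound lies outside the view's backing range. This is a minimal sketch of the symptom and one defensive fix (clamping), not the actual RLESparseResourceAllocation code:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class HeadMapViewSketch {

    // Builds a sub-range "view" of a larger map, mimicking the restricted-range
    // map an RLESparseResourceAllocation holds after a merge operation.
    static NavigableMap<Long, Integer> rangeView(long from, long to) {
        NavigableMap<Long, Integer> full = new TreeMap<>();
        for (long t = 0; t <= 100; t += 10) {
            full.put(t, 1);
        }
        return full.subMap(from, true, to, false);
    }

    // headMap on a restricted-range view throws IllegalArgumentException
    // when 'end' is outside the bounds of the view.
    static boolean headMapThrowsOutsideView() {
        NavigableMap<Long, Integer> view = rangeView(20L, 60L);
        try {
            view.headMap(80L, false); // 80 is beyond the [20, 60) view
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    // A defensive variant: clamp 'end' to the view's last key first.
    static int headMapClamped(NavigableMap<Long, Integer> view, long end) {
        long clamped = Math.min(end, view.lastKey());
        return view.headMap(clamped, true).size();
    }

    public static void main(String[] args) {
        System.out.println(headMapThrowsOutsideView() + " "
            + headMapClamped(rangeView(20L, 60L), 80L));
    }
}
```

Per the NavigableMap contract, a view created by subMap rejects bounds outside its range, which is why the exception only surfaces when the object is itself a merge-produced view.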
[jira] [Commented] (YARN-5199) Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs
[ https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317580#comment-15317580 ] Hadoop QA commented on YARN-5199: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s {color} | 
{color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 40s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808508/YARN-5199.2.patch | | JIRA Issue | YARN-5199 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5723687fce9c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6de9213 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/11861/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11861/testReport/ | | modules | C:
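The leak this issue title describes (a reader or FileInputStream opened inside a streaming response and never closed) is conventionally fixed with try-with-resources, which closes the stream on both the normal and the exceptional path. The names below are illustrative, not the actual AHSWebServices/NMWebServices code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamingOutputCloseSketch {

    // Copies log bytes to the response stream; the try-with-resources block
    // guarantees the input stream is closed even if the copy throws mid-way.
    static void writeLogs(InputStream source, OutputStream out) throws IOException {
        try (InputStream in = source) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } // in.close() runs here whether or not an IOException was thrown
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        writeLogs(new ByteArrayInputStream("log line\n".getBytes()), sink);
        System.out.println(sink.size());
    }
}
```

Without the try-with-resources (or an equivalent finally block), an exception during the copy leaks the underlying file descriptor, which is exactly the failure mode a long-running web service cannot afford.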
[jira] [Comment Edited] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317575#comment-15317575 ] Arun Suresh edited comment on YARN-5124 at 6/7/16 12:48 AM: Uploading patch addressing most of [~curino]'s feedback: bq. Can we wrap this datastructure in some object? Map{} class ExecutionTypeMap extends HashMap {} class LocationMap extends HashMap {} class RemoteRequestsTable extends HashMap {} {noformat} # The remoteRequestsTable then looks like this: {noformat} final RemoteRequestsTable remoteRequestsTable = new RemoteRequestsTable(); {noformat} bq. Do you need the new constructor AMRMClientImpl(ApplicationMasterProtocol protocol)? Can't you use mockito for the conf in tests and pass the ApplicationMasterProtocol that way? I can't use mockito, since I actually have a real AMRMClientImpl object. This is a functional test that uses an actual MiniYARNCluster. So I don't think there is any other way to inject a custom ApplicationMasterProtocol. But I did add a {{@VisibleForTesting}} as per [~kasha]'s suggestion. bq. The ordering of where ExecutionType goes is not consistent, in ResourceRequest is at the end, while in most other places is earlier. Strong typing makes this not a big deal, just looks cleaner if consistent. The reason I had to put ExecutionType at the end for {{ContainerRequest}} and {{ResourceRequest}} was that otherwise I would have to change many of the methods marked {{@Stable}} in previous releases. In the AMRMClientImpl, it appears before the {{Resource}} in the composite key for reasons I mentioned [here|https://issues.apache.org/jira/browse/YARN-5124?focusedCommentId=15303099=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303099] but that's an implementation detail and hidden from the user. Hope this made sense. was (Author: asuresh): Uploading patch addressing most of [~curino]'s feedback: bq. Can we wrap this datastructure in some object? 
Map {} class ExecutionTypeMap extends HashMap {} class LocationMap extends HashMap {} class RemoteRequestsTable extends HashMap {} {noformat} # The remoteRequestsTable then looks like this: {noformat} final RemoteRequestsTable remoteRequestsTable = new RemoteRequestsTable(); {noformat} bq. Do you need the new constructor AMRMClientImpl(ApplicationMasterProtocol protocol)? Can't you use mockito for the conf in tests and pass the ApplicationMasterProtocol that way? I can't use mockito, since I actually have a real AMRMClientImpl object. This is a functional test that uses an actual MiniYARNCluster. So I don't think there is any other way to inject a custom ApplicationMasterProtocol. But I did add a {{@VisibleForTesting}} as per [~kasha]'s suggestion. bq. The ordering of where ExecutionType goes is not consistent, in ResourceRequest is at the end, while in most other places is earlier. Strong typing makes this not a big deal, just looks cleaner if consistent. The reason I had to put ExecutionType at the end for {{ContainerRequest}} and {{ResourceRequest}} was that otherwise I would have to change many of the methods marked {{@Stable}} in previous releases. In the AMRMClientImpl, it appears before the {{Resource}} in the composite key for reasons I mentioned [here|https://issues.apache.org/jira/browse/YARN-5124?focusedCommentId=15303099=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303099] but that's an implementation detail and hidden from the user. Hope this made sense. > Modify AMRMClient to set the ExecutionType in the ResourceRequest > - > > Key: YARN-5124 >
[jira] [Updated] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-5124: -- Attachment: YARN-5124.010.patch Uploading patch addressing most of [~curino]'s feedback: bq. Can we wrap this datastructure in some object? Map{} class ExecutionTypeMap extends HashMap {} class LocationMap extends HashMap {} class RemoteRequestsTable extends HashMap {} {noformat} # The remoteRequestsTable then looks like this: {noformat} final RemoteRequestsTable remoteRequestsTable = new RemoteRequestsTable(); {noformat} bq. Do you need the new constructor AMRMClientImpl(ApplicationMasterProtocol protocol)? Can't you use mockito for the conf in tests and pass the ApplicationMasterProtocol that way? I can't use mockito, since I actually have a real AMRMClientImpl object. This is a functional test that uses an actual MiniYARNCluster. So I don't think there is any other way to inject a custom ApplicationMasterProtocol. But I did add a {{@VisibleForTesting}} as per [~kasha]'s suggestion. bq. The ordering of where ExecutionType goes is not consistent, in ResourceRequest is at the end, while in most other places is earlier. Strong typing makes this not a big deal, just looks cleaner if consistent. The reason I had to put ExecutionType at the end for {{ContainerRequest}} and {{ResourceRequest}} was that otherwise I would have to change many of the methods marked {{@Stable}} in previous releases. In the AMRMClientImpl, it appears before the {{Resource}} in the composite key for reasons I mentioned [here|https://issues.apache.org/jira/browse/YARN-5124?focusedCommentId=15303099=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303099] but that's an implementation detail and hidden from the user. Hope this made sense. 
> Modify AMRMClient to set the ExecutionType in the ResourceRequest > - > > Key: YARN-5124 > URL: https://issues.apache.org/jira/browse/YARN-5124 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5124.001.patch, YARN-5124.002.patch, > YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, > YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, > YARN-5124.010.patch, YARN-5124_YARN-5180_combined.007.patch, > YARN-5124_YARN-5180_combined.008.patch > > > Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} > in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} > that is sent to the RM -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
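The comment above describes wrapping the AMRMClient's nested request table in named map subclasses, but the mail archive stripped the generic type parameters from the quoted snippet (the bare `Map{}` and `HashMap {}`). A plausible reconstruction of the wrapper idea, with class names taken from the comment and the type parameters assumed (simplified to strings/ints, not the real Priority/ExecutionType/Resource keys):

```java
import java.util.HashMap;

public class RemoteRequestsTableSketch {

    // Leaf value; stands in for the AMRMClient's per-request bookkeeping.
    static class ResourceRequestInfo {
        int numContainers;
    }

    // Wrapper classes named as in the quoted comment; the generic parameters
    // here are assumptions, since the archive stripped the originals.
    static class ExecutionTypeMap extends HashMap<String, ResourceRequestInfo> {}
    static class LocationMap extends HashMap<String, ExecutionTypeMap> {}
    static class RemoteRequestsTable extends HashMap<Integer, LocationMap> {}

    // Registers one container request under priority -> location -> execution
    // type, creating intermediate levels on demand; returns the running count.
    static int addRequest(RemoteRequestsTable table, int priority,
                          String location, String execType) {
        ResourceRequestInfo info = table
            .computeIfAbsent(priority, p -> new LocationMap())
            .computeIfAbsent(location, l -> new ExecutionTypeMap())
            .computeIfAbsent(execType, e -> new ResourceRequestInfo());
        return ++info.numContainers;
    }

    public static void main(String[] args) {
        final RemoteRequestsTable remoteRequestsTable = new RemoteRequestsTable();
        addRequest(remoteRequestsTable, 1, "*", "GUARANTEED");
        System.out.println(addRequest(remoteRequestsTable, 1, "*", "GUARANTEED"));
    }
}
```

The named subclasses buy readability at each nesting level without changing behavior; callers see `RemoteRequestsTable` instead of a four-deep generic signature.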
[jira] [Updated] (YARN-5176) More test cases for queuing of containers at the NM
[ https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-5176: - Attachment: YARN-5176.002.patch Attaching new patch to fix checkstyle issues: # Added javadoc in the {{TestQueuingContainersManager}}. # Removed superfluous import. # Did not fix the issue with having a method with more than 7 parameters. That method was already there (I just moved it to another class during refactoring). > More test cases for queuing of containers at the NM > --- > > Key: YARN-5176 > URL: https://issues.apache.org/jira/browse/YARN-5176 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-5176.001.patch, YARN-5176.002.patch > > > Extending {{TestQueuingContainerManagerImpl}} to include more test cases for > the queuing of containers at the NM. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM
[ https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317551#comment-15317551 ] Hadoop QA commented on YARN-5176: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 9s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 3 new + 152 unchanged - 4 fixed = 155 total (was 156) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 47s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 29s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808507/YARN-5176.001.patch | | JIRA Issue | YARN-5176 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 8363279bb94d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6de9213 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/11860/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11860/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/11860/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > More test cases for queuing of containers at the NM > --- > > Key: YARN-5176 > URL:
[jira] [Commented] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices
[ https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317538#comment-15317538 ] Hadoop QA commented on YARN-5191: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 279 unchanged - 2 fixed = 279 total (was 281) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 16s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 8s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 39s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808505/YARN-5191.5.patch | | JIRA Issue | YARN-5191 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4ad70b005138 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git
[jira] [Commented] (YARN-5199) Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs
[ https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317536#comment-15317536 ] Xuan Gong commented on YARN-5199: - [~varun_saxena] Ah, That is correct. Attached a new patch to address it > Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream > in NMWebServices#getLogs > > > Key: YARN-5199 > URL: https://issues.apache.org/jira/browse/YARN-5199 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5199.1.patch, YARN-5199.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5199) Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs
[ https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5199: Attachment: YARN-5199.2.patch > Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream > in NMWebServices#getLogs > > > Key: YARN-5199 > URL: https://issues.apache.org/jira/browse/YARN-5199 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5199.1.patch, YARN-5199.2.patch > >
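The leak YARN-5199 addresses is a reader that is never closed once the response body has been written (or when writing fails). A minimal sketch of the fix pattern; the `LogReader` interface here is a stand-in, not Hadoop's actual aggregated-log reader API:

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;

public class StreamingClose {
  // Stand-in for the aggregated-log reader; the real Hadoop class differs.
  interface LogReader extends Closeable {
    int read(byte[] buf) throws IOException;
  }

  // Copy everything to the response stream, closing the reader even when
  // write() throws -- the missing finally block is the kind of leak the
  // patch fixes. The output stream is left open: it belongs to the caller.
  static void copyLogs(LogReader reader, OutputStream os) throws IOException {
    try {
      byte[] buf = new byte[65536];
      int n;
      while ((n = reader.read(buf)) != -1) {
        os.write(buf, 0, n);
      }
    } finally {
      reader.close();
    }
  }
}
```

In a JAX-RS `StreamingOutput#write` body the same try/finally shape applies, since try-with-resources is only available when the resource is created inside the method.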
[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing
[ https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317530#comment-15317530 ] Wangda Tan commented on YARN-4837: -- +1 to latest patch, findbugs warning is not related. Will commit this patch in 24h if no objections. > User facing aspects of 'AM blacklisting' feature need fixing > > > Key: YARN-4837 > URL: https://issues.apache.org/jira/browse/YARN-4837 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli >Priority: Critical > Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, > YARN-4837-20160520.txt, YARN-4837-20160527.txt, YARN-4837-20160604.txt > > > Was reviewing the user-facing aspects that we are releasing as part of 2.8.0. > Looking at the 'AM blacklisting feature', I see several things to be fixed > before we release it in 2.8.0.
[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node
[ https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317528#comment-15317528 ] Inigo Goiri commented on YARN-5171: --- Reuse the test for distributed scheduling. > Extend DistributedSchedulerProtocol to notify RM of containers allocated by > the Node > > > Key: YARN-5171 > URL: https://issues.apache.org/jira/browse/YARN-5171 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Inigo Goiri > Attachments: YARN-5171.000.patch, YARN-5171.001.patch, > YARN-5171.002.patch, YARN-5171.003.patch, YARN-5171.004.patch, > YARN-5171.005.patch > > > Currently, the RM does not know about Containers allocated by the > OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the > Distributed Scheduler request interceptor and the protocol to notify the RM > of new containers as and when they are allocated at the NM. The > {{RMContainer}} should also be extended to expose the {{ExecutionType}} of > the container.
[jira] [Created] (YARN-5204) Properly report status of killed/stopped queued containers
Konstantinos Karanasos created YARN-5204: Summary: Properly report status of killed/stopped queued containers Key: YARN-5204 URL: https://issues.apache.org/jira/browse/YARN-5204 Project: Hadoop YARN Issue Type: Sub-task Reporter: Konstantinos Karanasos When a queued container gets killed or stopped, we need to report its status in the {{getContainerStatusInternal}} method of the {{QueuingContainerManagerImpl}}.
[jira] [Assigned] (YARN-5204) Properly report status of killed/stopped queued containers
[ https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos reassigned YARN-5204: Assignee: Konstantinos Karanasos > Properly report status of killed/stopped queued containers > -- > > Key: YARN-5204 > URL: https://issues.apache.org/jira/browse/YARN-5204 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > > When a queued container gets killed or stopped, we need to report its status > in the {{getContainerStatusInternal}} method of the > {{QueuingContainerManagerImpl}}.
[jira] [Updated] (YARN-5176) More test cases for queuing of containers at the NM
[ https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-5176: - Attachment: YARN-5176.001.patch Adding patch with additional test cases. > More test cases for queuing of containers at the NM > --- > > Key: YARN-5176 > URL: https://issues.apache.org/jira/browse/YARN-5176 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-5176.001.patch > > > Extending {{TestQueuingContainerManagerImpl}} to include more test cases for > the queuing of containers at the NM.
[jira] [Created] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API
Subru Krishnan created YARN-5203: Summary: Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API Key: YARN-5203 URL: https://issues.apache.org/jira/browse/YARN-5203 Project: Hadoop YARN Issue Type: Bug Reporter: Subru Krishnan The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} as a String rather than a JAXB object. This prevents downstream tools like Federation Router (YARN-3659) that depend on the REST API from unmarshalling the {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version of the {{ResourceRequest}}.
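The gap YARN-5203 describes is a preformatted String that clients must parse versus a structured payload they can read field by field. A dependency-free sketch of the structured shape; the field names are illustrative (the real `ResourceRequest` carries more fields), and the actual patch would add JAXB annotations such as `@XmlRootElement`/`@XmlElement` so JAX-RS can marshal it:

```java
public class ResourceRequestInfo {
  // Illustrative fields only; in the real DTO these would carry JAXB
  // annotations instead of being flattened into a single String.
  private final int priority;
  private final String resourceName;
  private final long memoryMb;
  private final int vcores;
  private final int numContainers;

  ResourceRequestInfo(int priority, String resourceName, long memoryMb,
      int vcores, int numContainers) {
    this.priority = priority;
    this.resourceName = resourceName;
    this.memoryMb = memoryMb;
    this.vcores = vcores;
    this.numContainers = numContainers;
  }

  public int getPriority() { return priority; }
  public String getResourceName() { return resourceName; }
  public long getMemoryMb() { return memoryMb; }
  public int getVcores() { return vcores; }
  public int getNumContainers() { return numContainers; }
}
```

A consumer such as the Federation Router could then unmarshal the response into typed fields instead of scraping `ResourceRequest.toString()` output.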
[jira] [Updated] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices
[ https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5191: Attachment: YARN-5191.5.patch > Rename the “download=true” option for getLogs in NMWebServices and > AHSWebServices > - > > Key: YARN-5191 > URL: https://issues.apache.org/jira/browse/YARN-5191 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5191.1.patch, YARN-5191.2.patch, YARN-5191.3.patch, > YARN-5191.4.patch, YARN-5191.5.patch > > > Rename the “download=true” option to instead be something like > “format=octet-stream”, so that we are explicit
[jira] [Commented] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices
[ https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317485#comment-15317485 ] Xuan Gong commented on YARN-5191: - Thanks for the comments. [~vinodkv] bq. Refactor the common code for handling of format / contentType between AHSWebServices and NMWebServices - getting default format value, validation for invalid contentType etc. I would prefer to do all the refactor work together in https://issues.apache.org/jira/browse/YARN-4993 New patch has addressed other comments. > Rename the “download=true” option for getLogs in NMWebServices and > AHSWebServices > - > > Key: YARN-5191 > URL: https://issues.apache.org/jira/browse/YARN-5191 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5191.1.patch, YARN-5191.2.patch, YARN-5191.3.patch, > YARN-5191.4.patch, YARN-5191.5.patch > > > Rename the “download=true” option to instead be something like > “format=octet-stream”, so that we are explicit
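The rename discussed in YARN-5191 turns a boolean `download=true` flag into an explicit `format` parameter that maps to a response Content-Type. A hypothetical sketch of that mapping, including the default and invalid-value handling the review comments mention; the accepted values and defaults in the actual patch may differ:

```java
public class LogFormatParam {
  // Hypothetical mapping from a "format" query parameter to a response
  // Content-Type. "octet-stream" replaces the old download=true behavior.
  static String contentTypeFor(String format) {
    if (format == null || format.isEmpty()) {
      return "text/plain";  // assumed default when the parameter is absent
    }
    switch (format.toLowerCase()) {
      case "octet-stream":
        return "application/octet-stream";  // force browser download
      case "text":
        return "text/plain";                // render inline
      default:
        // invalid contentType -> reject rather than silently defaulting
        throw new IllegalArgumentException("unsupported format: " + format);
    }
  }
}
```

Centralizing this in one helper is also the kind of shared format/contentType handling the refactor deferred to YARN-4993 would cover.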
[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE
[ https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317464#comment-15317464 ] Hadoop QA commented on YARN-5164: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 18 unchanged - 1 fixed = 19 total (was 19) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 11s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 32s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Dead store to p in org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy.validate(Plan, ReservationAllocation) At CapacityOverTimePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy.validate(Plan, ReservationAllocation) At CapacityOverTimePolicy.java:[line 170] | | Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization | | | hadoop.yarn.server.resourcemanager.reservation.TestFairSchedulerPlanFollower | | | hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower | | | hadoop.yarn.server.resourcemanager.reservation.planning.TestGreedyReservationAgent | | | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation | | | hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA | | | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808494/YARN-5164.6.patch | | JIRA Issue | YARN-5164 | | Optional
[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats
[ https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317461#comment-15317461 ] Daniel Templeton commented on YARN-4308: Thanks, [~sunilg]! Tests look good. It would be nice to add messages to your asserts. It would also be nice to add javadocs to the constructor and non-overridden methods in {{MockCPUResourceCalculatorProcessTree}}. > ContainersAggregated CPU resource utilization reports negative usage in first > few heartbeats > > > Key: YARN-4308 > URL: https://issues.apache.org/jira/browse/YARN-4308 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.7.1 >Reporter: Sunil G >Assignee: Sunil G > Attachments: 0001-YARN-4308.patch, 0002-YARN-4308.patch, > 0003-YARN-4308.patch, 0004-YARN-4308.patch, 0005-YARN-4308.patch, > 0006-YARN-4308.patch, 0007-YARN-4308.patch, 0008-YARN-4308.patch, > 0009-YARN-4308.patch > > > NodeManager reports ContainerAggregated CPU resource utilization as a negative > value in the first few heartbeat cycles. I added a new debug print and received > the below values from heartbeats. > {noformat} > INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 > INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource > Utilization : CpuTrackerUsagePercent : 198.94598 > {noformat} > It's better to send 0 as CPU usage rather than negative values in > heartbeats, even though this happens only in the first few heartbeats.
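The fix the description above proposes is a simple clamp: the process-tree tracker returns a negative value (e.g. -1.0) before it has enough samples to compute CPU usage, and that sentinel should be reported as 0 instead of leaking into heartbeats. A minimal sketch of that sanitization (helper name is illustrative, not Hadoop's actual API):

```java
public class CpuUsageSanitizer {
  // The CPU tracker conventionally reports a negative percentage (such as
  // -1.0) during the first few heartbeats, before two samples exist to
  // compute a delta. Report 0 instead of passing the negative value on.
  static float sanitizeCpuUsagePercent(float rawPercent) {
    return rawPercent < 0 ? 0f : rawPercent;
  }
}
```

Legitimate readings, including multi-core values above 100%, pass through unchanged.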
[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317407#comment-15317407 ] Hadoop QA commented on YARN-4525: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 45s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 4 new + 21 unchanged - 0 fixed = 25 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 2s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 45s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808480/YARN-4525.1.patch | | JIRA Issue | YARN-4525 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a75529893ec1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4a1cedc | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/11857/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/11857/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/11857/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11857/testReport/ | | modules | C:
[jira] [Assigned] (YARN-5070) upgrade HBase version for first merge
[ https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C reassigned YARN-5070: Assignee: Vrushali C (was: Sangjin Lee) > upgrade HBase version for first merge > - > > Key: YARN-5070 > URL: https://issues.apache.org/jira/browse/YARN-5070 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Vrushali C >Priority: Critical > Labels: yarn-2928-1st-milestone > > Currently we set the HBase version for the timeline service storage to 1.0.1. > This is a fairly old version, and there are reasons to upgrade to a newer > version. We should upgrade it.
[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service
[ https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317385#comment-15317385 ] Hadoop QA commented on YARN-679: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 53 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 44s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 13s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 13s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 32s {color} | {color:red} root: The patch generated 181 new + 145 unchanged - 9 fixed = 326 total (was 154) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 44s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 108 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s {color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 12s {color} | {color:red} hadoop-common-project_hadoop-common generated 9 new + 1 unchanged - 0 fixed = 10 total (was 1) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 37s {color} | {color:red} hadoop-yarn-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 56s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 20s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Issue | YARN-679 | | GITHUB PR | https://github.com/apache/hadoop/pull/68 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c75ae23252b2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE
[ https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317371#comment-15317371 ] Carlo Curino commented on YARN-5164: Checkstyle and javadocs + creating patch after YARN-5165 is committed. [~chris.douglas] can you take a look, since you are already familiar with the intended behavior of the code? Thanks. > CapacityOvertimePolicy does not take advantaged of plan RLE > --- > > Key: YARN-5164 > URL: https://issues.apache.org/jira/browse/YARN-5164 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, > YARN-5164-inclusive.5.patch, YARN-5164.1.patch, YARN-5164.2.patch, > YARN-5164.5.patch, YARN-5164.6.patch > > > As a consequence, small time granularities (e.g., 1 sec) and long time horizons > for a reservation (e.g., months) run rather slowly (10 sec). > The proposed resolution is to switch to interval math in checking, similar to how > YARN-4359 does for agents.
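The speedup YARN-5164 proposes comes from walking the run-length-encoded change points of the plan instead of evaluating every time step in the window. A toy sketch of that interval math, using a `TreeMap` of change points as the RLE representation; this is an assumed simplification, not the actual `RLESparseResourceAllocation` API:

```java
import java.util.Map;
import java.util.TreeMap;

public class RleWindowSum {
  // rle maps a time instant to the capacity that holds from that instant
  // until the next change point. Integrating capacity over [start, end)
  // visits only the change points: O(#changes) work instead of the
  // O(end - start) single-step scan that made long horizons slow.
  static long integrate(TreeMap<Long, Long> rle, long start, long end) {
    Map.Entry<Long, Long> at = rle.floorEntry(start);
    long t = start;
    long v = (at == null) ? 0 : at.getValue();  // capacity in effect at start
    long sum = 0;
    for (Map.Entry<Long, Long> e
        : rle.subMap(start, false, end, false).entrySet()) {
      sum += v * (e.getKey() - t);  // constant run up to this change point
      t = e.getKey();
      v = e.getValue();
    }
    return sum + v * (end - t);     // tail run up to the window end
  }
}
```

With a one-second granularity and a months-long reservation, the RLE typically has a handful of change points, which is where the claimed 10-second checks collapse to near-constant work.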
[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms
[ https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317372#comment-15317372 ] Jonathan Maron commented on YARN-4757: -- {quote} Do you mean this flag will be used to enable/disable dns functionality if the DNS server is hosted in RM ? {quote} Oh - good point. At one point I was looking at embedding the server, but at this point that is not the case, so the flag is probably unnecessary. I'll remove it. {quote} I don't quite know what the SimpleResolver can do. Does it behave like a normal DNS server which can answer non-YARN queries ? I thought the flow is that if the primary server cannot answer the query, it will be forwarded to yarn dns. Not that yarn dns forwards to the primary server. {quote} The SimpleResolver acts as a DNS client in this instance. I was exploring the idea of allowing the YARN DNS server to serve as a "primary" server by indirectly supporting record retrieval for records outside of the YARN zone. I now feel that is probably very unlikely, so I can remove that feature. {quote} After closer look, IIUC, this patch assumes the last entry of the zk path is container Id only. If it is component name, then the BaseServiceRecordProcessor#getContainerIDName will break. {quote} It is container ID in practice, which may actually conflict with the YARN Registry documentation. I may have mis-spoken when I indicated the component name is included in the path. So my belief is that the current implementation is correct and is correlated to the real-world implementation. 
> [Umbrella] Simplified discovery of services via DNS mechanisms > -- > > Key: YARN-4757 > URL: https://issues.apache.org/jira/browse/YARN-4757 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Vinod Kumar Vavilapalli >Assignee: Jonathan Maron > Attachments: > 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- > Simplified discovery of services via DNS mechanisms.pdf, > YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, > YARN-4757-YARN-4757.003.patch > > > [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track > all related efforts.] > In addition to completing the present story of service-registry (YARN-913), > we also need to simplify the access to the registry entries. The existing > read mechanisms of the YARN Service Registry are currently limited to a > registry specific (java) API and a REST interface. In practice, this makes it > very difficult for wiring up existing clients and services. For e.g, dynamic > configuration of dependent endpoints of a service is not easy to implement > using the present registry-read mechanisms, *without* code-changes to > existing services. > A good solution to this is to expose the registry information through a more > generic and widely used discovery mechanism: DNS. Service Discovery via DNS > uses the well-known DNS interfaces to browse the network for services. > YARN-913 in fact talked about such a DNS based mechanism but left it as a > future task. (Task) Having the registry information exposed via DNS > simplifies the life of services. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE
[ https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5164: --- Attachment: YARN-5164.6.patch > CapacityOvertimePolicy does not take advantaged of plan RLE > --- > > Key: YARN-5164 > URL: https://issues.apache.org/jira/browse/YARN-5164 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, > YARN-5164-inclusive.5.patch, YARN-5164.1.patch, YARN-5164.2.patch, > YARN-5164.5.patch, YARN-5164.6.patch > > > As a consequence small time granularities (e.g., 1 sec) and long time horizon > for a reservation (e.g., months) run rather slow (10 sec). > Proposed resolution is to switch to interval math in checking, similar to how > YARN-4359 does for agents. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5094) some YARN container events have timestamp of -1 in REST output
[ https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5094: -- Labels: (was: yarn-2928-1st-milestone) > some YARN container events have timestamp of -1 in REST output > -- > > Key: YARN-5094 > URL: https://issues.apache.org/jira/browse/YARN-5094 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Li Lu > Attachments: YARN-5094-YARN-2928.001.patch > > > Some events in the YARN container entities have timestamp of -1. The > RM-generated container events have proper timestamps. It appears that it's > the NM-generated events that have -1: YARN_CONTAINER_CREATED, > YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, > YARN_NM_CONTAINER_LOCALIZATION_STARTED. > In the YARN container page, > {noformat} > { > id: "YARN_CONTAINER_CREATED", > timestamp: -1, > info: { } > }, > { > id: "YARN_CONTAINER_FINISHED", > timestamp: -1, > info: { > YARN_CONTAINER_EXIT_STATUS: 0, > YARN_CONTAINER_STATE: "RUNNING", > YARN_CONTAINER_DIAGNOSTICS_INFO: "" > } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", > timestamp: -1, > info: { } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED", > timestamp: -1, > info: { } > } > {noformat} > I think the data itself is OK, but the values are not being populated in the > REST output? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case
[ https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317356#comment-15317356 ] Arun Suresh commented on YARN-5185: --- Makes sense.. +1 > StageAllocaterGreedyRLE: NPE in corner case > > > Key: YARN-5185 > URL: https://issues.apache.org/jira/browse/YARN-5185 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5185.1.patch > > > If the plan has only one interval, and the reservation exactly overlap we > will have a null from partialMap.higherKey() that we should guard against. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
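The null guard described in the issue can be sketched with plain `TreeMap`; this is an illustrative standalone fragment, not the actual StageAllocatorGreedyRLE code, and the names (`upperBound`, `planEnd`) are assumptions:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch of the YARN-5185 corner case: when the plan has a single interval
// that the reservation exactly overlaps, higherKey(end) returns null and
// must not be dereferenced. Fall back to the plan horizon instead.
public class RLEGuardSketch {
    static long upperBound(NavigableMap<Long, Integer> partialMap, long end, long planEnd) {
        Long higher = partialMap.higherKey(end);
        // Guard: no key lies beyond 'end', so use the plan horizon.
        return (higher != null) ? higher : planEnd;
    }

    public static void main(String[] args) {
        NavigableMap<Long, Integer> map = new TreeMap<>();
        map.put(0L, 10);                       // a single interval starting at t=0
        // Reservation exactly overlapping the only interval: no higher key exists.
        assert upperBound(map, 0L, 100L) == 100L;
        map.put(50L, 20);                      // with a later interval, higherKey applies
        assert upperBound(map, 0L, 100L) == 50L;
        System.out.println("ok");
    }
}
```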
[jira] [Commented] (YARN-5071) address HBase compatibility issues with trunk
[ https://issues.apache.org/jira/browse/YARN-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317325#comment-15317325 ] Sangjin Lee commented on YARN-5071: --- After the HBase upgrade (YARN-5070) and the last rebase, we will assess whether we still have an issue. If not, we can either close this issue for now or at least remove the merge blocker label. > address HBase compatibility issues with trunk > - > > Key: YARN-5071 > URL: https://issues.apache.org/jira/browse/YARN-5071 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > Labels: yarn-2928-1st-milestone > > The trunk is now adding or planning to add more and more > backward-incompatible changes. Some examples include > - remove v.1 metrics classes (HADOOP-12504) > - update jersey version (HADOOP-9613) > - target java 8 by default (HADOOP-11858) > This poses big challenges for the timeline service v.2 as we have a > dependency on hbase which depends on an older version of hadoop. > We need to find a way to solve/contain/manage these risks before it is too > late. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1 in REST output
[ https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317326#comment-15317326 ] Sangjin Lee commented on YARN-5094: --- I don't believe this should block the merge. Please let me know if you disagree. > some YARN container events have timestamp of -1 in REST output > -- > > Key: YARN-5094 > URL: https://issues.apache.org/jira/browse/YARN-5094 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Li Lu > Labels: yarn-2928-1st-milestone > Attachments: YARN-5094-YARN-2928.001.patch > > > Some events in the YARN container entities have timestamp of -1. The > RM-generated container events have proper timestamps. It appears that it's > the NM-generated events that have -1: YARN_CONTAINER_CREATED, > YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, > YARN_NM_CONTAINER_LOCALIZATION_STARTED. > In the YARN container page, > {noformat} > { > id: "YARN_CONTAINER_CREATED", > timestamp: -1, > info: { } > }, > { > id: "YARN_CONTAINER_FINISHED", > timestamp: -1, > info: { > YARN_CONTAINER_EXIT_STATUS: 0, > YARN_CONTAINER_STATE: "RUNNING", > YARN_CONTAINER_DIAGNOSTICS_INFO: "" > } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", > timestamp: -1, > info: { } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED", > timestamp: -1, > info: { } > } > {noformat} > I think the data itself is OK, but the values are not being populated in the > REST output? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5167) Escaping occurences of encodedValues
[ https://issues.apache.org/jira/browse/YARN-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317323#comment-15317323 ] Sangjin Lee commented on YARN-5167: --- Thanks [~jrottinghuis] and [~varun_saxena] for your review! > Escaping occurences of encodedValues > > > Key: YARN-5167 > URL: https://issues.apache.org/jira/browse/YARN-5167 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Joep Rottinghuis >Assignee: Sangjin Lee >Priority: Critical > Labels: yarn-2928-1st-milestone > Fix For: YARN-2928 > > Attachments: YARN-5167-YARN-2928.01.patch, > YARN-5167-YARN-2928.02.patch, YARN-5167-YARN-2928.03.patch > > > We had earlier decided to punt on this, but in discussing YARN-5109 we > thought it would be best to just be safe rather than sorry later on. > Encoded sequences can occur in the original string, especially in the case of > "foreign key" if we decide to have lookups. > For example, space is encoded as %2$. > Encoding "String with %2$ in it" would decode to "String with in it". > We thought we should first escape existing occurrences of encoded strings by > prefixing a backslash (even if there is already a backslash that should be > ok). Then we should replace all unencoded strings. > On the way out, we should replace all occurrences of our encoded string to > the original except when it is prefixed by an escape character. Lastly we > should strip off the one additional backslash in front of each remaining > (escaped) sequence. 
> If we add the following entry to TestSeparator#testEncodeDecode() that > demonstrates what this jira should accomplish: > {code} > testEncodeDecode("Double-escape %2$ and %3$ or \\%2$ or \\%3$, nor > %2$ = no problem!", Separator.QUALIFIERS, > Separator.VALUES, Separator.SPACE, Separator.TAB); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
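A minimal sketch of the escape-then-encode scheme described above, assuming a single separator whose encoded form is `%2$`; the real timeline-service `Separator` class handles several separators and differs in detail:

```java
// Hypothetical sketch of YARN-5167's scheme: escape pre-existing encoded
// sequences with a backslash before encoding, and on decode restore only
// unescaped sequences, stripping the escape from the rest.
public class SeparatorEscapeSketch {
    static final String ENCODED = "%2$";   // stand-in for the encoded space
    static final String RAW = " ";

    static String encode(String s) {
        // 1. Escape pre-existing occurrences of the encoded sequence.
        // 2. Then encode the raw separator.
        return s.replace(ENCODED, "\\" + ENCODED).replace(RAW, ENCODED);
    }

    static String decode(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            if (s.startsWith("\\" + ENCODED, i)) {
                out.append(ENCODED);            // escaped: restore literal, drop backslash
                i += 1 + ENCODED.length();
            } else if (s.startsWith(ENCODED, i)) {
                out.append(RAW);                // unescaped: decode back to the separator
                i += ENCODED.length();
            } else {
                out.append(s.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The round trip preserves a literal "%2$" in the original string,
        // which a naive replace-based scheme would turn into a space.
        String original = "String with %2$ in it";
        assert decode(encode(original)).equals(original);
        System.out.println("round trip ok");
    }
}
```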
[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE
[ https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5164: --- Attachment: YARN-5164.5.patch > CapacityOvertimePolicy does not take advantaged of plan RLE > --- > > Key: YARN-5164 > URL: https://issues.apache.org/jira/browse/YARN-5164 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, > YARN-5164-inclusive.5.patch, YARN-5164.1.patch, YARN-5164.2.patch, > YARN-5164.5.patch > > > As a consequence small time granularities (e.g., 1 sec) and long time horizon > for a reservation (e.g., months) run rather slow (10 sec). > Proposed resolution is to switch to interval math in checking, similar to how > YARN-4359 does for agents. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case
[ https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317311#comment-15317311 ] Carlo Curino commented on YARN-5185: [~asuresh], this bug is very similar to what we had in YARN-4525, for which I was able to construct a test. Here it is tricky, as the bug only triggers once the NavigableMap is accessed and restricted multiple times, and the use is fairly deep into the stack of calls we can test. However (answering your question), I saw tests in YARN-5164 triggering this bug. I am ok merging this or not as you prefer. I think the patch is fairly clearly non-harmful and protects against a possible NPE so I would suggest to commit it as is, because the reviewing of YARN-5164 is less trivial, and actually quite delicate. > StageAllocaterGreedyRLE: NPE in corner case > > > Key: YARN-5185 > URL: https://issues.apache.org/jira/browse/YARN-5185 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5185.1.patch > > > If the plan has only one interval, and the reservation exactly overlap we > will have a null from partialMap.higherKey() that we should guard against. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms
[ https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317309#comment-15317309 ] Jian He commented on YARN-4757: --- [~jmaron], thanks for your reply! A few other questions: bq. The idea here was to support a use case in which the YARN DNS server was designated as a resolver for a suite of hosts in the cluster. Do you mean this flag will be used to enable/disable dns functionality if the DNS server is hosted in RM ? bq. In those instances, queries that it itself could not resolve would have to be forwarded to a "primary" DNS server for resolution. I don't quite know what the SimpleResolver can do. Does it behave like a normal DNS server which can answer non-YARN queries ? I thought the flow is that if the primary server cannot answer the query, it will be forwarded to yarn dns. Not that yarn dns forwards to the primary server. bq. I assume the reason is because the ZK path to the node relates the user, application, and component name. After closer look, IIUC, this patch assumes the last entry of the zk path is container Id only. If it is component name, then the BaseServiceRecordProcessor#getContainerIDName will break. > [Umbrella] Simplified discovery of services via DNS mechanisms > -- > > Key: YARN-4757 > URL: https://issues.apache.org/jira/browse/YARN-4757 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Vinod Kumar Vavilapalli >Assignee: Jonathan Maron > Attachments: > 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- > Simplified discovery of services via DNS mechanisms.pdf, > YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, > YARN-4757-YARN-4757.003.patch > > > [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track > all related efforts.] > In addition to completing the present story of service-registry (YARN-913), > we also need to simplify the access to the registry entries. 
The existing > read mechanisms of the YARN Service Registry are currently limited to a > registry specific (java) API and a REST interface. In practice, this makes it > very difficult for wiring up existing clients and services. For e.g, dynamic > configuration of dependent endpoints of a service is not easy to implement > using the present registry-read mechanisms, *without* code-changes to > existing services. > A good solution to this is to expose the registry information through a more > generic and widely used discovery mechanism: DNS. Service Discovery via DNS > uses the well-known DNS interfaces to browse the network for services. > YARN-913 in fact talked about such a DNS based mechanism but left it as a > future task. (Task) Having the registry information exposed via DNS > simplifies the life of services. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces
[ https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-1942: - Attachment: YARN-1942.11.patch Attached ver.11 patch, addressed comments from [~jianhe]. > Many of ConverterUtils methods need to have public interfaces > - > > Key: YARN-1942 > URL: https://issues.apache.org/jira/browse/YARN-1942 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Affects Versions: 2.4.0 >Reporter: Thomas Graves >Assignee: Wangda Tan >Priority: Critical > Attachments: YARN-1942.1.patch, YARN-1942.10.patch, > YARN-1942.11.patch, YARN-1942.2.patch, YARN-1942.3.patch, YARN-1942.4.patch, > YARN-1942.5.patch, YARN-1942.6.patch, YARN-1942.8.patch, YARN-1942.9.patch > > > ConverterUtils has a bunch of functions that are useful to application > masters. It should either be made public or we make some of the utilities > in it public or we provide other external apis for application masters to > use. Note that distributedshell and MR are both using these interfaces. > For instance the main use case I see right now is for getting the application > attempt id within the appmaster: > String containerIdStr = > System.getenv(Environment.CONTAINER_ID.name()); > ConverterUtils.toContainerId > ContainerId containerId = ConverterUtils.toContainerId(containerIdStr); > ApplicationAttemptId applicationAttemptId = > containerId.getApplicationAttemptId(); > I don't see any other way for the application master to get this information. > If there is please let me know. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
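For illustration, the container-ID string layout that `ConverterUtils.toContainerId` parses can be sketched standalone; this hypothetical parser assumes the classic `container_<clusterTimestamp>_<appId>_<attemptId>_<containerId>` format, is not the Hadoop implementation, and ignores newer epoch-bearing IDs:

```java
// Standalone sketch of extracting the application attempt number from a
// container-ID string, the use case the comment above describes for AMs.
// In real code, use ConverterUtils.toContainerId (or ContainerId.fromString
// in later releases) and ContainerId#getApplicationAttemptId instead.
public class ContainerIdSketch {
    static int attemptId(String containerIdStr) {
        String[] parts = containerIdStr.split("_");
        if (parts.length < 5 || !"container".equals(parts[0])) {
            throw new IllegalArgumentException("Invalid container ID: " + containerIdStr);
        }
        return Integer.parseInt(parts[3]);  // the application attempt number
    }

    public static void main(String[] args) {
        // The AM would read this string from the CONTAINER_ID environment variable.
        assert attemptId("container_1465329182029_0001_01_000002") == 1;
        System.out.println("ok");
    }
}
```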
[jira] [Updated] (YARN-5202) Dynamic Overcommit of Node Resources - POC
[ https://issues.apache.org/jira/browse/YARN-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Roberts updated YARN-5202: - Attachment: YARN-5202.patch Originally branched from commit: 42f90ab885d9693fcc1e52f9637f7de410ae > Dynamic Overcommit of Node Resources - POC > -- > > Key: YARN-5202 > URL: https://issues.apache.org/jira/browse/YARN-5202 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager, resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Nathan Roberts >Assignee: Nathan Roberts > Attachments: YARN-5202.patch > > > This Jira is to present a proof-of-concept implementation (collaboration > between [~jlowe] and myself) of a dynamic over-commit implementation in YARN. > The type of over-commit implemented in this jira is similar to but not as > full-featured as what's being implemented via YARN-1011. YARN-1011 is where > we see ourselves heading but we needed something quick and completely > transparent so that we could test it at scale with our varying workloads > (mainly MapReduce, Spark, and Tez). Doing so has shed some light on how much > additional capacity we can achieve with over-commit approaches, and has > fleshed out some of the problems these approaches will face. > Primary design goals: > - Avoid changing protocols, application frameworks, or core scheduler logic, > - simply adjust individual nodes' available resources based on current node > utilization and then let scheduler do what it normally does > - Over-commit slowly, pull back aggressively - If things are looking good and > there is demand, slowly add resource. If memory starts to look over-utilized, > aggressively reduce the amount of over-commit. > - Make sure the nodes protect themselves - i.e. if memory utilization on a > node gets too high, preempt something - preferably something from a > preemptable queue > A patch against trunk will be attached shortly. 
Some notes on the patch: > - This feature was originally developed against something akin to 2.7. Since > the patch is mainly to explain the approach, we didn't do any sort of testing > against trunk except for basic build and basic unit tests > - The key pieces of functionality are in {{SchedulerNode}}, > {{AbstractYarnScheduler}}, and {{NodeResourceMonitorImpl}}. The remainder of > the patch is mainly UI, Config, Metrics, Tests, and some minor code > duplication (e.g. to optimize node resource changes we treat an over-commit > resource change differently than an updateNodeResource change - i.e. > remove_node/add_node is just too expensive for the frequency of over-commit > changes) > - We only over-commit memory at this point. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-4525: --- Attachment: YARN-4525.1.patch > Bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ishai Menache >Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the > RLESparseResourceAllocation object is a result of a merge operation, the > underlying map is a "view" within some range. If 'end' is outside that > range, headMap(..) throws an uncaught exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
[ https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317294#comment-15317294 ] Carlo Curino commented on YARN-4525: Adding a test for this fix. > Bug in RLESparseResourceAllocation.getRangeOverlapping(...) > --- > > Key: YARN-4525 > URL: https://issues.apache.org/jira/browse/YARN-4525 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ishai Menache >Assignee: Ishai Menache > Attachments: YARN-4525.1.patch, YARN-4525.patch > > > One of our tests detected a corner case in getRangeOverlapping: When the > RLESparseResourceAllocation object is a result of a merge operation, the > underlying map is a "view" within some range. If 'end' is outside that > range, headMap(..) throws an uncaught exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
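The corner case can be reproduced and guarded with plain `TreeMap` views; the following is a sketch under the assumption that returning the whole view suffices when the requested end lies past its range (names are illustrative, not the RLESparseResourceAllocation code):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch of the YARN-4525 corner case: headMap(end) on a range-restricted
// view throws IllegalArgumentException when 'end' lies outside the view's
// range. Guard by short-circuiting when 'end' is past the last key.
public class RangeClampSketch {
    static NavigableMap<Long, Integer> headOf(NavigableMap<Long, Integer> view, long end) {
        // If 'end' is past the view's keys, the whole view is the answer
        // (and asking headMap would risk the out-of-range exception).
        if (view.isEmpty() || end > view.lastKey()) {
            return view;
        }
        return view.headMap(end, false);
    }

    public static void main(String[] args) {
        NavigableMap<Long, Integer> plan = new TreeMap<>();
        plan.put(0L, 1);
        plan.put(10L, 2);
        plan.put(20L, 3);
        // A merge result behaves like a restricted view; keys beyond 10
        // are outside its range, so view.headMap(100L) would throw.
        NavigableMap<Long, Integer> view = plan.subMap(0L, true, 10L, true);
        assert headOf(view, 100L).size() == 2;  // guarded: returns the whole view
        assert headOf(view, 10L).size() == 1;   // in-range: normal headMap
        System.out.println("ok");
    }
}
```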
[jira] [Created] (YARN-5202) Dynamic Overcommit of Node Resources - POC
Nathan Roberts created YARN-5202: Summary: Dynamic Overcommit of Node Resources - POC Key: YARN-5202 URL: https://issues.apache.org/jira/browse/YARN-5202 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager, resourcemanager Affects Versions: 3.0.0-alpha1 Reporter: Nathan Roberts Assignee: Nathan Roberts This Jira is to present a proof-of-concept implementation (collaboration between [~jlowe] and myself) of a dynamic over-commit implementation in YARN. The type of over-commit implemented in this jira is similar to but not as full-featured as what's being implemented via YARN-1011. YARN-1011 is where we see ourselves heading but we needed something quick and completely transparent so that we could test it at scale with our varying workloads (mainly MapReduce, Spark, and Tez). Doing so has shed some light on how much additional capacity we can achieve with over-commit approaches, and has fleshed out some of the problems these approaches will face. Primary design goals: - Avoid changing protocols, application frameworks, or core scheduler logic, - simply adjust individual nodes' available resources based on current node utilization and then let scheduler do what it normally does - Over-commit slowly, pull back aggressively - If things are looking good and there is demand, slowly add resource. If memory starts to look over-utilized, aggressively reduce the amount of over-commit. - Make sure the nodes protect themselves - i.e. if memory utilization on a node gets too high, preempt something - preferably something from a preemptable queue A patch against trunk will be attached shortly. Some notes on the patch: - This feature was originally developed against something akin to 2.7. Since the patch is mainly to explain the approach, we didn't do any sort of testing against trunk except for basic build and basic unit tests - The key pieces of functionality are in {{SchedulerNode}}, {{AbstractYarnScheduler}}, and {{NodeResourceMonitorImpl}}. 
The remainder of the patch is mainly UI, Config, Metrics, Tests, and some minor code duplication (e.g. to optimize node resource changes we treat an over-commit resource change differently than an updateNodeResource change - i.e. remove_node/add_node is just too expensive for the frequency of over-commit changes) - We only over-commit memory at this point. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
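The "over-commit slowly, pull back aggressively" goal above can be sketched as a simple asymmetric controller; the thresholds and step size below are invented for illustration and are not taken from the YARN-5202 patch:

```java
// Illustrative sketch of an asymmetric over-commit policy: add capacity in
// small steps while memory utilization is low, drop all over-commit at once
// when the node looks over-utilized. All numbers are assumptions.
public class OvercommitSketch {
    static long nextOvercommitMB(long currentOvercommitMB, double memUtilization) {
        if (memUtilization > 0.90) {
            return 0;                          // pull back aggressively
        }
        if (memUtilization < 0.60) {
            return currentOvercommitMB + 512;  // over-commit slowly
        }
        return currentOvercommitMB;            // hold steady in between
    }

    public static void main(String[] args) {
        assert nextOvercommitMB(2048, 0.95) == 0;     // over-utilized: full pull-back
        assert nextOvercommitMB(2048, 0.40) == 2560;  // headroom: one small step up
        assert nextOvercommitMB(2048, 0.75) == 2048;  // in-band: unchanged
        System.out.println("ok");
    }
}
```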
[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317214#comment-15317214 ] Carlo Curino commented on YARN-5124: Few issues, mostly minor: # Please add javadoc for {{AMRMClientSync.getMatchingRequest()}} or is this an override? # Can we wrap this datastructure in some object? {{Map...}} > Modify AMRMClient to set the ExecutionType in the ResourceRequest > - > > Key: YARN-5124 > URL: https://issues.apache.org/jira/browse/YARN-5124 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5124.001.patch, YARN-5124.002.patch, > YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, > YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, > YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch > > > Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} > in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} > that is sent to the RM -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-679) add an entry point that can start any Yarn service
[ https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated YARN-679: Attachment: YARN-679-008.patch Patch 008 > add an entry point that can start any Yarn service > -- > > Key: YARN-679 > URL: https://issues.apache.org/jira/browse/YARN-679 > Project: Hadoop YARN > Issue Type: New Feature > Components: api >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: YARN-679-001.patch, YARN-679-002.patch, > YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, > YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, > YARN-679-008.patch, org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf > > Time Spent: 72h > Remaining Estimate: 0h > > There's no need to write separate .main classes for every Yarn service, given > that the startup mechanism should be identical: create, init, start, wait for > stopped -with an interrupt handler to trigger a clean shutdown on a control-c > interrrupt. > Provide one that takes any classname, and a list of config files/options -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5194) Avoid adding yarn-site to all Configuration instances created by the JVM
[ https://issues.apache.org/jira/browse/YARN-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316936#comment-15316936 ] Allen Wittenauer commented on YARN-5194: Not getConf; wrong level of the API. The 'hdfs getconf' command is the ONLY way we have at the script level to reliably get configuration information out of the xml files. It is used, amongst other things, to bring up HA RMs and nodemanagers using the start-yarn.sh script. > Avoid adding yarn-site to all Configuration instances created by the JVM > > > Key: YARN-5194 > URL: https://issues.apache.org/jira/browse/YARN-5194 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Siddharth Seth > > {code} > static { > addDeprecatedKeys(); > Configuration.addDefaultResource(YARN_DEFAULT_CONFIGURATION_FILE); > Configuration.addDefaultResource(YARN_SITE_CONFIGURATION_FILE); > } > {code} > This puts the contents of yarn-default and yarn-site into every configuration > instance created in the VM after YarnConfiguration has been initialized. > This should be changed to a local addResource for the specific > YarnConfiguration instance, instead of polluting every Configuration instance. > Incompatible change. Have set the target version to 3.x. > The same applies to HdfsConfiguration (hdfs-site.xml), and Configuration > (core-site.xml etc). > core-site may be worth including everywhere, however it would be better to > expect users to explicitly add the relevant resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
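A toy model of the static-vs-instance resource scoping issue described in the report; this `ConfScopeSketch` class is hypothetical and only mimics the relevant behavior of Hadoop's `Configuration`:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the problem: a static addDefaultResource affects every
// instance in the JVM, while an instance-level addResource scopes the file
// to one object (the fix the issue proposes). Illustrative only.
public class ConfScopeSketch {
    static final List<String> DEFAULTS = new ArrayList<>();   // JVM-wide
    final List<String> resources = new ArrayList<>();         // per-instance

    static void addDefaultResource(String name) { DEFAULTS.add(name); }
    void addResource(String name) { resources.add(name); }

    List<String> effective() {
        List<String> all = new ArrayList<>(DEFAULTS);
        all.addAll(resources);
        return all;
    }

    public static void main(String[] args) {
        ConfScopeSketch plain = new ConfScopeSketch();
        // What YarnConfiguration's static block effectively does today:
        addDefaultResource("yarn-site.xml");
        // Every instance in the JVM, even pre-existing ones, now sees yarn-site.
        assert plain.effective().contains("yarn-site.xml");
        assert new ConfScopeSketch().effective().contains("yarn-site.xml");
        // The proposed fix: scope the file to the one instance that needs it.
        ConfScopeSketch yarnOnly = new ConfScopeSketch();
        yarnOnly.addResource("yarn-site-local.xml");
        assert yarnOnly.effective().contains("yarn-site-local.xml");
        System.out.println("ok");
    }
}
```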
[jira] [Created] (YARN-5201) Apache Ranger Yarn policies are not used
Rajendranath Rengan created YARN-5201: - Summary: Apache Ranger Yarn policies are not used Key: YARN-5201 URL: https://issues.apache.org/jira/browse/YARN-5201 Project: Hadoop YARN Issue Type: Bug Reporter: Rajendranath Rengan
Hi, I have set up Apache Ranger in a Hadoop cluster and defined YARN policies that allow certain users access to certain queues. The idea is to have user 'x' submit Spark jobs only to queue 'x' and not to queue 'y'. When submitting a Spark job, the queue is given as one of the arguments, but user 'x' is able to submit Spark jobs to queue 'y'. The Ranger audit logs show that the policy used is the HDFS policy; the YARN policy is not used at all. I have enabled the Ranger plugin for YARN and defined a YARN policy, and the YARN ACL is also set to true. The capacity scheduler settings are as below:
yarn.scheduler.capacity.queue-mappings=u:user1:user1,u:user2:userr2
yarn.scheduler.capacity.root.acl_submit_applications=yarn,spark,hdfs
yarn.scheduler.capacity.root.customer1.acl_administer_jobs=user1
yarn.scheduler.capacity.root.customer1.acl_submit_applications=user1
yarn.scheduler.capacity.root.customer1.capacity=50
yarn.scheduler.capacity.root.customer1.maximum-capacity=100
yarn.scheduler.capacity.root.customer1.state=RUNNING
yarn.scheduler.capacity.root.customer1.user-limit-factor=1
yarn.scheduler.capacity.root.customer2.acl_administer_jobs=user2
yarn.scheduler.capacity.root.customer2.acl_submit_applications=user2
yarn.scheduler.capacity.root.customer2.capacity=50
yarn.scheduler.capacity.root.customer2.maximum-capacity=100
yarn.scheduler.capacity.root.customer2.state=RUNNING
yarn.scheduler.capacity.root.customer2.user-limit-factor=1
yarn.scheduler.capacity.root.queues=user1,user2
Thanks Rengan -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316916#comment-15316916 ] Hadoop QA commented on YARN-5124: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 3s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 22 new + 157 unchanged - 32 fixed = 179 total (was 189) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 44s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 106m 36s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMProxy | | | hadoop.yarn.client.TestGetGroups | | Timed out junit tests | org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling | | | org.apache.hadoop.yarn.client.cli.TestYarnCLI | | | org.apache.hadoop.yarn.client.api.impl.TestYarnClient | | | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient | | | org.apache.hadoop.yarn.client.api.impl.TestNMClient | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808261/YARN-5124.009.patch | | JIRA Issue | YARN-5124 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 721287a3a343 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 35f255b | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle |
[jira] [Commented] (YARN-5194) Avoid adding yarn-site to all Configuration instances created by the JVM
[ https://issues.apache.org/jira/browse/YARN-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316909#comment-15316909 ] Siddharth Seth commented on YARN-5194: -- This will likely break a bunch of things - hence targeted at 3.0. Could you please elaborate on HDFS getConf ? If there's enough interest to reduce the size of config objects in memory / serialized size - this can be taken up for a 3.x release. > Avoid adding yarn-site to all Configuration instances created by the JVM > > > Key: YARN-5194 > URL: https://issues.apache.org/jira/browse/YARN-5194 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Siddharth Seth > > {code} > static { > addDeprecatedKeys(); > Configuration.addDefaultResource(YARN_DEFAULT_CONFIGURATION_FILE); > Configuration.addDefaultResource(YARN_SITE_CONFIGURATION_FILE); > } > {code} > This puts the contents of yarn-default and yarn-site into every configuration > instance created in the VM after YarnConfiguration has been initialized. > This should be changed to a local addResource for the specific > YarnConfiguration instance, instead of polluting every Configuration instance. > Incompatible change. Have set the target version to 3.x. > The same applies to HdfsConfiguration (hdfs-site.xml), and Configuration > (core-site.xml etc). > core-site may be worth including everywhere, however it would be better to > expect users to explicitly add the relevant resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
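To make the static-default vs. per-instance distinction concrete, here is a hedged sketch of the pattern the issue describes. MiniConf is a deliberately simplified stand-in for Hadoop's real org.apache.hadoop.conf.Configuration (which parses XML resources from the classpath); only the addDefaultResource/addResource method names mirror the real API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of Configuration resource loading, for illustration only.
class MiniConf {
    // Static default resources are inherited by EVERY instance created after
    // registration -- this is the "pollution" YARN-5194 describes.
    private static final List<String> DEFAULTS = new ArrayList<>();
    private final List<String> resources = new ArrayList<>();

    static void addDefaultResource(String name) { DEFAULTS.add(name); }

    // Per-instance resource: only this instance sees it.
    void addResource(String name) { resources.add(name); }

    List<String> effectiveResources() {
        List<String> all = new ArrayList<>(DEFAULTS);
        all.addAll(resources);
        return all;
    }
}

class YarnConfDemo {
    // Current behaviour: a static initializer widens the defaults, so even an
    // unrelated Configuration created afterwards carries yarn-site.
    static List<String> globalStyle() {
        MiniConf.addDefaultResource("yarn-site.xml");
        MiniConf unrelated = new MiniConf();
        return unrelated.effectiveResources();
    }

    // Proposed behaviour: only the YARN-specific instance adds the resource;
    // unrelated instances are untouched.
    static List<String> localStyle() {
        MiniConf yarnConf = new MiniConf();
        yarnConf.addResource("yarn-site-local.xml");
        MiniConf unrelated = new MiniConf();
        return unrelated.effectiveResources();
    }
}
```

The incompatibility the issue flags follows directly: any code that today creates a bare Configuration and silently relies on yarn-site being present would stop seeing those keys under the local-addResource style.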
[jira] [Commented] (YARN-5183) [YARN-3368] Support responsive navbar in case of resized
[ https://issues.apache.org/jira/browse/YARN-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316623#comment-15316623 ] Sunil G commented on YARN-5183: --- Thanks [~kaisasak]. Patch looks good to me. > [YARN-3368] Support responsive navbar in case of resized > > > Key: YARN-5183 > URL: https://issues.apache.org/jira/browse/YARN-5183 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: Screen Shot 2016-05-31 at 22.41.35.png, > YARN-5183-YARN-3368.02.patch, YARN-5183-YARN-3368.1.patch, YARN-5183.01.patch > > > The responsive navbar currently does not work even though the navbar icon is shown. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5197) RM leaks containers if running container disappears from node update
[ https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316606#comment-15316606 ] Hadoop QA commented on YARN-5197: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 150 unchanged - 1 fixed = 150 total (was 151) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 53s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 20s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808391/YARN-5197.002.patch | | JIRA Issue | YARN-5197 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 62f993b67157 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 35f255b | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/11852/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/11852/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/11852/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console
[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service
[ https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316605#comment-15316605 ] Hadoop QA commented on YARN-679: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} | {color:red} YARN-679 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-679 | | GITHUB PR | https://github.com/apache/hadoop/pull/68 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/11853/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > add an entry point that can start any Yarn service > -- > > Key: YARN-679 > URL: https://issues.apache.org/jira/browse/YARN-679 > Project: Hadoop YARN > Issue Type: New Feature > Components: api >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: YARN-679-001.patch, YARN-679-002.patch, > YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, > YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, > org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf > > Time Spent: 72h > Remaining Estimate: 0h > > There's no need to write separate .main classes for every Yarn service, given > that the startup mechanism should be identical: create, init, start, wait for > stopped - with an interrupt handler to trigger a clean shutdown on a control-c > interrupt. 
> Provide one that takes any classname, and a list of config files/options -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-679) add an entry point that can start any Yarn service
[ https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated YARN-679: Attachment: YARN-679-007.patch patch 007. This is patch 006 rebased to trunk and resynced in the hope that jenkins will notice > add an entry point that can start any Yarn service > -- > > Key: YARN-679 > URL: https://issues.apache.org/jira/browse/YARN-679 > Project: Hadoop YARN > Issue Type: New Feature > Components: api >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: YARN-679-001.patch, YARN-679-002.patch, > YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, > YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, > org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf > > Time Spent: 72h > Remaining Estimate: 0h > > There's no need to write separate .main classes for every Yarn service, given > that the startup mechanism should be identical: create, init, start, wait for > stopped - with an interrupt handler to trigger a clean shutdown on a control-c > interrupt. > Provide one that takes any classname, and a list of config files/options -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
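The create/init/start/shutdown-hook lifecycle described in the YARN-679 issue text can be sketched as below. This is a hedged illustration only: the Svc interface and EchoService class are hypothetical stand-ins, not the real org.apache.hadoop.service.Service API or the attached patch.

```java
// Minimal stand-in for a YARN-style service lifecycle.
interface Svc {
    void init(String[] confArgs);
    void start();
    void stop();
}

class ServiceLauncher {
    // Generic entry point: instantiate ANY service class by name, init it with
    // config files/options from the command line, start it, and register a JVM
    // shutdown hook so a control-c interrupt triggers a clean stop.
    static Svc launch(String className, String[] confArgs) {
        try {
            Svc svc = (Svc) Class.forName(className)
                                 .getDeclaredConstructor()
                                 .newInstance();              // create
            svc.init(confArgs);                               // init
            svc.start();                                      // start
            Runtime.getRuntime().addShutdownHook(new Thread(svc::stop));
            return svc;                                       // caller waits for stop
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot launch " + className, e);
        }
    }
}

// Example service used only for demonstration.
class EchoService implements Svc {
    boolean started;
    public void init(String[] confArgs) { }
    public void start() { started = true; }
    public void stop()  { started = false; }
}
```

With such a launcher, `java ServiceLauncher EchoService site.xml` style invocations replace a hand-written .main class per service, which is exactly the duplication the issue wants to remove.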
[jira] [Updated] (YARN-5197) RM leaks containers if running container disappears from node update
[ https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-5197: - Attachment: YARN-5197.002.patch Updated the patch for the checkstyle issue. The test failures are tracked by HADOOP-12687. > RM leaks containers if running container disappears from node update > > > Key: YARN-5197 > URL: https://issues.apache.org/jira/browse/YARN-5197 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.2, 2.6.4 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: YARN-5197.001.patch, YARN-5197.002.patch > > > Once a node reports a container running in a status update, the corresponding > RMNodeImpl will track the container in its launchedContainers map. If the > node somehow misses sending the completed container status to the RM and the > container simply disappears from subsequent heartbeats, the container will > leak in launchedContainers forever and the container completion event will > not be sent to the scheduler. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
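The leak described above can be illustrated with a hedged sketch of the reconciliation idea: track launched containers and treat any tracked container that vanishes from a node status update as completed, so a completion event can be emitted instead of the entry leaking forever. This is a toy model using string IDs, not the real RMNodeImpl/ContainerStatus code or the attached patch.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for RMNodeImpl's launchedContainers bookkeeping.
class LaunchedContainers {
    private final Set<String> launched = new HashSet<>();

    // Called when a node status update first reports a container as running.
    void containerLaunched(String containerId) { launched.add(containerId); }

    // Called on each heartbeat. Returns containers that should be reported to
    // the scheduler as completed: previously tracked, but absent from the
    // current report. They are removed from tracking so they cannot leak.
    Set<String> reconcile(Set<String> reportedInHeartbeat) {
        Set<String> vanished = new HashSet<>(launched);
        vanished.removeAll(reportedInHeartbeat);
        launched.removeAll(vanished);
        return vanished;
    }
}
```

Without the reconcile step, a container that silently drops out of heartbeats stays in the tracked set indefinitely and its completion event is never delivered, which matches the symptom in the issue description.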
[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms
[ https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316435#comment-15316435 ] Jonathan Maron commented on YARN-4757: -- [~jianhe] - Thanks for the review! Answers: {quote} Here, it’s using the ‘description’ field for constructing the DNS name. is this expected ? seems not mentioned in the doc {quote} The description field is currently the service record field that Slider (the current primary user of the ZK registry) is leveraging to relate the component name (role). I assume the reason is because the ZK path to the node relates the user, application, and component name. [~steve_l] - is that correct? Any objection to an explicit "name" attribute in the service record? {quote} we can close the CuratorService#treeCache when stop the service? {quote} Probably makes sense. {quote} only readLock is being used. wondering whether these locks are needed. {quote} This is probably due to some refactorings - it was leveraged in previous iterations of the code. I think I will re-introduce it. I see nothing in the dnsjava code to indicate that Zone implementations are thread safe. Given the dynamic nature of record registration and deletion in the yarn use case, I think it would be best to synchronize access to the zone object to ensure deterministic results. {quote} question about the dnsEnabled config, if the dnsEnabled is false, what else does the RegistryDNSServer do ? Asking this because I'm wondering whether this config is actually needed. {quote} I guess this depends on what our expectations are regarding the use of DNS - is it expected to be available as a default service? If the answer is no, we could manage the inclusion of the service at this level, or perhaps one level up (have the RM not even add the service based on the flag value?) {quote} RecordCreatorFactory: The RecordCreatorFactory#getRecordCreator is instantiating the creator instance every time this method gets called. 
Maybe a singleton pattern could be useful to avoid creating a new instance every time. {quote} Certainly a possibility. The thinking here was to create simple, lightweight, stateless objects that could be used with little regard to multi-threading concerns etc. However, if some profiling indicates an issue, a singleton approach may be preferable. {quote} DNSManagementOperations class is not used anywhere, can be removed? {quote} Yes - probably a leftover from a previous code iteration. {quote} a few unused methods in RegistryDNS, e.g. addDSRecord, signZones. is this intended ? {quote} For the time being - yes. DS records appear to play a role in some DNS negative response processing. Though we have made strides in better support for negative responses (NXT records), it was still somewhat unclear whether we ultimately would need to enhance support with full DS record capabilities. So I have left these methods in place until such time that I could make a better determination. {quote} what does the RegistryDNS#primaryDNS do ? {quote} The idea here was to support a use case in which the YARN DNS server was designated as a resolver for a suite of hosts in the cluster. In those instances, queries that it itself could not resolve would have to be forwarded to a "primary" DNS server for resolution. I now think this is probably less likely, so we could certainly look at removing that feature. 
> [Umbrella] Simplified discovery of services via DNS mechanisms > -- > > Key: YARN-4757 > URL: https://issues.apache.org/jira/browse/YARN-4757 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Vinod Kumar Vavilapalli >Assignee: Jonathan Maron > Attachments: > 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- > Simplified discovery of services via DNS mechanisms.pdf, > YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, > YARN-4757-YARN-4757.003.patch > > > [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track > all related efforts.] > In addition to completing the present story of service-registry (YARN-913), > we also need to simplify the access to the registry entries. The existing > read mechanisms of the YARN Service Registry are currently limited to a > registry specific (java) API and a REST interface. In practice, this makes it > very difficult for wiring up existing clients and services. For e.g, dynamic > configuration of dependent endpoints of a service is not easy to implement > using the present registry-read mechanisms, *without* code-changes to > existing services. > A good solution to this is to expose the registry information through a more > generic and widely used discovery mechanism: DNS. Service Discovery via DNS > uses the well-known DNS
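The locking concern raised in the review above (dnsjava's Zone is not documented as thread safe, while YARN registrations and deletions arrive dynamically) can be sketched as a read/write-locked wrapper. This is a hedged illustration: GuardedZone uses a plain map as a stand-in, not the real org.xbill.DNS.Zone API or the RegistryDNS patch code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Guard a non-thread-safe zone-like structure: record registration/deletion
// takes the write lock; query resolution takes the read lock, so concurrent
// lookups still proceed in parallel.
class GuardedZone {
    private final Map<String, String> records = new HashMap<>(); // name -> rdata
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void register(String name, String rdata) {
        lock.writeLock().lock();
        try { records.put(name, rdata); } finally { lock.writeLock().unlock(); }
    }

    void delete(String name) {
        lock.writeLock().lock();
        try { records.remove(name); } finally { lock.writeLock().unlock(); }
    }

    String resolve(String name) {
        lock.readLock().lock();
        try { return records.get(name); } finally { lock.readLock().unlock(); }
    }
}
```

The read/write split matches the workload the review anticipates: many concurrent DNS queries against comparatively rare registry updates.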
[jira] [Updated] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-5124: -- Attachment: YARN-5124.009.patch Updating patch addressing [~kasha]'s comments > Modify AMRMClient to set the ExecutionType in the ResourceRequest > - > > Key: YARN-5124 > URL: https://issues.apache.org/jira/browse/YARN-5124 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5124.001.patch, YARN-5124.002.patch, > YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, > YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, > YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch > > > Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} > in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} > that is sent to the RM -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316147#comment-15316147 ] Arun Suresh edited comment on YARN-5124 at 6/6/16 6:00 AM: --- [~kasha], thanks for the review.. I agree with 1-4 of your comments.. shall upload patch addressing them shortly.. w.r.t #5, Given that {{ExecutionTypeRequest}} is a wrapper for the actual {{ExecutionType}}, the remoteRequestsTable primary key should comprise the ExecutionType, not the ExecutionTypeRequest. I was thinking, for the case where enforceExecutionType is false, we can possibly just match a returned Container with any entry in the remoteRequestsTable (where Priority, Location and Capability match, but the ExecutionType can be anything). I intentionally left out the implementation, since currently, only Distributed Scheduling supports ExecutionTypes and it currently only supports enforceExecutionType = true. I would be happy to put a doc with a TODO there (and maybe open a JIRA) to fix it once we have the Scheduler fix that supports enforceExecutionType = false. Thoughts ? was (Author: asuresh): [~kasha], thanks for the review.. I agree with 1-4 of your comments.. shall upload patch addressing them shortly.. w.r.t #5, Given that {{ExecutionTypeRequest}} is a wrapper for the actual {{ExecutionType}}, the remoteRequestsTable primary key should comprise the ExecutionType, not the ExecutionTypeRequest. I was thinking, for the case where enforceExecutionType is false, we can possibly just match a returned Container with any entry in the remoteRequestsTable (where Priority, Location and Capability match, but the ExecutionType can be anything). I intentionally left out the implementation, since currently, the only Distributed Scheduling supports ExecutionTypes and it currently only supports enforceExecutionType = true. 
I would be happy to put at doc with a TODO there (and maybe open a JIRA) to fix it once we have the Scheduler fix that supports enforceExecutionType = false. Thoughts ? > Modify AMRMClient to set the ExecutionType in the ResourceRequest > - > > Key: YARN-5124 > URL: https://issues.apache.org/jira/browse/YARN-5124 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5124.001.patch, YARN-5124.002.patch, > YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, > YARN-5124.006.patch, YARN-5124.008.patch, > YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch > > > Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} > in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} > that is sent to the RM -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
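The keying and matching scheme discussed in this comment thread can be sketched as follows. This is a hedged toy model: the class, key layout, and ExecType enum are illustrative stand-ins for the AMRMClient remoteRequestsTable internals, not the actual patch code.

```java
import java.util.HashMap;
import java.util.Map;

// Request table keyed on the plain ExecutionType (not the ExecutionTypeRequest
// wrapper), per the comment above. When enforcement is off, a returned
// container may match an entry with ANY execution type, as long as priority,
// location, and capability line up.
class RequestTable {
    enum ExecType { GUARANTEED, OPPORTUNISTIC }

    private final Map<String, Integer> outstanding = new HashMap<>();

    private static String key(int pri, String loc, int memMb, ExecType type) {
        return pri + "/" + loc + "/" + memMb + "/" + type;
    }

    void addRequest(int pri, String loc, int memMb, ExecType type) {
        outstanding.merge(key(pri, loc, memMb, type), 1, Integer::sum);
    }

    // Match a returned container against the table.
    boolean match(int pri, String loc, int memMb, ExecType type, boolean enforce) {
        if (enforce) {
            // enforceExecutionType = true: exact execution-type match required.
            return outstanding.containsKey(key(pri, loc, memMb, type));
        }
        // enforceExecutionType = false: accept an entry with any execution type.
        for (ExecType t : ExecType.values()) {
            if (outstanding.containsKey(key(pri, loc, memMb, t))) {
                return true;
            }
        }
        return false;
    }
}
```

The relaxed branch is the part the comment proposes deferring (with a TODO) until the scheduler itself supports enforceExecutionType = false.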