[jira] [Commented] (YARN-5937) stop-yarn.sh is not able to gracefully stop node managers
[ https://issues.apache.org/jira/browse/YARN-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15711156#comment-15711156 ] Hadoop QA commented on YARN-5937:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 13s{color} | {color:green} The patch generated 0 new + 116 unchanged - 1 fixed = 116 total (was 117) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 41s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 46s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5937 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840675/YARN-5937.01.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 4d1ba7995b5b 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f7613b |
| shellcheck | v0.4.5 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14142/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14142/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
> stop-yarn.sh is not able to gracefully stop node managers
> ---------------------------------------------------------
>
>                 Key: YARN-5937
>                 URL: https://issues.apache.org/jira/browse/YARN-5937
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>              Labels: script
>         Attachments: YARN-5937.01.patch, nm_shutdown.log
>
> stop-yarn.sh always gives the following output
> {code}
> ./sbin/stop-yarn.sh
> Stopping resourcemanager
> Stopping nodemanagers
> : WARNING: nodemanager did not stop gracefully after 5 seconds: Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> This is because the resource manager is stopped before the node managers: when the shutdown hook manager tries to gracefully stop the NM services, the NM needs to unregister with the RM, and this times out because the NM can no longer connect to the RM (already stopped). See log (stop RM then run kill):
> {code}
> 16/11/28 08:26:43 ERROR nodemanager.NodeManager: RECEIVED SIGNAL 15: SIGTERM
> ...
> 16/11/28 08:26:53 WARN util.ShutdownHookManager: ShutdownHook 'CompositeServiceShutdownHook' timeout, java.util.concurrent.TimeoutException
> java.util.concurrent.TimeoutException
> at java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
> ...
> at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.unRegisterNM(NodeStatusUpdaterImpl.java:291)
> ...
> 16/11/28 08:27:13 ERROR util.ShutdownHookManager: ShutdownHookManger shutdown forcefully.
> {code}
> the shutdown hook manager has a default 10s timeout, so if the RM is stopped before the NMs, they always take more than 10s to stop (in java
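The quoted stack trace shows the shutdown hook manager abandoning a hook via a bounded `FutureTask.get`. A minimal, self-contained Java sketch of that failure mode (this is illustrative only, not the Hadoop implementation; the 100 ms bound is chosen just to keep the demo fast):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class HookTimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Stand-in for a shutdown hook that blocks, e.g. an NM trying to
        // unregister with an RM that has already been stopped.
        Future<?> hook = executor.submit(() -> {
            try {
                Thread.sleep(5000); // never completes within the allowed window
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        try {
            // The manager waits on the hook with a bounded get(); when the
            // hook overruns, a TimeoutException fires and shutdown proceeds
            // forcefully, exactly as in the quoted NM log.
            hook.get(100, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("ShutdownHook timeout");
        }
        executor.shutdownNow();
    }
}
```

Running the demo prints `ShutdownHook timeout`, mirroring the `TimeoutException` raised at `FutureTask.get` in the log above.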
[jira] [Commented] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
[ https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710920#comment-15710920 ] Sangjin Lee commented on YARN-5922:
-----------------------------------

Thanks [~haibochen] for the updated patch! Actually I think the right way to fix the unit test is to add the new properties to {{yarn-default.xml}} rather than modifying the {{TestConfigurationFieldsBase}} class. Could you please update the patch to do that?

> Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
> --------------------------------------------------------------------------
>
>                 Key: YARN-5922
>                 URL: https://issues.apache.org/jira/browse/YARN-5922
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Haibo Chen
>            Assignee: Haibo Chen
>         Attachments: YARN-5922-YARN-5355.01.patch, YARN-5922-YARN-5355.02.patch, YARN-5922.01.patch, YARN-5922.02.patch
>

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
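For context, adding a new property to {{yarn-default.xml}} (so that {{TestConfigurationFieldsBase}} finds it) takes the following shape. The property name and default value shown here are assumptions sketched from the issue title, not taken from the actual patch:

```xml
<!-- Hypothetical entry; name and value are illustrative assumptions. -->
<property>
  <description>Implementation class used to write timeline service
  entities to the backing storage.</description>
  <name>yarn.timeline-service.writer.class</name>
  <value>org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl</value>
</property>
```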
[jira] [Commented] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710868#comment-15710868 ] Hadoop QA commented on YARN-5769:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 55s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 268 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api in yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 0 new + 431 unchanged - 3 fixed = 431 total (was 434) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 8s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 28s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5769 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841210/YARN-5769-yarn-native-services.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux febec0d4650d 3.13.0-93-generic #140-Ubuntu SMP
[jira] [Commented] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
[ https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710860#comment-15710860 ] Hadoop QA commented on YARN-5559:
---------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 93 unchanged - 1 fixed = 94 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 22s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 26s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 57s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5559 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841205/YARN-5559.6.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 506728f103fa 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f7613b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14140/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt |
| Test R
[jira] [Commented] (YARN-5292) Support for PAUSED container state
[ https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710838#comment-15710838 ] Hadoop QA commented on YARN-5292:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 59 new + 310 unchanged - 0 fixed = 369 total (was 310) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 54s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 54s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 11s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5292 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841204/YARN-5292.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 486433c97a0f 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f7613b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/141
[jira] [Commented] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
[ https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710737#comment-15710737 ] Akira Ajisaka commented on YARN-5559:
-------------------------------------

bq. GetClusterNodeLabelsResponsePBImpl is not thread safe. Changing the implementation of the list to CopyOnWriteArrayList.

Detail: the getNodeLabels method iterates {{updateNodeLabels}}, and the setNodeLabelList method calls {{updateNodeLabels.addAll}}. If the two methods are called at the same time, a ConcurrentModificationException can happen.

> Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
> ----------------------------------------------------
>
>                 Key: YARN-5559
>                 URL: https://issues.apache.org/jira/browse/YARN-5559
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>            Reporter: Wangda Tan
>            Assignee: Akira Ajisaka
>            Priority: Blocker
>              Labels: oct16-easy
>         Attachments: YARN-5559.1.patch, YARN-5559.2.patch, YARN-5559.3.patch, YARN-5559.4.patch, YARN-5559.5.patch, YARN-5559.6.patch
>
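The failure mode and the fix described in the comment can be shown in a small self-contained demo. A plain {{ArrayList}} fails fast if it is structurally modified while an iteration is in flight, whereas {{CopyOnWriteArrayList}} iterators operate on an immutable snapshot (class names here are illustrative; this is not code from the patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class NodeLabelListDemo {
    public static void main(String[] args) {
        // A plain ArrayList fails fast when modified mid-iteration, which is
        // the hazard when getNodeLabels iterates while setNodeLabelList adds.
        List<String> labels = new ArrayList<>(Arrays.asList("x", "y"));
        try {
            for (String label : labels) {
                labels.add("z"); // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("ArrayList: ConcurrentModificationException");
        }

        // CopyOnWriteArrayList iterators walk an immutable snapshot, so
        // concurrent additions can never break an in-flight iteration.
        List<String> safeLabels = new CopyOnWriteArrayList<>(Arrays.asList("x", "y"));
        for (String label : safeLabels) {
            safeLabels.add("z"); // the iterator still sees only the snapshot
        }
        System.out.println("CopyOnWriteArrayList size: " + safeLabels.size());
    }
}
```

The first loop throws on the second `next()` call; the second loop finishes cleanly over its two-element snapshot while the list grows to four elements. The trade-off is that every write copies the backing array, which is fine for small, read-mostly lists such as node labels.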
[jira] [Commented] (YARN-3611) Support Docker Containers In LinuxContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710732#comment-15710732 ] Hitesh Sharma commented on YARN-3611:
-------------------------------------

Hi folks,

Docker is now available on Windows and is fully supported by Docker Inc. (I'm talking about launching Windows containers via Docker): https://www.docker.com/microsoft

Unfortunately, the current design limits Docker support to Linux only. I think we need to revisit this and find a way to share the same code between Docker support for Windows and Linux. Another goal to keep in mind is to make DockerContainerExecutor completely OS-agnostic, since in certain cases the Docker client might actually be talking to a daemon on a remote machine or a VM (which may be Linux or Windows).

Would love to hear some thoughts on how to achieve Docker support for Windows by reusing all the good work being done here. Thanks!

> Support Docker Containers In LinuxContainerExecutor
> ---------------------------------------------------
>
>                 Key: YARN-3611
>                 URL: https://issues.apache.org/jira/browse/YARN-3611
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>            Reporter: Sidharta Seethana
>            Assignee: Sidharta Seethana
>
> Support Docker Containers In LinuxContainerExecutor
> LinuxContainerExecutor provides useful functionality today with respect to localization, cgroups-based resource management and isolation for CPU, network, disk, etc., as well as security with a well-defined mechanism to execute privileged operations using the container-executor utility. Bringing Docker support to LinuxContainerExecutor lets us use all of this functionality when running Docker containers under YARN, while not requiring users and admins to configure and use a different ContainerExecutor.
> There are several aspects here that need to be worked through:
> * Mechanism(s) to let clients request docker-specific functionality - we could initially implement this via environment variables without impacting the client API.
> * Security - both docker daemon as well as application
> * Docker image localization
> * Running a docker container via container-executor as a specified user
> * "Isolate" the docker container in terms of CPU/network/disk/etc
> * Communicating with and/or signaling the running container (ensure correct pid handling)
> * Figure out workarounds for certain performance-sensitive scenarios like HDFS short-circuit reads
> * All of these need to be achieved without changing the current behavior of LinuxContainerExecutor
[jira] [Commented] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710684#comment-15710684 ] Jian He commented on YARN-5769:
-------------------------------

Yep, updated the patch to change that.

> Integrate update app lifetime using feature implemented in YARN-5611
> --------------------------------------------------------------------
>
>                 Key: YARN-5769
>                 URL: https://issues.apache.org/jira/browse/YARN-5769
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Gour Saha
>            Assignee: Jian He
>             Fix For: yarn-native-services
>
>         Attachments: YARN-5769-yarn-native-services.01.patch, YARN-5769-yarn-native-services.02.patch, YARN-5769-yarn-native-services.03.patch
>
>
> The REST API PUT call provides the capability to update the lifetime of a running application. Once YARN-5611 is available we need to integrate it.
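As a rough sketch of what such a lifetime-update PUT might look like on the wire (the endpoint path, service name, and JSON field name below are assumptions for illustration only, not taken from the patch or from YARN-5611):

```
PUT /services/v1/applications/my-service
Content-Type: application/json

{
  "lifetime": 3600
}
```

Here the body asks the service's remaining lifetime to be set to 3600 seconds; the server would be expected to validate the value and propagate it to the running application's timeout.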
[jira] [Updated] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5769:
--------------------------

    Attachment: YARN-5769-yarn-native-services.03.patch

> Integrate update app lifetime using feature implemented in YARN-5611
> --------------------------------------------------------------------
>
>                 Key: YARN-5769
>                 URL: https://issues.apache.org/jira/browse/YARN-5769
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Gour Saha
>            Assignee: Jian He
>             Fix For: yarn-native-services
>
>         Attachments: YARN-5769-yarn-native-services.01.patch, YARN-5769-yarn-native-services.02.patch, YARN-5769-yarn-native-services.03.patch
>
>
> The REST API PUT call provides the capability to update the lifetime of a running application. Once YARN-5611 is available we need to integrate it.
[jira] [Created] (YARN-5954) Implement a CapacityScheduler policy for configuration changes
Jonathan Hung created YARN-5954:
-----------------------------------

             Summary: Implement a CapacityScheduler policy for configuration changes
                 Key: YARN-5954
                 URL: https://issues.apache.org/jira/browse/YARN-5954
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


The CapacityScheduler configuration policy will extend the pluggable policy from YARN-5949 for capacity-scheduler-specific configuration changes (e.g. checking that a max capacity change for a queue is in the range [0-100]).
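The kind of validation the issue describes can be sketched in a few lines. The class and method names below are purely illustrative assumptions, not taken from YARN-5949 or any patch:

```java
public class CapacityChangeCheckDemo {
    // Hypothetical policy check of the kind described above: a queue's
    // maximum-capacity value must stay within [0, 100] percent.
    static boolean isValidMaxCapacity(float maxCapacityPercent) {
        return maxCapacityPercent >= 0.0f && maxCapacityPercent <= 100.0f;
    }

    public static void main(String[] args) {
        System.out.println(isValidMaxCapacity(85.0f));   // in range -> true
        System.out.println(isValidMaxCapacity(150.0f));  // out of range -> false
    }
}
```

A real policy would presumably run such checks for every mutated key before the change is persisted, rejecting the whole mutation if any check fails.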
[jira] [Created] (YARN-5953) Create CLI for changing YARN configurations
Jonathan Hung created YARN-5953:
-----------------------------------

             Summary: Create CLI for changing YARN configurations
                 Key: YARN-5953
                 URL: https://issues.apache.org/jira/browse/YARN-5953
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


Based on the design in YARN-5734.
[jira] [Created] (YARN-5952) Create REST API for changing YARN configurations
Jonathan Hung created YARN-5952:
-----------------------------------

             Summary: Create REST API for changing YARN configurations
                 Key: YARN-5952
                 URL: https://issues.apache.org/jira/browse/YARN-5952
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


Based on the design in YARN-5734.
[jira] [Created] (YARN-5951) Implement CapacityStoreConfigurationProvider to provide CapacityScheduler configuration from backing store
Jonathan Hung created YARN-5951:
-----------------------------------

             Summary: Implement CapacityStoreConfigurationProvider to provide CapacityScheduler configuration from backing store
                 Key: YARN-5951
                 URL: https://issues.apache.org/jira/browse/YARN-5951
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


The CapacityStoreConfigurationProvider will extend the StoreConfigurationProvider to augment the latter's constructed Configuration object with capacity-scheduler-specific configuration, to be passed to the CapacityScheduler.
[jira] [Created] (YARN-5950) Create StoreConfigurationProvider to construct a Configuration from the backing store
Jonathan Hung created YARN-5950:
-----------------------------------

             Summary: Create StoreConfigurationProvider to construct a Configuration from the backing store
                 Key: YARN-5950
                 URL: https://issues.apache.org/jira/browse/YARN-5950
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


The StoreConfigurationProvider will query the YarnConfigurationStore for various configuration keys and construct a Configuration object from them (to be passed to the scheduler, and possibly other YARN components).
[jira] [Created] (YARN-5949) Add PluggableConfigurationPolicy interface as a component of MutableConfigurationManager
Jonathan Hung created YARN-5949:
-----------------------------------

             Summary: Add PluggableConfigurationPolicy interface as a component of MutableConfigurationManager
                 Key: YARN-5949
                 URL: https://issues.apache.org/jira/browse/YARN-5949
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


The PluggableConfigurationPolicy component will allow different policies to be applied to customize how configuration changes should be applied (for example, a policy might restrict whether a configuration change by a certain user is allowed). This will be enforced by the MutableConfigurationManager.
[jira] [Created] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
Jonathan Hung created YARN-5948:
-----------------------------------

             Summary: Implement MutableConfigurationManager for handling storage into configuration store
                 Key: YARN-5948
                 URL: https://issues.apache.org/jira/browse/YARN-5948
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung


The MutableConfigurationManager will take REST calls with desired client configuration changes and call YarnConfigurationStore methods to store these changes in the backing store.
[jira] [Created] (YARN-5947) Create DerbySchedulerConfigurationStore class using Derby as backing store
Jonathan Hung created YARN-5947:
-----------------------------------

             Summary: Create DerbySchedulerConfigurationStore class using Derby as backing store
                 Key: YARN-5947
                 URL: https://issues.apache.org/jira/browse/YARN-5947
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung

DerbySchedulerConfigurationStore will extend YarnConfigurationStore to store the scheduler configuration in an embedded Derby DB.
[jira] [Updated] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
[ https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated YARN-5559:
--------------------------------
    Attachment: YARN-5559.6.patch

Thanks [~jianhe] for the review! The 06 patch:
* Fixed javac and checkstyle warnings
* Renamed get/setNodeLabelsList to get/setNodeLabelList
* Changed the name of the local variable in GetClusterNodeLabelsResponse#newInstance: request -> response
* GetClusterNodeLabelsResponsePBImpl#getNodeLabels called getNodeLabelsList twice; it now calls it once.
* GetClusterNodeLabelsResponsePBImpl is not thread safe; changed the implementation of the list to CopyOnWriteArrayList.

> Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
> ----------------------------------------------------
>
>                 Key: YARN-5559
>                 URL: https://issues.apache.org/jira/browse/YARN-5559
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>            Reporter: Wangda Tan
>            Assignee: Akira Ajisaka
>            Priority: Blocker
>              Labels: oct16-easy
>         Attachments: YARN-5559.1.patch, YARN-5559.2.patch, YARN-5559.3.patch, YARN-5559.4.patch, YARN-5559.5.patch, YARN-5559.6.patch
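The switch to CopyOnWriteArrayList mentioned in the patch notes above is worth illustrating: its iterator works on an immutable snapshot of the backing array, so a mutation made while another caller is iterating never throws ConcurrentModificationException. This is a generic demo of that property, not the actual GetClusterNodeLabelsResponsePBImpl code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class SnapshotIterationDemo {
  static int elementsSeenWhileAdding() {
    List<String> labels = new CopyOnWriteArrayList<>(List.of("gpu", "ssd"));
    int seen = 0;
    for (String label : labels) {       // iterator snapshot taken here
      labels.add("copy-of-" + label);   // safe: mutation copies the array
      seen++;
    }
    return seen;  // only the two elements in the snapshot were visited
  }
}
```

The trade-off is that every write copies the whole array, which is acceptable for read-mostly data such as a response object's node-label list.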
[jira] [Created] (YARN-5946) Create YarnConfigurationStore class
Jonathan Hung created YARN-5946:
-----------------------------------

             Summary: Create YarnConfigurationStore class
                 Key: YARN-5946
                 URL: https://issues.apache.org/jira/browse/YARN-5946
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Jonathan Hung

This class provides the interface to persist YARN configurations in a backing store.
[jira] [Updated] (YARN-5292) Support for PAUSED container state
[ https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hitesh Sharma updated YARN-5292:
--------------------------------
    Attachment: YARN-5292.004.patch

Added a test case, and raised an event so the scheduler knows that the container was paused.

> Support for PAUSED container state
> ----------------------------------
>
>                 Key: YARN-5292
>                 URL: https://issues.apache.org/jira/browse/YARN-5292
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Hitesh Sharma
>            Assignee: Hitesh Sharma
>         Attachments: YARN-5292.001.patch, YARN-5292.002.patch, YARN-5292.003.patch, YARN-5292.004.patch, yarn-5292.pdf
>
> YARN-2877 introduced OPPORTUNISTIC containers, and YARN-5216 proposes to add the capability to customize how OPPORTUNISTIC containers get preempted.
> In this JIRA we propose introducing a PAUSED container state.
> When a running container gets preempted, it enters the PAUSED state, where it remains until resources are freed up on the node; the preempted container can then resume to the RUNNING state.
> One scenario where this capability is useful is work preservation. How preemption is done, and whether the container supports it, is implementation specific.
> For instance, if the container is a virtual machine, then preempt would pause the VM and resume would restore it back to the running state.
> If the container doesn't support preemption, then preempt defaults to killing the container.
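The lifecycle described in the issue can be sketched as a tiny state machine. This is illustrative only: the real NodeManager state machine has many more states and events, and the names below are assumptions.

```java
// Sketch of YARN-5292: preempting a RUNNING container pauses it when the
// runtime supports pausing, and falls back to killing it otherwise; a
// PAUSED container resumes once resources free up on the node.
class PausableContainer {
  enum State { RUNNING, PAUSED, KILLED }

  private State state = State.RUNNING;
  private final boolean supportsPause;

  PausableContainer(boolean supportsPause) {
    this.supportsPause = supportsPause;
  }

  State preempt() {
    // Preemption defaults to killing when pause is unsupported.
    state = supportsPause ? State.PAUSED : State.KILLED;
    return state;
  }

  State resume() {
    if (state == State.PAUSED) {  // resuming only makes sense from PAUSED
      state = State.RUNNING;
    }
    return state;
  }
}
```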
[jira] [Commented] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710598#comment-15710598 ]

Gour Saha commented on YARN-5769:
---------------------------------
[~jianhe] the 02 patch looks good. One minor suggestion:

h6. AbstractClusterBuildingActionArgs.java
{code}
@Parameter(names = {ARG_LIFETIME},
    description = "Life time of the application starting from now")
{code}

Should we modify the description to something like "Lifetime of the application from the time of request"?

> Integrate update app lifetime using feature implemented in YARN-5611
> --------------------------------------------------------------------
>
>                 Key: YARN-5769
>                 URL: https://issues.apache.org/jira/browse/YARN-5769
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Gour Saha
>            Assignee: Jian He
>             Fix For: yarn-native-services
>         Attachments: YARN-5769-yarn-native-services.01.patch, YARN-5769-yarn-native-services.02.patch
>
> The REST API PUT call provides the capability to update the lifetime of a running application. Once YARN-5611 is available we need to integrate it.
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710480#comment-15710480 ]

Sangjin Lee commented on YARN-5739:
-----------------------------------
I think the {{if (singleEntityRead())}} check in ApplicationEntityReader is meaningful. In the case of a multi-entity read for applications, the app id would be empty in the context, so that check needs to be done for applications if I'm not mistaken. The GenericEntityReader, on the other hand, should NOT perform that check. So in that sense, the refactoring would be a little more involved.

> Provide timeline reader API to list available timeline entity types for one application
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-5739
>                 URL: https://issues.apache.org/jira/browse/YARN-5739
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelinereader
>            Reporter: Li Lu
>            Assignee: Li Lu
>         Attachments: YARN-5739-YARN-5355.001.patch, YARN-5739-YARN-5355.002.patch, YARN-5739-YARN-5355.003.patch, YARN-5739-YARN-5355.004.patch, YARN-5739-YARN-5355.005.patch
>
> Right now we only show a part of the available timeline entity data in the new YARN UI. However, some data (especially library-specific data) cannot be queried through the web UI. It would be appealing for the UI to provide an "entity browser" for each YARN application. Actually, simply dumping out the available timeline entities (with proper pagination, of course) would be pretty helpful for UI users.
> On the timeline side, we're not far away from this goal. Right now I believe the only thing missing is to list all available entity types within one application. The challenge is that we're not storing this data for each application, but given that this kind of call is relatively rare (compared to writes and updates), we can perform some scanning at read time.
[jira] [Created] (YARN-5945) Add Slider debug config in yarn log4j
Gour Saha created YARN-5945:
----------------------------

             Summary: Add Slider debug config in yarn log4j
                 Key: YARN-5945
                 URL: https://issues.apache.org/jira/browse/YARN-5945
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Gour Saha
             Fix For: yarn-native-services

We should add the Slider debug config property to the yarn log4j file (commented out by default). This lets us point customers and end-users who want to run "yarn slider ..." CLI commands in debug mode to simply edit the log4j file and uncomment the line. Here is the property that needs to be added (in commented form):
{code}
#log4j.logger.org.apache.slider=DEBUG
{code}
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710429#comment-15710429 ]

Sangjin Lee commented on YARN-5739:
-----------------------------------
I'm more of a -0 on the name. If the alternatives are not any better, I'm OK with the currently proposed name.
[jira] [Commented] (YARN-5756) Add state-machine implementation for queues
[ https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710419#comment-15710419 ] Hadoop QA commented on YARN-5756: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} YARN-5756 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5756 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12837807/YARN-5756.1.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14137/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add state-machine implementation for queues > --- > > Key: YARN-5756 > URL: https://issues.apache.org/jira/browse/YARN-5756 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5756.1.patch, YARN-5756.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710412#comment-15710412 ]

Li Lu commented on YARN-5739:
-----------------------------
Also, the two augmentParams implementations in GenericEntityReader and ApplicationEntityReader seem quite similar. The only difference is that we need to distinguish whether a read is a single-entity read when we actually augment the params. Can we merge the two? I can expose the base implementation of augmentParams, but I was wondering if we can further simplify the logic here by just letting ApplicationEntityReader#augmentParams call super.
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710400#comment-15710400 ]

Li Lu commented on YARN-5739:
-----------------------------
But I'd certainly appreciate it if there are better names!
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710357#comment-15710357 ]

Li Lu commented on YARN-5739:
-----------------------------
bq. I hate to nitpick on the name, but AbstractTimelineStorageReader sounds a little awkward to me. Can we stick to the entity reader names? How about AbstractTimelineEntityReader or BaseTimelineEntityReader? Thoughts?

Avoiding the term "Entity" is a deliberate choice here. EntityTypeReader will not return any entities, so I'm avoiding the term Entity in the base class name.
[jira] [Commented] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.
[ https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710340#comment-15710340 ] Hadoop QA commented on YARN-5746: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 68 unchanged - 0 fixed = 69 total (was 68) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueState | | | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5746 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841150/YARN-5746.5.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4454d7fcedac 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 69fb70c | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14134/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/14134/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14134/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanage
[jira] [Commented] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710331#comment-15710331 ]

Gour Saha commented on YARN-5769:
---------------------------------
I was wondering why findInstance was not being called. It looks like you uploaded the 02 patch, which has that call. Reviewing the 02 patch now.
[jira] [Commented] (YARN-5756) Add state-machine implementation for queues
[ https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710327#comment-15710327 ]

Xuan Gong commented on YARN-5756:
---------------------------------
Rebased the patch.
[jira] [Updated] (YARN-5756) Add state-machine implementation for queues
[ https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuan Gong updated YARN-5756:
----------------------------
    Attachment: YARN-5756.2.patch
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710323#comment-15710323 ]

Sangjin Lee commented on YARN-5739:
-----------------------------------
Thanks for the update [~gtCarrera9]. I have some feedback specific to the refactoring.

(AbstractTimelineStorageReader.java)
- I hate to nitpick on the name, but {{AbstractTimelineStorageReader}} sounds a little awkward to me. Can we stick to the entity reader names? How about {{AbstractTimelineEntityReader}} or {{BaseTimelineEntityReader}}? Thoughts?
- l.36: It is a bit strange that subclasses such as {{TimelineEntityReader}} are public and yet the base class is not. If the extended classes are public, then the base class (i.e. the type) should also be public.
- l.38: nit: let's make {{context}} {{final}}

(ApplicationEntityReader.java)
- I think {{augmentParams()}} can be improved upon. Is it possible to rely on the base implementation instead of replicating very similar code from {{AbstractTimelineStorageReader.augmentParams()}}?
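The refactoring discussed in this thread, letting the application reader call the base `augmentParams()` and keep only its application-specific check, can be sketched as follows. The class and field names are simplified stand-ins for the timeline reader classes under discussion, not the actual patch code.

```java
// Simplified stand-in for AbstractTimelineStorageReader.
class BaseStorageReader {
  boolean defaultsApplied = false;

  void augmentParams() {
    defaultsApplied = true;  // shared filter/field defaulting logic
  }
}

// Simplified stand-in for ApplicationEntityReader.
class ApplicationReader extends BaseStorageReader {
  private final boolean singleEntityRead;
  private final String appId;
  boolean appIdValidated = false;

  ApplicationReader(boolean singleEntityRead, String appId) {
    this.singleEntityRead = singleEntityRead;
    this.appId = appId;
  }

  @Override
  void augmentParams() {
    super.augmentParams();  // reuse instead of replicating the base logic
    if (singleEntityRead) {
      // The app id is only guaranteed to be present for single-entity
      // reads, which is why this check stays in the subclass.
      appIdValidated = appId != null;
    }
  }
}
```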
[jira] [Updated] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jian He updated YARN-5769:
--------------------------
    Attachment: (was: YARN-5769-yarn-native-services.02.patch)
[jira] [Updated] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jian He updated YARN-5769:
--------------------------
    Attachment: YARN-5769-yarn-native-services.02.patch
[jira] [Commented] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710284#comment-15710284 ] Hadoop QA commented on YARN-5769: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 29s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 268 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api in yarn-native-services has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 0 new + 431 unchanged - 3 fixed = 431 total (was 434) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s{color} | {color:red} hadoop-yarn-services-api in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 11 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5769 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841161/YARN-5769-yarn-native-services.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9bf4aaf0b893 3.13.0-92-generic #139-Ubuntu SM
[jira] [Commented] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
[ https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710280#comment-15710280 ] Hadoop QA commented on YARN-5925: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 53s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5925 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841155/YARN-5925.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0d8b0e0590bb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 69fb70c | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14135/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14135/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message
[jira] [Updated] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5769: -- Attachment: (was: YARN-5769-yarn-native-services.02.patch) > Integrate update app lifetime using feature implemented in YARN-5611 > > > Key: YARN-5769 > URL: https://issues.apache.org/jira/browse/YARN-5769 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5769-yarn-native-services.01.patch, > YARN-5769-yarn-native-services.02.patch > > > The REST API PUT call provides capability to update the lifetime of a running > application. Once YARN-5611 is available we need to integrate it.
[jira] [Commented] (YARN-5922) Remove direct references of HBaseTimelineWriter/Reader in core ATS classes
[ https://issues.apache.org/jira/browse/YARN-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710190#comment-15710190 ] Hadoop QA commented on YARN-5922: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 50s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5922 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840148/YARN-5922.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 347d9f44d5b5 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 69fb70c | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14132/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14132/console | | Powered by | Apache Yetus 0.4.0-S
[jira] [Updated] (YARN-5769) Integrate update app lifetime using feature implemented in YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5769: -- Attachment: YARN-5769-yarn-native-services.02.patch > Integrate update app lifetime using feature implemented in YARN-5611 > > > Key: YARN-5769 > URL: https://issues.apache.org/jira/browse/YARN-5769 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5769-yarn-native-services.01.patch, > YARN-5769-yarn-native-services.02.patch > > > The REST API PUT call provides capability to update the lifetime of a running > application. Once YARN-5611 is available we need to integrate it.
[jira] [Comment Edited] (YARN-1593) support out-of-proc AuxiliaryServices
[ https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710178#comment-15710178 ] Haibo Chen edited comment on YARN-1593 at 11/30/16 11:25 PM: - Thanks for starting the work on this, [~vvasudev]! I’d like to understand the proposal better. A few comments/questions on the proposal; please correct me as necessary. It seems like system containers are overloaded in the design doc. From an NM’s perspective, my understanding is that system containers are a special container runtime (relative to the container types we have today in the NM) provided by the NM for system services to run their components/instances. In other cases, system containers represent components/instances of system services on the worker nodes. In the former case, we may only need to be concerned with issues such as classpath and container executors. For ShuffleHandler, for instance, it is an alternative to the in-process runtime it gets from the NM today. The latter is where we discuss whether the RM or the NM does the heavy lifting of managing system containers. As you mention, no one option suits all use cases. Option 1 suits some, while option 3 suits others. I wonder if this is because we are conflating two different types of containers in the proposal: (1) framework-specific services like MR shuffle, and (2) application-specific services. Framework services are to be run on all nodes that support the framework (e.g. MR). Since these run on every node, node-level configs (option 3) would work best. Application services (e.g. the ATS AM-companion collector), on the other hand, are application specific and need to run on a subset of cluster nodes; option 1 readily applies to these. Is this categorization accurate? And do you see merit in differentiating between these two? bq. 
Allow shuffle to run on the NodeManagers without requiring it to be setup as an AuxiliaryService Not sure if I understand this correctly; IMHO, we could let users continue with their current AuxiliaryService configuration, but just run the services in containers behind an AuxiliaryService proxy, as Junping said in the jira description. bq. Handling container status for system-containers - we will need to add logic to not act upon the container status of a system-container. Can you please elaborate more on this? Shouldn’t the NM try to relaunch system containers? Does this mean that the RM will take the responsibility of handling system-container failures? bq. I think discovery is going to be one major piece that needs to be addressed from the beginning Agreed with Sangjin that the discovery problem needs to be addressed right at the beginning. For option 3, I think we can add a queryable registry to AuxiliaryServices when the NM launches a proxied AuxiliaryService, assuming that the NM launches the AuxiliaryServices in the right order and each AuxiliaryService knows the services it depends on. bq. the NodeManager will block container requests until all the system-containers are running With global scheduling and resource affinity, the NM does not necessarily need to block container launching. The NM can launch system containers asynchronously and report to the ResourceManager upon launch success, and the RM can then schedule containers on a node only after the services those containers depend on have been launched there. But that’s way in the future, I guess. bq. We can’t solve the dependency management and affinity/anti-affinity requirements. (one of the cons of option 3) Not quite sure how option 1 solves the affinity requirement. Can you elaborate a little more on this? 
To solve the dependency management issue, one thing that occurred to me, though I have not thought it through in much detail, is that we could have the RM manage all system services together and construct a DAG of the system services that need to be launched on each NM. Alternatively, the RM can just decide which services need to be launched on which nodes, with their dependencies clearly defined, and each NM can then construct the DAG itself and launch the services in topological order. This, however, does put some burden on the RM. was (Author: haibochen): Thanks for starting the work on this, Varun Vasudev! I’d like to understand the proposal better. A few comments/questions on the proposal. Please correct me as necessary. It seems like system containers are overloaded in the design doc. From a NM’s perspective, my understanding is that system containers are special container runtime (relative to the container types we have today in NM) provided by NM to be used by system services to run their components/instances. In other cases, system containers represent components/instances of system services on the worker nodes. In the former case, we may only need to be concerned with issues such as classpath and container executors. For ShuffleHandler for instance, it is an alter
[jira] [Commented] (YARN-1593) support out-of-proc AuxiliaryServices
[ https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710178#comment-15710178 ] Haibo Chen commented on YARN-1593: -- Thanks for starting the work on this, Varun Vasudev! I’d like to understand the proposal better. A few comments/questions on the proposal; please correct me as necessary. It seems like system containers are overloaded in the design doc. From an NM’s perspective, my understanding is that system containers are a special container runtime (relative to the container types we have today in the NM) provided by the NM for system services to run their components/instances. In other cases, system containers represent components/instances of system services on the worker nodes. In the former case, we may only need to be concerned with issues such as classpath and container executors. For ShuffleHandler, for instance, it is an alternative to the in-process runtime it gets from the NM today. The latter is where we discuss whether the RM or the NM does the heavy lifting of managing system containers. As you mention, no one option suits all use cases. Option 1 suits some, while option 3 suits others. I wonder if this is because we are conflating two different types of containers in the proposal: (1) framework-specific services like MR shuffle, and (2) application-specific services. Framework services are to be run on all nodes that support the framework (e.g. MR). Since these run on every node, node-level configs (option 3) would work best. Application services (e.g. the ATS AM-companion collector), on the other hand, are application specific and need to run on a subset of cluster nodes; option 1 readily applies to these. Is this categorization accurate? And do you see merit in differentiating between these two? bq. 
Allow shuffle to run on the NodeManagers without requiring it to be setup as an AuxiliaryService Not sure if I understand this correctly; IMHO, we could let users continue with their current AuxiliaryService configuration, but just run the services in containers behind an AuxiliaryService proxy, as Junping said in the jira description. bq. Handling container status for system-containers - we will need to add logic to not act upon the container status of a system-container. Can you please elaborate more on this? Shouldn’t the NM try to relaunch system containers? Does this mean that the RM will take the responsibility of handling system-container failures? bq. I think discovery is going to be one major piece that needs to be addressed from the beginning Agreed with Sangjin that the discovery problem needs to be addressed right at the beginning. For option 3, I think we can add a queryable registry to AuxiliaryServices when the NM launches a proxied AuxiliaryService, assuming that the NM launches the AuxiliaryServices in the right order and each AuxiliaryService knows the services it depends on. bq. the NodeManager will block container requests until all the system-containers are running With global scheduling and resource affinity, the NM does not necessarily need to block container launching. The NM can launch system containers asynchronously and report to the ResourceManager upon launch success, and the RM can then schedule containers on a node only after the services those containers depend on have been launched there. But that’s way in the future, I guess. bq. We can’t solve the dependency management and affinity/anti-affinity requirements. (one of the cons of option 3) Not quite sure how option 1 solves the affinity requirement. Can you elaborate a little more on this? 
To solve the dependency management issue, one thing that occurred to me, though I have not thought it through in much detail, is that we could have the RM manage all system services together and construct a DAG of the system services that need to be launched on each NM. Alternatively, the RM can just decide which services need to be launched on which nodes, with their dependencies clearly defined, and each NM can then construct the DAG itself and launch the services in topological order. This, however, does put some burden on the RM. > support out-of-proc AuxiliaryServices > - > > Key: YARN-1593 > URL: https://issues.apache.org/jira/browse/YARN-1593 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager, rolling upgrade >Reporter: Ming Ma >Assignee: Varun Vasudev > Attachments: SystemContainersandSystemServices.pdf > > > AuxiliaryServices such as ShuffleHandler currently run in the same process as > NM. There are some benefits to host them in dedicated processes. > 1. NM rolling restart. If we want to upgrade YARN , NM restart will force the > ShuffleHandler restart. If ShuffleHandler runs as a separate process, > ShuffleHandler can continue to run during NM restart. NM can reconnect the
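The DAG/topological-order idea in the comment above can be sketched as follows. This is an editorial illustration with hypothetical service names (`registry`, `shuffle`, `collector`), not code from any YARN patch: given each node's system services and their declared dependencies, Kahn's algorithm yields a launch order (dependencies first) and detects cycles.

```java
import java.util.*;

/** Editorial sketch: compute a launch order for system services on a node
 *  from their declared dependencies, via Kahn's topological sort. */
public class ServiceLaunchOrder {

    /** deps maps each service to the services it depends on. */
    public static List<String> launchOrder(Map<String, List<String>> deps) {
        Map<String, Integer> pending = new HashMap<>();          // unmet dependency count
        Map<String, List<String>> dependents = new HashMap<>();  // reverse edges
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            pending.putIfAbsent(e.getKey(), 0);
            for (String d : e.getValue()) {
                pending.putIfAbsent(d, 0);
                pending.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : pending.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String s = ready.poll();
            order.add(s);                                        // launch s at this point
            for (String t : dependents.getOrDefault(s, List.of()))
                if (pending.merge(t, -1, Integer::sum) == 0) ready.add(t);
        }
        if (order.size() != pending.size())
            throw new IllegalStateException("dependency cycle among system services");
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("registry", List.of());            // hypothetical base service
        deps.put("shuffle", List.of("registry"));   // shuffle depends on registry
        deps.put("collector", List.of("registry"));
        // registry comes before shuffle and collector in the printed order
        System.out.println(launchOrder(deps));
    }
}
```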
[jira] [Commented] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710136#comment-15710136 ] Hadoop QA commented on YARN-5871: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 1s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 32 new + 15 unchanged - 6 fixed = 47 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5871 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841137/YARN-5871-YARN-2915.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux ee1bef95c63d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 6452592 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14131/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14131/testReport
[jira] [Updated] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
[ https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5925: - Attachment: YARN-5925.02.patch YARN-5925-YARN-5355.02.patch New patches to address the reported checkstyle warnings > Extract hbase-backend-exclusive utility methods from TimelineStorageUtil > > > Key: YARN-5925 > URL: https://issues.apache.org/jira/browse/YARN-5925 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-5925-YARN-5355.01.patch, > YARN-5925-YARN-5355.02.patch, YARN-5925.01.patch, YARN-5925.02.patch > >
[jira] [Commented] (YARN-5933) ATS stale entries in active directory causes ApplicationNotFoundException in RM
[ https://issues.apache.org/jira/browse/YARN-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710108#comment-15710108 ] Li Lu commented on YARN-5933: - bq. AppLogs#parseSummaryLogs() can skip subsequent getAppState for Unknown apps and move them to complete after unknownActiveSecs Wouldn't this abandon the ability of the ATS to quickly "recover" an application's state from unknown to known? Once an application's status becomes unknown, the timeline server would no longer check it, so it would not be possible to change the app's status back to known. For example, if the ATS server were temporarily isolated from the rest of the cluster, it would stop checking every app's status even though the isolation lasted only a short while. To solve this, we need a separate scanning thread for "lost" applications, scanning at a different pace. > ATS stale entries in active directory causes ApplicationNotFoundException in > RM > --- > > Key: YARN-5933 > URL: https://issues.apache.org/jira/browse/YARN-5933 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph > > On a secure cluster where ATS is down, a Tez job submission will fail while > getting the TIMELINE_DELEGATION_TOKEN with the below exception > {code} > 0: jdbc:hive2://kerberos-2.openstacklocal:100> select csmallint from > alltypesorc group by csmallint; > INFO : Session is already open > INFO : Dag name: select csmallint from alltypesor...csmallint(Stage-1) > INFO : Tez session was closed. Reopening... > ERROR : Failed to execute tez graph. > java.lang.RuntimeException: Failed to connect to timeline server. Connection > retries limit exceeded. 
The posted timeline event may be missing > at > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:266) > at > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:590) > at > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:506) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken(YarnClientImpl.java:349) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:330) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:250) > at > org.apache.tez.client.TezYarnClient.submitApplication(TezYarnClient.java:72) > at org.apache.tez.client.TezClient.start(TezClient.java:409) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.closeAndOpen(TezSessionPoolManager.java:311) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.submit(TezTask.java:453) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:180) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} > Tez YarnClient has received an applicationID from RM. On Restarting ATS now, > ATS tries to get the application report from RM and so RM will throw > ApplicationNotFoundException. ATS will keep on requesting and which f
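The trade-off Li Lu describes above — skipping getAppState for unknown apps versus being able to recover them — can be sketched as a fast scan that demotes apps it cannot reach and a separate, slower scan that rechecks only the demoted ones. This is a hypothetical illustration of the proposal, not the actual ATS AppLogs code; all class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the "separate scanning thread for lost apps" idea:
// the fast scan skips apps already marked UNKNOWN, while a slower scan
// rechecks only those, so a transient RM outage does not lose them forever.
public class LostAppScanner {
    public enum Status { KNOWN, UNKNOWN }

    private final Map<String, Status> apps = new HashMap<>();

    public void track(String appId) { apps.put(appId, Status.KNOWN); }

    // Fast path: only checks apps still believed KNOWN.
    public void fastScan(Function<String, Boolean> rmReachable) {
        for (Map.Entry<String, Status> e : apps.entrySet()) {
            if (e.getValue() == Status.KNOWN && !rmReachable.apply(e.getKey())) {
                e.setValue(Status.UNKNOWN);  // demote, but do not drop
            }
        }
    }

    // Slow path: runs at a different (longer) interval and rechecks
    // only UNKNOWN apps, restoring them once the RM answers again.
    public void slowScan(Function<String, Boolean> rmReachable) {
        for (Map.Entry<String, Status> e : apps.entrySet()) {
            if (e.getValue() == Status.UNKNOWN && rmReachable.apply(e.getKey())) {
                e.setValue(Status.KNOWN);
            }
        }
    }

    public Status status(String appId) { return apps.get(appId); }
}
```

The point of the two scan paces is that the expensive recheck of "lost" apps happens rarely, while normal apps keep being scanned at the usual rate.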
[jira] [Commented] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710110#comment-15710110 ] Hadoop QA commented on YARN-5871: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 54 new + 21 unchanged - 0 fixed = 75 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 2 new + 160 unchanged - 0 fixed = 162 total (was 160) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 38s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 53s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5871 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841133/YARN-5871-YARN-2915.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux a150d2681c6f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precomm
[jira] [Commented] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.
[ https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710075#comment-15710075 ] Xuan Gong commented on YARN-5746: - Thanks for the review, [~jianhe]. Attached a new patch. > The state of the parentQueue and its childQueues should be synchronized. > > > Key: YARN-5746 > URL: https://issues.apache.org/jira/browse/YARN-5746 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, resourcemanager >Reporter: Xuan Gong >Assignee: Xuan Gong > Labels: oct16-easy > Attachments: YARN-5746.1.patch, YARN-5746.2.patch, YARN-5746.3.patch, > YARN-5746.4.patch, YARN-5746.5.patch > > > The state of the parentQueue and its childQueues need to be synchronized. > * If the state of the parentQueue becomes STOPPED, the states of its > childQueues need to become STOPPED as well. > * If we change the state of the queue to RUNNING, we should make sure the > states of all its ancestors are RUNNING. Otherwise, we need to fail this > operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
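The two synchronization rules in the issue description amount to a small tree invariant: STOPPED cascades downward, and RUNNING requires every ancestor to already be RUNNING. A minimal sketch of the rules (hypothetical classes, not the actual CapacityScheduler queue hierarchy):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two rules from YARN-5746:
//  1. stopping a parent queue stops all of its descendants;
//  2. a queue may only be set RUNNING if every ancestor is RUNNING.
public class QueueStateDemo {
    public enum State { RUNNING, STOPPED }

    public static class Queue {
        final String name;
        final Queue parent;
        final List<Queue> children = new ArrayList<>();
        State state = State.RUNNING;

        Queue(String name, Queue parent) {
            this.name = name;
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }

        // Rule 1: STOPPED cascades down to every child queue.
        void stop() {
            state = State.STOPPED;
            for (Queue c : children) c.stop();
        }

        // Rule 2: fail the transition unless all ancestors are RUNNING.
        boolean activate() {
            for (Queue a = parent; a != null; a = a.parent) {
                if (a.state != State.RUNNING) return false;
            }
            state = State.RUNNING;
            return true;
        }
    }
}
```

Returning false from `activate()` models "fail this operation" from the description; the real scheduler would surface this as a refreshQueues error instead.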
[jira] [Updated] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.
[ https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5746: Attachment: YARN-5746.5.patch > The state of the parentQueue and its childQueues should be synchronized. > > > Key: YARN-5746 > URL: https://issues.apache.org/jira/browse/YARN-5746 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, resourcemanager >Reporter: Xuan Gong >Assignee: Xuan Gong > Labels: oct16-easy > Attachments: YARN-5746.1.patch, YARN-5746.2.patch, YARN-5746.3.patch, > YARN-5746.4.patch, YARN-5746.5.patch > > > The state of the parentQueue and its childQueues need to be synchronized. > * If the state of the parentQueue becomes STOPPED, the states of its > childQueues need to become STOPPED as well. > * If we change the state of the queue to RUNNING, we should make sure the > states of all its ancestors are RUNNING. Otherwise, we need to fail this > operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5756) Add state-machine implementation for queues
[ https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710022#comment-15710022 ] Li Lu commented on YARN-5756: - Hi [~xgong], I tried to apply the patch locally, but there were several issues applying it to the latest trunk. One significant issue is that SchedulerQueueContext.java appears to be missing in trunk. Could you please rebase your patch? Thanks! > Add state-machine implementation for queues > --- > > Key: YARN-5756 > URL: https://issues.apache.org/jira/browse/YARN-5756 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5756.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5944) Native services AM should remain up if RM is down
[ https://issues.apache.org/jira/browse/YARN-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709988#comment-15709988 ] Hadoop QA commented on YARN-5944: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 51s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 268 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 1 new + 163 unchanged - 1 fixed = 164 total (was 164) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 11 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5944 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841145/YARN-5944-yarn-native-services.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux da20477b0e13 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / 9db0537 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/14133/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html | | javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/14133/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn
[jira] [Updated] (YARN-5944) Native services AM should remain up if RM is down
[ https://issues.apache.org/jira/browse/YARN-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-5944: - Attachment: YARN-5944-yarn-native-services.002.patch Thanks for taking a look, [~gsaha]. Attaching a new patch that unsets the failover max attempts. > Native services AM should remain up if RM is down > - > > Key: YARN-5944 > URL: https://issues.apache.org/jira/browse/YARN-5944 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-5944-yarn-native-services.001.patch, > YARN-5944-yarn-native-services.002.patch > > > If the RM is down, the native services AM will retry connecting to the RM > until yarn.resourcemanager.connect.max-wait.ms has been exceeded. At that > point, the AM will stop itself. For a long-running service, we would prefer > the AM to stay up while the RM is down. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler
[ https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709921#comment-15709921 ] Hudson commented on YARN-5761: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10920 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10920/]) YARN-5761. Separate QueueManager from Scheduler. (Xuan Gong via (gtcarrera9: rev 69fb70c31aa277f7fb14b05c0185ddc5cd90793d) * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerQueueManager.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java > Separate QueueManager from Scheduler > > > Key: YARN-5761 > URL: https://issues.apache.org/jira/browse/YARN-5761 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Xuan Gong >Assignee: Xuan Gong > Labels: oct16-medium > Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, > YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, > YARN-5761.6.patch, YARN-5761.7.patch, YARN-5761.7.patch, YARN-5761.8.patch > > > Currently, in the scheduler code, we are doing both queue management and > scheduling work. We'd better separate the queue manager out of the scheduler > logic. In that case, it would be much easier and safer to extend. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5941) Slider handles "per.component" for multiple components incorrectly
[ https://issues.apache.org/jira/browse/YARN-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5941: Fix Version/s: yarn-native-services > Slider handles "per.component" for multiple components incorrectly > -- > > Key: YARN-5941 > URL: https://issues.apache.org/jira/browse/YARN-5941 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora >Assignee: Billie Rinaldi > Fix For: yarn-native-services > > Attachments: YARN-5941-yarn-native-services.001.patch > > > When multiple components are started by slider and each component should have > a different property file, "per.component" should be set to true for each > component. > {code:title=component1} > 'properties': { > 'site.app-site.job-builder.class': 'xxx', > 'site.app-site.rpc.server.hostname': 'xxx', > 'site.app-site.per.component': 'true' > } > {code} > {code:title=component2} > 'properties': { > 'site.app-site.job-builder.class.component2': > 'yyy', > 'site.app-site.rpc.server.hostname.component2': > 'yyy', > 'site.app-site.per.component': 'true' > } > {code} > While doing that, one of the component's property file gets > "per.component"="true" in the slider generated property file. > {code:title=property file for component1} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > per.component=true > job-builder.class=xxx > rpc.server.hostname=xxx{code} > {code:title=property file for component2} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > job-builder.class.component2=yyy > rpc.server.hostname.component2=yyy{code} > "per.component" should not be added in any component's property file. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
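The fix being discussed above reduces to a filter step when Slider materializes a component's property file: `per.component` controls the generation logic but must never be emitted itself. A hedged sketch of that step (class and method names are hypothetical, not from the Slider codebase):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the YARN-5941 fix: "per.component" is a control
// flag for Slider itself, so it is stripped before a component's property
// file is generated; all other site properties pass through unchanged.
public class PerComponentFilter {
    static final String PER_COMPONENT_KEY = "per.component";

    public static Map<String, String> forComponent(Map<String, String> siteProps) {
        Map<String, String> out = new LinkedHashMap<>(siteProps);
        out.remove(PER_COMPONENT_KEY);  // never emit the control key
        return out;
    }
}
```

With this filter in place, the generated file for component1 in the example above would contain only `job-builder.class` and `rpc.server.hostname`, never `per.component=true`.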
[jira] [Updated] (YARN-5925) Extract hbase-backend-exclusive utility methods from TimelineStorageUtil
[ https://issues.apache.org/jira/browse/YARN-5925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5925: - Summary: Extract hbase-backend-exclusive utility methods from TimelineStorageUtil (was: Extract hbase-backend-exclusive utility methods from TImelineStorageUtil) > Extract hbase-backend-exclusive utility methods from TimelineStorageUtil > > > Key: YARN-5925 > URL: https://issues.apache.org/jira/browse/YARN-5925 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-5925-YARN-5355.01.patch, YARN-5925.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5871: --- Attachment: YARN-5871-YARN-2915.04.patch Addressing some of Yetus complaints > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Labels: federation > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch, YARN-5871-YARN-2915.02.patch, > YARN-5871-YARN-2915.03.patch, YARN-5871-YARN-2915.04.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservation have been placed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5871: --- Attachment: YARN-5871-YARN-2915.03.patch > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Labels: federation > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch, YARN-5871-YARN-2915.02.patch, > YARN-5871-YARN-2915.03.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservation have been placed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write
[ https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709863#comment-15709863 ] Junping Du commented on YARN-5915: -- bq. Is the output stream unbuffered such that the write is acting like a flush? That's also what I am suspecting. Can we assume that the behavior of a buffered and an unbuffered output stream will be consistent in our case? The _flushBuffer() implementation makes that hard to guarantee. It may be good to buffer things at the JsonGenerator level:
{noformat}
protected final void _flushBuffer() throws IOException {
  int len = _outputTail - _outputHead;
  if (len > 0) {
    int offset = _outputHead;
    _outputTail = _outputHead = 0;
    _writer.write(_outputBuffer, offset, len);
  }
}
{noformat}
> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every > event write > > > Key: YARN-5915 > URL: https://issues.apache.org/jira/browse/YARN-5915 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 3.0.0-alpha1 >Reporter: Atul Sikaria >Assignee: Atul Sikaria > Attachments: YARN-5915.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
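Junping's question can be illustrated without Jackson at all: whether draining a generator's internal buffer "acts like a flush" depends entirely on whether the underlying writer is buffered. A small java.io sketch (this is not the actual FileSystemTimelineWriter code; the counting sink is invented for the demonstration):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;

// Counts how many times data actually reaches the underlying sink.
// Wrapping the sink in a BufferedWriter makes small per-event writes
// coalesce, so draining an internal buffer no longer acts like a flush.
public class CountingSink extends Writer {
    public int writesSeen = 0;

    @Override public void write(char[] buf, int off, int len) { writesSeen++; }
    @Override public void flush() {}
    @Override public void close() {}

    // Simulate one small write per timeline event.
    public static void writeEvents(Writer sink, int events) throws IOException {
        for (int i = 0; i < events; i++) {
            sink.write("{\"event\":" + i + "}");
        }
    }
}
```

With an unbuffered sink every event write lands on the sink immediately; with a BufferedWriter in between, nothing reaches it until the buffer fills or flush() is called — which is roughly the behavior change being proposed for the ATS writer.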
[jira] [Comment Edited] (YARN-5694) ZKRMStateStore can prevent the transition to standby in branch-2.7 if the ZK node is unreachable
[ https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709850#comment-15709850 ] Daniel Templeton edited comment on YARN-5694 at 11/30/16 9:35 PM: -- The test failure looks legit (which is odd since it worked locally), but the rest of the issues are bogus. I'll take a closer look at the test failure. was (Author: templedf): The test failure looks legit (which is odd since it worked locally), but the rest of the issues are bogus. > ZKRMStateStore can prevent the transition to standby in branch-2.7 if the ZK > node is unreachable > > > Key: YARN-5694 > URL: https://issues.apache.org/jira/browse/YARN-5694 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.3 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Labels: oct16-medium > Attachments: YARN-5694.001.patch, YARN-5694.002.patch, > YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, > YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, > YARN-5694.008.patch, YARN-5694.branch-2.6.001.patch, > YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch, > YARN-5694.branch-2.7.004.patch, YARN-5694.branch-2.7.005.patch > > > {{ZKRMStateStore.doStoreMultiWithRetries()}} holds the lock while trying to > talk to ZK. If the connection fails, it will retry while still holding the > lock. The retries are intended to be strictly time limited, but in the case > that the ZK node is unreachable, the time limit fails, resulting in the > thread holding the lock for over an hour. Transitioning the RM to standby > requires that same lock, so in exactly the case that the RM should be > transitioning to standby, the {{VerifyActiveStatusThread}} blocks it from > happening. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5694) ZKRMStateStore can prevent the transition to standby in branch-2.7 if the ZK node is unreachable
[ https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709850#comment-15709850 ] Daniel Templeton commented on YARN-5694: The test failure looks legit (which is odd since it worked locally), but the rest of the issues are bogus. > ZKRMStateStore can prevent the transition to standby in branch-2.7 if the ZK > node is unreachable > > > Key: YARN-5694 > URL: https://issues.apache.org/jira/browse/YARN-5694 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.3 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Labels: oct16-medium > Attachments: YARN-5694.001.patch, YARN-5694.002.patch, > YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, > YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, > YARN-5694.008.patch, YARN-5694.branch-2.6.001.patch, > YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch, > YARN-5694.branch-2.7.004.patch, YARN-5694.branch-2.7.005.patch > > > {{ZKRMStateStore.doStoreMultiWithRetries()}} holds the lock while trying to > talk to ZK. If the connection fails, it will retry while still holding the > lock. The retries are intended to be strictly time limited, but in the case > that the ZK node is unreachable, the time limit fails, resulting in the > thread holding the lock for over an hour. Transitioning the RM to standby > requires that same lock, so in exactly the case that the RM should be > transitioning to standby, the {{VerifyActiveStatusThread}} blocks it from > happening. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
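The failure mode in the description — retries that were meant to be time-limited but end up holding the lock for over an hour — is commonly avoided by re-checking a hard wall-clock deadline before every attempt, rather than relying on per-attempt timeouts that can balloon when the node is unreachable. A hypothetical sketch of that pattern (not the actual ZKRMStateStore code):

```java
import java.util.function.LongSupplier;
import java.util.function.Supplier;

// Hypothetical sketch: retry an operation under a hard wall-clock deadline.
// Each attempt re-checks the deadline, so an unreachable node cannot keep
// the caller (and any lock it holds) busy past the allotted window.
public class DeadlineRetry {
    public static boolean retryUntil(Supplier<Boolean> op, LongSupplier clock,
                                     long deadlineMillis) {
        while (clock.getAsLong() < deadlineMillis) {
            if (op.get()) {
                return true;   // success: stop retrying
            }
        }
        return false;          // deadline hit: give up so the lock can be released
    }
}
```

The clock is injected as a supplier so the bound can be tested deterministically; in the RM the equivalent would be comparing against the configured retry window before each ZK operation.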
[jira] [Updated] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5871: --- Attachment: YARN-5871-YARN-2915.02.patch Uploading the right patch. > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Labels: federation > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch, YARN-5871-YARN-2915.02.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservations have been placed.
[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler
[ https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709840#comment-15709840 ] Li Lu commented on YARN-5761: - Will commit this patch shortly. > Separate QueueManager from Scheduler > > > Key: YARN-5761 > URL: https://issues.apache.org/jira/browse/YARN-5761 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Xuan Gong >Assignee: Xuan Gong > Labels: oct16-medium > Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, > YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, > YARN-5761.6.patch, YARN-5761.7.patch, YARN-5761.7.patch, YARN-5761.8.patch > > > Currently, the scheduler code handles both queue management and scheduling > work. We'd better separate the queue manager out of the scheduler logic. In > that case, it would be much easier and safer to extend.
[jira] [Commented] (YARN-5694) ZKRMStateStore can prevent the transition to standby in branch-2.7 if the ZK node is unreachable
[ https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709835#comment-15709835 ] Hadoop QA commented on YARN-5694: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 42s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 52s{color} | {color:green} branch-2.6 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} branch-2.6 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} branch-2.6 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} branch-2.6 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} branch-2.6 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} branch-2.6 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} branch-2.6 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.6 failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} branch-2.6 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3765 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 32s{color} | {color:red} The patch 74 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 1s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_121. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 43s{color} | {color:red} The patch generated 127 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}125m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_111 Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization | | | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore | | | hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore | | JDK v1.7.0_121 Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization
[jira] [Commented] (YARN-5709) Cleanup Curator-based leader election code
[ https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709834#comment-15709834 ] Jian He commented on YARN-5709: --- When I had offline discussion with [~xgong], the ultimate goal is to remove the old EmbeddedElectorService, as we saw the hadoop common's elector implementation kept on having issues one after another in our internal stress testing. We wanted to just replace that with curator implementation. After all, there's no point maintaining two. bq. I feel the code should be checking for !curatorBased instead of isEmbeddedElector which code is this ? bq. The code that initializes the elector should be at the same place irrespective of whether it is curator-based or not. Does this mean CuratorLeaderElector initialization should be inside the AdminService ? I don't think it needs to be the case even for the old EmbeddedElectorService too, if you look at the implementation, there's no dependency between the EmbeddedElectorService and AdminService at all. > Cleanup Curator-based leader election code > -- > > Key: YARN-5709 > URL: https://issues.apache.org/jira/browse/YARN-5709 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Critical > > While reviewing YARN-5677 and YARN-5694, I noticed we could make the > curator-based election code cleaner. It is nicer to get this fixed in 2.8 > before we ship it, but this can be done at a later time as well. > # By EmbeddedElector, we meant it was running as part of the RM daemon. Since > the Curator-based elector is also running embedded, I feel the code should be > checking for {{!curatorBased}} instead of {{isEmbeddedElector}} > # {{LeaderElectorService}} should probably be named > {{CuratorBasedEmbeddedElectorService}} or some such. 
> # The code that initializes the elector should be at the same place > irrespective of whether it is curator-based or not. > # We seem to be caching the CuratorFramework instance in RM. It makes more > sense for it to be in RMContext. If others are okay with it, we might even be > better off having an {{RMContext#getCurator()}} method to lazily create the > curator framework and then cache it.
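Point 4 above, lazily creating and caching the CuratorFramework behind a getter, is a standard memoized-singleton pattern. A sketch in which a generic factory stands in for the Curator client construction (this is not the actual {{RMContext}} code):

```java
import java.util.function.Supplier;

// Sketch of a lazily created, cached singleton behind a getter, as proposed
// for RMContext#getCurator(). The expensive factory runs at most once.
public class LazyCache<T> {
    private final Supplier<T> factory;
    private volatile T instance;   // volatile for safe publication

    public LazyCache(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        T local = instance;
        if (local == null) {
            synchronized (this) {      // double-checked locking
                local = instance;
                if (local == null) {
                    local = factory.get();
                    instance = local;
                }
            }
        }
        return local;
    }
}
```

The double-checked locking keeps the common path (already initialized) lock-free, which matters if the getter sits on a hot path in the RM.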
[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler
[ https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709832#comment-15709832 ] Eric Payne commented on YARN-5889: -- {quote} bq. It seems like this should be longer than 1 ms. It could be possible that containers are released and created very fast in a big cluster. {quote} [~sunilg], I now realize that with this design, the {{preComputedUserLimit}} cache will become out of date very quickly if the {{ComputeUserLimitAsyncThread}} thread is not run in a very tight loop. Even with that, {{preComputedUserLimit}} could still be out of date at the moment the scheduler needs to fill a large request. On the other hand, with this design the user limit resource is being calculated a lot more often than it is currently. Currently, it is only being calculated during the scheduler loop, and only then for apps that are asking for resources. However, this design calculates it twice every millisecond (once with partition exclusivity and once without). If a cluster is not full and has mostly apps with long-running containers, then this is being calculated thousands of times when it doesn't need to be. Instead, could we add a boolean flag to {{UserToPartitionRecord}}? This flag would be set when a container is allocated or released for an app from that user. Then, whenever {{getComputedUserLimit}} is called, if the flag is set, it calls {{computeUserLimit}} and clears the flag. What do you think? > Improve user-limit calculation in capacity scheduler > > > Key: YARN-5889 > URL: https://issues.apache.org/jira/browse/YARN-5889 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5889.v0.patch, YARN-5889.v1.patch, > YARN-5889.v2.patch > > > Currently user-limit is computed during every heartbeat allocation cycle with > a write lock.
To improve performance, this ticket focuses on moving > user-limit calculation out of the heartbeat allocation flow.
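The boolean-flag scheme Eric proposes above, mark the cached value dirty on every allocation or release and recompute lazily inside {{getComputedUserLimit}}, can be sketched with an {{AtomicBoolean}} (all names here are illustrative stand-ins for the real scheduler classes, and the computation is a placeholder):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: cache a computed value and recompute it lazily, only after a
// writer has marked it dirty, instead of recomputing on a fixed timer.
public class DirtyFlagCache {
    private final AtomicBoolean dirty = new AtomicBoolean(true);
    private volatile long cachedUserLimit;
    private volatile long clusterResource;

    /** Called when a container is allocated or released for this user. */
    public void markDirty(long newClusterResource) {
        clusterResource = newClusterResource;
        dirty.set(true);
    }

    /** Called from the scheduler loop; recomputes only when needed. */
    public long getComputedUserLimit() {
        // compareAndSet clears the flag atomically, so concurrent readers
        // cannot both recompute for the same invalidation.
        if (dirty.compareAndSet(true, false)) {
            cachedUserLimit = computeUserLimit();
        }
        return cachedUserLimit;
    }

    private long computeUserLimit() {
        return clusterResource / 2;  // placeholder for the real calculation
    }
}
```

On an idle cluster with long-running containers this recomputes exactly once per invalidation, instead of twice per millisecond.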
[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application
[ https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709830#comment-15709830 ] Li Lu commented on YARN-5739: - Any more comments, folks? Thanks! > Provide timeline reader API to list available timeline entity types for one > application > --- > > Key: YARN-5739 > URL: https://issues.apache.org/jira/browse/YARN-5739 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Li Lu >Assignee: Li Lu > Attachments: YARN-5739-YARN-5355.001.patch, > YARN-5739-YARN-5355.002.patch, YARN-5739-YARN-5355.003.patch, > YARN-5739-YARN-5355.004.patch, YARN-5739-YARN-5355.005.patch > > > Right now we only show a part of available timeline entity data in the new > YARN UI. However, some data (especially library specific data) are not > possible to be queried out by the web UI. It will be appealing for the UI to > provide an "entity browser" for each YARN application. Actually, simply > dumping out available timeline entities (with proper pagination, of course) > would be pretty helpful for UI users. > On timeline side, we're not far away from this goal. Right now I believe the > only thing missing is to list all available entity types within one > application. The challenge here is that we're not storing this data for each > application, but given this kind of call is relatively rare (compared to > writes and updates) we can perform some scanning during the read time.
[jira] [Commented] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709820#comment-15709820 ] Hadoop QA commented on YARN-5871: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 9 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 22s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 38 new + 21 unchanged - 0 fixed = 59 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 2 new + 160 unchanged - 0 fixed = 162 total (was 160) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 18s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 54s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.federation.policies.amrmproxy.TestBroadcastAMRMProxyFederationPolicy | | | hadoop.yarn.server.federation.policies.amrmproxy.TestLocalityMulticastAMRMProxyPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5871 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841118/YARN-5871-YARN-2915.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux c5a420243536 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | |
[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups
[ https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709797#comment-15709797 ] Hadoop QA commented on YARN-5849: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 26 unchanged - 23 fixed = 26 total (was 49) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 230 unchanged - 5 fixed = 230 total (was 235) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {col
[jira] [Commented] (YARN-5941) Slider handles "per.component" for multiple components incorrectly
[ https://issues.apache.org/jira/browse/YARN-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709788#comment-15709788 ] Gour Saha commented on YARN-5941: - 001 patch looks good to me. +1. > Slider handles "per.component" for multiple components incorrectly > -- > > Key: YARN-5941 > URL: https://issues.apache.org/jira/browse/YARN-5941 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora >Assignee: Billie Rinaldi > Attachments: YARN-5941-yarn-native-services.001.patch > > > When multiple components are started by slider and each component should have > a different property file, "per.component" should be set to true for each > component. > {code:title=component1} > 'properties': { > 'site.app-site.job-builder.class': 'xxx', > 'site.app-site.rpc.server.hostname': 'xxx', > 'site.app-site.per.component': 'true' > } > {code} > {code:title=component2} > 'properties': { > 'site.app-site.job-builder.class.component2': > 'yyy', > 'site.app-site.rpc.server.hostname.component2': > 'yyy', > 'site.app-site.per.component': 'true' > } > {code} > While doing that, one of the components' property files gets > "per.component"="true" in the Slider-generated property file. > {code:title=property file for component1} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > per.component=true > job-builder.class=xxx > rpc.server.hostname=xxx{code} > {code:title=property file for component2} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > job-builder.class.component2=yyy > rpc.server.hostname.component2=yyy{code} > "per.component" should not be added in any component's property file.
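The fix the report implies, never emit the {{per.component}} control key into a generated property file (and strip the component suffix from component-specific keys), can be modeled with {{java.util.Properties}}. This is a hedged sketch of the expected behavior, not the actual Slider patch:

```java
import java.util.Properties;

// Sketch: generate a per-component property file from site properties,
// dropping the "per.component" control key and stripping the component
// suffix from keys scoped to this component.
public class PerComponentProps {
    public static Properties forComponent(Properties site, String component) {
        Properties out = new Properties();
        String suffix = "." + component;
        for (String key : site.stringPropertyNames()) {
            if (key.equals("per.component")) {
                continue;  // control flag; must not leak into any component file
            }
            String value = site.getProperty(key);
            if (key.endsWith(suffix)) {
                // "job-builder.class.component2" -> "job-builder.class"
                out.setProperty(key.substring(0, key.length() - suffix.length()), value);
            } else {
                out.setProperty(key, value);
            }
        }
        return out;
    }
}
```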
[jira] [Commented] (YARN-5932) Retrospect moveApplicationToQueue in align with YARN-5611
[ https://issues.apache.org/jira/browse/YARN-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709783#comment-15709783 ] Jian He commented on YARN-5932: --- one question about move, what if the target queue goes over its capacity limit, will the move continue? - change this to if (! (Running && accepted) ) ? {code} if (EnumSet.of(RMAppState.NEW, RMAppState.NEW_SAVING, RMAppState.SUBMITTED, RMAppState.FINAL_SAVING, RMAppState.FINISHING, RMAppState.FINISHED, RMAppState.KILLED, RMAppState.KILLING, RMAppState.FAILED) {code} - the type cast is not needed {code} ((RMAppImpl) app).setQueue(queue); FSAppAttempt attempt = (FSAppAttempt) app.getCurrentAppAttempt(); {code} > Retrospect moveApplicationToQueue in align with YARN-5611 > - > > Key: YARN-5932 > URL: https://issues.apache.org/jira/browse/YARN-5932 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, resourcemanager >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5932.v0.patch, YARN-5932.v1.patch > > > All dynamic api's of an application's state change could follow a general > design approach. Currently, priority and app timeouts follow this approach > for all corner cases. > *Steps* > - Do a pre-validate check to ensure that changes are fine. > - Update this information to state-store > - Perform the real move operation and update in-memory data structures.
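The first review point above, replacing the long blacklist of states with a check that the app is simply in a movable state, reads most naturally as a whitelist. A small illustrative sketch (the enum mirrors {{RMAppState}} names, but the class is a stand-in, not the actual RM code):

```java
import java.util.EnumSet;

// Sketch: a whitelist of movable states reads better than enumerating every
// non-movable state, and cannot silently miss a newly added state.
public class MoveCheck {
    public enum AppState { NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING,
                           FINAL_SAVING, FINISHING, FINISHED, FAILED,
                           KILLING, KILLED }

    private static final EnumSet<AppState> MOVABLE =
        EnumSet.of(AppState.ACCEPTED, AppState.RUNNING);

    public static boolean canMove(AppState state) {
        // Equivalent to the negated nine-state blacklist in the patch,
        // but the intent is explicit.
        return MOVABLE.contains(state);
    }
}
```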
[jira] [Updated] (YARN-5928) Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers
[ https://issues.apache.org/jira/browse/YARN-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5928: - Summary: Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers (was: Move HBase backend code into a new module that the hadoop-yarn-server-timelineservice module depends on only at runtime) > Move ATSv2 HBase backend code into a new module that is only dependent at > runtime by yarn servers > - > > Key: YARN-5928 > URL: https://issues.apache.org/jira/browse/YARN-5928 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen >
[jira] [Commented] (YARN-5709) Cleanup Curator-based leader election code
[ https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709739#comment-15709739 ] Junping Du commented on YARN-5709: -- Sure. Thanks for the update, Daniel. CC [~jianhe] in case he wants to review this code as well. > Cleanup Curator-based leader election code > -- > > Key: YARN-5709 > URL: https://issues.apache.org/jira/browse/YARN-5709 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Critical > > While reviewing YARN-5677 and YARN-5694, I noticed we could make the > curator-based election code cleaner. It is nicer to get this fixed in 2.8 > before we ship it, but this can be done at a later time as well. > # By EmbeddedElector, we meant it was running as part of the RM daemon. Since > the Curator-based elector is also running embedded, I feel the code should be > checking for {{!curatorBased}} instead of {{isEmbeddedElector}} > # {{LeaderElectorService}} should probably be named > {{CuratorBasedEmbeddedElectorService}} or some such. > # The code that initializes the elector should be at the same place > irrespective of whether it is curator-based or not. > # We seem to be caching the CuratorFramework instance in RM. It makes more > sense for it to be in RMContext. If others are okay with it, we might even be > better off having an {{RMContext#getCurator()}} method to lazily create the > curator framework and then cache it.
[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups
[ https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709647#comment-15709647 ] Miklos Szegedi commented on YARN-5849: -- Thank you [~templedf]. I fixed the issue. > Automatically create YARN control group for pre-mounted cgroups > --- > > Key: YARN-5849 > URL: https://issues.apache.org/jira/browse/YARN-5849 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-5849.000.patch, YARN-5849.001.patch, > YARN-5849.002.patch, YARN-5849.003.patch, YARN-5849.004.patch, > YARN-5849.005.patch, YARN-5849.006.patch > > > Yarn can be launched with linux-container-executor.cgroups.mount set to > false. It will search for the cgroup mount paths set up by the administrator > parsing the /etc/mtab file. You can also specify > resource.percentage-physical-cpu-limit to limit the CPU resources assigned to > containers. > linux-container-executor.cgroups.hierarchy is the root of the settings of all > YARN containers. If this is specified but not created YARN will fail at > startup: > Caused by: java.io.FileNotFoundException: > /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied) > org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263) > This JIRA is about automatically creating YARN control group in the case > above. It reduces the cost of administration.
[jira] [Updated] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups
[ https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-5849: - Attachment: YARN-5849.006.patch Addressing comments > Automatically create YARN control group for pre-mounted cgroups > --- > > Key: YARN-5849 > URL: https://issues.apache.org/jira/browse/YARN-5849 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-5849.000.patch, YARN-5849.001.patch, > YARN-5849.002.patch, YARN-5849.003.patch, YARN-5849.004.patch, > YARN-5849.005.patch, YARN-5849.006.patch > > > Yarn can be launched with linux-container-executor.cgroups.mount set to > false. It will search for the cgroup mount paths set up by the administrator > parsing the /etc/mtab file. You can also specify > resource.percentage-physical-cpu-limit to limit the CPU resources assigned to > containers. > linux-container-executor.cgroups.hierarchy is the root of the settings of all > YARN containers. If this is specified but not created YARN will fail at > startup: > Caused by: java.io.FileNotFoundException: > /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied) > org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263) > This JIRA is about automatically creating YARN control group in the case > above. It reduces the cost of administration. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709640#comment-15709640 ] Carlo Curino commented on YARN-5871: Marking as patch available to trigger initial runs of Yetus... this is a WIP. > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Labels: federation > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservation have been placed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709633#comment-15709633 ] Carlo Curino commented on YARN-5871: I have also added changes to the {{RouterPolicyFacade}} to support reservation routing, and fixed test wiring issues on the AMRMProxy side. > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservations have been placed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5871) Add support for reservation-based routing.
[ https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5871: --- Attachment: YARN-5871-YARN-2915.01.patch > Add support for reservation-based routing. > -- > > Key: YARN-5871 > URL: https://issues.apache.org/jira/browse/YARN-5871 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5871-YARN-2915.01.patch, > YARN-5871-YARN-2915.01.patch > > > Adding policies that can route reservations, and that then route applications > to where the reservation have been placed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5944) Native services AM should remain up if RM is down
[ https://issues.apache.org/jira/browse/YARN-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709598#comment-15709598 ] Gour Saha commented on YARN-5944: - [~billie.rinaldi] do we need to set this config _*yarn.client.failover-max-attempts*_ to anything? It seems to be a client-specific property, but when HA is enabled it can override _yarn.resourcemanager.connect.max-wait.ms_, so I am not sure what we should do with it. Here is the description from https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml - {quote} When HA is enabled, the max number of times FailoverProxyProvider should attempt failover. When set, this overrides the yarn.resourcemanager.connect.max-wait.ms. When not set, this is inferred from yarn.resourcemanager.connect.max-wait.ms. {quote} Should we explicitly clear the value of _yarn.client.failover-max-attempts_ in the Slider AM? > Native services AM should remain up if RM is down > - > > Key: YARN-5944 > URL: https://issues.apache.org/jira/browse/YARN-5944 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-5944-yarn-native-services.001.patch > > > If the RM is down, the native services AM will retry connecting to the RM > until yarn.resourcemanager.connect.max-wait.ms has been exceeded. At that > point, the AM will stop itself. For a long-running service, we would prefer > the AM to stay up while the RM is down. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5942) "Overridden" is misspelled as "overriden" in FairScheduler.md
[ https://issues.apache.org/jira/browse/YARN-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709556#comment-15709556 ] Hudson commented on YARN-5942: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10918 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10918/]) YARN-5942. "Overridden" is misspelled as "overriden" in FairScheduler.md (templedf: rev 4fca94fbdad16e845e670758939aabb7a97154d9) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > "Overridden" is misspelled as "overriden" in FairScheduler.md > - > > Key: YARN-5942 > URL: https://issues.apache.org/jira/browse/YARN-5942 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Heather Sutherland >Priority: Trivial > Labels: newbie > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: FairScheduler.md, YARN-5942.001.patch > > > {noformat} > % grep -i overriden > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > * **A queueMaxAppsDefault element**: which sets the default running app limit > for queues; overriden by maxRunningApps element in each queue. > * **A queueMaxResourcesDefault element**: which sets the default max resource > limit for queue; overriden by maxResources element in each queue. > * **A queueMaxAMShareDefault element**: which sets the default AM resource > limit for queue; overriden by maxAMShare element in each queue. > * **A defaultQueueSchedulingPolicy element**: which sets the default > scheduling policy for queues; overriden by the schedulingPolicy element in > each queue if specified. Defaults to "fair". > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
[ https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709497#comment-15709497 ] Jian He commented on YARN-5559: --- Sorry, I meant get/setNodeLabelList ... no 's'. Would you also check whether the Jenkins warnings are related? > Analyse 2.8.0/3.0.0 jdiff reports and fix any issues > > > Key: YARN-5559 > URL: https://issues.apache.org/jira/browse/YARN-5559 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Wangda Tan >Assignee: Akira Ajisaka >Priority: Blocker > Labels: oct16-easy > Attachments: YARN-5559.1.patch, YARN-5559.2.patch, YARN-5559.3.patch, > YARN-5559.4.patch, YARN-5559.5.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5709) Cleanup Curator-based leader election code
[ https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709482#comment-15709482 ] Daniel Templeton commented on YARN-5709: I was actually just about to start work on it today. It's something that [~kasha] really wanted to get in before the confusing naming gets any more entrenched. > Cleanup Curator-based leader election code > -- > > Key: YARN-5709 > URL: https://issues.apache.org/jira/browse/YARN-5709 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Critical > > While reviewing YARN-5677 and YARN-5694, I noticed we could make the > curator-based election code cleaner. It is nicer to get this fixed in 2.8 > before we ship it, but this can be done at a later time as well. > # By EmbeddedElector, we meant it was running as part of the RM daemon. Since > the Curator-based elector is also running embedded, I feel the code should be > checking for {{!curatorBased}} instead of {{isEmbeddedElector}} > # {{LeaderElectorService}} should probably be named > {{CuratorBasedEmbeddedElectorService}} or some such. > # The code that initializes the elector should be at the same place > irrespective of whether it is curator-based or not. > # We seem to be caching the CuratorFramework instance in RM. It makes more > sense for it to be in RMContext. If others are okay with it, we might even be > better off having a {{RMContext#getCurator()}} method to lazily create the > curator framework and then cache it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5942) "Overridden" is misspelled as "overriden" in FairScheduler.md
[ https://issues.apache.org/jira/browse/YARN-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709479#comment-15709479 ] Hadoop QA commented on YARN-5942: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5942 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841091/YARN-5942.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux f42d6f1b0343 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / be5a757 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14126/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > "Overridden" is misspelled as "overriden" in FairScheduler.md > - > > Key: YARN-5942 > URL: https://issues.apache.org/jira/browse/YARN-5942 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Heather Sutherland >Priority: Trivial > Labels: newbie > Attachments: FairScheduler.md, YARN-5942.001.patch > > > {noformat} > % grep -i overriden > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > * **A queueMaxAppsDefault element**: which sets the default running app limit > for queues; overriden by maxRunningApps element in each queue. > * **A queueMaxResourcesDefault element**: which sets the default max resource > limit for queue; overriden by maxResources element in each queue. > * **A queueMaxAMShareDefault element**: which sets the default AM resource > limit for queue; overriden by maxAMShare element in each queue. 
> * **A defaultQueueSchedulingPolicy element**: which sets the default > scheduling policy for queues; overriden by the schedulingPolicy element in > each queue if specified. Defaults to "fair". > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5709) Cleanup Curator-based leader election code
[ https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709474#comment-15709474 ] Junping Du commented on YARN-5709: -- Hi [~templedf], any progress on this issue? If not, shall we downgrade it to major and defer it to 2.9, given this is just a refactoring effort? > Cleanup Curator-based leader election code > -- > > Key: YARN-5709 > URL: https://issues.apache.org/jira/browse/YARN-5709 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Critical > > While reviewing YARN-5677 and YARN-5694, I noticed we could make the > curator-based election code cleaner. It is nicer to get this fixed in 2.8 > before we ship it, but this can be done at a later time as well. > # By EmbeddedElector, we meant it was running as part of the RM daemon. Since > the Curator-based elector is also running embedded, I feel the code should be > checking for {{!curatorBased}} instead of {{isEmbeddedElector}} > # {{LeaderElectorService}} should probably be named > {{CuratorBasedEmbeddedElectorService}} or some such. > # The code that initializes the elector should be at the same place > irrespective of whether it is curator-based or not. > # We seem to be caching the CuratorFramework instance in RM. It makes more > sense for it to be in RMContext. If others are okay with it, we might even be > better off having a {{RMContext#getCurator()}} method to lazily create the > curator framework and then cache it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-4665) Asynch submit can lose application submissions
[ https://issues.apache.org/jira/browse/YARN-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton resolved YARN-4665. Resolution: Invalid I'm closing this issue as invalid. Turns out what I was seeing was actually quirks in the RM failover, which are now addressed by YARN-5677 and YARN-5694. > Asynch submit can lose application submissions > -- > > Key: YARN-4665 > URL: https://issues.apache.org/jira/browse/YARN-4665 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.1.0-beta >Reporter: Daniel Templeton >Assignee: Daniel Templeton > > The change introduced in YARN-514 opens up a hole into which applications can > fall and be lost. Prior to YARN-514, the {{submitApplication()}} call did > not complete until the application state was persisted to the state store. > After YARN-514, the {{submitApplication()}} call is asynchronous, with the > application state being saved later. > If the state store is slow or unresponsive, it may be that an application's > state may not be persisted for quite a while. During that time, if the RM > fails (over), all applications that have not yet been persisted to the state > store will be lost. If the active RM loses ZK connectivity, a significant > number of job submissions can pile up before the ZK connection times out, > resulting in a large pile of client failures when it finally does. > This issue is inherent in the design of YARN-514. I see three solutions: > 1. Add a WAL to the state store. HBase does it, so we know how to do it. It > seems like a heavy solution to the original problem, however. It's certainly > not a trivial change. > 2. Revert YARN-514 and update the RPC layer to allow a connection to be > parked if it's doing something that may take a while. This is a generally > useful feature but could be a deep rabbit hole. > 3. Revert YARN-514 and add back-pressure to the job submission. 
For example, > we set a maximum number of threads that can simultaneously be assigned to > handle job submissions. When that threshold is reached, new job submissions > get a try-again-later response. This is also a generally useful feature and > should be a fairly constrained set of changes. > I think the third option is the most approachable. It's the smallest change, > and it adds useful behavior beyond solving the original issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
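The back-pressure option (3) described above can be sketched with a bounded permit count: submissions beyond the cap are rejected immediately with a try-again-later answer instead of queueing behind a slow state store. The class and method names here are hypothetical illustrations of the idea, not proposed RM code.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of option 3: cap concurrent submission handling and
// reject the overflow right away rather than letting it pile up while the
// state store (e.g. ZK) is slow or unresponsive.
class SubmissionGate {
    private final Semaphore slots;

    SubmissionGate(int maxConcurrent) {
        this.slots = new Semaphore(maxConcurrent);
    }

    // Returns true if the submission was handled; false means the client
    // should retry later (the back-pressure signal).
    boolean trySubmit(Runnable persistToStateStore) {
        if (!slots.tryAcquire()) {
            return false;               // reject, don't queue
        }
        try {
            persistToStateStore.run();  // synchronous persist before ack
            return true;
        } finally {
            slots.release();
        }
    }
}
```

Because the persist happens before the caller is acknowledged, an RM failover can no longer lose applications that were acknowledged but never written to the store.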
[jira] [Updated] (YARN-5942) "Overridden" is misspelled as "overriden" in FairScheduler.md
[ https://issues.apache.org/jira/browse/YARN-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heather Sutherland updated YARN-5942: - Attachment: YARN-5942.001.patch > "Overridden" is misspelled as "overriden" in FairScheduler.md > - > > Key: YARN-5942 > URL: https://issues.apache.org/jira/browse/YARN-5942 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Heather Sutherland >Priority: Trivial > Labels: newbie > Attachments: FairScheduler.md, YARN-5942.001.patch > > > {noformat} > % grep -i overriden > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > * **A queueMaxAppsDefault element**: which sets the default running app limit > for queues; overriden by maxRunningApps element in each queue. > * **A queueMaxResourcesDefault element**: which sets the default max resource > limit for queue; overriden by maxResources element in each queue. > * **A queueMaxAMShareDefault element**: which sets the default AM resource > limit for queue; overriden by maxAMShare element in each queue. > * **A defaultQueueSchedulingPolicy element**: which sets the default > scheduling policy for queues; overriden by the schedulingPolicy element in > each queue if specified. Defaults to "fair". > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5548) Use MockRMMemoryStateStore to reduce test failures
[ https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709418#comment-15709418 ] Hadoop QA commented on YARN-5548: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 417 unchanged - 4 fixed = 420 total (was 421) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 11s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5548 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841084/YARN-5548.0008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 18efdd79bf48 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 625df87 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14124/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/14124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14124/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/Pr
[jira] [Commented] (YARN-5941) Slider handles "per.component" for multiple components incorrectly
[ https://issues.apache.org/jira/browse/YARN-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709403#comment-15709403 ] Hadoop QA commented on YARN-5941: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 56s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 1s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 268 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 5 new + 52 unchanged - 1 fixed = 57 total (was 53) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s{color} | {color:red} The patch generated 11 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5941 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841088/YARN-5941-yarn-native-services.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c70951965f4b 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / c8d63b1 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/14125/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html | | javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/14125/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-sl
[jira] [Updated] (YARN-5943) Write native services container stderr file to log directory
[ https://issues.apache.org/jira/browse/YARN-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5943: Fix Version/s: yarn-native-services > Write native services container stderr file to log directory > > > Key: YARN-5943 > URL: https://issues.apache.org/jira/browse/YARN-5943 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Fix For: yarn-native-services > > Attachments: YARN-5943-yarn-native-services.001.patch > > > The stderr file is being written to the current directory, which is the > container local directory. It should be written to the container log > directory. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5941) Slider handles "per.component" for multiple components incorrectly
[ https://issues.apache.org/jira/browse/YARN-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-5941: - Attachment: YARN-5941-yarn-native-services.001.patch > Slider handles "per.component" for multiple components incorrectly > -- > > Key: YARN-5941 > URL: https://issues.apache.org/jira/browse/YARN-5941 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora >Assignee: Billie Rinaldi > Attachments: YARN-5941-yarn-native-services.001.patch > > > When multiple components are started by slider and each component should have > a different property file, "per.component" should be set to true for each > component. > {code:title=component1} > 'properties': { > 'site.app-site.job-builder.class': 'xxx', > 'site.app-site.rpc.server.hostname': 'xxx', > 'site.app-site.per.component': 'true' > } > {code} > {code:title=component2} > 'properties': { > 'site.app-site.job-builder.class.component2': > 'yyy', > 'site.app-site.rpc.server.hostname.component2': > 'yyy', > 'site.app-site.per.component': 'true' > } > {code} > While doing that, one of the component's property file gets > "per.component"="true" in the slider generated property file. > {code:title=property file for component1} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > per.component=true > job-builder.class=xxx > rpc.server.hostname=xxx{code} > {code:title=property file for component2} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > job-builder.class.component2=yyy > rpc.server.hostname.component2=yyy{code} > "per.component" should not be added in any component's property file.
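The behavior requested in the description above — resolve component-suffixed keys but never emit the "per.component" marker into a generated property file — can be sketched as follows (hypothetical helper names, not the actual Slider code):

```java
import java.util.HashMap;
import java.util.Map;

public class PerComponentFilter {
    // Hypothetical sketch: build the property map for one component's
    // generated file, dropping the "per.component" generation marker and
    // stripping the ".<component>" suffix from per-component keys.
    public static Map<String, String> filterForComponent(
            Map<String, String> props, String component) {
        Map<String, String> out = new HashMap<>();
        String suffix = "." + component;
        for (Map.Entry<String, String> e : props.entrySet()) {
            String key = e.getKey();
            if (key.equals("per.component")) {
                continue; // marker only controls generation; never emit it
            }
            if (key.endsWith(suffix)) {
                // "job-builder.class.component2" -> "job-builder.class"
                out.put(key.substring(0, key.length() - suffix.length()),
                        e.getValue());
            } else {
                out.put(key, e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("job-builder.class.component2", "yyy");
        props.put("per.component", "true");
        // The generated map for component2 carries the plain key only.
        System.out.println(filterForComponent(props, "component2"));
    }
}
```

This is only an illustration of the filtering rule; the actual fix (per the later comment, the "second approach with conf prefix") lives in Slider's property-file generation code.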
[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider
[ https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709328#comment-15709328 ] Hudson commented on YARN-4997: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10915 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10915/]) YARN-4997. Update fair scheduler to use pluggable auth provider (templedf: rev b3befc021b0e2d63d1a3710ea450797d1129f1f5) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java > Update fair scheduler to use pluggable auth provider > > > Key: YARN-4997 > URL: https://issues.apache.org/jira/browse/YARN-4997 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Tao Jie > Fix For: 3.0.0-alpha2 > > Attachments: YARN-4997-001.patch, YARN-4997-002.patch, > YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, > 
YARN-4997-006.patch, YARN-4997-007.patch, YARN-4997-008.patch, > YARN-4997-009.patch, YARN-4997-010.patch, YARN-4997-011.patch > > > Now that YARN-3100 has made the authorization pluggable, it should be > supported by the fair scheduler. YARN-3100 only updated the capacity > scheduler.
[jira] [Commented] (YARN-5685) Non-embedded HA failover is broken
[ https://issues.apache.org/jira/browse/YARN-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709323#comment-15709323 ] Daniel Templeton commented on YARN-5685: Nope. This JIRA doesn't touch those bits of the code. YARN-5694 or, even better, YARN-5709 are better places to make those changes. > Non-embedded HA failover is broken > -- > > Key: YARN-5685 > URL: https://issues.apache.org/jira/browse/YARN-5685 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.9.0, 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Labels: oct16-hard > Attachments: YARN-5685.001.patch, YARN-5685.002.patch > > > If HA is enabled with automatic failover enabled and embedded failover > disabled, all RMs come up in standby state. To make one of them active, > the {{--forcemanual}} flag must be used when manually triggering the state > change. Should the active go down, the standby will not become active and > must be manually transitioned with the {{--forcemanual}} flag.
[jira] [Commented] (YARN-5943) Write native services container stderr file to log directory
[ https://issues.apache.org/jira/browse/YARN-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709269#comment-15709269 ] Gour Saha commented on YARN-5943: - +1 for the 001 patch > Write native services container stderr file to log directory > > > Key: YARN-5943 > URL: https://issues.apache.org/jira/browse/YARN-5943 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-5943-yarn-native-services.001.patch > > > The stderr file is being written to the current directory, which is the > container local directory. It should be written to the container log > directory.
[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider
[ https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709263#comment-15709263 ] Daniel Templeton commented on YARN-4997: [~Tao Jie], would you mind checking if the change you made in the {{FairScheduler}} test needs to be made in any other tests? If so, please file a new JIRA. > Update fair scheduler to use pluggable auth provider > > > Key: YARN-4997 > URL: https://issues.apache.org/jira/browse/YARN-4997 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Tao Jie > Fix For: 3.0.0-alpha2 > > Attachments: YARN-4997-001.patch, YARN-4997-002.patch, > YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, > YARN-4997-006.patch, YARN-4997-007.patch, YARN-4997-008.patch, > YARN-4997-009.patch, YARN-4997-010.patch, YARN-4997-011.patch > > > Now that YARN-3100 has made the authorization pluggable, it should be > supported by the fair scheduler. YARN-3100 only updated the capacity > scheduler.
[jira] [Commented] (YARN-5942) "Overridden" is misspelled as "overriden" in FairScheduler.md
[ https://issues.apache.org/jira/browse/YARN-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709196#comment-15709196 ] Daniel Templeton commented on YARN-5942: Thanks for the quick turn-around, [~hsutherland]. Could you please upload the patch file instead of the modified file? > "Overridden" is misspelled as "overriden" in FairScheduler.md > - > > Key: YARN-5942 > URL: https://issues.apache.org/jira/browse/YARN-5942 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Heather Sutherland >Priority: Trivial > Labels: newbie > Attachments: FairScheduler.md > > > {noformat} > % grep -i overriden > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > * **A queueMaxAppsDefault element**: which sets the default running app limit > for queues; overriden by maxRunningApps element in each queue. > * **A queueMaxResourcesDefault element**: which sets the default max resource > limit for queue; overriden by maxResources element in each queue. > * **A queueMaxAMShareDefault element**: which sets the default AM resource > limit for queue; overriden by maxAMShare element in each queue. > * **A defaultQueueSchedulingPolicy element**: which sets the default > scheduling policy for queues; overriden by the schedulingPolicy element in > each queue if specified. Defaults to "fair". > {noformat}
[jira] [Updated] (YARN-5548) Use MockRMMemoryStateStore to reduce test failures
[ https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bibin A Chundatt updated YARN-5548: --- Attachment: YARN-5548.0008.patch > Use MockRMMemoryStateStore to reduce test failures > -- > > Key: YARN-5548 > URL: https://issues.apache.org/jira/browse/YARN-5548 > Project: Hadoop YARN > Issue Type: Test >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt > Labels: oct16-easy, test > Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, > YARN-5548.0003.patch, YARN-5548.0004.patch, YARN-5548.0005.patch, > YARN-5548.0006.patch, YARN-5548.0007.patch, YARN-5548.0008.patch > > > https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/ > {noformat} > Error Message > Stacktrace > java.lang.AssertionError: expected null, but was: application_submission_context { application_id { id: 1 cluster_timestamp: > 1471885197388 } application_name: "" queue: "default" priority { priority: 0 > } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 > resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" > keep_containers_across_application_attempts: false > attempt_failures_validity_interval: 0 am_container_resource_request { > priority { priority: 0 } resource_name: "*" capability { memory: 1024 > virtual_cores: 1 } num_containers: 0 relax_locality: true > node_label_expression: "" execution_type_request { execution_type: GUARANTEED > enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 > application_state: RMAPP_FINISHED finish_time: 1471885197478> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotNull(Assert.java:664) > at org.junit.Assert.assertNull(Assert.java:646) > at org.junit.Assert.assertNull(Assert.java:656) > at > 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656) > {noformat}
[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider
[ https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709185#comment-15709185 ] Daniel Templeton commented on YARN-4997: I'll let you slide on the method length checkstyle complaints, and it looks like the test failure is unrelated. +1 > Update fair scheduler to use pluggable auth provider > > > Key: YARN-4997 > URL: https://issues.apache.org/jira/browse/YARN-4997 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Tao Jie > Attachments: YARN-4997-001.patch, YARN-4997-002.patch, > YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, > YARN-4997-006.patch, YARN-4997-007.patch, YARN-4997-008.patch, > YARN-4997-009.patch, YARN-4997-010.patch, YARN-4997-011.patch > > > Now that YARN-3100 has made the authorization pluggable, it should be > supported by the fair scheduler. YARN-3100 only updated the capacity > scheduler.
[jira] [Commented] (YARN-5929) Missing scheduling policy in the FS queue metric.
[ https://issues.apache.org/jira/browse/YARN-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709179#comment-15709179 ] Daniel Templeton commented on YARN-5929: Looks like you need to rebase after YARN-5890. > Missing scheduling policy in the FS queue metric. > -- > > Key: YARN-5929 > URL: https://issues.apache.org/jira/browse/YARN-5929 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5929.001.patch, YARN-5929.002.patch, > YARN-5929.003.patch, YARN-5929.004.patch > > > It should have been there since YARN-4878, but it isn't.
[jira] [Updated] (YARN-5942) "Overridden" is misspelled as "overriden" in FairScheduler.md
[ https://issues.apache.org/jira/browse/YARN-5942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heather Sutherland updated YARN-5942: - Attachment: FairScheduler.md > "Overridden" is misspelled as "overriden" in FairScheduler.md > - > > Key: YARN-5942 > URL: https://issues.apache.org/jira/browse/YARN-5942 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Heather Sutherland >Priority: Trivial > Labels: newbie > Attachments: FairScheduler.md > > > {noformat} > % grep -i overriden > hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md > * **A queueMaxAppsDefault element**: which sets the default running app limit > for queues; overriden by maxRunningApps element in each queue. > * **A queueMaxResourcesDefault element**: which sets the default max resource > limit for queue; overriden by maxResources element in each queue. > * **A queueMaxAMShareDefault element**: which sets the default AM resource > limit for queue; overriden by maxAMShare element in each queue. > * **A defaultQueueSchedulingPolicy element**: which sets the default > scheduling policy for queues; overriden by the schedulingPolicy element in > each queue if specified. Defaults to "fair". > {noformat}
[jira] [Commented] (YARN-5929) Missing scheduling policy in the FS queue metric.
[ https://issues.apache.org/jira/browse/YARN-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709168#comment-15709168 ] Daniel Templeton commented on YARN-5929: +1 on the latest patch. > Missing scheduling policy in the FS queue metric. > -- > > Key: YARN-5929 > URL: https://issues.apache.org/jira/browse/YARN-5929 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5929.001.patch, YARN-5929.002.patch, > YARN-5929.003.patch, YARN-5929.004.patch > > > It should have been there since YARN-4878, but it isn't.
[jira] [Commented] (YARN-5941) Slider handles "per.component" for multiple components incorrectly
[ https://issues.apache.org/jira/browse/YARN-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709137#comment-15709137 ] Gour Saha commented on YARN-5941: - [~billie.rinaldi] +1 for the second approach with conf prefix. > Slider handles "per.component" for multiple components incorrectly > -- > > Key: YARN-5941 > URL: https://issues.apache.org/jira/browse/YARN-5941 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora >Assignee: Billie Rinaldi > > When multiple components are started by slider and each component should have > a different property file, "per.component" should be set to true for each > component. > {code:title=component1} > 'properties': { > 'site.app-site.job-builder.class': 'xxx', > 'site.app-site.rpc.server.hostname': 'xxx', > 'site.app-site.per.component': 'true' > } > {code} > {code:title=component2} > 'properties': { > 'site.app-site.job-builder.class.component2': > 'yyy', > 'site.app-site.rpc.server.hostname.component2': > 'yyy', > 'site.app-site.per.component': 'true' > } > {code} > While doing that, one of the component's property file gets > "per.component"="true" in the slider generated property file. > {code:title=property file for component1} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > per.component=true > job-builder.class=xxx > rpc.server.hostname=xxx{code} > {code:title=property file for component2} > #Generated by Apache Slider > #Tue Nov 29 23:20:25 UTC 2016 > job-builder.class.component2=yyy > rpc.server.hostname.component2=yyy{code} > "per.component" should not be added in any component's property file.