[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application
[ https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335495#comment-15335495 ] Hadoop QA commented on YARN-5224: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} 
| {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 56s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 46 unchanged - 1 fixed = 48 total (was 47) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 56s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 43s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 5s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 31s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager | | | hadoop.yarn.client.api.impl.TestYarnClient | | | hadoop.yarn.client.cli.TestLogsCLI | | | hadoop.yarn.client.api.impl.TestAMRMProxy | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811291/YARN-5224.5.patch | | JIRA Issue | YARN-5224 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 132704a5a84c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51d497f | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.a
[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List
[ https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335468#comment-15335468 ] Hadoop QA commented on YARN-5200: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 82 unchanged - 7 fixed = 87 total (was 89) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 9s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient | | | hadoop.yarn.client.cli.TestLogsCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811290/YARN-5200.3.patch | | JIRA Issue | YARN-5200 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4cf396b1a588 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51d497f | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12059/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12059/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit test logs | https://builds.apache.org/job/P
[jira] [Commented] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335467#comment-15335467 ] Hadoop QA commented on YARN-5266: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 5 new + 81 unchanged - 5 fixed = 86 total (was 86) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 43s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 46s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI | | | hadoop.yarn.client.api.impl.TestYarnClient | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811292/YARN-5266.2.patch | | JIRA Issue | YARN-5266 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c0fe06544edd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51d497f | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12060/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12060/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12060/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12060/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client | | Console output | https://builds.apache.org/job/PreCommit-YARN-Buil
[jira] [Commented] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335439#comment-15335439 ] Xuan Gong commented on YARN-5266: - Fix the checkstyle issue > Wrong exit code while trying to get app logs using regex via CLI > > > Key: YARN-5266 > URL: https://issues.apache.org/jira/browse/YARN-5266 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 2.9.0 >Reporter: Sumana Sathish >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-5266.1.patch, YARN-5266.2.patch > > > The test is trying to do negative test by passing regex as 'ds+' and expects > exit code != 0. > *Exit Code is zero and the error message is typed more than once* > {code} > RUNNING: /usr/hdp/current/hadoop-yarn-client/bin/yarn logs -applicationId > application_1465500362360_0016 -logFiles ds+ > Can not find any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,079|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,216|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,331|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,432|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5266: Attachment: YARN-5266.2.patch > Wrong exit code while trying to get app logs using regex via CLI > > > Key: YARN-5266 > URL: https://issues.apache.org/jira/browse/YARN-5266 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 2.9.0 >Reporter: Sumana Sathish >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-5266.1.patch, YARN-5266.2.patch > > > The test is trying to do negative test by passing regex as 'ds+' and expects > exit code != 0. > *Exit Code is zero and the error message is typed more than once* > {code} > RUNNING: /usr/hdp/current/hadoop-yarn-client/bin/yarn logs -applicationId > application_1465500362360_0016 -logFiles ds+ > Can not find any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,079|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,216|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,331|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,432|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
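The fix direction implied by the report above — print the error once and return a non-zero exit code when no log file matches — can be sketched as follows. This is an illustrative sketch only: the class and the fetchMatchingLogs helper are hypothetical stand-ins, not the actual LogsCLI implementation.

```java
import java.util.Collections;
import java.util.List;

public class LogsCliExitCodeSketch {

    // Hypothetical stand-in for log retrieval; in the reported bug nothing
    // matches the pattern "ds+", so this returns an empty list.
    static List<String> fetchMatchingLogs(String appId, String pattern) {
        return Collections.emptyList();
    }

    // Print the error once and return a non-zero code when nothing matches,
    // so scripted negative tests can check the exit status instead of
    // scraping repeated error lines.
    static int run(String appId, String pattern) {
        List<String> logs = fetchMatchingLogs(appId, pattern);
        if (logs.isEmpty()) {
            System.err.println("Can not find any log file matching the pattern: ["
                + pattern + "] for the application: " + appId);
            return -1;
        }
        for (String log : logs) {
            System.out.println(log);
        }
        return 0;
    }

    public static void main(String[] args) {
        // Mirrors the failing scenario from the description.
        int code = run("application_1465500362360_0016", "ds+");
        System.out.println("exit code: " + code);
    }
}
```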
[jira] [Updated] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application
[ https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5224: Attachment: YARN-5224.5.patch fix the checkstyle issue > Logs for a completed container are not available in the yarn logs output for > a live application > --- > > Key: YARN-5224 > URL: https://issues.apache.org/jira/browse/YARN-5224 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Siddharth Seth >Assignee: Xuan Gong > Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, > YARN-5224.4.patch, YARN-5224.5.patch > > > This affects 'short' jobs like MapReduce and Tez more than long running apps. > Related: YARN-5193 (but that only covers long running apps) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5200) Improve yarn logs to get Container List
[ https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5200: Attachment: YARN-5200.3.patch fix the checkstyle issue > Improve yarn logs to get Container List > --- > > Key: YARN-5200 > URL: https://issues.apache.org/jira/browse/YARN-5200 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
[ https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335431#comment-15335431 ] Wangda Tan commented on YARN-4844: -- Opened YARN-5270 to track fixes of such issues. > Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource > - > > Key: YARN-4844 > URL: https://issues.apache.org/jira/browse/YARN-4844 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Fix For: 2.8.0 > > Attachments: YARN-4844-branch-2.8.0016_.patch, > YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch, > YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch, > YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, > YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, > YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, > YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, > YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, > YARN-4844.9.branch-2.patch > > > We use int32 for memory now, if a cluster has 10k nodes, each node has 210G > memory, we will get a negative total cluster memory. > And another case that easier overflows int32 is: we added all pending > resources of running apps to cluster's total pending resources. If a > problematic app requires too much resources (let's say 1M+ containers, each > of them has 3G containers), int32 will be not enough. > Even if we can cap each app's pending request, we cannot handle the case that > there're many running apps, each of them has capped but still significant > numbers of pending resources. > So we may possibly need to add getMemoryLong/getVirtualCoreLong to > o.a.h.y.api.records.Resource. 
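The first overflow scenario in the YARN-4844 description is simple arithmetic: 10k nodes × 210 GB = 2,150,400,000 MB, which exceeds Integer.MAX_VALUE (2,147,483,647). A minimal, self-contained sketch of the difference between aggregating into an int and into a long (plain Java, no YARN classes):

```java
public class MemoryOverflowDemo {

    // Aggregate per-node memory (in MB) into an int, as the pre-YARN-4844
    // Resource API forced callers to do.
    static int totalMemoryInt(int nodes, int perNodeMb) {
        int total = 0;
        for (int i = 0; i < nodes; i++) {
            total += perNodeMb; // silently wraps past Integer.MAX_VALUE
        }
        return total;
    }

    // Same aggregation with a long accumulator, as getMemorySize() enables.
    static long totalMemoryLong(int nodes, long perNodeMb) {
        long total = 0;
        for (int i = 0; i < nodes; i++) {
            total += perNodeMb;
        }
        return total;
    }

    public static void main(String[] args) {
        int perNodeMb = 210 * 1024; // 210 GB per node, in MB
        System.out.println("int total:  " + totalMemoryInt(10_000, perNodeMb));  // negative (overflowed)
        System.out.println("long total: " + totalMemoryLong(10_000, perNodeMb)); // 2150400000
    }
}
```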
[jira] [Updated] (YARN-5270) Solve miscellaneous issues caused by YARN-4844
[ https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-5270: - Summary: Solve miscellaneous issues caused by YARN-4844 (was: Solve miscellaneous issues of YARN-4844) > Solve miscellaneous issues caused by YARN-4844 > -- > > Key: YARN-5270 > URL: https://issues.apache.org/jira/browse/YARN-5270 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Critical > > Such as javac warnings reported by YARN-5077 and type converting issues in > Resources class. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4685) AM blacklisting result in application to get hanged
[ https://issues.apache.org/jira/browse/YARN-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-4685: Attachment: YARN-4685-workaround.patch > AM blacklisting result in application to get hanged > --- > > Key: YARN-4685 > URL: https://issues.apache.org/jira/browse/YARN-4685 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-4685-workaround.patch > > > AM blacklist addition or removal is updated only when RMAppAttempt is > scheduled i.e {{RMAppAttemptImpl#ScheduleTransition#transition}}. But once > attempt is scheduled if there is any removeNode/addNode in cluster then this > is not updated to {{BlackListManager#refreshNodeHostCount}}. This leads > BlackListManager to operate on stale NM's count. And application is in > ACCEPTED state and wait forever even if blacklisted nodes are reconnected > with clearing disk space. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
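The hang mechanism described above — a blacklist threshold evaluated against a stale node count — can be illustrated with a minimal sketch. All names here (the class, the refreshNodeHostCount signature, the threshold logic) are simplified illustrations of the idea, not the actual BlacklistManager code:

```java
import java.util.HashSet;
import java.util.Set;

public class BlacklistSketch {

    private final Set<String> blacklist = new HashSet<>();
    private int clusterNodeCount;          // stale unless refreshed on addNode/removeNode events
    private final double failureThreshold; // e.g. 0.8: ignore blacklist once it covers >80% of nodes

    BlacklistSketch(int initialNodeCount, double failureThreshold) {
        this.clusterNodeCount = initialNodeCount;
        this.failureThreshold = failureThreshold;
    }

    // The fix direction discussed in the description: keep the count current
    // when nodes join or leave the cluster, not only at schedule time.
    public void refreshNodeHostCount(int currentNodeCount) {
        this.clusterNodeCount = currentNodeCount;
    }

    public void blacklistNode(String host) {
        blacklist.add(host);
    }

    // The blacklist is honored only while it covers less than the threshold
    // fraction of the cluster. With a stale (too large) count this check keeps
    // the blacklist active even when it covers every live node, so the app
    // can wait in ACCEPTED forever.
    public boolean isBlacklistActive() {
        return blacklist.size() < failureThreshold * clusterNodeCount;
    }
}
```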
[jira] [Created] (YARN-5270) Solve miscellaneous issues of YARN-4844
Wangda Tan created YARN-5270: Summary: Solve miscellaneous issues of YARN-4844 Key: YARN-5270 URL: https://issues.apache.org/jira/browse/YARN-5270 Project: Hadoop YARN Issue Type: Bug Reporter: Wangda Tan Assignee: Wangda Tan Priority: Critical Such as javac warnings reported by YARN-5077 and type converting issues in Resources class. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
[ https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335418#comment-15335418 ] Wangda Tan commented on YARN-4844: -- [~kasha], bq. getMemory is deprecated, but getVirtualCores is not The reason I only updated getMemory is that it is the real problem: in the near future, virtualCores is unlikely to exceed the max value of an int. Considering the size of the patch, I only updated getMemory. bq. getMemory is deprecated and recommends using getMemorySize, but getMemorySize is unstable. Seems like the users are stuck between rock and a hard place? I was thinking that since this is the first release of the new API, we could probably update it. I'm open to updating it to an Evolving or even Stable API if you think it is required. bq. Is the recommendation to use the long version for everything - individual resource-requests and variables that are used to capture aggregates? If yes, shouldn't we update all current usages to the long version? I've tried to update most of them, except a few APIs (like mapreduce.JobStatus). getMemory is used by YARN/MR 1k+ times, so I believe there are missed places; I can address them before the release of 2.8. bq. Also, do you think we can get this in 2.9 instead so we can be sure other stuff doesn't break? I would prefer to leave it in 2.8; this is a real problem that we have seen in a couple of cases, and the client can basically do nothing except restart services. I've tried to build several YARN downstream projects such as Spark/Slider/Tez against this patch, and all of them can be built with the API fixes: https://issues.apache.org/jira/secure/attachment/12810580/YARN-4844-branch-2.8.addendum.2.patch Considering there are still 15+ pending blockers and critical issues for 2.8, there are at least a few weeks left to finish 2.8, so we can test more downstream projects if you want. bq. 
Also, noticed that some of the helper methods in Resources seem to using getMemorySize for calculations but typecasting to int as in this example: I will double check them as well as issues you found at YARN-5077. I plan to create a new JIRA to address these issues instead of overloading this one. > Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource > - > > Key: YARN-4844 > URL: https://issues.apache.org/jira/browse/YARN-4844 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Fix For: 2.8.0 > > Attachments: YARN-4844-branch-2.8.0016_.patch, > YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch, > YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch, > YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, > YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, > YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, > YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, > YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, > YARN-4844.9.branch-2.patch > > > We use int32 for memory now, if a cluster has 10k nodes, each node has 210G > memory, we will get a negative total cluster memory. > And another case that easier overflows int32 is: we added all pending > resources of running apps to cluster's total pending resources. If a > problematic app requires too much resources (let's say 1M+ containers, each > of them has 3G containers), int32 will be not enough. > Even if we can cap each app's pending request, we cannot handle the case that > there're many running apps, each of them has capped but still significant > numbers of pending resources. > So we may possibly need to add getMemoryLong/getVirtualCoreLong to > o.a.h.y.api.records.Resource. 
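The typecasting concern raised in the comment above (helpers computing on getMemorySize but narrowing the result to int) is the second overflow path in the description: 1M+ containers at 3 GB each. A minimal sketch of why the narrowing cast is dangerous, in plain Java rather than the actual Resources helpers:

```java
public class CastTruncationDemo {

    // A helper that computes on longs but narrows to int, similar in shape
    // to the pattern flagged in the review (not the actual Resources code).
    static int pendingMemoryNarrowed(long containers, long perContainerMb) {
        return (int) (containers * perContainerMb); // truncates above Integer.MAX_VALUE
    }

    static long pendingMemory(long containers, long perContainerMb) {
        return containers * perContainerMb;
    }

    public static void main(String[] args) {
        long containers = 1_000_000;    // the "1M+ containers" case
        long perContainerMb = 3 * 1024; // 3 GB per container, in MB
        System.out.println("narrowed: " + pendingMemoryNarrowed(containers, perContainerMb)); // negative
        System.out.println("long:     " + pendingMemory(containers, perContainerMb));         // 3072000000
    }
}
```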
[jira] [Updated] (YARN-5265) Make HBase configuration for the timeline service configurable
[ https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5265: --- Attachment: YARN-5265-YARN-2928.02.patch > Make HBase configuration for the timeline service configurable > -- > > Key: YARN-5265 > URL: https://issues.apache.org/jira/browse/YARN-5265 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis > Attachments: ATS v2 cluster deployment v1.png, > YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch > > > Currently we create "default" HBase configurations, this works as long as the > user places the appropriate configuration on the classpath. > This works fine for a standalone Hadoop cluster. > However, if a user wants to monitor an HBase cluster and has a separate ATS > HBase cluster, then it can become tricky to create the right classpath for > the nodemanagers and still have tasks have their separate configs. > It will be much easier to add a yarn configuration to let cluster admins > configure which HBase to connect to to write ATS metrics to. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable
[ https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335410#comment-15335410 ] Joep Rottinghuis commented on YARN-5265: Contemplating which is better in FlowScanner line 104: {code} Configuration hbaseConf = env.getConfiguration(); {code} or {code} Configuration hbaseConf = TimelineStorageUtils.getTimelineServiceHBaseConf(env.getConfiguration()); {code} I think the former (the existing code) may be better, because this is a YARN config, which should be picked up from yarn-site.xml and not be overwritten by a level of indirection from an hbase-site.xml. > Make HBase configuration for the timeline service configurable > -- > > Key: YARN-5265 > URL: https://issues.apache.org/jira/browse/YARN-5265 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis > Attachments: ATS v2 cluster deployment v1.png, > YARN-5265-YARN-2928.01.patch > > > Currently we create "default" HBase configurations, this works as long as the > user places the appropriate configuration on the classpath. > This works fine for a standalone Hadoop cluster. > However, if a user wants to monitor an HBase cluster and has a separate ATS > HBase cluster, then it can become tricky to create the right classpath for > the nodemanagers and still have tasks have their separate configs. > It will be much easier to add a yarn configuration to let cluster admins > configure which HBase to connect to to write ATS metrics to.
[jira] [Commented] (YARN-5214) Pending on synchronized method DirectoryCollection#checkDirs can hang NM's NodeStatusUpdater
[ https://issues.apache.org/jira/browse/YARN-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335408#comment-15335408 ] Wangda Tan commented on YARN-5214: -- Thanks [~djp]. I think the RW lock added by the patch can generally reduce the time spent on locking. However, it may not solve the entire problem. Per my understanding, even after the R/W lock changes, when anything bad happens on the disks, DirectoryCollection will be stuck holding the write lock, so NodeStatusUpdater will be blocked as well. I think there are two fixes we can make to tackle the problem: 1) In the short term, errorDirs/fullDirs/localDirs are copy-on-write lists, so we don't need to acquire the lock in getGoodDirs/getFailedDirs. This could lead to inconsistent data in rare cases, but in general it is safe and the inconsistent data will be refreshed in the next heartbeat. 2) In the longer term, we may need to treat a DirectoryCollection stuck under busy IO as an unhealthy state; NodeStatusUpdater should be able to report such status to the RM, so the RM avoids allocating any new containers to such nodes. [~nroberts] suggested the same thing. Thoughts? > Pending on synchronized method DirectoryCollection#checkDirs can hang NM's > NodeStatusUpdater > > > Key: YARN-5214 > URL: https://issues.apache.org/jira/browse/YARN-5214 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Junping Du >Assignee: Junping Du >Priority: Critical > Attachments: YARN-5214.patch > > > In one cluster, we noticed that the NM's heartbeat to the RM suddenly stopped and, > after a while, the NM was marked LOST by the RM. From the log, the NM daemon is still running, > but jstack hints the NM's NodeStatusUpdater thread got blocked: > 1. 
Node Status Updater thread get blocked by 0x8065eae8 > {noformat} > "Node Status Updater" #191 prio=5 os_prio=0 tid=0x7f0354194000 nid=0x26fa > waiting for monitor entry [0x7f035945a000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.getFailedDirs(DirectoryCollection.java:170) > - waiting to lock <0x8065eae8> (a > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection) > at > org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getDisksHealthReport(LocalDirsHandlerService.java:287) > at > org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport(NodeHealthCheckerService.java:58) > at > org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.getNodeStatus(NodeStatusUpdaterImpl.java:389) > at > org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.access$300(NodeStatusUpdaterImpl.java:83) > at > org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:643) > at java.lang.Thread.run(Thread.java:745) > {noformat} > 2. 
The actual holder of this lock is DiskHealthMonitor: > {noformat} > "DiskHealthMonitor-Timer" #132 daemon prio=5 os_prio=0 tid=0x7f0397393000 > nid=0x26bd runnable [0x7f035e511000] >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createDirectory(Native Method) > at java.io.File.mkdir(File.java:1316) > at > org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:67) > at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.verifyDirUsingMkdir(DirectoryCollection.java:340) > at > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:312) > at > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:231) > - locked <0x8065eae8> (a > org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection) > at > org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:389) > at > org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$400(LocalDirsHandlerService.java:50) > at > org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:122) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > {noformat} > This disk operation could take longer time than expectation especially in > high IO throughput case and we should have fine-grained lock for related > operations here. > The same issue on HDFS get raised and fixed in HDFS-7489, and we probably > should have similar fi
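The short-term fix (1) above can be sketched in plain Java. Assuming the dir lists are copy-on-write (as the comment states), a getter can return a snapshot without taking the object monitor, so a checkDirs() stuck in slow disk IO under the lock no longer blocks the heartbeat path (class and method names here are simplified stand-ins for DirectoryCollection's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class DirListSketch {
    // Copy-on-write: writers replace the backing array atomically,
    // readers iterate a stable snapshot -- no shared monitor needed.
    private final List<String> failedDirs = new CopyOnWriteArrayList<>();

    // Formerly synchronized; with a COW list the lock can be dropped.
    // Readers may see data one heartbeat stale, which the comment
    // argues is acceptable.
    public List<String> getFailedDirs() {
        return new ArrayList<>(failedDirs);
    }

    // Called from the (still locked) checkDirs() path.
    public void markFailed(String dir) {
        failedDirs.add(dir);
    }

    public static void main(String[] args) {
        DirListSketch dc = new DirListSketch();
        dc.markFailed("/grid/0/yarn");
        List<String> snapshot = dc.getFailedDirs();
        dc.markFailed("/grid/1/yarn");  // a later mutation...
        System.out.println(snapshot);   // ...does not affect the snapshot
    }
}
```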
[jira] [Commented] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat
[ https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335407#comment-15335407 ] Hadoop QA commented on YARN-5262: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 20s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 34s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811277/0002-YARN-5262.patch | | JIRA Issue | YARN-5262 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b6119aa58446 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51d497f | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12058/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12058/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM > heartbeat > --- > > Key: YARN-5262 > URL: https://issues.apache.org/jira/browse/YARN-5262 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5262.patch, 000
[jira] [Updated] (YARN-5265) Make HBase configuration for the timeline service configurable
[ https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5265: --- Attachment: YARN-5265-YARN-2928.01.patch YARN-5265-YARN-2928.01.patch: Untested initial outline of what I think might work. Need to do manual testing and come up with a good way to create a unit test for this. I'm still considering if we want to use the same approach in the Coprocessor. It would probably not be needed, as the only cluster where those are deployed will be on the HBase cluster hosting the timeline service tables. I might still add it so that folks can choose to have separate config variables (timeouts etc) for the timeline coprocessors than for other ones. This also reminds me that we should update the documentation to clarify that the yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds value needs to be present in the config on the region server. > Make HBase configuration for the timeline service configurable > -- > > Key: YARN-5265 > URL: https://issues.apache.org/jira/browse/YARN-5265 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis > Attachments: ATS v2 cluster deployment v1.png, > YARN-5265-YARN-2928.01.patch > > > Currently we create "default" HBase configurations, this works as long as the > user places the appropriate configuration on the classpath. > This works fine for a standalone Hadoop cluster. > However, if a user wants to monitor an HBase cluster and has a separate ATS > HBase cluster, then it can become tricky to create the right classpath for > the nodemanagers and still have tasks have their separate configs. > It will be much easier to add a yarn configuration to let cluster admins > configure which HBase to connect to to write ATS metrics to. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5269: --- Attachment: (was: YARN-5269-YARN-2928.01.patch) > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5269: --- Comment: was deleted (was: YARN-5269-YARN-2928.01.patch: Untested initial outline of what I think might work. Need to do manual testing and come up with a good way to create a unit test for this. I'm still considering if we want to use the same approach in the Coprocessor. It would probably not be needed, as the only cluster where those are deployed will be on the HBase cluster hosting the timeline service tables. I might still add it so that folks can choose to have separate config variables (timeouts etc) for the timeline coprocessors than for other ones. This also reminds me that we should update the documentation to clarify that the yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds value needs to be present in the config on the region server. ) > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5269: --- Attachment: YARN-5269-YARN-2928.01.patch YARN-5269-YARN-2928.01.patch: Untested initial outline of what I think might work. Need to do manual testing and come up with a good way to create a unit test for this. I'm still considering if we want to use the same approach in the Coprocessor. It would probably not be needed, as the only cluster where those are deployed will be on the HBase cluster hosting the timeline service tables. I might still add it so that folks can choose to have separate config variables (timeouts etc) for the timeline coprocessors than for other ones. This also reminds me that we should update the documentation to clarify that the yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds value needs to be present in the config on the region server. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis > Attachments: YARN-5269-YARN-2928.01.patch > > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
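The "second drop" idea in the description — returning some future kind of result from putEntitiesAsync instead of swallowing write errors — could look like the following in outline. The method signature and failure mode are hypothetical (CompletableFuture is just one way to surface a backend failure to the client; the actual patch may choose differently):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class AsyncPutSketch {
    // Instead of a void fire-and-forget, return a future that completes
    // exceptionally when the backend write fails, letting callers decide
    // whether to retry, log, or propagate.
    static CompletableFuture<Void> putEntitiesAsync(boolean backendHealthy) {
        return CompletableFuture.runAsync(() -> {
            if (!backendHealthy) {
                throw new RuntimeException("HBase write failed");
            }
            // ... write entities to the backend ...
        });
    }

    public static void main(String[] args) {
        putEntitiesAsync(true).join();  // success: completes normally
        try {
            putEntitiesAsync(false).join();
        } catch (CompletionException e) {
            // The backend error reaches the client instead of being swallowed.
            System.out.println(e.getCause().getMessage());
        }
    }
}
```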
[jira] [Updated] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat
[ https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5262: Attachment: 0002-YARN-5262.patch Updated patch fixing the findbugs warning > Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM > heartbeat > --- > > Key: YARN-5262 > URL: https://issues.apache.org/jira/browse/YARN-5262 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5262.patch, 0002-YARN-5262.patch > > > It is observed that the RM triggers one event for every > ApplicationMaster#allocate request in the following trace. This is not > necessarily required, and it can be optimized to send the event only if there are > containers to acknowledge to the NodeManager. > {code} > RMAppAttemptImpl.sendFinishedContainersToNM() line: 1871 > RMAppAttemptImpl.pullJustFinishedContainers() line: 805 > ApplicationMasterService.allocate(AllocateRequest) line: 567 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
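The optimization described in the issue amounts to a guard before dispatching. A minimal sketch, where the counter stands in for dispatching an RMNodeFinishedContainersPulledByAMEvent and the string list stands in for the just-finished container IDs (both simplifications, not the actual RM types):

```java
import java.util.ArrayList;
import java.util.List;

public class SendFinishedContainersSketch {
    int eventsSent = 0;

    // Before: an event was dispatched on every AM allocate heartbeat.
    // After: dispatch only when there are container IDs to acknowledge
    // to the NodeManager.
    void sendFinishedContainersToNM(List<String> justFinishedContainers) {
        if (justFinishedContainers.isEmpty()) {
            return;  // nothing to acknowledge -- skip the event entirely
        }
        eventsSent++;  // stands in for dispatcher.handle(...)
        justFinishedContainers.clear();
    }

    public static void main(String[] args) {
        SendFinishedContainersSketch s = new SendFinishedContainersSketch();
        List<String> pending = new ArrayList<>();
        s.sendFinishedContainersToNM(pending);  // empty heartbeat: no event
        pending.add("container_1_0001_01_000002");
        s.sendFinishedContainersToNM(pending);  // one event for the batch
        System.out.println(s.eventsSent);
    }
}
```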
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335319#comment-15335319 ] Varun Saxena commented on YARN-5174: I addressed it in YARN-5052. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, > YARN-5174-YARN-2928.04.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335308#comment-15335308 ] Naganarasimha G R commented on YARN-5174: - Thanks for the patch [~sjlee0]. I will try to review the patch and post my comments, but the earlier point {{"change the left nav"}} is not addressed, I presume? > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, > YARN-5174-YARN-2928.04.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335226#comment-15335226 ] Karthik Kambatla commented on YARN-5077: The javac warning appears to be due to the changes introduced by YARN-4844. Can you update the call to getMemory to use getMemorySize instead? > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch, > YARN-5077.009.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
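The zero-weight symptom in the description can be illustrated with a weight-proportional share computation (a simplified model, not the FairScheduler code): with weights {dev: 0.0, product: 1.0}, dev's fair share — and hence its headroom — computes to 0, so from the RM's viewpoint no AM container can ever be allocated on that queue.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FairShareSketch {
    // Distribute clusterMemoryMB across queues in proportion to weight.
    // A zero-weight queue receives a zero share, so the RM computes
    // zero headroom for it and never launches an AM container there.
    static Map<String, Long> fairShares(long clusterMemoryMB,
                                        Map<String, Double> weights) {
        double totalWeight = weights.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        Map<String, Long> shares = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            shares.put(e.getKey(),
                    (long) (clusterMemoryMB * e.getValue() / totalWeight));
        }
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("root.dev", 0.0);
        weights.put("root.product", 1.0);
        // root.dev gets 0 even when root.product is idle -- the symptom
        System.out.println(fairShares(100_000, weights));
    }
}
```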
[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
[ https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335221#comment-15335221 ] Karthik Kambatla commented on YARN-4844: Also, noticed that some of the helper methods in Resources seem to using getMemorySize for calculations but typecasting to int as in this example: {code} public static Resource multiplyTo(Resource lhs, double by) { lhs.setMemory((int)(lhs.getMemorySize() * by)); lhs.setVirtualCores((int)(lhs.getVirtualCores() * by)); return lhs; } {code} > Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource > - > > Key: YARN-4844 > URL: https://issues.apache.org/jira/browse/YARN-4844 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Fix For: 2.8.0 > > Attachments: YARN-4844-branch-2.8.0016_.patch, > YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch, > YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch, > YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, > YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, > YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, > YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, > YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, > YARN-4844.9.branch-2.patch > > > We use int32 for memory now, if a cluster has 10k nodes, each node has 210G > memory, we will get a negative total cluster memory. > And another case that easier overflows int32 is: we added all pending > resources of running apps to cluster's total pending resources. If a > problematic app requires too much resources (let's say 1M+ containers, each > of them has 3G containers), int32 will be not enough. 
> Even if we can cap each app's pending request, we cannot handle the case that > there're many running apps, each of them has capped but still significant > numbers of pending resources. > So we may possibly need to add getMemoryLong/getVirtualCoreLong to > o.a.h.y.api.records.Resource. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
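The overflow in the description is easy to reproduce with plain arithmetic: 10k nodes at 210 GB each is 2,150,400,000 MB, just past Integer.MAX_VALUE (2,147,483,647), so an int aggregate goes negative while a long does not. The same truncation is what the (int) casts flagged in multiplyTo above would reintroduce. A standalone illustration, not Hadoop code:

```java
public class ResourceOverflowSketch {
    public static void main(String[] args) {
        long perNodeMB = 210L * 1024;      // 210 GB per node, in MB
        long nodes = 10_000;

        long totalMB = nodes * perNodeMB;  // 2_150_400_000 -- fits in long
        int totalMBInt = (int) totalMB;    // wraps past Integer.MAX_VALUE

        System.out.println(totalMB);
        System.out.println(totalMBInt);    // negative: the wrapped value
    }
}
```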
[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
[ https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335220#comment-15335220 ] Karthik Kambatla commented on YARN-4844: Also, do you think we can get this in 2.9 instead so we can be sure other stuff doesn't break? > Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource > - > > Key: YARN-4844 > URL: https://issues.apache.org/jira/browse/YARN-4844 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Fix For: 2.8.0 > > Attachments: YARN-4844-branch-2.8.0016_.patch, > YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch, > YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch, > YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, > YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, > YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, > YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, > YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, > YARN-4844.9.branch-2.patch > > > We use int32 for memory now, if a cluster has 10k nodes, each node has 210G > memory, we will get a negative total cluster memory. > And another case that easier overflows int32 is: we added all pending > resources of running apps to cluster's total pending resources. If a > problematic app requires too much resources (let's say 1M+ containers, each > of them has 3G containers), int32 will be not enough. > Even if we can cap each app's pending request, we cannot handle the case that > there're many running apps, each of them has capped but still significant > numbers of pending resources. > So we may possibly need to add getMemoryLong/getVirtualCoreLong to > o.a.h.y.api.records.Resource. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5246) NMWebAppFilter web redirects drop query parameters
[ https://issues.apache.org/jira/browse/YARN-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335206#comment-15335206 ] Junping Du commented on YARN-5246: -- Thanks [~vvasudev] for delivering the patch. The patch looks good overall. Just remove the unnecessary imports flagged in the checkstyle report and it should be fine. > NMWebAppFilter web redirects drop query parameters > -- > > Key: YARN-5246 > URL: https://issues.apache.org/jira/browse/YARN-5246 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-5246.001.patch > > > The NMWebAppFilter drops query parameters when it carries out a redirect to > the log server. This leads to problems when users have simple web > authentication setup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
[ https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335197#comment-15335197 ] Karthik Kambatla commented on YARN-4844: Sorry for coming in late. Stumbled on this by way of reviewing YARN-5077, which starts showing a javac warning for using getMemory(). I see a few inconsistencies here: # getMemory is deprecated, but getVirtualCores is not # getMemory is deprecated and recommends using getMemorySize, but getMemorySize is unstable. Seems like the users are stuck between a rock and a hard place? # Is the recommendation to use the long version for everything - individual resource-requests and variables that are used to capture aggregates? If yes, shouldn't we update all current usages to the long version? > Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource > - > > Key: YARN-4844 > URL: https://issues.apache.org/jira/browse/YARN-4844 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Fix For: 2.8.0 > > Attachments: YARN-4844-branch-2.8.0016_.patch, > YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch, > YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch, > YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, > YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, > YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, > YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, > YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, > YARN-4844.9.branch-2.patch > > > We use int32 for memory now, if a cluster has 10k nodes, each node has 210G > memory, we will get a negative total cluster memory. > And another case that easier overflows int32 is: we added all pending > resources of running apps to cluster's total pending resources. 
If a > problematic app requires too much resources (let's say 1M+ containers, each > of them has 3G containers), int32 will be not enough. > Even if we can cap each app's pending request, we cannot handle the case that > there're many running apps, each of them has capped but still significant > numbers of pending resources. > So we may possibly need to add getMemoryLong/getVirtualCoreLong to > o.a.h.y.api.records.Resource. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application
[ https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335139#comment-15335139 ] Hadoop QA commented on YARN-5224: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 46 unchanged - 1 fixed = 55 total (was 47) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 279 unchanged - 0 fixed = 280 total (was 279) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 4s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 3s {color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 40s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 18s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811250/YARN-5224.4.patch | | JIRA Issue | YARN-5224 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e134a51bd5ff 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf78040 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12057/artifact/patch
[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335116#comment-15335116 ] Hadoop QA commented on YARN-5077: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 27s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 253 unchanged - 4 fixed = 253 total (was 257) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 31m 21s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 1s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811248/YARN-5077.009.patch | | JIRA Issue | YARN-5077 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux dfd0ff03488a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf78040 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/12056/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12056/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12056/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Fix FSLeafQueue#getFairShare() for
[jira] [Created] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
Joep Rottinghuis created YARN-5269: -- Summary: Bubble exceptions and errors all the way up the calls, including to clients. Key: YARN-5269 URL: https://issues.apache.org/jira/browse/YARN-5269 Project: Hadoop YARN Issue Type: Sub-task Components: timelineserver Affects Versions: YARN-2928 Reporter: Joep Rottinghuis Currently we ignore (swallow) exceptions from the HBase side in many cases (reads and writes). Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) nor the #putEntitiesAsync method returns any value. For the second drop we may want to consider how we properly bubble up exceptions throughout the write and reader call paths, and whether we want to return a response in putEntities and some future kind of result for putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
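To make the proposal concrete, here is a minimal sketch of the difference between swallowing a write error and surfacing it through a returned response and a future. All names here are hypothetical illustrations, not the actual TimelineClient API:

```python
from concurrent.futures import ThreadPoolExecutor


class StorageWriteError(Exception):
    """Stand-in for an HBase-side failure that should not be swallowed."""


def _write_entities(entities):
    # Hypothetical storage write: raise on failure instead of logging
    # and silently dropping the error.
    if not entities:
        raise StorageWriteError("refusing an empty write")
    return {"written": len(entities)}


_executor = ThreadPoolExecutor(max_workers=1)


def put_entities(entities):
    # Synchronous flavor: the caller gets a response or sees the exception.
    return _write_entities(entities)


def put_entities_async(entities):
    # Async flavor: return a future so the error surfaces at .result()
    # rather than dying unseen inside a background thread.
    return _executor.submit(_write_entities, entities)
```

The point of the future-shaped result is that the exception is re-raised in the caller's context when the result is requested, instead of disappearing in the writer thread.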
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335095#comment-15335095 ] Joep Rottinghuis commented on YARN-5174: Documentation looks good to me. One thing we can do in the next release (not needed for this JIRA, imho) is have a summary of the arguments to the various REST methods so that each argument is defined once. Only if an argument with the same name works differently for a particular API call would we then document the differences. That will reduce the overall length of our doc and cut down on repeated text. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, > YARN-5174-YARN-2928.04.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2.
[jira] [Updated] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application
[ https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5224: Attachment: YARN-5224.4.patch > Logs for a completed container are not available in the yarn logs output for > a live application > --- > > Key: YARN-5224 > URL: https://issues.apache.org/jira/browse/YARN-5224 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Siddharth Seth >Assignee: Xuan Gong > Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, > YARN-5224.4.patch > > > This affects 'short' jobs like MapReduce and Tez more than long running apps. > Related: YARN-5193 (but that only covers long running apps)
[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application
[ https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335048#comment-15335048 ] Xuan Gong commented on YARN-5224: - Thanks for the review, Vinod. Attached a new patch to address your comments. Added a web service with the path "/containers/$containerid/logs" to return the names/file sizes of the log files, and a new web service with the path "/containers/$containerid/logs/$filename" to return the contents of the specified log file. > Logs for a completed container are not available in the yarn logs output for > a live application > --- > > Key: YARN-5224 > URL: https://issues.apache.org/jira/browse/YARN-5224 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Siddharth Seth >Assignee: Xuan Gong > Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch > > > This affects 'short' jobs like MapReduce and Tez more than long running apps. > Related: YARN-5193 (but that only covers long running apps)
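For illustration, the two endpoints described in the comment above could be addressed as sketched below. The host:port, the /ws/v1/node base path, and the helper itself are assumptions for illustration only, not the committed web-service API:

```python
def container_logs_url(nm_address, container_id, filename=None):
    """Build URLs for the two log endpoints the comment describes.

    Assumes the NodeManager web services live under /ws/v1/node; treat
    the base path and parameter names as hypothetical.
    """
    base = "http://{0}/ws/v1/node/containers/{1}/logs".format(
        nm_address, container_id)
    # Without a filename: list log-file names and sizes.
    # With a filename: fetch the contents of that log file.
    return base if filename is None else "{0}/{1}".format(base, filename)
```

A client would first GET the bare /logs URL to enumerate files, then GET /logs/&lt;filename&gt; for each file it wants to read.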
[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties
[ https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335043#comment-15335043 ] Arun Suresh commented on YARN-5221: --- Thanks for the review, [~leftnoteasy]. bq. How to handle container version mismatch issue or other issues which cause update failure? In the patch, the checking is done in the {{ApplicationMasterService}} and it just logs the mismatch. I agree, let me update the {{AllocateResponse}} to include something like failed update requests (with a message giving the reason). bq. org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation: should we merge increase/decrease container lists? We could, but since that is not part of the API per se I was thinking that maybe it's fine as it is... we can probably file a follow-up JIRA. I agree with the rest of your comments... will update the patch shortly to address them. > Expose UpdateResourceRequest API to allow AM to request for change in > container properties > -- > > Key: YARN-5221 > URL: https://issues.apache.org/jira/browse/YARN-5221 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5221.001.patch, YARN-5221.002.patch, > YARN-5221.003.patch, YARN-5221.004.patch > > > YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease > of Container Resources after initial allocation. > YARN-5085 proposes to allow an AM to request for a change of Container > ExecutionType. > This JIRA proposes to unify both of the above into an Update Container API.
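A hedged sketch of the shape being discussed above -- an allocate response that reports failed container-update requests together with a reason such as a version mismatch. All names are hypothetical illustrations, not the actual YARN protocol records:

```python
from dataclasses import dataclass, field


@dataclass
class UpdateError:
    # Hypothetical record: which container update failed and why.
    container_id: str
    reason: str


@dataclass
class AllocateResponse:
    # Successful updates plus a list of failed update requests, each
    # carrying a human-readable reason, instead of only logging failures.
    updated: list = field(default_factory=list)
    failed_update_requests: list = field(default_factory=list)


def apply_updates(current_versions, requests):
    """Accept updates whose expected container version matches; report the rest."""
    resp = AllocateResponse()
    for container_id, expected_version in requests:
        if current_versions.get(container_id) == expected_version:
            resp.updated.append(container_id)
        else:
            resp.failed_update_requests.append(UpdateError(
                container_id,
                "version mismatch: expected %s, found %s" % (
                    expected_version, current_versions.get(container_id))))
    return resp
```

The design choice here is that a rejected update is data in the response rather than a server-side log line, so the AM can retry or adjust.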
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335038#comment-15335038 ] Hadoop QA commented on YARN-5174: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 40s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 9s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:cf2ee45 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811246/YARN-5174-YARN-2928.04.patch | | JIRA Issue | YARN-5174 | | Optional Tests | asflicense mvnsite | | uname | Linux 2135a962b935 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2928 / b9b9068 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12055/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, > YARN-5174-YARN-2928.04.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. 
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: YARN-5077.009.patch Uploaded patch 009 to fix the issue where the patch could not be applied. > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch, > YARN-5077.009.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start.
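The behavior in the quoted report follows from fair shares being proportional to queue weights: a 0.0-weight queue receives a zero share, so the RM sees no headroom for an AM container. A minimal sketch of that proportional computation (illustrative only, assuming a single resource -- the real FSLeafQueue logic also involves min/max shares, active apps, and multiple resource types):

```python
def fair_shares(total, weights):
    """Split a total resource across queues in proportion to their weights.

    Illustrative stand-in for the FairScheduler's steady fair share; not
    the actual FSLeafQueue#getFairShare() implementation.
    """
    weight_sum = sum(weights.values())
    if weight_sum == 0:
        # No positive weights anywhere: nothing to apportion.
        return {queue: 0.0 for queue in weights}
    return {queue: total * w / weight_sum for queue, w in weights.items()}


# With root.dev at weight 0.0 and root.product at 1.0, root.dev's share
# is 0, so the RM sees no headroom there even when root.product is idle.
shares = fair_shares(8192, {"root.dev": 0.0, "root.product": 1.0})
```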
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5174: -- Attachment: YARN-5174-YARN-2928.04.patch Fixed a tab. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, > YARN-5174-YARN-2928.04.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2.
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334994#comment-15334994 ] Hadoop QA commented on YARN-5174: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 42s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:cf2ee45 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811242/YARN-5174-YARN-2928.03.patch | | JIRA Issue | YARN-5174 | | Optional Tests | asflicense mvnsite | | uname | Linux 7e87b8fa0845 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2928 / b9b9068 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/12054/artifact/patchprocess/whitespace-tabs.txt | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12054/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. 
Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2.
[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars
[ https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334970#comment-15334970 ] Hadoop QA commented on YARN-5233: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 6s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 216 unchanged - 1 fixed = 216 total (was 217) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s {color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 26s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811237/YARN-5233-trunk.002.patch | | JIRA Issue | YARN-5233 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux c52f55afdb57 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b1674ca | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12051/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-ya
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5174: -- Attachment: YARN-5174-YARN-2928.03.patch Sorry I messed up the latest patch (forgot the image). Fixed. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, YARN-5174-YARN-2928.03.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2.
[jira] [Commented] (YARN-5261) Lease/Reclaim Extension to Yarn
[ https://issues.apache.org/jira/browse/YARN-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334930#comment-15334930 ] Jason Lowe commented on YARN-5261: -- This is very similar work to YARN-5215 and YARN-1011 / YARN-5202. It would be good to understand how this is fundamentally different, because at first glance it appears the desired behavior can be accomplished via one or more of those other efforts. > Lease/Reclaim Extension to Yarn > --- > > Key: YARN-5261 > URL: https://issues.apache.org/jira/browse/YARN-5261 > Project: Hadoop YARN > Issue Type: New Feature > Components: scheduler >Reporter: Yu > > In some clusters outside of Yarn, the machines' resources are not fully > utilized, e.g., resource usage is quite low at night. > To better utilize the resources while keeping the existing SLA of the cluster, > a Lease/Reclaim Extension to Yarn is introduced.
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5174: -- Attachment: The YARN Timeline Service v2.pdf > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, The YARN Timeline > Service v2.pdf, YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5174: -- Attachment: (was: ATSv2_Documentation.pdf) > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-5174: -- Attachment: YARN-5174-YARN-2928.03.patch Posted patch v.3. On top of Varun's work, I made the following changes: - specified the HBase version requirement - corrected the coprocessor configuration key - removed the writer/reader class configuration from the section for enabling timeline service v.2 - removed mention of the web UI (which didn't make this milestone) - miscellaneous: corrected typos, wrapped long lines, etc. Please review. Also, if you'd like to make more edits, please comment here so that we know who's actively working on it. Thanks! > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, > YARN-5174-YARN-2928.03.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5233) Support for specifying a path for ATS plugin jars
[ https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Lu updated YARN-5233: Attachment: YARN-5233-trunk.002.patch New patch to fix checkstyle issues. > Support for specifying a path for ATS plugin jars > - > > Key: YARN-5233 > URL: https://issues.apache.org/jira/browse/YARN-5233 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: 2.8.0 >Reporter: Li Lu >Assignee: Li Lu > Attachments: YARN-5233-trunk.001.patch, YARN-5233-trunk.002.patch > > > Third-party plugins need to add their jars to ATS. Most of the time, > isolation is not needed. However, there needs to be a way to specify the > path. For now, the jars on that path can be added to the default classloader.
[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars
[ https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334870#comment-15334870 ] Hadoop QA commented on YARN-5233: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 216 unchanged - 1 fixed = 217 total (was 217) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 10s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s {color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 31s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811229/YARN-5233-trunk.001.patch | | JIRA Issue | YARN-5233 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 9326de8e8706 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 127d2c7 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12049/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | |
[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334818#comment-15334818 ] Hadoop QA commented on YARN-5077: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} YARN-5077 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811228/YARN-5077.008.patch | | JIRA Issue | YARN-5077 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12050/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. 
In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
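The stuck-in-ACCEPTED behavior reported above follows directly from weight-proportional fair-share math: a queue with weight 0.0 computes to a zero share, so the RM sees no headroom for an AM container. A minimal sketch of that arithmetic (not the actual FSLeafQueue code; the class and method here are hypothetical):

```java
import java.util.Arrays;

/** Toy weight-proportional fair-share calculator (illustrative, not YARN's FSLeafQueue). */
class FairShareSketch {
    /** Splits clusterMemory across queues in proportion to their weights. */
    static double[] fairShares(double clusterMemory, double[] weights) {
        double totalWeight = Arrays.stream(weights).sum();
        double[] shares = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            // A weight of 0.0 always yields a 0 share, regardless of how idle
            // the cluster is, so an AM on that queue never gets headroom.
            shares[i] = totalWeight == 0 ? 0 : clusterMemory * weights[i] / totalWeight;
        }
        return shares;
    }

    public static void main(String[] args) {
        // The example from the report: root.dev has weight 0.0, root.product 1.0.
        double[] shares = fairShares(8192, new double[] {0.0, 1.0});
        System.out.println(Arrays.toString(shares)); // root.dev's share stays 0.0
    }
}
```

The fix the JIRA pursues is to treat zero-weight queues specially rather than letting this proportion starve them outright.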
[jira] [Updated] (YARN-5233) Support for specifying a path for ATS plugin jars
[ https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Lu updated YARN-5233: Attachment: YARN-5233-trunk.001.patch Attaching a patch that adds an ATS plugin-specific classpath and loads those jars at run time. > Support for specifying a path for ATS plugin jars > - > > Key: YARN-5233 > URL: https://issues.apache.org/jira/browse/YARN-5233 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: 2.8.0 >Reporter: Li Lu >Assignee: Li Lu > Attachments: YARN-5233-trunk.001.patch > > > Third-party plugins need to add their jars to ATS. Most of the time, > isolation is not needed. However, there needs to be a way to specify the > path. For now, the jars on that path can be added to the default classloader.
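The approach described above — scanning a configured path and adding the jars found there to the default classloader, without isolation — can be sketched with a plain URLClassLoader. This is an illustration only, not the patch's actual code; the system property and class names below are assumptions:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

/** Sketch: load all jars under a configured directory into one shared classloader. */
class PluginJarLoader {
    static URLClassLoader loaderFor(String pluginDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        File[] jars = new File(pluginDir).listFiles((d, name) -> name.endsWith(".jar"));
        if (jars != null) {
            for (File jar : jars) {
                urls.add(jar.toURI().toURL()); // no isolation: every plugin shares this loader
            }
        }
        // Parent delegation keeps the host's (here, Hadoop's) classes visible to plugins.
        return new URLClassLoader(urls.toArray(new URL[0]),
                PluginJarLoader.class.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        // "ats.plugin.dir" is a hypothetical property name for this sketch.
        URLClassLoader loader = loaderFor(System.getProperty("ats.plugin.dir", "/tmp/ats-plugins"));
        System.out.println(loader.getURLs().length + " plugin jar(s) on the path");
    }
}
```

Skipping isolation (one shared loader with normal parent delegation) matches the JIRA's stated scope: plugins see ATS's own classes, and version conflicts between plugins are accepted as out of scope.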
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: YARN-5077.008.patch [~kasha], thank you very much for your comments. I uploaded patch 008 addressing all comments. The new patch consolidates YARN-4866, adds a test case with a tiny weight in addition to the zero weight, and removes unused functions. > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running.
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: (was: YARN-5077.008.patch) > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-2928) YARN Timeline Service: Next generation
[ https://issues.apache.org/jira/browse/YARN-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-2928: -- Attachment: ATSv2BackendHBaseSchemaproposal.pdf > YARN Timeline Service: Next generation > -- > > Key: YARN-2928 > URL: https://issues.apache.org/jira/browse/YARN-2928 > Project: Hadoop YARN > Issue Type: New Feature > Components: timelineserver >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > Attachments: ATSv2.rev1.pdf, ATSv2.rev2.pdf, > ATSv2BackendHBaseSchemaproposal.pdf, Data model proposal v1.pdf, Timeline > Service Next Gen - Planning - ppt.pptx, > TimelineServiceStoragePerformanceTestSummaryYARN-2928.pdf > > > We have the application timeline server implemented in yarn per YARN-1530 and > YARN-321. Although it is a great feature, we have recognized several critical > issues and features that need to be addressed. > This JIRA proposes the design and implementation changes to address those. > This is phase 1 of this effort. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334621#comment-15334621 ] Joep Rottinghuis edited comment on YARN-5174 at 6/16/16 8:44 PM: - The property "yarn.timeline-service.hbase.app-final-value-retention-milliseconds" has been renamed to "yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds" as per YARN-5189, that still needs to be modified in this jira. In the "Enabling Timeline Service v.2" section, there is a table: "Following are the basic configurations to start Timeline service v.2:" It lists out the yarn.timeline-service.writer.class and yarn.timeline-service.reader.class properties. Now that these are the default, we don't have to list them here anymore. The simpler, the better. was (Author: jrottinghuis): The property "yarn.timeline-service.hbase.app-final-value-retention-milliseconds" has been renamed to "yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds" as per YARN-5189, that still needs to be modified in this jira. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. 
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334621#comment-15334621 ] Joep Rottinghuis commented on YARN-5174: The property "yarn.timeline-service.hbase.app-final-value-retention-milliseconds" has been renamed to "yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds" as per YARN-5189, that still needs to be modified in this jira. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5265) Make HBase configuration for the timeline service configurable
[ https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis updated YARN-5265: --- Attachment: ATS v2 cluster deployment v1.png Attaching a diagram that [~vrushalic] created. It helps in visualizing the challenge. The cluster on which the nodemanagers run, and/or connect to by default, does not necessarily have to be the same as the HBase cluster that ATS needs to write to. There are two configurations involved. The first is the Yarn configuration for the source cluster, that is, the cluster the timelineservice clients run on (as part of the nodemanager, resourcemanager, or reader service). This configuration will have most of the Yarn ATS properties. The second configuration is the one that identifies the HBase cluster that the timelineservice client writes to (or reads from). These settings may not be the same as those of the default cluster that tasks on the cluster connect to. It isn't good enough to require the ATS HBase cluster configs to be on the classpath of the nodemanagers, resource manager, etc. > Make HBase configuration for the timeline service configurable > -- > > Key: YARN-5265 > URL: https://issues.apache.org/jira/browse/YARN-5265 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis > Attachments: ATS v2 cluster deployment v1.png > > > Currently we create "default" HBase configurations; this works as long as the > user places the appropriate configuration on the classpath. > This works fine for a standalone Hadoop cluster. > However, if a user wants to monitor an HBase cluster and has a separate ATS > HBase cluster, then it can become tricky to create the right classpath for > the nodemanagers and still have tasks keep their separate configs. > It will be much easier to add a yarn configuration to let cluster admins > configure which HBase cluster to connect to for writing ATS metrics.
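The yarn configuration the JIRA proposes would let an admin point the ATS v.2 writer at an explicit HBase configuration file, instead of relying on whatever hbase-site.xml happens to be on the NM/RM classpath. A hypothetical yarn-site.xml snippet — the property name is illustrative only; YARN-5265 itself is what would define the real one:

```xml
<!-- Hypothetical: direct the timeline service collector/reader to the
     configuration of a dedicated ATS HBase cluster, decoupled from the
     hbase-site.xml that tasks on the source cluster use by default. -->
<property>
  <name>yarn.timeline-service.hbase.configuration.file</name>
  <value>file:/etc/hbase-ats/conf/hbase-site.xml</value>
</property>
```

With such a property, the nodemanagers and resourcemanager keep their normal classpath, and only the timeline service components load the separate HBase cluster's settings.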
[jira] [Created] (YARN-5268) DShell AM fails java.lang.InterruptedException
Sumana Sathish created YARN-5268: Summary: DShell AM fails java.lang.InterruptedException Key: YARN-5268 URL: https://issues.apache.org/jira/browse/YARN-5268 Project: Hadoop YARN Issue Type: Bug Components: yarn Reporter: Sumana Sathish Assignee: Tan, Wangda Priority: Critical Fix For: 2.9.0 Distributed Shell AM failed with the following error {Code} 16/06/16 11:08:10 INFO impl.NMClientAsyncImpl: NMClient stopped. 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application completed. Signalling finish to RM 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Diagnostics., total=16, completed=19, allocated=21, failed=4 16/06/16 11:08:10 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered. 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application Master failed. exiting 16/06/16 11:08:10 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting for queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287) End of LogType:AppMaster.stderr {Code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
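The stack trace above comes from a callback thread blocked in LinkedBlockingQueue.take() being interrupted while the AM unregisters. A common pattern for such threads is to treat the interrupt as an orderly stop signal rather than an error; this generic sketch (not AMRMClientAsyncImpl's actual code) shows the shape:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Generic consumer thread that treats interruption as an orderly stop signal. */
class CallbackThreadSketch {
    static int drained = 0;

    static Thread consumer(BlockingQueue<String> queue) {
        return new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    queue.take(); // blocks, like AMRMClientAsyncImpl's handler thread
                    drained++;
                } catch (InterruptedException e) {
                    // Restore the flag and fall out of the loop: during shutdown
                    // this is expected, not an error worth a noisy stack trace.
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.add("containerAllocated");
        Thread t = consumer(q);
        t.start();
        Thread.sleep(200); // let it drain the one event, then block on take()
        t.interrupt();     // shutdown path: interrupt the blocked thread
        t.join(2000);
        System.out.println("drained=" + drained + " alive=" + t.isAlive());
    }
}
```

Whether the exit code reported in this bug is a separate problem or just fallout from the logged interrupt is exactly what the JIRA needs to establish; the sketch only shows the quiet-shutdown convention.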
[jira] [Commented] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334513#comment-15334513 ] Hadoop QA commented on YARN-5266: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 6 new + 81 unchanged - 5 fixed = 87 total (was 86) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 37s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 20s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811181/YARN-5266.1.patch | | JIRA Issue | YARN-5266 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ad66943969f0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 127d2c7 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12048/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12048/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12048/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12048/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12048/console | | Powered by | Apache Yetus 0.3.0
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334491#comment-15334491 ] Hadoop QA commented on YARN-5174: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 8s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:cf2ee45 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811179/YARN-5174-YARN-2928.02.patch | | JIRA Issue | YARN-5174 | | Optional Tests | asflicense mvnsite | | uname | Linux 638c8c3a1c52 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2928 / b9b9068 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12047/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
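The YARN-5174 description above asks for documentation of the client-side {{hbase-site.xml}} needed to reach the right HBase cluster. As a hedged illustration only (the issue's whole point is that the minimally required property set still needs to be determined; hostnames below are placeholders), such a client config might look like:

```xml
<configuration>
  <!-- Illustrative client-side setting: point the client Hadoop cluster at
       the ZooKeeper quorum of the HBase cluster backing timeline service v.2.
       Hostnames are placeholders, not values from the issue. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
```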
[jira] [Updated] (YARN-5244) Documentation required for DNS Server implementation
[ https://issues.apache.org/jira/browse/YARN-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Maron updated YARN-5244: - Attachment: dns record creation.jpeg dns record removal.jpeg dns overview.png yarn_dns_server.md Initial documentation > Documentation required for DNS Server implementation > > > Key: YARN-5244 > URL: https://issues.apache.org/jira/browse/YARN-5244 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Maron > Attachments: dns overview.png, dns record creation.jpeg, dns record > removal.jpeg, yarn_dns_server.md > > > The DNS server requires documentation describing its functionality etc
[jira] [Resolved] (YARN-4951) large IP ranges require a different naming strategy
[ https://issues.apache.org/jira/browse/YARN-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Maron resolved YARN-4951. -- Resolution: Fixed Fixed with the specification of zone-mask property > large IP ranges require a different naming strategy > --- > > Key: YARN-4951 > URL: https://issues.apache.org/jira/browse/YARN-4951 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Maron >Assignee: Jonathan Maron > Attachments: > 0001-YARN-4757-simplified-reverse-lookup-zone-approach-fo.patch > > > Large subnet definitions (e.g. specifying a mask value of 255.255.224.0) > yield a large number of potential network addresses. Therefore, the standard > naming convention of xx.xx.xx.in-addr.arpa needs to be modified to be more > general: xx.xx.in-addr.arpa.
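The naming change YARN-4951 describes (dropping to a coarser in-addr.arpa zone when the mask is not octet-aligned) can be sketched as follows. This is an illustrative sketch of the idea only, not the patch's actual code; `reverse_zone` is a hypothetical helper that keeps only the whole octets covered by the prefix:

```python
import ipaddress

def reverse_zone(network: str) -> str:
    """Derive a reverse-lookup zone name for a network.

    For octet-aligned prefixes (/8, /16, /24) this yields the standard
    xx.xx.xx.in-addr.arpa convention; for wider, non-octet-aligned masks
    (e.g. 255.255.224.0, a /19) it falls back to the next coarser octet
    boundary, producing the more general xx.xx.in-addr.arpa form."""
    net = ipaddress.ip_network(network, strict=False)
    octets = net.prefixlen // 8  # whole octets fully covered by the mask
    parts = str(net.network_address).split(".")[:octets]
    return ".".join(reversed(parts)) + ".in-addr.arpa"

print(reverse_zone("10.20.30.0/24"))  # 30.20.10.in-addr.arpa
print(reverse_zone("10.20.32.0/19"))  # 20.10.in-addr.arpa
```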
[jira] [Updated] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5266: Attachment: YARN-5266.1.patch > Wrong exit code while trying to get app logs using regex via CLI > > > Key: YARN-5266 > URL: https://issues.apache.org/jira/browse/YARN-5266 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 2.9.0 >Reporter: Sumana Sathish >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-5266.1.patch > > > The test is trying to do negative test by passing regex as 'ds+' and expects > exit code != 0. > *Exit Code is zero and the error message is typed more than once* > {code} > RUNNING: /usr/hdp/current/hadoop-yarn-client/bin/yarn logs -applicationId > application_1465500362360_0016 -logFiles ds+ > Can not find any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,079|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,216|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,331|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,432|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
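The behavior YARN-5266 asks for (print the "no match" diagnostic once and exit non-zero, rather than repeating it and exiting 0) can be sketched as below. This is a hypothetical illustration of the expected CLI contract, not the actual `LogsCLI` code; `fetch_matching_logs` is an assumed helper name:

```python
import re

def fetch_matching_logs(log_files, pattern):
    """Sketch of the expected exit-code behavior: emit the 'no match'
    message a single time and return a non-zero status when the regex
    matches none of the available log file names."""
    regex = re.compile(pattern)
    matched = [name for name in log_files if regex.fullmatch(name)]
    if not matched:
        # Print the diagnostic once, then signal failure to the caller.
        print("Can not find any log file matching the pattern: [%s]" % pattern)
        return -1
    for name in matched:
        print("dumping log file:", name)
    return 0

# 'ds+' matches none of the usual container log names, so this is the
# negative case the test expects to fail with a non-zero status:
status = fetch_matching_logs(["stdout", "stderr", "syslog"], "ds+")
assert status != 0
```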
[jira] [Commented] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster
[ https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334467#comment-15334467 ] Hadoop QA commented on YARN-4280: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 29s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 22 new + 360 unchanged - 0 fixed = 382 total (was 360) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 28s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811171/YARN-4280.001.patch | | JIRA Issue | YARN-4280 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e5feebfa47df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aefe11 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/12043/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12043/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12043/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resour
[jira] [Commented] (YARN-3933) Race condition when calling AbstractYarnScheduler.completedContainer.
[ https://issues.apache.org/jira/browse/YARN-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334463#comment-15334463 ] Hadoop QA commented on YARN-3933: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 34s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 3 new + 4 unchanged - 0 fixed = 7 total (was 4) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 8 new + 284 unchanged - 4 fixed = 292 total (was 288) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 20s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 23s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794386/YARN-3933.003.patch | | JIRA Issue | YARN-3933 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4a6ad92f5e8d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aefe11 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/12042/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12042/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12042/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.ap
[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334453#comment-15334453 ] Hadoop QA commented on YARN-5174: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 25s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 57s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:cf2ee45 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811179/YARN-5174-YARN-2928.02.patch | | JIRA Issue | YARN-5174 | | Optional Tests | asflicense mvnsite | | uname | Linux 44f6b7d4f40f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2928 / b9b9068 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12046/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. 
[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List
[ https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334449#comment-15334449 ] Hadoop QA commented on YARN-5200: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 20 new + 86 unchanged - 3 fixed = 106 total (was 89) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 39s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 32s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient | | | hadoop.yarn.client.cli.TestLogsCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811170/YARN-5200.2.patch | | JIRA Issue | YARN-5200 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 71c2790c549a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aefe11 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12044/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12044/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit test logs | https://builds.apache.org/
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-5174: --- Assignee: Sangjin Lee (was: Varun Saxena) > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-5174: --- Attachment: YARN-5174-YARN-2928.02.patch Found few more query params which were not in lowercase. Fixed them too in the latest patch. > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-5174: --- Attachment: (was: ATSv2_Documentation.pdf) > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, Hierarchy.png, > PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, YARN-5174-YARN-2928.02.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-5174: --- Attachment: ATSv2_Documentation.pdf > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, ATSv2_Documentation.pdf, > Hierarchy.png, PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation
[ https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-5174: --- Assignee: Varun Saxena (was: Sangjin Lee) > several updates/corrections to timeline service documentation > - > > Key: YARN-5174 > URL: https://issues.apache.org/jira/browse/YARN-5174 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: ATSv2_Documentation.pdf, ATSv2_Documentation.pdf, > Hierarchy.png, PublishingApplicationDatatoYARNTimelineServicev.pdf, > YARN-5174-YARN-2928.01.patch, flow_hierarchy.png > > > One part that is missing in the documentation is the need to add > {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we > need to arrive at the minimally required client setting to connect to the > right hbase cluster. Then, we need to document it so that users know exactly > what to do to configure the cluster to use the timeline service v.2. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5256) [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations
[ https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334424#comment-15334424 ] Hadoop QA commented on YARN-5256: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 42s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 12s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac 
{color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 9 new + 47 unchanged - 0 fixed = 56 total (was 47) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 15s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 20s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage | | | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestRMAdminService | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:6d3a5f5 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811163/YARN-5256-YARN-3368.2.patch | | JIRA Issue | YARN-5256 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c6df50a90a17 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3368 / b775df6 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12041/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12041/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12041/artifact/patchprocess/patch-unit-hadoop-yar
[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties
[ https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334399#comment-15334399 ] Wangda Tan commented on YARN-5221: -- Thanks [~asuresh], a couple of comments; I haven't reviewed the test code yet: 1) How do we handle the container version mismatch issue, or other issues which cause an update failure? Currently in the patch some are reported to the AM as exceptions and some are silently recorded in the RM's log; should we add a new list to AllocateResponse to let the client know about failed-to-update containers? 2) org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation: should we merge the increase/decrease container lists? 3) Should getVersion of Container be @Public/Unstable? Will it be used by end users? It's probably better to make it a long to avoid future changes (e.g. if we need sparsity in version numbers). 4) For UpdateContainerRequest, should we document the following behaviors: - Can we update multiple fields of a container at the same time? - What happens if we send two different update requests for the same container? (The first increases the container size, and the second updates the execution type.) 5) Even though ContainerTokenIdentifier is an evolving API that we are allowed to break by the rules, it's better to avoid changing the API as much as possible. > Expose UpdateResourceRequest API to allow AM to request for change in > container properties > -- > > Key: YARN-5221 > URL: https://issues.apache.org/jira/browse/YARN-5221 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-5221.001.patch, YARN-5221.002.patch, > YARN-5221.003.patch, YARN-5221.004.patch > > > YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease > of Container Resources after initial allocation. > YARN-5085 proposes to allow an AM to request for a change of Container > ExecutionType. > This JIRA proposes to unify both of the above into an Update Container API. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
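Comment 1) above asks whether container-update failures should be collected and surfaced to the AM rather than thrown for some cases and silently logged for others. As a rough illustration only — the class and method names below are hypothetical stand-ins, not the actual YARN API — the reporting pattern under discussion might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of comment 1): collect container-update failures
// (e.g. version mismatches) on the RM side and report them back to the
// AM in one list, instead of mixing exceptions and silent log lines.
// None of these names are real YARN classes.
public class UpdateErrorReporting {

  static final class UpdateError {
    final int containerIndex;
    final String reason;
    UpdateError(int containerIndex, String reason) {
      this.containerIndex = containerIndex;
      this.reason = reason;
    }
  }

  // Stand-in for the RM-side update path: each requested update whose
  // version does not match the container's current version becomes a
  // reported error rather than a silent RM log entry.
  static List<UpdateError> applyUpdates(long[] requestedVersions, long currentVersion) {
    List<UpdateError> failed = new ArrayList<>();
    for (int i = 0; i < requestedVersions.length; i++) {
      if (requestedVersions[i] != currentVersion) {
        failed.add(new UpdateError(i, "container version mismatch"));
      }
    }
    return failed; // would ride back to the AM on the allocate response
  }

  public static void main(String[] args) {
    // One of three requested updates carries a stale version and is reported.
    System.out.println(applyUpdates(new long[] {1, 2, 1}, 1).size()); // prints 1
  }
}
```

The design choice at stake is the same one the comment raises: a list in the response makes failures visible to every client, whereas exceptions only cover the cases the RM chooses to propagate.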
[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node
[ https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334394#comment-15334394 ] Hadoop QA commented on YARN-5171: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 8 new + 184 unchanged - 20 fixed = 192 total (was 204) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 57s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 57s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 31s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811148/YARN-5171.006.patch | | JIRA Issue | YARN-5171 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 0d88f095a870 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c9e7138 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkst
[jira] [Commented] (YARN-4835) [YARN-3368] REST API related changes for new Web UI
[ https://issues.apache.org/jira/browse/YARN-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334378#comment-15334378 ] Varun Saxena commented on YARN-4835: With the recent work on trunk related to YARN-4904, I think we won't have to do NM-side changes. > [YARN-3368] REST API related changes for new Web UI > --- > > Key: YARN-4835 > URL: https://issues.apache.org/jira/browse/YARN-4835 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Affects Versions: YARN-3368 >Reporter: Varun Saxena >Assignee: Varun Saxena > Attachments: YARN-4835-YARN-3368.01.patch, > YARN-4835-YARN-3368.02.patch > > > Following things need to be added for AM related web pages. > 1. Support task state query param in REST URL for fetching tasks. > 2. Support task attempt state query param in REST URL for fetching task > attempts. > 3. A new REST endpoint to fetch counters for each task belonging to a job. > Also have a query param for counter name. >i.e. something like : > {{/jobs/\{jobid\}/taskCounters}} > 4. A REST endpoint in NM for fetching all log files associated with a > container. Useful if logs served by NM.
[jira] [Resolved] (YARN-5235) Avoid re-creation of EventColumnNameConverter in HBaseTimelineWriterImpl#storeEvents
[ https://issues.apache.org/jira/browse/YARN-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joep Rottinghuis resolved YARN-5235. Resolution: Won't Fix Requires method signature changes in order to avoid light-weight object creation. Let's tackle this if the converter ever does become a heavy-weight instance. > Avoid re-creation of EventColumnNameConverter in > HBaseTimelineWriterImpl#storeEvents > > > Key: YARN-5235 > URL: https://issues.apache.org/jira/browse/YARN-5235 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis >Priority: Trivial > > As per discussion in YARN-5170 [~varun_saxena] noted: > bq. In HBaseTimelineWriterImpl#storeEvents, we iterate over all events in a > loop and will be creating EventColumnNameConverter object each time. Although > its not a very heavy object right now, but can't we just create it once > outside the loop ? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
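The pattern Varun describes — and the reason the fix was considered trivial — is simply hoisting a reusable object out of a loop. A minimal sketch with stand-in names (not the real HBaseTimelineWriterImpl code; the converter here is hypothetical but, like EventColumnNameConverter, stateless and therefore safe to share across iterations):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Sketch of the hoisting pattern discussed in YARN-5235: create the
// converter once, outside the per-event loop, instead of on every iteration.
public class ConverterHoisting {

  // Stand-in for EventColumnNameConverter: stateless, so one instance
  // can safely be reused for every event.
  static final class NameConverter {
    byte[] encode(String eventId) {
      return eventId.getBytes(StandardCharsets.UTF_8);
    }
  }

  static int storeEvents(List<String> eventIds) {
    // Created once, outside the loop (the proposed change) ...
    NameConverter converter = new NameConverter();
    int bytesWritten = 0;
    for (String id : eventIds) {
      // ... instead of `new NameConverter()` here on every event.
      bytesWritten += converter.encode(id).length;
    }
    return bytesWritten;
  }

  public static void main(String[] args) {
    System.out.println(storeEvents(Arrays.asList("START", "FINISH"))); // prints 11
  }
}
```

As the resolution notes, for a light-weight object the JIT makes this a wash; the hoist only pays off if construction ever becomes expensive.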
[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List
[ https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334349#comment-15334349 ] Xuan Gong commented on YARN-5200: - Thanks for the review, [~djp]. bq. I assume "-show_meta_info" is something we just add recently and not show up in any hadoop releases yet Yes, this is a command which we just added recently. Attached a new patch to address the other comments. > Improve yarn logs to get Container List > --- > > Key: YARN-5200 > URL: https://issues.apache.org/jira/browse/YARN-5200 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5200.1.patch, YARN-5200.2.patch > >
[jira] [Commented] (YARN-3933) Race condition when calling AbstractYarnScheduler.completedContainer.
[ https://issues.apache.org/jira/browse/YARN-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334352#comment-15334352 ] Sunil G commented on YARN-3933: --- Hi [~guoshiwei], could you please share a patch as we discussed? If you don't have the bandwidth, I could help to provide one. > Race condition when calling AbstractYarnScheduler.completedContainer. > - > > Key: YARN-3933 > URL: https://issues.apache.org/jira/browse/YARN-3933 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.6.0, 2.7.0, 2.5.2, 2.7.1 >Reporter: Lavkesh Lahngir >Assignee: Shiwei Guo > Attachments: YARN-3933.001.patch, YARN-3933.002.patch, > YARN-3933.003.patch > > > In our cluster we are seeing available memory and cores go negative. > Initial inspection: > Scenario no. 1: > In the capacity scheduler, the method allocateContainersToNode() checks whether > there are excess container reservations for an application that are no longer > needed, and then calls queue.completedContainer(), which causes resources to go > negative even though they were never assigned in the first place. > I am still looking through the code. Can somebody suggest how to simulate > excess container assignments?
[jira] [Updated] (YARN-5200) Improve yarn logs to get Container List
[ https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5200: Attachment: YARN-5200.2.patch > Improve yarn logs to get Container List > --- > > Key: YARN-5200 > URL: https://issues.apache.org/jira/browse/YARN-5200 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-5200.1.patch, YARN-5200.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster
[ https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated YARN-4280: -- Attachment: YARN-4280.001.patch A preliminary patch that adds 'blocked' resource to assignments and queue usage. It is set when a given under-served queue cannot fit the request and is used for any further assignments to be limited based on used and blocked resources on that queue. A parent queue's blocked-resource is updated based on the children assignments' blocked-resource value. The patch may not fully address the increaseContainer scenarios. I am looking into adding a test for that. > CapacityScheduler reservations may not prevent indefinite postponement on a > busy cluster > > > Key: YARN-4280 > URL: https://issues.apache.org/jira/browse/YARN-4280 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.6.1, 2.8.0, 2.7.1 >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla > Attachments: YARN-4280.001.patch > > > Consider the following scenario: > There are 2 queues A(25% of the total capacity) and B(75%), both can run at > total cluster capacity. There are 2 applications, appX that runs on Queue A, > always asking for 1G containers(non-AM) and appY runs on Queue B asking for 2 > GB containers. > The user limit is high enough for the application to reach 100% of the > cluster resource. > appX is running at total cluster capacity, full with 1G containers releasing > only one container at a time. appY comes in with a request of 2GB container > but only 1 GB is free. Ideally, since appY is in the underserved queue, it > has higher priority and should reserve for its 2 GB request. Since this > request puts the alloc+reserve above total capacity of the cluster, > reservation is not made. appX comes in with a 1GB request and since 1GB is > still available, the request is allocated. > This can continue indefinitely causing priority inversion. 
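The starvation loop in the description above can be illustrated with a toy model (simplified numbers and logic, not the CapacityScheduler code): with only 1 GB free, the 2 GB ask from the under-served queue neither fits nor reserves, so each released 1 GB goes straight back to the 1 GB app, indefinitely:

```java
// Toy model of the YARN-4280 scenario: appX fills the cluster with 1 GB
// containers; appY (under-served queue) wants 2 GB. Because a reservation
// would push alloc + reserve over cluster capacity, no reservation is made,
// and appY never gets a container no matter how many rounds run.
public class PriorityInversionToy {

  // Returns GB allocated to the 2 GB app after `rounds` release/allocate
  // cycles; in this toy it stays 0 forever.
  static int simulate(int rounds) {
    int cluster = 8, used = 8;   // GB; appX's 1 GB containers fill the cluster
    int appYAllocated = 0;
    for (int r = 0; r < rounds; r++) {
      used -= 1;                 // appX releases one 1 GB container
      int free = cluster - used;
      if (free >= 2) {           // appY's 2 GB ask only fits if 2 GB is free,
        appYAllocated += 2;      // and no reservation holds space for it
        used += 2;
      } else {
        used += 1;               // so appX's next 1 GB ask wins the free 1 GB
      }
    }
    return appYAllocated;
  }

  public static void main(String[] args) {
    System.out.println(simulate(1000)); // prints 0: indefinite postponement
  }
}
```

The patch's "blocked" resource would, in effect, make the `else` branch stop handing the free gigabyte back to appX once the queue is known to be under-served.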
[jira] [Commented] (YARN-5263) Fix AMRMClientAsync AbstractCallbackHandler to notify clients of Pre-emption messages
[ https://issues.apache.org/jira/browse/YARN-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334344#comment-15334344 ] Sunil G commented on YARN-5263: --- Hi [~asuresh], thanks for filing the ticket. I have some doubts. Currently the scheduler sends preempted container details (a message) to the AM via heartbeat, and I can see that we put preemption details into PreemptionMessage as a strict contract and a normal contract. Is there any specific advantage in notifying clients of such a message? The AM can already take some action for preemption based on its own policy, such as checkpointing. Maybe I am missing something; please share some more information. > Fix AMRMClientAsync AbstractCallbackHandler to notify clients of Pre-emption > messages > - > > Key: YARN-5263 > URL: https://issues.apache.org/jira/browse/YARN-5263 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > > This JIRA proposes to add a callback method to > AMRMClientAsync::AbstractCallbackHandler that notifies the client AM of > Pre-emption messages sent to it by the Scheduler.
[jira] [Updated] (YARN-5235) Avoid re-creation of EventColumnNameConverter in HBaseTimelineWriterImpl#storeEvents
[ https://issues.apache.org/jira/browse/YARN-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-5235: - Summary: Avoid re-creation of EventColumnNameConverter in HBaseTimelineWriterImpl#storeEvents (was: Avoid re-creation of EvenColumnNameConverter in HBaseTimelineWriterImpl#storeEvents) > Avoid re-creation of EventColumnNameConverter in > HBaseTimelineWriterImpl#storeEvents > > > Key: YARN-5235 > URL: https://issues.apache.org/jira/browse/YARN-5235 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Joep Rottinghuis >Priority: Trivial > > As per discussion in YARN-5170 [~varun_saxena] noted: > bq. In HBaseTimelineWriterImpl#storeEvents, we iterate over all events in a > loop and will be creating EventColumnNameConverter object each time. Although > its not a very heavy object right now, but can't we just create it once > outside the loop ? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: YARN-5077.008.patch > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
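The failure mode in case 1) above follows directly from proportional fair-share math: a zero weight yields a zero share, hence zero headroom even for the AM container. A toy computation (not the actual FSLeafQueue logic) under the example weights from the description:

```java
// Toy proportional fair-share computation illustrating YARN-5077: a queue
// with weight 0.0 gets a 0 share, so from the RM's viewpoint there is never
// headroom to place even an AM container, and jobs stay stuck in ACCEPTED.
public class ZeroWeightFairShare {

  static double[] fairShares(double[] weights, double clusterMemGb) {
    double total = 0;
    for (double w : weights) total += w;
    double[] shares = new double[weights.length];
    for (int i = 0; i < weights.length; i++) {
      // Guard against all-zero weights to avoid dividing by zero.
      shares[i] = total == 0 ? 0 : clusterMemGb * weights[i] / total;
    }
    return shares;
  }

  public static void main(String[] args) {
    // root.dev weight 0.0, root.product weight 1.0, 100 GB cluster.
    double[] s = fairShares(new double[] {0.0, 1.0}, 100);
    System.out.println(s[0] + " " + s[1]); // dev's share is 0.0, product's 100.0
  }
}
```

A fix in the spirit of the JIRA would give the zero-weight queue some nonzero floor (e.g. its minimum share) so best-effort jobs can at least start when the cluster is otherwise idle.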
[jira] [Commented] (YARN-5259) Add two metrics at FSOpDurations for doing container assign and completed Performance statistical analysis
[ https://issues.apache.org/jira/browse/YARN-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334318#comment-15334318 ] Hadoop QA commented on YARN-5259: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 74 unchanged - 0 fixed = 76 total (was 74) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 55s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 56s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811150/YARN-5259-003.patch | | JIRA Issue | YARN-5259 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 81a89ab20472 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c9e7138 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12039/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12039/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12039/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Add two metrics at FSOpDurations for
[jira] [Created] (YARN-5267) RM REST API doc for app lists "Application Type" instead of "applicationType"
Grant Sohn created YARN-5267:
Summary: RM REST API doc for app lists "Application Type" instead of "applicationType"
Key: YARN-5267
URL: https://issues.apache.org/jira/browse/YARN-5267
Project: Hadoop YARN
Issue Type: Bug
Components: api, documentation
Affects Versions: 2.6.4
Reporter: Grant Sohn
Priority: Trivial
From the docs:
{noformat}
Note that depending on security settings a user might not be able to see all the fields.
Item              Data Type  Description
id                string     The application id
user              string     The user who started the application
name              string     The application name
Application Type  string     The application type
{noformat}
[jira] [Updated] (YARN-5256) [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations
[ https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5256: -- Attachment: YARN-5256-YARN-3368.2.patch Uploading new patch with test case and with some more optimization w.r.t NodeLabel UI page. > [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations > > > Key: YARN-5256 > URL: https://issues.apache.org/jira/browse/YARN-5256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5256-YARN-3368.1.patch, YARN-5256-YARN-3368.2.patch > > > Add a new REST endpoint to fetch few more detailed information about node > labels such as resource, list of nodes etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: (was: YARN-5077.008.patch) > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat
[ https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334249#comment-15334249 ] Hadoop QA commented on YARN-5262: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 23s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 0s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Nullcheck of finishedContainers at line 818 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.pullJustFinishedContainers() At RMAppAttemptImpl.java:818 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.pullJustFinishedContainers() At RMAppAttemptImpl.java:[line 818] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811145/0001-YARN-5262.patch | | JIRA Issue | YARN-5262 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b60589ea9cf0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c9e7138 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/12038/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12038/testReport/ | | modules | C: hadoop-yarn-proj
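A minimal sketch of the "nullcheck of value previously dereferenced" pattern that FindBugs is reporting here, with illustrative names only (this is not the actual RMAppAttemptImpl code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the FindBugs "redundant nullcheck of value
// previously dereferenced" (RCN) report: the value is used before the
// null check, so the check is either dead or comes too late.
public class RcnExample {
    // Buggy shape: size() dereferences the list before the null check.
    static int buggy(List<String> finishedContainers) {
        int n = finishedContainers.size();   // NPE here if the list is null
        if (finishedContainers != null) {    // FindBugs: redundant nullcheck
            return n;
        }
        return 0;
    }

    // Fixed shape: check for null before the first dereference.
    static int fixed(List<String> finishedContainers) {
        if (finishedContainers == null) {
            return 0;
        }
        return finishedContainers.size();
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        ids.add("container_1");
        System.out.println(fixed(ids));   // 1
        System.out.println(fixed(null));  // 0
    }
}
```

The fix FindBugs is asking for is the second shape: either the null check is dead and should be dropped, or it must move before the first dereference.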
[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5077: --- Attachment: YARN-5077.008.patch > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch, YARN-5077.008.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
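The reporter's two-queue setup can be written as a FairScheduler allocation file along these lines (a sketch; the queue names come from the description above, and the `<weight>` element follows the fair-scheduler.xml allocation format):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Best-effort pool: a 0.0 weight triggers the getFairShare() problem -->
  <queue name="dev">
    <weight>0.0</weight>
  </queue>
  <queue name="product">
    <weight>1.0</weight>
  </queue>
</allocations>
```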
[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information
[ https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334188#comment-15334188 ] Karthik Kambatla commented on YARN-5264: This should make the code much cleaner. Once we are done with this, I would like for us to clean up the way FairScheduler tests set up queues - through an xml file. > Use FSQueue to store queue-specific information > --- > > Key: YARN-5264 > URL: https://issues.apache.org/jira/browse/YARN-5264 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Yufei Gu >Assignee: Yufei Gu > > Use FSQueue to store queue-specific information instead of querying > AllocationConfiguration.
[jira] [Created] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
Sumana Sathish created YARN-5266: Summary: Wrong exit code while trying to get app logs using regex via CLI Key: YARN-5266 URL: https://issues.apache.org/jira/browse/YARN-5266 Project: Hadoop YARN Issue Type: Bug Components: yarn Reporter: Sumana Sathish Assignee: Xuan Gong Priority: Critical The test is trying to do a negative test by passing the regex 'ds+' and expects an exit code != 0. *The exit code is zero and the error message is printed more than once* {code} RUNNING: /usr/hdp/current/hadoop-yarn-client/bin/yarn logs -applicationId application_1465500362360_0016 -logFiles ds+ Can not find any log file matching the pattern: [ds+] for the application: application_1465500362360_0016 2016-06-14 19:19:25,079|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find any log file matching the pattern: [ds+] for the application: application_1465500362360_0016 2016-06-14 19:19:25,216|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find any log file matching the pattern: [ds+] for the application: application_1465500362360_0016 2016-06-14 19:19:25,331|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find any log file matching the pattern: [ds+] for the application: application_1465500362360_0016 2016-06-14 19:19:25,432|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find any log file matching the pattern: [ds+] for the application: application_1465500362360_0016 {code}
[jira] [Updated] (YARN-5266) Wrong exit code while trying to get app logs using regex via CLI
[ https://issues.apache.org/jira/browse/YARN-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5266: Affects Version/s: 2.9.0 > Wrong exit code while trying to get app logs using regex via CLI > > > Key: YARN-5266 > URL: https://issues.apache.org/jira/browse/YARN-5266 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 2.9.0 >Reporter: Sumana Sathish >Assignee: Xuan Gong >Priority: Critical > > The test is trying to do negative test by passing regex as 'ds+' and expects > exit code != 0. > *Exit Code is zero and the error message is typed more than once* > {code} > RUNNING: /usr/hdp/current/hadoop-yarn-client/bin/yarn logs -applicationId > application_1465500362360_0016 -logFiles ds+ > Can not find any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,079|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,216|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,331|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > 2016-06-14 > 19:19:25,432|beaver.machine|INFO|4427|140145752217344|MainThread|Can not find > any log file matching the pattern: [ds+] for the application: > application_1465500362360_0016 > {code}
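What the test expects can be sketched as follows; the names here are hypothetical stand-ins, not the actual LogsCLI code:

```java
// Sketch (hypothetical names) of the exit-code behaviour the negative test
// expects: when no log file matches the requested pattern, print the error
// once and return a non-zero code, so `yarn logs ... ; echo $?` detects it.
public class ExitCodeSketch {
    static int fetchLogs(String pattern, boolean anyFileMatches) {
        if (!anyFileMatches) {
            System.err.println("Can not find any log file matching the pattern: ["
                + pattern + "]");
            return -1;  // non-zero exit code on failure
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(fetchLogs("ds+", false));  // -1
    }
}
```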
[jira] [Updated] (YARN-5259) Add two metrics at FSOpDurations for doing container assign and completed Performance statistical analysis
[ https://issues.apache.org/jira/browse/YARN-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ChenFolin updated YARN-5259: Attachment: YARN-5259-003.patch New patch. > Add two metrics at FSOpDurations for doing container assign and completed > Performance statistical analysis > -- > > Key: YARN-5259 > URL: https://issues.apache.org/jira/browse/YARN-5259 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: ChenFolin > Attachments: YARN-5259-001.patch, YARN-5259-002.patch, > YARN-5259-003.patch > > > If the cluster is slow, we cannot tell whether it is caused by > container-assign or container-complete performance.
[jira] [Updated] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node
[ https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated YARN-5171: -- Attachment: YARN-5171.006.patch Tackling some of the comments from [~kkaranasos]. > Extend DistributedSchedulerProtocol to notify RM of containers allocated by > the Node > > > Key: YARN-5171 > URL: https://issues.apache.org/jira/browse/YARN-5171 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Inigo Goiri > Attachments: YARN-5171.000.patch, YARN-5171.001.patch, > YARN-5171.002.patch, YARN-5171.003.patch, YARN-5171.004.patch, > YARN-5171.005.patch, YARN-5171.006.patch > > > Currently, the RM does not know about Containers allocated by the > OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the > Distributed Scheduler request interceptor and the protocol to notify the RM > of new containers as and when they are allocated at the NM. The > {{RMContainer}} should also be extended to expose the {{ExecutionType}} of > the container.
[jira] [Comment Edited] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0
[ https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328695#comment-15328695 ] Yufei Gu edited comment on YARN-5077 at 6/16/16 4:45 PM: - [~kasha], Thanks for the review. I will discuss 1 and 2 with you offline. I will create a new JIRA for 3. YARN-5264 is the JIRA for 3. was (Author: yufeigu): [~kasha], Thanks for the review. I will discuss with you offline with 1 and 2. I will create a new JIRA for 3. > Fix FSLeafQueue#getFairShare() for queues with weight 0.0 > - > > Key: YARN-5077 > URL: https://issues.apache.org/jira/browse/YARN-5077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-5077.001.patch, YARN-5077.002.patch, > YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, > YARN-5077.006.patch, YARN-5077.007.patch > > > 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns > > 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns > > In case 1), that means no container ever gets allocated for an AM because > from the viewpoint of the RM, there is never any headroom to allocate a > container on that queue. > For example, we have a pool with the following weights: > - root.dev 0.0 > - root.product 1.0 > The root.dev is a best effort pool and should only get resources if > root.product is not running. In our tests, with no jobs running under > root.product, jobs started in root.dev queue stay stuck in ACCEPT phase and > never start.
[jira] [Created] (YARN-5265) Make HBase configuration for the timeline service configurable
Joep Rottinghuis created YARN-5265: -- Summary: Make HBase configuration for the timeline service configurable Key: YARN-5265 URL: https://issues.apache.org/jira/browse/YARN-5265 Project: Hadoop YARN Issue Type: Sub-task Components: timelineserver Affects Versions: YARN-2928 Reporter: Joep Rottinghuis Assignee: Joep Rottinghuis Currently we create "default" HBase configurations; this works as long as the user places the appropriate configuration on the classpath, which is fine for a standalone Hadoop cluster. However, if a user wants to monitor an HBase cluster and has a separate ATS HBase cluster, it can become tricky to create the right classpath for the nodemanagers while still letting tasks have their separate configs. It will be much easier to add a YARN configuration that lets cluster admins configure which HBase cluster ATS metrics are written to.
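Such a knob could look roughly like the yarn-site.xml fragment below; the property name and path are hypothetical, since the JIRA only proposes adding it:

```xml
<!-- Hypothetical property: points the timeline service writer at the
     hbase-site.xml of the dedicated ATS HBase cluster, instead of relying
     on whatever HBase configuration happens to be on the classpath. -->
<property>
  <name>yarn.timeline-service.hbase.configuration.file</name>
  <value>file:/etc/hbase/conf.ats/hbase-site.xml</value>
</property>
```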
[jira] [Updated] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat
[ https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5262: Attachment: 0001-YARN-5262.patch Updated the straightforward patch. > Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM > heartbeat > --- > > Key: YARN-5262 > URL: https://issues.apache.org/jira/browse/YARN-5262 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5262.patch > > > It is observed that the RM triggers one event for every > ApplicationMaster#allocate request, in the following trace. This is not > necessarily required; it can be optimized to send the event only if there > are containers to acknowledge to the NodeManager. > {code} > RMAppAttemptImpl.sendFinishedContainersToNM() line: 1871 > RMAppAttemptImpl.pullJustFinishedContainers() line: 805 > ApplicationMasterService.allocate(AllocateRequest) line: 567 > {code}
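The proposed optimization can be sketched like this, with hypothetical names; the real change would guard the RMNodeFinishedContainersPulledByAMEvent dispatch inside RMAppAttemptImpl:

```java
import java.util.Collections;
import java.util.List;

// Sketch (hypothetical names) of the optimization: dispatch the
// "finished containers pulled by AM" event only when the AM actually
// pulled something, instead of on every allocate() heartbeat.
public class HeartbeatEventGuard {
    // Returns true when an event would be dispatched to the NodeManager.
    static boolean notifyNmIfNeeded(List<String> justFinished) {
        if (justFinished.isEmpty()) {
            return false;  // nothing to acknowledge: skip the event entirely
        }
        // stands in for eventHandler.handle(new RMNode...PulledByAMEvent(...))
        return true;
    }

    public static void main(String[] args) {
        System.out.println(notifyNmIfNeeded(Collections.emptyList()));           // false
        System.out.println(notifyNmIfNeeded(Collections.singletonList("c_01"))); // true
    }
}
```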
[jira] [Commented] (YARN-1773) ShuffleHeader should have a format that can inform about errors
[ https://issues.apache.org/jira/browse/YARN-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334113#comment-15334113 ] Junping Du commented on YARN-1773: -- bq. Do not have bandwidth to fix it, in short term. Sure, no worries about it. bq. Being able to encode the error in the ShuffleHeader will let us parse out the error correctly and move on to the remaining data. This sounds like an incompatible change, given ShuffleHeader is a Writable, not a PB object. [~bikassaha], can you confirm this? > ShuffleHeader should have a format that can inform about errors > --- > > Key: YARN-1773 > URL: https://issues.apache.org/jira/browse/YARN-1773 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.3.0, 2.4.0 >Reporter: Bikas Saha >Priority: Critical > > Currently, the ShuffleHeader (which is a Writable) simply tries to read the > successful header (mapid, reduceid etc). If there is an error then the input > will have an error message instead of (mapid, reduceid etc). Thus parsing > the ShuffleHeader fails, and since we don't know where the error message ends, > we cannot consume the remaining input stream, which may have good data from > the remaining map outputs. Being able to encode the error in the > ShuffleHeader will let us parse out the error correctly and move on to the > remaining data. > The shuffle handler response should say which maps are in error and which are > fine, and what the error was for the erroneous maps. This will make it > easier to report diagnostics upstream.
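One way to encode a status in a header is sketched below using DataInput/DataOutput; this illustrates the idea only, and is not the actual ShuffleHeader wire format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Sketch of encoding a status flag in a shuffle-style header so a reader can
// distinguish errors from data and still resynchronize on the next segment.
public class StatusHeader {
    static byte[] write(boolean ok, String payload) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeBoolean(ok);   // explicit status instead of bare error text
            out.writeUTF(payload);  // mapId/reduceId on success, message on error
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String read(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            boolean ok = in.readBoolean();
            // writeUTF length-prefixes the string, so the reader knows exactly
            // where the message ends and the next map's output begins.
            String payload = in.readUTF();
            return (ok ? "OK:" : "ERR:") + payload;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because the payload is length-prefixed, an error message no longer swallows the rest of the stream: the reader can report the error for one map and continue with the remaining map outputs.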
[jira] [Created] (YARN-5264) Use FSQueue to store queue-specific information
Yufei Gu created YARN-5264: -- Summary: Use FSQueue to store queue-specific information Key: YARN-5264 URL: https://issues.apache.org/jira/browse/YARN-5264 Project: Hadoop YARN Issue Type: Improvement Reporter: Yufei Gu Assignee: Yufei Gu Use FSQueue to store queue-specific information instead of querying AllocationConfiguration.
[jira] [Commented] (YARN-5083) YARN CLI for AM logs does not give any error message if entered invalid am value
[ https://issues.apache.org/jira/browse/YARN-5083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334092#comment-15334092 ] Hudson commented on YARN-5083: -- SUCCESS: Integrated in Hadoop-trunk-Commit #9968 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9968/]) YARN-5083. YARN CLI for AM logs does not give any error message if (junping_du: rev e14ee0d3b55816bed1d27a8caf78001985119e3c) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java > YARN CLI for AM logs does not give any error message if entered invalid am > value > > > Key: YARN-5083 > URL: https://issues.apache.org/jira/browse/YARN-5083 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Sumana Sathish >Assignee: Jian He > Fix For: 2.9.0 > > Attachments: YARN-5083.1.patch, YARN-5083.1.patch, > YARN-5083.2-checkstyle-fix.patch, YARN-5083.2.patch > > > Entering an invalid value for -am in the yarn logs CLI does not give any error message > {code:title= there is no AM attempt 30 for the application} > yarn logs -applicationId -am 30 > impl.TimelineClientImpl: Timeline service address: > INFO client.RMProxy: Connecting to ResourceManager at > {code}
[jira] [Commented] (YARN-5251) Yarn CLI to obtain App logs for last 'n' bytes fails with 'java.io.IOException' and for 'n' bytes fails with NumberFormatException
[ https://issues.apache.org/jira/browse/YARN-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334070#comment-15334070 ] Junping Du commented on YARN-5251: -- Marked YARN-5248 as a duplicate of this JIRA, as TestLogsCLI gets fixed by this patch. +1. Patch LGTM. Will commit it tomorrow if no further comments. > Yarn CLI to obtain App logs for last 'n' bytes fails with > 'java.io.IOException' and for 'n' bytes fails with NumberFormatException > -- > > Key: YARN-5251 > URL: https://issues.apache.org/jira/browse/YARN-5251 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Xuan Gong >Priority: Blocker > Attachments: YARN-5251.1.patch > > > {code} > yarn logs -applicationId application_1465421211793_0017 -size 1024 >> appLog1 > on finished application > 2016-06-13 18:44:25,989 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #2 tokens > and #1 secret keys for NM use for launching container > 2016-06-13 18:44:25,989 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of > containertok" > at > java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) > at java.lang.Long.parseLong(Long.java:589) > at java.lang.Long.parseLong(Long.java:631) > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.readContainerLogs(AggregatedLogFormat.java:691) > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.readAContainerLogsForALogType(AggregatedLogFormat.java:767) > at > org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:354) > at > org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:830) > at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:231) > at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:264) > {code} > {code} > yarn logs -applicationId
application_1465421211793_0004 -containerId > container_e07_1465421211793_0004_01_01 -logFiles syslog -size -1000 > Exception in thread "main" java.io.IOException: The bytes were skipped are > different from the caller requested > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.readContainerLogsForALogType(AggregatedLogFormat.java:838) > at > org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForALogType(LogCLIHelpers.java:300) > at > org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainersLogsForALogTypeWithoutNodeId(LogCLIHelpers.java:224) > at > org.apache.hadoop.yarn.client.cli.LogsCLI.printContainerLogsForFinishedApplicationWithoutNodeId(LogsCLI.java:447) > at > org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogs(LogsCLI.java:782) > at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:228) > at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:264) > {code}
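The -size semantics involved can be sketched with a hypothetical helper: a positive n means the first n bytes of the log, a negative n the last |n| bytes, clamped to the log's actual length so that skipping never overshoots (the kind of mismatch behind the "bytes were skipped" IOException above):

```java
import java.util.Arrays;

// Hypothetical helper (not the AggregatedLogFormat code) sketching the
// -size window computation for reading part of an aggregated log.
public class SizeWindow {
    // Returns {offset, length} of the byte range to read from a log of
    // totalLen bytes; clamping keeps skip()/read() within the log.
    static long[] window(long size, long totalLen) {
        long len = Math.min(Math.abs(size), totalLen);
        long off = (size >= 0) ? 0 : totalLen - len;
        return new long[] { off, len };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(window(1024, 5000)));   // [0, 1024]
        System.out.println(Arrays.toString(window(-1000, 5000)));  // [4000, 1000]
    }
}
```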