[jira] [Commented] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291416#comment-17291416 ] Qi Zhu commented on YARN-9615: -- Thanks a lot [~pbacsko] for the review. Your suggestion is very valid to me; I will update this later. I also think the test is very important, so I will add it. Thanks again for your hard work. > Add dispatcher metrics to RM > > > Key: YARN-9615 > URL: https://issues.apache.org/jira/browse/YARN-9615 > Project: Hadoop YARN > Issue Type: Task >Reporter: Jonathan Hung >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-9615.001.patch, YARN-9615.002.patch, > YARN-9615.003.patch, YARN-9615.poc.patch, screenshot-1.png > > > It'd be good to have counts/processing times for each event type in RM async > dispatcher and scheduler async dispatcher. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
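For readers unfamiliar with what "dispatcher metrics" means in practice, the following is a minimal, self-contained Java sketch of the idea behind the request: keeping a count and a cumulative processing time per event type around each handler call. The class and method names ({{EventTypeStats}}, {{record}}, {{dispatch}}) are hypothetical stand-ins, not code from the YARN-9615 patches; the actual patch wires this into the RM dispatchers and Hadoop's metrics system.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical stand-in for the per-event-type bookkeeping the JIRA asks for.
public class EventTypeStats {

  private static final class Stat {
    final LongAdder count = new LongAdder();       // events seen for this type
    final LongAdder totalMillis = new LongAdder(); // cumulative handling time
  }

  private final Map<String, Stat> stats = new ConcurrentHashMap<>();

  // Record one handled event: bump the counter and accumulate its latency.
  public void record(String eventType, long elapsedMillis) {
    Stat s = stats.computeIfAbsent(eventType, k -> new Stat());
    s.count.increment();
    s.totalMillis.add(elapsedMillis);
  }

  // Wrap a handler call so both the count and the processing time are captured.
  public void dispatch(String eventType, Runnable handler) {
    long start = System.currentTimeMillis();
    try {
      handler.run();
    } finally {
      record(eventType, System.currentTimeMillis() - start);
    }
  }

  public long countOf(String eventType) {
    Stat s = stats.get(eventType);
    return s == null ? 0 : s.count.sum();
  }

  public long totalMillisOf(String eventType) {
    Stat s = stats.get(eventType);
    return s == null ? 0 : s.totalMillis.sum();
  }
}
{code}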
[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used
[ https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291413#comment-17291413 ] Qi Zhu commented on YARN-10532: --- Fixed checkstyle in latest patch. > Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is > not being used > > > Key: YARN-10532 > URL: https://issues.apache.org/jira/browse/YARN-10532 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10532.001.patch, YARN-10532.002.patch, > YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, > YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, > YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, > YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, > YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, > YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, > YARN-10532.021.patch, YARN-10532.022.patch, YARN-10532.023.patch, > YARN-10532.024.patch, image-2021-02-12-21-32-02-267.png > > > It's better if we can delete auto-created queues when they are not in use for > a period of time (like 5 mins). It will be helpful when we have a large > number of auto-created queues (e.g. from 500 users), but only a small subset > of queues are actively used. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
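The policy described in the issue above, deleting auto-created queues after a period of inactivity, can be sketched as a last-used timestamp per queue plus a periodic reaper. The Java sketch below is only an illustration under that assumption; the names ({{IdleQueueReaper}}, {{touch}}) are hypothetical and this is not the YARN-10532 implementation, which additionally has to respect queue state (running or pending applications) before removing anything.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical "auto delete when idle" policy for auto-created queues.
public class IdleQueueReaper {

  private final Map<String, Long> lastUsed = new ConcurrentHashMap<>();
  private final long maxIdleMillis;
  private final ScheduledExecutorService scanner =
      Executors.newSingleThreadScheduledExecutor();

  public IdleQueueReaper(long maxIdleMillis, long scanIntervalMillis) {
    this.maxIdleMillis = maxIdleMillis;
    // Periodically remove queues that have not been touched within maxIdleMillis.
    scanner.scheduleAtFixedRate(this::removeIdleQueues,
        scanIntervalMillis, scanIntervalMillis, TimeUnit.MILLISECONDS);
  }

  // Called whenever an application is submitted to or runs in the queue.
  public void touch(String queuePath) {
    lastUsed.put(queuePath, System.currentTimeMillis());
  }

  private void removeIdleQueues() {
    long now = System.currentTimeMillis();
    lastUsed.entrySet().removeIf(e -> {
      boolean idle = now - e.getValue() > maxIdleMillis;
      if (idle) {
        System.out.println("Deleting idle auto-created queue: " + e.getKey());
      }
      return idle;
    });
  }

  public void stop() {
    scanner.shutdownNow();
  }
}
{code}
With a 5-minute idle threshold, for example {{new IdleQueueReaper(TimeUnit.MINUTES.toMillis(5), TimeUnit.SECONDS.toMillis(30))}}, this mirrors the behaviour described in the issue for clusters with many auto-created queues of which only a small subset is active.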
[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used
[ https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated YARN-10532: -- Attachment: YARN-10532.024.patch > Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is > not being used > > > Key: YARN-10532 > URL: https://issues.apache.org/jira/browse/YARN-10532 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10532.001.patch, YARN-10532.002.patch, > YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, > YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, > YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, > YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, > YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, > YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, > YARN-10532.021.patch, YARN-10532.022.patch, YARN-10532.023.patch, > YARN-10532.024.patch, image-2021-02-12-21-32-02-267.png > > > It's better if we can delete auto-created queues when they are not in use for > a period of time (like 5 mins). It will be helpful when we have a large > number of auto-created queues (e.g. from 500 users), but only a small subset > of queues are actively used. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291407#comment-17291407 ] Qi Zhu edited comment on YARN-10653 at 2/26/21, 5:52 AM: - [~ebadger] [~snemeth] [~pbacsko] [~ahussein] I just found that the findbugs issue is already fixed in the 001 patch: the findbugs errors are in trunk, while the patched build shows no findbugs warnings; I mistakenly looked at the trunk findbugs report. !image-2021-02-26-13-49-18-241.png|width=592,height=67! I think patch 001 is fine to merge. Do you have any other advice? Thanks. was (Author: zhuqi): [~ebadger] [~ahussein] I just found that the findbugs issue is already fixed in the 001 patch: the findbugs errors are in trunk, while the patched build shows no findbugs warnings; I mistakenly looked at the trunk findbugs report. !image-2021-02-26-13-49-18-241.png|width=592,height=67! I think patch 001 is fine to merge. Do you have any other advice? Thanks. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch, > image-2021-02-26-13-49-18-241.png > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291407#comment-17291407 ] Qi Zhu edited comment on YARN-10653 at 2/26/21, 5:51 AM: - [~ebadger] [~ahussein] I just found that the findbugs issue is already fixed in the 001 patch: the findbugs errors are in trunk, while the patched build shows no findbugs warnings; I mistakenly looked at the trunk findbugs report. !image-2021-02-26-13-49-18-241.png|width=592,height=67! I think patch 001 is fine to merge. Do you have any other advice? Thanks. was (Author: zhuqi): [~ebadger] [~ahussein] I just found that the findbugs issue has been fixed: the findbugs errors are in trunk, while the patched build shows no findbugs warnings; I mistakenly looked at the trunk findbugs report. !image-2021-02-26-13-49-18-241.png|width=592,height=67! I think patch 001 is fine to merge. Do you have any other advice? Thanks. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch, > image-2021-02-26-13-49-18-241.png > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291407#comment-17291407 ] Qi Zhu edited comment on YARN-10653 at 2/26/21, 5:50 AM: - [~ebadger] [~ahussein] I just found that the findbugs issue has been fixed: the findbugs errors are in trunk, while the patched build shows no findbugs warnings; I mistakenly looked at the trunk findbugs report. !image-2021-02-26-13-49-18-241.png|width=592,height=67! I think patch 001 is fine to merge. Do you have any other advice? Thanks. was (Author: zhuqi): [~ebadger] [~ahussein] It is confirmed that Jenkins does not pick up the change in the fix: the latest patch has no null check at line 649, but the report still shows the line 649 null-check warning. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch, > image-2021-02-26-13-49-18-241.png > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291407#comment-17291407 ] Qi Zhu commented on YARN-10653: --- [~ebadger] [~ahussein] It is confirmed that Jenkins does not pick up the change in the fix: the latest patch has no null check at line 649, but the report still shows the line 649 null-check warning. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291403#comment-17291403 ] Hadoop QA commented on YARN-10653: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 15s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 50s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 32s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 51s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/685/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant findbugs warnings. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 0s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our cli
[jira] [Commented] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291380#comment-17291380 ] Qi Zhu commented on YARN-10653: --- [~ebadger] Updated a new patch, to see if the name is causing the new findbugs warning. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291377#comment-17291377 ] Hadoop QA commented on YARN-10652: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 19s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 5s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 48s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 50s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 53s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green}{color} | {color
[jira] [Updated] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated YARN-10653: -- Attachment: YARN-10653.002.patch > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch, YARN-10653.002.patch > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291266#comment-17291266 ] Qi Zhu edited comment on YARN-10653 at 2/26/21, 3:47 AM: - Thanks [~ebadger] for the review. I am confused why there is still a findbugs warning now. cc [~ahussein], could you help take a look at this? was (Author: zhuqi): Thanks [~ebadger] for the review. I am confused why there is still a findbugs warning now. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed the TestRMNodeLabelsManager failure after YARN-10501. > But the findbugs warnings should also be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291340#comment-17291340 ] Siddharth Ahuja commented on YARN-10652: Thanks a lot for the review [~wilfreds], much appreciate it! Sure, happy to wait for any comments. > Capacity Scheduler fails to handle user weights for a user that has a "." > (dot) in it > - > > Key: YARN-10652 > URL: https://issues.apache.org/jira/browse/YARN-10652 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Attachments: Correct user weight of 0.76 picked up for the user with > a dot after the patch.png, Incorrect default user weight of 1.0 being picked > for the user with a dot before the patch.png, YARN-10652.001.patch > > > AD usernames can have a "." (dot) in them i.e. they can be of the format -> > {{firstname.lastname}}. However, if you specify a username with this format > against the Capacity Scheduler setting -> > {{yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight}}, > it fails to be applied and is instead assigned the default of 1.0f weight. > This renders the user weight feature (being used as a means of setting user > priorities for a queue) unusable for such users. > This limitation comes from [1]. From [1], only word characters (A word > character: [a-zA-Z_0-9]) (see [2]) are permissible at the moment which is no > good for AD names that contain a "." (dot). > Similar discussion has been had in a few HADOOP jiras e.g. HADOOP-7050 and > HADOOP-15395 and the outcome was to use non-whitespace characters i.e. > instead of {{\w+}}, use {{\S+}}. > We could go down similar path and unblock this feature for the AD usernames > with a "." (dot) in them. > [1] > https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java#L1953 > [2] > https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
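The regex change described in the issue above is easy to demonstrate outside the scheduler. The sketch below is a standalone Java illustration; the class name and the simplified pattern strings are stand-ins, not the actual CapacitySchedulerConfiguration code. It shows why a {{\w+}} username pattern silently misses a {{firstname.lastname}} user-settings key (so the 1.0 default weight applies) while {{\S+}} captures the full dotted username.
{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified stand-in patterns, not the actual CapacitySchedulerConfiguration code.
public class UserWeightPatternDemo {

  // Old behaviour: the username may only contain word characters [a-zA-Z_0-9].
  private static final Pattern WORD_USER =
      Pattern.compile("user-settings\\.(\\w+)\\.weight$");

  // Proposed behaviour: the username may be any run of non-whitespace characters.
  private static final Pattern NON_WS_USER =
      Pattern.compile("user-settings\\.(\\S+)\\.weight$");

  public static void main(String[] args) {
    String key =
        "yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight";

    // \w+ cannot cover the "." inside the username, so no match is found and the
    // configured weight would silently fall back to the 1.0 default.
    System.out.println("\\w+ matches: " + WORD_USER.matcher(key).find());   // false

    Matcher m = NON_WS_USER.matcher(key);
    if (m.find()) {
      // Greedy \S+ backtracks just enough to leave the trailing ".weight",
      // so the full dotted username is captured.
      System.out.println("\\S+ captured: " + m.group(1));   // firstname.lastname
    }
  }
}
{code}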
[jira] [Commented] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291339#comment-17291339 ] Wilfred Spiegelenburg commented on YARN-10652: -- Change looks good +1 (binding) I'll let it sit for a day or so for other people to have a look at this too. I will commit if there are no comments in the next day or so. > Capacity Scheduler fails to handle user weights for a user that has a "." > (dot) in it > - > > Key: YARN-10652 > URL: https://issues.apache.org/jira/browse/YARN-10652 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Attachments: Correct user weight of 0.76 picked up for the user with > a dot after the patch.png, Incorrect default user weight of 1.0 being picked > for the user with a dot before the patch.png, YARN-10652.001.patch > > > AD usernames can have a "." (dot) in them i.e. they can be of the format -> > {{firstname.lastname}}. However, if you specify a username with this format > against the Capacity Scheduler setting -> > {{yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight}}, > it fails to be applied and is instead assigned the default of 1.0f weight. > This renders the user weight feature (being used as a means of setting user > priorities for a queue) unusable for such users. > This limitation comes from [1]. From [1], only word characters (A word > character: [a-zA-Z_0-9]) (see [2]) are permissible at the moment which is no > good for AD names that contain a "." (dot). > Similar discussion has been had in a few HADOOP jiras e.g. HADOOP-7050 and > HADOOP-15395 and the outcome was to use non-whitespace characters i.e. > instead of {{\w+}}, use {{\S+}}. > We could go down similar path and unblock this feature for the AD usernames > with a "." (dot) in them. > [1] > https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java#L1953 > [2] > https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291334#comment-17291334 ] Siddharth Ahuja edited comment on YARN-10652 at 2/26/21, 1:53 AM: -- Tested the fix on trunk on a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under the /tmp folder, as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} <property> <name>yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight</name> <value>0.76</value> </property> {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@mac hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-SNAPSHOT-tests.jar sleep -m 1 -mt 60 2021-02-26 12:21:57,305 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2021-02-26 12:21:57,989 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032 2021-02-26 12:21:58,101 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200 2021-02-26 12:21:58,581 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/firstname.lastname/.staging/job_1614302477419_0001 2021-02-26 12:21:59,343 INFO mapreduce.JobSubmitter: number of splits:1 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1614302477419_0001 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2021-02-26 12:21:59,573 INFO conf.Configuration: resource-types.xml not found 2021-02-26 12:21:59,574 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2021-02-26 12:21:59,987 INFO impl.YarnClientImpl: Submitted application application_1614302477419_0001 2021-02-26 12:22:00,025 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1614302477419_0001/ 2021-02-26 12:22:00,025 INFO mapreduce.Job: Running job: job_1614302477419_0001 2021-02-26 12:22:07,128 INFO mapreduce.Job: Job job_1614302477419_0001 running in uber mode : false 2021-02-26 12:22:07,130 INFO mapreduce.Job: map 0% reduce 0% ... {code} * Check the "_Active Users Info_" section after expanding the {{root.default}} queue on the RM Scheduler page at http://localhost:8088/cluster/scheduler. It should contain 0.76 instead of 1.0. Confirmed this to be working after the change. [1] https://support.apple.com/en-au/guide/mac-help/mtusr001/mac The JUnit test has also been updated to ensure that the weights for usernames containing a dot are set up accordingly. It was also fixed so that the {{assertEquals}} overload with float arguments is picked instead of the double one, by appending the suffix "f" to the literal values (see [2] for more info), and the unnecessary unboxing via {{floatValue()}} was removed. 
[2] https://stackoverflow.com/questions/3033137/representing-float-values-in-java was (Author: sahuja): Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new us
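As a side note on the {{assertEquals}} detail mentioned in the comment above: in Java, an unsuffixed literal such as {{0.76}} is a {{double}}, so overload resolution prefers a double-typed method even when the other argument is a {{float}}. The toy example below uses a hypothetical overloaded {{check}} method, not JUnit's actual API, to show the effect of adding the {{f}} suffix.
{code}
// Hypothetical overloads standing in for the float/double assertion variants.
public class FloatLiteralDemo {

  static void check(double expected, double actual) {
    System.out.println("double overload chosen");
  }

  static void check(float expected, float actual) {
    System.out.println("float overload chosen");
  }

  public static void main(String[] args) {
    float weight = 0.76f;

    // 0.76 is a double literal, so the float argument is widened and the
    // double overload wins.
    check(0.76, weight);

    // 0.76f keeps both arguments float, so the float overload is selected.
    check(0.76f, weight);
  }
}
{code}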
[jira] [Comment Edited] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291334#comment-17291334 ] Siddharth Ahuja edited comment on YARN-10652 at 2/26/21, 1:49 AM: -- Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@mac hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-SNAPSHOT-tests.jar sleep -m 1 -mt 60 2021-02-26 12:21:57,305 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2021-02-26 12:21:57,989 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032 2021-02-26 12:21:58,101 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200 2021-02-26 12:21:58,581 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/firstname.lastname/.staging/job_1614302477419_0001 2021-02-26 12:21:59,343 INFO mapreduce.JobSubmitter: number of splits:1 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1614302477419_0001 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2021-02-26 12:21:59,573 INFO conf.Configuration: resource-types.xml not found 2021-02-26 12:21:59,574 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2021-02-26 12:21:59,987 INFO impl.YarnClientImpl: Submitted application application_1614302477419_0001 2021-02-26 12:22:00,025 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1614302477419_0001/ 2021-02-26 12:22:00,025 INFO mapreduce.Job: Running job: job_1614302477419_0001 2021-02-26 12:22:07,128 INFO mapreduce.Job: Job job_1614302477419_0001 running in uber mode : false 2021-02-26 12:22:07,130 INFO mapreduce.Job: map 0% reduce 0% ... {code} * Check the "_Active Users Info_" section after expanding the {{root.default}} queue on the RM Scheduler page at http://localhost:8088/cluster/scheduler. It should contain 0.76 instead of 1.0. Confirmed this to be working after the change. [1] https://support.apple.com/en-au/guide/mac-help/mtusr001/mac JUnit has also been updated to ensure that the weights for usernames containing a dot are set up accordingly. Meanwhile, also fixed the junit to ensure that that the {{assertEquals}} with float arguments are picked up instead of double by appending the suffix "f" to the literal values and also removed un-necessary unboxing using {{floatValue()}} as this is not required. 
was (Author: sahuja): Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hado
[jira] [Comment Edited] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291334#comment-17291334 ] Siddharth Ahuja edited comment on YARN-10652 at 2/26/21, 1:48 AM: -- Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-SNAPSHOT-tests.jar sleep -m 1 -mt 60 2021-02-26 12:21:57,305 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2021-02-26 12:21:57,989 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032 2021-02-26 12:21:58,101 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200 2021-02-26 12:21:58,581 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/firstname.lastname/.staging/job_1614302477419_0001 2021-02-26 12:21:59,343 INFO mapreduce.JobSubmitter: number of splits:1 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1614302477419_0001 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2021-02-26 12:21:59,573 INFO conf.Configuration: resource-types.xml not found 2021-02-26 12:21:59,574 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2021-02-26 12:21:59,987 INFO impl.YarnClientImpl: Submitted application application_1614302477419_0001 2021-02-26 12:22:00,025 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1614302477419_0001/ 2021-02-26 12:22:00,025 INFO mapreduce.Job: Running job: job_1614302477419_0001 2021-02-26 12:22:07,128 INFO mapreduce.Job: Job job_1614302477419_0001 running in uber mode : false 2021-02-26 12:22:07,130 INFO mapreduce.Job: map 0% reduce 0% ... {code} * Check the "_Active Users Info_" section after expanding the {{root.default}} queue on the RM Scheduler page at http://localhost:8088/cluster/scheduler. It should contain 0.76 instead of 1.0. Confirmed this to be working after the change. [1] https://support.apple.com/en-au/guide/mac-help/mtusr001/mac JUnit has also been updated to ensure that the weights for usernames containing a dot are set up accordingly. Meanwhile, also fixed the junit to ensure that that the {{assertEquals}} with float arguments are picked up instead of double by appending the suffix "f" to the literal values and also removed un-necessary unboxing using {{floatValue()}} as this is not required. 
was (Author: sahuja): Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar
[jira] [Comment Edited] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291334#comment-17291334 ] Siddharth Ahuja edited comment on YARN-10652 at 2/26/21, 1:48 AM: -- Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-SNAPSHOT-tests.jar sleep -m 1 -mt 60 2021-02-26 12:21:57,305 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2021-02-26 12:21:57,989 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032 2021-02-26 12:21:58,101 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200 2021-02-26 12:21:58,581 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/firstname.lastname/.staging/job_1614302477419_0001 2021-02-26 12:21:59,343 INFO mapreduce.JobSubmitter: number of splits:1 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1614302477419_0001 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2021-02-26 12:21:59,573 INFO conf.Configuration: resource-types.xml not found 2021-02-26 12:21:59,574 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2021-02-26 12:21:59,987 INFO impl.YarnClientImpl: Submitted application application_1614302477419_0001 2021-02-26 12:22:00,025 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1614302477419_0001/ 2021-02-26 12:22:00,025 INFO mapreduce.Job: Running job: job_1614302477419_0001 2021-02-26 12:22:07,128 INFO mapreduce.Job: Job job_1614302477419_0001 running in uber mode : false 2021-02-26 12:22:07,130 INFO mapreduce.Job: map 0% reduce 0% ... {code} * Check the "_Active Users Info_" section after expanding the {{root.default}}queue on the RM Scheduler page at http://localhost:8088/cluster/scheduler. It should contain 0.76 instead of 1.0. Confirmed this to be working after the change. [1] https://support.apple.com/en-au/guide/mac-help/mtusr001/mac JUnit has also been updated to ensure that the weights for usernames containing a dot are set up accordingly. Meanwhile, also fixed the junit to ensure that that the {{assertEquals}} with float arguments are picked up instead of double by appending the suffix "f" to the literal values and also removed un-necessary unboxing using {{floatValue()}} as this is not required. 
was (Author: sahuja): Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar s
[jira] [Updated] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja updated YARN-10652: --- Attachment: Incorrect default user weight of 1.0 being picked for the user with a dot before the patch.png > Capacity Scheduler fails to handle user weights for a user that has a "." > (dot) in it > - > > Key: YARN-10652 > URL: https://issues.apache.org/jira/browse/YARN-10652 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Attachments: Correct user weight of 0.76 picked up for the user with > a dot after the patch.png, Incorrect default user weight of 1.0 being picked > for the user with a dot before the patch.png, YARN-10652.001.patch > > > AD usernames can have a "." (dot) in them i.e. they can be of the format -> > {{firstname.lastname}}. However, if you specify a username with this format > against the Capacity Scheduler setting -> > {{yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight}}, > it fails to be applied and is instead assigned the default of 1.0f weight. > This renders the user weight feature (being used as a means of setting user > priorities for a queue) unusable for such users. > This limitation comes from [1]. From [1], only word characters (A word > character: [a-zA-Z_0-9]) (see [2]) are permissible at the moment which is no > good for AD names that contain a "." (dot). > Similar discussion has been had in a few HADOOP jiras e.g. HADOOP-7050 and > HADOOP-15395 and the outcome was to use non-whitespace characters i.e. > instead of {{\w+}}, use {{\S+}}. > We could go down similar path and unblock this feature for the AD usernames > with a "." (dot) in them. > [1] > https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java#L1953 > [2] > https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja updated YARN-10652: --- Attachment: Correct user weight of 0.76 picked up for the user with a dot after the patch.png > Capacity Scheduler fails to handle user weights for a user that has a "." > (dot) in it > - > > Key: YARN-10652 > URL: https://issues.apache.org/jira/browse/YARN-10652 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Attachments: Correct user weight of 0.76 picked up for the user with > a dot after the patch.png, Incorrect default user weight of 1.0 being picked > for the user with a dot before the patch.png, YARN-10652.001.patch > > > AD usernames can have a "." (dot) in them i.e. they can be of the format -> > {{firstname.lastname}}. However, if you specify a username with this format > against the Capacity Scheduler setting -> > {{yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight}}, > it fails to be applied and is instead assigned the default of 1.0f weight. > This renders the user weight feature (being used as a means of setting user > priorities for a queue) unusable for such users. > This limitation comes from [1]. From [1], only word characters (A word > character: [a-zA-Z_0-9]) (see [2]) are permissible at the moment which is no > good for AD names that contain a "." (dot). > Similar discussion has been had in a few HADOOP jiras e.g. HADOOP-7050 and > HADOOP-15395 and the outcome was to use non-whitespace characters i.e. > instead of {{\w+}}, use {{\S+}}. > We could go down similar path and unblock this feature for the AD usernames > with a "." (dot) in them. > [1] > https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java#L1953 > [2] > https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291336#comment-17291336 ] Siddharth Ahuja commented on YARN-10652: +cc [~snemeth] > Capacity Scheduler fails to handle user weights for a user that has a "." > (dot) in it > - > > Key: YARN-10652 > URL: https://issues.apache.org/jira/browse/YARN-10652 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Attachments: YARN-10652.001.patch > > > AD usernames can have a "." (dot) in them i.e. they can be of the format -> > {{firstname.lastname}}. However, if you specify a username with this format > against the Capacity Scheduler setting -> > {{yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight}}, > it fails to be applied and is instead assigned the default of 1.0f weight. > This renders the user weight feature (being used as a means of setting user > priorities for a queue) unusable for such users. > This limitation comes from [1]. From [1], only word characters (A word > character: [a-zA-Z_0-9]) (see [2]) are permissible at the moment which is no > good for AD names that contain a "." (dot). > Similar discussion has been had in a few HADOOP jiras e.g. HADOOP-7050 and > HADOOP-15395 and the outcome was to use non-whitespace characters i.e. > instead of {{\w+}}, use {{\S+}}. > We could go down similar path and unblock this feature for the AD usernames > with a "." (dot) in them. > [1] > https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java#L1953 > [2] > https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10652) Capacity Scheduler fails to handle user weights for a user that has a "." (dot) in it
[ https://issues.apache.org/jira/browse/YARN-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291334#comment-17291334 ] Siddharth Ahuja edited comment on YARN-10652 at 2/26/21, 1:42 AM: -- Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-SNAPSHOT-tests.jar sleep -m 1 -mt 60 2021-02-26 12:21:57,305 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2021-02-26 12:21:57,989 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032 2021-02-26 12:21:58,101 INFO client.AHSProxy: Connecting to Application History server at /0.0.0.0:10200 2021-02-26 12:21:58,581 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/firstname.lastname/.staging/job_1614302477419_0001 2021-02-26 12:21:59,343 INFO mapreduce.JobSubmitter: number of splits:1 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1614302477419_0001 2021-02-26 12:21:59,440 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2021-02-26 12:21:59,573 INFO conf.Configuration: resource-types.xml not found 2021-02-26 12:21:59,574 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2021-02-26 12:21:59,987 INFO impl.YarnClientImpl: Submitted application application_1614302477419_0001 2021-02-26 12:22:00,025 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1614302477419_0001/ 2021-02-26 12:22:00,025 INFO mapreduce.Job: Running job: job_1614302477419_0001 2021-02-26 12:22:07,128 INFO mapreduce.Job: Job job_1614302477419_0001 running in uber mode : false 2021-02-26 12:22:07,130 INFO mapreduce.Job: map 0% reduce 0% ... {code} * Check the "_Active Users Info_" section after expanding the {{root.default }}queue on the RM Scheduler page at http://localhost:8088/cluster/scheduler. It should contain 0.76 instead of 1.0. Confirmed this to be working after the change. [1] https://support.apple.com/en-au/guide/mac-help/mtusr001/mac JUnit has also been updated to ensure that the weights for usernames containing a dot are set up accordingly. Meanwhile, also fixed the junit to ensure that that the {{assertEquals}} with float arguments are picked up instead of double by appending the suffix "f" to the literal values and also removed un-necessary unboxing using {{floatValue()}} as this is not required. 
was (Author: sahuja): Tested the fix on trunk using a single node cluster using the following steps: * Create a _Standard_ user with a username containing a "." (dot) on Mac -> {{firstname.lastname}} as per [1]. * Set up the single node cluster for trunk and enable the following permissions such that the new user can have rwx permissions under /tmp folder as otherwise job submissions will fail: {code} admin@mac hadoop-3.4.0-SNAPSHOT % bin/hdfs dfs -chmod -R a+rwx /tmp 2021-02-26 12:21:54,034 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable {code} * Add the following setting under {{hadoop-3.4.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml}} to enable weights for the {{firstname.lastname}} user: {code} yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight 0.76 {code} * Ensure HDFS & RM services are running on the single node cluster and run the sleep job as the new user: {code} admin@sahuja-MBP16 hadoop-3.4.0-SNAPSHOT % sudo -u firstname.lastname bin/hadoop jar
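The {{assertEquals}}/float-suffix detail mentioned above is easy to miss, so here is a minimal, self-contained sketch (illustrative only; the class name, test method and property handling are assumptions, not the actual test from the YARN-10652 patch). It sets the same user-weight property used in the manual test programmatically and shows why the "f" suffixes select the float overload of {{assertEquals}} with no explicit {{floatValue()}} call.

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.junit.Test;

// Illustrative sketch only -- not the test added by YARN-10652.
public class UserWeightFloatAssertionSketch {

  // Property key from the manual test above: user weight for a dotted username.
  private static final String DOTTED_USER_WEIGHT_KEY =
      "yarn.scheduler.capacity.root.default.user-settings.firstname.lastname.weight";

  @Test
  public void weightForDottedUsernameIsReadBackAsFloat() {
    Configuration conf = new Configuration(false);
    conf.setFloat(DOTTED_USER_WEIGHT_KEY, 0.76f);

    // getFloat returns a primitive float, so no unboxing via floatValue() is needed.
    float weight = conf.getFloat(DOTTED_USER_WEIGHT_KEY, 1.0f);

    // The "f" suffixes make the assertEquals(float, float, float) overload apply
    // instead of the double one.
    assertEquals(0.76f, weight, 0.001f);
  }
}
{code}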
[jira] [Commented] (YARN-10651) CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource()
[ https://issues.apache.org/jira/browse/YARN-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291323#comment-17291323 ] Hadoop QA commented on YARN-10651: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 22s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 48s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 49s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 4s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {col
[jira] [Comment Edited] (YARN-10651) CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource()
[ https://issues.apache.org/jira/browse/YARN-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291305#comment-17291305 ] Jonathan Hung edited comment on YARN-10651 at 2/26/21, 12:04 AM: - +1 from me. I pushed this to trunk~branch-2.10. Thanks [~haibochen] for the contribution. was (Author: jhung): I pushed this to trunk~branch-2.10. Thanks [~haibochen] for the contribution. > CapacityScheduler crashed with NPE in > AbstractYarnScheduler.updateNodeResource() > - > > Key: YARN-10651 > URL: https://issues.apache.org/jira/browse/YARN-10651 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.10.0, 2.10.1 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Fix For: 3.4.0, 3.3.1, 3.1.5, 2.10.2, 3.2.3 > > Attachments: YARN-10651.00.patch, YARN-10651.01.patch, event_seq.jpg > > > {code:java} > 2021-02-24 17:07:39,798 FATAL org.apache.hadoop.yarn.event.EventDispatcher: > Error in handling event type NODE_RESOURCE_UPDATE to the Event Dispatcher > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.updateNodeResource(AbstractYarnScheduler.java:809) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateNodeAndQueueResource(CapacityScheduler.java:1116) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1505) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:154) > at > org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66) > at java.lang.Thread.run(Thread.java:748) > 2021-02-24 17:07:39,798 INFO org.apache.hadoop.yarn.event.EventDispatcher: > Exiting, bbye..{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291266#comment-17291266 ] Qi Zhu commented on YARN-10653: --- Thanks [~ebadger] for the review. I am confused about why there is still a findbugs warning now. > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10651) CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource()
[ https://issues.apache.org/jira/browse/YARN-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291263#comment-17291263 ] Haibo Chen commented on YARN-10651: --- I updated the patch to add some logging. In terms of a unit test, the key condition to trigger this is that the scheduler thread must process a healthy node update event after the corresponding node has turned into the DECOMMISSIONING state (see the diagram for the event ordering), which only happens in a very busy cluster. There isn't anything we can use right now in a unit test to artificially slow down the scheduler thread, wait for the node to become DECOMMISSIONING, and then allow it to process the node update. > CapacityScheduler crashed with NPE in > AbstractYarnScheduler.updateNodeResource() > - > > Key: YARN-10651 > URL: https://issues.apache.org/jira/browse/YARN-10651 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.10.0, 2.10.1 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-10651.00.patch, YARN-10651.01.patch, event_seq.jpg > > > {code:java} > 2021-02-24 17:07:39,798 FATAL org.apache.hadoop.yarn.event.EventDispatcher: > Error in handling event type NODE_RESOURCE_UPDATE to the Event Dispatcher > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.updateNodeResource(AbstractYarnScheduler.java:809) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateNodeAndQueueResource(CapacityScheduler.java:1116) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1505) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:154) > at > org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66) > at java.lang.Thread.run(Thread.java:748) > 2021-02-24 17:07:39,798 INFO org.apache.hadoop.yarn.event.EventDispatcher: > Exiting, bbye..{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
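To make the race described above concrete, the following is a tiny, self-contained sketch of the defensive pattern such a race calls for (an assumption about the general shape of a fix, not the actual YARN-10651 patch): the NODE_RESOURCE_UPDATE handler has to tolerate a node that the scheduler no longer tracks instead of dereferencing null, since an uncaught NullPointerException is fatal to the event dispatcher thread.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch (not the actual YARN-10651 patch) of a guard against handling a
// resource update for a node that has already been removed from the scheduler's
// node tracker (e.g. after it moved to DECOMMISSIONING).
public class NodeResourceUpdateSketch {

  // Stand-in for the scheduler's node tracker, keyed by node id; values are
  // {memoryMb, vcores}.
  private final Map<String, int[]> trackedNodes = new ConcurrentHashMap<>();

  void updateNodeResource(String nodeId, int newMemoryMb, int newVcores) {
    int[] node = trackedNodes.get(nodeId);
    if (node == null) {
      // Without this guard the update would dereference null and crash the
      // dispatcher thread, as in the stack trace above.
      System.out.println("Node " + nodeId + " is no longer tracked; skipping update.");
      return;
    }
    node[0] = newMemoryMb;
    node[1] = newVcores;
  }

  public static void main(String[] args) {
    NodeResourceUpdateSketch sketch = new NodeResourceUpdateSketch();
    sketch.trackedNodes.put("node-1:8041", new int[] {8192, 8});
    sketch.updateNodeResource("node-1:8041", 4096, 4); // normal path
    sketch.updateNodeResource("node-2:8041", 4096, 4); // already removed: skipped
  }
}
{code}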
[jira] [Updated] (YARN-10651) CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource()
[ https://issues.apache.org/jira/browse/YARN-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-10651: -- Attachment: YARN-10651.01.patch > CapacityScheduler crashed with NPE in > AbstractYarnScheduler.updateNodeResource() > - > > Key: YARN-10651 > URL: https://issues.apache.org/jira/browse/YARN-10651 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.10.0, 2.10.1 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-10651.00.patch, YARN-10651.01.patch, event_seq.jpg > > > {code:java} > 2021-02-24 17:07:39,798 FATAL org.apache.hadoop.yarn.event.EventDispatcher: > Error in handling event type NODE_RESOURCE_UPDATE to the Event Dispatcher > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.updateNodeResource(AbstractYarnScheduler.java:809) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateNodeAndQueueResource(CapacityScheduler.java:1116) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1505) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:154) > at > org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66) > at java.lang.Thread.run(Thread.java:748) > 2021-02-24 17:07:39,798 INFO org.apache.hadoop.yarn.event.EventDispatcher: > Exiting, bbye..{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291246#comment-17291246 ] Hadoop QA commented on YARN-10613: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 32s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 19s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 34s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 36s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green}{color} | {color:green} branch-3.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green}{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 104 unchanged - 1 fixed = 104 total (was 105) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 17s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green}{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 58s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/681/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green}{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}159m 0s{color} | {color:black}{color} | {color:black}{color} | \\ \\ || Reason || Tests || | Failed junit tests | ha
[jira] [Commented] (YARN-10651) CapacityScheduler crashed with NPE in AbstractYarnScheduler.updateNodeResource()
[ https://issues.apache.org/jira/browse/YARN-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291225#comment-17291225 ] Jonathan Hung commented on YARN-10651: -- Thanks [~haibochen] - should we add some logging in this case? Also, any way to reproduce this issue in a test? > CapacityScheduler crashed with NPE in > AbstractYarnScheduler.updateNodeResource() > - > > Key: YARN-10651 > URL: https://issues.apache.org/jira/browse/YARN-10651 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.10.0, 2.10.1 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-10651.00.patch, event_seq.jpg > > > {code:java} > 2021-02-24 17:07:39,798 FATAL org.apache.hadoop.yarn.event.EventDispatcher: > Error in handling event type NODE_RESOURCE_UPDATE to the Event Dispatcher > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.updateNodeResource(AbstractYarnScheduler.java:809) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateNodeAndQueueResource(CapacityScheduler.java:1116) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1505) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:154) > at > org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66) > at java.lang.Thread.run(Thread.java:748) > 2021-02-24 17:07:39,798 INFO org.apache.hadoop.yarn.event.EventDispatcher: > Exiting, bbye..{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291196#comment-17291196 ] Eric Badger commented on YARN-10653: [~zhuqi], very sorry about committing YARN-10647 without seeing the findbugs report. That's my mistake and I apologize. Looking at the above Hadoop QA report it still shows the findbugs warning for {{labels}}, but it references lines 643 and 649. In this patch, you have removed the {{labels}} nullcheck. So to me it looks like findbugs didn't run on your patch, but rather the current code without your patch. But I'd rather not commit another patch with a findbugs warning. So [~snemeth] or [~pbacsko] could you also take a look? {noformat:title=CommonNodeLabelsManager.java after Qi's patch} 642 case REPLACE: 643 replaceNodeForLabels(nodeId, host.labels, labels); 644 replaceLabelsForNode(nodeId, host.labels, labels); 645 host.labels.clear(); 646 host.labels.addAll(labels); 647 for (Node node : host.nms.values()) { 648 replaceNodeForLabels(node.nodeId, node.labels, labels); 649 if (node.labels != null) { 650 replaceLabelsForNode(node.nodeId, node.labels, labels); 651 } 652 node.labels = null; 653 } {noformat} > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10564) Support Auto Queue Creation template configurations
[ https://issues.apache.org/jira/browse/YARN-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291175#comment-17291175 ] Hadoop QA commented on YARN-10564: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 23s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 7s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 6s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/680/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 185 unchanged - 0 fixed = 186 total (was 185) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 47s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color
[jira] [Updated] (YARN-10653) Fixed the findbugs issues introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10653: -- Summary: Fixed the findbugs issues introduced by YARN-10647. (was: Fixed the findingbugs introduced by YARN-10647.) > Fixed the findbugs issues introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291154#comment-17291154 ] Eric Payne edited comment on YARN-10613 at 2/25/21, 7:15 PM: - Thanks [~Jim_Brennan]. I have uploaded the branch-3.2 patch. It backports cleanly, compiles, and the preemption tests pass. was (Author: eepayne): Thanks [~Jim_Brennan]. I have uploaded the branch-3.2 patch. > Config to allow Intra- and Inter-queue preemption to enable/disable > conservativeDRF > > > Key: YARN-10613 > URL: https://issues.apache.org/jira/browse/YARN-10613 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Minor > Attachments: YARN-10613.branch-2.10.002.patch, > YARN-10613.branch-3.2.002.patch, YARN-10613.trunk.001.patch, > YARN-10613.trunk.002.patch > > > YARN-8292 added code that prevents CS intra-queue preemption from preempting > containers from an app unless all of the major resources used by the app are > greater than the user limit for that user. > Ex: > | Used | User Limit | > | <58GB, 58> | <30GB, 300> | > In this example, only used memory is above the user limit, not used vcores. > So, intra-queue preemption will not occur. > YARN-8292 added the {{conservativeDRF}} flag to > {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. > If {{conservativeDRF}} is false, containers will be preempted from apps in > the example state. If true, containers will not be preempted. > This flag is hard-coded to false for Inter-queue (cross-queue) preemption and > true for intra-queue (in-queue) preemption. > I propose that in some cases, we want intra-queue preemption to be more > aggressive and preempt in the example case. To accommodate that, I propose > the addition of a config property. > Also, we may want inter-queue (cross-queue) preemption to be more > conservative, so I propose also making that a configuration property: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291154#comment-17291154 ] Eric Payne commented on YARN-10613: --- Thanks [~Jim_Brennan]. I have uploaded the branch-3.2 patch. > Config to allow Intra- and Inter-queue preemption to enable/disable > conservativeDRF > > > Key: YARN-10613 > URL: https://issues.apache.org/jira/browse/YARN-10613 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Minor > Attachments: YARN-10613.branch-2.10.002.patch, > YARN-10613.branch-3.2.002.patch, YARN-10613.trunk.001.patch, > YARN-10613.trunk.002.patch > > > YARN-8292 added code that prevents CS intra-queue preemption from preempting > containers from an app unless all of the major resources used by the app are > greater than the user limit for that user. > Ex: > | Used | User Limit | > | <58GB, 58> | <30GB, 300> | > In this example, only used memory is above the user limit, not used vcores. > So, intra-queue preemption will not occur. > YARN-8292 added the {{conservativeDRF}} flag to > {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. > If {{conservativeDRF}} is false, containers will be preempted from apps in > the example state. If true, containers will not be preempted. > This flag is hard-coded to false for Inter-queue (cross-queue) preemption and > true for intra-queue (in-queue) preemption. > I propose that in some cases, we want intra-queue preemption to be more > aggressive and preempt in the example case. To accommodate that, I propose > the addition of a config property. > Also, we may want inter-queue (cross-queue) preemption to be more > conservative, so I propose also making that a configuration property: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-10613: -- Attachment: YARN-10613.branch-3.2.002.patch > Config to allow Intra- and Inter-queue preemption to enable/disable > conservativeDRF > > > Key: YARN-10613 > URL: https://issues.apache.org/jira/browse/YARN-10613 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Minor > Attachments: YARN-10613.branch-2.10.002.patch, > YARN-10613.branch-3.2.002.patch, YARN-10613.trunk.001.patch, > YARN-10613.trunk.002.patch > > > YARN-8292 added code that prevents CS intra-queue preemption from preempting > containers from an app unless all of the major resources used by the app are > greater than the user limit for that user. > Ex: > | Used | User Limit | > | <58GB, 58> | <30GB, 300> | > In this example, only used memory is above the user limit, not used vcores. > So, intra-queue preemption will not occur. > YARN-8292 added the {{conservativeDRF}} flag to > {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. > If {{conservativeDRF}} is false, containers will be preempted from apps in > the example state. If true, containers will not be preempted. > This flag is hard-coded to false for Inter-queue (cross-queue) preemption and > true for intra-queue (in-queue) preemption. > I propose that in some cases, we want intra-queue preemption to be more > aggressive and preempt in the example case. To accommodate that, I propose > the addition of a config property. > Also, we may want inter-queue (cross-queue) preemption to be more > conservative, so I propose also making that a configuration property: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
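Because the conservativeDRF behaviour described in the issue above is easy to misread, here is a minimal sketch restating it with the exact numbers from the example (illustrative only; it reduces the resource vectors to memory and vcores and is not the actual CapacitySchedulerPreemptionUtils code): with the conservative behaviour every resource dimension must exceed the user limit before preemption, so the <58GB, 58> vs <30GB, 300> case is left alone, while the non-conservative behaviour preempts as soon as any single dimension is over.

{code:java}
// Rough illustration of the conservativeDRF decision described in YARN-10613.
public class ConservativeDrfSketch {

  static boolean shouldPreempt(long usedMemMb, long usedVcores,
                               long limitMemMb, long limitVcores,
                               boolean conservativeDRF) {
    boolean memOver = usedMemMb > limitMemMb;
    boolean vcoresOver = usedVcores > limitVcores;
    // Conservative: all dimensions must be over the limit. Otherwise: any one.
    return conservativeDRF ? (memOver && vcoresOver) : (memOver || vcoresOver);
  }

  public static void main(String[] args) {
    // Example from the issue description: used <58GB, 58>, user limit <30GB, 300>.
    long usedMem = 58L * 1024, usedVcores = 58, limitMem = 30L * 1024, limitVcores = 300;
    System.out.println("conservative (current intra-queue behaviour): "
        + shouldPreempt(usedMem, usedVcores, limitMem, limitVcores, true));   // false
    System.out.println("non-conservative (current inter-queue behaviour): "
        + shouldPreempt(usedMem, usedVcores, limitMem, limitVcores, false));  // true
  }
}
{code}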
[jira] [Commented] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291075#comment-17291075 ] Hadoop QA commented on YARN-10653: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 21s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 19s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 13s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 1s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/679/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant findbugs warnings. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 1s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our cli
[jira] [Commented] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291064#comment-17291064 ] Jim Brennan commented on YARN-10613: [~epayne], I have committed to trunk and branch-3.3, but the patch does not work for branch-3.2. I can get it to apply, but then compilation fails. Can you put up a patch for branch-3.2, and branch-3.1 if needed? > Config to allow Intra- and Inter-queue preemption to enable/disable > conservativeDRF > > > Key: YARN-10613 > URL: https://issues.apache.org/jira/browse/YARN-10613 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Minor > Attachments: YARN-10613.branch-2.10.002.patch, > YARN-10613.trunk.001.patch, YARN-10613.trunk.002.patch > > > YARN-8292 added code that prevents CS intra-queue preemption from preempting > containers from an app unless all of the major resources used by the app are > greater than the user limit for that user. > Ex: > | Used | User Limit | > | <58GB, 58> | <30GB, 300> | > In this example, only used memory is above the user limit, not used vcores. > So, intra-queue preemption will not occur. > YARN-8292 added the {{conservativeDRF}} flag to > {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. > If {{conservativeDRF}} is false, containers will be preempted from apps in > the example state. If true, containers will not be preempted. > This flag is hard-coded to false for Inter-queue (cross-queue) preemption and > true for intra-queue (in-queue) preemption. > I propose that in some cases, we want intra-queue preemption to be more > aggressive and preempt in the example case. To accommodate that, I propose > the addition of a config property. > Also, we may want inter-queue (cross-queue) preemption to be more > conservative, so I propose also making that a configuration property: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10564) Support Auto Queue Creation template configurations
[ https://issues.apache.org/jira/browse/YARN-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291038#comment-17291038 ] Andras Gyori commented on YARN-10564: - Simplified the code by reordering the configuration property name. Also, I have realised that since we only support 2 levels of auto queue creation (so only 1 level of dynamic ParentQueues at a time), we only need to support 1 wildcard level. > Support Auto Queue Creation template configurations > --- > > Key: YARN-10564 > URL: https://issues.apache.org/jira/browse/YARN-10564 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Andras Gyori >Assignee: Andras Gyori >Priority: Major > Attachments: YARN-10564.001.patch, YARN-10564.002.patch, > YARN-10564.003.patch, YARN-10564.004.patch, YARN-10564.005.patch, > YARN-10564.poc.001.patch > > > Similar to how the template configuration works for ManagedParents, we need > to support templates for the new auto queue creation logic. The proposition is to > allow wildcards in template configs such as: > {noformat} > yarn.scheduler.capacity.root.*.*.weight 10{noformat} > which would mean: set the weight of every leaf of every parent under > root to 10. > We should possibly take an approach that could support arbitrary depth of > template configuration, because we might need to lift the limitation on auto > queue nesting. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10564) Support Auto Queue Creation template configurations
[ https://issues.apache.org/jira/browse/YARN-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Gyori updated YARN-10564: Attachment: YARN-10564.005.patch > Support Auto Queue Creation template configurations > --- > > Key: YARN-10564 > URL: https://issues.apache.org/jira/browse/YARN-10564 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Andras Gyori >Assignee: Andras Gyori >Priority: Major > Attachments: YARN-10564.001.patch, YARN-10564.002.patch, > YARN-10564.003.patch, YARN-10564.004.patch, YARN-10564.005.patch, > YARN-10564.poc.001.patch > > > Similar to how the template configuration works for ManagedParents, we need > to support templates for the new auto queue creation logic. The proposition is to > allow wildcards in template configs such as: > {noformat} > yarn.scheduler.capacity.root.*.*.weight 10{noformat} > which would mean: set the weight of every leaf of every parent under > root to 10. > We should possibly take an approach that could support arbitrary depth of > template configuration, because we might need to lift the limitation on auto > queue nesting. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
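Since the wildcard semantics described in this issue can be hard to picture from the property name alone, here is a small sketch of the idea (an illustration only, not the actual YARN-10564 implementation): a wildcarded template queue path such as {{root.*.*}} is matched against concrete queue paths, and the template value (the weight of 10 in the description) would then be applied to every matching queue.

{code:java}
// Rough illustration of wildcard template-path matching; not the real
// CapacityScheduler template resolution code.
public class TemplateWildcardSketch {

  // Returns true if every segment of the template path either equals the
  // corresponding queue path segment or is the "*" wildcard.
  static boolean templateMatches(String templateQueuePath, String queuePath) {
    String[] template = templateQueuePath.split("\\.");
    String[] path = queuePath.split("\\.");
    if (template.length != path.length) {
      return false;
    }
    for (int i = 0; i < template.length; i++) {
      if (!template[i].equals("*") && !template[i].equals(path[i])) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // "root.*.*" covers every leaf of every parent directly under root.
    System.out.println(templateMatches("root.*.*", "root.users.alice")); // true
    System.out.println(templateMatches("root.*.*", "root.users"));       // false
  }
}
{code}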
[jira] [Comment Edited] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291023#comment-17291023 ] Qi Zhu edited comment on YARN-10653 at 2/25/21, 4:26 PM: - [~ebadger] [~snemeth] [~pbacsko] Triggered CI showed in -YARN-10647 and- YARN-10623. The fix by me in -YARN-10647,- It still have finding bugs should be handled. It should be handled quickly, could you help merge this finding bug fix. Thanks. was (Author: zhuqi): [~ebadger] [~snemeth] [~pbacsko] Triggered CI showed in YARN-10623. The fix by me in -YARN-10647,- It still have finding bugs should be handled. It should be handled quickly, could you help merge this finding bug fix. Thanks. > Fixed the findingbugs introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291023#comment-17291023 ] Qi Zhu edited comment on YARN-10653 at 2/25/21, 4:25 PM: - [~ebadger] [~snemeth] [~pbacsko] Triggered CI showed in YARN-10623. The fix by me in -YARN-10647,- It still have finding bugs should be handled. It should be handled quickly, could you help merge this finding bug fix. Thanks. was (Author: zhuqi): [~ebadger] Triggered CI showed in YARN-10623. The fix in It still have finding bugs should be handled. > Fixed the findingbugs introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291023#comment-17291023 ] Qi Zhu edited comment on YARN-10653 at 2/25/21, 4:23 PM: - [~ebadger] Triggered CI showed in YARN-10623. The fix in It still have finding bugs should be handled. was (Author: zhuqi): [~ebadger] Triggered CI in YARN-10623 It still have finding bugs should be handled. > Fixed the findingbugs introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10653.001.patch > > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291023#comment-17291023 ] Qi Zhu edited comment on YARN-10653 at 2/25/21, 4:20 PM: - [~ebadger] Triggered CI in YARN-10623 It still have finding bugs should be handled. was (Author: zhuqi): Triggered CI in YARN-10623 It still have finding bugs should be handled. > Fixed the findingbugs introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10613) Config to allow Intra- and Inter-queue preemption to enable/disable conservativeDRF
[ https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291022#comment-17291022 ] Eric Payne commented on YARN-10613: --- I don't think the unit test failures were related. It looks like a build environment issue. This is from the UT log: {panel:title=https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/671/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt} [INFO] Results: [INFO] [WARNING] Tests run: 2161, Failures: 0, Errors: 0, Skipped: 8 ... [ERROR] Error occurred in starting fork, check output in log [ERROR] Process Exit Code: 1 [ERROR] Crashed tests: [ERROR] org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA ... [ERROR] Error occurred in starting fork, check output in log [ERROR] Process Exit Code: 1 [ERROR] Crashed tests: [ERROR] org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA ... [ERROR] Error occurred in starting fork, check output in log [ERROR] Process Exit Code: 1 [ERROR] Crashed tests: [ERROR] org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA ... [ERROR] Error occurred in starting fork, check output in log [ERROR] Process Exit Code: 1 [ERROR] Crashed tests: [ERROR] org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands {panel} > Config to allow Intra- and Inter-queue preemption to enable/disable > conservativeDRF > > > Key: YARN-10613 > URL: https://issues.apache.org/jira/browse/YARN-10613 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Minor > Attachments: YARN-10613.branch-2.10.002.patch, > YARN-10613.trunk.001.patch, YARN-10613.trunk.002.patch > > > YARN-8292 added code that prevents CS intra-queue preemption from preempting > containers from an app unless all of the major resources used by the app are > greater than the user limit for that user. > Ex: > | Used | User Limit | > | <58GB, 58> | <30GB, 300> | > In this example, only used memory is above the user limit, not used vcores. > So, intra-queue preemption will not occur. > YARN-8292 added the {{conservativeDRF}} flag to > {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. > If {{conservativeDRF}} is false, containers will be preempted from apps in > the example state. If true, containers will not be preempted. > This flag is hard-coded to false for Inter-queue (cross-queue) preemption and > true for intra-queue (in-queue) preemption. > I propose that in some cases, we want intra-queue preemption to be more > aggressive and preempt in the example case. To accommodate that, I propose > the addition of a config property. > Also, we may want inter-queue (cross-queue) preemption to be more > conservative, so I propose also making that a configuration property: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
[ https://issues.apache.org/jira/browse/YARN-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291023#comment-17291023 ] Qi Zhu commented on YARN-10653: --- Triggered CI in YARN-10623. It still has findbugs warnings that should be handled. > Fixed the findingbugs introduced by YARN-10647. > --- > > Key: YARN-10653 > URL: https://issues.apache.org/jira/browse/YARN-10653 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > > In YARN-10647 > I fixed TestRMNodeLabelsManager failed after YARN-10501. > But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10653) Fixed the findingbugs introduced by YARN-10647.
Qi Zhu created YARN-10653: - Summary: Fixed the findingbugs introduced by YARN-10647. Key: YARN-10653 URL: https://issues.apache.org/jira/browse/YARN-10653 Project: Hadoop YARN Issue Type: Improvement Reporter: Qi Zhu Assignee: Qi Zhu In YARN-10647 I fixed TestRMNodeLabelsManager failed after YARN-10501. But the finding bugs should be fixed also. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10627) Extend logging to give more information about weight mode
[ https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17291001#comment-17291001 ] Hadoop QA commented on YARN-10627: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 58s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 31s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 14s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 30s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 49s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/677/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 146 unchanged - 16 fixed = 148 total (was 162) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 56s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {colo
[jira] [Commented] (YARN-10627) Extend logging to give more information about weight mode
[ https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290991#comment-17290991 ] Hadoop QA commented on YARN-10627: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 22s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 44s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 44s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/678/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 146 unchanged - 16 fixed = 148 total (was 162) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 39s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {co
[jira] [Commented] (YARN-9615) Add dispatcher metrics to RM
[ https://issues.apache.org/jira/browse/YARN-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290963#comment-17290963 ] Peter Bacsko commented on YARN-9615: Thanks for the patch [~zhuqi]. Some comments: 1. {noformat} eventTypeMetricsMap.get(event.getType().getClass()) .incr(event.getType(), (System.nanoTime() - startTime) / 1000); {noformat} I'm not 100% confident in this, but most of the time, we rely on {{Clock}} implementations, like {{MonotonicClock}}. I suggest using {{MonotonicClock.getTime()}}. It might be a good idea to introduce a new method to {{AsyncDispatcher}} like {{setClock()}} (mark it with VisibleForTesting). This way, you can replace the Clock instance with a mock or something else, so testability is much easier. 2. Same thing applies to {{EventDispatcher}}. 3. Nit: {{public class DisableEventTypeMetrics implements EventTypeMetrics{}} -- add space after "EventTypeMetrics" 4. {noformat} @Override public void get(Enum type) { } {noformat} If this method does nothing, pls. add a comment like "//nop" to the method body (make it clear that no-op is normal). 5. {noformat} @Override public void get(T type) { } @Override public void getMetrics(MetricsCollector collector, boolean all) { } {noformat} Same here, add a short "// nop" comment in the method bodies. 6. ResourceManager.java: {{import org.apache.hadoop.yarn.event.*;}} --> avoid star imports 7. EventTypeMetrics.java: {noformat} void incr(T type, long processingTimeUs); {noformat} Nit: it's a minor thing, but if we can do it, let's write complete words, so I'd opt for {{increment()}} instead of just {{incr()}}. 8. Very important: there are NO tests for either {{EventDispatcher}} or {{AsyncDispatcher}}. Please add 1-2 unit tests that validate the correct behavior (and think of #1 and you can use a mock {{Clock}} instance for verification). Also fix checkstyle and FindBugs issues. > Add dispatcher metrics to RM > > > Key: YARN-9615 > URL: https://issues.apache.org/jira/browse/YARN-9615 > Project: Hadoop YARN > Issue Type: Task >Reporter: Jonathan Hung >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-9615.001.patch, YARN-9615.002.patch, > YARN-9615.003.patch, YARN-9615.poc.patch, screenshot-1.png > > > It'd be good to have counts/processing times for each event type in RM async > dispatcher and scheduler async dispatcher. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
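To make review points #1, #4/#5 and #8 concrete, here is a minimal illustrative Java sketch. It is not the actual YARN-9615 patch: the setClock() method, the handleWithTiming() flow and the DisabledMetricsSketch class are assumptions for illustration only.

{code:java}
// Illustrative sketch of review points #1 and #4/#5; not the actual YARN-9615 patch.
import org.apache.hadoop.yarn.util.Clock;
import org.apache.hadoop.yarn.util.MonotonicClock;
import com.google.common.annotations.VisibleForTesting;

class DispatcherMetricsSketch {

  // Point #1: a replaceable clock so tests can inject a controlled implementation.
  private Clock clock = new MonotonicClock();

  @VisibleForTesting
  void setClock(Clock clock) {
    this.clock = clock;
  }

  // Hypothetical timing around event handling, using the injected clock.
  void handleWithTiming(Runnable handler) {
    long start = clock.getTime();              // milliseconds from MonotonicClock
    handler.run();
    long elapsedMs = clock.getTime() - start;
    // eventTypeMetrics.increment(eventType, elapsedMs);  // "increment" per point #7;
    // note: the patch's interface takes microseconds, conversion omitted in this sketch
  }

  // Points #4/#5: intentional no-ops should say so explicitly.
  static class DisabledMetricsSketch {
    public void increment(Enum<?> type, long processingTimeUs) {
      // nop - metrics collection is disabled
    }
  }
}
{code}

For point #8, setClock() could then receive a controlled clock in a unit test (for example org.apache.hadoop.yarn.util.ControlledClock, advanced manually), so the recorded durations can be asserted deterministically instead of depending on wall-clock timing.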
[jira] [Updated] (YARN-10639) Queueinfo related capacity, should adjusted to weight mode.
[ https://issues.apache.org/jira/browse/YARN-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10639: -- Summary: Queueinfo related capacity, should adjusted to weight mode. (was: Queueinfo related capacity, should ajusted to weight mode.) > Queueinfo related capacity, should adjusted to weight mode. > --- > > Key: YARN-10639 > URL: https://issues.apache.org/jira/browse/YARN-10639 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10639.001.patch, YARN-10639.002.patch > > > {color:#172b4d}The QueueInfo capacity field should take weight mode into account.{color} > {color:#172b4d}Currently, when a client uses getQueueInfo to get the queue capacity in weight mode, it always returns 0, which is wrong.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10627) Extend logging to give more information about weight mode
[ https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290894#comment-17290894 ] Benjamin Teke commented on YARN-10627: -- Thanks [~pbacsko] and [~gandras] for the review. Fixed the checkstyle issues, added the rm close methods, and removed the unnecessary GB string. The current tests include cases for capacity related log string generation (the used configurations are mixed mode, they have both capacity and weight in the hierarchy), this is why I didn't feel the need to introduce separate cases. Do you think it should be done? > Extend logging to give more information about weight mode > - > > Key: YARN-10627 > URL: https://issues.apache.org/jira/browse/YARN-10627 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Benjamin Teke >Assignee: Benjamin Teke >Priority: Major > Attachments: YARN-10627.001.patch, YARN-10627.002.patch, > YARN-10627.003.patch, YARN-10627.004.patch, YARN-10627.005.patch, > image-2021-02-20-00-07-09-875.png > > > In YARN-10504 weight mode was added, however the logged information about the > created queues or the toString methods weren't updated accordingly. Some > examples: > ParentQueue#setupQueueConfigs: > {code:java} > LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity() > + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity() > + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity() > + ", absoluteMaxCapacity=" + this.queueCapacities > .getAbsoluteMaximumCapacity() + ", state=" + getState() + ", acls=" > + aclsString + ", labels=" + labelStrBuilder.toString() + "\n" > + ", reservationsContinueLooking=" + reservationsContinueLooking > + ", orderingPolicy=" + getQueueOrderingPolicyConfigName() > + ", priority=" + priority > + ", allowZeroCapacitySum=" + allowZeroCapacitySum); > {code} > ParentQueue#toString: > {code:java} > public String toString() { > return queueName + ": " + > "numChildQueue= " + childQueues.size() + ", " + > "capacity=" + queueCapacities.getCapacity() + ", " + > "absoluteCapacity=" + queueCapacities.getAbsoluteCapacity() + ", " + > "usedResources=" + queueUsage.getUsed() + > "usedCapacity=" + getUsedCapacity() + ", " + > "numApps=" + getNumApplications() + ", " + > "numContainers=" + getNumContainers(); > } > {code} > LeafQueue#setupQueueConfigs: > {code:java} > LOG.info( > "Initializing " + getQueuePath() + "\n" + "capacity = " > + queueCapacities.getCapacity() > + " [= (float) configuredCapacity / 100 ]" + "\n" > + "absoluteCapacity = " + queueCapacities.getAbsoluteCapacity() > + " [= parentAbsoluteCapacity * capacity ]" + "\n" > + "maxCapacity = " + queueCapacities.getMaximumCapacity() > + " [= configuredMaxCapacity ]" + "\n" + "absoluteMaxCapacity = > " > + queueCapacities.getAbsoluteMaximumCapacity() > + " [= 1.0 maximumCapacity undefined, " > + "(parentAbsoluteMaxCapacity * maximumCapacity) / 100 > otherwise ]" > + "\n" + "effectiveMinResource=" + > getEffectiveCapacity(CommonNodeLabelsManager.NO_LABEL) + "\n" > + " , effectiveMaxResource=" + > getEffectiveMaxCapacity(CommonNodeLabelsManager.NO_LABEL) > + "\n" + "userLimit = " + usersManager.getUserLimit() > + " [= configuredUserLimit ]" + "\n" + "userLimitFactor = " > + usersManager.getUserLimitFactor() > + " [= configuredUserLimitFactor ]" + "\n" + "maxApplications = > " > + maxApplications > + " [= configuredMaximumSystemApplicationsPerQueue or" > + " (int)(configuredMaximumSystemApplications * > absoluteCapacity)]" > + "\n" + 
"maxApplicationsPerUser = " + maxApplicationsPerUser > + " [= (int)(maxApplications * (userLimit / 100.0f) * " > + "userLimitFactor) ]" + "\n" > + "maxParallelApps = " + getMaxParallelApps() + "\n" > + "usedCapacity = " + > + queueCapacities.getUsedCapacity() + " [= usedResourcesMemory > / " > + "(clusterResourceMemory * absoluteCapacity)]" + "\n" > + "absoluteUsedCapacity = " + absoluteUsedCapacity > + " [= usedResourcesMemory / clusterResourceMemory]" + "\n" > + "maxAMResourcePerQueuePercent = " + > maxAMResourcePerQueuePercent > + " [= configuredMaximumAMResourcePercent ]" + "\n" > + "minimumAlloca
[jira] [Updated] (YARN-10627) Extend logging to give more information about weight mode
[ https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Teke updated YARN-10627: - Attachment: YARN-10627.005.patch > Extend logging to give more information about weight mode > - > > Key: YARN-10627 > URL: https://issues.apache.org/jira/browse/YARN-10627 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Benjamin Teke >Assignee: Benjamin Teke >Priority: Major > Attachments: YARN-10627.001.patch, YARN-10627.002.patch, > YARN-10627.003.patch, YARN-10627.004.patch, YARN-10627.005.patch, > image-2021-02-20-00-07-09-875.png > > > In YARN-10504 weight mode was added, however the logged information about the > created queues or the toString methods weren't updated accordingly. Some > examples: > ParentQueue#setupQueueConfigs: > {code:java} > LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity() > + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity() > + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity() > + ", absoluteMaxCapacity=" + this.queueCapacities > .getAbsoluteMaximumCapacity() + ", state=" + getState() + ", acls=" > + aclsString + ", labels=" + labelStrBuilder.toString() + "\n" > + ", reservationsContinueLooking=" + reservationsContinueLooking > + ", orderingPolicy=" + getQueueOrderingPolicyConfigName() > + ", priority=" + priority > + ", allowZeroCapacitySum=" + allowZeroCapacitySum); > {code} > ParentQueue#toString: > {code:java} > public String toString() { > return queueName + ": " + > "numChildQueue= " + childQueues.size() + ", " + > "capacity=" + queueCapacities.getCapacity() + ", " + > "absoluteCapacity=" + queueCapacities.getAbsoluteCapacity() + ", " + > "usedResources=" + queueUsage.getUsed() + > "usedCapacity=" + getUsedCapacity() + ", " + > "numApps=" + getNumApplications() + ", " + > "numContainers=" + getNumContainers(); > } > {code} > LeafQueue#setupQueueConfigs: > {code:java} > LOG.info( > "Initializing " + getQueuePath() + "\n" + "capacity = " > + queueCapacities.getCapacity() > + " [= (float) configuredCapacity / 100 ]" + "\n" > + "absoluteCapacity = " + queueCapacities.getAbsoluteCapacity() > + " [= parentAbsoluteCapacity * capacity ]" + "\n" > + "maxCapacity = " + queueCapacities.getMaximumCapacity() > + " [= configuredMaxCapacity ]" + "\n" + "absoluteMaxCapacity = > " > + queueCapacities.getAbsoluteMaximumCapacity() > + " [= 1.0 maximumCapacity undefined, " > + "(parentAbsoluteMaxCapacity * maximumCapacity) / 100 > otherwise ]" > + "\n" + "effectiveMinResource=" + > getEffectiveCapacity(CommonNodeLabelsManager.NO_LABEL) + "\n" > + " , effectiveMaxResource=" + > getEffectiveMaxCapacity(CommonNodeLabelsManager.NO_LABEL) > + "\n" + "userLimit = " + usersManager.getUserLimit() > + " [= configuredUserLimit ]" + "\n" + "userLimitFactor = " > + usersManager.getUserLimitFactor() > + " [= configuredUserLimitFactor ]" + "\n" + "maxApplications = > " > + maxApplications > + " [= configuredMaximumSystemApplicationsPerQueue or" > + " (int)(configuredMaximumSystemApplications * > absoluteCapacity)]" > + "\n" + "maxApplicationsPerUser = " + maxApplicationsPerUser > + " [= (int)(maxApplications * (userLimit / 100.0f) * " > + "userLimitFactor) ]" + "\n" > + "maxParallelApps = " + getMaxParallelApps() + "\n" > + "usedCapacity = " + > + queueCapacities.getUsedCapacity() + " [= usedResourcesMemory > / " > + "(clusterResourceMemory * absoluteCapacity)]" + "\n" > + "absoluteUsedCapacity = " + absoluteUsedCapacity > + " [= usedResourcesMemory / 
clusterResourceMemory]" + "\n" > + "maxAMResourcePerQueuePercent = " + > maxAMResourcePerQueuePercent > + " [= configuredMaximumAMResourcePercent ]" + "\n" > + "minimumAllocationFactor = " + minimumAllocationFactor > + " [= (float)(maximumAllocationMemory - > minimumAllocationMemory) / " > + "maximumAllocationMemory ]" + "\n" + "maximumAllocation = " > + maximumAllocation + " [= configuredMaxAllocation ]" + "\n" > + "numContainers = " + numContainers > + " [= currentNumContainers ]" + "\n" + "state = " + getState() > + "
[jira] [Updated] (YARN-10627) Extend logging to give more information about weight mode
[ https://issues.apache.org/jira/browse/YARN-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Teke updated YARN-10627: - Attachment: YARN-10627.004.patch > Extend logging to give more information about weight mode > - > > Key: YARN-10627 > URL: https://issues.apache.org/jira/browse/YARN-10627 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Benjamin Teke >Assignee: Benjamin Teke >Priority: Major > Attachments: YARN-10627.001.patch, YARN-10627.002.patch, > YARN-10627.003.patch, YARN-10627.004.patch, image-2021-02-20-00-07-09-875.png > > > In YARN-10504 weight mode was added, however the logged information about the > created queues or the toString methods weren't updated accordingly. Some > examples: > ParentQueue#setupQueueConfigs: > {code:java} > LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity() > + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity() > + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity() > + ", absoluteMaxCapacity=" + this.queueCapacities > .getAbsoluteMaximumCapacity() + ", state=" + getState() + ", acls=" > + aclsString + ", labels=" + labelStrBuilder.toString() + "\n" > + ", reservationsContinueLooking=" + reservationsContinueLooking > + ", orderingPolicy=" + getQueueOrderingPolicyConfigName() > + ", priority=" + priority > + ", allowZeroCapacitySum=" + allowZeroCapacitySum); > {code} > ParentQueue#toString: > {code:java} > public String toString() { > return queueName + ": " + > "numChildQueue= " + childQueues.size() + ", " + > "capacity=" + queueCapacities.getCapacity() + ", " + > "absoluteCapacity=" + queueCapacities.getAbsoluteCapacity() + ", " + > "usedResources=" + queueUsage.getUsed() + > "usedCapacity=" + getUsedCapacity() + ", " + > "numApps=" + getNumApplications() + ", " + > "numContainers=" + getNumContainers(); > } > {code} > LeafQueue#setupQueueConfigs: > {code:java} > LOG.info( > "Initializing " + getQueuePath() + "\n" + "capacity = " > + queueCapacities.getCapacity() > + " [= (float) configuredCapacity / 100 ]" + "\n" > + "absoluteCapacity = " + queueCapacities.getAbsoluteCapacity() > + " [= parentAbsoluteCapacity * capacity ]" + "\n" > + "maxCapacity = " + queueCapacities.getMaximumCapacity() > + " [= configuredMaxCapacity ]" + "\n" + "absoluteMaxCapacity = > " > + queueCapacities.getAbsoluteMaximumCapacity() > + " [= 1.0 maximumCapacity undefined, " > + "(parentAbsoluteMaxCapacity * maximumCapacity) / 100 > otherwise ]" > + "\n" + "effectiveMinResource=" + > getEffectiveCapacity(CommonNodeLabelsManager.NO_LABEL) + "\n" > + " , effectiveMaxResource=" + > getEffectiveMaxCapacity(CommonNodeLabelsManager.NO_LABEL) > + "\n" + "userLimit = " + usersManager.getUserLimit() > + " [= configuredUserLimit ]" + "\n" + "userLimitFactor = " > + usersManager.getUserLimitFactor() > + " [= configuredUserLimitFactor ]" + "\n" + "maxApplications = > " > + maxApplications > + " [= configuredMaximumSystemApplicationsPerQueue or" > + " (int)(configuredMaximumSystemApplications * > absoluteCapacity)]" > + "\n" + "maxApplicationsPerUser = " + maxApplicationsPerUser > + " [= (int)(maxApplications * (userLimit / 100.0f) * " > + "userLimitFactor) ]" + "\n" > + "maxParallelApps = " + getMaxParallelApps() + "\n" > + "usedCapacity = " + > + queueCapacities.getUsedCapacity() + " [= usedResourcesMemory > / " > + "(clusterResourceMemory * absoluteCapacity)]" + "\n" > + "absoluteUsedCapacity = " + absoluteUsedCapacity > + " [= usedResourcesMemory / clusterResourceMemory]" + "\n" 
> + "maxAMResourcePerQueuePercent = " + > maxAMResourcePerQueuePercent > + " [= configuredMaximumAMResourcePercent ]" + "\n" > + "minimumAllocationFactor = " + minimumAllocationFactor > + " [= (float)(maximumAllocationMemory - > minimumAllocationMemory) / " > + "maximumAllocationMemory ]" + "\n" + "maximumAllocation = " > + maximumAllocation + " [= configuredMaxAllocation ]" + "\n" > + "numContainers = " + numContainers > + " [= currentNumContainers ]" + "\n" + "state = " + getState() > + " [= configuredState ]" +
[jira] [Commented] (YARN-10623) Capacity scheduler should support refresh queue automatically by a thread policy.
[ https://issues.apache.org/jira/browse/YARN-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290877#comment-17290877 ] Hadoop QA commented on YARN-10623: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 51s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 29s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 25s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 42s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 48s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 33s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 3s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/675/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant findbugs warnings. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 15s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 15s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 14s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 14s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 38s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/675/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 6 new + 184 unchanged - 0 fixed = 190 total (was 184) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} |
[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used
[ https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290868#comment-17290868 ] Hadoop QA commented on YARN-10532: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 36s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 30s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 14s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/676/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 278 unchanged - 1 fixed = 281 total (was 279) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 45s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {col
[jira] [Comment Edited] (YARN-10639) Queueinfo related capacity, should ajusted to weight mode.
[ https://issues.apache.org/jira/browse/YARN-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290815#comment-17290815 ] Qi Zhu edited comment on YARN-10639 at 2/25/21, 10:04 AM: -- Thanks [~shuzirra] for the review. I also think that painlessly adding a weight field to the queue info would be the better choice, but it is hard to add the weight field: the proto definition would have to change to carry it, and the other schedulers would also have to change their QueueInfo handling. was (Author: zhuqi): Thanks [~shuzirra] for the review. I also think that painlessly adding a weight field to the queue info would be the better choice; I will dig into it. > Queueinfo related capacity, should ajusted to weight mode. > -- > > Key: YARN-10639 > URL: https://issues.apache.org/jira/browse/YARN-10639 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10639.001.patch, YARN-10639.002.patch > > > {color:#172b4d}The QueueInfo capacity field should take weight mode into account.{color} > {color:#172b4d}Currently, when a client uses getQueueInfo to get the queue capacity in weight mode, it always returns 0, which is wrong.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
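As a rough illustration of why this is not a small change, adding weight to the QueueInfo record would mean new accessors on the abstract class, a new field in the protobuf definition behind it, and updates in every scheduler that builds QueueInfo. The sketch below is hypothetical; the method names and the negative-means-unset convention are assumptions, not part of any posted patch.

{code:java}
// Hypothetical additions to org.apache.hadoop.yarn.api.records.QueueInfo; not an actual patch.
// The PB implementation would also need a new optional float field in
// yarn_protos.proto (QueueInfoProto), which is why the proto and the other
// schedulers are affected as well.
public abstract class QueueInfoWeightSketch {

  /** Queue weight in weight mode; a negative value could mean "not configured". */
  public abstract float getWeight();

  public abstract void setWeight(float weight);
}
{code}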
[jira] [Commented] (YARN-10639) Queueinfo related capacity, should ajusted to weight mode.
[ https://issues.apache.org/jira/browse/YARN-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290815#comment-17290815 ] Qi Zhu commented on YARN-10639: --- Thanks [~shuzirra] for the review. I also think that painlessly adding a weight field to the queue info would be the better choice; I will dig into it. > Queueinfo related capacity, should ajusted to weight mode. > -- > > Key: YARN-10639 > URL: https://issues.apache.org/jira/browse/YARN-10639 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10639.001.patch, YARN-10639.002.patch > > > {color:#172b4d}The QueueInfo capacity field should take weight mode into account.{color} > {color:#172b4d}Currently, when a client uses getQueueInfo to get the queue capacity in weight mode, it always returns 0, which is wrong.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used
[ https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated YARN-10532: -- Attachment: YARN-10532.023.patch > Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is > not being used > > > Key: YARN-10532 > URL: https://issues.apache.org/jira/browse/YARN-10532 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10532.001.patch, YARN-10532.002.patch, > YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, > YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, > YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, > YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, > YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, > YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, > YARN-10532.021.patch, YARN-10532.022.patch, YARN-10532.023.patch, > image-2021-02-12-21-32-02-267.png > > > It's better if we can delete auto-created queues when they are not in use for > a period of time (like 5 mins). It will be helpful when we have a large > number of auto-created queues (e.g. from 500 users), but only a small subset > of queues are actively used. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used
[ https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290779#comment-17290779 ] Qi Zhu commented on YARN-10532: --- Fixed checkstyle in the latest patch. :D > Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is > not being used > > > Key: YARN-10532 > URL: https://issues.apache.org/jira/browse/YARN-10532 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10532.001.patch, YARN-10532.002.patch, > YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, > YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, > YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, > YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch, > YARN-10532.015.patch, YARN-10532.016.patch, YARN-10532.017.patch, > YARN-10532.018.patch, YARN-10532.019.patch, YARN-10532.020.patch, > YARN-10532.021.patch, YARN-10532.022.patch, YARN-10532.023.patch, > image-2021-02-12-21-32-02-267.png > > > It's better if we can delete auto-created queues when they are not in use for > a period of time (like 5 mins). It will be helpful when we have a large > number of auto-created queues (e.g. from 500 users), but only a small subset > of queues are actively used. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10623) Capacity scheduler should support refresh queue automatically by a thread policy.
[ https://issues.apache.org/jira/browse/YARN-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290766#comment-17290766 ] Qi Zhu commented on YARN-10623: --- Thanks a lot [~shuzirra] [~pbacsko] for the thorough review and valid suggestions: # In the latest patch I changed it to lastReloadAttempt, which records the time of both the successful and the failed reload case, as suggested by [~shuzirra]. # Addressed the related suggestions from [~pbacsko]. # Replaced the thread sleep with GenericTestUtils.waitFor() for more robust test code, as suggested by [~pbacsko]. Do you have any other suggestions? > Capacity scheduler should support refresh queue automatically by a thread > policy. > - > > Key: YARN-10623 > URL: https://issues.apache.org/jira/browse/YARN-10623 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10623.001.patch, YARN-10623.002.patch, > YARN-10623.003.patch, YARN-10623.004.patch > > > In the fair scheduler it is supported to refresh queue-related configuration automatically by a reload thread, but in the capacity scheduler we only support refreshing queue-related changes via refreshQueues; this is needed for our cluster to manage its queues. > cc [~wangda] [~ztang] [~pbacsko] [~snemeth] [~gandras] [~bteke] [~shuzirra] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
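For reference, replacing a fixed sleep with a polling wait typically looks like the snippet below. This is an illustrative test fragment, not code from the patch; the refreshed flag is a hypothetical stand-in for whatever condition the test actually checks.

{code:java}
// Illustrative test fragment: poll for a condition instead of sleeping a fixed time.
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.test.GenericTestUtils;

public class AutoRefreshWaitSketch {
  public static void waitForRefresh(final AtomicBoolean refreshed) throws Exception {
    // Instead of Thread.sleep(...), poll every 100 ms for up to 10 seconds.
    GenericTestUtils.waitFor(() -> refreshed.get(), 100, 10000);
  }
}
{code}

waitFor() throws a TimeoutException if the condition is never met, which gives a clear failure instead of a flaky assertion after an arbitrary sleep.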
[jira] [Updated] (YARN-10623) Capacity scheduler should support refresh queue automatically by a thread policy.
[ https://issues.apache.org/jira/browse/YARN-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated YARN-10623: -- Attachment: YARN-10623.004.patch > Capacity scheduler should support refresh queue automatically by a thread > policy. > - > > Key: YARN-10623 > URL: https://issues.apache.org/jira/browse/YARN-10623 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: YARN-10623.001.patch, YARN-10623.002.patch, > YARN-10623.003.patch, YARN-10623.004.patch > > > In the fair scheduler it is supported to refresh queue-related configuration automatically by a reload thread, but in the capacity scheduler we only support refreshing queue-related changes via refreshQueues; this is needed for our cluster to manage its queues. > cc [~wangda] [~ztang] [~pbacsko] [~snemeth] [~gandras] [~bteke] [~shuzirra] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
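To illustrate the idea in the description, an auto-refresh monitor could periodically compare the configuration file's modification time with the last reload attempt and trigger a queue refresh when the file has changed. The sketch below is a simplified, hypothetical illustration; the class and field names are assumptions, not the actual YARN-10623 implementation.

{code:java}
// Simplified, hypothetical sketch of a periodic capacity-scheduler refresh check;
// not the actual YARN-10623 code.
import java.io.File;

public class AutoRefreshSketch {

  // Updated on both successful and failed reloads, per the review discussion.
  private long lastReloadAttempt;
  private final long monitorIntervalMs = 60_000L;

  /** Called periodically by a monitor thread. */
  public void editSchedule(File schedulerConfFile) {
    long lastModified = schedulerConfFile.lastModified();
    long now = System.currentTimeMillis();
    // Reload only if the file changed since the last attempt and enough time has passed.
    if (lastModified > lastReloadAttempt && now - lastReloadAttempt > monitorIntervalMs) {
      lastReloadAttempt = now;
      try {
        refreshQueues();   // in the RM this would go through the admin service
      } catch (Exception e) {
        // Swallow and retry on the next cycle; lastReloadAttempt stays updated on failure.
      }
    }
  }

  private void refreshQueues() throws Exception {
    // placeholder for the actual refresh call
  }
}
{code}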