[jira] [Commented] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248297#comment-17248297
 ] 

zhuqi commented on YARN-10531:
--

[~wangda]  [~pbacsko] [~tangzhankun]

I have submitted a patch to disable the user limit factor for CapacityScheduler leaf queues.

Could you please review it?

Thanks.

> Be able to disable user limit factor for CapacityScheduler Leaf Queue
> -
>
> Key: YARN-10531
> URL: https://issues.apache.org/jira/browse/YARN-10531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Major
> Attachments: YARN-10531.001.patch
>
>
> The user limit factor defines the maximum cap on how much resource can be 
> consumed by a single user. 
> In the Auto Queue Creation context, it doesn't make much sense to set a user 
> limit factor: initially every queue has its weight set to 1.0, and we want a 
> user to be able to consume more resource when possible. It is hard to 
> pre-determine how to set the user limit factor. So it makes more sense to 
> add a new value (like -1) to indicate that the user limit factor is 
> disabled. 
> The logic that needs to change is below: 
> (Inside LeafQueue.java)
> {code}
> Resource maxUserLimit = Resources.none();
> if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
>       getUserLimitFactor());
> } else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = partitionResource;
> }
> {code}






[jira] [Comment Edited] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248297#comment-17248297
 ] 

zhuqi edited comment on YARN-10531 at 12/12/20, 6:14 AM:
-

[~wangda]  [~pbacsko] [~tangzhankun]

I have submitted a patch to support disabling the user limit factor for 
CapacityScheduler leaf queues.

Could you please review it?

Thanks.


was (Author: zhuqi):
[~wangda]  [~pbacsko] [~tangzhankun]

I have submitted a patch to disable the user limit factor for CapacityScheduler leaf queues.

Could you please review it?

Thanks.

> Be able to disable user limit factor for CapacityScheduler Leaf Queue
> -
>
> Key: YARN-10531
> URL: https://issues.apache.org/jira/browse/YARN-10531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Major
> Attachments: YARN-10531.001.patch
>
>
> The user limit factor defines the maximum cap on how much resource can be 
> consumed by a single user. 
> In the Auto Queue Creation context, it doesn't make much sense to set a user 
> limit factor: initially every queue has its weight set to 1.0, and we want a 
> user to be able to consume more resource when possible. It is hard to 
> pre-determine how to set the user limit factor. So it makes more sense to 
> add a new value (like -1) to indicate that the user limit factor is 
> disabled. 
> The logic that needs to change is below: 
> (Inside LeafQueue.java)
> {code}
> Resource maxUserLimit = Resources.none();
> if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
>       getUserLimitFactor());
> } else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = partitionResource;
> }
> {code}






[jira] [Commented] (YARN-10506) Update queue creation logic to use weight mode and allow the flexible static/dynamic creation

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248263#comment-17248263
 ] 

Hadoop QA commented on YARN-10506:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 0s | | Docker mode activated. |
| -1 | patch | 0m 10s | | YARN-10506 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-10506 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13016968/YARN-10506.001.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/382/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Update queue creation logic to use weight mode and allow the flexible 
> static/dynamic creation
> -
>
> Key: YARN-10506
> URL: https://issues.apache.org/jira/browse/YARN-10506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Priority: Major
> Attachments: YARN-10506.001.patch
>
>
> The queue creation logic should be updated to use weight mode and to support 
> flexible static/dynamic queue creation. 






[jira] [Commented] (YARN-10506) Update queue creation logic to use weight mode and allow the flexible static/dynamic creation

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248261#comment-17248261
 ] 

Wangda Tan commented on YARN-10506:
---

Uploaded a patch, which is based on YARN-10504. It does the following: 

- Handles creation of leaf and parent queues. 
- Adds a control parameter to ApplicationPlacementContext. 
- Handles dynamically adjusting queue weights. 

Partially complete: 
- Handling conversion of a dynamic queue to a static queue (still seeing some 
test failures). 

Not started: 
- Integration with the Queue Placement Policy and related tests. 

Unit tests cover part of the logic; for details see 
TestCapacitySchedulerNewQueueAutoCreation.

> Update queue creation logic to use weight mode and allow the flexible 
> static/dynamic creation
> -
>
> Key: YARN-10506
> URL: https://issues.apache.org/jira/browse/YARN-10506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Priority: Major
> Attachments: YARN-10506.001.patch
>
>
> The queue creation logic should be updated to use weight mode and to support 
> flexible static/dynamic queue creation. 






[jira] [Assigned] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread zhuqi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuqi reassigned YARN-10531:


Assignee: zhuqi

> Be able to disable user limit factor for CapacityScheduler Leaf Queue
> -
>
> Key: YARN-10531
> URL: https://issues.apache.org/jira/browse/YARN-10531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Major
>
> The user limit factor defines the maximum cap on how much resource can be 
> consumed by a single user. 
> In the Auto Queue Creation context, it doesn't make much sense to set a user 
> limit factor: initially every queue has its weight set to 1.0, and we want a 
> user to be able to consume more resource when possible. It is hard to 
> pre-determine how to set the user limit factor. So it makes more sense to 
> add a new value (like -1) to indicate that the user limit factor is 
> disabled. 
> The logic that needs to change is below: 
> (Inside LeafQueue.java)
> {code}
> Resource maxUserLimit = Resources.none();
> if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
>       getUserLimitFactor());
> } else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = partitionResource;
> }
> {code}






[jira] [Commented] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248260#comment-17248260
 ] 

zhuqi commented on YARN-10531:
--

[~wangda]

I'm glad to take this.

Thanks.

> Be able to disable user limit factor for CapacityScheduler Leaf Queue
> -
>
> Key: YARN-10531
> URL: https://issues.apache.org/jira/browse/YARN-10531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> The user limit factor defines the maximum cap on how much resource can be 
> consumed by a single user. 
> In the Auto Queue Creation context, it doesn't make much sense to set a user 
> limit factor: initially every queue has its weight set to 1.0, and we want a 
> user to be able to consume more resource when possible. It is hard to 
> pre-determine how to set the user limit factor. So it makes more sense to 
> add a new value (like -1) to indicate that the user limit factor is 
> disabled. 
> The logic that needs to change is below: 
> (Inside LeafQueue.java)
> {code}
> Resource maxUserLimit = Resources.none();
> if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
>       getUserLimitFactor());
> } else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = partitionResource;
> }
> {code}






[jira] [Updated] (YARN-10506) Update queue creation logic to use weight mode and allow the flexible static/dynamic creation

2020-12-11 Thread Wangda Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-10506:
--
Attachment: YARN-10506.001.patch

> Update queue creation logic to use weight mode and allow the flexible 
> static/dynamic creation
> -
>
> Key: YARN-10506
> URL: https://issues.apache.org/jira/browse/YARN-10506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Priority: Major
> Attachments: YARN-10506.001.patch
>
>
> The queue creation logic should be updated to use weight mode and to support 
> flexible static/dynamic queue creation. 






[jira] [Commented] (YARN-10504) Implement weight mode in Capacity Scheduler

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248194#comment-17248194
 ] 

Hadoop QA commented on YARN-10504:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 42s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 1s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 19m 48s | | trunk passed |
| +1 | compile | 1m 1s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 0m 52s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | checkstyle | 0m 55s | | trunk passed |
| +1 | mvnsite | 0m 55s | | trunk passed |
| +1 | shadedclient | 16m 58s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| 0 | spotbugs | 1m 46s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 44s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 0m 50s | | the patch passed |
| +1 | compile | 0m 52s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 0m 52s | | the patch passed |
| +1 | compile | 0m 44s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 0m 44s | | the patch passed |
| -0 | checkstyle | 0m 49s | https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/381/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 16 new + 743 unchanged - 13 fixed = 759 total (was 756) |
| +1 | mvnsite | 0m 47s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 5s | | patch has no errors when building and testing our client artifacts. |

[jira] [Commented] (YARN-10494) CLI tool for docker-to-squashfs conversion (pure Java)

2020-12-11 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248173#comment-17248173
 ] 

Jim Brennan commented on YARN-10494:


[~ccondit], [~ebadger] I am OK with including this for now. 

> CLI tool for docker-to-squashfs conversion (pure Java)
> --
>
> Key: YARN-10494
> URL: https://issues.apache.org/jira/browse/YARN-10494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Major
>  Labels: pull-request-available
> Attachments: YARN-10494.001.patch, 
> docker-to-squashfs-conversion-tool-design.pdf
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> *YARN-9564* defines a docker-to-squashfs image conversion tool that relies on 
> python2, multiple libraries, squashfs-tools and root access in order to 
> convert Docker images to squashfs images for use with the runc container 
> runtime in YARN.
> *YARN-9943* was created to investigate alternatives, as the response to 
> merging YARN-9564 has not been very positive. This proposal outlines the 
> design for a CLI conversion tool in 100% pure Java that will work out of the 
> box.






[jira] [Created] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2020-12-11 Thread Wangda Tan (Jira)
Wangda Tan created YARN-10532:
-

 Summary: Capacity Scheduler Auto Queue Creation: Allow auto delete 
queue when queue is not being used
 Key: YARN-10532
 URL: https://issues.apache.org/jira/browse/YARN-10532
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan


It would be better if we could delete auto-created queues when they have not 
been in use for a period of time (like 5 minutes). This will be helpful when we 
have a large number of auto-created queues (e.g. from 500 users) but only a 
small subset of them is actively used.
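
As a very rough sketch of the idea (every method name below is an assumption 
for illustration, not a committed design):

{code:java}
// Hypothetical check, run periodically by the scheduler: an auto-created
// (dynamic) queue that has no applications and has seen no submission for
// longer than the configured expiry (e.g. 5 minutes) becomes a candidate
// for removal.
boolean isRemovableQueue(CSQueue queue, long nowMs, long expiryMs) {
  return queue.isDynamicQueue()            // created at runtime, not in the config
      && queue.getNumApplications() == 0   // nothing running or pending
      && nowMs - queue.getLastSubmissionTime() > expiryMs;
}
{code}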






[jira] [Commented] (YARN-10526) RMAppManager CS Placement ignores parent path

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248161#comment-17248161
 ] 

Hadoop QA commented on YARN-10526:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 25s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 23m 57s | | trunk passed |
| +1 | compile | 1m 5s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 0m 55s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | checkstyle | 0m 42s | | trunk passed |
| +1 | mvnsite | 0m 59s | | trunk passed |
| +1 | shadedclient | 19m 0s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 44s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 39s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| 0 | spotbugs | 2m 6s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 3s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 1m 0s | | the patch passed |
| +1 | compile | 1m 6s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 1m 6s | | the patch passed |
| +1 | compile | 0m 51s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 0m 51s | | the patch passed |
| +1 | checkstyle | 0m 37s | | the patch passed |
| +1 | mvnsite | 0m 55s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 17m 20s | | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 40s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 34s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |

[jira] [Commented] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248149#comment-17248149
 ] 

Wangda Tan commented on YARN-10531:
---

[~zhuqi], do you want to give this one a try? 

Thanks, 

> Be able to disable user limit factor for CapacityScheduler Leaf Queue
> -
>
> Key: YARN-10531
> URL: https://issues.apache.org/jira/browse/YARN-10531
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> The user limit factor defines the maximum cap on how much resource can be 
> consumed by a single user. 
> In the Auto Queue Creation context, it doesn't make much sense to set a user 
> limit factor: initially every queue has its weight set to 1.0, and we want a 
> user to be able to consume more resource when possible. It is hard to 
> pre-determine how to set the user limit factor. So it makes more sense to 
> add a new value (like -1) to indicate that the user limit factor is 
> disabled. 
> The logic that needs to change is below: 
> (Inside LeafQueue.java)
> {code}
> Resource maxUserLimit = Resources.none();
> if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
>       getUserLimitFactor());
> } else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
>   maxUserLimit = partitionResource;
> }
> {code}






[jira] [Created] (YARN-10531) Be able to disable user limit factor for CapacityScheduler Leaf Queue

2020-12-11 Thread Wangda Tan (Jira)
Wangda Tan created YARN-10531:
-

 Summary: Be able to disable user limit factor for 
CapacityScheduler Leaf Queue
 Key: YARN-10531
 URL: https://issues.apache.org/jira/browse/YARN-10531
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan


The user limit factor defines the maximum cap on how much resource can be 
consumed by a single user. 

In the Auto Queue Creation context, it doesn't make much sense to set a user 
limit factor: initially every queue has its weight set to 1.0, and we want a 
user to be able to consume more resource when possible. It is hard to 
pre-determine how to set the user limit factor. So it makes more sense to add a 
new value (like -1) to indicate that the user limit factor is disabled. 

The logic that needs to change is below: 

(Inside LeafQueue.java)

{code}
Resource maxUserLimit = Resources.none();
if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
  maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
      getUserLimitFactor());
} else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
  maxUserLimit = partitionResource;
}
{code}
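
For illustration, a minimal sketch of how the disable value could be honored in 
the snippet above (an assumption about the approach, not the attached patch):

{code:java}
// Hypothetical sketch: a user-limit-factor of -1 means "no per-user cap",
// so fall back to the partition resource instead of queueCapacity * factor.
Resource maxUserLimit = Resources.none();
if (getUserLimitFactor() == -1.0f) {
  maxUserLimit = partitionResource;
} else if (schedulingMode == SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY) {
  maxUserLimit = Resources.multiplyAndRoundDown(queueCapacity,
      getUserLimitFactor());
} else if (schedulingMode == SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY) {
  maxUserLimit = partitionResource;
}
{code}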







[jira] [Updated] (YARN-10504) Implement weight mode in Capacity Scheduler

2020-12-11 Thread Wangda Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-10504:
--
Attachment: YARN-10504.ver-3.patch

> Implement weight mode in Capacity Scheduler
> ---
>
> Key: YARN-10504
> URL: https://issues.apache.org/jira/browse/YARN-10504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Attachments: YARN-10504.001.patch, YARN-10504.ver-1.patch, 
> YARN-10504.ver-2.patch, YARN-10504.ver-3.patch
>
>
> To allow queues to be created flexibly in Capacity Scheduler, a weight mode 
> should be introduced. The existing {{capacity}} property should be reused 
> with a different syntax, e.g.:
> root.users.capacity = (1.0) or ~1.0 or ^1.0 or @1.0
> root.users.capacity = 1.0w
> root.users.capacity = w:1.0
> Weight support should not impact the existing functionality.
>  
> The new functionality should: 
>  * accept and validate the new weight values
>  * enforce a single mode on the whole queue tree
>  * (re)calculate the relative (percentage-based) capacities based on the 
> weights at launch and every time the queue structure changes
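
As a rough illustration of the weight syntax above, a hedged sketch of parsing 
the {{1.0w}} form (the helper name and the -1 fallback are assumptions, not 
part of the actual patch):

{code:java}
// Hypothetical helper: return the weight if the capacity string uses the
// "1.0w" suffix form, or -1 to signal that the queue is not in weight mode.
static float parseWeightCapacity(String value) {
  if (value == null) {
    return -1f;
  }
  String v = value.trim();
  if (v.endsWith("w")) {
    return Float.parseFloat(v.substring(0, v.length() - 1));
  }
  return -1f;
}
{code}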






[jira] [Commented] (YARN-10504) Implement weight mode in Capacity Scheduler

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248114#comment-17248114
 ] 

Wangda Tan commented on YARN-10504:
---

[~zhuqi], thank you so much for your review. [~bteke] will take over the work 
from me, so [~bteke], can you continue working with [~zhuqi] to address the 
comments? 

I just uploaded the ver.3 patch, which fixes a potential deadlock in 
AutoCreatedLeafQueue. 

> Implement weight mode in Capacity Scheduler
> ---
>
> Key: YARN-10504
> URL: https://issues.apache.org/jira/browse/YARN-10504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Attachments: YARN-10504.001.patch, YARN-10504.ver-1.patch, 
> YARN-10504.ver-2.patch
>
>
> To allow queues to be created flexibly in Capacity Scheduler, a weight mode 
> should be introduced. The existing {{capacity}} property should be reused 
> with a different syntax, e.g.:
> root.users.capacity = (1.0) or ~1.0 or ^1.0 or @1.0
> root.users.capacity = 1.0w
> root.users.capacity = w:1.0
> Weight support should not impact the existing functionality.
>  
> The new functionality should: 
>  * accept and validate the new weight values
>  * enforce a single mode on the whole queue tree
>  * (re)calculate the relative (percentage-based) capacities based on the 
> weights at launch and every time the queue structure changes






[jira] [Commented] (YARN-10530) CapacityScheduler ResourceLimits doesn't handle node partition well

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248087#comment-17248087
 ] 

Wangda Tan commented on YARN-10530:
---

I haven't written any UT yet, but I just wanted to file the ticket to make sure 
we take a closer look, because the logic looks confusing. I will be delighted 
if this turns out to be a false alarm :) 

> CapacityScheduler ResourceLimits doesn't handle node partition well
> ---
>
> Key: YARN-10530
> URL: https://issues.apache.org/jira/browse/YARN-10530
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Reporter: Wangda Tan
>Priority: Blocker
>
> This is a serious bug that may impact all releases. I need to do further 
> checking, but I want to log the JIRA so we will not forget:  
> ResourceLimits objects serve two purposes: 
> 1) When there is a cluster resource change (for example, a new node is added 
> or the scheduler config is reinitialized), we pass ResourceLimits down to 
> the queues via updateClusterResource. 
> 2) When allocating a container, we try to pass the parent's available 
> resource to the child to make sure the child's resource allocation won't 
> violate the parent's max resource. For example: 
> {code}
> queue      used  max
> --------------------
> root       10    20
> root.a      8    10
> root.a.a1   2    10
> root.a.a2   6    10
> {code}
> Even though a.a1 has 8 resources of headroom (a1.max - a1.used), we can 
> allocate at most 2 resources to a1, because root.a's limit is hit first. 
> This information is passed down from parent queue to child queue during the 
> assignContainers call via ResourceLimits. 
> However, we only pass one ResourceLimits from the top. For queue 
> initialization, we passed in: 
> {code}
> root.updateClusterResource(clusterResource, new ResourceLimits(
>     clusterResource));
> {code}
> And when we update cluster resource, we only considered the default 
> partition: 
> {code}
> // Update all children
> for (CSQueue childQueue : childQueues) {
>   // Get ResourceLimits of child queue before assign containers
>   ResourceLimits childLimits = getResourceLimitsOfChild(childQueue,
>       clusterResource, resourceLimits,
>       RMNodeLabelsManager.NO_LABEL, false);
>   childQueue.updateClusterResource(clusterResource, childLimits);
> }
> {code}
> The same goes for the allocation logic, where we passed in (actually, I 
> found I added a TODO item for this 5 years ago): 
> {code}
> // Try to use NON_EXCLUSIVE
> assignment = getRootQueue().assignContainers(getClusterResource(),
>     candidates,
>     // TODO, now we only consider limits for parent for non-labeled
>     // resources, should consider labeled resources as well.
>     new ResourceLimits(labelManager
>         .getResourceByLabel(RMNodeLabelsManager.NO_LABEL,
>             getClusterResource())),
>     SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY);
> {code} 
> The good thing is that in the assignContainers call, we calculate the child 
> limit based on the partition: 
> {code} 
> ResourceLimits childLimits =
>     getResourceLimitsOfChild(childQueue, cluster, limits,
>         candidates.getPartition(), true);
> {code} 
> So I think the problem is: when a named partition has more resources than 
> the default partition, the effective min/max resource of each queue could be 
> wrong.






[jira] [Commented] (YARN-10530) CapacityScheduler ResourceLimits doesn't handle node partition well

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248084#comment-17248084
 ] 

Wangda Tan commented on YARN-10530:
---

cc: [~sunilg], [~epayne]

> CapacityScheduler ResourceLimits doesn't handle node partition well
> ---
>
> Key: YARN-10530
> URL: https://issues.apache.org/jira/browse/YARN-10530
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Reporter: Wangda Tan
>Priority: Blocker
>
> This is a serious bug that may impact all releases. I need to do further 
> checking, but I want to log the JIRA so we will not forget:  
> ResourceLimits objects serve two purposes: 
> 1) When there is a cluster resource change (for example, a new node is added 
> or the scheduler config is reinitialized), we pass ResourceLimits down to 
> the queues via updateClusterResource. 
> 2) When allocating a container, we try to pass the parent's available 
> resource to the child to make sure the child's resource allocation won't 
> violate the parent's max resource. For example: 
> {code}
> queue      used  max
> --------------------
> root       10    20
> root.a      8    10
> root.a.a1   2    10
> root.a.a2   6    10
> {code}
> Even though a.a1 has 8 resources of headroom (a1.max - a1.used), we can 
> allocate at most 2 resources to a1, because root.a's limit is hit first. 
> This information is passed down from parent queue to child queue during the 
> assignContainers call via ResourceLimits. 
> However, we only pass one ResourceLimits from the top. For queue 
> initialization, we passed in: 
> {code}
> root.updateClusterResource(clusterResource, new ResourceLimits(
>     clusterResource));
> {code}
> And when we update cluster resource, we only considered the default 
> partition: 
> {code}
> // Update all children
> for (CSQueue childQueue : childQueues) {
>   // Get ResourceLimits of child queue before assign containers
>   ResourceLimits childLimits = getResourceLimitsOfChild(childQueue,
>       clusterResource, resourceLimits,
>       RMNodeLabelsManager.NO_LABEL, false);
>   childQueue.updateClusterResource(clusterResource, childLimits);
> }
> {code}
> The same goes for the allocation logic, where we passed in (actually, I 
> found I added a TODO item for this 5 years ago): 
> {code}
> // Try to use NON_EXCLUSIVE
> assignment = getRootQueue().assignContainers(getClusterResource(),
>     candidates,
>     // TODO, now we only consider limits for parent for non-labeled
>     // resources, should consider labeled resources as well.
>     new ResourceLimits(labelManager
>         .getResourceByLabel(RMNodeLabelsManager.NO_LABEL,
>             getClusterResource())),
>     SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY);
> {code} 
> The good thing is that in the assignContainers call, we calculate the child 
> limit based on the partition: 
> {code} 
> ResourceLimits childLimits =
>     getResourceLimitsOfChild(childQueue, cluster, limits,
>         candidates.getPartition(), true);
> {code} 
> So I think the problem is: when a named partition has more resources than 
> the default partition, the effective min/max resource of each queue could be 
> wrong.






[jira] [Updated] (YARN-10529) The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong

2020-12-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated YARN-10529:
--
Labels: pull-request-available  (was: )

> The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong
> 
>
> Key: YARN-10529
> URL: https://issues.apache.org/jira/browse/YARN-10529
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: akiyamaneko
>Priority: Major
>  Labels: pull-request-available
> Attachments: Graph View ApptemptID shows error.png, Grid View 
> ApptemptID shows Ok.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The *shortAppAttemptId* displayed on the page is computed by 
> *shortAppAttemptId()* => *containerIdToAttemptId()*:
> {code:javascript}
> shortAppAttemptId: function() {
>   if (!this.get("containerId")) {
>     return this.get("id");
>   }
>   return "attempt_" +
>       parseInt(Converter.containerIdToAttemptId(this.get("containerId")).split("_")[3]);
> }.property("containerId"),
> {code}
> {code:javascript}
> containerIdToAttemptId: function(containerId) {
>   if (containerId) {
>     // containerId example: container_e73_1605851303713_0005_01_000001
>     // this builds the attempt id from arr[3] ("0005", part of the app id),
>     // but the actual attempt id is the next segment ("01")
>     var arr = containerId.split('_');
>     var attemptId = ["appattempt", arr[1],
>         arr[2], this.padding(arr[3], 6)];
>     return attemptId.join('_');
>   }
> },
> {code}






[jira] [Updated] (YARN-10529) The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong

2020-12-11 Thread akiyamaneko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

akiyamaneko updated YARN-10529:
---
Attachment: Grid View ApptemptID shows Ok.png

> The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong
> 
>
> Key: YARN-10529
> URL: https://issues.apache.org/jira/browse/YARN-10529
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: akiyamaneko
>Priority: Major
> Attachments: Graph View ApptemptID shows error.png, Grid View 
> ApptemptID shows Ok.png
>
>
> The *shortAppAttemptId* displayed on the page is computed by 
> *shortAppAttemptId()* => *containerIdToAttemptId()*:
> {code:javascript}
> shortAppAttemptId: function() {
>   if (!this.get("containerId")) {
>     return this.get("id");
>   }
>   return "attempt_" +
>       parseInt(Converter.containerIdToAttemptId(this.get("containerId")).split("_")[3]);
> }.property("containerId"),
> {code}
> {code:javascript}
> containerIdToAttemptId: function(containerId) {
>   if (containerId) {
>     // containerId example: container_e73_1605851303713_0005_01_000001
>     // this builds the attempt id from arr[3] ("0005", part of the app id),
>     // but the actual attempt id is the next segment ("01")
>     var arr = containerId.split('_');
>     var attemptId = ["appattempt", arr[1],
>         arr[2], this.padding(arr[3], 6)];
>     return attemptId.join('_');
>   }
> },
> {code}






[jira] [Updated] (YARN-10529) The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong

2020-12-11 Thread akiyamaneko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

akiyamaneko updated YARN-10529:
---
Attachment: (was: Graph View ApptemptID shows ok.png)

> The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong
> 
>
> Key: YARN-10529
> URL: https://issues.apache.org/jira/browse/YARN-10529
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.1
>Reporter: akiyamaneko
>Priority: Major
> Attachments: Graph View ApptemptID shows error.png
>
>
> The *shortAppAttemptId* displayed on the page is computed by 
> *shortAppAttemptId()* => *containerIdToAttemptId()*:
> {code:javascript}
> shortAppAttemptId: function() {
>   if (!this.get("containerId")) {
>     return this.get("id");
>   }
>   return "attempt_" +
>       parseInt(Converter.containerIdToAttemptId(this.get("containerId")).split("_")[3]);
> }.property("containerId"),
> {code}
> {code:javascript}
> containerIdToAttemptId: function(containerId) {
>   if (containerId) {
>     // containerId example: container_e73_1605851303713_0005_01_000001
>     // this builds the attempt id from arr[3] ("0005", part of the app id),
>     // but the actual attempt id is the next segment ("01")
>     var arr = containerId.split('_');
>     var attemptId = ["appattempt", arr[1],
>         arr[2], this.padding(arr[3], 6)];
>     return attemptId.join('_');
>   }
> },
> {code}






[jira] [Created] (YARN-10529) The attempt id displayed on the graph-view tab in the YARN-UI2 page is wrong

2020-12-11 Thread akiyamaneko (Jira)
akiyamaneko created YARN-10529:
--

 Summary: The attempt id displayed on the graph-view tab in the 
YARN-UI2 page is wrong
 Key: YARN-10529
 URL: https://issues.apache.org/jira/browse/YARN-10529
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.1.1
Reporter: akiyamaneko
 Attachments: Graph View ApptemptID shows error.png, Graph View 
ApptemptID shows ok.png

The *shortAppAttemptId* displayed on the page is computed by 
*shortAppAttemptId()* => *containerIdToAttemptId()*:
{code:javascript}
shortAppAttemptId: function() {
  if (!this.get("containerId")) {
    return this.get("id");
  }
  return "attempt_" +
      parseInt(Converter.containerIdToAttemptId(this.get("containerId")).split("_")[3]);
}.property("containerId"),
{code}
{code:javascript}
containerIdToAttemptId: function(containerId) {
  if (containerId) {
    // containerId example: container_e73_1605851303713_0005_01_000001
    // this builds the attempt id from arr[3] ("0005", part of the app id),
    // but the actual attempt id is the next segment ("01")
    var arr = containerId.split('_');
    var attemptId = ["appattempt", arr[1],
        arr[2], this.padding(arr[3], 6)];
    return attemptId.join('_');
  }
},
{code}
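
One way the index shift could be handled, shown as a hedged Java sketch (the 
real converter is Ember JavaScript, and this is not the actual fix):

{code:java}
// container_e73_1605851303713_0005_01_000001
//   arr[1] = "e73" (optional epoch), arr[2] = cluster timestamp,
//   arr[3] = application id, arr[4] = attempt id
static String attemptIdOf(String containerId) {
  String[] arr = containerId.split("_");
  int base = arr[1].startsWith("e") ? 2 : 1; // skip the epoch segment if present
  return arr[base + 2]; // "01" for the example above
}
{code}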






[jira] [Created] (YARN-10530) CapacityScheduler ResourceLimits doesn't handle node partition well

2020-12-11 Thread Wangda Tan (Jira)
Wangda Tan created YARN-10530:
-

 Summary: CapacityScheduler ResourceLimits doesn't handle node 
partition well
 Key: YARN-10530
 URL: https://issues.apache.org/jira/browse/YARN-10530
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, capacityscheduler
Reporter: Wangda Tan


This is a serious bug that may impact all releases. I need to do further 
checking, but I want to log the JIRA so we will not forget:  

ResourceLimits objects serve two purposes: 

1) When there is a cluster resource change (for example, a new node is added or 
the scheduler config is reinitialized), we pass ResourceLimits down to the 
queues via updateClusterResource. 

2) When allocating a container, we try to pass the parent's available resource 
to the child to make sure the child's resource allocation won't violate the 
parent's max resource. For example: 

{code}
queue      used  max
--------------------
root       10    20
root.a      8    10
root.a.a1   2    10
root.a.a2   6    10
{code}

Even though a.a1 has 8 resources of headroom (a1.max - a1.used), we can 
allocate at most 2 resources to a1, because root.a's limit is hit first. This 
information is passed down from parent queue to child queue during the 
assignContainers call via ResourceLimits. 

However, we only pass one ResourceLimits from the top. For queue 
initialization, we passed in: 

{code}
root.updateClusterResource(clusterResource, new ResourceLimits(
    clusterResource));
{code}

And when we update cluster resource, we only considered the default partition: 

{code}
// Update all children
for (CSQueue childQueue : childQueues) {
  // Get ResourceLimits of child queue before assign containers
  ResourceLimits childLimits = getResourceLimitsOfChild(childQueue,
      clusterResource, resourceLimits,
      RMNodeLabelsManager.NO_LABEL, false);
  childQueue.updateClusterResource(clusterResource, childLimits);
}
{code}

The same goes for the allocation logic, where we passed in (actually, I found I 
added a TODO item for this 5 years ago): 

{code}
// Try to use NON_EXCLUSIVE
assignment = getRootQueue().assignContainers(getClusterResource(),
    candidates,
    // TODO, now we only consider limits for parent for non-labeled
    // resources, should consider labeled resources as well.
    new ResourceLimits(labelManager
        .getResourceByLabel(RMNodeLabelsManager.NO_LABEL,
            getClusterResource())),
    SchedulingMode.IGNORE_PARTITION_EXCLUSIVITY);
{code} 

The good thing is that in the assignContainers call, we calculate the child 
limit based on the partition: 
{code} 
ResourceLimits childLimits =
    getResourceLimitsOfChild(childQueue, cluster, limits,
        candidates.getPartition(), true);
{code} 

So I think the problem is: when a named partition has more resources than the 
default partition, the effective min/max resource of each queue could be wrong.
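
To make the headroom example above concrete, here is the arithmetic spelled out 
(illustrative values only):

{code:java}
// Effective limit for root.a.a1 in the table above:
int a1Headroom     = 10 - 2;  // a1.max - a1.used = 8
int parentHeadroom = 10 - 8;  // a.max  - a.used  = 2
int effectiveLimit = Math.min(a1Headroom, parentHeadroom);  // = 2
// This parent headroom is what ResourceLimits should carry down the queue
// tree, per partition, during assignContainers.
{code}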






[jira] [Updated] (YARN-10526) RMAppManager CS Placement ignores parent path

2020-12-11 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-10526:
--
Attachment: YARN-10526.003.patch

> RMAppManager CS Placement ignores parent path
> -
>
> Key: YARN-10526
> URL: https://issues.apache.org/jira/browse/YARN-10526
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10526.001.patch, YARN-10526.002.patch, 
> YARN-10526.003.patch
>
>
> When RMAppManager creates the RMApp object using the placementContext's 
> results, it only uses the getQueue method, which in the case of 
> CapacityScheduler returns only the name of the leaf queue.
> If a queue exists with this name, the application is placed into that queue. 
> If the queue does not exist, CS takes the parent path into consideration 
> during auto queue creation; however, this only happens when there is no 
> queue with the leaf name.
>  
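
A hedged sketch of the direction a fix could take, composing the full path from 
the placement context instead of the bare leaf name (the exact method usage is 
an assumption, not the attached patch):

{code:java}
// Hypothetical: prefer the full parent.leaf path when the placement context
// carries a parent, so an unrelated existing leaf queue with the same short
// name does not capture the application.
String targetQueue = placementContext.getParentQueue() != null
    ? placementContext.getParentQueue() + "." + placementContext.getQueue()
    : placementContext.getQueue();
{code}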






[jira] [Commented] (YARN-10506) Update queue creation logic to use weight mode and allow the flexible static/dynamic creation

2020-12-11 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248048#comment-17248048
 ] 

Wangda Tan commented on YARN-10506:
---

I'm looking at a PoC of the patch now and will keep the JIRA updated in a day 
or two. 

> Update queue creation logic to use weight mode and allow the flexible 
> static/dynamic creation
> -
>
> Key: YARN-10506
> URL: https://issues.apache.org/jira/browse/YARN-10506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Priority: Major
>
> The queue creation logic should be updated to use weight mode and to support 
> flexible static/dynamic queue creation. 






[jira] [Commented] (YARN-10526) RMAppManager CS Placement ignores parent path

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247980#comment-17247980
 ] 

Hadoop QA commented on YARN-10526:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 42s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 19m 59s | | trunk passed |
| +1 | compile | 0m 58s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 0m 52s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | checkstyle | 0m 41s | | trunk passed |
| +1 | mvnsite | 0m 56s | | trunk passed |
| +1 | shadedclient | 17m 1s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 39s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| 0 | spotbugs | 1m 46s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 44s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 0m 49s | | the patch passed |
| +1 | compile | 0m 51s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 0m 51s | | the patch passed |
| +1 | compile | 0m 44s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 0m 44s | | the patch passed |
| +1 | checkstyle | 0m 34s | | the patch passed |
| +1 | mvnsite | 0m 48s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 47s | | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 39s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |

[jira] [Issue Comment Deleted] (YARN-10169) Mixed absolute resource value and percentage-based resource value in CapacityScheduler should fail

2020-12-11 Thread zhuqi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuqi updated YARN-10169:
-
Comment: was deleted

(was: [~leftnoteasy]

I looked into the source code and tested locally; we allow the scheduler max 
capacity to be configured like:

a.max (absolute), a1.max (percentage), a2.max (absolute), a2_1.max (percentage).

We calculate a percentage below an absolute value when updating absolute 
resources from the parent:
{code:java}
float maxCapacity = queueCapacities.getMaximumCapacity(label);
if (maxCapacity > 0f) {
  queueCapacities.setAbsoluteMaximumCapacity(label, maxCapacity * (
      parentQueueCapacities == null ?
          1 :
          parentQueueCapacities.getAbsoluteMaximumCapacity(label)));
}
{code}
When a2.max (absolute) is updated to AbsoluteMaximumCapacity, a2_1.max (a 
percentage below an absolute value) will be calculated and updated for its 
children.

We also need to add documentation for this configuration.

Thanks.)

> Mixed absolute resource value and percentage-based resource value in 
> CapacityScheduler should fail
> --
>
> Key: YARN-10169
> URL: https://issues.apache.org/jira/browse/YARN-10169
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Blocker
> Attachments: YARN-10169.001.patch, YARN-10169.002.patch, 
> YARN-10169.003.patch
>
>
> To me this is a bug: if a queue has capacity set to a float (percentage) and 
> maximum-capacity set to an absolute value, the existing logic allows it.
> For example:
> {code:java}
> queue.capacity = 0.8 
> queue.maximum-capacity = [mem=x, vcore=y] {code}
> We should throw an exception when it is configured like this.






[jira] [Commented] (YARN-10169) Mixed absolute resource value and percentage-based resource value in CapacityScheduler should fail

2020-12-11 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247954#comment-17247954
 ] 

zhuqi commented on YARN-10169:
--

[~leftnoteasy]

I looked into the source code and tested locally: we allow scheduler max 
capacities such as

a.max (absolute), a1.max (percentage), a2.max (absolute), a2_1.max (percentage).

We calculate a percentage below an absolute value when updating the absolute 
resources from the parent:
{code:java}
// A child's percentage maximum capacity is resolved against the
// parent's absolute maximum capacity.
float maxCapacity = queueCapacities.getMaximumCapacity(label);
if (maxCapacity > 0f) {
  queueCapacities.setAbsoluteMaximumCapacity(label, maxCapacity
      * (parentQueueCapacities == null
          ? 1
          : parentQueueCapacities.getAbsoluteMaximumCapacity(label)));
}
{code}
When a2.max (absolute) is updated to AbsoluteMaximumCapacity, a2_1.max (a 
percentage below an absolute value) will be calculated and updated for the 
children.

We also need to add documentation for this configuration.

Thanks.
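
A hedged illustration of the mixed configuration described above, following 
capacity-scheduler.xml property conventions (the queue names mirror the 
example, with a1 and a2 under a and a2_1 under a2; the resource values are 
made up):
{code}
yarn.scheduler.capacity.root.a.maximum-capacity = [memory=16384,vcores=16]
yarn.scheduler.capacity.root.a.a1.maximum-capacity = 50
yarn.scheduler.capacity.root.a.a2.maximum-capacity = [memory=8192,vcores=8]
yarn.scheduler.capacity.root.a.a2.a2_1.maximum-capacity = 40
{code}
Here a2_1's 40% maximum would be resolved against a2's absolute maximum once 
the parent's AbsoluteMaximumCapacity has been updated.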

> Mixed absolute resource value and percentage-based resource value in 
> CapacityScheduler should fail
> --
>
> Key: YARN-10169
> URL: https://issues.apache.org/jira/browse/YARN-10169
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: zhuqi
>Priority: Blocker
> Attachments: YARN-10169.001.patch, YARN-10169.002.patch, 
> YARN-10169.003.patch
>
>
> To me this is a bug: if a queue has capacity set to a float (percentage) and 
> maximum-capacity set to an absolute value, the existing logic allows the behavior.
> For example:
> {code:java}
> queue.capacity = 0.8 
> queue.maximum-capacity = [mem=x, vcore=y] {code}
> We should throw an exception when a queue is configured like this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10528) maxAMShare should only be accepted for leaf queues, not parent queues

2020-12-11 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja updated YARN-10528:
---
Description: 
Based on [Hadoop 
documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
 it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
This is similar to the {{reservation}} setting.

However, existing code only ensures that the reservation setting is not 
accepted for "parent" queues (see 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L233)
 but it is missing the checks for {{maxAMShare}}. Due to this, it is currently 
possible to have an allocation similar to the one below:

{code}
<?xml version="1.0"?>
<allocations>
  <queue name="root">
    <weight>1.0</weight>
    <schedulingPolicy>drf</schedulingPolicy>
    <aclSubmitApps>*</aclSubmitApps>
    <aclAdministerApps>*</aclAdministerApps>
    <queue name="default">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
    </queue>
    <queue name="users" type="parent">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
      <maxAMShare>1.0</maxAMShare>
    </queue>
  </queue>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
  <queuePlacementPolicy>
    ...
  </queuePlacementPolicy>
</allocations>
{code}

where {{maxAMShare}} is 1.0f, meaning it is possible to allocate 100% of the 
queue's resources to Application Masters. Notice above that root.users is a 
parent queue; however, it still gladly accepts {{maxAMShare}}. This is contrary 
to the documentation and, in fact, very misleading, because child queues like 
root.users.<username> do not inherit this setting at all and still go on to 
use the default of 0.5 instead of 1.0; see the attached screenshot as an 
example.

  was:
Based on [Hadoop 
documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
 it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
This is similar to the {{reservation}} setting.

However, existing code only ensures that the reservation setting is not 
accepted for "parent" queues (see 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L233)
 but it is missing the checks for {{maxAMShare}}. Due to this, it is current 
possible to have an allocation similar to below:

{code}
<?xml version="1.0"?>
<allocations>
  <queue name="root">
    <weight>1.0</weight>
    <schedulingPolicy>drf</schedulingPolicy>
    <aclSubmitApps>*</aclSubmitApps>
    <aclAdministerApps>*</aclAdministerApps>
    <queue name="default">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
    </queue>
    <queue name="users" type="parent">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
      <maxAMShare>1.0</maxAMShare>
    </queue>
  </queue>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
  <queuePlacementPolicy>
    ...
  </queuePlacementPolicy>
</allocations>
{code}

where {{maxAMShare}} is 1.0f meaning, it is possible allocate 100% of the 
queue's resources for Application Masters. Notice above that root.users is a 
parent queue, however, it still gladly accepts {{maxAMShare}}. This is contrary 
to the documentation and in fact, it is very misleading because the child 
queues like root.users. actually do not inherit this setting at all and 
they still go on and use the default of 0.5 instead of 1.0, see the attached 
screenshot as an example.


> maxAMShare should only be accepted for leaf queues, not parent queues
> -
>
> Key: YARN-10528
> URL: https://issues.apache.org/jira/browse/YARN-10528
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Major
> Attachments: maxAMShare for root.users (parent queue) has no effect 
> as child queue does not inherit it.png
>
>
> Based on [Hadoop 
> documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
>  it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
> This is similar to the {{reservation}} setting.
> However, existing code only ensures that the reservation setting is not 
> accepted for "parent" queues (see 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
>  and 
> 

[jira] [Updated] (YARN-10528) maxAMShare should only be accepted for leaf queues, not parent queues

2020-12-11 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja updated YARN-10528:
---
Attachment: maxAMShare for root.users (parent queue) has no effect as child 
queue does not inherit it.png

> maxAMShare should only be accepted for leaf queues, not parent queues
> -
>
> Key: YARN-10528
> URL: https://issues.apache.org/jira/browse/YARN-10528
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Major
> Attachments: maxAMShare for root.users (parent queue) has no effect 
> as child queue does not inherit it.png
>
>
> Based on [Hadoop 
> documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
>  it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
> This is similar to the {{reservation}} setting.
> However, existing code only ensures that the reservation setting is not 
> accepted for "parent" queues (see 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L233)
>  but it is missing the checks for {{maxAMShare}}. Due to this, it is currently 
> possible to have an allocation similar to the one below:
> {code}
> <?xml version="1.0"?>
> <allocations>
>   <queue name="root">
>     <weight>1.0</weight>
>     <schedulingPolicy>drf</schedulingPolicy>
>     <aclSubmitApps>*</aclSubmitApps>
>     <aclAdministerApps>*</aclAdministerApps>
>     <queue name="default">
>       <weight>1.0</weight>
>       <schedulingPolicy>drf</schedulingPolicy>
>     </queue>
>     <queue name="users" type="parent">
>       <weight>1.0</weight>
>       <schedulingPolicy>drf</schedulingPolicy>
>       <maxAMShare>1.0</maxAMShare>
>     </queue>
>   </queue>
>   <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
>   <queuePlacementPolicy>
>     ...
>   </queuePlacementPolicy>
> </allocations>
> {code}
> where {{maxAMShare}} is 1.0f, meaning it is possible to allocate 100% of the 
> queue's resources to Application Masters. Notice above that root.users is a 
> parent queue; however, it still gladly accepts {{maxAMShare}}. This is 
> contrary to the documentation and, in fact, very misleading, because child 
> queues like root.users.<username> do not inherit this setting at all and 
> still go on to use the default of 0.5 instead of 1.0; see the attached 
> screenshot as an example.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10528) maxAMShare should only be accepted for leaf queues, not parent queues

2020-12-11 Thread Siddharth Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja reassigned YARN-10528:
--

Assignee: Siddharth Ahuja

> maxAMShare should only be accepted for leaf queues, not parent queues
> -
>
> Key: YARN-10528
> URL: https://issues.apache.org/jira/browse/YARN-10528
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
>Priority: Major
>
> Based on [Hadoop 
> documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
>  it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
> This is similar to the {{reservation}} setting.
> However, existing code only ensures that the reservation setting is not 
> accepted for "parent" queues (see 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L233)
>  but it is missing the checks for {{maxAMShare}}. Due to this, it is currently 
> possible to have an allocation similar to the one below:
> {code}
> <?xml version="1.0"?>
> <allocations>
>   <queue name="root">
>     <weight>1.0</weight>
>     <schedulingPolicy>drf</schedulingPolicy>
>     <aclSubmitApps>*</aclSubmitApps>
>     <aclAdministerApps>*</aclAdministerApps>
>     <queue name="default">
>       <weight>1.0</weight>
>       <schedulingPolicy>drf</schedulingPolicy>
>     </queue>
>     <queue name="users" type="parent">
>       <weight>1.0</weight>
>       <schedulingPolicy>drf</schedulingPolicy>
>       <maxAMShare>1.0</maxAMShare>
>     </queue>
>   </queue>
>   <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
>   <queuePlacementPolicy>
>     ...
>   </queuePlacementPolicy>
> </allocations>
> {code}
> where {{maxAMShare}} is 1.0f, meaning it is possible to allocate 100% of the 
> queue's resources to Application Masters. Notice above that root.users is a 
> parent queue; however, it still gladly accepts {{maxAMShare}}. This is 
> contrary to the documentation and, in fact, very misleading, because child 
> queues like root.users.<username> do not inherit this setting at all and 
> still go on to use the default of 0.5 instead of 1.0; see the attached 
> screenshot as an example.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10528) maxAMShare should only be accepted for leaf queues, not parent queues

2020-12-11 Thread Siddharth Ahuja (Jira)
Siddharth Ahuja created YARN-10528:
--

 Summary: maxAMShare should only be accepted for leaf queues, not 
parent queues
 Key: YARN-10528
 URL: https://issues.apache.org/jira/browse/YARN-10528
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Siddharth Ahuja


Based on [Hadoop 
documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html],
 it is clear that {{maxAMShare}} property can only be used for *leaf queues*. 
This is similar to the {{reservation}} setting.

However, existing code only ensures that the reservation setting is not 
accepted for "parent" queues (see 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L226
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/allocation/AllocationFileQueueParser.java#L233)
 but it is missing the checks for {{maxAMShare}}. Due to this, it is currently 
possible to have an allocation similar to the one below:

{code}
<?xml version="1.0"?>
<allocations>
  <queue name="root">
    <weight>1.0</weight>
    <schedulingPolicy>drf</schedulingPolicy>
    <aclSubmitApps>*</aclSubmitApps>
    <aclAdministerApps>*</aclAdministerApps>
    <queue name="default">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
    </queue>
    <queue name="users" type="parent">
      <weight>1.0</weight>
      <schedulingPolicy>drf</schedulingPolicy>
      <maxAMShare>1.0</maxAMShare>
    </queue>
  </queue>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
  <queuePlacementPolicy>
    ...
  </queuePlacementPolicy>
</allocations>
{code}

where {{maxAMShare}} is 1.0f, meaning it is possible to allocate 100% of the 
queue's resources to Application Masters. Notice above that root.users is a 
parent queue; however, it still gladly accepts {{maxAMShare}}. This is contrary 
to the documentation and, in fact, very misleading, because child queues like 
root.users.<username> do not inherit this setting at all and still go on to 
use the default of 0.5 instead of 1.0; see the attached screenshot as an 
example.
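
The missing check described above could look roughly like the sketch below. 
It is a hypothetical helper, not the actual parser code: 
AllocationConfigurationException is an existing class in the fair scheduler 
package, but the class, method, and parameters here are illustrative:
{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException;

final class MaxAMShareValidator {
  private MaxAMShareValidator() {}

  // Mirror the existing <reservation> parent-queue check for maxAMShare:
  // reject the property whenever it appears on a non-leaf queue.
  static void validate(String queueName, boolean isParentQueue,
      boolean maxAMShareConfigured) throws AllocationConfigurationException {
    if (isParentQueue && maxAMShareConfigured) {
      throw new AllocationConfigurationException("Queue " + queueName
          + " is a parent queue; maxAMShare is only supported on leaf queues.");
    }
  }
}
{code}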



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10526) RMAppManager CS Placement ignores parent path

2020-12-11 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-10526:
--
Attachment: YARN-10526.002.patch

> RMAppManager CS Placement ignores parent path
> -
>
> Key: YARN-10526
> URL: https://issues.apache.org/jira/browse/YARN-10526
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10526.001.patch, YARN-10526.002.patch
>
>
> When RMAppManager creates the RMApp object from the placementContext's 
> results, it only uses the getQueue method, which, in the case of 
> CapacityScheduler, returns only the name of the leaf queue.
> If a queue exists with this name, the application is placed into that 
> queue. Only if no queue with the leaf name exists does CS take the parent 
> path into consideration during auto queue creation.
>  
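
A hedged sketch of the fix direction implied above: derive the full queue 
path from the placement context instead of the bare leaf name. 
ApplicationPlacementContext and its accessors exist in the RM placement 
package; the helper class itself is illustrative:
{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.placement.ApplicationPlacementContext;

final class QueuePathResolver {
  private QueuePathResolver() {}

  // Build the full queue path so a leaf elsewhere in the tree with the
  // same short name cannot shadow the intended parent.child placement.
  static String fullQueuePath(ApplicationPlacementContext ctx) {
    return ctx.hasParentQueue()
        ? ctx.getParentQueue() + "." + ctx.getQueue()
        : ctx.getQueue();
  }
}
{code}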



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10031) Create a general purpose log request with additional query parameters

2020-12-11 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247906#comment-17247906
 ] 

Adam Antal commented on YARN-10031:
---

Thanks [~gandras]!

LGTM. If there are no other reviews, I will commit this tomorrow.

> Create a general purpose log request with additional query parameters
> -
>
> Key: YARN-10031
> URL: https://issues.apache.org/jira/browse/YARN-10031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10031-WIP.001.patch, YARN-10031.001.patch, 
> YARN-10031.002.patch, YARN-10031.003.patch, YARN-10031.004.patch, 
> YARN-10031.005.patch, YARN-10031.005.patch, YARN-10031.006.patch
>
>
> The current endpoints are robust but not very flexible with regard to 
> filtering options. I suggest adding an endpoint that provides filtering 
> options.
> E.g.:
> In ATS we have multiple endpoints:
> /containers/{containerid}/logs/{filename}
> /containerlogs/{containerid}/{filename}
> We could add @QueryParams parameters to the REST endpoints like this:
> /containers/{containerid}/logs?fileName=stderr=FAILED=nm45
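
A minimal JAX-RS sketch of such an endpoint. The resource class and the 
query-parameter names (fileName, containerState, nodeId) are assumptions read 
off the example URL above, not the committed YARN API:
{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/containers")
public class ContainerLogsResource {

  // GET /containers/{containerid}/logs?fileName=stderr&containerState=FAILED&nodeId=nm45
  @GET
  @Path("/{containerid}/logs")
  public Response getLogs(@PathParam("containerid") String containerId,
      @QueryParam("fileName") String fileName,
      @QueryParam("containerState") String containerState,
      @QueryParam("nodeId") String nodeId) {
    // A real implementation would filter the aggregated logs by these
    // parameters before streaming the result back.
    return Response.ok().build();
  }
}
{code}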



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10031) Create a general purpose log request with additional query parameters

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247884#comment-17247884
 ] 

Hadoop QA commented on YARN-10031:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 3 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
44s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
45s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 24s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
54s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
16s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
0s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 56s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |

[jira] [Commented] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2020-12-11 Thread akiyamaneko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247874#comment-17247874
 ] 

akiyamaneko commented on YARN-1890:
---

[~rohithsharma] Hello, we use Hadoop 3.1.1 + Spark 3.0.1, and this problem can 
be easily reproduced just by submitting a Spark-on-YARN app and then opening 
the app's web UI. I think it would be better to merge the patch. Can you try 
to reproduce it again?

> Too many unnecessary logs are logged while accessing applicationMaster web UI.
> --
>
> Key: YARN-1890
> URL: https://issues.apache.org/jira/browse/YARN-1890
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Minor
> Attachments: YARN-1890.patch
>
>
> Accessing the ApplicationMaster UI, which is redirected from the RM UI, 
> writes too many entries to the ResourceManager and ProxyServer logs. On 
> every refresh, logging is done in WebAppProxyServlet.doGet(). All my RM and 
> ProxyServer logs are filled with UI information logs that are not really 
> necessary for users.
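
A hedged sketch of the kind of change being discussed (the names are 
illustrative; this is not the actual WebAppProxyServlet code): demote 
per-request tracing to DEBUG so page refreshes stop flooding the RM and proxy 
logs:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ProxyRequestLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(ProxyRequestLogging.class);

  private ProxyRequestLogging() {}

  // Previously logged on every doGet() at INFO; DEBUG keeps the detail
  // available for troubleshooting without polluting production logs.
  static void logRedirect(String user, String trackingUri) {
    LOG.debug("Redirecting user {} to {}", user, trackingUri);
  }
}
{code}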



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247862#comment-17247862
 ] 

Hadoop QA commented on YARN-4783:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
50s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 56s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
24s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | 

[jira] [Commented] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2020-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247844#comment-17247844
 ] 

Hadoop QA commented on YARN-4783:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
37s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  4s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
39s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/377/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 11 new + 58 unchanged - 0 fixed = 69 total (was 58) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 47s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| 

[jira] [Assigned] (YARN-10504) Implement weight mode in Capacity Scheduler

2020-12-11 Thread Benjamin Teke (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Teke reassigned YARN-10504:


Assignee: Benjamin Teke  (was: zhuqi)

> Implement weight mode in Capacity Scheduler
> ---
>
> Key: YARN-10504
> URL: https://issues.apache.org/jira/browse/YARN-10504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
> Attachments: YARN-10504.001.patch, YARN-10504.ver-1.patch, 
> YARN-10504.ver-2.patch
>
>
> To allow queues to be created flexibly in Capacity Scheduler, a weight mode 
> should be introduced. The existing {{capacity}} property should 
> be used with a different syntax, e.g.:
> root.users.capacity = (1.0) or ~1.0 or ^1.0 or @1.0
> root.users.capacity = 1.0w
> root.users.capacity = w:1.0
> Weight support should not impact the existing functionality.
>  
> The new functionality should: 
>  * accept and validate the new weight values
>  * enforce a singular mode on the whole queue tree
>  * (re)calculate the relative (percentage-based) capacities based on the 
> weights during launch and every time the queue structure changes
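
A hedged illustration of the proposed weight syntax and the recalculation it 
implies (property names follow capacity-scheduler.xml conventions; the weights 
are example values only):
{code}
yarn.scheduler.capacity.root.users.capacity = 1.0w
yarn.scheduler.capacity.root.default.capacity = 3.0w
# Derived relative capacities, recomputed at launch and whenever the
# queue structure changes:
#   root.users   -> 1.0 / (1.0 + 3.0) = 25%
#   root.default -> 3.0 / (1.0 + 3.0) = 75%
{code}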



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2020-12-11 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-4783:
---
Attachment: YARN-4783.006.patch

> Log aggregation failure for application when Nodemanager is restarted 
> --
>
> Key: YARN-4783
> URL: https://issues.apache.org/jira/browse/YARN-4783
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-4783.001.patch, YARN-4783.002.patch, 
> YARN-4783.003.patch, YARN-4783.004.patch, YARN-4783.005.patch, 
> YARN-4783.005.patch, YARN-4783.006.patch
>
>
> Scenario:
> 1. Start NM with user dsperf:hadoop
> 2. Configure linux-execute user as dsperf
> 3. Submit an application with the yarn user
> 4. Once a few containers are allocated to NM 1
> 5. Nodemanager 1 is stopped (wait for expiry)
> 6. Start the node manager after the application has completed
> 7. Check that log aggregation happens for the container logs in the NM local 
> directory
> Expected Output:
> Log aggregation should be successful
> Actual Output:
> Log aggregation not successful



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2020-12-11 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-4783:
---
Attachment: (was: YARN-4783.005.patch)

> Log aggregation failure for application when Nodemanager is restarted 
> --
>
> Key: YARN-4783
> URL: https://issues.apache.org/jira/browse/YARN-4783
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-4783.001.patch, YARN-4783.002.patch, 
> YARN-4783.003.patch, YARN-4783.004.patch, YARN-4783.005.patch, 
> YARN-4783.005.patch
>
>
> Scenario:
> 1. Start NM with user dsperf:hadoop
> 2. Configure linux-execute user as dsperf
> 3. Submit an application with the yarn user
> 4. Once a few containers are allocated to NM 1
> 5. Nodemanager 1 is stopped (wait for expiry)
> 6. Start the node manager after the application has completed
> 7. Check that log aggregation happens for the container logs in the NM local 
> directory
> Expected Output:
> Log aggregation should be successful
> Actual Output:
> Log aggregation not successful



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2020-12-11 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-4783:
---
Attachment: YARN-4783.005.patch

> Log aggregation failure for application when Nodemanager is restarted 
> --
>
> Key: YARN-4783
> URL: https://issues.apache.org/jira/browse/YARN-4783
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-4783.001.patch, YARN-4783.002.patch, 
> YARN-4783.003.patch, YARN-4783.004.patch, YARN-4783.005.patch, 
> YARN-4783.005.patch
>
>
> Scenario:
> 1. Start NM with user dsperf:hadoop
> 2. Configure linux-execute user as dsperf
> 3. Submit an application with the yarn user
> 4. Once a few containers are allocated to NM 1
> 5. Nodemanager 1 is stopped (wait for expiry)
> 6. Start the node manager after the application has completed
> 7. Check that log aggregation happens for the container logs in the NM local 
> directory
> Expected Output:
> Log aggregation should be successful
> Actual Output:
> Log aggregation not successful



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10031) Create a general purpose log request with additional query parameters

2020-12-11 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247773#comment-17247773
 ] 

Andras Gyori edited comment on YARN-10031 at 12/11/20, 9:14 AM:


Thank you [~adam.antal] for the help. I have addressed the remaining concerns 
about checkstyle and javadoc. I have also rebased on trunk, and the patch seems 
to apply without conflict.


was (Author: gandras):
Thank you [~adam.antal] for the help. I have addressed the remaining concerns 
about checkstyle and javadoc. Also rebased on trunk and seems to be appliable 
without conflict.

> Create a general purpose log request with additional query parameters
> -
>
> Key: YARN-10031
> URL: https://issues.apache.org/jira/browse/YARN-10031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10031-WIP.001.patch, YARN-10031.001.patch, 
> YARN-10031.002.patch, YARN-10031.003.patch, YARN-10031.004.patch, 
> YARN-10031.005.patch, YARN-10031.005.patch, YARN-10031.006.patch
>
>
> The current endpoints are robust but not very flexible with regard to 
> filtering options. I suggest adding an endpoint that provides filtering 
> options.
> E.g.:
> In ATS we have multiple endpoints:
> /containers/{containerid}/logs/{filename}
> /containerlogs/{containerid}/{filename}
> We could add @QueryParams parameters to the REST endpoints like this:
> /containers/{containerid}/logs?fileName=stderr=FAILED=nm45



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10031) Create a general purpose log request with additional query parameters

2020-12-11 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247773#comment-17247773
 ] 

Andras Gyori commented on YARN-10031:
-

Thank you [~adam.antal] for the help. I have addressed the remaining concerns 
about checkstyle and javadoc. Also rebased on trunk and seems to be appliable 
without conflict.

> Create a general purpose log request with additional query parameters
> -
>
> Key: YARN-10031
> URL: https://issues.apache.org/jira/browse/YARN-10031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10031-WIP.001.patch, YARN-10031.001.patch, 
> YARN-10031.002.patch, YARN-10031.003.patch, YARN-10031.004.patch, 
> YARN-10031.005.patch, YARN-10031.005.patch, YARN-10031.006.patch
>
>
> The current endpoints are robust but not very flexible with regard to 
> filtering options. I suggest adding an endpoint that provides filtering 
> options.
> E.g.:
> In ATS we have multiple endpoints:
> /containers/{containerid}/logs/{filename}
> /containerlogs/{containerid}/{filename}
> We could add @QueryParams parameters to the REST endpoints like this:
> /containers/{containerid}/logs?fileName=stderr=FAILED=nm45



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10031) Create a general purpose log request with additional query parameters

2020-12-11 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10031:

Attachment: YARN-10031.006.patch

> Create a general purpose log request with additional query parameters
> -
>
> Key: YARN-10031
> URL: https://issues.apache.org/jira/browse/YARN-10031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10031-WIP.001.patch, YARN-10031.001.patch, 
> YARN-10031.002.patch, YARN-10031.003.patch, YARN-10031.004.patch, 
> YARN-10031.005.patch, YARN-10031.005.patch, YARN-10031.006.patch
>
>
> The current endpoints are robust but not very flexible with regard to 
> filtering options. I suggest adding an endpoint that provides filtering 
> options.
> E.g.:
> In ATS we have multiple endpoints:
> /containers/{containerid}/logs/{filename}
> /containerlogs/{containerid}/{filename}
> We could add @QueryParams parameters to the REST endpoints like this:
> /containers/{containerid}/logs?fileName=stderr=FAILED=nm45



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org