[jira] [Comment Edited] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Bibin Chundatt (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149843#comment-17149843
 ] 

Bibin Chundatt edited comment on YARN-10335 at 7/2/20, 4:45 AM:


Thank you for showing interest in the JIRA [~cyrusjackson25]

Adding what I have in mind about the health detail. The NodeManager has a node 
health service which returns a boolean value: it reports UNHEALTHY if the node 
health script returns an error or if there are no healthy local directories.

We will introduce a field (or fields) that carries a detailed node health value 
along with the NodeHealthStatus.

Example:
{quote}
message NodeHealthStatusProto {
  optional bool isHealthy = 1;
  optional string nodeHealthDescription = 2;
  optional string exceptionString = 3;
  repeated StringIntMapProto nodeHealthDetails = 4;
}

message StringIntMapProto {
  optional string key = 1;
  optional int32 value = 2;
}

Keys could be: overall, ssd, non-ssd, etc.
{quote}

Also make the NodeHealthService pluggable to support custom implementations.
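
To make the pluggable part concrete, a rough sketch of what such an extension 
point could look like (all names below are assumptions for discussion, not an 
existing API):
{code:java}
import java.util.Map;

// Hypothetical extension point; names are assumptions, not an existing API.
public interface NodeHealthDetailProvider {

  /** Whether the node should be considered healthy overall. */
  boolean isHealthy();

  /** Human-readable report of the current health state. */
  String getHealthReport();

  /**
   * Detailed health values keyed by category, e.g. "overall", "ssd",
   * "non-ssd", matching the StringIntMapProto entries above.
   */
  Map<String, Integer> getHealthDetails();
}
{code}
A custom implementation could then be selected through a NodeManager 
configuration property, similar to other pluggable NM components.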


was (Author: bibinchundatt):
Thank you for showing interest in the JIRA [~cyrusjackson25]

Adding the thoughts I have in mind about the health value. The NodeManager has 
a node health service which returns a boolean value: it reports UNHEALTHY if 
the node health script returns an error or if there are no healthy local 
directories.

We want to introduce a field (or fields) that carries a detailed node health 
value along with the NodeHealthStatus.

Example:
{quote}
message NodeHealthStatusProto {
  optional bool isHealthy = 1;
  optional string nodeHealthDescription = 2;
  optional string exceptionString = 3;
  repeated StringIntMapProto nodeHealthDetails = 4;
}

message StringIntMapProto {
  optional string key = 1;
  optional int32 value = 2;
}

Keys could be: overall, ssd, non-ssd, etc.
{quote}

Also make the NodeHealthService pluggable to support custom implementations.

> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Assignee: Cyrus Jackson
>Priority: Major
>
> YARN-7494 provides an interface to choose the node set for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on the 
> node health values sent from NodeManagers.






[jira] [Commented] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Bibin Chundatt (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149843#comment-17149843
 ] 

Bibin Chundatt commented on YARN-10335:
---

Thank you for showing interest in the JIRA [~cyrusjackson25]

Adding the thoughts I have in mind about the health value. The NodeManager has 
a node health service which returns a boolean value: it reports UNHEALTHY if 
the node health script returns an error or if there are no healthy local 
directories.

We want to introduce a field (or fields) that carries a detailed node health 
value along with the NodeHealthStatus.

Example:
{quote}
message NodeHealthStatusProto {
  optional bool isHealthy = 1;
  optional string nodeHealthDescription = 2;
  optional string exceptionString = 3;
  repeated StringIntMapProto nodeHealthDetails = 4;
}

message StringIntMapProto {
  optional string key = 1;
  optional int32 value = 2;
}

Keys could be: overall, ssd, non-ssd, etc.
{quote}

Also make the NodeHealthService pluggable to support custom implementations.

> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Assignee: Cyrus Jackson
>Priority: Major
>
> YARN-7494 provides an interface to choose the node set for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on the 
> node health values sent from NodeManagers.






[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-07-01 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149697#comment-17149697
 ] 

Jonathan Hung commented on YARN-10251:
--

[~epayne] thanks, generally 006 looks good to me, but I have a question:
{noformat}
totalReservedResourcesAcrossPartition = new ResourceInfo(
    cs.getClusterResourceUsage().getReserved());
{noformat}
This seems to fetch reserved resources for the default partition only. Should 
we change it to fetch across partitions, as we do for usedResources?
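
For illustration, aggregating across partitions might look roughly like the 
fragment below (a hedged sketch, not the actual patch; {{partitions}} is an 
assumed collection of partition names, e.g. from the RMNodeLabelsManager):
{code:java}
// Hedged fragment, not the actual patch. "partitions" is an assumed
// collection of partition names (e.g. from the RMNodeLabelsManager).
Resource reservedAcrossPartitions = Resource.newInstance(0, 0);
for (String partition : partitions) {
  // ResourceUsage#getReserved(String) returns the reservation for one label
  Resources.addTo(reservedAcrossPartitions,
      cs.getClusterResourceUsage().getReserved(partition));
}
totalReservedResourcesAcrossPartition =
    new ResourceInfo(reservedAcrossPartitions);
{code}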

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.006.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-2.10.006.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch, 
> YARN-10251.branch-3.2.006.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-07-01 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149673#comment-17149673
 ] 

Jim Brennan commented on YARN-10251:


Thanks for the additional patches [~epayne]!  I am +1 (non-binding) on the 
patches for branch-3.2 and branch-2.10.

 

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.006.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-2.10.006.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch, 
> YARN-10251.branch-3.2.006.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Commented] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149646#comment-17149646
 ] 

Hadoop QA commented on YARN-10333:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m  
4s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26245/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006857/YARN-10333-003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux b4c40088c4db 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 04abd0eb17b |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/26245/testReport/ |
| Max. process+thread count | 578 (vs. ulimit of 

[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-07-01 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149627#comment-17149627
 ] 

Eric Payne commented on YARN-10251:
---

RE: UT failures for the branch-2.10 patch: {{TestContinuousScheduling}} and 
{{TestCapacityOverTimePolicy}} are succeeding for me in my local environment.

In my opinion, the current patches are ready for review. [~Jim_Brennan], 
[~jhung], would you have time to take a look?

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.006.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-2.10.006.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch, 
> YARN-10251.branch-3.2.006.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Comment Edited] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149595#comment-17149595
 ] 

Prabhu Joseph edited comment on YARN-10333 at 7/1/20, 5:41 PM:
---

[~sunil.gov...@gmail.com] Can you review this Jira when you get time? Thanks.

I have verified with the below combinations:

||fs.defaultFS||Log Aggregation Path||
|hdfs://nm1|s3a://tmp/app-logs|
|hdfs://nm1|abfs://tmp/app-logs|
|hdfs://nm1|hdfs://nm2/tmp/app-logs|
|hdfs://nm1|hdfs://nm1/tmp/app-logs|
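
For reference, the core of the change is to obtain a token for the log 
aggregation filesystem in addition to fs.defaultFS. A hedged sketch of the 
idea (not the actual patch; {{renewer}} is assumed to be the RM principal):
{code:java}
// Hedged sketch, not the actual patch: obtain a delegation token for the
// remote log aggregation filesystem in addition to the fs.defaultFS token.
Configuration conf = new Configuration();
Path remoteLogDir = new Path(conf.get(
    YarnConfiguration.NM_REMOTE_APP_LOG_DIR,
    YarnConfiguration.DEFAULT_NM_REMOTE_APP_LOG_DIR));
FileSystem logFs = remoteLogDir.getFileSystem(conf);
Credentials credentials = new Credentials();
logFs.addDelegationTokens(renewer, credentials);  // renewer is assumed
// The credentials are then added to the ContainerLaunchContext tokens.
{code}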


was (Author: prabhu joseph):
[~sunil.gov...@gmail.com] Can you review this Jira when you get time? Thanks.

I have verified with the below combinations:

||fs.defaultFS||Log Aggregation Path||
|hdfs://nm1|s3a://tmp/app-logs|
|hdfs://nm1|abfs://tmp/app-logs|
|hdfs://nm1|hdfs://nm2/tmp/app-logs|
|hdfs://nm1|hdfs://nm1/tmp/app-logs|

> YarnClient obtain Delegation Token for Log Aggregation Path
> ---
>
> Key: YARN-10333
> URL: https://issues.apache.org/jira/browse/YARN-10333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10333-001.patch, YARN-10333-002.patch, 
> YARN-10333-003.patch
>
>
> There are use cases where the Yarn log aggregation path is configured to a 
> FileSystem such as S3 or ABFS, different from what is configured in 
> fs.defaultFS (HDFS). Log aggregation fails because the client has a token 
> only for fs.defaultFS and not for the log aggregation path.
> This Jira improves YarnClient by obtaining a delegation token for the log 
> aggregation path and adding it to the credentials of the Container Launch 
> Context, similar to how it does for the Timeline Delegation Token.






[jira] [Commented] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149595#comment-17149595
 ] 

Prabhu Joseph commented on YARN-10333:
--

[~sunil.gov...@gmail.com] Can you review this Jira when you get time? Thanks.

I have verified with the below combinations:

||fs.defaultFS||Log Aggregation Path||
|hdfs://nm1|s3a://tmp/app-logs|
|hdfs://nm1|abfs://tmp/app-logs|
|hdfs://nm1|hdfs://nm2/tmp/app-logs|
|hdfs://nm1|hdfs://nm1/tmp/app-logs|

> YarnClient obtain Delegation Token for Log Aggregation Path
> ---
>
> Key: YARN-10333
> URL: https://issues.apache.org/jira/browse/YARN-10333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10333-001.patch, YARN-10333-002.patch, 
> YARN-10333-003.patch
>
>
> There are use cases where the Yarn log aggregation path is configured to a 
> FileSystem such as S3 or ABFS, different from what is configured in 
> fs.defaultFS (HDFS). Log aggregation fails because the client has a token 
> only for fs.defaultFS and not for the log aggregation path.
> This Jira improves YarnClient by obtaining a delegation token for the log 
> aggregation path and adding it to the credentials of the Container Launch 
> Context, similar to how it does for the Timeline Delegation Token.






[jira] [Updated] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10333:
-
Attachment: YARN-10333-003.patch

> YarnClient obtain Delegation Token for Log Aggregation Path
> ---
>
> Key: YARN-10333
> URL: https://issues.apache.org/jira/browse/YARN-10333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10333-001.patch, YARN-10333-002.patch, 
> YARN-10333-003.patch
>
>
> There are use cases where the Yarn log aggregation path is configured to a 
> FileSystem such as S3 or ABFS, different from what is configured in 
> fs.defaultFS (HDFS). Log aggregation fails because the client has a token 
> only for fs.defaultFS and not for the log aggregation path.
> This Jira improves YarnClient by obtaining a delegation token for the log 
> aggregation path and adding it to the credentials of the Container Launch 
> Context, similar to how it does for the Timeline Delegation Token.






[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149557#comment-17149557
 ] 

Hadoop QA commented on YARN-10304:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 1 new + 26 unchanged - 
0 fixed = 27 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26244/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10304 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006844/YARN-10304.007.patch |
| Optional Tests | dupname 

[jira] [Updated] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10330:
--
Fix Version/s: 3.3.1

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch, YARN-10330-branch-3.3-001.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149527#comment-17149527
 ] 

Szilard Nemeth commented on YARN-10330:
---

Thanks [~pbacsko],
Also committed the branch-3.3 patch.
Resolving the Jira.

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch, YARN-10330-branch-3.3-001.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149519#comment-17149519
 ] 

Hadoop QA commented on YARN-10330:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
49s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26243/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10330 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006823/YARN-10330-branch-3.3-001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux c4d9e6f08fa1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.3 / cfb2084 |
| Default Java | Private 

[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149497#comment-17149497
 ] 

Hadoop QA commented on YARN-10330:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
12s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
46s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26242/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10330 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006823/YARN-10330-branch-3.3-001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux caa21edf2f53 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.3 / cfb2084 |
| Default Java | 

[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query

2020-07-01 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149485#comment-17149485
 ] 

Andras Gyori commented on YARN-10304:
-

Thank you [~mhudaky] for the review, good catch! I have uploaded a new patch 
that incorporates this idea.

> Create an endpoint for remote application log directory path query
> --
>
> Key: YARN-10304
> URL: https://issues.apache.org/jira/browse/YARN-10304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10304.001.patch, YARN-10304.002.patch, 
> YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch, 
> YARN-10304.006.patch, YARN-10304.007.patch
>
>
> The logic of the aggregated log directory path determination (currently based 
> on configuration) is scattered around the codebase and duplicated multiple 
> times. Providing a separate class for creating the path for a specific user 
> allows for an abstraction over this logic. It could be used in place of the 
> previously duplicated logic; moreover, we could provide an endpoint to query 
> this path.
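
A hedged sketch of such a helper (class and method names below are 
assumptions for illustration; the actual patch may differ):
{code:java}
// Hypothetical helper; names are assumptions, not the actual patch.
// Centralizes remote app log directory path construction for a user.
public final class RemoteAppLogDirBuilder {

  private RemoteAppLogDirBuilder() {
  }

  public static Path getRemoteAppLogDir(Configuration conf, String user) {
    // Configured remote root, e.g. /tmp/logs
    Path root = new Path(conf.get(
        YarnConfiguration.NM_REMOTE_APP_LOG_DIR,
        YarnConfiguration.DEFAULT_NM_REMOTE_APP_LOG_DIR));
    // Configured per-user suffix, e.g. "logs"
    String suffix = conf.get(
        YarnConfiguration.NM_REMOTE_APP_LOG_DIR_SUFFIX,
        YarnConfiguration.DEFAULT_NM_REMOTE_APP_LOG_DIR_SUFFIX);
    return new Path(new Path(root, user), suffix);
  }
}
{code}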






[jira] [Updated] (YARN-10304) Create an endpoint for remote application log directory path query

2020-07-01 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10304:

Attachment: YARN-10304.007.patch

> Create an endpoint for remote application log directory path query
> --
>
> Key: YARN-10304
> URL: https://issues.apache.org/jira/browse/YARN-10304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: YARN-10304.001.patch, YARN-10304.002.patch, 
> YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch, 
> YARN-10304.006.patch, YARN-10304.007.patch
>
>
> The logic of the aggregated log directory path determination (currently based 
> on configuration) is scattered around the codebase and duplicated multiple 
> times. Providing a separate class for creating the path for a specific user 
> allows for an abstraction over this logic. It could be used in place of the 
> previously duplicated logic; moreover, we could provide an endpoint to query 
> this path.






[jira] [Commented] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149426#comment-17149426
 ] 

Hadoop QA commented on YARN-10333:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 1 new + 
23 unchanged - 0 fixed = 24 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
24s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26241/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006817/YARN-10333-002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux d7dcb9f19b3d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6c57be48973 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149394#comment-17149394
 ] 

Hudson commented on YARN-10330:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18402 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18402/])
YARN-10330. Add missing test scenarios to (snemeth: rev 
04abd0eb17b58e321893e8651ec596e9f7ac786f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SimpleGroupsMapping.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestUserGroupMappingPlacementRule.java


> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch, YARN-10330-branch-3.3-001.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Updated] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10330:

Attachment: YARN-10330-branch-3.3-001.patch

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch, YARN-10330-branch-3.3-001.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Updated] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10330:
--
Fix Version/s: 3.4.0

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149382#comment-17149382
 ] 

Szilard Nemeth commented on YARN-10330:
---

Thanks [~pbacsko],
Latest patch LGTM, committed to trunk.
As discussed, please upload a patch for branch-3.3 as well.
Thanks.

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. Since we're planning 
> to enhance the mapping rule logic with extra features, it is crucial to have 
> good coverage so that we can verify backward compatibility.






[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149370#comment-17149370
 ] 

Hadoop QA commented on YARN-10330:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 4 unchanged - 1 fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26240/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10330 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006808/YARN-10330-004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 5a12f984c66b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (YARN-10325) Document max-parallel-apps for Capacity Scheduler

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149364#comment-17149364
 ] 

Hudson commented on YARN-10325:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18401/])
YARN-10325. Document max-parallel-apps for Capacity Scheduler. (snemeth: rev 
9b5557a9e811f04b964aa3a31ba8846a907d26f9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md


> Document max-parallel-apps for Capacity Scheduler
> -
>
> Key: YARN-10325
> URL: https://issues.apache.org/jira/browse/YARN-10325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10325-001.patch, YARN-10325-branch-3.3.001.patch
>
>
> New feature introduced by YARN-9930 should be reflected in the upstream 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10325) Document max-parallel-apps for Capacity Scheduler

2020-07-01 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10325:
--
Fix Version/s: 3.3.1
   3.4.0

> Document max-parallel-apps for Capacity Scheduler
> -
>
> Key: YARN-10325
> URL: https://issues.apache.org/jira/browse/YARN-10325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10325-001.patch, YARN-10325-branch-3.3.001.patch
>
>
> New feature introduced by YARN-9930 should be reflected in the upstream 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10325) Document max-parallel-apps for Capacity Scheduler

2020-07-01 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149351#comment-17149351
 ] 

Szilard Nemeth commented on YARN-10325:
---

Thanks [~pbacsko],
Patch LGTM, committed to trunk and branch-3.3
Resolving jira.

> Document max-parallel-apps for Capacity Scheduler
> -
>
> Key: YARN-10325
> URL: https://issues.apache.org/jira/browse/YARN-10325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, capacityscheduler
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10325-001.patch, YARN-10325-branch-3.3.001.patch
>
>
> New feature introduced by YARN-9930 should be reflected in the upstream 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-07-01 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10318:
--
Fix Version/s: (was: 3.3.0)
   3.3.1

> ApplicationHistory Web UI incorrect column indexing
> ---
>
> Key: YARN-10318
> URL: https://issues.apache.org/jira/browse/YARN-10318
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Fix For: 3.4.0, 3.3.1
>
> Attachments: Screenshot 2020-06-25 at 10.15.32.png, 
> YARN-10318.001.patch, YARN-10318.branch-3.3.001.patch, 
> image-2020-06-16-17-14-55-921.png
>
>
> The ApplicationHistory UI is broken due to an incorrect column indexing. This 
> bug was probably introduced in YARN-10038, which presumes, that the table 
> contains the application tag column (which is true for RM Web UI, but not for 
> AH Web UI).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-07-01 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149340#comment-17149340
 ] 

Szilard Nemeth commented on YARN-10318:
---

Thanks [~gandras],
Committed patch to branch-3.3 as well.
Resolving jira.

> ApplicationHistory Web UI incorrect column indexing
> ---
>
> Key: YARN-10318
> URL: https://issues.apache.org/jira/browse/YARN-10318
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Fix For: 3.3.0, 3.4.0
>
> Attachments: Screenshot 2020-06-25 at 10.15.32.png, 
> YARN-10318.001.patch, YARN-10318.branch-3.3.001.patch, 
> image-2020-06-16-17-14-55-921.png
>
>
> The ApplicationHistory UI is broken due to an incorrect column indexing. This 
> bug was probably introduced in YARN-10038, which presumes, that the table 
> contains the application tag column (which is true for RM Web UI, but not for 
> AH Web UI).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10333) YarnClient obtain Delegation Token for Log Aggregation Path

2020-07-01 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10333:
-
Attachment: YARN-10333-002.patch

> YarnClient obtain Delegation Token for Log Aggregation Path
> ---
>
> Key: YARN-10333
> URL: https://issues.apache.org/jira/browse/YARN-10333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10333-001.patch, YARN-10333-002.patch
>
>
> There are use cases where Yarn Log Aggregation Path is configured to a 
> FileSystem like S3 or ABFS different from what is configured in fs.defaultFS 
> (HDFS). Log Aggregation fails as the client has token only for fs.defaultFS 
> and not for log aggregation path.
> This Jira is to improve YarnClient by obtaining delegation token for log 
> aggregation path and add it to the Credential of Container Launch Context 
> similar to how it does for Timeline Delegation Token.
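
A minimal sketch of the idea (the helper class below and its use of the remote 
log dir key are assumptions for illustration, not the actual patch): obtain 
delegation tokens for the filesystem backing the log aggregation path and add 
them to the credentials that go into the Container Launch Context.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

// Sketch only: fetch tokens for the log aggregation filesystem (which may be
// S3/ABFS rather than fs.defaultFS) and add them to the given credentials.
final class LogAggregationTokens {
  private LogAggregationTokens() {
  }

  static void addLogAggregationTokens(Configuration conf, String renewer,
      Credentials credentials) throws IOException {
    Path remoteLogDir = new Path(
        conf.get("yarn.nodemanager.remote-app-log-dir", "/tmp/logs"));
    FileSystem logFs = remoteLogDir.getFileSystem(conf);
    // No-op when the credentials already hold a token for this filesystem.
    logFs.addDelegationTokens(renewer, credentials);
  }
}
{code}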



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149338#comment-17149338
 ] 

yehuanhuan edited comment on YARN-10332 at 7/1/20, 11:23 AM:
-

[~bibinchundatt] and [~adam.antal] Thank you for your reply. This transition 
was registered twice in RMNodeImpl.


was (Author: yehuanhuan):
[~bibinchundatt]  [~adam.antal] Thank you for your reply. This transition was 
registered twice in RMNodeImpl.

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149338#comment-17149338
 ] 

yehuanhuan edited comment on YARN-10332 at 7/1/20, 11:22 AM:
-

[~bibinchundatt]  [~adam.antal] Thank you for your reply. This transition was 
registered twice in RMNodeImpl.


was (Author: yehuanhuan):
[~bibinchundatt][~adam.antal] Thank you for your reply. This transition was 
registered twice in RMNodeImpl.

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yehuanhuan updated YARN-10332:
--
Comment: was deleted

(was: [~bibinchundatt] In RMNodeImpl, RESOURCE_UPDATE event was registered 
twice in DECOMMISSIONING state. )

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149338#comment-17149338
 ] 

yehuanhuan edited comment on YARN-10332 at 7/1/20, 11:22 AM:
-

[~bibinchundatt][~adam.antal] Thank you for your reply. This transition was 
registered twice in RMNodeImpl.


was (Author: yehuanhuan):
[~adam.antal] Thank you for your reply. This transition was registered twice in 
RMNodeImpl.

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149338#comment-17149338
 ] 

yehuanhuan commented on YARN-10332:
---

[~adam.antal] Thank you for your reply. This transition was registered twice in 
RMNodeImpl.

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149336#comment-17149336
 ] 

yehuanhuan commented on YARN-10332:
---

[~bibinchundatt] In RMNodeImpl, RESOURCE_UPDATE event was registered twice in 
DECOMMISSIONING state. 
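
As a self-contained illustration (a toy analogue, not RMNodeImpl's actual 
StateMachineFactory code): registering the same (state, event) pair twice 
leaves only one effective transition, so the second registration is dead code 
and can be removed.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy transition table keyed by (state, event): a duplicate registration
// simply replaces the first entry, so the table still holds one transition.
public class DuplicateTransitionDemo {
  enum NodeState { DECOMMISSIONING }
  enum EventType { RESOURCE_UPDATE }

  public static void main(String[] args) {
    Map<String, String> table = new HashMap<>();
    table.put(NodeState.DECOMMISSIONING + "/" + EventType.RESOURCE_UPDATE,
        "updateNodeResourceTransition");
    // The duplicate registration maps the same key to the same transition.
    table.put(NodeState.DECOMMISSIONING + "/" + EventType.RESOURCE_UPDATE,
        "updateNodeResourceTransition");
    System.out.println(table.size()); // prints 1
  }
}
{code}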

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-10266) Setting debug delay to a too high number will cause NM fail to start

2020-07-01 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal resolved YARN-10266.
---
  Assignee: Adam Antal
Resolution: Won't Fix

> Setting debug delay to a too high number will cause NM fail to start
> 
>
> Key: YARN-10266
> URL: https://issues.apache.org/jira/browse/YARN-10266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
>  Labels: newbie
>
> If I set some inappropriate number for 
> {{yarn.nodemanager.delete.debug-delay-sec}}, I'd rather have a functional NM 
> with an ERROR message in the log stating that the feature has been disabled 
> due to an illegal argument than a failed NM.
> Stack trace:
> {noformat}
> java.lang.NumberFormatException: For input string: "999"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:583)
>   at java.lang.Integer.parseInt(Integer.java:615)
>   at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1509)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DeletionService.serviceInit(DeletionService.java:179)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:478)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:936)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1016)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10266) Setting debug delay to a too high number will cause NM fail to start

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149311#comment-17149311
 ] 

Adam Antal commented on YARN-10266:
---

I agree, [~BilwaST]. It makes no sense to handle this exception in this 
particular case. Since we're using Java-internal methods, I don't think there's 
much to do. Closing this.
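
For reference, a minimal sketch of the fallback behaviour the report asks for 
(the helper class is illustrative, not the DeletionService code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: parse the debug delay defensively so a malformed or
// out-of-range value disables the feature with an ERROR log instead of
// failing NM startup with a NumberFormatException.
final class DebugDelayParser {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugDelayParser.class);

  static int parseDebugDelay(Configuration conf) {
    try {
      return conf.getInt(YarnConfiguration.DEBUG_NM_DELETE_DELAY_SEC, 0);
    } catch (NumberFormatException e) {
      LOG.error("Illegal value for {}, debug delay disabled",
          YarnConfiguration.DEBUG_NM_DELETE_DELAY_SEC, e);
      return 0;
    }
  }
}
{code}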

> Setting debug delay to a too high number will cause NM fail to start
> 
>
> Key: YARN-10266
> URL: https://issues.apache.org/jira/browse/YARN-10266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Priority: Trivial
>  Labels: newbie
>
> If I set some inappropriate number for 
> {{yarn.nodemanager.delete.debug-delay-sec}}, I'd rather have a functional NM 
> with an ERROR message in the log stating that the feature has been disabled 
> due to an illegal argument than a failed NM.
> Stack trace:
> {noformat}
> java.lang.NumberFormatException: For input string: "999"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:583)
>   at java.lang.Integer.parseInt(Integer.java:615)
>   at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1509)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DeletionService.serviceInit(DeletionService.java:179)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:478)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:936)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1016)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-07-01 Thread Tao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149285#comment-17149285
 ] 

Tao Yang commented on YARN-10319:
-

Thanks [~adam.antal] for the review and comments. [~prabhujoseph], could you 
please consider these suggestions as well?
Most changes in the latest patch LGTM; a minor suggestion is to change the root 
element name of BulkActivitiesInfo from "schedulerActivities" to 
"bulkActivities". Some related places like 
ActivitiesTestUtils#FN_SCHEDULER_BULK_ACT_ROOT should be changed as well.

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
> Attachments: Screen Shot 2020-06-18 at 1.26.31 PM.png, 
> YARN-10319-001-WIP.patch, YARN-10319-002.patch, YARN-10319-003.patch, 
> YARN-10319-004.patch
>
>
> ActivitiesManager records a call flow for a given nodeId or the last call 
> flow. This is useful when debugging an issue live, where the user queries 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10106) Yarn logs CLI filtering by application attempt

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149277#comment-17149277
 ] 

Adam Antal commented on YARN-10106:
---

Thanks for the patch [~mhudaky].

For backward compatibility reasons, I think when only the containerId is 
specified, let's not populate the appAttemptId - it was null before, and it 
doesn't matter whether we add it to the {{ContainerLogsRequest}}.

The tests look good to me in general, could you please fix them?

> Yarn logs CLI filtering by application attempt
> --
>
> Key: YARN-10106
> URL: https://issues.apache.org/jira/browse/YARN-10106
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10106.001.patch, YARN-10106.002.patch, 
> YARN-10106.003.patch
>
>
> {{ContainerLogsRequest}} got a new parameter in YARN-10101, which is the 
> {{applicationAttempt}} - we can use this new parameter in Yarn logs CLI as 
> well to filter by application attempt.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10334) TestDistributedShell leaks resources on timeout/failure

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149259#comment-17149259
 ] 

Adam Antal commented on YARN-10334:
---

Nice finding [~ahussein]. It could potentially cause lots of intermittent 
issues in Hadoop's unit test runs.

I think revisiting this test may not be that easy, but I hope someone can 
spare some time to look at it.
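
One possible shape of the cleanup, sketched under the assumption that the test 
keeps its client in a field (names are illustrative, not the actual 
TestDistributedShell code):

{code:java}
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.junit.After;

// Sketch only: release the client in @After so a failing test body does not
// leave applications and containers running in the background.
public abstract class DistributedShellTestBase {
  protected YarnClient yarnClient;

  @After
  public void stopClient() {
    if (yarnClient != null) {
      yarnClient.stop(); // stopping an already-stopped service is a no-op
      yarnClient = null;
    }
  }
}
{code}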

> TestDistributedShell leaks resources on timeout/failure
> ---
>
> Key: YARN-10334
> URL: https://issues.apache.org/jira/browse/YARN-10334
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell, test, yarn
>Reporter: Ahmed Hussein
>Priority: Major
>  Labels: newbie, test
>
> {{TestDistributedShell}} times out on trunk. I found that the application 
> and containers will stay running in the background long after the unit test 
> has failed.
> This causes failures of other test cases and several false positives as a 
> result of:
> * Ports will stay busy, so other test cases fail to launch.
> * Unit tests fail because of memory restrictions.
> Although the unit test is already broken on trunk, we do not want its 
> failures to spill over into other unit tests.
> {{TestDistributedShell}} needs to be revisited to make sure that all 
> {{YarnClients}} and {{YarnApplications}} are closed properly at the end of 
> each unit test (including exceptions and timeouts).
> Steps to reproduce:
> {code:bash}
> mvn test -Dtest=TestDistributedShell#testDSShellWithOpportunisticContainers
> ## this will timeout as
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 90.234 s <<< FAILURE! - in 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
> [ERROR] 
> testDSShellWithOpportunisticContainers(org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell)
>   Time elapsed: 90.018 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 9 
> milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.Client.monitorApplication(Client.java:1117)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:1089)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithOpportunisticContainers(TestDistributedShell.java:1438)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> [INFO] 
> [INFO] Results:
> [INFO] 
> [ERROR] Errors: 
> [ERROR]   TestDistributedShell.testDSShellWithOpportunisticContainers:1438 » 
> TestTimedOut
> [INFO] 
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {code}
> Using {{ps}} command, you can find the yarn processes are still in the 
> background
> {code:bash}
> /bin/bash -c $JRE_HOME/bin/java -Xmx512m 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster 
> --container_type OPPORTUNISTIC --container_memory 128 --container_vcores 1 
> --num_containers 2 --priority 0 --appname DistributedShell --homedir 
> file:/Users/ahussein 
> 1>$WORK_DIR8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/target/TestDistributedShell/TestDistributedShell-logDir-nm-0_0/application_1593554710896_0001/container_1593554710896_0001_01_01/AppMaster.stdout
>  
> 

[jira] [Commented] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149253#comment-17149253
 ] 

Adam Antal commented on YARN-10319:
---

Thanks for the patch [~prabhujoseph]. I have some minor nits, if you don't mind.

- In {{RMWebServices}} I took a look at how the scheduler pre-checks are 
performed for the existing {{#getActivities}} function, and it seems to me that 
there is some duplication. The checks (choosing the scheduler, getting the 
{{ActivitiesManager}}) can be moved to a separate function used by both 
endpoints. It would also be nice to return the same error when a scheduler 
other than CS is used (I'm especially thinking about the 
[L711|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java#L711]
 part). A rough sketch follows after this list.
- Let's make {{RESTClient}} private in {{ActivitiesTestUtils}}.
- Could you please also add an example output for {{/bulk-activities}} to 
{{ResourceManager.md}}?
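
A rough sketch of the shared pre-check from the first point; the class name, 
method name and exception type are assumptions for illustration, not the 
actual RMWebServices code:

{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
import org.apache.hadoop.yarn.webapp.BadRequestException;

// Hypothetical helper: both activities endpoints would call this, so the
// scheduler check and its error message stay identical.
final class ActivitiesEndpointSupport {
  private ActivitiesEndpointSupport() {
  }

  static CapacityScheduler requireCapacityScheduler(ResourceManager rm) {
    ResourceScheduler scheduler = rm.getResourceScheduler();
    if (!(scheduler instanceof CapacityScheduler)) {
      throw new BadRequestException(
          "Scheduler activities are only supported by CapacityScheduler");
    }
    return (CapacityScheduler) scheduler;
  }
}
{code}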

> Record Last N Scheduler Activities from ActivitiesManager
> -
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: activitiesmanager
> Attachments: Screen Shot 2020-06-18 at 1.26.31 PM.png, 
> YARN-10319-001-WIP.patch, YARN-10319-002.patch, YARN-10319-003.patch, 
> YARN-10319-004.patch
>
>
> ActivitiesManager records a call flow for a given nodeId or the last call 
> flow. This is useful when debugging an issue live, where the user queries 
> with the right nodeId. But capturing the last N scheduler activities during 
> the issue period can help to debug the issue offline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149233#comment-17149233
 ] 

Adam Antal commented on YARN-10332:
---

Moved this under YARN-914.

I agree with [~bibinchundatt]: removing that transition would cause an 
{{InvalidStateTransitionException}}, which should be avoided.

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10315) Avoid sending RMNodeResoureupdate event if resource is same

2020-07-01 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149232#comment-17149232
 ] 

Adam Antal commented on YARN-10315:
---

Moved this under YARN-914.

> Avoid sending RMNodeResoureupdate event if resource is same
> ---
>
> Key: YARN-10315
> URL: https://issues.apache.org/jira/browse/YARN-10315
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin Chundatt
>Assignee: Sushil Ks
>Priority: Major
>
> When the node is in DECOMMISSIONING state, the RMNodeResourceUpdateEvent is 
> sent for every heartbeat, which results in a scheduler resource update.
> Avoid sending it when the resource is unchanged.
> Scheduler node resource update iterates through all the queues, which is 
> costly.
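
A minimal sketch of the proposed guard (class and method names are assumptions, 
not the actual patch): compare the scheduler's known total resource with the 
reported one and only dispatch the event on a real change.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

// Sketch only: skip RMNodeResourceUpdateEvent when nothing changed, so the
// scheduler does not walk all queues on every DECOMMISSIONING heartbeat.
final class ResourceUpdateGuard {
  private ResourceUpdateGuard() {
  }

  static boolean needsUpdate(Resource knownTotal, Resource reported) {
    // Resource#equals compares memory, vcores and any extra resource types.
    return !knownTotal.equals(reported);
  }
}
{code}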



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10315) Avoid sending RMNodeResoureupdate event if resource is same

2020-07-01 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-10315:
--
Parent: YARN-914
Issue Type: Sub-task  (was: Improvement)

> Avoid sending RMNodeResoureupdate event if resource is same
> ---
>
> Key: YARN-10315
> URL: https://issues.apache.org/jira/browse/YARN-10315
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin Chundatt
>Assignee: Sushil Ks
>Priority: Major
>
> When the node is in DECOMMISSIONING state, the RMNodeResourceUpdateEvent is 
> sent for every heartbeat, which results in a scheduler resource update.
> Avoid sending it when the resource is unchanged.
> Scheduler node resource update iterates through all the queues, which is 
> costly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-10332:
--
Parent: YARN-914
Issue Type: Sub-task  (was: Improvement)

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149230#comment-17149230
 ] 

Peter Bacsko commented on YARN-10330:
-

Yes, it's YARN-10329. TestDelegationTokenRenewer is also a long-standing issue.

I uploaded patch v4 where I removed a TODO comment and fixed the checkstyle 
problem.

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. 
> Since we're planning to enhance mapping rule logic with extra features, it is 
> crucial to have good coverage so that we can verify backward compatibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10330:

Attachment: YARN-10330-004.patch

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch, YARN-10330-004.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. 
> Since we're planning to enhance mapping rule logic with extra features, it is 
> crucial to have good coverage so that we can verify backward compatibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Cyrus Jackson (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyrus Jackson reassigned YARN-10335:


Assignee: Cyrus Jackson

> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Assignee: Cyrus Jackson
>Priority: Major
>
> YARN-7494 supports providing interface to choose nodeset for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on node 
> health value sent from nodemanagers



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Cyrus Jackson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149190#comment-17149190
 ] 

Cyrus Jackson commented on YARN-10335:
--

I would like to work on this. [~bibinchundatt]

> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Priority: Major
>
> YARN-7494 supports providing interface to choose nodeset for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on node 
> health value sent from nodemanagers



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10330) Add missing test scenarios to TestUserGroupMappingPlacementRule and TestAppNameMappingPlacementRule

2020-07-01 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149188#comment-17149188
 ] 

Szilard Nemeth commented on YARN-10330:
---

Hi [~pbacsko],
Can you please justify the UT failures?
I'm aware there are some flaky FS preemption tests out there; I guess 
[~mhudaky] reported a jira recently to fix those.
Anyway, it's better to link the jira(s) here.
In the meantime, I will check your patch.

> Add missing test scenarios to TestUserGroupMappingPlacementRule and 
> TestAppNameMappingPlacementRule
> ---
>
> Key: YARN-10330
> URL: https://issues.apache.org/jira/browse/YARN-10330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, test
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10330-001.patch, YARN-10330-002.patch, 
> YARN-10330-003.patch
>
>
> After running {{TestUserGroupMappingPlacementRule}} with EclEmma, it turned 
> out that at least 8-10 test scenarios are not covered. 
> Since we're planning to enhance mapping rule logic with extra features, it is 
> crucial to have good coverage so that we can verify backward compatibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Bibin Chundatt (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin Chundatt updated YARN-10335:
--
Description: 
YARN-7494 supports providing interface to choose nodeset for scheduler 
allocation.
We could leverage the same to support allocation of containers based on node 
health value sent from nodemanagers

  was:
YARN-7494 supports providing interface to choose nodeset for scheduler 
allocation.
We could leverage the same to support allocation of containers based on 
nodehealth value


> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Priority: Major
>
> YARN-7494 supports providing interface to choose nodeset for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on node 
> health value sent from nodemanagers



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Bibin Chundatt (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin Chundatt updated YARN-10335:
--
Description: 
YARN-7494 supports providing interface to choose nodeset for scheduler 
allocation.
We could leverage the same to support allocation of containers based on 
nodehealth value

  was:
YARN-7494 supports providing interface to choose nodeset for scheduler 
allocation.
We could leverage the same to support allocation of containers based on 
nodehealth.


> Improve scheduling of containers based on node health
> -
>
> Key: YARN-10335
> URL: https://issues.apache.org/jira/browse/YARN-10335
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Priority: Major
>
> YARN-7494 supports providing interface to choose nodeset for scheduler 
> allocation.
> We could leverage the same to support allocation of containers based on 
> nodehealth value



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10335) Improve scheduling of containers based on node health

2020-07-01 Thread Bibin Chundatt (Jira)
Bibin Chundatt created YARN-10335:
-

 Summary: Improve scheduling of containers based on node health
 Key: YARN-10335
 URL: https://issues.apache.org/jira/browse/YARN-10335
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bibin Chundatt


YARN-7494 supports providing interface to choose nodeset for scheduler 
allocation.
We could leverage the same to support allocation of containers based on 
nodehealth.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread Bibin Chundatt (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149166#comment-17149166
 ] 

Bibin Chundatt edited comment on YARN-10332 at 7/1/20, 6:56 AM:


[~yehuanhuan] looks like a duplicate of YARN-10315. 

The current change will create an InvalidStateTransitionException when the node 
is in DECOMMISSIONING state and the admin triggers a node resource update, and 
also during node update.



was (Author: bibinchundatt):
[~yehuanhuan] looks like duplicate of YARN-10315. 

Current change is got in create InvalidStateTransitionException when Node is in 
decommissioning state and admin is calling node resource update.. Also during 
node update..


> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread Bibin Chundatt (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149166#comment-17149166
 ] 

Bibin Chundatt edited comment on YARN-10332 at 7/1/20, 6:56 AM:


[~yehuanhuan] looks like duplicate of YARN-10315. 

Current change is got in create InvalidStateTransitionException when Node is in 
decommissioning state and admin is calling node resource update.. Also during 
node update..



was (Author: bibinchundatt):
[~yehuanhuan] looks like duplicate of YARN-10315. 

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10332) RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state

2020-07-01 Thread Bibin Chundatt (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149166#comment-17149166
 ] 

Bibin Chundatt commented on YARN-10332:
---

[~yehuanhuan] looks like duplicate of YARN-10315. 

> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state
> 
>
> Key: YARN-10332
> URL: https://issues.apache.org/jira/browse/YARN-10332
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: yehuanhuan
>Priority: Minor
> Attachments: YARN-10332.001.patch
>
>
> RESOURCE_UPDATE event was repeatedly registered in DECOMMISSIONING state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org