[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279796#comment-16279796
 ] 

Weiwei Yang commented on YARN-7610:
---

Thanks for the +1 [~asuresh]. The patch has a whitespace issue; I will fix that 
in v4, and once I get a clean Jenkins result I will commit this to branch-2.9, 
branch-3.0 and trunk. Thanks for the review all along.
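
(A minimal sketch of how the whitespace-clean v4 could be produced, assuming 
the standard git workflow that the Yetus report below suggests; the attachment 
names are the real ones, the commands are illustrative:)

{code}
$ git apply --whitespace=fix YARN-7610.003.patch   # apply while stripping trailing whitespace
$ git diff > YARN-7610.004.patch                   # regenerate the patch as v4
{code}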

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.
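
For concreteness, a full launch using the proposed flag could look like the 
sketch below ({{-container_type}} is the flag proposed above; the other options 
are standard Distributed Shell client flags, and the jar path/version is 
illustrative):

{code}
$ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-<version>.jar \
    -shell_command sleep -shell_args 10 \
    -num_containers 10 \
    -container_type OPPORTUNISTIC
{code}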



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279792#comment-16279792
 ] 

genericqa commented on YARN-7610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 207 unchanged - 0 fixed = 209 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 23s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
9s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900806/YARN-7610.002.patch |
| Optional Tests |  asflice

[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279787#comment-16279787
 ] 

genericqa commented on YARN-7119:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 99 unchanged - 1 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 46s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17

[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279786#comment-16279786
 ] 

Arun Suresh commented on YARN-7610:
---

+1 Thanks for the quick update [~cheersyang]

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279780#comment-16279780
 ] 

Weiwei Yang edited comment on YARN-7610 at 12/6/17 7:26 AM:


The v3 patch is attached; it includes the minor doc changes to 
{{OpportunisticContainers.md}} from [this 
commit|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b].
 Verified with mvn site again to make sure the changes were added correctly.


was (Author: cheersyang):
The v3 patch is attached; it includes the minor doc changes to 
{{OpportunisticContainers.md}} from [this 
commit|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b].

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7610:
--
Attachment: YARN-7610.003.patch

The v3 patch is attached; it includes the minor doc changes to 
{{OpportunisticContainers.md}} from [this 
commit|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b].

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, 
> YARN-7610.003.patch, added_doc.png, outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279771#comment-16279771
 ] 

Weiwei Yang commented on YARN-7610:
---

Hi [~asuresh]

bq. Since you are updating the documentation, can you also include the 
following changes.

Sure thing. I assume you mean the changes to {{OpportunisticContainers.md}} in 
[this 
commit|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b];
 if that's the case, I can add that in the v3 patch.

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, added_doc.png, 
> outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279758#comment-16279758
 ] 

Weiwei Yang commented on YARN-7617:
---

Thanks for fixing the wording, [~asuresh].

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{promote_opportunistic_after_start=true}}. This is for the purpose of 
> demonstrating how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7617:
--
Description: Per discussion in YARN-7610, it would be good to add such a 
flag, e.g. {{promote_opportunistic_after_start=true}}. This is for the purpose 
of demonstrating how container promotion works.  (was: Per discussion in 
YARN-7610, it would be good to add such a flag, e.g. 
{{container_promote_on_start=true}}. This is for the purpose of demonstrating 
how container promotion works.)

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{promote_opportunistic_after_start=true}}. This is for the purpose of 
> demonstrating how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7617:
--
Summary: Add a flag in distributed shell to automatically PROMOTE 
opportunistic containers to guaranteed once they are started  (was: Add a flag 
in distributed shell to automatically prompt opportunistic containers to 
guaranteed once they are started)

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{container_prompt_on_start=true}}. This is for the purpose of demonstrating 
> how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279745#comment-16279745
 ] 

Arun Suresh edited comment on YARN-7617 at 12/6/17 6:49 AM:


Thanks for raising this [~cheersyang]...
I guess you mean "promote".
Let's name the flag "promote_opportunistic_after_start=true"


was (Author: asuresh):
I guess you mean "promote".
Let's name the flag "promote_opportunistic_after_start=true"

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{container_promote_on_start=true}}. This is for the purpose of demonstrating 
> how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279745#comment-16279745
 ] 

Arun Suresh commented on YARN-7617:
---

I guess you mean "promote".
Let's name the flag "promote_opportunistic_after_start=true"

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{container_promote_on_start=true}}. This is for the purpose of demonstrating 
> how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7617) Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7617:
--
Description: Per discussion in YARN-7610, it would be good to add such a 
flag, e.g. {{container_promote_on_start=true}}. This is for the purpose of 
demonstrating how container promotion works.  (was: Per discussion in 
YARN-7610, it would be good to add such a flag, e.g. 
{{container_prompt_on_start=true}}. This is for the purpose of demonstrating 
how container promotion works.)

> Add a flag in distributed shell to automatically PROMOTE opportunistic 
> containers to guaranteed once they are started
> -
>
> Key: YARN-7617
> URL: https://issues.apache.org/jira/browse/YARN-7617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>
> Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
> {{container_promote_on_start=true}}. This is for the purpose of demonstrating 
> how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7610:
--
Attachment: added_doc.png
outline_compare.png

The v2 patch contains the following updates:

# Rename {{OpportunisticContainers.md}} to {{OpportunisticContainers.md.vm}} so 
that we can get rid of the hard-coded versions. They are now replaced by 
{{$\{project.version\}}}.
# Replace the HTML refs in this doc with markdown syntax; without this change, 
the outline of the page would be messed up once the file is renamed to md.vm.
# Added a section under {{Running a Sample Job}} to introduce how to run a 
distributed shell job with a specified type of containers.

I have verified this doc change locally with {{mvn site}}; 
[^outline_compare.png] compares the outline before and after the patch, and 
[^added_doc.png] illustrates the section I added.
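
(A hedged sketch of that local verification, assuming the standard site build 
in the hadoop-yarn-site module; the exact module path and output location may 
vary:)

{noformat}
$ cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
$ mvn site
# then inspect target/site/OpportunisticContainers.html for the outline and the new section
{noformat}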

YARN-7617 was created as the follow-up on the flag for promoting containers. 
[~asuresh], please help to review.

Thanks

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch, added_doc.png, 
> outline_compare.png
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279724#comment-16279724
 ] 

Arun Suresh commented on YARN-7610:
---

Thanks for the update, [~cheersyang].

Since you are updating the OpportunisticContainers.md.vm file, can you also 
include the following 
[changes|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b]?
 I had put them in when we released 2.9.0, but forgot to cherry-pick them to 
branch-2.9 and above.

Since we are modifying the md to an md.vm, can you also run mvn site and post a 
png of the updated section of the generated html page here, so we can verify 
that it looks good?

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279724#comment-16279724
 ] 

Arun Suresh edited comment on YARN-7610 at 12/6/17 6:38 AM:


Thanks for the update, [~cheersyang].

Since you are updating the documentation, can you also include the following 
[changes|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b]?
 I had put them in when we released 2.9.0, but forgot to cherry-pick them to 
branch-2.9 and above.

Since we are modifying the md to an md.vm, can you also run mvn site and post a 
png of the updated section of the generated html page here, so we can verify 
that it looks good?


was (Author: asuresh):
Thanks for the update, [~cheersyang].

Since you are updating the OpportunisticContainers.md.vm file, can you also 
include the following 
[changes|https://github.com/apache/hadoop/commit/4e1fd7b022362ec7954b23d0ba6420dbc72a031b]?
 I had put them in when we released 2.9.0, but forgot to cherry-pick them to 
branch-2.9 and above.

Since we are modifying the md to an md.vm, can you also run mvn site and post a 
png of the updated section of the generated html page here, so we can verify 
that it looks good?

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7610:
--
Attachment: YARN-7610.002.patch

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch, YARN-7610.002.patch
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7617) Add a flag in distributed shell to automatically prompt opportunistic containers to guaranteed once they are started

2017-12-05 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7617:
-

 Summary: Add a flag in distributed shell to automatically prompt 
opportunistic containers to guaranteed once they are started
 Key: YARN-7617
 URL: https://issues.apache.org/jira/browse/YARN-7617
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: applications/distributed-shell
Affects Versions: 2.9.0
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Minor


Per discussion in YARN-7610, it would be good to add such a flag, e.g. 
{{container_prompt_on_start=true}}. This is for the purpose of demonstrating 
how container promotion works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279710#comment-16279710
 ] 

Weiwei Yang commented on YARN-7610:
---

I found that {{OpportunisticContainers.md}} was using hard-coded version names 
like the following:

{noformat}
hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha2-SNAPSHOT.jar
{noformat}

I will fix that as well in the next patch.
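
For illustration, after the rename to {{OpportunisticContainers.md.vm}} the same 
path can reference the Maven property, which {{mvn site}} substitutes at build 
time (a before/after sketch, assuming standard Velocity processing of md.vm 
files):

{noformat}
# before (hard-coded in OpportunisticContainers.md):
hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha2-SNAPSHOT.jar

# after (in OpportunisticContainers.md.vm, filled in during mvn site):
hadoop-${project.version}/share/hadoop/mapreduce/hadoop-mapreduce-examples-${project.version}.jar
{noformat}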

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7522:
-
Attachment: YARN-7522.YARN-6592.005.patch

Thanks for the reviews, [~asuresh]. Attached the ver.5 patch: fixed the NPE 
issue and moved all newly added classes to the .constraint package. (Node 
attributes from YARN-3409 should belong to the {{.nodeattribute}} package, so 
there's no conflict with YARN-3409.)

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.005.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596: YARN-6596 is targeted at adding a constraint 
> manager to store intra-/inter-application placement constraints, whereas this 
> JIRA is targeted at supporting the storage of maps between 
> container-tags/applications and nodes. This will be required by the 
> affinity/anti-affinity implementation and by cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-05 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279684#comment-16279684
 ] 

Manikandan R commented on YARN-7119:


Thanks. Attached a new patch.

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-05 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-7119:
---
Attachment: YARN-7119.009.patch

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch, YARN-7119.009.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7611) Node manager web UI should display container type in containers page

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279661#comment-16279661
 ] 

Hudson commented on YARN-7611:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #1 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/1/])
YARN-7611. Node manager web UI should display container type in (wwei: rev 
05c347fe51c01494ed8110f8f116a01c90205f13)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/AllContainersPage.java


> Node manager web UI should display container type in containers page
> 
>
> Key: YARN-7611
> URL: https://issues.apache.org/jira/browse/YARN-7611
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, webapp
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7611.001.patch, YARN-7611.002.patch, 
> after_patch.png, before_patch.png
>
>
> Currently the node manager UI page 
> [http://<nm-address>:<port>/node/allContainers] lists all containers, but 
> it doesn't contain an {{ExecutionType}} column. To figure out the type, the 
> user has to click each container link, which is quite cumbersome. We should 
> add a column to display this info to give a more straightforward view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7611) Node manager web UI should display container type in containers page

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279636#comment-16279636
 ] 

Weiwei Yang commented on YARN-7611:
---

The result seems good; the UT failure is unrelated, as this is a pure front-end 
change. I will commit this shortly.

> Node manager web UI should display container type in containers page
> 
>
> Key: YARN-7611
> URL: https://issues.apache.org/jira/browse/YARN-7611
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, webapp
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7611.001.patch, YARN-7611.002.patch, 
> after_patch.png, before_patch.png
>
>
> Currently the node manager UI page 
> [http://<nm-address>:<port>/node/allContainers] lists all containers, but 
> it doesn't contain an {{ExecutionType}} column. To figure out the type, the 
> user has to click each container link, which is quite cumbersome. We should 
> add a column to display this info to give a more straightforward view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4249) Many options in "yarn application" command is not documented

2017-12-05 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned YARN-4249:
---

Assignee: (was: nijel)

> Many options in "yarn application" command is not documented
> 
>
> Key: YARN-4249
> URL: https://issues.apache.org/jira/browse/YARN-4249
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: nijel
>
> In the document only a few options are specified:
> {code}
> Usage: `yarn application [options] `
> | COMMAND\_OPTIONS | Description |
> |: |: |
> | -appStates \<States\> | Works with -list to filter applications based on 
> input comma-separated list of application states. The valid application state 
> can be one of the following: ALL, NEW, NEW\_SAVING, SUBMITTED, ACCEPTED, 
> RUNNING, FINISHED, FAILED, KILLED |
> | -appTypes \<Types\> | Works with -list to filter applications based on 
> input comma-separated list of application types. |
> | -list | Lists applications from the RM. Supports optional use of -appTypes 
> to filter applications based on application type, and -appStates to filter 
> applications based on application state. |
> | -kill \<ApplicationId\> | Kills the application. |
> | -status \<ApplicationId\> | Prints the status of the application. |
> {code}
> Some options are missing, like:
> -appId            Specify Application Id to be operated
> -help             Displays help for all commands.
> -movetoqueue      Moves the application to a different queue.
> -queue            Works with the movetoqueue command to specify 
> which queue to move an application to.
> -updatePriority   Update priority of an 
> application. ApplicationId can be passed using 'appId' option.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7611) Node manager web UI should display container type in containers page

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279619#comment-16279619
 ] 

genericqa commented on YARN-7611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 11s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7611 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900792/YARN-7611.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6d9fc4ae9095 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 44b06d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18805/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18805/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5000) |
| modules | C

[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279602#comment-16279602
 ] 

Arun Suresh commented on YARN-7610:
---

Makes sense - we can track that in another JIRA

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch
>
>
> Per the doc at 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
> users can run some of a PI job's mappers as OPPORTUNISTIC containers. Similarly, 
> I propose extending Distributed Shell to support specifying the container type; 
> this will be very helpful for testing. Propose adding the following argument:
> {code}
> $ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type, GUARANTEED or OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default 
> type is {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279585#comment-16279585
 ] 

Weiwei Yang commented on YARN-7610:
---

Hi [~asuresh]

bq. Can you also update the doc as well ?

Sure, I will add some doc to {{OpportunisticContainers.md}} in next patch.

bq. If the flag is enabled, the distributed shell AM will first ask for opp 
containers like you have done in the patch, and once it starts, the AM will 
send an update request to the RM to promote the containers to guaranteed.

I am perfectly OK with this, by adding another flag, e.g. 
{{container_prompt_on_start=true}}, with a default value of false. But if 
possible, can we track this in another JIRA, as it is less important? I want to 
get this committed ASAP so the rest of my team members can easily try it out.

Please let me know your thoughts, thanks.
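
For reference, a minimal sketch of the promotion flow discussed above, using the 
container update API (a sketch only, assuming the AM keeps a handle on the 
started container; not the actual distributed shell AM code):

{code}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

// Once an OPPORTUNISTIC container has started, ask the RM to promote it.
void promoteToGuaranteed(AMRMClientAsync<?> amRmClient, Container container) {
  UpdateContainerRequest request = UpdateContainerRequest.newInstance(
      container.getVersion(),                    // version of the container to update
      container.getId(),
      ContainerUpdateType.PROMOTE_EXECUTION_TYPE,
      null,                                      // no resource change
      ExecutionType.GUARANTEED);                 // target execution type
  amRmClient.requestContainerUpdate(container, request);
}
{code}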

> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch
>
>
> Per doc in 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  users can run some of the PI job mappers as O containers. Similarly, we propose to 
> extend distributed shell to support specifying the container type; this will be 
> very helpful for testing. We propose to add the following argument:
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launched as the 
> user-specified container type (except for the AM); if not given, the default type is 
> {{GUARANTEED}}. The AM is always launched as a {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7607) Remove the trailing duplicated timestamp in container diagnostics message

2017-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279564#comment-16279564
 ] 

Weiwei Yang commented on YARN-7607:
---

Hi [~ajisakaa], could you help to review this patch? Thanks

> Remove the trailing duplicated timestamp in container diagnostics message
> -
>
> Key: YARN-7607
> URL: https://issues.apache.org/jira/browse/YARN-7607
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: log
> Attachments: YARN-7607.001.patch
>
>
> Some container diagnostic messages are currently malformed, like the example below:
> ###
> 2017-12-05 11:43:21,319 INFO mapreduce.Job:  map 28% reduce 0%
> 2017-12-05 11:43:22,345 INFO mapreduce.Job: Task Id : 
> attempt_1512384455800_0003_m_12_0, Status : FAILED
> \[2017-12-05 11:43:21.265\]Container Killed to make room for Guaranteed 
> Container{color:red}\[2017-12-05 11:43:21.265\] {color}
> \[2017-12-05 11:43:21.265\]Container is killed before being launched.
> ###
> Such logs are presented both in the console and in the RM UI; we need to remove the 
> duplicated trailing timestamp from the log message. This is due to the 
> misuse of the {{addDiagnostics}} function in these places.
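
For illustration, a hypothetical sketch (not the actual NM code) of how a 
timestamp-per-fragment {{addDiagnostics}}-style helper produces the dangling 
trailing timestamp when one message is split across fragments:

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public class DiagnosticsDemo {
  private final StringBuilder diagnostics = new StringBuilder();

  // Hypothetical helper in the spirit of addDiagnostics: every fragment
  // passed in gets its own timestamp prefix.
  void addDiagnostics(String... fragments) {
    String ts = "[" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")
        .format(new Date()) + "]";
    for (String fragment : fragments) {
      diagnostics.append(ts).append(fragment);
    }
  }

  public static void main(String[] args) {
    DiagnosticsDemo demo = new DiagnosticsDemo();
    // Mis-use: splitting one message into two fragments stamps it twice,
    // leaving a dangling timestamp at the end of the first line.
    demo.addDiagnostics(
        "Container Killed to make room for Guaranteed Container", "\n");
    demo.addDiagnostics("Container is killed before being launched.\n");
    System.out.print(demo.diagnostics);
  }
}
{code}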



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7611) Node manager web UI should display container type in containers page

2017-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7611:
--
Attachment: YARN-7611.002.patch

Thanks [~asuresh] for the review. Somehow the Jenkins job was not triggered, so 
I am re-submitting the patch. Once I get a clean Jenkins result, I'll get this 
committed, thanks.

> Node manager web UI should display container type in containers page
> 
>
> Key: YARN-7611
> URL: https://issues.apache.org/jira/browse/YARN-7611
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, webapp
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7611.001.patch, YARN-7611.002.patch, 
> after_patch.png, before_patch.png
>
>
> Currently the node manager UI page 
> [http://<nm-host>:<nm-port>/node/allContainers] lists all containers, but 
> it doesn't contain an {{ExecutionType}} column. To figure out the type, the user has 
> to click each container link, which is quite cumbersome. We should add a 
> column to display this info and give a more straightforward view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279525#comment-16279525
 ] 

genericqa commented on YARN-6483:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6483 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6483 |
| GITHUB PR | https://github.com/apache/hadoop/pull/289 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18804/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.
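
For illustration, a minimal sketch of how an AM could consume such updates 
through the async AM-RM client callback ({{blacklistNode}} is a hypothetical 
application-side helper, not part of the YARN API):

{code}
import java.util.List;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;

// Fragment of an AMRMClientAsync.CallbackHandler: the RM reports updated
// nodes in the allocate response, surfaced here via onNodesUpdated().
public void onNodesUpdated(List<NodeReport> updatedNodes) {
  for (NodeReport report : updatedNodes) {
    if (report.getNodeState() == NodeState.DECOMMISSIONING) {
      // Application-level blacklisting, e.g. stop scheduling new work on
      // executors running on this node (hypothetical helper).
      blacklistNode(report.getNodeId());
    }
  }
}
{code}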



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6483:
--
Attachment: (was: YARN-6483-branch-3.0.addendum.patch)

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6483:
--
Attachment: YARN-6483.branch-3.0.addendum.patch

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch, YARN-6483.branch-3.0.addendum.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279514#comment-16279514
 ] 

genericqa commented on YARN-6483:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6483 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6483 |
| GITHUB PR | https://github.com/apache/hadoop/pull/289 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18803/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-branch-3.0.addendum.patch, YARN-6483-v1.patch, 
> YARN-6483.002.patch, YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6483:
--
Target Version/s: 3.1.0, 3.0.1  (was: 3.1.0)

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-branch-3.0.addendum.patch, YARN-6483-v1.patch, 
> YARN-6483.002.patch, YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6483:
--
Attachment: YARN-6483-branch-3.0.addendum.patch

Attaching addendum patch for branch-3.0

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-branch-3.0.addendum.patch, YARN-6483-v1.patch, 
> YARN-6483.002.patch, YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reopened YARN-6483:
---

Re-opened to fix branch-3.0

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-branch-3.0.addendum.patch, YARN-6483-v1.patch, 
> YARN-6483.002.patch, YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279502#comment-16279502
 ] 

genericqa commented on YARN-7556:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 43 unchanged - 1 fixed = 43 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 27s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 33s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 37s{color} 
| {color:red} hadoop-yarn-server-resourceman

[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279500#comment-16279500
 ] 

genericqa commented on YARN-6704:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 10s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6704 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900772/YARN-6704.v8.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 27c19322ecce 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0311cf0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://

[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279497#comment-16279497
 ] 

Robert Kanter commented on YARN-6483:
-

Sounds good.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-05 Thread Gour Saha (JIRA)
Gour Saha created YARN-7616:
---

 Summary: App status does not return state STABLE for a running and 
stable service
 Key: YARN-7616
 URL: https://issues.apache.org/jira/browse/YARN-7616
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha


{{state}} currently returns null for a running and stable service. It looks like the 
code does not return ServiceState.STABLE under any circumstance. We will need to 
wire this in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-05 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7616:
---

Assignee: Gour Saha

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
>
> {{state}} currently returns null for a running and stable service. It looks like the 
> code does not return ServiceState.STABLE under any circumstance. We will need to 
> wire this in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279485#comment-16279485
 ] 

genericqa commented on YARN-7565:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 44s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900768/YARN-7565.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d774ffb3c671 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0311cf0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java 

[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279480#comment-16279480
 ] 

Arun Suresh commented on YARN-6483:
---

bq. .. is to simply remove (or update to not rely on XML) just the problematic 
test in branch-3.0.
I was thinking the same, given that the feature itself is working and is tested 
by the other testcases in this patch. I vote we add an addendum patch against 
this JIRA for branch-3.0 where we Ignore the test, and then create a new JIRA 
to fix this. Thoughts?

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279477#comment-16279477
 ] 

Arun Suresh commented on YARN-7522:
---

[~leftnoteasy], the errors are due to this line:

{code}
if (allocationTags != null || !allocationTags.isEmpty()) {
{code}

It should be an {{&&}}, not an {{||}}.
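
For the record, a minimal demonstration of why the {{||}} guard misbehaves 
(assuming {{allocationTags}} is a {{java.util.Set<String>}}; the class and 
method names here are illustrative, not the patch's code):

{code}
import java.util.Collections;
import java.util.Set;

public class GuardDemo {
  static boolean shouldProcess(Set<String> allocationTags) {
    // Buggy: with ||, a null argument falls through to the right-hand side
    // and throws NullPointerException, while a non-null empty set
    // short-circuits on the left and is wrongly accepted.
    // return allocationTags != null || !allocationTags.isEmpty();

    // Correct: && evaluates isEmpty() only after the null check passes,
    // rejecting both null and empty sets.
    return allocationTags != null && !allocationTags.isEmpty();
  }

  public static void main(String[] args) {
    System.out.println(shouldProcess(null));                        // false
    System.out.println(shouldProcess(Collections.emptySet()));      // false
    System.out.println(shouldProcess(Collections.singleton("x")));  // true
  }
}
{code}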

One other comment:
We should move it out of the o.a.h.y.resourcemanager.placement package to 
somewhere else (since that package is already used for app placement).

I am +1 on the patch pending the above and Jenkins.

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279476#comment-16279476
 ] 

Daniel Templeton commented on YARN-7556:


Test failures are all unrelated.

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch, 
> YARN-7556.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279451#comment-16279451
 ] 

genericqa commented on YARN-7556:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 44 unchanged - 1 fixed = 44 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  0s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 17s{color} 
| {color:red} hadoop-yarn-server-resourceman

[jira] [Comment Edited] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-05 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279381#comment-16279381
 ] 

Chandni Singh edited comment on YARN-7565 at 12/6/17 12:26 AM:
---

Patch 2
- Addressed [~jianhe]'s comments
- Instead of checking repeatedly if containers are recovered, made the change 
in ServiceScheduler to check just once whether the unRecovered containers are still 
there. If they have not been recovered by then, the container is released 
and the component instance is added to the pending queue.
- Added another test


was (Author: csingh):
Patch 2
- Addressed [~jianhe]'s comments
- Instead of checking repeatedly if containers are recovered, made the change 
in ServiceScheduler to check just once if the unRecovered containers are 
available.
- Added another test

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases the containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in the 
> heartbeat response. Once a container has not been reported within the configured 
> interval, it can be released by the master.
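
For illustration, a minimal sketch of the wait-then-release idea (the class 
shape, bookkeeping, and re-queue step are assumptions, not the patch's actual 
code):

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

class RecoveryWatchdog {
  private final Set<ContainerId> unRecovered = ConcurrentHashMap.newKeySet();
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

  // Called once at AM restart, after seeding unRecovered with the containers
  // we expect the RM to report back.
  void awaitRecovery(AMRMClientAsync<?> amRmClient, long timeoutMillis) {
    executor.schedule(() -> {
      // Whatever is still unrecovered after the configured interval gets
      // released; its component instance would be re-queued (not shown).
      for (ContainerId id : unRecovered) {
        amRmClient.releaseAssignedContainer(id);
      }
    }, timeoutMillis, TimeUnit.MILLISECONDS);
  }

  // Called from the heartbeat path when the RM reports a recovered container.
  void onContainerRecovered(ContainerId id) {
    unRecovered.remove(id);
  }
}
{code}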



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-12-05 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6704:
---
Attachment: YARN-6704.v8.patch

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch, 
> YARN-6704-YARN-2915.v2.patch, YARN-6704.v3.patch, YARN-6704.v4.patch, 
> YARN-6704.v5.patch, YARN-6704.v6.patch, YARN-6704.v7.patch, YARN-6704.v8.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. {{AMRMProxy}} restart is added in YARN-6127. In a Federated YARN 
> environment, there's additional state in the {{FederationInterceptor}} to 
> allow for spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5871) Add support for reservation-based routing.

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279403#comment-16279403
 ] 

genericqa commented on YARN-5871:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-5871 does not apply to YARN-2915. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5871 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841137/YARN-5871-YARN-2915.04.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18801/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for reservation-based routing.
> --
>
> Key: YARN-5871
> URL: https://issues.apache.org/jira/browse/YARN-5871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: federation
> Attachments: YARN-5871-YARN-2915.01.patch, 
> YARN-5871-YARN-2915.01.patch, YARN-5871-YARN-2915.02.patch, 
> YARN-5871-YARN-2915.03.patch, YARN-5871-YARN-2915.04.patch
>
>
> Adding policies that can route reservations, and that then route applications 
> to where the reservations have been placed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5871) Add support for reservation-based routing.

2017-12-05 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5871:
-
Parent Issue: YARN-7402  (was: YARN-5597)

> Add support for reservation-based routing.
> --
>
> Key: YARN-5871
> URL: https://issues.apache.org/jira/browse/YARN-5871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: federation
> Attachments: YARN-5871-YARN-2915.01.patch, 
> YARN-5871-YARN-2915.01.patch, YARN-5871-YARN-2915.02.patch, 
> YARN-5871-YARN-2915.03.patch, YARN-5871-YARN-2915.04.patch
>
>
> Adding policies that can route reservations, and that then route applications 
> to where the reservations have been placed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7615) Federation StateStore: support storage/retrieval of reservations

2017-12-05 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-7615:
--

 Summary: Federation StateStore: support storage/retrieval of 
reservations
 Key: YARN-7615
 URL: https://issues.apache.org/jira/browse/YARN-7615
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7614) Support Reservation APIs in Federation Router

2017-12-05 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-7614:
--

 Summary: Support Reservation APIs in Federation Router
 Key: YARN-7614
 URL: https://issues.apache.org/jira/browse/YARN-7614
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279389#comment-16279389
 ] 

Eric Yang commented on YARN-7540:
-

[~billie.rinaldi] 
# I will reduce config object.
# I will reuse RMHAUtils.getRMHAWebappAddress.
# EnableFastLaunch seems to be a problem because the REST API server doesn't 
have this functionality.  I am more inclined to fix this by having the Resource 
Manager check at startup whether the required libraries are uploaded to HDFS, 
and remove the enableFastLaunch option (see the sketch after this list).  
Thoughts?
# I will fix the conflict introduced via YARN-6669.
# I agree that NATIVE_TYPE is confusing, and we need this special type for unit 
tests.  Therefore, I am inclined to rename NATIVE_TYPE to "unit-test" type for 
correctness.  
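For item 3, a minimal sketch of what that startup check could look like, 
assuming a hypothetical helper invoked during RM service start (the library 
path, class, and method names are illustrative, not from any patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ServiceLibraryCheck {
  // Hypothetical HDFS location of the service dependency libraries.
  private static final String SERVICE_LIB_DIR = "/yarn-services/lib";

  /**
   * Returns true if the required service libraries are already on HDFS.
   * Intended to run once at Resource Manager startup, replacing the
   * client-side enableFastLaunch step.
   */
  public static boolean librariesUploaded(Configuration conf)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    return fs.exists(new Path(SERVICE_LIB_DIR));
  }
}
{code}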

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the 
> REST API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS 
> home directories.  For consistency, we want the yarn app CLI to interact 
> with the API service to manage applications.  For performance, it is easier 
> to list all applications from one user's home directory instead of crawling 
> all users' home directories.  For security, it is safer to access only one 
> user's home directory instead of all of them.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API that 
> resides in the RM, and the RM performs the persistence and operations to 
> launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279389#comment-16279389
 ] 

Eric Yang edited comment on YARN-7540 at 12/5/17 11:43 PM:
---

[~billie.rinaldi] Thank you for the review.
# I will reduce config object.
# I will reuse RMHAUtils.getRMHAWebappAddress.
# EnableFastLaunch seems to be a problem because the REST API server doesn't 
have this functionality.  I am more inclined to fix this by having the Resource 
Manager check at startup whether the required libraries are uploaded to HDFS, 
and remove the enableFastLaunch option.  Thoughts?
# I will fix the conflict introduced via YARN-6669.
# I agree that NATIVE_TYPE is confusing, and we need this special type for unit 
tests.  Therefore, I am inclined to rename NATIVE_TYPE to "unit-test" type for 
correctness.  


was (Author: eyang):
[~billie.rinaldi] 
# I will reduce config object.
# I will reuse RMHAUtils.getRMHAWebappAddress.
# EnableFastLaunch seems to be a problem because the REST API server doesn't 
have this functionality.  I am more inclined to fix this by having the Resource 
Manager check at startup whether the required libraries are uploaded to HDFS, 
and remove the enableFastLaunch option.  Thoughts?
# I will fix the conflict introduced via YARN-6669.
# I agree that NATIVE_TYPE is confusing, and we need this special type for unit 
tests.  Therefore, I am inclined to rename NATIVE_TYPE to "unit-test" type for 
correctness.  

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the 
> REST API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS 
> home directories.  For consistency, we want the yarn app CLI to interact 
> with the API service to manage applications.  For performance, it is easier 
> to list all applications from one user's home directory instead of crawling 
> all users' home directories.  For security, it is safer to access only one 
> user's home directory instead of all of them.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API that 
> resides in the RM, and the RM performs the persistence and operations to 
> launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279386#comment-16279386
 ] 

genericqa commented on YARN-7522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 247 unchanged - 0 fixed = 257 total (was 247) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Null pointer dereference of allocationTags in 
org.apache.hadoop.yarn.server.resourcemanager.placement.AllocationTagsManager.removeContainer(NodeId,
 ApplicationId, ContainerId, Set)  Dereferenced at 
AllocationTagsManager.java:in 
org.apache.hadoop.yarn.server.resourcemanager.placement.AllocationTagsManager.removeContainer(NodeId,
 ApplicationId, ContainerId, Set)  Dereferenced at 
AllocationTagsManager.java:[line 281] |
|  |  Load of known null value in 
org.apache.hadoop.yarn.server.resourcemanager.placement.AllocationTagsManager.removeContainer(NodeId,
 ApplicationId, ContainerId, Set)  At AllocationTagsManager.java:in 
org.apache.hadoop.yarn.server.resourcemanager.placement.AllocationTagsManager.removeContainer(NodeId,
 ApplicationId, ContainerId, Set)  At AllocationTagsManager.java:[line 281] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanage
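Both FindBugs warnings above point at the same spot: {{removeContainer}} 
dereferences {{allocationTags}} on a path where it is known to be null. A guard 
of the following shape (illustrative only, not the actual fix in the patch) 
would satisfy FindBugs:

{code:java}
// Sketch of a null-safe variant; parameter names follow the FindBugs
// report, the surrounding class structure is assumed.
void removeContainer(NodeId nodeId, ApplicationId applicationId,
    ContainerId containerId, Set<String> allocationTags) {
  if (allocationTags == null || allocationTags.isEmpty()) {
    return; // nothing to remove for this container
  }
  for (String tag : allocationTags) {
    // decrement the per-node and per-application counters for this tag
  }
}
{code}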

[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-05 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7565:

Attachment: YARN-7565.002.patch

Patch 2
- Addressed [~jianhe]'s comments
- Instead of checking repeatedly if containers are recovered, made the change 
in ServiceScheduler to check just once if the unRecovered containers are 
available.
- Added another test

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is still not reported within the 
> configured interval, the master can then release it.
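A minimal sketch of that wait-then-release idea, assuming hypothetical tracker 
and release helpers on the Service Master side (names are illustrative, not 
from the patch):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PendingRecoveryTracker {
  private final Map<String, Long> pending = new ConcurrentHashMap<>();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final long recoveryTimeoutMs;

  public PendingRecoveryTracker(long recoveryTimeoutMs) {
    this.recoveryTimeoutMs = recoveryTimeoutMs;
  }

  /** Register a container the RM is expected to report via heartbeat. */
  public void expect(String containerId) {
    pending.put(containerId, System.currentTimeMillis());
    scheduler.schedule(() -> {
      // Release only if the container was never reported back in time.
      if (pending.remove(containerId) != null) {
        release(containerId);
      }
    }, recoveryTimeoutMs, TimeUnit.MILLISECONDS);
  }

  /** Called when a heartbeat response reports the container as recovered. */
  public void containerRecovered(String containerId) {
    pending.remove(containerId);
  }

  private void release(String containerId) {
    // hypothetical hook, e.g. amRMClient.releaseAssignedContainer(...)
  }
}
{code}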



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-12-05 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279363#comment-16279363
 ] 

Zian Chen commented on YARN-7274:
-

Hi [~leftnoteasy], looks like the failed cases were not introduced by this 
patch. Could you help review the latest patch and see if we need any 
improvement for this? Thank you so much!

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
> Attachments: YARN-7274.2.patch, YARN-7274.wip.1.patch
>
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity.<queue-path>.maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100%, but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed by looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity.<queue-path>.capacity for all queues, at 
> each level, must be equal to 100, but 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity is actually 
> a percentage of the entire cluster, not just the parent queue.  Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at the leaf queue level.
> This improvement proposes that YARN allow elasticity to be disabled at the 
> leaf queue level even when a parent queue permits elasticity by having a 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity greater than 
> its yarn.scheduler.capacity.<queue-path>.capacity.
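As a concrete (hypothetical) illustration of the constraint: for a leaf queue 
{{root.analytics}} directly under root, where the queue and cluster percentages 
coincide, the closest available workaround is pinning maximum-capacity to 
capacity:

{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>30</value>
</property>
<property>
  <!-- Pinning maximum-capacity to capacity removes elasticity for this
       queue, but as described above it cannot be set below capacity. -->
  <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
  <value>30</value>
</property>
{code}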



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-12-05 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279354#comment-16279354
 ] 

Subru Krishnan commented on YARN-6704:
--

Thanks [~botong] for updating the patch and for the clarification.

{code}I've changed the UAM token storage to use local NMSS instead when 
AMRMProxy HA is not enabled. {code}

Can you update the test to assert for the above and we are good to go!

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch, 
> YARN-6704-YARN-2915.v2.patch, YARN-6704.v3.patch, YARN-6704.v4.patch, 
> YARN-6704.v5.patch, YARN-6704.v6.patch, YARN-6704.v7.patch
>
>
> YARN-1336 added the ability to restart NM without losing any running 
> containers. {{AMRMProxy}} restart is added in YARN-6127. In a Federated YARN 
> environment, there's additional state in the {{FederationInterceptor}} to 
> allow for spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279319#comment-16279319
 ] 

genericqa commented on YARN-7520:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 44 new + 78 unchanged - 26 fixed = 122 total (was 104) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy$PriorityQueueComparator.compare(CSQueue,
 CSQueue) incorrectly handles float value  At 
PriorityUtilizationQueueOrderingPolicy.java:value  At 
PriorityUtilizationQueueOrderingPolicy.java:[line 123] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900644/YARN-7520.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5bf8a80c8b6d 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86
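On the FindBugs float-handling warning flagged above in 
PriorityUtilizationQueueOrderingPolicy: a comparator that turns a float 
difference into an int can violate the {{compare}} contract. A safe pattern (a 
sketch, not the actual patch) is to delegate to {{Float.compare}}:

{code:java}
import java.util.Comparator;

final class UsageComparator implements Comparator<Float> {
  // Float.compare avoids the truncation and contract pitfalls that
  // FindBugs flags when a float difference is cast to int.
  @Override
  public int compare(Float left, Float right) {
    return Float.compare(left, right);
  }
}
{code}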

[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279304#comment-16279304
 ] 

Wangda Tan commented on YARN-7612:
--

Thanks [~asuresh]. Will take a look today or tomorrow. 

+[~sunilg] as well.

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7223) Document GPU isolation feature

2017-12-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Attachment: YARN-7223.wip.001.pdf

Attached a PDF of the (wip-001) patch for easier review.

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7223) Document GPU isolation feature

2017-12-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Attachment: YARN-7223.wip.001.patch

Attached patch (WIP).

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7223.wip.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor and planner framework

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Description: 
This introduces a Placement Processor and a Planning algorithm framework to 
handle placement constraints and scheduling requests from an app and places 
them on nodes.

The actual planning algorithm(s) will be handled in YARN-7613.

  was:
This introduces a Placement Processor and a Planning algorithm framework to 
handle placement constraints and scheduling requests from an app and places 
them on nodes.

The actual planning algorithm(s) will be handled in a separate JIRA.


> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor and planner framework

2017-12-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612.wip.patch

Attaching a proof of concept patch.
It applies cleanly on top of the latest YARN-7522 (Allocation tags manager) 
patch.

* It adds a placeholder stub (and a default simple implementation) for the 
Placement Constraint Manager - which we can flesh out in YARN-6596.
* It also adds a placeholder and simple implementation of a Planning Algorithm 
which we can flesh out in YARN-7613.
* It also introduces a new API in the ResourceScheduler to handle Scheduling 
Requests - this uses the ResourceCommitRequest introduced by [~leftnoteasy].

[~leftnoteasy] / [~kkaranasos] / [~pg1...@imperial.ac.uk] - do take a look.

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a separate JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7613) Implement Planning algorithms for rich placement

2017-12-05 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7613:
-

 Summary: Implement Planning algorithms for rich placement
 Key: YARN-7613
 URL: https://issues.apache.org/jira/browse/YARN-7613
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Panagiotis Garefalakis






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279280#comment-16279280
 ] 

Wangda Tan commented on YARN-7242:
--

Hi [~GergelyNovak], do you have the bandwidth to update the patch? This will be 
important for our testing; we can help with reviews.
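For illustration, an invocation along these lines (the {{-container_resources}} 
flag name and value syntax are hypothetical; the patch defines the actual 
arguments):

{code}
$ ./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar <ds-jar> -shell_command "sleep 60" \
    -container_resources memory-mb=2048,vcores=2
{code}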

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch
>
>
> Currently, DS supports specifying a resource profile; it's better to also 
> allow users to directly specify resource keys/values from the command line.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279277#comment-16279277
 ] 

Robert Kanter commented on YARN-7577:
-

The changes overall seem fine to me, but we should subclass 
{{ParameterizedSchedulerTestBase}} instead of doing the parameterization here 
again.  Plus, you can then call {{getSchedulerType()}} instead of using 
{{instanceof}}.
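A sketch of the suggested structure, assuming the base class exposes 
{{getSchedulerType()}} and a scheduler-type enum as described above (exact 
names may differ):

{code:java}
import org.junit.Test;

// Sketch: reuse the existing parameterized base instead of re-declaring
// the @Parameterized machinery in this test class.
public class TestAMRestart extends ParameterizedSchedulerTestBase {

  @Test
  public void testPreemptedAMRestartOnRMRestart() throws Exception {
    if (getSchedulerType() == SchedulerType.FAIR) {
      // Fair Scheduler reports a different container exit status on
      // preemption, so branch the assertion here instead of using
      // instanceof checks against the scheduler.
    }
    // common test body
  }
}
{code}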

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers:
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7612) Add Placement Processor and planner framework

2017-12-05 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7612:
-

 Summary: Add Placement Processor and planner framework
 Key: YARN-7612
 URL: https://issues.apache.org/jira/browse/YARN-7612
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh


This introduces a Placement Processor and a Planning algorithm framework to 
handle placement constraints and scheduling requests from an app and places 
them on nodes.

The actual planning algorithm(s) will be handled in a separate JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled by default in yarn-default.xml

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279216#comment-16279216
 ] 

Hudson commented on YARN-7381:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13330 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13330/])
YARN-7381. Enable the configuration: (wangda: rev 
0311cf05358cd75388f48f048c44fba52ec90f00)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
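The change itself amounts to flipping the default in yarn-default.xml to an 
entry like:

{code:xml}
<property>
  <name>yarn.nodemanager.log-container-debug-info.enabled</name>
  <value>true</value>
</property>
{code}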


> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled 
> by default in yarn-default.xml
> --
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: YARN-7381.1.patch
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7443) Add native FPGA module support to do isolation with cgroups

2017-12-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279194#comment-16279194
 ] 

Wangda Tan commented on YARN-7443:
--

Thanks [~tangzhankun], the latest patch looks good; I will commit by the end of 
this week if there are no objections.

> Add native FPGA module support to do isolation with cgroups
> ---
>
> Key: YARN-7443
> URL: https://issues.apache.org/jira/browse/YARN-7443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-7443-trunk.001.patch, YARN-7443-trunk.002.patch, 
> YARN-7443-trunk.003.patch, YARN-7443-trunk.004.patch
>
>
> Only devices under a single configured major number in c-e.cfg are supported 
> for now, so this is almost the same as the GPU native module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279185#comment-16279185
 ] 

Daniel Templeton commented on YARN-7119:


Looks good to me, except one tiny issue, which might not even be an issue.  In 
the javadoc for {{parseResourceValue()}}, you use '>'.  The javadoc will be 
converted into HTML, and it won't automatically replace that character with 
{{&gt;}}.  I don't remember if HTML parsers will ignore a closing angle bracket 
without an opening angle bracket, but to be safe, I'd either use the HTML code 
or find another way to say it.
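For reference, the escaped form inside javadoc (the method body and signature 
here are illustrative, not the patch's code):

{code:java}
/**
 * Parses the value portion of a "name=value" resource argument.
 * The value must be &gt;= 0 (note the HTML entity; a bare '>' may not
 * survive the javadoc-to-HTML conversion cleanly).
 */
static long parseResourceValue(String value) {
  long parsed = Long.parseLong(value.trim());
  if (parsed < 0) {
    throw new IllegalArgumentException(
        "Resource value must be >= 0: " + value);
  }
  return parsed;
}
{code}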

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled by default in yarn-default.xml

2017-12-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7381:
-
Summary: Enable the configuration: 
yarn.nodemanager.log-container-debug-info.enabled by default in 
yarn-default.xml  (was: Enable the configuration: 
yarn.nodemanager.log-container-debug-info.enabled)

> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled 
> by default in yarn-default.xml
> --
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7381.1.patch
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7556:
---
Attachment: YARN-7556.006.patch

Reposting patch 6 because I think it might have been incomplete.
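For context, the goal is for fair-scheduler.xml queue elements to accept 
resource-type entries along these lines (the syntax is illustrative; the patch 
defines the accepted grammar):

{code:xml}
<queue name="research">
  <!-- Traditional form: memory and vcores only. -->
  <minResources>2048 mb, 2 vcores</minResources>
  <!-- Resource-type form this JIRA adds (illustrative syntax). -->
  <maxResources>memory-mb=16384, vcores=8</maxResources>
</queue>
{code}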

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch, 
> YARN-7556.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7556:
---
Attachment: (was: YARN-7556.006.patch)

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7556:
---
Attachment: YARN-7556.006.patch

Whoops.  Used the wrong branch.  Here's a trunk patch.

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch, 
> YARN-7556.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7443) Add native FPGA module support to do isolation with cgroups

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279094#comment-16279094
 ] 

genericqa commented on YARN-7443:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 47s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 37s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900695/YARN-7443-trunk.004.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c9f63af1e49a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3150c01 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18791/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18791/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18791/testReport/ |
| Max. process+thread count | 832 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/h

[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279046#comment-16279046
 ] 

Robert Kanter commented on YARN-6483:
-

The other option, if we don't want to completely revert this from branch-3.0, 
is to simply remove (or update so that it does not rely on XML) just the 
problematic test in branch-3.0.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.
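On the AM side, consuming the proposed signal would look roughly like this (a 
sketch; it assumes the updated-nodes list carries DECOMMISSIONING reports as 
described, and {{blacklistAtApplicationLevel}} is a hypothetical application 
hook):

{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;

// Inside the AM heartbeat handling loop:
void handleUpdatedNodes(AllocateResponse response) {
  for (NodeReport report : response.getUpdatedNodes()) {
    if (report.getNodeState() == NodeState.DECOMMISSIONING) {
      // e.g. tell the application-level scheduler to stop assigning
      // new tasks to this node
      blacklistAtApplicationLevel(report.getNodeId());
    }
  }
}
{code}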



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279047#comment-16279047
 ] 

genericqa commented on YARN-7522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 32m  
5s{color} | {color:red} root in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-6592 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
16s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 247 unchanged - 0 fixed = 257 total (was 247) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
37s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900721/YARN-7522.YARN-6592.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0c639d8b9ce9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 2d5d3f1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/18794/artifact/out/branch-mvninstall-root.txt
 |
| compile | 

[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM

2017-12-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279006#comment-16279006
 ] 

Robert Kanter commented on YARN-6483:
-

YARN-7162 is the one that actually removes the XML parsing code.  There are 
more details on YARN-7162, but in a nutshell, we didn't want to get locked into 
supporting this exact XML formatting for the excludes file, because it could 
change once YARN-5536 is completed, which aims to add a JSON format, and make 
the format pluggable.  Not shipping the current XML format in 3.0 allows us to 
do that.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned to the AM
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Juan Rodríguez Hortalá
>Assignee: Juan Rodríguez Hortalá
> Fix For: 3.1.0
>
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, 
> YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched on those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7610) Extend Distributed Shell to support launching job with opportunistic containers

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278971#comment-16278971
 ] 

Arun Suresh commented on YARN-7610:
---

Thanks for taking a stab at this [~cheersyang], much appreciated!
A couple of comments:
* Can you also update the doc?
* I was hoping to demonstrate container promotion as well. I was thinking of 
adding a flag to promote opportunistic containers. If the flag is enabled, the 
distributed shell AM will first ask for opportunistic containers as you have 
done in the patch, and once a container starts, the AM will send an update 
request to the RM to promote it to guaranteed.
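
For reference, a rough sketch of what the promotion step could look like on 
the AM side, using the container update API (the helper class here is 
hypothetical; only the YARN record and client types are real):

{code}
// Sketch: promote a running OPPORTUNISTIC container to GUARANTEED by sending
// a container update request to the RM through the async AM-RM client.
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

class PromotionHelper {
  static void promote(AMRMClientAsync<?> amRmClient, Container container) {
    UpdateContainerRequest request = UpdateContainerRequest.newInstance(
        container.getVersion(), container.getId(),
        ContainerUpdateType.PROMOTE_EXECUTION_TYPE,
        null /* no resource change */, ExecutionType.GUARANTEED);
    // The RM reports the promoted container in a later allocate response.
    amRmClient.requestContainerUpdate(container, request);
  }
}
{code}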



> Extend Distributed Shell to support launching job with opportunistic 
> containers
> ---
>
> Key: YARN-7610
> URL: https://issues.apache.org/jira/browse/YARN-7610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications/distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7610.001.patch
>
>
> Per doc in 
> [https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Running_a_Sample_Job],
>  user can run some of PI job mappers as O containers. Similarly, propose to 
> extend distributed shell to support specifying the container type, it will be 
> very helpful for testing. Propose to add following argument
> {code}
> $./bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client
> -container_type   Container execution type,
> GUARANTEED or
> OPPORTUNISTIC
> {code}
> Implication: all containers in a distributed shell job will be launching as 
> user-specified container type (except for AM), if not given, default type is 
> {{GUARANTEED}}. AM is always launched as {{GUARANTEED}} container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278955#comment-16278955
 ] 

genericqa commented on YARN-7473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 164 new + 793 unchanged - 19 fixed = 957 total (was 812) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:queue in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addQueue(Queue)
  Dereferenced at CapacityScheduler.java:[line 2039] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.queuemanagement.GuaranteedOrZeroCapacityOverTimePolicy$PendingApplicationComparator
 is serializable but also an inner class of a non-serializable class  At 
GuaranteedOrZeroCapacityOverTimePolicy.java:an inner class of a 
non-serializable class  At GuaranteedOrZeroCapacityOverTimePolicy.java:[lines 
235-251] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.0

[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278945#comment-16278945
 ] 

Wangda Tan commented on YARN-7522:
--

Thanks [~kkaranasos] for your comments; I have addressed most of them, see 
below.

bq. Should we start the tags manager in all cases or should we have a parameter 
for enabling the whole thing for the first version, given it will add some load 
to the RM?
I would prefer to enable it by default. From a performance perspective, it 
will be called twice for every container, which I think should be acceptable. 
I tend not to introduce too many knobs, which complicate how people use this 
feature.

bq. The current getCardinality() tries to somehow impose constraints and at the 
same time check cardinalities ... I would rather have a method that for a set 
of tags returns a set of their cardinalities so that my scheduler has the 
flexibility to decide what to do with them.
I'm completely aware of the merits of your proposal. However, returning a set 
of cardinalities needs an additional copy of the data structure. I'd prefer to 
defer this and do it if it is really needed.

bq. I would add one more getCardinality method that gets a single string tag 
and no binary operator just for simplicity.
Done.

bq. It might be useful to have an exists method for a tag. Like checking 
without the exact cardinality whether this tag exists.
Done.

bq. In getCardinality, what is the else doing when the specified tags are null 
or empty?
When tags are null/empty, all tags existing on the node will be considered, 
just like passing in a wildcard (see the sketch at the end of this comment). 

bq. In RMContainerImpl, the TODO for setting the allocation tags should be part 
of this JIRA, right?
No, the TODO needs to be done when the scheduler finishes allocating 
containers from a SchedulingRequest and sets tags on the RMContainer. 

bq. We probably need to clean up the tags of all containers of an app when the 
app finishes too. Will the current code path take care of this even in cases of 
app failures etc.?
I think so. This should be enough to cover RMContainer-related updates; 
however, for the latest proposal from Arun (see below), we need an external 
module to handle the updates separately. 

bq. We should define a namespace for the appID, so that we can easily retrieve 
it. Like appID: and then the actual ID.
Done.

bq. In the PlacementConstraint(s) classes, we call the tags "allocation tags", 
so I suggest to rename the corresponding classes in this JIRA to AllocationTags 
instead of PlacementTags too.
Done.

bq. You can just have a boolean instead of the numValues in the getCardinality.
Done.

bq. Why do we need the ImmutableSet in the add/removeContainer?
Done, added a single-tag method to avoid creating an ImmutableSet.

bq. In the removeTagsFromNode(), let's add some warn messages in case the node 
was not found or the count is already 0. It will help to catch bugs.
Done. 

Thanks [~asuresh], 

bq. Essentially, we need to simply be able to add/remove a tag to a node to 
allow the scheduler/planning system to keep track of node to tag mappings 
during intermediate processing as well.
Since AllocationTagsManager is a separate module and accessible from RMContext, 
it should be capable of supporting the use case you mentioned. However, the 
planning system needs to clean up tags externally.
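
To make the cardinality discussion above concrete, here is a rough, 
illustrative sketch of the kind of lookup being discussed; all names and 
signatures here are hypothetical, and the real API is whatever the attached 
patches define:

{code}
// Illustrative only: fold per-tag container counts on a node with a caller
// supplied operator (e.g. Long::max or Long::sum). Null/empty tags act as a
// wildcard over all tags present on the node, as described above.
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongBinaryOperator;

class NodeTagCardinality {
  // nodeId -> (tag -> count of containers on that node carrying the tag)
  private final Map<String, Map<String, Long>> tagsPerNode =
      new ConcurrentHashMap<>();

  long getCardinality(String nodeId, Set<String> tags, LongBinaryOperator op) {
    Map<String, Long> counts =
        tagsPerNode.getOrDefault(nodeId, Collections.emptyMap());
    Set<String> keys =
        (tags == null || tags.isEmpty()) ? counts.keySet() : tags;
    long result = 0L;
    boolean first = true;
    for (String tag : keys) {
      long count = counts.getOrDefault(tag, 0L);
      result = first ? count : op.applyAsLong(result, count);
      first = false;
    }
    return result;
  }
}
{code}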


> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7522) Add application tags manager implementation

2017-12-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7522:
-
Attachment: YARN-7522.YARN-6592.004.patch

Attached ver.4 patch.

[~kkaranasos]/[~asuresh], mind checking again?

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.004.patch, 
> YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278938#comment-16278938
 ] 

Hudson commented on YARN-7438:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13328 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13328/])
YARN-7438. Additional changes to make SchedulingPlacementSet agnostic to 
(sunilg: rev a957f1c60e1308d1d70a1803381994f59949c5f8)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/ContainerRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/AppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/PendingAskUpdateResult.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/ResourceRequestUpdateResult.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ContainerUpdateContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java


> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.1.0
>
> Attachments: YARN-7438.001.patch, YARN-7438.002.patch, 
> YARN-7438.003.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage new requests API)
> 2) Agnostic to placement algorithm (now it is bound to delayed scheduling, we 
> should update APIs to make sure new placement algorithms such as complex 
> placement algorithms can be implemented by using SchedulingPlacementSet).

[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-12-05 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278930#comment-16278930
 ] 

Manikandan R commented on YARN-7119:


Hope you had a good time :) Can you please confirm whether the latest patch 
addresses your recent comments?

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch, YARN-7119.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278915#comment-16278915
 ] 

genericqa commented on YARN-7556:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7556 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7556 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900716/YARN-7556.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18793/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-05 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7556:
---
Attachment: YARN-7556.005.patch

I reworked the patch to not use -1 as a flag, so there's no longer any risk of 
things going totally off the rails.  To address Wilfred's points:
# The fair scheduler docs do say to use memory-mb in the example for 
maxResources, and the new "Resource Model" page also talks about the way 
resources work now.
# I added support for percentages; a sketch of the resulting format follows 
after this list.  It was an intentional omission, but not a good choice.
# The decimal thing must be a red herring.  The CPU and memory were and still 
are stored as integers.  Allowing decimal values is just lying to the users.  
The old format still allows the lie, but I see no reason to perpetuate the lie 
in the new format.
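
For illustration, an allocation file using the new format might look roughly 
like the following; this is a sketch based on the discussion here (the queue 
name and values are made up), and the exact syntax is defined by the patch:

{code}
<allocations>
  <queue name="research">
    <!-- absolute values per resource type -->
    <minResources>memory-mb=2048, vcores=2</minResources>
    <!-- percentages of the cluster resources -->
    <maxResources>memory-mb=50%, vcores=50%</maxResources>
  </queue>
</allocations>
{code}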

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7420) YARN UI changes to depict auto created queues

2017-12-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278877#comment-16278877
 ] 

Sunil G commented on YARN-7420:
---

As mentioned offline, could you please share screenshots with 0 capacity?

> YARN UI changes to depict auto created queues 
> --
>
> Key: YARN-7420
> URL: https://issues.apache.org/jira/browse/YARN-7420
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7420.1.patch
>
>
> Auto created queues will be depicted in a different color to indicate they 
> have been auto created and for easier distinction from manually 
> pre-configured queues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues

2017-12-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278876#comment-16278876
 ] 

Sunil G commented on YARN-7520:
---

[~suma.shivaprasad], there seems to be some Jenkins problem. Could you please 
check it?

> Queue Ordering policy changes for ordering auto created leaf queues within 
> Managed parent Queues
> 
>
> Key: YARN-7520
> URL: https://issues.apache.org/jira/browse/YARN-7520
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7520.1.patch, YARN-7520.2.patch, YARN-7520.3.patch, 
> YARN-7520.4.patch
>
>
> Queue Ordering policy currently uses priority, utilization and absolute 
> capacity for pre-configured parent queues to order leaf queues while 
> assigning containers. It needs modifications for auto created leaf queues 
> since they can have zero capacity



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-05 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278862#comment-16278862
 ] 

Billie Rinaldi commented on YARN-7540:
--

I haven't tested the patch manually yet. Hopefully I will have time for this 
later this week. These are the comments I have so far:
* ApiServiceClient doesn't need to store a copy of the configuration; it can 
call getConfig() to access the configuration that will be initialized in 
AbstractService.serviceInit (see the sketch after this list)
* Instead of this implementation of ApiServiceClient.getRMAddress, I suspect we 
should be doing something similar to RMWebApp.buildRedirectPath or reusing 
existing utility methods like RMHAUtils.getRMHAWebappAddresses
* The enableFastLaunch method probably isn't working because the ServiceClient 
needs to be initialized
* The patch doesn't apply because the log statement added in ApiServer has 
already been added
* There are a couple of problems with the NATIVE_TYPE. For one, the name is 
confusing; I don't think we should use "native" for this. Also, there is no way 
to create an application that has NATIVE_TYPE, because YARN service is coded to 
use "yarn-service" through the YarnServiceConstants.APP_TYPE variable. I would 
think we should just make the new ApiServiceClient the only provided 
AppAdminClient, but I guess that would leave us unable to unit test some 
features that it's good to have unit tests for. What do you think?
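
On the first point, here is a minimal sketch of the suggested pattern (the 
subclass and the configuration key usage are illustrative; AbstractService and 
getConfig() are the real org.apache.hadoop.service APIs):

{code}
// Sketch: a service built on AbstractService needs no Configuration field of
// its own; init(conf) stores the configuration and getConfig() returns it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;

class ExampleClient extends AbstractService {
  ExampleClient() {
    super("ExampleClient");
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    super.serviceInit(conf); // conf is already stored by init() at this point
  }

  String rmWebAddress() {
    // Read settings on demand instead of caching a Configuration copy.
    return getConfig().get("yarn.resourcemanager.webapp.address");
  }
}
{code}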

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch
>
>
> For YARN docker application to launch through CLI, it works differently from 
> launching through REST API.  All applications launched through the REST API 
> are currently stored in the yarn user's HDFS home directory.  Applications 
> managed through the CLI are stored in individual users' HDFS home 
> directories.  For consistency, we want the yarn app cli to interact with the 
> API service to manage applications.  For performance reasons, it is easier 
> to list all applications from one user's home directory instead of crawling 
> all users' home directories.  For security reasons, it is safer to access 
> only one user's home directory instead of all users'.  Given the reasons 
> above, the proposal is to change how {{yarn app -launch}}, {{yarn app 
> -list}} and {{yarn app -destroy}} work.  Instead of calling the HDFS API and 
> RM API to launch containers, the CLI will be converted to call the API 
> service REST API residing in the RM.  The RM performs the persistence and 
> operations to launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278764#comment-16278764
 ] 

Arun Suresh commented on YARN-7438:
---

[~sunilg], given this is minor refactoring, can you see if this can be 
committed to branch-2 / branch-2.9 as well?

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch, YARN-7438.002.patch, 
> YARN-7438.003.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage new requests API)
> 2) Agnostic to placement algorithm (now it is bound to delayed scheduling, we 
> should update APIs to make sure new placement algorithms such as complex 
> placement algorithms can be implemented by using SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-12-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278756#comment-16278756
 ] 

Sunil G commented on YARN-7438:
---

Thanks [~asuresh]. Committing shortly.

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch, YARN-7438.002.patch, 
> YARN-7438.003.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage new requests API)
> 2) Agnostic to placement algorithm (now it is bound to delayed scheduling, we 
> should update APIs to make sure new placement algorithms such as complex 
> placement algorithms can be implemented by using SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-12-05 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7473:
---
Attachment: YARN-7473.13.patch

Fixed UT failure

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.11.patch, YARN-7473.12.patch, YARN-7473.12.patch, 
> YARN-7473.13.patch, YARN-7473.2.patch, YARN-7473.3.patch, YARN-7473.4.patch, 
> YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, YARN-7473.8.patch, 
> YARN-7473.9.patch
>
>
> This JIRA mainly addresses the following:
>  
> 1. Support adding pluggable policies on parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> parent's available guaranteed capacity.
> 3. Integrate with SchedulingEditPolicy framework to trigger this periodically 
> and signal scheduler to take necessary actions for capacity/queue management.
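
In other words, the default policy's decision boils down to something like the 
following purely illustrative pseudocode (not the patch's actual code):

{code}
// Grant a leaf queue its configured guaranteed capacity while it has pending
// or active applications and the parent still has enough unallocated
// guaranteed capacity; otherwise park the queue at zero capacity.
float capacityFor(boolean hasPendingOrActiveApps,
                  float leafGuaranteed, float parentAvailableGuaranteed) {
  if (hasPendingOrActiveApps && parentAvailableGuaranteed >= leafGuaranteed) {
    return leafGuaranteed;
  }
  return 0f;
}
{code}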



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7443) Add native FPGA module support to do isolation with cgroups

2017-12-05 Thread Zhankun Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-7443:
---
Attachment: YARN-7443-trunk.004.patch

> Add native FPGA module support to do isolation with cgroups
> ---
>
> Key: YARN-7443
> URL: https://issues.apache.org/jira/browse/YARN-7443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-7443-trunk.001.patch, YARN-7443-trunk.002.patch, 
> YARN-7443-trunk.003.patch, YARN-7443-trunk.004.patch
>
>
> Only supports devices with one major number configured in c-e.cfg for now, 
> so it is almost the same as the GPU native module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278739#comment-16278739
 ] 

Arun Suresh commented on YARN-7438:
---

The latest patch LGTM. +1

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch, YARN-7438.002.patch, 
> YARN-7438.003.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage new requests API)
> 2) Agnostic to placement algorithm (now it is bound to delayed scheduling, we 
> should update APIs to make sure new placement algorithms such as complex 
> placement algorithms can be implemented by using SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7611) Node manager web UI should display container type in containers page

2017-12-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278735#comment-16278735
 ] 

Arun Suresh commented on YARN-7611:
---

+1, looks pretty straightforward. Please go ahead and commit. Kindly commit to 
branch-2.9 and branch-3.0 as well.

> Node manager web UI should display container type in containers page
> 
>
> Key: YARN-7611
> URL: https://issues.apache.org/jira/browse/YARN-7611
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, webapp
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7611.001.patch, after_patch.png, before_patch.png
>
>
> Currently the node manager UI page 
> [http://:/node/allContainers] lists all containers, but 
> it doesn't contain an {{ExecutionType}} column. To figure out the type, the 
> user has to click each container link, which is quite cumbersome. We should 
> add a column to display this info to give a more straightforward view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7092) Render application specific log under application tab in new YARN UI

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278625#comment-16278625
 ] 

Hudson commented on YARN-7092:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13325 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13325/])
YARN-7092. Render application specific log under application tab in new 
(sunilg: rev 99ccca341f3669b801428dea0acdba597f34c668)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/logs.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-log-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/logs.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app/logs-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/timeline-view.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/collapsible-panel.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/logs.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app/logs-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/collapsible-panel-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/attempts.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/collapsible-panel.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-log-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/timeline-view.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/attempts.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-log-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempt.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower-shrinkwrap.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/router.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app-attempt.js


> Render application specific log under application tab in new YARN UI
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Fix For: 3.1.0
>
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7092) Render application specific log under application tab in new YARN UI

2017-12-05 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-7092.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0

Thanks [~akhilpb] for working on this patch and thanks [~skmvasu] for helping 
with rebasing and review. Committed to trunk.

> Render application specific log under application tab in new YARN UI
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Fix For: 3.1.0
>
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7092) Render application specific log under application tab in new YARN UI

2017-12-05 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7092:
--
Summary: Render application specific log under application tab in new YARN 
UI  (was: [YARN-3368] Log viewer in application page in yarn-ui-v2)

> Render application specific log under application tab in new YARN UI
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


