[jira] [Commented] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-02-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372526#comment-16372526
 ] 

Wangda Tan commented on YARN-7738:
--

[~yufeigu], CS supports refreshing the maximum allocation while the RM is 
running, and the new_max_alloc must be >= the old_max_alloc.

The patch fixes an issue where resource-types.xml isn't loaded during the 
refresh, which caused the maximum allocation for every non-memory/vcore 
resource to become 0 after a refresh.
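
To make the constraint concrete, here is a minimal sketch of the refresh-time 
check described above, assuming a simple map from resource-type name to 
maximum value (the representation and names are illustrative, not the actual 
CapacityScheduler code):
{code:java}
import java.util.Map;

final class MaxAllocationRefreshCheck {
  // Every resource type's refreshed maximum must be >= its previous maximum.
  static void validate(Map<String, Long> oldMax, Map<String, Long> newMax) {
    for (Map.Entry<String, Long> entry : oldMax.entrySet()) {
      // A missing entry (treated as 0) models the bug being fixed: when
      // resource-types.xml is not reloaded during refresh, custom resource
      // types disappear and their maximum allocation effectively becomes 0.
      long refreshed = newMax.getOrDefault(entry.getKey(), 0L);
      if (refreshed < entry.getValue()) {
        throw new IllegalArgumentException("Maximum allocation for "
            + entry.getKey() + " must not decrease during refresh");
      }
    }
  }
}
{code}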

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.






[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372525#comment-16372525
 ] 

genericqa commented on YARN-7835:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911508/YARN-7835.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 24c9069d6cf5 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19775/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19775/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Atsv2] Race condition in NM while pu

[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372522#comment-16372522
 ] 

Sunil G commented on YARN-6858:
---

Yes. +1, pending Jenkins.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.
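
For illustration, a minimal sketch of what such a centralized manager's 
surface could look like (method names and the attribute representation are 
assumptions, not the actual YARN-6858 API):
{code:java}
import java.util.Map;
import java.util.Set;

// Illustrative only: a centralized manager analogous to
// CommonNodeLabelsManager, holding the attributes reported for each host.
interface NodeAttributesManager {
  // replace the full set of attributes reported for a host
  void replaceNodeAttributes(String host, Set<String> attributes);

  // query the current host -> attributes mapping, e.g. for scheduling
  Map<String, Set<String>> getNodesToAttributes();
}
{code}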






[jira] [Comment Edited] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372518#comment-16372518
 ] 

Naganarasimha G R edited comment on YARN-6858 at 2/22/18 7:44 AM:
--

[~sunilg], I have attached the patch. Let's wait for the Jenkins results.


was (Author: naganarasimha):
[~sunilg], I have attached the patch. Let's wait for the results.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372518#comment-16372518
 ] 

Naganarasimha G R commented on YARN-6858:
-

[~sunilg], I have attached the patch. Let's wait for the results.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: YARN-6858-YARN-3409.008.patch

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch, 
> YARN-6858-YARN-3409.008.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372503#comment-16372503
 ] 

Naganarasimha G R commented on YARN-6858:
-

Hi [~sunilg],

    As mentioned earlier, this test case was failing even without the patch. 
I am checking the other javadoc and checkstyle issues.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Attachment: YARN-7835.004.patch

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch, 
> YARN-7835.003.patch, YARN-7835.004.patch
>
>
> A race condition was observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. Then, once the completed-container event for the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from other clients to fail with 
> an "Application is not found" exception.
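
As context for the discussion in this thread, here is a condensed sketch of 
the bookkeeping that avoids the race, mirroring the approach in the code 
snippets quoted in later comments (class and method names here are 
illustrative):
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Track master containers per application and drop the timeline collector
// only when the *last* master container on this node stops, so a relaunched
// second attempt is not torn down by the first attempt's completion event.
final class CollectorBookkeeping {
  private final Map<String, Set<String>> appIdToContainerId = new HashMap<>();

  synchronized void masterContainerStarted(String appId, String containerId) {
    appIdToContainerId.computeIfAbsent(appId, k -> new HashSet<>())
        .add(containerId);
  }

  // returns true when the collector for appId should actually be removed
  synchronized boolean masterContainerStopped(String appId,
      String containerId) {
    Set<String> masters = appIdToContainerId.get(appId);
    if (masters == null) {
      return false;
    }
    masters.remove(containerId);
    if (masters.isEmpty()) {
      appIdToContainerId.remove(appId);
      return true;
    }
    return false;
  }
}
{code}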






[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372496#comment-16372496
 ] 

genericqa commented on YARN-7346:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/target/findbugsXml.xml)
 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-server-2 in 
the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the p

[jira] [Commented] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372485#comment-16372485
 ] 

genericqa commented on YARN-7955:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 10s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911498/YARN-7955.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cce84857c79e 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19773/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-ya

[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372484#comment-16372484
 ] 

genericqa commented on YARN-7835:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 6 new + 2 unchanged - 0 fixed = 8 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911502/YARN-7835.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f4326c17c56f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19774/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19774/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineserv

[jira] [Commented] (YARN-7871) Node attributes reporting from NM to RM

2018-02-21 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372455#comment-16372455
 ] 

Weiwei Yang commented on YARN-7871:
---

Hi [~sunilg]

RMNodeAttributesManager is a placeholder, but since YARN-6858 is close to 
being done, I will wait until that one is committed and then consolidate the 
changes again. That said, based on my tests the patch should be working; it 
just needs some modification where it calls the attribute manager APIs.

> Node attributes reporting from NM to RM 
> 
>
> Key: YARN-7871
> URL: https://issues.apache.org/jira/browse/YARN-7871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7871-YARN-3409.001.patch
>
>
> Support initializing the proper attribute provider based on the user's 
> configuration.
> The NM collects node attributes from the configured attribute provider and 
> sends them to the RM via heartbeat.
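
A minimal sketch of the described flow, assuming a pluggable provider 
interface (names are illustrative, not the actual YARN-3409 API):
{code:java}
import java.util.Map;

// The NM instantiates a provider based on configuration (e.g. a script- or
// config-based one) and sends the returned attributes on each heartbeat.
interface NodeAttributeProvider {
  // e.g. {"nm.yarn.io/os" -> "linux", "nm.yarn.io/arch" -> "x86_64"}
  Map<String, String> getAttributes();
}
{code}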






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372454#comment-16372454
 ] 

Sunil G commented on YARN-6858:
---

Hi [~Naganarasimha],

The javadoc error seems valid, and so does one test case failure. Could you 
please check?

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-7856) Validation node attributes in NM

2018-02-21 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372450#comment-16372450
 ] 

Weiwei Yang commented on YARN-7856:
---

Hi [~sunilg]

Thanks for looking into the patch. This one seems to overlap somewhat with 
YARN-6858, so I will wait until YARN-6858 is resolved and then rebase the 
patch. I will try to address your comments then.

> Validation node attributes in NM
> 
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7856-YARN-3409.001.patch
>
>
> The NM needs to do proper validation of the attributes before sending them 
> to the RM; this includes:
> # a valid prefix is present
> # no duplicate entries
> # not allowing two attributes with the same prefix/name but different types
> This could be a utility class that can be used on both the RM and NM sides, 
> as sketched below.
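
A minimal sketch of such a validation utility, under the assumption of a 
simple attribute shape (the Attr class and method names are illustrative, not 
the actual YARN-3409 classes):
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class NodeAttributeValidator {
  static final class Attr {
    final String prefix, name, type;
    Attr(String prefix, String name, String type) {
      this.prefix = prefix; this.name = name; this.type = type;
    }
  }

  static void validate(List<Attr> attrs) {
    Map<String, String> seenTypes = new HashMap<>();
    for (Attr a : attrs) {
      if (a.prefix == null || a.prefix.isEmpty()) {
        throw new IllegalArgumentException("attribute needs a valid prefix");
      }
      String key = a.prefix + "/" + a.name;
      // rejects duplicate entries, and the same prefix/name registered
      // again (whether with the same type or a different one)
      if (seenTypes.put(key, a.type) != null) {
        throw new IllegalArgumentException(
            "duplicate or conflicting attribute: " + key);
      }
    }
  }
}
{code}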






[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372437#comment-16372437
 ] 

Rohith Sharma K S commented on YARN-7835:
-

Updated the patch as per the comments.

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch, 
> YARN-7835.003.patch
>
>
> A race condition was observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. Then, once the completed-container event for the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from other clients to fail with 
> an "Application is not found" exception.






[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Attachment: YARN-7835.003.patch

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch, 
> YARN-7835.003.patch
>
>
> A race condition was observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. Then, once the completed-container event for the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from other clients to fail with 
> an "Application is not found" exception.






[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-21 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372432#comment-16372432
 ] 

Haibo Chen commented on YARN-7346:
--

Attached a patch that builds on top of Rohith's incremental change to the 04 
patch.

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, 
> YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 beta release

2018-02-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7346:
-
Attachment: YARN-7346.05.patch

> Fix compilation errors against hbase2 beta release
> --
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7346.00.patch, YARN-7346.01.patch, 
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, 
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, 
> YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Updated] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7955:

Attachment: YARN-7955.001.patch

> Calling stop on an already stopped service says "Successfully stopped service"
> --
>
> Key: YARN-7955
> URL: https://issues.apache.org/jira/browse/YARN-7955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7955.001.patch
>
>
> If you invoke "yarn app -stop " on an already stopped service 
> it confusingly responds with message "Successfully stopped service 
> ". It should say "Service is already stopped".
> The same is seen with the REST API PUT request with data \{ "state": 
> "STOPPED"}, the response is 200 OK and diagnostics with same message 
> "Successfully stopped service ". It should return 400 Bad 
> Request with message "Service is already stopped".
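
A minimal sketch of the requested behavior (the state enum and method names 
are assumptions, not the actual YARN service framework API):
{code:java}
// Reject a stop request when the service is already stopped, instead of
// reporting success; the REST layer would map this to 400 Bad Request and
// the CLI would print the message.
final class ServiceStopper {
  enum State { STARTED, STOPPED }

  private State state = State.STARTED;

  void stop(String serviceName) {
    if (state == State.STOPPED) {
      throw new IllegalStateException(
          "Service " + serviceName + " is already stopped");
    }
    state = State.STOPPED;
  }
}
{code}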






[jira] [Commented] (YARN-7944) Remove master node link from headers of application pages

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372413#comment-16372413
 ] 

genericqa commented on YARN-7944:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910975/YARN-7944.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 4428290d12fc 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 442 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19771/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove master node link from headers of application pages
> -
>
> Key: YARN-7944
> URL: https://issues.apache.org/jira/browse/YARN-7944
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7944.001.patch
>
>
> RM UI2 has links for the master container log and the master node. 
> These links are published on the application and service pages. They are not 
> required on all pages, because the AM container node link and container log 
> link are already present in the Application view.






[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372407#comment-16372407
 ] 

Rohith Sharma K S commented on YARN-7835:
-

Thanks [~haibochen] for the detailed review. Makes sense to me; I will update 
the patch with the test modifications.

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch
>
>
> A race condition was observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. Then, once the completed-container event for the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from other clients to fail with 
> an "Application is not found" exception.






[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-21 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372404#comment-16372404
 ] 

Haibo Chen commented on YARN-7835:
--

Thanks [~rohithsharma] for the updated patch! There are a few misspellings; 
let's fix those: collectos -> collectors, applicatin -> application.

One issue I see with the new test (and all the other test methods regarding 
stopContainer()) is that it is flaky, since it depends on how quickly the 
executor inside the PerNodeTimelineCollectorsAuxService runs the deletion task 
when an application is supposed to be removed. We have two threads: the thread 
that runs the test code and calls auxService.stopContainer(), and the executor 
thread that removes the application asynchronously. Consider the following 
code:
{code:java}
auxService.stopContainer(context);

// auxService should have the app's collector and need to remove only after
// a configured period
assertTrue("Applicatin not found in collectors.",
    auxService.hasApplication(appAttemptId.getApplicationId()));

// 2nd attempt container removed, still collectos should hold applicatin id.
assertTrue("collector has removed application though 2nd attempt"
    + " is running this node",
    waitFor(auxService, appAttemptId.getApplicationId(), 4, 500));

{code}
If the executor thread is slow, both assertTrue() calls could succeed even 
though the application will still be removed later.

A more reliable way is to extract the asynchronous application-removal logic 
into a method that we can override in a test class. That is, in 
PerNodeTimelineCollectorsAuxService.java, we'd have a method:
{code:java}
protected Future<?> removeApplicationCollector(final ContainerId containerId) {
  final ApplicationId appId =
      containerId.getApplicationAttemptId().getApplicationId();
  return scheduler.schedule(new Runnable() {
    public void run() {
      synchronized (appIdToContainerId) {
        Set<ContainerId> masterContainers = appIdToContainerId.get(appId);
        if (masterContainers == null) {
          LOG.info("Stop container for " + containerId
              + " is called before initializing container.");
          return;
        }
        masterContainers.remove(containerId);
        if (masterContainers.isEmpty()) {
          // remove the collector only if this was the last master container
          removeApplication(appId);
          appIdToContainerId.remove(appId);
        }
      }
    }
  }, collectorLingerPeriod, TimeUnit.MILLISECONDS);
}
{code}
In TestPerNodeTimelineCollectorsAuxService.java, we can then create a test 
version of PerNodeTimelineCollectorsAuxService that performs the application 
removal synchronously by overriding the method:
{code:java}
@Override
protected Future<?> removeApplicationCollector(ContainerId containerId) {
  Future<?> future = super.removeApplicationCollector(containerId);
  try {
    // block until the asynchronous removal task has actually completed
    future.get();
  } catch (InterruptedException | ExecutionException e) {
    throw new RuntimeException(e);
  }
  return future;
}
{code}
Even though this is more code, it makes auxService.hasApplication(appId) in 
our test code a clear indication of whether the app collector has been 
removed. We can then remove all the waitFor() calls.
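
For instance, once removal is synchronous, the stop-container check could be 
as direct as this (a sketch; the context object and assertion wording are 
assumed):
{code:java}
auxService.stopContainer(context);
// No waitFor() polling needed: in the test subclass, the removal task has
// already completed by the time stopContainer() returns.
assertFalse("Collector should be removed with the last master container",
    auxService.hasApplication(appAttemptId.getApplicationId()));
{code}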

 

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch
>
>
> A race condition was observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. Then, once the completed-container event for the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from other clients to fail with 
> an "Application is not found" exception.






[jira] [Commented] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372382#comment-16372382
 ] 

genericqa commented on YARN-7953:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of f in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.recursiveChildrenByName(FedQueue,
 String)  Dereferenced at FedQueue.java:f in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.recursiveChildrenByName(FedQueue,
 String)  Dereferenced at FedQueue.java:[line 379] |
|  |  Nullcheck of FedQueue.children at line 162 of value previously 
dereferenced in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.propagate(ResourceInfo)
  At FedQueue.java:162 of value previously dereferenced in 
org.apache.hadoop

[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372379#comment-16372379
 ] 

genericqa commented on YARN-6858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 115 unchanged - 1 fixed = 124 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 1 new + 4183 unchanged - 0 fixed = 4184 total (was 4183) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 39s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerMana

[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372376#comment-16372376
 ] 

genericqa commented on YARN-6528:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 412 unchanged - 3 fixed = 413 total (was 415) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m  
7s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911477/YARN-6528.v009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff250338b882 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19767/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19767/testReport/ |
| Max. process+thread count | 803 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-reso

[jira] [Commented] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372375#comment-16372375
 ] 

genericqa commented on YARN-7403:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 13s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 27 new + 212 unchanged - 0 fixed = 239 total (was 212) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
26s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 13 new + 4 unchanged - 0 fixed = 17 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| J

[jira] [Commented] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372344#comment-16372344
 ] 

Gour Saha commented on YARN-7954:
-

Not sure how to write a test for this patch, since it would involve ATSv2 and 
UI2, which is not straightforward.

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7954.001.patch, YARN-7954.002.patch
>
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for service status
> Here, the YARN service status is marked as "finished". However, the component 
> status still shows "Ready". 
> On stopping a YARN service, the component status should be updated to "Stop".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372341#comment-16372341
 ] 

genericqa commented on YARN-7954:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 10s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911472/YARN-7954.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux daa82ee25baf 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19766/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19766/testReport/ |
| Max. process+thread count | 624 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoo

[jira] [Commented] (YARN-7834) [GQ] Rebalance queue configuration for load-balancing and locality affinities

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372339#comment-16372339
 ] 

genericqa commented on YARN-7834:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 25 new + 0 unchanged - 0 fixed = 25 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m 
15s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 5 new + 4 unchanged - 0 fixed = 9 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910823/YARN-7834.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 79fb1b09ce85 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java |

[jira] [Commented] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372338#comment-16372338
 ] 

Yufei Gu commented on YARN-7859:


Hi [~wangwj], let me reiterate my stance. This is a real issue that needs to be 
solved; thank you for filing it. I am open to any proposal. Unfortunately, the 
solution you propose here breaks fairness, which leads to potential issues, and 
a new queue property could also bring lots of corner cases. Other than that, 
global scheduling seems the right way to fix this issue; you can either wait 
for it or work on it. Before global scheduling is available, there are some 
workarounds (e.g., decreasing the minShare of queues with long-running jobs) to 
mitigate the problem. Global scheduling may not be the only right solution, so 
I would also recommend you rethink your solution so that it doesn't break 
fairness, and validate potential risks thoroughly if you want to introduce any 
new queue property. Anyway, thank you again for filing this. Great job so far. 
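For readers unfamiliar with the minShare workaround mentioned above: in a FairScheduler allocation file, shrinking the guaranteed share of a queue hosting long-running jobs lets other queues get scheduled sooner. A sketch with made-up queue names and values:

{code}
<?xml version="1.0"?>
<!-- fair-scheduler.xml sketch; queue names and sizes are illustrative. -->
<allocations>
  <!-- Long-running jobs get a small guaranteed share... -->
  <queue name="longrunning">
    <minResources>1024 mb,1 vcores</minResources>
  </queue>
  <!-- ...so short ad-hoc jobs are not starved. -->
  <queue name="adhoc">
    <minResources>8192 mb,8 vcores</minResources>
  </queue>
</allocations>
{code}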

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Assignee: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 3.0.0
>
> Attachments: YARN-7859-v1.patch, YARN-7859-v2.patch, log, 
> screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As everyone knows, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it mandatorily.
> On the basis of the above, I propose this issue...
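A minimal sketch of the mechanism the description proposes (hypothetical helper, not the attached patch): record when each queue last received an allocation, and flag a queue as starved once a configured deadline has elapsed so the scheduler can serve it first.

{code}
// Hypothetical sketch; class and method names are illustrative.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueueDeadlineTracker {
  private final long deadlineMs;
  private final Map<String, Long> lastScheduled = new ConcurrentHashMap<>();

  public QueueDeadlineTracker(long deadlineMs) {
    this.deadlineMs = deadlineMs;
  }

  /** Record that a queue just received an allocation. */
  public void markScheduled(String queue) {
    lastScheduled.put(queue, System.currentTimeMillis());
  }

  /** True if the queue has waited past its deadline and should be
   *  scheduled ahead of the normal fair-share ordering. */
  public boolean isStarved(String queue) {
    Long last = lastScheduled.get(queue);
    if (last == null) {               // never scheduled yet: start the clock
      markScheduled(queue);
      return false;
    }
    return System.currentTimeMillis() - last >= deadlineMs;
  }
}
{code}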



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7403:
---
Attachment: YARN-7403.v3.patch

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, YARN-7403.v2.patch, 
> YARN-7403.v3.patch, global-queues-preemption.PNG
>
>
> This JIRA tracks algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-clusters on how to allocate more containers to each queues (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372324#comment-16372324
 ] 

Carlo Curino commented on YARN-7403:


[~kkaranasos] thanks for looking at this. I initially put it together because 
it is not easy to understand why we have certain data structures without the 
code that uses them, but if it is easier for you to review, I am OK to split:
# YARN-7953 is now a data-structures-only patch (with minor refactoring it 
should now compile fine and be reasonably self-contained)
# YARN-7403 (this patch) is now algorithm-only and depends on YARN-7953 and 
YARN-7934 (the patch with the hook in the CS/preemption code)

BTW, the choice of JAXB is because we are considering a REST endpoint as the 
way to communicate between the RM and the GPG.
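For context on the JAXB choice: annotating the data structures lets the same Java classes be marshalled to XML or JSON on a REST endpoint without separate wire types. A minimal sketch (hypothetical class, not the YARN-7953 structures):

{code}
// Illustrative JAXB-annotated structure; not the actual patch.
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "queue")
@XmlAccessorType(XmlAccessType.FIELD)
public class QueueInfo {
  private String name;
  private long guaranteedMB;
  private long usedMB;
  private List<QueueInfo> children = new ArrayList<>();

  public QueueInfo() {
    // JAXB requires a public no-arg constructor.
  }
}
{code}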

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, YARN-7403.v2.patch, 
> global-queues-preemption.PNG
>
>
> This JIRA tracks algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-clusters on how to allocate more containers to each queues (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-21 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372322#comment-16372322
 ] 

Suma Shivaprasad edited comment on YARN-7935 at 2/22/18 2:08 AM:
-

The current patch does not take care of "host" mode networking where the 
container's hostname should be set to the NM host's hostname. Will fix that and 
upload a patch.


was (Author: suma.shivaprasad):
The current patch does not take care of "host" mode networking where the 
container's hostname is set to the NM host's hostname. Will fix that and upload 
a patch.
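For context on the "host" mode point above: with Docker's default bridge networking each container gets its own hostname, while with --net=host the container shares the host's. Illustrative commands, not part of the patch:

{code}
# Default bridge networking: prints a container-specific hostname.
docker run --rm alpine hostname

# Host networking: prints the host's (i.e. the NM's) hostname.
docker run --rm --net=host alpine hostname
{code}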

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications have a need to bind to the container's hostname (like 
> Spark) which is different from the NodeManager's hostname(NM_HOST which is 
> available as an env during container launch) when launched through Docker 
> runtime. The container's hostname can be exposed to applications via an env 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP but 
> this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-21 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372322#comment-16372322
 ] 

Suma Shivaprasad commented on YARN-7935:


The current patch does not take care of "host" mode networking where the 
container's hostname is set to the NM host's hostname. Will fix that and upload 
a patch.

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications have a need to bind to the container's hostname (like 
> Spark) which is different from the NodeManager's hostname(NM_HOST which is 
> available as an env during container launch) when launched through Docker 
> runtime. The container's hostname can be exposed to applications via an env 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP but 
> this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-21 Thread Mridul Muralidharan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372320#comment-16372320
 ] 

Mridul Muralidharan commented on YARN-7935:
---

In the case of Apache Spark, the hostname that executors bind their port(s) to 
and share with the other executors and the driver (AM) is passed in as a CLI 
parameter to the executor. Particularly in deployments with multiple interfaces 
and hostnames, Spark does not try to determine which hostname to use: it simply 
depends on what the resource manager (YARN in this case) is configured with and 
shares with the container.

Apache Spark works across resource managers (standalone, Mesos, k8s, YARN, 
etc.) and expects this to be determined by the individual resource manager it 
is running on.
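As an illustration of that contract (a sketch, not Spark's actual launch command): the resource manager or AM injects the hostname it wants the process to bind to on the command line, and the executor uses it verbatim. Flag and variable names below are illustrative:

{code}
# Hypothetical launch line following the pattern described above.
java -cp app.jar example.ExecutorMain \
  --hostname "$CONTAINER_HOSTNAME" \
  --port 0
{code}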

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications have a need to bind to the container's hostname (like 
> Spark) which is different from the NodeManager's hostname(NM_HOST which is 
> available as an env during container launch) when launched through Docker 
> runtime. The container's hostname can be exposed to applications via an env 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP but 
> this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7939) Yarn Service: add support to upgrade a component instance

2018-02-21 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7939:

Summary: Yarn Service: add support to upgrade a component instance   (was: 
Yarn Service: add supported to upgrade a component instance )

> Yarn Service: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> Will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread Xiaohua (Victor) Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372308#comment-16372308
 ] 

Xiaohua (Victor) Liang commented on YARN-6528:
--

YARN-6528.v009.patch fixes all the checkstyle warnings in YARN-6528.v008.patch.

Note that we have to suppress a few ParameterNumber and FinalClass warnings.

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, 
> YARN-6528.v009.patch
>
>
> YARN-1051 introduced a ReservationSytem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. In order to understand in finer detail the 
> performance of Rayon, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.
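In Hadoop, JMX metrics like these are typically exposed through the metrics2 framework. A minimal sketch of what a Plan Follower metrics source could look like (illustrative names, not the attached patches):

{code}
// Illustrative metrics2 source; once registered, the counter and
// gauge below become visible over JMX.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Plan Follower metrics", context = "yarn")
public class PlanFollowerMetrics {
  @Metric("Number of plan synchronizations run")
  MutableCounterLong planSyncRuns;

  @Metric("Duration of the latest plan synchronization (ms)")
  MutableGaugeLong lastSyncDurationMs;

  public static PlanFollowerMetrics create() {
    return DefaultMetricsSystem.instance().register(
        "PlanFollowerMetrics", "Plan Follower metrics",
        new PlanFollowerMetrics());
  }
}
{code}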



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread Xiaohua (Victor) Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohua (Victor) Liang updated YARN-6528:
-
Attachment: YARN-6528.v009.patch

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, 
> YARN-6528.v009.patch
>
>
> YARN-1051 introduced a ReservationSytem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. In order to understand in finer detail the 
> performance of Rayon, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372305#comment-16372305
 ] 

Carlo Curino edited comment on YARN-7953 at 2/22/18 1:41 AM:
-

Per [this|#comment-16370912] ask by [~kkaranasos], I am splitting YARN-7403 
into a data-only patch (this one) and the algorithm side, which stays in 
YARN-7403.


was (Author: curino):
Per [this |#comment-16370912] ask by, I am splitting YARN-7403 into a data-only 
patch, this one, and the algo side in YARN-7403.

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of YARN-7402 umbrella Jira (currently YARN-7403, and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-7953:
--

Assignee: Carlo Curino

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of YARN-7402 umbrella Jira (currently YARN-7403, and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-21 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7953:
---
Attachment: YARN-7953.v1.patch

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of YARN-7402 umbrella Jira (currently YARN-7403, and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372287#comment-16372287
 ] 

Gour Saha commented on YARN-7954:
-

Sorry, I uploaded the wrong patch.

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7954.001.patch, YARN-7954.002.patch
>
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for service status
> Here, the YARN service status is marked as "finished". However, the component 
> status still shows "Ready". 
> On stopping a YARN service, the component status should be updated to "Stop".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7954:

Attachment: YARN-7954.002.patch

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7954.001.patch, YARN-7954.002.patch
>
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for service status
> Here, the YARN service status is marked as "finished". However, the component 
> status still shows "Ready". 
> On stopping a YARN service, the component status should be updated to "Stop".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Simplify configuration for PlacementConstraints

2018-02-21 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372281#comment-16372281
 ] 

Weiwei Yang commented on YARN-7920:
---

Hi [~leftnoteasy],[~kkaranasos]

Sorry for the late response, I was on vacation.
{quote}for the md.vm format, any advantages in this format
{quote}
This format of markdown file substitutes the ${project.version} variable with 
the version being built. I think we need to rename it back; otherwise the doc 
will look like the following:

{code}
$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
share/hadoop/yarn/hadoop-yarn-applications-distributedshell-${project.version}.jar
 -shell_command sleep -shell_args 10 -placement_spec PlacementSpec
{code}

what do you think?
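For illustration of the substitution in question: the .md.vm extension tells the Maven site tooling to expand build properties while rendering, so the doc source carries the variable and the published doc carries the real version (the version string below is only an example):

{code}
# In the .md.vm source:
hadoop-yarn-applications-distributedshell-${project.version}.jar

# In the rendered doc (for, e.g., a 3.1.0-SNAPSHOT build):
hadoop-yarn-applications-distributedshell-3.1.0-SNAPSHOT.jar
{code}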

> Simplify configuration for PlacementConstraints
> ---
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete the scheduling-request.allowed in 
> CS, and update placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled, and add a new 
> placement-constraints.handler, by default is none, and other acceptable 
> values are a. external-processor (since algorithm is too generic to me), b. 
> scheduler. 
> - And add a new PlacementProcessor just to pass SchedulingRequest to 
> scheduler without any modifications.
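A sketch of what the consolidated configuration could look like under this proposal (the property name follows the wording above and is illustrative, not a committed config key):

{code}
<!-- In yarn-site.xml; acceptable values per the proposal:
     none | external-processor | scheduler -->
<property>
  <name>yarn.resourcemanager.placement-constraints.handler</name>
  <value>scheduler</value>
</property>
{code}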



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372279#comment-16372279
 ] 

Naganarasimha G R commented on YARN-6858:
-

Thanks for the comments, [~bibinchundatt].

I have incorporated changes for all your comments, but have a few points to 
mention.
{quote}Had an offline discussion with Sunil G we thought to using separate 
store for Nodelabels and Attributes enabled separately
{quote}
Even during my initial discussions with Sunil we had concluded on the same 
approach: generalize the store API and have a different store for attributes so 
that it is compatible and does not break on rollback.
{quote}Param name mismatch in following method
{quote}
I believe I had missed a param, but the param names were the same. Please 
correct me if I am wrong.
{quote}Currently manager doesnt provide a way to filter out nodes of type 
central, distributed type. 
{quote}
I have taken care of this by providing a set of prefixes in the existing 
interface to get the clusterAttributes (see the sketch below). Please check 
whether that is sufficient.
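A rough sketch of such a prefix-filtered lookup (hypothetical signature; the real interface in the patch may differ, and the prefixes described are only examples):

{code}
// Hypothetical lookup sketch, not the actual YARN-6858 API.
import java.util.Map;
import java.util.Set;

public interface NodeAttributeLookup {
  /**
   * Return cluster node attributes restricted to the given prefixes,
   * e.g. a "centralized" prefix for RM-set attributes vs. a
   * "distributed" prefix for NM-reported ones. An empty set means
   * no filtering.
   */
  Map<String, Set<String>> getClusterNodeAttributes(Set<String> prefixes);
}
{code}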

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372278#comment-16372278
 ] 

genericqa commented on YARN-7942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry: The patch generated 5 new 
+ 107 unchanged - 2 fixed = 112 total (was 109) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911457/YARN-7942.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 488782ea1ebd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19764/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19764/testReport/ |
| Max. process+thread count | 355 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/

[jira] [Created] (YARN-7958) ServiceMaster should only wait for recovery of containers with id that match the current application id

2018-02-21 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-7958:
---

 Summary: ServiceMaster should only wait for recovery of containers 
with id that match the current application id
 Key: YARN-7958
 URL: https://issues.apache.org/jira/browse/YARN-7958
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Chandni Singh
Assignee: Chandni Singh
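
For illustration, a minimal sketch of the kind of filter this title implies, 
assuming a hypothetical helper around the standard YARN records API (this is 
not the actual ServiceMaster code):

{code}
import java.util.List;
import java.util.stream.Collectors;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.Container;

// Hypothetical sketch: keep only recovered containers whose id belongs to
// the current application, so recovery does not wait on containers left
// over from a different application id.
public final class RecoveredContainerFilter {
  private RecoveredContainerFilter() {
  }

  public static List<Container> forCurrentApp(List<Container> recovered,
      ApplicationId currentAppId) {
    return recovered.stream()
        .filter(c -> c.getId().getApplicationAttemptId().getApplicationId()
            .equals(currentAppId))
        .collect(Collectors.toList());
  }
}
{code}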






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: YARN-6858-YARN-3409.007.patch

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch, YARN-6858-YARN-3409.007.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2018-02-21 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372275#comment-16372275
 ] 

Devaraj K commented on YARN-5764:
-

[~miklos.szeg...@cloudera.com] Thanks for the comments.

bq. Is MB not supported?
Here the conversion is to MB; if the value is already in MB, it is taken 
as-is.

bq. Containers can change their resource usage. I do not see that supported, 
yet. It may need another jira.
Agree, I will create another jira to handle this.

I have addressed the other comments in the patch; please take a look.

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
>Priority: Major
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch, YARN-5764-v4.patch, YARN-5764-v5.patch, 
> YARN-5764-v6.patch, YARN-5764-v7.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. Yarn containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.
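
For illustration, a minimal sketch of the numactl-based pinning this 
describes, assuming the launch command is available as a list of strings (the 
helper name is hypothetical, not the patch's code):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: prepend a numactl prefix so the container's CPU
// scheduling and memory allocations are both bound to a single NUMA node.
public final class NumaPinning {
  private NumaPinning() {
  }

  public static List<String> pinToNode(List<String> launchCommand, int node) {
    List<String> cmd = new ArrayList<>();
    cmd.add("numactl");
    cmd.add("--cpunodebind=" + node); // run only on the CPUs of this node
    cmd.add("--membind=" + node);     // allocate memory only from this node
    cmd.addAll(launchCommand);
    return cmd;
  }
}
{code}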



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372265#comment-16372265
 ] 

genericqa commented on YARN-5764:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5764 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911440/YARN-5764-v7.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f5556913585b 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Buil

[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372242#comment-16372242
 ] 

Naganarasimha G R commented on YARN-6858:
-

The last build results eclipsed [~bibinchundatt]'s comments, so I missed 
them. Uploading the patch shortly.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-21 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372241#comment-16372241
 ] 

Billie Rinaldi commented on YARN-7942:
--

I propose a slightly different solution. In the cluster where I encountered 
this, the RM was configured with its own jaas configuration, but 
RegistrySecurity was overwriting that with a bad jaas config. I think we 
should use the provided jaas config instead of overwriting it. Since the 
ServiceClient is currently only run by the RM, it seems reasonable to require 
that the RM be configured properly for secure ZK access.
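
For context, a minimal sketch of the kind of RM-provided jaas configuration 
being referred to, assuming a Kerberos keytab login; the keytab path and 
principal are placeholders, not values from any real cluster:

{code}
// Example jaas.conf passed via -Djava.security.auth.login.config=/path/to/jaas.conf
// (placeholder keytab path and principal)
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/_HOST@EXAMPLE.COM";
};
{code}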

> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.Serv

[jira] [Comment Edited] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372236#comment-16372236
 ] 

Yufei Gu edited comment on YARN-7738 at 2/22/18 12:10 AM:
--

[~leftnoteasy], could you tell me a little more about why CS supports 
refreshing the maximum allocation while the RM is running? It looks kind of 
weird to me that reloading the whole set of settings in resource-types.xml 
isn't supported, but only one of these settings is allowed to be reloaded 
dynamically. Thanks. 


was (Author: yufeigu):
[~leftnoteasy], could you tell me a little more about why CS supports refresh 
maximum allocation while RM is running? Thanks. 

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372239#comment-16372239
 ] 

genericqa commented on YARN-5714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 8 new + 150 unchanged - 0 fixed = 158 total (was 150) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911442/YARN-5714.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7d841231588 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19763/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19763/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372236#comment-16372236
 ] 

Yufei Gu commented on YARN-7738:


[~leftnoteasy], could you tell me a little more about why CS supports 
refreshing the maximum allocation while the RM is running? Thanks. 

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7942) Yarn ServiceClient does not not delete znode from secure ZooKeeper

2018-02-21 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7942:
-
Attachment: YARN-7942.002.patch

> Yarn ServiceClient does not not delete znode from secure ZooKeeper
> --
>
> Key: YARN-7942
> URL: https://issues.apache.org/jira/browse/YARN-7942
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7942.001.patch, YARN-7942.002.patch
>
>
> Even with sasl:rm:cdrwa set on the ZK node (from the registry system accounts 
> property), the RM fails to remove the node with the below error. Also, the 
> destroy call succeeds.
> {code}
> 2018-02-16 15:49:29,691 WARN  client.ServiceClient 
> (ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
> /users/hbase/services/yarn-service/hbase-app-test
> org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
> `/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized 
> to access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
> /registry/users/hbase/services/yarn-service/hbase-app-test
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
> at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
> at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
> at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
> at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> com.google.inject.ser

[jira] [Commented] (YARN-7916) Remove call to docker logs on failure in container-executor

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372180#comment-16372180
 ] 

Hudson commented on YARN-7916:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13699 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13699/])
YARN-7916. Remove call to docker logs on failure in container-executor. (jlowe: 
rev 3132709b46a35f70cf5278f3ace677e6e18a1d03)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c


> Remove call to docker logs on failure in container-executor
> ---
>
> Key: YARN-7916
> URL: https://issues.apache.org/jira/browse/YARN-7916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7916.001.patch
>
>
> If a Docker container fails with a non-zero exit code, container-executor 
> attempts to run a {{docker logs --tail=250 container_name}} to provide more 
> details on why the container failed. While the idea is good, the current 
> implementation will fail for most containers as they are leveraging a launch 
> script whose output will be redirected to a file. The {{--tail}} option 
> throws an error if no log output is available for the container, resulting in 
> the docker logs command returning rc=1 in most cases.
> I propose we remove this code from container-executor. Alternative approaches 
> to handle logging can be explored as part of supporting an image's entrypoint.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2018-02-21 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372162#comment-16372162
 ] 

Jim Brennan commented on YARN-5714:
---

As this has become a blocking issue for YARN-7677, I took the liberty of 
updating [~rcatherinot]'s patch.

I made minimal changes to it: fixed some compilation issues that cropped up 
due to changes since then, fixed a bug in one of the tests where it stopped 
after a single iteration instead of running through all combinations, and 
added a few more test cases for getEnvDependencies.

I tested this locally and it appears to work well. I put up a new patch to 
get the ball rolling on this one again.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order internally used by the map implementation (hash based). This 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, and so native libraries weren't loaded. Jobs were running, but 
> not at their best efficiency. This is just one use case hitting that bug, 
> but I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA plus doing my 
> best to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows 
> env variable syntax, so it will take me more time/research.
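
To make the ordering problem concrete, here is a minimal sketch, assuming 
variables reference each other with $VAR or ${VAR} syntax (this is 
illustrative, not the attached patch): it emits each variable only after 
everything it references.

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: order env vars so that referenced vars come first,
// e.g. HADOOP_COMMON_HOME before LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib.
public final class EnvOrdering {
  private static final Pattern REF =
      Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)\\}?");

  public static List<String> ordered(Map<String, String> env) {
    List<String> out = new ArrayList<>();
    Set<String> visited = new HashSet<>();
    for (String name : env.keySet()) {
      visit(name, env, visited, out);
    }
    return out; // variable names in safe declaration order
  }

  private static void visit(String name, Map<String, String> env,
      Set<String> visited, List<String> out) {
    if (!env.containsKey(name) || !visited.add(name)) {
      return; // external var, or already handled (this also breaks cycles)
    }
    Matcher m = REF.matcher(env.get(name));
    while (m.find()) {
      visit(m.group(1), env, visited, out); // declare dependencies first
    }
    out.add(name);
  }
}
{code}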



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2018-02-21 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: YARN-5714.007.patch

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Attachments: YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order internally used by the map implementation (hash based). This 
> does not take into consideration that some env variables may refer to each 
> other, so some env variables must be declared before those referencing them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, and so native libraries weren't loaded. Jobs were running, but 
> not at their best efficiency. This is just one use case hitting that bug, 
> but I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA plus doing my 
> best to add tests.
> Note: the patch is not OS aware, with a default empty implementation. I will 
> only implement the unix version in a 1st release. I'm not used to Windows 
> env variable syntax, so it will take me more time/research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5764) NUMA awareness support for launching containers

2018-02-21 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-5764:

Attachment: YARN-5764-v7.patch

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
>Priority: Major
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch, YARN-5764-v4.patch, YARN-5764-v5.patch, 
> YARN-5764-v6.patch, YARN-5764-v7.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. Yarn containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7916) Remove call to docker logs on failure in container-executor

2018-02-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372144#comment-16372144
 ] 

Jason Lowe commented on YARN-7916:
--

Thanks for the patch!  +1 lgtm as well.  Committing this.

> Remove call to docker logs on failure in container-executor
> ---
>
> Key: YARN-7916
> URL: https://issues.apache.org/jira/browse/YARN-7916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7916.001.patch
>
>
> If a Docker container fails with a non-zero exit code, container-executor 
> attempts to run a {{docker logs --tail=250 container_name}} to provide more 
> details on why the container failed. While the idea is good, the current 
> implementation will fail for most containers as they are leveraging a launch 
> script whose output will be redirected to a file. The {{--tail}} option 
> throws an error if no log output is available for the container, resulting in 
> the docker logs command returning rc=1 in most cases.
> I propose we remove this code from container-executor. Alternative approaches 
> to handle logging can be explored as part of supporting an image's entrypoint.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372133#comment-16372133
 ] 

genericqa commented on YARN-7945:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 0 new + 0 unchanged - 2 fixed = 0 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-7945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911433/YARN-7945-branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 706f9708a6d5 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / f7e5e45 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19761/testReport/ |
| Max. process+thread count | 93 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19761/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Java Doc error in UnmanagedAMPoolManager for branch-2
> ---

[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372117#comment-16372117
 ] 

Jason Lowe commented on YARN-7935:
--

Curious, why does Spark (or any other application) need help from YARN to 
determine the hostname?  Most applications simply determine the hostname on 
their own, e.g. by calling gethostname(2) or the equivalent for that 
application's runtime.  For example, org.apache.hadoop.net.NetUtils has 
methods that could either be leveraged directly or used as Java sample code.
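
For example, a self-contained Java equivalent of gethostname(2); run inside a 
Docker container, this returns the container's hostname rather than the 
NodeManager host's:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameExample {
  public static void main(String[] args) throws UnknownHostException {
    // Resolves the local hostname of whatever environment the JVM runs in;
    // inside a Docker container, that is the container's hostname.
    System.out.println(InetAddress.getLocalHost().getHostName());
  }
}
{code}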


> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, 
> which is different from the NodeManager's hostname (NM_HOST, which is 
> available as an env var during container launch) when launched through the 
> Docker runtime. The container's hostname can be exposed to applications via 
> an env var CONTAINER_HOSTNAME. Another potential candidate is the 
> container's IP, but this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5764) NUMA awareness support for launching containers

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372115#comment-16372115
 ] 

genericqa commented on YARN-5764:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
33s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 219 unchanged - 0 fixed = 220 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
42s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5764 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911421/YARN-5764-v6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux bc7ca109aa8e

[jira] [Updated] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-21 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7446:

Target Version/s:   (was: 3.1.0)
Priority: Major  (was: Blocker)

> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container 
> has no way to use root privileges unless there is a sticky bit or a sudoers 
> entry in the image that lets the specified user gain privileges again.  To 
> avoid dropping and then reacquiring root privileges, we can stop specifying 
> both flags together.  When privileged mode is enabled, the --user flag 
> should be omitted; when non-privileged mode is enabled, the --user flag is 
> supplied.
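
A minimal sketch of the proposed mutual exclusion, with a hypothetical helper 
(not the actual container-executor code, which is written in C): privileged 
launches omit --user, and non-privileged launches supply it.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed behavior: --privileged and --user
// are treated as mutually exclusive on the docker run command line.
public final class DockerRunFlags {
  private DockerRunFlags() {
  }

  public static List<String> userFlags(boolean privileged, String uid,
      String gid) {
    List<String> flags = new ArrayList<>();
    if (privileged) {
      flags.add("--privileged=true"); // root inside the container; no --user
    } else {
      flags.add("--user=" + uid + ":" + gid); // run as the requested user
    }
    return flags;
  }
}
{code}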



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-02-21 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Target Version/s:   (was: 3.1.0)
Priority: Major  (was: Blocker)

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.
> # We should remove --user=uid:gid for privileged containers.
>
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not have access to 
> become the root user.  All docker exec commands will be dropped to the 
> uid:gid user instead of being granted privileges.  A user can gain root 
> privileges if the container file system contains files that give the user 
> extra power, but this type of image is considered dangerous.  A 
> non-privileged user can launch a container with special bits to acquire the 
> same level of root power.  Hence, we lose control of which images should be 
> run with --privileged, and who has sudo rights to use privileged container 
> images.  As a result, we should check for sudo access and then decide to 
> parameterize --privileged=true OR --user=uid:gid.  This will avoid leading 
> developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-02-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372110#comment-16372110
 ] 

Eric Yang commented on YARN-7221:
-

YARN-7654 will change the launcher script invocation to run outside the 
docker container instead of inside it.  Until that work is completed, it is 
not safe to run privileged containers, because data written to the yarn 
localizer directory might contain root-owned files, which would prevent the 
localized directory from being cleaned up.  YARN-7654 might not be completed 
in the 3.1 release.  Hence, removing this JIRA as a blocker for the 3.1 
release.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.
> # We should remove --user=uid:gid for privileged containers.
>
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not have access to 
> become the root user.  All docker exec commands will be dropped to the 
> uid:gid user instead of being granted privileges.  A user can gain root 
> privileges if the container file system contains files that give the user 
> extra power, but this type of image is considered dangerous.  A 
> non-privileged user can launch a container with special bits to acquire the 
> same level of root power.  Hence, we lose control of which images should be 
> run with --privileged, and who has sudo rights to use privileged container 
> images.  As a result, we should check for sudo access and then decide to 
> parameterize --privileged=true OR --user=uid:gid.  This will avoid leading 
> developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372112#comment-16372112
 ] 

Eric Yang commented on YARN-7446:
-

YARN-7654 will change the launcher script invocation to run external to the 
docker container instead of running the launcher script inside it.  Until that 
work is completed, it is not safe to run privileged containers because data 
written to the YARN localizer directory might contain root-owned files, which 
would prevent the localized directory from being cleaned up.  YARN-7654 might 
not be completed in the 3.1 release, hence removing this JIRA as a blocker for 
3.1.

> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container has 
> no way to use root privileges unless the image contains a sticky bit or a 
> sudoers entry that lets the specified user regain privileges.  To avoid 
> dropping and then reacquiring root privileges, we should not specify both 
> flags together.  When privileged mode is enabled, the --user flag should be 
> omitted; when non-privileged mode is enabled, the --user flag is supplied.






[jira] [Commented] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372108#comment-16372108
 ] 

genericqa commented on YARN-7954:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  9m 
24s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-services-core in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-services-core in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 14s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911422/YARN-7954.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c5e61ae069e7 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2bc3351 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/19760/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/19760/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-serv

[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372106#comment-16372106
 ] 

Botong Huang commented on YARN-7945:


Oh, I didn't know that would do it. 002 patch uploaded. Thanks! 

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch, 
> YARN-7945-branch-2.002.patch
>
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}






[jira] [Updated] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-21 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7945:
---
Attachment: YARN-7945-branch-2.002.patch

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch, 
> YARN-7945-branch-2.002.patch
>
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-02-21 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372089#comment-16372089
 ] 

Eric Yang commented on YARN-7221:
-

[~ebadger] Patch 005 will pass the string username instead of uid:gid from the 
node manager to container-executor when requesting a privileged container.
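
For illustration, here is a minimal Java sketch of the mutually exclusive flag 
choice this patch series is converging on. The class and method names are 
hypothetical, not the actual container-executor or NM runtime API:

{code:java}
final class DockerRunFlagSketch {

  /**
   * Chooses between --privileged=true and --user=uid:gid, never both.
   * userHasSudo stands in for the sudo-rights check described above.
   */
  static String chooseRunFlag(boolean privileged, boolean userHasSudo,
      String username, int uid, int gid) {
    if (privileged) {
      if (!userHasSudo) {
        throw new SecurityException("user " + username
            + " lacks the sudo rights required for privileged containers");
      }
      return "--privileged=true";        // no --user override in this mode
    }
    return "--user=" + uid + ":" + gid;  // drop to the submitting user
  }

  public static void main(String[] args) {
    System.out.println(chooseRunFlag(true, true, "hdpuser", 1000, 1000));
    System.out.println(chooseRunFlag(false, false, "hdpuser", 1000, 1000));
  }
}
{code}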

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not be able to become 
> root.  All docker exec commands will drop to the uid:gid user instead of 
> being granted privileges.  A user can gain root privileges if the container 
> file system contains files that give the user extra power, but this type of 
> image is considered dangerous.  A non-privileged user can launch a container 
> with special bits to acquire the same level of root power.  Hence, we lose 
> control of which images may be run with --privileged, and of who has sudo 
> rights to use privileged container images.  As a result, we should check for 
> sudo access and then decide whether to pass --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-21 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372082#comment-16372082
 ] 

Jason Lowe commented on YARN-7945:
--

Sure, or you can fully qualify the classname in the Javadoc comment, which 
fixes the doc build issue and also avoids the unused-import warning.
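
For example, a minimal sketch of the fully-qualified form (assuming the class 
in question is org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; 
the surrounding comment text is elided):

{code:java}
// No "import ...ApplicationSubmissionContext;" is needed when the @see
// reference is fully qualified, so checkstyle sees no unused import:
/**
 * ...
 * @see org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
 */
{code}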

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch
>
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}






[jira] [Commented] (YARN-7871) Node attributes reporting from NM to RM

2018-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371815#comment-16371815
 ] 

Sunil G commented on YARN-7871:
---

Makes sense. With this assumption, is this patch good to go in based on your 
tests?

Thanks [~cheersyang]

> Node attributes reporting from NM to RM 
> 
>
> Key: YARN-7871
> URL: https://issues.apache.org/jira/browse/YARN-7871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7871-YARN-3409.001.patch
>
>
> Support to initialize proper attribute provider based on user's configuration.
> NM collects node attributes from a configured attribute provider and send 
> them to RM via HB. 






[jira] [Created] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-02-21 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-7953:
--

 Summary: [GQ] Data structures for federation global queues 
calculations
 Key: YARN-7953
 URL: https://issues.apache.org/jira/browse/YARN-7953
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino


This JIRA tracks the data structures and helper classes used by the core 
algorithms of the YARN-7402 umbrella JIRA (currently YARN-7403 and YARN-7834).






[jira] [Created] (YARN-7957) Yarn service delete option disappears after stopping application

2018-02-21 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7957:


 Summary: Yarn service delete option disappears after stopping 
application
 Key: YARN-7957
 URL: https://issues.apache.org/jira/browse/YARN-7957
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.1.0
Reporter: Yesha Vora


Steps:

1) Launch a yarn service
2) Go to the service page and click on the Settings button -> "Stop Service". 
The application will be stopped.
3) Refresh the page

Here, the Settings button disappears. Thus, the user cannot delete the service 
from the UI after stopping the application.

Expected behavior:
The Settings button should still be present on the UI page after the 
application is stopped. If the application is stopped, the Settings button 
should only offer the "Delete Service" action.






[jira] [Created] (YARN-7956) HOME/Services/ and HOME/Services//Components refer to same page

2018-02-21 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7956:


 Summary: HOME/Services/ and 
HOME/Services//Components refer to same page
 Key: YARN-7956
 URL: https://issues.apache.org/jira/browse/YARN-7956
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.1.0
Reporter: Yesha Vora


Scenario:

1) Start a Yarn service
2) Click on a running yarn service (example: yesha-sleeper)
http://:8088/ui2/#/yarn-app/application_1518804855867_0002/components?service=yesha-sleeper
3) Now click on the yesha-sleeper [application_1518804855867_0002] link

Both the Components link and the yesha-sleeper 
[application_1518804855867_0002] link point to the same page: 
HOME/Services/ and HOME/Services//Components refer to the 
same page.

We should not need two links that refer to one page.






[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372036#comment-16372036
 ] 

genericqa commented on YARN-6528:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 23 new + 412 unchanged - 3 fixed = 435 total (was 415) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911415/YARN-6528.v008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d90a0e653487 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 004b722 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19758/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19758/testReport/ |
| Max. process+thread count | 800 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-res

[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-02-21 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.005.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not be able to become 
> root.  All docker exec commands will drop to the uid:gid user instead of 
> being granted privileges.  A user can gain root privileges if the container 
> file system contains files that give the user extra power, but this type of 
> image is considered dangerous.  A non-privileged user can launch a container 
> with special bits to acquire the same level of root power.  Hence, we lose 
> control of which images may be run with --privileged, and of who has sudo 
> rights to use privileged container images.  As a result, we should check for 
> sudo access and then decide whether to pass --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.






[jira] [Updated] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7954:

Attachment: YARN-7954.001.patch

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7954.001.patch
>
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for the service status
> Here, the Yarn service status is marked as "finished". However, the component 
> status still shows Ready. 
> On stopping a yarn service, the component status should be updated to "Stop"






[jira] [Updated] (YARN-5764) NUMA awareness support for launching containers

2018-02-21 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-5764:

Attachment: YARN-5764-v6.patch

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
>Priority: Major
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch, YARN-5764-v4.patch, YARN-5764-v5.patch, YARN-5764-v6.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. Yarn containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.
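
As a rough illustration of the pinning idea (numactl's --cpunodebind and 
--membind flags are standard; how the NM wires this into the container launch 
command is an assumption here, not the patch's actual mechanism):

{code:java}
/** Sketch: a numactl prefix that pins a process to one NUMA node. */
final class NumaPinSketch {
  static String[] numaPrefix(int node) {
    // CPU and memory allocations are both bound to the same node, so all
    // subsequent memory accesses stay node-local.
    return new String[] {
        "numactl", "--cpunodebind=" + node, "--membind=" + node};
  }
}
{code}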






[jira] [Updated] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread Xiaohua (Victor) Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohua (Victor) Liang updated YARN-6528:
-
Attachment: YARN-6528.v008.patch

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time, which 
> is predictably allocated to them. In order to understand the performance of 
> Rayon in finer detail, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.






[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371928#comment-16371928
 ] 

Hudson commented on YARN-5028:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13697 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13697/])
YARN-5028. RMStateStore should trim down app state for completed (yufei: rev 
92cbbfe79ec009a19a71a7f44329a4b2f9fa9be6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java


> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 






[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371908#comment-16371908
 ] 

Yufei Gu commented on YARN-5028:


Committed to trunk. Thanks for the patch, [~grepas]. Thanks for the review, 
[~snemeth] and [~rohithsharma]. 

Do you need a branch-2 version? [~grepas]

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch, 
> YARN-5028.005.patch, YARN-5028.006.patch, YARN-5028.007.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 
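
As a rough illustration of the trimming idea (the field names below are made 
up, not the actual RMStateStore/ApplicationStateData API):

{code:java}
/** Sketch only: the subset of app state worth keeping for completed apps. */
final class TrimmedAppStateSketch {
  final String applicationId;
  final long finishTime;
  final String finalStatus;
  final String diagnostics;
  // Deliberately absent: the ApplicationSubmissionContext. It is needed to
  // recover running apps after an RM restart, not to serve the status of
  // completed ones.
  TrimmedAppStateSketch(String applicationId, long finishTime,
      String finalStatus, String diagnostics) {
    this.applicationId = applicationId;
    this.finishTime = finishTime;
    this.finalStatus = finalStatus;
    this.diagnostics = diagnostics;
  }
}
{code}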






[jira] [Created] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-21 Thread Gour Saha (JIRA)
Gour Saha created YARN-7955:
---

 Summary: Calling stop on an already stopped service says 
"Successfully stopped service"
 Key: YARN-7955
 URL: https://issues.apache.org/jira/browse/YARN-7955
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.1.0
Reporter: Gour Saha


If you invoke "yarn app -stop " on an already stopped service, it 
confusingly responds with the message "Successfully stopped service 
". It should say "Service is already stopped".

The same is seen with the REST API: for a PUT request with data \{ "state": 
"STOPPED"}, the response is 200 OK with diagnostics carrying the same message 
"Successfully stopped service ". It should return 400 Bad Request 
with the message "Service is already stopped".
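
A minimal sketch of the suggested server-side behavior, using the standard 
javax.ws.rs Response builder (the service-state enum and accessors here are 
hypothetical):

{code:java}
import javax.ws.rs.core.Response;

final class StopServiceSketch {
  enum ServiceState { RUNNING, STOPPED }

  /** Reject a stop request for an already stopped service with 400. */
  static Response stopService(String name, ServiceState currentState) {
    if (currentState == ServiceState.STOPPED) {
      return Response.status(Response.Status.BAD_REQUEST)
          .entity("Service " + name + " is already stopped").build();
    }
    // ... actually stop the service here ...
    return Response.ok("Successfully stopped service " + name).build();
  }
}
{code}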






[jira] [Updated] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7954:

Affects Version/s: 3.1.0
 Target Version/s: 3.1.0

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for the service status
> Here, the Yarn service status is marked as "finished". However, the component 
> status still shows Ready. 
> On stopping a yarn service, the component status should be updated to "Stop"






[jira] [Commented] (YARN-7949) ArtifactsId should not be a compulsory field for new service

2018-02-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371934#comment-16371934
 ] 

genericqa commented on YARN-7949:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7949 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911335/YARN-7949.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux af4a590ef3dc 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 004b722 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 456 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19757/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ArtifactsId should not be a compulsory field for new service
> 
>
> Key: YARN-7949
> URL: https://issues.apache.org/jira/browse/YARN-7949
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Attachments: YARN-7949.001.patch
>
>
> 1) Click on New Service 
> 2) Create a component
> The Create Component page has Artifacts Id as a compulsory entry. Some yarn 
> service examples, such as sleeper.json, do not need to provide an artifacts id.
> {code:java|title=sleeper.json}
> {
>   "name": "sleeper-service",
>   "components" :
>   [
> {
>   "name": "sleeper",
>   "number_of_containers": 2,
>   "launch_command": "sleep 90",
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   }
> }
>   ]
> }{code}
> Thus, artifactsId should not be a compulsory field.






[jira] [Created] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7954:


 Summary: Component status stays "Ready" when yarn service is 
stopped
 Key: YARN-7954
 URL: https://issues.apache.org/jira/browse/YARN-7954
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Yesha Vora


Steps:

1) Launch yarn service application

2) Stop application

3) Run get status from yarn cli

 {code}
[hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper

WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
YARN_LOG_DIR.

WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
YARN_LOGFILE.

WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
YARN_PID_DIR.

WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.

18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
xxx/xx.xx.xx.xx:8050

18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
server at xxx/xx.xx.xx.xx:10200

18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
xxx/xx.xx.xx.xx:8050

18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
server at xxx/xx.xx.xx.xx:10200

18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms

{"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}

 {code}
4) Validate UI2 for the service status

Here, the Yarn service status is marked as "finished". However, the component 
status still shows Ready. 

On stopping a yarn service, the component status should be updated to "Stop"






[jira] [Commented] (YARN-7856) Validation node attributes in NM

2018-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371814#comment-16371814
 ] 

Sunil G commented on YARN-7856:
---

Thanks [~cheersyang]

A few comments:

1. In {{equals}}, could you please check the case where only one of the 
getAttributePrefix values is null? (We only check the both-null scenario now.)

2. In {{validateNodeAttributes}}, we could add some checks to reject special 
characters, correct? E.g., allow only alphanumeric strings.

3. The number of attributes may be small; however, instead of calling 
{{validateNodeAttributes}} with a set of attributes, we could validate each 
attribute individually (see the sketch below).
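
A minimal illustrative sketch of these points (the helper names are 
hypothetical, not the YARN-3409 API):

{code:java}
import java.util.Objects;
import java.util.regex.Pattern;

final class NodeAttributeChecksSketch {
  // Point 2: allow only alphanumeric names/prefixes plus '.', '_' and '-'.
  private static final Pattern VALID =
      Pattern.compile("^[a-zA-Z0-9][a-zA-Z0-9_.\\-]*$");

  // Point 1: null-safe prefix comparison; a one-sided null returns false
  // instead of throwing or being skipped.
  static boolean prefixEquals(String prefixA, String prefixB) {
    return Objects.equals(prefixA, prefixB);
  }

  // Point 3: validate a single attribute name at a time.
  static void validateAttributeName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      throw new IllegalArgumentException(
          "Invalid node attribute name: " + name);
    }
  }
}
{code}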

> Validation node attributes in NM
> 
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7856-YARN-3409.001.patch
>
>
> The NM needs to properly validate the attributes before sending them to the 
> RM; this includes:
> # a valid prefix is present
> # no duplicate entries
> # not allowing two attributes with the same prefix/name but different types
> This could be a utility class that can be used on both the RM and NM sides.






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371805#comment-16371805
 ] 

Sunil G commented on YARN-6858:
---

[~Naganarasimha] Could you please check [~bibinchundatt]'s comments so that we 
can try to get this in this week.

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch
>
>
> Similar to CommonNodeLabelsManager, we need a centralized manager for Node 
> Attributes too.






[jira] [Assigned] (YARN-7955) Calling stop on an already stopped service says "Successfully stopped service"

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7955:
---

Assignee: Gour Saha

> Calling stop on an already stopped service says "Successfully stopped service"
> --
>
> Key: YARN-7955
> URL: https://issues.apache.org/jira/browse/YARN-7955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
>
> If you invoke "yarn app -stop " on an already stopped service, 
> it confusingly responds with the message "Successfully stopped service 
> ". It should say "Service is already stopped".
> The same is seen with the REST API: for a PUT request with data \{ "state": 
> "STOPPED"}, the response is 200 OK with diagnostics carrying the same message 
> "Successfully stopped service ". It should return 400 Bad 
> Request with the message "Service is already stopped".






[jira] [Commented] (YARN-7945) Java Doc error in UnmanagedAMPoolManager for branch-2

2018-02-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371964#comment-16371964
 ] 

Botong Huang commented on YARN-7945:


Hi [~jlowe], it looks like branch-2 always flags this import as unused (a 
checkstyle issue), while in YARN-6128 on trunk the same patch doesn't trigger 
checkstyle. Can we just add the import here to fix the javadoc failure? 

> Java Doc error in UnmanagedAMPoolManager for branch-2
> -
>
> Key: YARN-7945
> URL: https://issues.apache.org/jira/browse/YARN-7945
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.9.1
>Reporter: Rohith Sharma K S
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7945-branch-2.001.patch
>
>
> In branch-2, I see a javadoc error while building the package. 
> {code}
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:151:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> [ERROR]   ^
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:204:
>  error: reference not found
> [ERROR]* @see ApplicationSubmissionContext
> {code}






[jira] [Commented] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371878#comment-16371878
 ] 

Gour Saha commented on YARN-7954:
-

I have a patch for this

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for the service status
> Here, the Yarn service status is marked as "finished". However, the component 
> status still shows Ready. 
> On stopping a yarn service, the component status should be updated to "Stop"






[jira] [Assigned] (YARN-7954) Component status stays "Ready" when yarn service is stopped

2018-02-21 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7954:
---

Assignee: Gour Saha

> Component status stays "Ready" when yarn service is stopped
> ---
>
> Key: YARN-7954
> URL: https://issues.apache.org/jira/browse/YARN-7954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
>
> Steps:
> 1) Launch yarn service application
> 2) Stop application
> 3) Run get status from yarn cli
>  {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 10:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:37 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 10:54:37 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.xx:10200
> 18/02/16 10:54:38 INFO util.log: Logging initialized @1957ms
> {"name":"yesha-sleeper","lifetime":-1,"components":[],"configuration":{"properties":{},"env":{},"files":[]},"state":"STOPPED","quicklinks":{},"kerberos_principal":{}}
>  {code}
> 4) Validate UI2 for the service status
> Here, the Yarn service status is marked as "finished". However, the component 
> status still shows Ready. 
> On stopping a yarn service, the component status should be updated to "Stop"






[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations

2018-02-21 Thread Xiaohua (Victor) Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371859#comment-16371859
 ] 

Xiaohua (Victor) Liang commented on YARN-6528:
--

As mentioned in the previous comment, the newly posted v008 patch contains the 
changes necessary to bring v007 up to date with the latest trunk.

> [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan 
> Operations
> -
>
> Key: YARN-6528
> URL: https://issues.apache.org/jira/browse/YARN-6528
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Xiaohua (Victor) Liang
>Priority: Major
> Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, 
> YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, 
> YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time, which 
> is predictably allocated to them. In order to understand the performance of 
> Rayon in finer detail, YARN-6528 proposes to include JMX metrics in the Plan 
> Follower, Agent Placement and Plan Operations components of Rayon.






[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-21 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371824#comment-16371824
 ] 

Carlo Curino commented on YARN-7732:


Thanks [~leftnoteasy]. So should we then push to branch-3.0 (i.e., for all 
future 3.x branches)?

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch, YARN-7732.06.patch
>
>
> Extract the MapReduce-specific setup in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM setup in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set-up was also only suitable for MapReduce: 
>  
> {code:java}
> // Excerpted from SLSRunner.java:
> // https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only MapReduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, along with the SLS set-up required to support the synthetic generation 
> of generic jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we also describe a StreamAMSimulator, which 
> simulates a long-running streaming service that maintains N containers for 
> the lifetime of the AM. See syn_stream.json.
>  
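
To make "pluggable AMSimulators" concrete, here is a minimal sketch of 
resolving a simulator from the job type found in syn.json; the registry shape 
and names are assumptions for illustration, not the actual SLS wiring:

{code:java}
// Hypothetical sketch: maps the job "type" field from syn.json to an AM
// simulator class, instead of hard-coding SLSUtils.DEFAULT_JOB_TYPE.
import java.util.HashMap;
import java.util.Map;

public class AmSimulatorRegistrySketch {
  private final Map<String, Class<?>> amClassMap = new HashMap<>();

  public void register(String jobType, Class<?> simulatorClass) {
    amClassMap.put(jobType, simulatorClass);
  }

  public Object newSimulator(String jobType)
      throws ReflectiveOperationException {
    Class<?> clazz = amClassMap.get(jobType);
    if (clazz == null) {
      throw new IllegalArgumentException(
          "No AM simulator registered for job type: " + jobType);
    }
    // Reflection keeps the runner agnostic of concrete simulator types
    // (e.g. a MapReduce AM vs. a long-running streaming AM).
    return clazz.getDeclaredConstructor().newInstance();
  }
}
{code}

At start-up the runner could then register, e.g., "mapreduce" -> 
MRAMSimulator.class and a streaming type -> StreamAMSimulator.class, and 
instantiate one simulator per job from the type field in syn.json.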



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4488) CapacityScheduler: Compute per-container allocation latency and roll up to get per-application and per-queue

2018-02-21 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371732#comment-16371732
 ] 

Manikandan R edited comment on YARN-4488 at 2/21/18 5:43 PM:
-

[~leftnoteasy] Thanks.

Overall approach (as described in YARN-4485) is:
 # Set the request start time while creating the ResourceRequest object.
 # Once a container is allocated, subtract #1 from now() to compute the time 
taken.
 # Use #2 to update the corresponding queue metrics. For example, for a job 
using 2 containers, the newly added queue metrics would look like:

{"ContainerAllocationDelayNumOps" : 2, "ContainerAllocationDelayAvgTime" : 
1755.5}

As described in YARN-7902, setting the start time in ResourceRequest (which is 
in the hadoop-yarn-api package) requires the SystemClock class and its 
dependent Clock interface to be in hadoop-common instead of hadoop-yarn-common 
(which depends on the hadoop-yarn-api package). Hence, the idea is to move the 
SystemClock class etc. to hadoop-common so that it can be used from any 
package (not only YARN-related ones). Subsequently, we will need to make the 
corresponding import changes in all classes where those clock-related classes 
are used. Thoughts? If it makes sense, can we also move other classes such as 
MonotonicClock?

Main changes are in the {{AppSchedulingInfo}}, {{QueueMetrics}} and 
{{ResourceRequest}} classes. The attached patch covers metrics per queue only. 
Once the approach is confirmed, I will do the same per app, as that is also a 
requirement of this JIRA.
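
For illustration, the NumOps/AvgTime pair above maps naturally onto a Hadoop 
Metrics2 MutableRate; a minimal sketch, assuming explicit registry wiring (the 
class and method names below are hypothetical, not the attached patch):

{code:java}
// Hypothetical sketch, not the attached patch. A MutableRate registered as
// "containerAllocationDelay" emits ContainerAllocationDelayNumOps and
// ContainerAllocationDelayAvgTime, matching the sample metrics above.
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

public class QueueMetricsDelaySketch {
  private final MetricsRegistry registry =
      new MetricsRegistry("QueueMetricsDelaySketch");

  private final MutableRate containerAllocationDelay = registry.newRate(
      "containerAllocationDelay",
      "Latency between ResourceRequest creation and container allocation");

  // Steps 2 and 3 of the approach: on allocation, turn the start time that
  // was stamped on the ResourceRequest (step 1) into a delay sample.
  public void recordAllocationDelay(long requestStartTimeMs, long nowMs) {
    containerAllocationDelay.add(nowMs - requestStartTimeMs);
  }
}
{code}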


> CapacityScheduler: Compute per-container allocation latency and roll up to 
> get per-application and per-queue
> 
>
> Key: YARN-4488
> URL: https://issues.apache.org/jira/browse/YARN-4488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-4485.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4488) CapacityScheduler: Compute per-container allocation latency and roll up to get per-application and per-queue

2018-02-21 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371732#comment-16371732
 ] 

Manikandan R commented on YARN-4488:


[~leftnoteasy] Thanks.

Overall approach (as described in YARN-4485) is:
 # Set the request start time while creating the ResourceRequest object.
 # Once a container is allocated, subtract #1 from now() to compute the time 
taken.
 # Use #2 to update the corresponding queue metrics. For example, for a job 
using 2 containers, the newly added queue metrics would look like:

{"ContainerAllocationDelayNumOps" : 2, "ContainerAllocationDelayAvgTime" : 
1755.5}

As described in YARN-7902, setting the start time in ResourceRequest (which is 
in the hadoop-yarn-api package) requires the SystemClock class and its 
dependent Clock interface to be in hadoop-common instead of hadoop-yarn-common 
(which depends on the hadoop-yarn-api package). Hence, the idea is to move the 
SystemClock class etc. to hadoop-common so that it can be used from any 
package (not only YARN-related ones). Subsequently, we will need to make the 
corresponding import changes in all classes where those clock-related classes 
are used. Thoughts? If it makes sense, can we also move other classes such as 
MonotonicClock?

Main changes are in the {{AppSchedulingInfo}}, {{QueueMetrics}} and 
{{ResourceRequest}} classes. The attached patch covers metrics per queue only. 
Once the approach is confirmed, I will do the same per app, as that is also a 
requirement of this JIRA.

> CapacityScheduler: Compute per-container allocation latency and roll up to 
> get per-application and per-queue
> 
>
> Key: YARN-4488
> URL: https://issues.apache.org/jira/browse/YARN-4488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-4485.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-21 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371523#comment-16371523
 ] 

Eric Payne commented on YARN-7813:
--

Awesome! Thanks [~jlowe]

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2
>
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, 
> YARN-7813.005.branch-2.8.patch, YARN-7813.005.branch-3.0.patch, 
> YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.
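
For illustration, a per-queue override in capacity-scheduler.xml might look 
like the following; the property key is an assumption patterned on the 
existing per-queue disable_preemption setting, so the committed patch is 
authoritative for the real name:

{code:xml}
<!-- Hypothetical sketch: disable intra-queue preemption for root.default
     only. Queues with no explicit setting inherit their parent's value. -->
<property>
  <name>yarn.scheduler.capacity.root.default.intra-queue-preemption.disable_preemption</name>
  <value>true</value>
</property>
{code}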



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7947) Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps

2018-02-21 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371520#comment-16371520
 ] 

Eric Payne commented on YARN-7947:
--

Great! Thanks a lot, [~sunilg]

> Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps
> --
>
> Key: YARN-7947
> URL: https://issues.apache.org/jira/browse/YARN-7947
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2, 3.2.0
>
> Attachments: YARN-7947.001.patch
>
>
> The intra-queue preemption policy can cause an NPE for pending users with no 
> schedulable apps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


