[jira] [Reopened] (YARN-4227) FairScheduler: RM quits processing expired container from a removed node

2017-08-23 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg reopened YARN-4227:
-

[~Steven Rand] looks like you are correct; let's reopen this issue.
The {{containerCompleted}} call we make just before we release the container on 
the node is synchronised, which means we could have been waiting there just 
long enough for the node to become {{null}}.
The patch will need a rebase due to all the preemption changes that have gone 
in; I'll upload a new patch soon.
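
To illustrate the race, here is a minimal self-contained sketch (stand-in 
types, not the actual FairScheduler code) of the {{null}} guard the patch 
needs around the node lookup in {{completedContainer}}:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the race described above, with stand-in types. A node can be
 * removed while a caller is blocked on the synchronized containerCompleted()
 * path, so the node lookup that follows must tolerate null instead of
 * dereferencing blindly (the NPE in the stack trace quoted below).
 */
public class CompletedContainerRaceSketch {
  static class SchedulerNode {
    void releaseContainer(String containerId) {
      System.out.println("released " + containerId);
    }
  }

  private final Map<String, SchedulerNode> nodes = new ConcurrentHashMap<>();

  public synchronized void completedContainer(String nodeId, String containerId) {
    SchedulerNode node = nodes.get(nodeId);
    if (node == null) {
      // Node removed before the expired-container event was processed:
      // log and bail out rather than crash the RM.
      System.out.println("Container " + containerId
          + " completed on removed node " + nodeId + ", skipping");
      return;
    }
    node.releaseContainer(containerId);
  }

  public static void main(String[] args) {
    CompletedContainerRaceSketch s = new CompletedContainerRaceSketch();
    s.nodes.put("node1", new SchedulerNode());
    s.completedContainer("node1", "c1"); // normal release
    s.nodes.remove("node1");             // node lost/removed concurrently
    s.completedContainer("node1", "c2"); // guarded: no NPE
  }
}
{code}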

> FairScheduler: RM quits processing expired container from a removed node
> 
>
> Key: YARN-4227
> URL: https://issues.apache.org/jira/browse/YARN-4227
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0, 2.5.0, 2.7.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: YARN-4227.2.patch, YARN-4227.3.patch, YARN-4227.4.patch, 
> YARN-4227.patch
>
>
> Under some circumstances the node is removed before an expired container 
> event is processed, causing the RM to exit:
> {code}
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: 
> Expired:container_1436927988321_1307950_01_12 Timed out after 600 secs
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_1436927988321_1307950_01_12 Container Transitioned from 
> ACQUIRED to EXPIRED
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: 
> Completed container: container_1436927988321_1307950_01_12 in state: 
> EXPIRED event:EXPIRE
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=system_op   
>OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS  
> APPID=application_1436927988321_1307950 
> CONTAINERID=container_1436927988321_1307950_01_12
> 2015-10-04 21:14:01,063 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type CONTAINER_EXPIRED to the scheduler
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.completedContainer(FairScheduler.java:849)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:585)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-10-04 21:14:01,063 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> The stack trace is from 2.3.0, but the same issue has been observed in 2.5.0 
> and 2.6.0 by different customers.






[jira] [Created] (YARN-7091) Some rename changes in yarn-native-services

2017-08-23 Thread Jian He (JIRA)
Jian He created YARN-7091:
-

 Summary: Some rename changes in yarn-native-services
 Key: YARN-7091
 URL: https://issues.apache.org/jira/browse/YARN-7091
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He









[jira] [Resolved] (YARN-7011) yarn-daemon.sh is not respecting --config option

2017-08-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved YARN-7011.

Resolution: Not A Problem

Closing this again, since the behavior is actually the same given the same 
configs.

> yarn-daemon.sh is not respecting --config option
> 
>
> Key: YARN-7011
> URL: https://issues.apache.org/jira/browse/YARN-7011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Trivial
>
> Steps to reproduce:
> 1. Copy the conf to a temporary location /tmp/Conf
> 2. Modify anything in yarn-site.xml under /tmp/Conf/. Ex: give an invalid RM 
> address
> 3. Restart the resourcemanager using yarn-daemon.sh with --config /tmp/Conf
> 4. --config is not respected, as the changes made in /tmp/Conf/yarn-site.xml 
> are not picked up while restarting the RM






[jira] [Commented] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn

2017-08-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139539#comment-16139539
 ] 

Akira Ajisaka commented on YARN-6712:
-

Hi [~vinodkv], I have now filed MAPREDUCE-6946.

> Moving logging APIs over to slf4j in hadoop-yarn
> 
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>







[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139516#comment-16139516
 ] 

Hadoop QA commented on YARN-7047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_144
 with JDK v1.8.0_144 generated 0 new + 23 unchanged - 5 fixed = 23 total (was 
28) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_131
 with JDK v1.7.0_131 generated 0 new + 25 unchanged - 5 fixed = 25 total (was 
30) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 4 new + 1663 unchanged - 14 fixed = 1667 total (was 1677) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 20s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-7047 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139506#comment-16139506
 ] 

Hadoop QA commented on YARN-7047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_144
 with JDK v1.8.0_144 generated 0 new + 23 unchanged - 5 fixed = 23 total (was 
28) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_151
 with JDK v1.7.0_151 generated 0 new + 25 unchanged - 5 fixed = 25 total (was 
30) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 4 new + 1669 unchanged - 14 fixed = 1673 total (was 1683) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 51s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_151 Failed junit tests | 
hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_151 Timed out junit tests | 

[jira] [Updated] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7047:
---
Attachment: YARN-7047-branch-2.004.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-7047
> URL: https://issues.apache.org/jira/browse/YARN-7047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7047.001.patch, YARN-7047.002.patch, 
> YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, 
> YARN-7047-branch-2.002.patch, YARN-7047-branch-2.003.patch, 
> YARN-7047-branch-2.004.patch
>
>







[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139476#comment-16139476
 ] 

Yeliang Cang commented on YARN-7047:


Sorry, I forgot NavBlock.java. Submitting branch-2 patch 003.

> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-7047
> URL: https://issues.apache.org/jira/browse/YARN-7047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7047.001.patch, YARN-7047.002.patch, 
> YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, 
> YARN-7047-branch-2.002.patch, YARN-7047-branch-2.003.patch
>
>







[jira] [Updated] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7047:
---
Attachment: YARN-7047-branch-2.003.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-7047
> URL: https://issues.apache.org/jira/browse/YARN-7047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7047.001.patch, YARN-7047.002.patch, 
> YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, 
> YARN-7047-branch-2.002.patch, YARN-7047-branch-2.003.patch
>
>







[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139463#comment-16139463
 ] 

Hadoop QA commented on YARN-7010:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: 
The patch generated 0 new + 36 unchanged - 14 fixed = 36 total (was 50) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7010 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883462/YARN-7010.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 86d14b024fbd 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1000a2a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17107/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn

2017-08-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139460#comment-16139460
 ] 

Vinod Kumar Vavilapalli commented on YARN-6712:
---

[~ajisakaa], is there a MapReduce umbrella JIRA for a similar migration?

> Moving logging APIs over to slf4j in hadoop-yarn
> 
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>







[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail

2017-08-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139459#comment-16139459
 ] 

Akira Ajisaka commented on YARN-7077:
-

My patch removes these properties in 
{{.hadoop-tools/hadoop-sls/src/test/resources/yarn-site.xml}}. Am I missing 
something?
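
For reference, here is a hedged reconstruction of the check behind the stack 
trace quoted below. The real check lives in 
{{ProportionalCapacityPreemptionPolicy.init}}; the types here are illustrative 
stand-ins, not the actual Hadoop source:

{code}
/**
 * Sketch with stand-in types: the preemption monitor only accepts a
 * CapacityScheduler, so initializing it while the SLS tests configure
 * SLSFairScheduler fails fast, as in the quoted stack trace.
 */
public class PreemptionPolicyInitSketch {
  interface ResourceScheduler {}
  static class CapacityScheduler implements ResourceScheduler {}
  static class SLSFairScheduler implements ResourceScheduler {}

  static void init(ResourceScheduler scheduler) {
    if (!(scheduler instanceof CapacityScheduler)) {
      // Mirrors the message in the quoted stack trace.
      throw new RuntimeException("Class " + scheduler.getClass().getName()
          + " not instance of " + CapacityScheduler.class.getName());
    }
  }

  public static void main(String[] args) {
    init(new CapacityScheduler()); // passes
    init(new SLSFairScheduler());  // throws, as in the failing tests
  }
}
{code}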

> TestAMSimulator and TestNMSimulator fail
> 
>
> Key: YARN-7077
> URL: https://issues.apache.org/jira/browse/YARN-7077
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7077.001.patch
>
>
> TestAMSimulator and TestNMSimulator are failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class 
> org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77)
> {noformat}






[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139453#comment-16139453
 ] 

Hadoop QA commented on YARN-7047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_144
 with JDK v1.8.0_144 generated 0 new + 23 unchanged - 5 fixed = 23 total (was 
28) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_151
 with JDK v1.7.0_151 generated 0 new + 25 unchanged - 5 fixed = 25 total (was 
30) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 4 new + 1658 unchanged - 14 fixed = 1662 total (was 1672) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  5s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_151 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-7047 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7090) testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is configured

2017-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139444#comment-16139444
 ] 

Hudson commented on YARN-7090:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12234/])
YARN-7090. testRMRestartAfterNodeLabelDisabled get failed when (junping_du: rev 
652dd434d96cd1a1fe25cd8c636b1859c29f462b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


> testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is 
> configured
> ---
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-7090.001.patch, YARN-7090.002.patch
>
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with below error. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}






[jira] [Updated] (YARN-7090) testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is configured

2017-08-23 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7090:
-
Summary: testRMRestartAfterNodeLabelDisabled get failed when 
CapacityScheduler is configured  (was: Only run 
testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured)

> testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is 
> configured
> ---
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
> Attachments: YARN-7090.001.patch, YARN-7090.002.patch
>
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with below error. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}






[jira] [Commented] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139407#comment-16139407
 ] 

Hadoop QA commented on YARN-7090:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883457/YARN-7090.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6ab4e790e771 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26d8c8f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17106/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17106/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17106/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
> 

[jira] [Updated] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7047:
---
Attachment: YARN-7047-branch-2.002.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-7047
> URL: https://issues.apache.org/jira/browse/YARN-7047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7047.001.patch, YARN-7047.002.patch, 
> YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, 
> YARN-7047-branch-2.002.patch
>
>







[jira] [Updated] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager

2017-08-23 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7047:
---
Attachment: (was: YARN-7047-branch-2.002.patch)

> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-7047
> URL: https://issues.apache.org/jira/browse/YARN-7047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha4
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7047.001.patch, YARN-7047.002.patch, 
> YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch
>
>







[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-23 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139397#comment-16139397
 ] 

Joep Rottinghuis commented on YARN-7019:


Agreed that we should generalize this to a score. The same could be used for 
asking for additional resources.
For example, if thousands of mappers ran, several reducers are done but tons 
more are running, and then a map output cannot be fetched (so a mapper has to 
be re-run), getting one more resource to quickly finish that mapper and 
release a bunch of resources has one score.
Earlier in the job execution, when waves of mappers are running, killing one 
would have a lower score.

Similarly, a score could be used to indicate "please do not kill this 
app-master", or that if one container gets killed all the work is lost (in the 
case of gang-scheduling needs).

A generalized score of "give me X containers and I can make Y progress" versus 
the cost of killing those containers would be useful.

I think the challenge will be scaling an app- or framework-reported score by 
the fair share or priority of the application / user / queue, so that apps 
cannot lie about these scores and rig scheduling decisions.
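
To make the idea concrete, here is a minimal sketch of such a per-container 
preemption-cost score. This is a purely hypothetical API, not anything in 
YARN today; all names are illustrative:

{code}
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the score discussed above. An app reports, per
 * container, how much work would be lost if the container were preempted;
 * the RM scales the raw score by a trust/priority weight so apps cannot
 * inflate scores to rig scheduling decisions.
 */
public class PreemptionCostSketch {
  private final Map<String, Double> reportedCost = new HashMap<>();

  /** App-reported cost of killing this container (e.g. seconds of lost work). */
  public void reportCost(String containerId, double cost) {
    reportedCost.put(containerId, cost);
  }

  /** RM-side effective cost: raw score damped by a trust/priority factor. */
  public double effectiveCost(String containerId, double appPriorityWeight) {
    double raw = reportedCost.getOrDefault(containerId, 0.0);
    return raw * Math.min(appPriorityWeight, 1.0); // cap so reports can't dominate
  }

  public static void main(String[] args) {
    PreemptionCostSketch s = new PreemptionCostSketch();
    s.reportCost("container_1", 600.0); // reused container, lots of work lost
    s.reportCost("container_2", 30.0);  // fresh container, cheap to preempt
    System.out.println(s.effectiveCost("container_1", 0.5)); // 300.0
    System.out.println(s.effectiveCost("container_2", 0.5)); // 15.0
  }
}
{code}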

> Ability for applications to notify YARN about container reuse
> -
>
> Key: YARN-7019
> URL: https://issues.apache.org/jira/browse/YARN-7019
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>
> During preemption calculations YARN can try to reduce the amount of work lost 
> by considering how long a container has been running.  However when an 
> application framework like Tez reuses a container across multiple tasks it 
> changes the work lost calculation since the container has essentially 
> checkpointed between task assignments.  It would be nice if applications 
> could inform YARN when a container has been reused/checkpointed and therefore 
> is a better candidate for preemption wrt. lost work than other, younger 
> containers.






[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139389#comment-16139389
 ] 

Giovanni Matteo Fumarola commented on YARN-7010:


In v2 I added metrics and javadocs, fixed the failing tests related to the 
patch, and addressed all the checkstyle warnings.

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch
>
>







[jira] [Updated] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7010:
---
Attachment: YARN-7010.v2.patch

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch
>
>







[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139376#comment-16139376
 ] 

Junping Du commented on YARN-6876:
--

006 patch LGTM. The findbugs issue and the unit test failure are not related 
to the patch.
+1. Will commit it tomorrow if there are no comments from others.

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch, 
> YARN-6876-trunk.006.patch
>
>
> Currently, the TFile log writer is used to aggregate logs in YARN. We need to 
> add an abstract layer and pick the correct log writer based on the 
> configuration.
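
A hedged sketch of what that abstraction might look like. The class names and 
the configuration key below are hypothetical, not the ones introduced by the 
patch; the point is only the shape: a concrete writer hidden behind an 
abstract type and chosen from configuration instead of hard-coded:

{code}
/**
 * Illustrative sketch only -- names are hypothetical stand-ins, not the
 * patch's actual classes or config keys.
 */
public abstract class LogAggregationWriterSketch {
  public abstract void append(String containerId, byte[] logData);
  public abstract void close();

  /** Pick a writer class from configuration, defaulting to the TFile one. */
  public static LogAggregationWriterSketch create(java.util.Properties conf)
      throws ReflectiveOperationException {
    String clazz = conf.getProperty(
        "yarn.log-aggregation.writer.class",   // hypothetical config key
        TFileWriterSketch.class.getName());
    return (LogAggregationWriterSketch)
        Class.forName(clazz).getDeclaredConstructor().newInstance();
  }

  /** Stand-in for the existing TFile-based writer. */
  public static class TFileWriterSketch extends LogAggregationWriterSketch {
    @Override public void append(String containerId, byte[] logData) {
      System.out.println("TFile append for " + containerId);
    }
    @Override public void close() {}
  }

  public static void main(String[] args) throws Exception {
    LogAggregationWriterSketch w = create(new java.util.Properties());
    w.append("container_1", new byte[0]); // falls back to the TFile writer
    w.close();
  }
}
{code}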






[jira] [Commented] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139374#comment-16139374
 ] 

Junping Du commented on YARN-7090:
--

002 patch LGTM. +1 pending the Jenkins report.

> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
> Attachments: YARN-7090.001.patch, YARN-7090.002.patch
>
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with below error. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}






[jira] [Updated] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7090:
-
Attachment: YARN-7090.002.patch

Fixed another potential issue as well (002).

> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
> Attachments: YARN-7090.001.patch, YARN-7090.002.patch
>
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with below error. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}






[jira] [Commented] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139330#comment-16139330
 ] 

Hadoop QA commented on YARN-7090:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883442/YARN-7090.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d5c088979dc0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17105/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17105/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17105/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
>  

[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139317#comment-16139317
 ] 

Hadoop QA commented on YARN-7010:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 12 new 
+ 37 unchanged - 13 fixed = 49 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 13s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.router.webapp.TestFederationInterceptorREST |
|   | hadoop.yarn.server.router.webapp.TestFederationInterceptorRESTRetry |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7010 |
| JIRA 

[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139307#comment-16139307
 ] 

Hadoop QA commented on YARN-6876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
19s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 79 
unchanged - 7 fixed = 79 total (was 86) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 525 unchanged - 10 fixed = 544 total (was 535) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 58s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
27s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883426/YARN-6876-trunk.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d50af8daad4e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 

[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139291#comment-16139291
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 51 new + 117 unchanged - 1 fixed = 168 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 18s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883437/YARN-6550.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 67a958da30ff 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17104/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17104/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17104/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  

[jira] [Assigned] (YARN-4090) Make Collections.sort() more efficient by caching resource usage

2017-08-23 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-4090:
--

Assignee: Yufei Gu  (was: zhangshilong)

> Make Collections.sort() more efficient by caching resource usage
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Xianyin Xin
>Assignee: Yufei Gu
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090.005.patch, YARN-4090.006.patch, YARN-4090.007.patch, 
> YARN-4090-preview.patch, YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4090) Make Collections.sort() more efficient by caching resource usage

2017-08-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139276#comment-16139276
 ] 

Yufei Gu commented on YARN-4090:


Thanks [~zsl2007].

> Make Collections.sort() more efficient by caching resource usage
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Xianyin Xin
>Assignee: zhangshilong
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090.005.patch, YARN-4090.006.patch, YARN-4090.007.patch, 
> YARN-4090-preview.patch, YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail

2017-08-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139268#comment-16139268
 ] 

Yufei Gu commented on YARN-7077:


Thanks [~ajisakaa] for filing this. It makes more sense to me to remove or 
comment out these properties in the yarn-site.xml of SLS.
{code}
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
{code}
{{yarn.resourcemanager.scheduler.monitor.enable}} is false by default, and is not 
necessarily used by FS.
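
For the test side, the same effect can be expressed programmatically - a minimal 
sketch, assuming the tests build their {{Configuration}} directly (the class name 
here is illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class DisableSchedulingMonitorSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Keep the scheduling monitor off so that no CapacityScheduler-only
    // preemption policy is instantiated when SLS runs the FairScheduler.
    conf.setBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS, false); // false is the default
    System.out.println("monitor enabled: "
        + conf.getBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS, false));
  }
}
{code}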

> TestAMSimulator and TestNMSimulator fail
> 
>
> Key: YARN-7077
> URL: https://issues.apache.org/jira/browse/YARN-7077
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7077.001.patch
>
>
> TestAMSimulator and TestNMSimulator are failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class 
> org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7090:
-
Attachment: YARN-7090.001.patch

Attached ver.1 patch

> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
> Attachments: YARN-7090.001.patch
>
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with the error below. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7090) Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is configured

2017-08-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7090:
-
Summary: Only run testRMRestartAfterNodeLabelDisabled when 
CapacityScheduler is configured  (was: testRMRestartAfterNodeLabelDisabled[1] 
UT Fails)

> Only run testRMRestartAfterNodeLabelDisabled when CapacityScheduler is 
> configured
> -
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with the error below. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7090) testRMRestartAfterNodeLabelDisabled[1] UT Fails

2017-08-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7090:


Assignee: Wangda Tan

> testRMRestartAfterNodeLabelDisabled[1] UT Fails
> ---
>
> Key: YARN-7090
> URL: https://issues.apache.org/jira/browse/YARN-7090
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Wangda Tan
>
> testRMRestartAfterNodeLabelDisabled[1] UT fails with the error below. 
> {code}
> Error Message
> expected:<[x]> but was:<[]>
> Stacktrace
> org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7086) Release all containers asynchronously

2017-08-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139241#comment-16139241
 ] 

Wangda Tan commented on YARN-7086:
--

The only potential issue I can see is: prior to this change, the AM can assume 
containers are released by the RM once allocate() returns. In the new world, the AM 
has to check the completed-container list in the AllocateResponse to make sure 
containers are released. It may not be a big issue though, since I don't think we 
guarantee this in the API description.
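
To make that concrete, a minimal AM-side sketch (hedged; {{pendingReleases}}, the 
class name, and the progress value are illustrative, not part of any existing API 
contract):
{code}
import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class ReleaseConfirmSketch {
  // With asynchronous release, allocate() returning no longer implies the
  // released containers are gone; the AM confirms via the completed list.
  static void confirmReleases(AMRMClient<AMRMClient.ContainerRequest> client,
      Set<ContainerId> pendingReleases) throws YarnException, IOException {
    AllocateResponse response = client.allocate(0.5f);
    for (ContainerStatus status : response.getCompletedContainersStatuses()) {
      pendingReleases.remove(status.getContainerId());
    }
  }
}
{code}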

Beyond that, I like Jason's idea as well, and will share one fact: when I was doing 
async scheduling tests in YARN-5139, I found the resource commit phase (acquires the 
write lock, checks and updates scheduler internal state such as resource usages, 
etc.) takes less than 6% of the time; most of the time is consumed by 
{{CapacityScheduler#allocateContainersToNode}}. I suspect container release 
takes a similar amount of time (around 6%).

> Release all containers asynchronously
> 
>
> Key: YARN-7086
> URL: https://issues.apache.org/jira/browse/YARN-7086
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> We have noticed in production two situations that can cause deadlocks and 
> cause scheduling of new containers to come to a halt, especially with regard 
> to applications that have a lot of live containers:
> # When these applications release their containers in bulk.
> # When these applications terminate abruptly due to some failure, the 
> scheduler releases all of their live containers in a loop.
> To handle the issues mentioned above, we have a patch in production to make 
> sure ALL container releases happen asynchronously - and it has served us well.
> Opening this JIRA to gather feedback on if this is a good idea generally (cc 
> [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd])
> BTW, in YARN-6251 we already have an asyncReleaseContainer() in the 
> AbstractYarnScheduler and a corresponding scheduler event, which is currently 
> used specifically for the container-update code paths (where the scheduler 
> releases the temporary containers it creates for the update).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7090) testRMRestartAfterNodeLabelDisabled[1] UT Fails

2017-08-23 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7090:


 Summary: testRMRestartAfterNodeLabelDisabled[1] UT Fails
 Key: YARN-7090
 URL: https://issues.apache.org/jira/browse/YARN-7090
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Yesha Vora


testRMRestartAfterNodeLabelDisabled[1] UT fails with the error below. 
{code}
Error Message

expected:<[x]> but was:<[]>
Stacktrace

org.junit.ComparisonFailure: expected:<[x]> but was:<[]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139233#comment-16139233
 ] 

Eric Yang commented on YARN-7066:
-

The findbugs warning is a false positive for mounting /sys/fs/cgroup. This patch 
did not introduce the findbugs issue.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes the environment, docker image, and configuration template for 
> launching docker containers in YARN. It would be nice to have the ability to 
> specify the volumes to mount. This can be used in combination with 
> AMBARI-21748 to mount HDFS as data directories into docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6550) Capture launch_container.sh logs

2017-08-23 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6550:
---
Attachment: YARN-6550.008.patch

Fixed Container Launch UT failures; the tests now read from prelaunch.stderr and 
prelaunch.stdout instead of the shell's stderr and stdout, respectively.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until {{exec}} is 
> called. We need to capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7073) Rest API site documentation

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139224#comment-16139224
 ] 

Hadoop QA commented on YARN-7073:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
57s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883424/YARN-7073-yarn-native-services.002.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux cd8df1d887d0 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / e00bb2b |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17102/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch, 
> YARN-7073-yarn-native-services.002.patch
>
>
> Commit site documentation for REST API service, generated from the swagger 
> definition as a MD file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7089) Mark the log-aggregation-controller APIs as public

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139218#comment-16139218
 ] 

Junping Du commented on YARN-7089:
--

This is from: 
https://issues.apache.org/jira/browse/YARN-6876?focusedCommentId=16139157&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16139157

> Mark the log-aggregation-controller APIs as public
> --
>
> Key: YARN-7089
> URL: https://issues.apache.org/jira/browse/YARN-7089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139206#comment-16139206
 ] 

Hadoop QA commented on YARN-7066:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 5 new + 18 unchanged - 0 fixed = 23 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7066 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883417/YARN-7066.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6603d14af679 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17100/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17100/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17100/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17100/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Commented] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139197#comment-16139197
 ] 

Hadoop QA commented on YARN-7037:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
44s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883301/YARN-7037.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9d4e9588a6f4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17098/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 

[jira] [Assigned] (YARN-7087) NM failed to perform log aggregation due to absent container

2017-08-23 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-7087:


Assignee: Jason Lowe

> NM failed to perform log aggregation due to absent container
> 
>
> Key: YARN-7087
> URL: https://issues.apache.org/jira/browse/YARN-7087
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
>
> Saw a case where the NM failed to aggregate the logs for a container because 
> it claimed it was absent:
> {noformat}
> 2017-08-23 18:35:38,283 [AsyncDispatcher event handler] WARN 
> logaggregation.LogAggregationService: Log aggregation cannot be started for 
> container_e07_1503326514161_502342_01_01, as its an absent container
> {noformat}
> Containers should not be allowed to disappear if they're not done being fully 
> processed by the NM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139177#comment-16139177
 ] 

Giovanni Matteo Fumarola edited comment on YARN-7010 at 8/23/17 9:39 PM:
-

Thanks [~curino] for the feedback. Pushed v1.
2. Fixed.
3. I use CachedThreadPool. The size is not fixed: it creates new threads as 
needed, but reuses previously constructed threads when they are available, with an 
LRU-style cache. This website explains the difference between them: 
[Executors.newCachedThreadPool() versus Executors.newFixedThreadPool() | 
https://stackoverflow.com/questions/949355/executors-newcachedthreadpool-versus-executors-newfixedthreadpool]
4. Fixed.

For 1, the fix is not trivial. My current solution has the following behavior:
If one subcluster is down, I merge the reports I have by discarding the UAMs 
without an AM.
To discriminate the simple UAMs from the Federation-created ones, I use the size 
and name of the applications (see the sketch below).

If the size is greater than 1, it means the AM did spread over more than 1 
subcluster. In this case the UAM is related to Federation -> Discarded.
If the size is 1 - it could be a simple UAM or a UAM depending on the AMRMProxy 
- and the name starts with the same convention we use to create UAMs inside the 
AMRMProxy, the UAM is related to Federation -> Discarded.
Otherwise it is kept.
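
Roughly, the discrimination rule could be sketched as follows (a hedged 
illustration only; the method and the {{uamNamePrefix}} parameter are 
hypothetical, not the actual patch code):
{code}
import java.util.List;
import org.apache.hadoop.yarn.api.records.ApplicationReport;

public class UamFilterSketch {
  // Returns true when a UAM report is federation-internal and should be
  // discarded from the merged getApps() result.
  static boolean discardAsFederationUam(List<ApplicationReport> reportsForApp,
      String uamNamePrefix) {
    if (reportsForApp.size() > 1) {
      return true; // the AM spread over more than one subcluster
    }
    // Single report: discard only if it follows the AMRMProxy UAM naming convention.
    return reportsForApp.get(0).getName().startsWith(uamNamePrefix);
  }
}
{code}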


was (Author: giovanni.fumarola):
Thanks [~curino] for the feedback. Pushed v1.
2. Fixed.
3. I use CachedThreadPool. The size is not fixed: it creates new threads as 
needed, but reuses previously constructed threads when they are available, with an 
LRU-style cache. This website explains the difference between them: 
[Executors.newCachedThreadPool() versus Executors.newFixedThreadPool() | 
https://stackoverflow.com/questions/949355/executors-newcachedthreadpool-versus-executors-newfixedthreadpool]
4. Fixed.

For 1, the fix is not trivial. My current solution has the following behavior:
If one subcluster is down, I merge the reports I have by discarding the UAMs 
without an AM.
To discriminate the UAMs from the Federation-created ones, I use the size and 
name of the applications.

If the size is greater than 1, it means the AM spread over more than 1 subcluster 
-> Discarded.
If the size is 1 - it could be a simple UAM or a UAM depending on the AMRMProxy - 
and the name starts with the same convention we use to create UAMs inside the 
AMRMProxy -> Discarded. 

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139177#comment-16139177
 ] 

Giovanni Matteo Fumarola commented on YARN-7010:


Thanks [~curino] for the feedback. Pushed v1.
2. Fixed.
3. I use CachedThreadPool. The size is not fixed: it creates new threads as 
needed, but reuses previously constructed threads when they are available, with an 
LRU-style cache. This website explains the difference between them (see also the 
sketch below): 
[Executors.newCachedThreadPool() versus Executors.newFixedThreadPool() | 
https://stackoverflow.com/questions/949355/executors-newcachedthreadpool-versus-executors-newfixedthreadpool]
4. Fixed.

For 1, the fix is not trivial. My current solution has the following behavior:
If one subcluster is down, I merge the reports I have by discarding the UAMs 
without an AM.
To discriminate the UAMs from the Federation-created ones, I use the size and 
name of the applications.

If the size is greater than 1, it means the AM spread over more than 1 subcluster 
-> Discarded.
If the size is 1 - it could be a simple UAM or a UAM depending on the AMRMProxy - 
and the name starts with the same convention we use to create UAMs inside the 
AMRMProxy -> Discarded. 
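
For reference, the trade-off mentioned in item 3, as a minimal sketch (the pool 
size of 16 is arbitrary):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolChoiceSketch {
  public static void main(String[] args) {
    // Cached pool: grows on demand, reuses idle threads, and reclaims
    // them after 60 seconds of inactivity.
    ExecutorService cached = Executors.newCachedThreadPool();
    // Fixed pool: concurrency is capped at the preset size.
    ExecutorService fixed = Executors.newFixedThreadPool(16);
    cached.shutdown();
    fixed.shutdown();
  }
}
{code}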

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139165#comment-16139165
 ] 

Eric Yang commented on YARN-7066:
-

[~miklos.szeg...@cloudera.com] This is designed to work with YARN-4266.  The 
user's UID:GID is enforced on the mounted file system.  The unix process of the 
docker container would be owned by the UID:GID of the launching user.  Hence, the 
user doesn't get additional privileges through mounting.  If someone tries to mount 
the same mount point twice, such as the /etc/sudoers file, Docker will detect the 
duplicated entries and abort execution.  Therefore, there is no loophole to 
fake the /etc/sudoers file in the container to gain extra privileges.  As long as the 
white-listed mount points are secured, and no privilege escalation is possible in the 
container, this feature does not introduce a security hole.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes the environment, docker image, and configuration template for 
> launching docker containers in YARN. It would be nice to have the ability to 
> specify the volumes to mount. This can be used in combination with 
> AMBARI-21748 to mount HDFS as data directories into docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6876) Create an abstract log writer for extendability

2017-08-23 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6876:

Attachment: YARN-6876-trunk.006.patch

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch, 
> YARN-6876-trunk.006.patch
>
>
> Currently, the TFile log writer is used to aggregate logs in YARN. We need to add 
> an abstraction layer, and pick the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-23 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139157#comment-16139157
 ] 

Xuan Gong commented on YARN-6876:
-

Thanks for the review, [~djp].

bq. LogAggregationFileController.java is supposed to be a public interface so 
third parties can implement their own log formats. We should add a public, unstable 
annotation to the class. If we are concerned it is too early to make it public, then 
we should have a separate jira to track this effort once it becomes more 
stable.

Yes, it is too early to mark the APIs as public. For now, I prefer to mark the 
APIs as private, and we can change them back to Public when the feature is more 
stable. Created https://issues.apache.org/jira/browse/YARN-7089 to track this.

Other comments make sense to me. Uploaded a new patch to address them.
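
For concreteness, the annotations under discussion would look something like this 
(a sketch only; the class body is elided and the name is illustrative):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Private/Unstable for now; to be flipped to Public once the
// file-controller API stabilizes (tracked in YARN-7089).
@InterfaceAudience.Private
@InterfaceStability.Unstable
public abstract class LogAggregationFileControllerSketch {
  // The real LogAggregationFileController's write/read lifecycle hooks would live here.
}
{code}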


> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch
>
>
> Currently, the TFile log writer is used to aggregate logs in YARN. We need to add 
> an abstraction layer, and pick the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7089) Mark the log-aggregation-controller APIs as public

2017-08-23 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-7089:
---

 Summary: Mark the log-aggregation-controller APIs as public
 Key: YARN-7089
 URL: https://issues.apache.org/jira/browse/YARN-7089
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139152#comment-16139152
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 49 new + 117 unchanged - 1 fixed = 166 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 32s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883415/YARN-6550.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7077f11fedef 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17099/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17099/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17099/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17099/testReport/ |
| 

[jira] [Updated] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-23 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-7010:
---
Attachment: YARN-7010.v1.patch

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7073) Rest API site documentation

2017-08-23 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139144#comment-16139144
 ] 

Gour Saha commented on YARN-7073:
-

Uploading 002 with whitespace fix.

> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch, 
> YARN-7073-yarn-native-services.002.patch
>
>
> Commit site documentation for the REST API service, generated from the 
> swagger definition as an MD file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7073) Rest API site documentation

2017-08-23 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7073:

Attachment: YARN-7073-yarn-native-services.002.patch

> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch, 
> YARN-7073-yarn-native-services.002.patch
>
>
> Commit site documentation for the REST API service, generated from the 
> swagger definition as an MD file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139142#comment-16139142
 ] 

Junping Du commented on YARN-5464:
--

Nope. I haven't found any slot yet. Maybe we should go for plan B, as you 
mentioned above, for the 3.0-beta timeline.

> Server-Side NM Graceful Decommissioning with RM HA
> --
>
> Key: YARN-5464
> URL: https://issues.apache.org/jira/browse/YARN-5464
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5464.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139138#comment-16139138
 ] 

Miklos Szegedi commented on YARN-7066:
--

[~eyang], isn't this a security issue? What protects against mounting any 
directory from the client on the node and modifying it as root?

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have ability to 
> specify the volumes to mount.  This can be used in combination to 
> AMBARI-21748 to mount HDFS as data directories to docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow users to enter containers as UID:GID pair instead of by username

2017-08-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139136#comment-16139136
 ] 

Eric Yang commented on YARN-4266:
-

[~templedf] GroupMappingServiceProvider contains only the string representation 
of the group.  It does not contain the GID.  A Docker container can work 
transparently using the GID without configuring the container to look up an 
LDAP source.  If the string representation of the group is passed to the -u 
parameter of the docker container, the container has no way to translate that 
string back to a GID.  If containers looked up the LDAP data source on their 
own, it would add bloat and increase LDAP server load, which could be 
self-defeating for container isolation.  This feature requires the node manager 
host to connect to LDAP via sssd and PAM to be fully working, and to cache 
user/group lookups at the host level, but that will scale better than a 
NodeManager > Namenode > GroupMappingServiceProvider > LDAP lookup.
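
As a small illustration of the mechanism (a sketch only; the ids and image 
below are placeholders, and the real container runtime builds the command 
differently), passing numeric ids to docker's {{--user}} flag means nothing 
inside the image ever has to resolve a name:
{code}
// Sketch: launching with a numeric UID:GID pair so the container never needs
// to resolve user or group names.  Ids and image are placeholders.
import java.util.Arrays;
import java.util.List;

public class DockerUserArg {
  public static void main(String[] args) throws Exception {
    int uid = 1000, gid = 1000;  // would come from the NM host's user database
    List<String> cmd = Arrays.asList(
        "docker", "run", "--rm",
        "--user", uid + ":" + gid,  // no LDAP/name resolution inside the image
        "busybox", "id");
    new ProcessBuilder(cmd).inheritIO().start().waitFor();
  }
}
{code}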

> Allow users to enter containers as UID:GID pair instead of by username
> --
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266.002.patch, YARN-4266.003.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers . In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through : log aggregation, artifact 
> deletion etc.,



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139127#comment-16139127
 ] 

Hadoop QA commented on YARN-7083:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883413/YARN-7083.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d7a65b2fdf7e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e6463d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17097/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17097/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17097/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| 

[jira] [Commented] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA

2017-08-23 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139123#comment-16139123
 ] 

Robert Kanter commented on YARN-5464:
-

[~djp], have you been able to make any progress on this?

> Server-Side NM Graceful Decommissioning with RM HA
> --
>
> Key: YARN-5464
> URL: https://issues.apache.org/jira/browse/YARN-5464
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5464.wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-23 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7037:
-
Target Version/s: 2.8.2

> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement from YARN-6259.
> It's useful to read container logs more efficiently. With the zero-copy 
> approach, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to (disk --> read buffer --> socket buffer).
> In my local test, time cost of copying 256MB file with zero-copy can be 
> reduced from 12 seconds to 2.5 seconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139117#comment-16139117
 ] 

Junping Du commented on YARN-7037:
--

Thanks [~Tao Yang] for delivering the patch. The approach (bypassing the 
unnecessary buffer) makes sense to me. 
For the trunk patch, instead of adding a new method - 
{{outputContainerLogThroughZeroCopy}} - maybe we should replace the original 
method - {{outputContainerLog}} - directly, so that other callers like 
outputAggregatedContainerLog() can also benefit from the performance 
improvement here.
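
For readers following along, here is a generic NIO sketch of the two pipelines 
(illustrative only, not the actual NMWebServices code): 
{{FileChannel#transferTo}} lets the kernel move the bytes directly, skipping 
the user-space buffer:
{code}
// Generic zero-copy sketch using java.nio; not the actual patch.
import java.io.*;
import java.nio.channels.*;

public class ZeroCopyDemo {
  // Classic copy: disk --> read buffer --> user buffer --> socket buffer.
  static void bufferedCopy(File src, OutputStream out) throws IOException {
    try (InputStream in = new FileInputStream(src)) {
      byte[] buf = new byte[65536];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    }
  }

  // Zero-copy: the kernel moves the bytes, skipping the user-space buffer.
  static void zeroCopy(File src, WritableByteChannel out) throws IOException {
    try (FileChannel in = new FileInputStream(src).getChannel()) {
      long pos = 0, size = in.size();
      while (pos < size) {
        pos += in.transferTo(pos, size - pos, out);  // may move less than asked
      }
    }
  }
}
{code}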

> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement from YARN-6259.
> It's useful to read container logs more efficiently. With the zero-copy 
> approach, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to (disk --> read buffer --> socket buffer).
> In my local test, time cost of copying 256MB file with zero-copy can be 
> reduced from 12 seconds to 2.5 seconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6640) AM heartbeat stuck when responseId overflows MAX_INT

2017-08-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139055#comment-16139055
 ] 

Botong Huang commented on YARN-6640:


The unit test failure is unrelated and is being tracked under YARN-7044. 

>  AM heartbeat stuck when responseId overflows MAX_INT
> -
>
> Key: YARN-6640
> URL: https://issues.apache.org/jira/browse/YARN-6640
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Blocker
> Attachments: YARN-6640.v1.patch, YARN-6640.v2.patch
>
>
> The current code in {{ApplicationMasterService}}:
> {code}
> if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {
>   /* old heartbeat */
>   return lastResponse;
> } else if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
>   throw ...
> }
> // process the heartbeat...
> {code}
> When a heartbeat comes in, in the usual case we expect 
> request.getResponseId() == lastResponse.getResponseId(). The "if" handles the 
> duplicate heartbeat that is one step old, and the "else if" throws and 
> complains for heartbeats two or more steps old; otherwise we accept the new 
> heartbeat and process it.
> So the bug is: when lastResponse.getResponseId() == MAX_INT, the newest 
> heartbeat comes in with responseId == MAX_INT. However, responseId + 1 wraps 
> to MIN_INT, so we fall into the "else if" case and the RM throws. Then we are 
> stuck here…
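> A sketch of an overflow-safe check (illustrative only, not the committed fix) 
> compares the difference between the two ids, which wraps together with them:
> {code}
> // Illustrative overflow-safe responseId comparison; not the actual patch.
> // The subtraction wraps along with the ids, so lastResponse == MAX_INT and
> // request == MAX_INT still yields distance == 0 (a fresh heartbeat).
> int distance = lastResponse.getResponseId() - request.getResponseId();
> if (distance == 1) {
>   return lastResponse;   // duplicate heartbeat, one step old
> } else if (distance > 1) {
>   throw ...              // two or more steps old, as before
> }
> // distance == 0: fresh heartbeat, process it
> {code}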



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7066:

Attachment: YARN-7066.001.patch

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7066.001.patch
>
>
> Yarnfile describes environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have ability to 
> specify the volumes to mount.  This can be used in combination to 
> AMBARI-21748 to mount HDFS as data directories to docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6550) Capture launch_container.sh logs

2017-08-23 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6550:
---
Attachment: YARN-6550.007.patch

The previous patch had portability issues on Ubuntu. Attaching an updated 
patch which, instead of retaining stderr and stdout with a tee for the 
launch_container script, redirects them to a file. It then reads from the 
prelaunch.err file and updates the container diagnostics with the error(s) 
that occurred.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-23 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-7083:


Assignee: Jason Lowe

> Log aggregation deletes/renames while file is open
> --
>
> Key: YARN-7083
> URL: https://issues.apache.org/jira/browse/YARN-7083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: Daryn Sharp
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7083.001.patch
>
>
> YARN-6288 changes the log aggregation writer to be an AutoCloseable.  
> Unfortunately the try-with-resources block for the writer will either rename 
> or delete the log while open.
> Assuming the NM's behavior is correct, deleting open files only results in 
> ominous WARNs in the nodemanager log and increases the rate of logging in the 
> NN when the implicit try-with-resources close fails.  These red herrings 
> complicate debugging efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7083) Log aggregation deletes/renames while file is open

2017-08-23 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7083:
-
Attachment: YARN-7083.001.patch

Here's a patch that should fix the issue, since it closes the 
try-with-resources block for the writer at the same place we used to call 
close() explicitly before YARN-6288.  It looks like a lot of change, but it's 
really just adding a new outer try block and indenting for that block.  The 
same diff, ignoring whitespace-only changes, looks like this:
{noformat}
$ git diff -b trunk
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
index 1601c3f..af655fc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
@@ -293,6 +293,8 @@ private void uploadLogsForContainers(boolean appFinished) {
 logAggregationTimes++;
 String diagnosticMessage = "";
 boolean logAggregationSucceedInThisCycle = true;
+try {
+  boolean uploadedLogsInThisCycle = false;
   try (LogWriter writer = createLogWriter()) {
 try {
   writer.initialize(this.conf, this.remoteNodeTmpLogFileForApp,
@@ -308,7 +310,6 @@ private void uploadLogsForContainers(boolean appFinished) {
   return;
 }
 
-  boolean uploadedLogsInThisCycle = false;
 for (ContainerId container : pendingContainerInThisCycle) {
   ContainerLogAggregator aggregator = null;
   if (containerLogAggregators.containsKey(container)) {
@@ -343,6 +344,7 @@ private void uploadLogsForContainers(boolean appFinished) {
   cleanOldLogs();
   cleanupOldLogTimes++;
 }
+  }
 
   long currentTime = System.currentTimeMillis();
   final Path renamedPath = getRenamedPath(currentTime);
{noformat}

No unit test for it yet.  I'll try to get some time to see if that's a 
straightforward thing to add.
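
To make the failure mode concrete, here is a schematic sketch using plain 
java.nio (not the actual AppLogAggregatorImpl code):
{code}
// Schematic of the bug and the fix using plain java.nio;
// not the actual AppLogAggregatorImpl code.
import java.io.IOException;
import java.io.Writer;
import java.nio.file.*;

public class CloseBeforeRename {
  public static void main(String[] args) throws IOException {
    Path tmp = Paths.get("app.log.tmp");
    Path dst = Paths.get("app.log");

    // Broken shape: the rename runs while the writer is still open, which is
    // what produced the WARNs described in this issue.
    //   try (Writer w = Files.newBufferedWriter(tmp)) {
    //     w.write("logs");
    //     Files.move(tmp, dst);   // file still open here
    //   }

    // Fixed shape: the try-with-resources block ends (closing the file)
    // before the rename runs, matching the pre-YARN-6288 ordering.
    try (Writer w = Files.newBufferedWriter(tmp)) {
      w.write("logs");
    }
    Files.move(tmp, dst, StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}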

> Log aggregation deletes/renames while file is open
> --
>
> Key: YARN-7083
> URL: https://issues.apache.org/jira/browse/YARN-7083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: Daryn Sharp
>Priority: Critical
> Attachments: YARN-7083.001.patch
>
>
> YARN-6288 changes the log aggregation writer to be an autoclosable.  
> Unfortunately the try-with-resources block for the writer will either rename 
> or delete the log while open.
> Assuming the NM's behavior is correct, deleting open files only results in 
> ominous WARNs in the nodemanager log and increases the rate of logging in the 
> NN when the implicit try-with-resource close fails.  These red herrings 
> complicate debugging efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-23 Thread Abdullah Yousufi (JIRA)
Abdullah Yousufi created YARN-7088:
--

 Summary: Fix application start time and add submit time to UIs
 Key: YARN-7088
 URL: https://issues.apache.org/jira/browse/YARN-7088
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha4
Reporter: Abdullah Yousufi
Assignee: Abdullah Yousufi


Currently, the start time in the old and new UI actually shows the app 
submission time. There should be two different fields, one for the app's 
submission and one for its start, as well as the elapsed pending time 
between the two.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-23 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7037:
-
Affects Version/s: (was: 2.8.3)
   2.8.0

> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement from YARN-6259.
> It's useful to read container logs more efficiently. With the zero-copy 
> approach, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to (disk --> read buffer --> socket buffer).
> In my local test, time cost of copying 256MB file with zero-copy can be 
> reduced from 12 seconds to 2.5 seconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139039#comment-16139039
 ] 

Junping Du commented on YARN-6876:
--

Thanks [~xgong] for updating the patch. The latest patch looks much better!
A few minor comments:
In yarn-default.xml, it would be better to change 
"yarn.log-aggregation.file-controllers" to file-formats for easier 
understanding. But "yarn.log-aggregation.file-controller.TFile.class" and the 
other implementation class names (with controller) should be fine.

LogAggregationFileController.java is supposed to be a public interface so 
that third parties can implement their own log formats. We should add a public, 
unstable annotation to the class. If we are concerned that it is too early to 
make it public, then we should have a separate jira to track this effort once 
it becomes more stable.

Everything else looks fine to me. 

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, 
> YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch
>
>
> Currently, the TFile log writer is used to aggregate logs in YARN. We need to 
> add an abstraction layer and pick the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7074) Fix NM state store update comment

2017-08-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139038#comment-16139038
 ] 

Botong Huang commented on YARN-7074:


Hi [~kasha], can you help me commit this one? Thanks in advance! 

> Fix NM state store update comment
> -
>
> Key: YARN-7074
> URL: https://issues.apache.org/jira/browse/YARN-7074
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7074.v1.patch
>
>
> A follow up of YARN-6798 to fix a typo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7073) Rest API site documentation

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139032#comment-16139032
 ] 

Hadoop QA commented on YARN-7073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883369/YARN-7073-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 13e638b2b380 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / e00bb2b |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/17096/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17096/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch
>
>
> Commit site documentation for the REST API service, generated from the 
> swagger definition as an MD file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7087) NM failed to perform log aggregation due to absent container

2017-08-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139030#comment-16139030
 ] 

Jason Lowe commented on YARN-7087:
--

Looks like this is related to YARN-221 and YARN-4152.  The latter fixed the NPE 
issue introduced by the first, but unfortunately simply ignoring container IDs 
that are absent isn't a real fix.  The end result when that scenario occurs is 
that we will always skip aggregating that container's logs, and that may or may 
not be the desired effect.  In this case it was not.

I believe the scenario occurs because LogAggregationService has not seen the 
event requesting log aggregation before the NM heartbeats to the RM and then 
decides to remove the container because the app has completed.  The aggregation 
service appears to need the container only to get the container type, so maybe 
we can simply store the container type in the log aggregation event so it 
doesn't need to look up the container to process the event.
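
A rough sketch of that idea (the event class shape below is hypothetical, not 
the actual fix):
{code}
// Hypothetical sketch: carry the container type on the event itself so the
// handler never has to look up a container that may already be removed.
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.server.api.ContainerType;

class ContainerFinishedForLogAggregationEvent {
  private final ContainerId containerId;
  private final ContainerType containerType;  // captured at event creation time

  ContainerFinishedForLogAggregationEvent(ContainerId containerId,
      ContainerType containerType) {
    this.containerId = containerId;
    this.containerType = containerType;
  }

  ContainerId getContainerId() {
    return containerId;
  }

  ContainerType getContainerType() {
    return containerType;  // no context.getContainers() lookup needed
  }
}
{code}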

> NM failed to perform log aggregation due to absent container
> 
>
> Key: YARN-7087
> URL: https://issues.apache.org/jira/browse/YARN-7087
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Priority: Critical
>
> Saw a case where the NM failed to aggregate the logs for a container because 
> it claimed it was absent:
> {noformat}
> 2017-08-23 18:35:38,283 [AsyncDispatcher event handler] WARN 
> logaggregation.LogAggregationService: Log aggregation cannot be started for 
> container_e07_1503326514161_502342_01_01, as its an absent container
> {noformat}
> Containers should not be allowed to disappear if they're not done being fully 
> processed by the NM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139020#comment-16139020
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
10s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 357 unchanged - 0 fixed = 358 total (was 357) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
36s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed 

[jira] [Created] (YARN-7087) NM failed to perform log aggregation due to absent container

2017-08-23 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7087:


 Summary: NM failed to perform log aggregation due to absent 
container
 Key: YARN-7087
 URL: https://issues.apache.org/jira/browse/YARN-7087
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation
Affects Versions: 2.8.1
Reporter: Jason Lowe
Priority: Critical


Saw a case where the NM failed to aggregate the logs for a container because it 
claimed it was absent:
{noformat}
2017-08-23 18:35:38,283 [AsyncDispatcher event handler] WARN 
logaggregation.LogAggregationService: Log aggregation cannot be started for 
container_e07_1503326514161_502342_01_01, as its an absent container
{noformat}

Containers should not be allowed to disappear if they're not done being fully 
processed by the NM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7086) Release all containers asynchronously

2017-08-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138947#comment-16138947
 ] 

Karthik Kambatla commented on YARN-7086:


I like Jason's idea. 

> Release all containers asynchronously
> 
>
> Key: YARN-7086
> URL: https://issues.apache.org/jira/browse/YARN-7086
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> We have noticed in production two situations that can cause deadlocks and 
> cause scheduling of new containers to come to a halt, especially with regard 
> to applications that have a lot of live containers:
> # When these applications release these containers in bulk.
> # When these applications terminate abruptly due to some failure, the 
> scheduler releases all its live containers in a loop.
> To handle the issues mentioned above, we have a patch in production to make 
> sure ALL container releases happen asynchronously - and it has served us well.
> Opening this JIRA to gather feedback on whether this is a good idea generally (cc 
> [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd])
> BTW, In YARN-6251, we already have an asyncReleaseContainer() in the 
> AbstractYarnScheduler and a corresponding scheduler event, which is currently 
> used specifically for the container-update code paths (where the scheduler 
> releases temp containers which it creates for the update)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7050) Post cleanup after YARN-6903, removal of org.apache.slider package

2017-08-23 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7050:
-
Summary: Post cleanup after YARN-6903, removal of org.apache.slider package 
 (was: Post cleanup after YARN-6903)

> Post cleanup after YARN-6903, removal of org.apache.slider package
> --
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch, 
> YARN-7050.yarn-native-services.07.patch, 
> YARN-7050.yarn-native-services.08.patch
>
>
> This jira removes some old code, moves dependency classes to the new 
> package, and makes some other side changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7050) Post cleanup after YARN-6903

2017-08-23 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138925#comment-16138925
 ] 

Billie Rinaldi commented on YARN-7050:
--

Patch 08 looks good. (The unit test failures are unrelated.) I think we should 
commit this one and complete the renaming / removal of "slider" in another 
ticket. I would also suggest some package name changes in the follow-up:
* ServiceApiConstants - maybe move from 
org.apache.hadoop.yarn.service.api.constants to 
org.apache.hadoop.yarn.service.api
* ServiceMetrics - maybe move from org.apache.hadoop.yarn.service.metrics to 
org.apache.hadoop.yarn.service
* package org.apache.hadoop.yarn.service.compinstance - it might not be clear 
what "comp" is referring to, so perhaps 
org.apache.hadoop.yarn.service.component.instance would be better
* package org.apache.hadoop.yarn.service.servicemonitor - maybe change to 
org.apache.hadoop.yarn.service.monitor

> Post cleanup after YARN-6903
> 
>
> Key: YARN-7050
> URL: https://issues.apache.org/jira/browse/YARN-7050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7050.yarn-native-services.01.patch, 
> YARN-7050.yarn-native-services.02.patch, 
> YARN-7050.yarn-native-services.03.patch, 
> YARN-7050.yarn-native-services.04.patch, 
> YARN-7050.yarn-native-services.05.patch, 
> YARN-7050.yarn-native-services.06.patch, 
> YARN-7050.yarn-native-services.07.patch, 
> YARN-7050.yarn-native-services.08.patch
>
>
> This jira removes some old code, moves dependency classes to the new 
> package, and makes some other side changes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7086) Release all containers asynchronously

2017-08-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138915#comment-16138915
 ] 

Jason Lowe commented on YARN-7086:
--

We've noticed container release is particularly painful as well, although we 
haven't seen it deadlock.

Whether we do this asynchronously or not, one issue is that releasing a bunch 
of containers requires grabbing a highly-contended lock for every container 
released.  Do this in a loop and it ends up taking a long time since getting 
the lock is not cheap.  Async scheduling helps since we can wait in some other 
thread rather than in the AM handler threads or scheduler dispatcher thread, 
but it will still take a long time looping through all those events.  I think 
it would be a lot better if there was a bulk-release interface so we could grab 
the critical lock once.  We can put a limit on how many we do per batch if 
we're worried it will hold that lock for too long, but I don't think it's so 
much the actual work per container as it is the time spent waiting for the lock 
that makes this so painful.
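
To make the shape of that concrete, here is a rough sketch (hypothetical names 
on a generic lock, not an actual YARN API) of a batched release that takes the 
contended lock once per batch:
{code}
// Hypothetical sketch of batched container release; not an actual YARN API.
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class BulkReleaseSketch {
  private final ReentrantReadWriteLock.WriteLock writeLock =
      new ReentrantReadWriteLock().writeLock();
  private static final int MAX_BATCH = 100;  // bounds how long the lock is held

  void releaseAll(List<String> containerIds) {
    for (int from = 0; from < containerIds.size(); from += MAX_BATCH) {
      int to = Math.min(from + MAX_BATCH, containerIds.size());
      writeLock.lock();  // one acquisition per batch, not per container
      try {
        for (String id : containerIds.subList(from, to)) {
          releaseContainer(id);  // the per-container work itself is cheap
        }
      } finally {
        writeLock.unlock();
      }
    }
  }

  private void releaseContainer(String id) {
    // Scheduler-specific release would go here.
  }
}
{code}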


> Release all containers asynchronously
> 
>
> Key: YARN-7086
> URL: https://issues.apache.org/jira/browse/YARN-7086
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> We have noticed in production two situations that can cause deadlocks and 
> cause scheduling of new containers to come to a halt, especially with regard 
> to applications that have a lot of live containers:
> # When these applications release these containers in bulk.
> # When these applications terminate abruptly due to some failure, the 
> scheduler releases all its live containers in a loop.
> To handle the issues mentioned above, we have a patch in production to make 
> sure ALL container releases happen asynchronously - and it has served us well.
> Opening this JIRA to gather feedback on whether this is a good idea generally (cc 
> [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd])
> BTW, In YARN-6251, we already have an asyncReleaseContainer() in the 
> AbstractYarnScheduler and a corresponding scheduler event, which is currently 
> used specifically for the container-update code paths (where the scheduler 
> releases temp containers which it creates for the update)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7052) RM SchedulingMonitor gives no indication why the spawned thread crashed.

2017-08-23 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7052:
-
Description: 
In YARN-7051, we ran into a case where the preemption monitor thread hung with 
no indication of why.

The preemption monitor is started by the {{SchedulingExecutorService}} from 
{{SchedulingMonitor#serviceStart}}. Once an uncaught throwable happens, nothing 
ever gets the result of the future, the thread running the preemption monitor 
never dies, and it never gets rescheduled.

If {{HadoopExecutor}} were used, it would at least provide a 
{{HadoopScheduledThreadPoolExecutor}} that logs the exception if one happens.

  was:
In YARN-7051, we ran into a case where the preemption monitor thread hung with 
no indication of why. This was because the preemption monitor is started by the 
{{SchedulingExecutorService}} from {{SchedulingMonitor#serviceStart}}, and then 
nothing ever gets the result of the future or allows it to throw an exception 
if needed.

At least with {{HadoopExecutor}}, it will provide a 
{{HadoopScheduledThreadPoolExecutor}} that logs the exception if one happens.

Summary: RM SchedulingMonitor gives no indication why the spawned 
thread crashed.  (was: RM SchedulingMonitor should use HadoopExecutors when 
creating ScheduledExecutorService)

Using {{HadoopExecutor}} may not be feasible.

The following is used to launch the preemption thread:
{code:title=SchedulingMonitor#serviceStart}
ses = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
...
handler = ses.scheduleAtFixedRate(new PreemptionChecker(),
0, monitorInterval, TimeUnit.MILLISECONDS);
{code}

{{HadoopExecutors}} provides a {{newSingleThreadScheduledExecutor}} interface, 
but it just turns around and calls 
{{Executors#newSingleThreadScheduledExecutor}}. The 
{{HadoopExecutors#newSingleThreadScheduledExecutor}} method does not wrap its 
return value in {{HadoopScheduledThreadPoolExecutor}}, so you don't get the 
logging benefits if you use 
{{HadoopExecutors#newSingleThreadScheduledExecutor}}.

Alternatively, we could have the thread itself catch and handle throwables.

The thread being launched by {{SchedulingMonitor#serviceStart}} is calling 
{{PreemptionChecker#run}}, which only handles {{YarnRuntimeException}}. 
Anything else will cause the thread to hang and not get rescheduled.

I suggest that another solution would be to handle other throwables, log them, 
and either re-throw or cancel the thread.
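
To make that failure mode concrete, here is a minimal, self-contained Java 
sketch (hypothetical demo code, not from Hadoop) showing how an uncaught 
throwable silently suppresses all further runs of a task scheduled with 
{{scheduleAtFixedRate}}, and how catching {{Throwable}} inside the task keeps 
it alive and logged:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledTaskDemo {
  public static void main(String[] args) throws Exception {
    ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();

    // Without the try/catch below, the first throw would suppress all
    // subsequent runs, and nothing would be logged unless some caller
    // invoked get() on the ScheduledFuture returned here.
    ses.scheduleAtFixedRate(() -> {
      try {
        throw new IllegalStateException("simulated monitor failure");
      } catch (Throwable t) {
        // Handle and log, then keep running (or rethrow after logging
        // if the task should genuinely die).
        System.err.println("monitor iteration failed: " + t);
      }
    }, 0, 100, TimeUnit.MILLISECONDS);

    Thread.sleep(350);
    ses.shutdownNow();
  }
}
{code}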

> RM SchedulingMonitor gives no indication why the spawned thread crashed.
> 
>
> Key: YARN-7052
> URL: https://issues.apache.org/jira/browse/YARN-7052
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In YARN-7051, we ran into a case where the preemption monitor thread hung 
> with no indication of why.
> The preemption monitor is started by the {{SchedulingExecutorService}} from 
> {{SchedulingMonitor#serviceStart}}. Once an uncaught throwable happens, 
> nothing ever gets the result of the future, the thread running the preemption 
> monitor never dies, and it never gets rescheduled.
> If {{HadoopExecutor}} were used, it would at least provide a 
> {{HadoopScheduledThreadPoolExecutor}} that logs the exception if one happens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7079) to support nodemanager ports management

2017-08-23 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138895#comment-16138895
 ] 

Devaraj K commented on YARN-7079:
-

[~tianjuan428], thanks for the patch; it seems to be quite large, and you've 
clearly put good effort into it. Can you also upload the design draft/details 
if you have any?

>  to support nodemanager  ports management
> -
>
> Key: YARN-7079
> URL: https://issues.apache.org/jira/browse/YARN-7079
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: 田娟娟
> Attachments: YARN_7079.001.patch
>
>
> Just like vcores and memory, ports are important resource information for 
> job allocation. So we add ports management logic to YARN. It can satisfy the 
> ports requests of user jobs and never allocate two jobs (with the same port 
> requirement) to one machine.
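
As a rough illustration of the constraint described above, here is a minimal, 
hypothetical Java sketch (all names invented; not from the attached patch) of a 
per-node port tracker that refuses to place two allocations requesting the same 
port on one machine:

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PortsTracker {
  // node -> ports currently allocated on it (illustrative only)
  private final Map<String, Set<Integer>> allocated = new HashMap<>();

  public boolean canAllocate(String node, Set<Integer> requested) {
    Set<Integer> used = allocated.getOrDefault(node, new HashSet<>());
    for (int p : requested) {
      if (used.contains(p)) {
        return false; // two jobs needing the same port never share a node
      }
    }
    return true;
  }

  public void allocate(String node, Set<Integer> requested) {
    allocated.computeIfAbsent(node, n -> new HashSet<>()).addAll(requested);
  }

  public static void main(String[] args) {
    PortsTracker tracker = new PortsTracker();
    Set<Integer> ports = new HashSet<>(Set.of(8080, 9090));
    System.out.println(tracker.canAllocate("node1", ports)); // true
    tracker.allocate("node1", ports);
    System.out.println(tracker.canAllocate("node1", Set.of(8080))); // false
  }
}
{code}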



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7056) Document Resource Profiles feature

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138880#comment-16138880
 ] 

Hadoop QA commented on YARN-7056:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
23s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883384/YARN-7056.YARN-3926.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 5203b3506b87 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 2144629 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17094/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Resource Profiles feature
> --
>
> Key: YARN-7056
> URL: https://issues.apache.org/jira/browse/YARN-7056
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7056.YARN-3926.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5355) YARN Timeline Service v.2: alpha 2

2017-08-23 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5355:
---
Attachment: Documentation - The YARN Timeline Service v2.pdf

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
> Attachments: Documentation - The YARN Timeline Service v2.pdf, 
> Timeline Service v2_ Ideas for Next Steps.pdf, YARN-5355.01.patch, 
> YARN-5355.02.patch, YARN-5355.03.patch, YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138862#comment-16138862
 ] 

Eric Yang edited comment on YARN-7066 at 8/23/17 6:43 PM:
--

The current proposed syntax looks like this:

{code}
{
  "name": "hbase-app-1",
  "components": [
{
  "name": "hbasemaster",
  ...
  "configuration": {
"env": {
  "HBASE_LOG_DIR": "",
  "MOUNTS": "[{ \"source\":\"/home/${USER}\", 
\"target\":\"/mnt/hdfs/user/${USER}\", \"option\":\"ro\" },{ 
\"source\":\"/tmp/${USER}/data\", \"target\":\"/mnt/hdfs/tmp/${USER}/data\" }]"
},
},
{
  ...
}
  ],
  ...
}
{code}

Where "MOUNTS" is a JSON string that specifies a list of mount point source, 
target, and option entries.

{code}
{
  "source": "/home/${USER}",
  "target": "/mnt/hdfs/${USER}",
  "option": "ro"
}
{code}

The nicer design looks like this in Yarnfile:

{code}
{
  "name": "serving",
  ...
  "configuration": {
"volumes": [
  {
"source": "/mnt/hdfs/user/${USER}",
"target": "/home/${USER}",
"option": "ro"
  }
]
  }
}
{code}

The nicer design would break a couple of YARN container interfaces because the 
original design doesn't contain volumes.  Hence, I will go with the environment 
variable implementation.  It might be possible to expose the volumes keyword in 
the Yarnfile, then pass the information through the interface using environment 
variables to avoid changes to the container interface.
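
For illustration, here is a minimal, self-contained Java sketch (hypothetical 
class and example values; it assumes Jackson on the classpath, as elsewhere in 
Hadoop) of how such a MOUNTS string could be parsed back into 
source/target/option entries:

{code}
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

public class MountsParser {
  public static void main(String[] args) throws Exception {
    // The env value is a JSON array serialized into a single string,
    // as in the proposal above (paths here are illustrative only).
    String mounts = "[{\"source\":\"/home/alice\","
        + "\"target\":\"/mnt/hdfs/user/alice\",\"option\":\"ro\"},"
        + "{\"source\":\"/tmp/alice/data\","
        + "\"target\":\"/mnt/hdfs/tmp/alice/data\"}]";

    List<Map<String, String>> entries = new ObjectMapper()
        .readValue(mounts, new TypeReference<List<Map<String, String>>>() {});

    for (Map<String, String> e : entries) {
      // "option" is optional; default to read-write when absent.
      String opt = e.getOrDefault("option", "rw");
      System.out.println(e.get("source") + " -> " + e.get("target")
          + " (" + opt + ")");
    }
  }
}
{code}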


was (Author: eyang):
The current proposed syntax looks like this:

{code}
{
  "name": "hbase-app-1",
  "components": [
{
  "name": "hbasemaster",
  ...
  "configuration": {
"env": {
  "HBASE_LOG_DIR": "",
  "MOUNTS": "[{ \"source\":\"/home/${USER}\", 
\"target\":\"/mnt/hdfs/user/${USER}\", \"option\":\"ro\" },{ 
\"source\":\"/tmp/${USER}/data\", \"target\":\"/mnt/hdfs/tmp/${USER}/data\" }]"
},
},
{
  ...
}
  ],
  ...
}
{code}

Where "MOUNTS" is a string of JSON that specifies list of mount point source, 
target, and option.

{code}
{
  "source": "/home/${USER}",
  "target": "/mnt/hdfs/${USER}",
  "option": "ro"
}
{code}

The nicer design looks like this in Yarnfile:

{code}
{
  "name": "serving",
  ...
  "configuration": {
"volumes": [
  {
"source": "/mnt/hdfs/user/${user}",
"target": "/home/${user}",
"option": "ro"
  }
]
  }
}
{code}

The nice design will break a couple Yarn container interface because the 
original design doesn't contain volumes.  Hence, I will go with environment 
variable implementation.  It might be possible to expose the volumes keyword 
for Yarnfile, then pass the information through interface using the environment 
variables to avoid changes to container interface.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> Yarnfile describes environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have ability to 
> specify the volumes to mount.  This can be used in combination to 
> AMBARI-21748 to mount HDFS as data directories to docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7066) Add ability to specify volumes to mount for DockerContainerRuntime

2017-08-23 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138862#comment-16138862
 ] 

Eric Yang commented on YARN-7066:
-

The current proposed syntax looks like this:

{code}
{
  "name": "hbase-app-1",
  "components": [
{
  "name": "hbasemaster",
  ...
  "configuration": {
"env": {
  "HBASE_LOG_DIR": "",
  "MOUNTS": "[{ \"source\":\"/home/${USER}\", 
\"target\":\"/mnt/hdfs/user/${USER}\", \"option\":\"ro\" },{ 
\"source\":\"/tmp/${USER}/data\", \"target\":\"/mnt/hdfs/tmp/${USER}/data\" }]"
},
},
{
  ...
}
  ],
  ...
}
{code}

Where "MOUNTS" is a JSON string that specifies a list of mount point source, 
target, and option entries.

{code}
{
  "source": "/home/${USER}",
  "target": "/mnt/hdfs/${USER}",
  "option": "ro"
}
{code}

The nicer design looks like this in Yarnfile:

{code}
{
  "name": "serving",
  ...
  "configuration": {
"volumes": [
  {
"source": "/mnt/hdfs/user/${user}",
"target": "/home/${user}",
"option": "ro"
  }
]
  }
}
{code}

The nicer design would break a couple of YARN container interfaces because the 
original design doesn't contain volumes.  Hence, I will go with the environment 
variable implementation.  It might be possible to expose the volumes keyword in 
the Yarnfile, then pass the information through the interface using environment 
variables to avoid changes to the container interface.

> Add ability to specify volumes to mount for DockerContainerRuntime
> --
>
> Key: YARN-7066
> URL: https://issues.apache.org/jira/browse/YARN-7066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>
> Yarnfile describes environment, docker image, and configuration template for 
> launching docker containers in YARN.  It would be nice to have ability to 
> specify the volumes to mount.  This can be used in combination to 
> AMBARI-21748 to mount HDFS as data directories to docker containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5592) Add support for dynamic resource updates with multiple resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5592:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-3926)

> Add support for dynamic resource updates with multiple resource types
> -
>
> Key: YARN-5592
> URL: https://issues.apache.org/jira/browse/YARN-5592
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5592) Add support for dynamic resource updates with multiple resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5592:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7069

> Add support for dynamic resource updates with multiple resource types
> -
>
> Key: YARN-5592
> URL: https://issues.apache.org/jira/browse/YARN-5592
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5591) Update web UIs to reflect multiple resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5591:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7069

> Update web UIs to reflect multiple resource types
> -
>
> Key: YARN-5591
> URL: https://issues.apache.org/jira/browse/YARN-5591
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5590:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7069

> Add support for increase and decrease of container resources with resource 
> profiles
> ---
>
> Key: YARN-5590
> URL: https://issues.apache.org/jira/browse/YARN-5590
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5590:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-3926)

> Add support for increase and decrease of container resources with resource 
> profiles
> ---
>
> Key: YARN-5590
> URL: https://issues.apache.org/jira/browse/YARN-5590
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5591) Update web UIs to reflect multiple resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5591:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-3926)

> Update web UIs to reflect multiple resource types
> -
>
> Key: YARN-5591
> URL: https://issues.apache.org/jira/browse/YARN-5591
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5589:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7069

> Update CapacitySchedulerConfiguration minimum and maximum calculations to 
> consider all resource types
> -
>
> Key: YARN-5589
> URL: https://issues.apache.org/jira/browse/YARN-5589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5589:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-3926)

> Update CapacitySchedulerConfiguration minimum and maximum calculations to 
> consider all resource types
> -
>
> Key: YARN-5589
> URL: https://issues.apache.org/jira/browse/YARN-5589
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7056) Document Resource Profiles feature

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-7056:
-

Assignee: Sunil G

> Document Resource Profiles feature
> --
>
> Key: YARN-7056
> URL: https://issues.apache.org/jira/browse/YARN-7056
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7056.YARN-3926.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7056) Document Resource Profiles feature

2017-08-23 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7056:
--
Attachment: YARN-7056.YARN-3926.001.patch

Attaching an initial patch to add basic documentation for resource profiles 
and how to configure and use them.
cc [~leftnoteasy]

> Document Resource Profiles feature
> --
>
> Key: YARN-7056
> URL: https://issues.apache.org/jira/browse/YARN-7056
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
> Attachments: YARN-7056.YARN-3926.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7086) Release all containers asynchronously

2017-08-23 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7086:
-

 Summary: Release all containers asynchronously
 Key: YARN-7086
 URL: https://issues.apache.org/jira/browse/YARN-7086
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Arun Suresh
Assignee: Arun Suresh


We have noticed in production two situations that can cause deadlocks and bring 
scheduling of new containers to a halt, especially with regard to applications 
that have a lot of live containers:
# When these applications release these containers in bulk.
# When these applications terminate abruptly due to some failure, the scheduler 
releases all its live containers in a loop.

To handle the issues mentioned above, we have a patch in production to make 
sure ALL container releases happen asynchronously - and it has served us well.

Opening this JIRA to gather feedback on whether this is a good idea generally 
(cc [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd])

BTW, in YARN-6251 we already have an asyncReleaseContainer() in the 
AbstractYarnScheduler and a corresponding scheduler event, which is currently 
used specifically for the container-update code paths (where the scheduler 
releases temp containers which it creates for the update)
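
As a rough sketch of the general pattern (purely illustrative; the class and 
method names below are invented and this is not the production patch), releases 
can be queued to a dedicated worker thread so that bulk releases and 
termination loops return immediately:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncReleaser {
  private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
  private final Thread worker = new Thread(() -> {
    try {
      while (true) {
        String containerId = pending.take();
        // The actual completion logic would run under the scheduler's
        // locks; doing it here keeps callers from blocking on them.
        System.out.println("releasing " + containerId);
      }
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
    }
  }, "async-container-release");

  public AsyncReleaser() {
    worker.setDaemon(true);
    worker.start();
  }

  // Callers (bulk release, app-termination loop) return immediately.
  public void releaseAsync(String containerId) {
    pending.add(containerId);
  }

  public static void main(String[] args) throws InterruptedException {
    AsyncReleaser releaser = new AsyncReleaser();
    for (int i = 0; i < 5; i++) {
      releaser.releaseAsync("container_0000_" + i);
    }
    Thread.sleep(200); // let the daemon worker drain the queue before exit
  }
}
{code}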





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7024) Fix issues on recovery in LevelDB store

2017-08-23 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138816#comment-16138816
 ] 

Xuan Gong commented on YARN-7024:
-

+1, LGTM.

Committed into the YARN-5734 branch. Thanks, Jonathan.

> Fix issues on recovery in LevelDB store
> ---
>
> Key: YARN-7024
> URL: https://issues.apache.org/jira/browse/YARN-7024
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-7024-YARN-5734.001.patch
>
>
> On recovery, we get the pending mutations not yet committed to the store. 
> Upon confirming them, we remove them in memory while iterating, which may 
> cause issues in the future.
> Also, when getting the pending mutations which were logged but not committed 
> on recovery, we don't update the txnId in memory.
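
As a generic illustration of the iteration hazard mentioned above (plain Java 
with invented names, not the store code itself), removing elements through the 
iterator avoids the {{ConcurrentModificationException}} that removing through 
the collection can trigger:

{code}
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class SafeRemoval {
  public static void main(String[] args) {
    List<String> pending = new LinkedList<>();
    pending.add("mutation-1");
    pending.add("mutation-2");

    // Calling pending.remove(m) inside a for-each loop risks a
    // ConcurrentModificationException; Iterator.remove() does not.
    for (Iterator<String> it = pending.iterator(); it.hasNext();) {
      String m = it.next();
      System.out.println("confirming " + m);
      it.remove();
    }
  }
}
{code}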



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7085) Application.schedule(), Application.getResources(), and Application.assign() appear to only be used via test code

2017-08-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7085:
---
Summary: Application.schedule(), Application.getResources(), and 
Application.assign() appear to only be used via test code  (was: 
Application.schedule() and Application.assign() appear to only be used in test 
code)

> Application.schedule(), Application.getResources(), and Application.assign() 
> appear to only be used via test code
> -
>
> Key: YARN-7085
> URL: https://issues.apache.org/jira/browse/YARN-7085
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>
> That's a pretty big chunk of code to be purely for tests.  I haven't looked 
> at it closely enough yet to tell if the code is there to support the tests, 
> or if the tests are just testing dead code.  Either way, we should remove the 
> code from {{Application}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7085) Application.schedule() and Application.assign() appear to only be used in test code

2017-08-23 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-7085:
--

 Summary: Application.schedule() and Application.assign() appear to 
only be used in test code
 Key: YARN-7085
 URL: https://issues.apache.org/jira/browse/YARN-7085
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 3.0.0-alpha4
Reporter: Daniel Templeton


That's a pretty big chunk of code to be purely for tests.  I haven't looked at 
it closely enough yet to tell if the code is there to support the tests, or if 
the tests are just testing dead code.  Either way, we should remove the code 
from {{Application}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-23 Thread Aaron Gresch (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Gresch updated YARN-6736:
---
Attachment: YARN-6736-YARN-5355.005.patch

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch, YARN-6736-YARN-5355.003.patch, 
> YARN-6736-YARN-5355.004.patch, YARN-6736-YARN-5355.005.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6640) AM heartbeat stuck when responseId overflows MAX_INT

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138734#comment-16138734
 ] 

Hadoop QA commented on YARN-6640:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6640 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883359/YARN-6640.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e6e65f7572a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4249172 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17090/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17090/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17090/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



>  AM heartbeat stuck when responseId overflows MAX_INT
> -
>
> Key: YARN-6640
> URL: https://issues.apache.org/jira/browse/YARN-6640
> Project: Hadoop YARN
>   

[jira] [Commented] (YARN-6933) ResourceUtils.DISALLOWED_NAMES and ResourceUtils.checkMandatoryResources() are duplicating work

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138690#comment-16138690
 ] 

Hadoop QA commented on YARN-6933:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 7s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6933 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883358/YARN-6933-YARN-3926.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a34eae277603 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 2144629 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17091/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17091/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.
