[jira] [Updated] (YARN-5966) AMRMClient changes to support ExecutionType update

2017-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5966:
--
Attachment: YARN-5966.006.patch

Updating patch to fix the javac warnings and the white-space checkstyle warning.

> AMRMClient changes to support ExecutionType update
> --
>
> Key: YARN-5966
> URL: https://issues.apache.org/jira/browse/YARN-5966
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5966.001.patch, YARN-5966.002.patch, 
> YARN-5966.003.patch, YARN-5966.004.patch, YARN-5966.005.patch, 
> YARN-5966.006.patch, YARN-5966.wip.001.patch
>
>
> {{AMRMClient}} changes to support change of container ExecutionType
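For context, a minimal sketch of what driving such an update could look like 
from an AM, written against the container-update records on trunk at the time; 
the exact {{AMRMClient}} surface added by this patch may differ:

{code}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class PromoteContainerSketch {
  // Sketch only: ask the RM to promote an OPPORTUNISTIC container to
  // GUARANTEED; the updated container arrives in a later allocate() response.
  static void promote(AMRMClient<AMRMClient.ContainerRequest> amrmClient,
      Container container) {
    UpdateContainerRequest update = UpdateContainerRequest.newInstance(
        container.getVersion(),                 // guards concurrent updates
        container.getId(),
        ContainerUpdateType.PROMOTE_EXECUTION_TYPE,
        null,                                   // capability unchanged
        ExecutionType.GUARANTEED);              // target ExecutionType
    amrmClient.requestContainerUpdate(container, update);
  }
}
{code}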






[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839241#comment-15839241
 ] 

Xiao Chen commented on YARN-5641:
-

It seems this commit broke branch-2 compilation. Could you take a look? Thanks.

https://builds.apache.org/job/PreCommit-HADOOP-Build/11511/artifact/patchprocess/branch-compile-root-jdk1.7.0_121.txt

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch, YARN-5641.004.patch, YARN-5641.005.patch, 
> YARN-5641.006.patch, YARN-5641.007.patch, YARN-5641.008.patch, 
> YARN-5641.009.patch, YARN-5641.009.patch, YARN-5641.010.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs, leaving large 
> footprints that persist on the nodes indefinitely.






[jira] [Commented] (YARN-5966) AMRMClient changes to support ExecutionType update

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839135#comment-15839135
 ] 

Hadoop QA commented on YARN-5966:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 20s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 35 unchanged - 
0 fixed = 38 total (was 35) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 105 unchanged - 3 fixed = 112 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m  2s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5966 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849399/YARN-5966.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 36318d23ee8a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Updated] (YARN-4658) Typo in o.a.h.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler comment

2017-01-25 Thread Udai Kiran Potluri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Kiran Potluri updated YARN-4658:
-
Attachment: YARN-4658.001.patch

Patch to fix the typo.

> Typo in o.a.h.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
> comment
> --
>
> Key: YARN-4658
> URL: https://issues.apache.org/jira/browse/YARN-4658
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Udai Kiran Potluri
> Attachments: YARN-4658.001.patch
>
>
> Comment in {{testContinuousSchedulingInterruptedException()}} is
> {code}
>   // Add one nodes 
> {code}






[jira] [Commented] (YARN-5831) Propagate allowPreemptionFrom flag all the way down to the app

2017-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839029#comment-15839029
 ] 

Yufei Gu commented on YARN-5831:


Thanks Karthik!

> Propagate allowPreemptionFrom flag all the way down to the app
> --
>
> Key: YARN-5831
> URL: https://issues.apache.org/jira/browse/YARN-5831
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5831.001.patch, YARN-5831.002.patch, 
> YARN-5831.003.patch, YARN-5831.004.patch, YARN-5831.005.patch
>
>
> FairScheduler allows disallowing preemption from a queue. When checking if 
> preemption for an application is allowed, the new preemption code recurses 
> all the way to the root queue to check this flag. 
> Propagating this information all the way to the app will be more efficient. 
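A tiny sketch of the caching idea, with illustrative names only (not the ones 
used in the patch): fold the ancestors' flags into each queue once so the 
per-app check needs no recursion.

{code}
// Illustrative sketch: compute an effective "preemptable" bit per queue at
// construction time instead of walking to the root on every check.
class SketchQueue {
  private final SketchQueue parent;
  private final boolean allowPreemptionFrom; // flag from the allocation file
  private final boolean preemptable;         // cached effective value

  SketchQueue(SketchQueue parent, boolean allowPreemptionFrom) {
    this.parent = parent;
    this.allowPreemptionFrom = allowPreemptionFrom;
    // Preemption is allowed only if no ancestor disables it.
    this.preemptable = allowPreemptionFrom
        && (parent == null || parent.preemptable);
  }

  boolean isPreemptable() {
    return preemptable; // O(1); no walk to the root queue
  }
}
{code}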






[jira] [Commented] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839026#comment-15839026
 ] 

Yufei Gu commented on YARN-4975:


I've run the failed tests locally; the failures are unrelated to this patch.

> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 
>
> Key: YARN-4975
> URL: https://issues.apache.org/jira/browse/YARN-4975
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
> Attachments: YARN-4975.001.patch, YARN-4975.002.patch
>
>
> We upgraded our clusters to 2.7.2 from 2.4.1 and saw the following exception 
> in RM logs :
> {code}
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException:
>  Both <reservation> and type="parent" found for queue root.adhoc which is unsupported
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:519)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:352)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1440)
> {code}
> From the exception, it looks like we've configured 'reservation', but we've 
> not. The issue is that AllocationFileLoaderService#loadQueue assumes that a 
> parent queue marked as 'type=parent' cannot have configured child queues. 
> That can be a problem in cases where we mark a queue as 'parent' which has no 
> configured child queues to start with, but we can add child queues later on.
> Also the exception message is kind of misleading since we haven't configured 
> 'reservation'. 
> How to reproduce:
> Run fair scheduler with following queue config:
> {code}
> <!-- Element names here are a reconstruction; the archive stripped the
> original tags and kept only the values 10, 300, and 3. -->
> <queue name="adhoc" type="parent">
>   <maxRunningApps>10</maxRunningApps>
>   <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
>   <queue name="adhoc_child">
>     <maxRunningApps>3</maxRunningApps>
>   </queue>
> </queue>
> {code}






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839013#comment-15839013
 ] 

ASF GitHub Bot commented on YARN-4212:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/181#discussion_r97920043
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
 ---
@@ -305,4 +305,24 @@ public void recoverContainer(Resource clusterResource,
 // TODO Auto-generated method stub
 
   }
+
+  /**
+   * Recursively check policies for queues in pre-order. Get queue policies
+   * from the allocation file instead of properties of {@link FSQueue} 
objects.
+   *
+   * @param queueConf allocation configuration
+   * @throws AllocationConfigurationException if there is any policy 
violation
+   */
+  public void checkPoliciesFromConf(AllocationConfiguration queueConf)
--- End diff --

Good idea! 


> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes a weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have one parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fair share, since during the 
> recomputeShares() process the child queues were all assigned 0 for their 
> fair-share vcores.
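To make the failure mode concrete, a minimal illustration with made-up numbers 
(not taken from the patch or any test):

{code}
import org.apache.hadoop.yarn.api.records.Resource;

public class FairParentDrfChildSketch {
  public static void main(String[] args) {
    // Under a 'fair' parent policy, shares are computed over memory alone,
    // so each child's vcore fair share comes out as 0.
    Resource cluster = Resource.newInstance(8192, 8);    // 8192 MB, 8 vcores
    Resource childShare = Resource.newInstance(4096, 0); // memory split; vcores = 0
    // An app in a 'drf' child using <2048 MB, 2 vcores> is under its memory
    // share but always over its vcore share of 0, so it is treated as above
    // fair share after every recomputeShares().
    System.out.println("child fair share = " + childShare
        + " of cluster " + cluster);
  }
}
{code}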






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839009#comment-15839009
 ] 

ASF GitHub Bot commented on YARN-4212:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/181#discussion_r97919979
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
 ---
@@ -282,6 +292,13 @@ private FSQueue createNewQueues(FSQueueType queueType,
 queue = newParent;
   }
 
+  try {
+policy.initialize(scheduler.getClusterResource());
--- End diff --

My first thought on this one was similar to the depth checking: I planned to 
refactor it in the next JIRA. Another question on my mind: do we need to 
initialize the policy every time we set it?


> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes a weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have one parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fair share, since during the 
> recomputeShares() process the child queues were all assigned 0 for their 
> fair-share vcores.






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839007#comment-15839007
 ] 

ASF GitHub Bot commented on YARN-4212:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/181#discussion_r97919802
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
 ---
@@ -272,6 +272,16 @@ private FSQueue createNewQueues(FSQueueType queueType,
   FSParentQueue newParent = null;
   String queueName = i.next();
 
+  // Check if child policy is allowed
--- End diff --

Yes, my original thought was to do that in another JIRA. The depth check and the 
parent-child policy check are not the same. It might be a good idea to combine 
them, though, since the depth-checking logic only prevents the FIFO policy from 
being used on a non-leaf queue, and the current implementation seems a bit 
heavy. I can do it in this JIRA.


> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes a weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have one parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fair share, since during the 
> recomputeShares() process the child queues were all assigned 0 for their 
> fair-share vcores.






[jira] [Commented] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838986#comment-15838986
 ] 

Hadoop QA commented on YARN-4975:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849388/YARN-4975.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b1904647aadc 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 425a7e5 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14755/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14755/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14755/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 

[jira] [Updated] (YARN-5966) AMRMClient changes to support ExecutionType update

2017-01-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5966:
--
Attachment: YARN-5966.005.patch

Updating patch. Agree with [~subru]

> AMRMClient changes to support ExecutionType update
> --
>
> Key: YARN-5966
> URL: https://issues.apache.org/jira/browse/YARN-5966
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5966.001.patch, YARN-5966.002.patch, 
> YARN-5966.003.patch, YARN-5966.004.patch, YARN-5966.005.patch, 
> YARN-5966.wip.001.patch
>
>
> {{AMRMClient}} changes to support change of container ExecutionType






[jira] [Commented] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838923#comment-15838923
 ] 

Hudson commented on YARN-3637:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11176 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11176/])
YARN-3637. Handle localization sym-linking correctly at the YARN level. (sjlee: 
rev 425a7e502869c4250aba927ecc3c6f3c561c6ff2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestSharedCacheClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/SharedCacheClient.java


> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch, 
> YARN-3637-trunk.003.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized, one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.
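For reference, a minimal sketch of the fragment workaround described above 
(paths and names are illustrative): the URI fragment selects the symlink name, 
which becomes the key of the localResources map in the ContainerLaunchContext.

{code}
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.LocalResource;

public class FragmentSymlinkSketch {
  // Two resources that are both named job.jar upstream can coexist in the
  // container working directory when they are linked under distinct names.
  static Map<String, LocalResource> links(LocalResource jar1,
      LocalResource jar2) throws Exception {
    URI one = new URI("hdfs:///sharedcache/checksum1/job.jar#foo.jar");
    URI two = new URI("hdfs:///sharedcache/checksum2/job.jar#bar.jar");
    Map<String, LocalResource> localResources = new HashMap<>();
    localResources.put(one.getFragment(), jar1); // localized as foo.jar
    localResources.put(two.getFragment(), jar2); // localized as bar.jar
    return localResources;
  }
}
{code}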






[jira] [Comment Edited] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-01-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838885#comment-15838885
 ] 

Jian He edited comment on YARN-3053 at 1/26/17 12:27 AM:
-

bq. Because for such clients we will not have a mechanism to pass the token 
when collector/NM restarts.
Sorry, I didn't get that. For such apps, won't the client still need to pass the 
new address to the AMs in some way? IIUC, that is no different from passing the 
token.
Also, I'm not sure the original collector design accounted for unmanaged AMs in 
the general case. (I think the collector is currently not even launched for an 
unmanaged AM.) A lot of other details need to be fleshed out.


was (Author: jianhe):
bq. Because for such clients we will not have a mechanism to pass the token 
when collector/NM restarts.
Sorry, I didn't get that. For such apps, won't the client still need to pass the 
new address to the AMs in the app's own way? IIUC, that is no different from 
passing the token.
Also, I'm not sure the original collector design accounted for unmanaged AMs in 
the general case. (I think the collector is currently not even launched for an 
unmanaged AM.) A lot of other details need to be fleshed out.

> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.






[jira] [Commented] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-01-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838885#comment-15838885
 ] 

Jian He commented on YARN-3053:
---

bq. Because for such clients we will not have a mechanism to pass the token 
when collector/NM restarts.
Sorry, I didn't get that. For such apps, won't the client still need to pass the 
new address to the AMs in the app's own way? IIUC, that is no different from 
passing the token.
Also, I'm not sure the original collector design accounted for unmanaged AMs in 
the general case. (I think the collector is currently not even launched for an 
unmanaged AM.) A lot of other details need to be fleshed out.

> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.






[jira] [Commented] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838872#comment-15838872
 ] 

Daniel Templeton commented on YARN-4975:


+1 pending Jenkins' approval.

> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 
>
> Key: YARN-4975
> URL: https://issues.apache.org/jira/browse/YARN-4975
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
> Attachments: YARN-4975.001.patch, YARN-4975.002.patch
>
>
> We upgraded our clusters to 2.7.2 from 2.4.1 and saw the following exception 
> in RM logs :
> {code}
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException:
>  Both <reservation> and type="parent" found for queue root.adhoc which is unsupported
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:519)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:352)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1440)
> {code}
> From the exception, it looks like we've configured 'reservation', but we've 
> not. The issue is that AllocationFileLoaderService#loadQueue assumes that a 
> parent queue marked as 'type=parent' cannot have configured child queues. 
> That can be a problem in cases where we mark a queue as 'parent' which has no 
> configured child queues to start with, but we can add child queues later on.
> Also the exception message is kind of misleading since we haven't configured 
> 'reservation'. 
> How to reproduce:
> Run fair scheduler with following queue config:
> {code}
> <!-- Element names here are a reconstruction; the archive stripped the
> original tags and kept only the values 10, 300, and 3. -->
> <queue name="adhoc" type="parent">
>   <maxRunningApps>10</maxRunningApps>
>   <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
>   <queue name="adhoc_child">
>     <maxRunningApps>3</maxRunningApps>
>   </queue>
> </queue>
> {code}






[jira] [Commented] (YARN-6117) SharedCacheManager does not start up

2017-01-25 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838853#comment-15838853
 ] 

Chris Trezzo commented on YARN-6117:


Thanks [~sjlee0] for the review and commit!

> SharedCacheManager does not start up
> 
>
> Key: YARN-6117
> URL: https://issues.apache.org/jira/browse/YARN-6117
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6117-trunk.001.patch
>
>
> The webapp directory for the SharedCacheManager is missing and the SCM fails 
> to start up with the following:
> {noformat}
> 2017-01-22 00:14:25,162 INFO org.apache.hadoop.service.AbstractService: 
> Service SharedCacheManager failed in state STARTED; cause: 
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:330)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:373)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.webapp.SCMWebServer.serviceStart(SCMWebServer.java:65)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager.main(SharedCacheManager.java:157)
> Caused by: java.io.FileNotFoundException: webapps/sharedcache not found in 
> CLASSPATH
> at 
> org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:972)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:478)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
> at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:392)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:291)
> ... 7 more
> {noformat}






[jira] [Commented] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-25 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838851#comment-15838851
 ] 

Chris Trezzo commented on YARN-3637:


Thanks [~sjlee0] for the commit/review and thanks [~templedf] for the review!

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch, 
> YARN-3637-trunk.003.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized, one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.






[jira] [Updated] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-3637:
--
Hadoop Flags: Reviewed

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch, 
> YARN-3637-trunk.003.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized, one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.






[jira] [Commented] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838843#comment-15838843
 ] 

Yufei Gu commented on YARN-4975:


Thanks [~templedf] for the review. Your comments totally make sense to me. 
Uploaded patch 002 to address them.

> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 
>
> Key: YARN-4975
> URL: https://issues.apache.org/jira/browse/YARN-4975
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
> Attachments: YARN-4975.001.patch, YARN-4975.002.patch
>
>
> We upgraded our clusters to 2.7.2 from 2.4.1 and saw the following exception 
> in RM logs :
> {code}
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException:
>  Both <reservation> and type="parent" found for queue root.adhoc which is unsupported
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:519)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:352)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1440)
> {code}
> From the exception, it looks like we've configured 'reservation', but we've 
> not. The issue is that AllocationFileLoaderService#loadQueue assumes that a 
> parent queue marked as 'type=parent' cannot have configured child queues. 
> That can be a problem in cases where we mark a queue as 'parent' which has no 
> configured child queues to start with, but we can add child queues later on.
> Also the exception message is kind of misleading since we haven't configured 
> 'reservation'. 
> How to reproduce:
> Run fair scheduler with following queue config:
> {code}
> <!-- Element names here are a reconstruction; the archive stripped the
> original tags and kept only the values 10, 300, and 3. -->
> <queue name="adhoc" type="parent">
>   <maxRunningApps>10</maxRunningApps>
>   <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
>   <queue name="adhoc_child">
>     <maxRunningApps>3</maxRunningApps>
>   </queue>
> </queue>
> {code}






[jira] [Updated] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4975:
---
Attachment: YARN-4975.002.patch

> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 
>
> Key: YARN-4975
> URL: https://issues.apache.org/jira/browse/YARN-4975
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
> Attachments: YARN-4975.001.patch, YARN-4975.002.patch
>
>
> We upgraded our clusters to 2.7.2 from 2.4.1 and saw the following exception 
> in RM logs :
> {code}
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException:
>  Both <reservation> and type="parent" found for queue root.adhoc which is unsupported
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:519)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:352)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1440)
> {code}
> From the exception, it looks like we've configured 'reservation', but we've 
> not. The issue is that AllocationFileLoaderService#loadQueue assumes that a 
> parent queue marked as 'type=parent' cannot have configured child queues. 
> That can be a problem in cases where we mark a queue as 'parent' which has no 
> configured child queues to start with, but we can add child queues later on.
> Also the exception message is kind of misleading since we haven't configured 
> 'reservation'. 
> How to reproduce:
> Run fair scheduler with following queue config:
> {code}
> <!-- Element names here are a reconstruction; the archive stripped the
> original tags and kept only the values 10, 300, and 3. -->
> <queue name="adhoc" type="parent">
>   <maxRunningApps>10</maxRunningApps>
>   <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
>   <queue name="adhoc_child">
>     <maxRunningApps>3</maxRunningApps>
>   </queue>
> </queue>
> {code}






[jira] [Commented] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-01-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838838#comment-15838838
 ] 

Varun Saxena commented on YARN-3053:


By the way, I think we can still ensure recovery of tokens, because we can 
provide an API on the client side to get delegation tokens explicitly (for 
non-AM / off-app clients in the future) if they can do Kerberos authentication 
with YARN ATS. For such clients we will not have a mechanism to pass the token 
when the collector/NM restarts.

We can, however, leave aside storing tokens granted to AMs and regenerate them 
on restart.

> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.






[jira] [Commented] (YARN-6123) [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated when childQueues added or removed.

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838832#comment-15838832
 ] 

Hadoop QA commented on YARN-6123:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 123 unchanged - 0 fixed = 128 total (was 123) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849375/YARN-6123.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42a43b8cee8a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a7463b6 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14754/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14754/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14754/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Comment Edited] (YARN-6117) SharedCacheManager does not start up

2017-01-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838828#comment-15838828
 ] 

Sangjin Lee edited comment on YARN-6117 at 1/25/17 11:39 PM:
-

Ported it to branch-2 as well (2.9.0). Since the shared cache is not fully 
implemented in previous versions, I don't think we need to backport to those 
versions.


was (Author: sjlee0):
Ported it to branch-2 as well (2.9.0).

> SharedCacheManager does not start up
> 
>
> Key: YARN-6117
> URL: https://issues.apache.org/jira/browse/YARN-6117
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6117-trunk.001.patch
>
>
> The webapp directory for the SharedCacheManager is missing and the SCM fails 
> to start up with the following:
> {noformat}
> 2017-01-22 00:14:25,162 INFO org.apache.hadoop.service.AbstractService: 
> Service SharedCacheManager failed in state STARTED; cause: 
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:330)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:373)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.webapp.SCMWebServer.serviceStart(SCMWebServer.java:65)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager.main(SharedCacheManager.java:157)
> Caused by: java.io.FileNotFoundException: webapps/sharedcache not found in 
> CLASSPATH
> at 
> org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:972)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:478)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
> at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:392)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:291)
> ... 7 more
> {noformat}






[jira] [Updated] (YARN-6117) SharedCacheManager does not start up

2017-01-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-6117:
--
Fix Version/s: 2.9.0

Ported it to branch-2 as well (2.9.0).

> SharedCacheManager does not start up
> 
>
> Key: YARN-6117
> URL: https://issues.apache.org/jira/browse/YARN-6117
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6117-trunk.001.patch
>
>
> The webapp directory for the SharedCacheManager is missing and the SCM fails 
> to start up with the following:
> {noformat}
> 2017-01-22 00:14:25,162 INFO org.apache.hadoop.service.AbstractService: 
> Service SharedCacheManager failed in state STARTED; cause: 
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:330)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:373)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.webapp.SCMWebServer.serviceStart(SCMWebServer.java:65)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager.main(SharedCacheManager.java:157)
> Caused by: java.io.FileNotFoundException: webapps/sharedcache not found in 
> CLASSPATH
> at 
> org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:972)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:478)
> at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:117)
> at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:392)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:291)
> ... 7 more
> {noformat}
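
For context, the classpath lookup that fails above is roughly the following (a 
paraphrase of {{HttpServer2.getWebAppsPath}}, not the exact source); presumably 
the fix is to ship a {{webapps/sharedcache}} directory on the SCM classpath:

{code}
// Paraphrased sketch: the web app directory must be visible on the
// classpath as "webapps/<appName>", otherwise startup fails as above.
protected String getWebAppsPath(String appName) throws FileNotFoundException {
  URL url = getClass().getClassLoader().getResource("webapps/" + appName);
  if (url == null) {
    throw new FileNotFoundException("webapps/" + appName
        + " not found in CLASSPATH");
  }
  String urlString = url.toString();
  return urlString.substring(0, urlString.lastIndexOf('/'));
}
{code}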



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6124) Make CapacityScheduler Preemption Config can be enabled / disabled / updated without restarting RM

2017-01-25 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-6124:


 Summary: Make CapacityScheduler Preemption Config can be enabled / 
disabled / updated without restarting RM
 Key: YARN-6124
 URL: https://issues.apache.org/jira/browse/YARN-6124
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Wangda Tan
Assignee: Wangda Tan


Currently, enabling / disabling / updating the CapacityScheduler preemption 
config requires restarting the RM. This is inconvenient when an admin wants to 
make changes to the preemption config.
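
For context, the switch involved lives in yarn-site.xml and is read only at RM 
startup, while queue configs can already be reloaded at runtime with 
{{yarn rmadmin -refreshQueues}}. A sketch of the relevant property (name per 
the YARN docs; how this JIRA wires up the refresh is an open question):

{code}
<!-- yarn-site.xml: enables the preemption monitor; currently read once at RM start -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
{code}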



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4975) Fair Scheduler: exception thrown when a parent queue marked 'parent' has configured child queues

2017-01-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838792#comment-15838792
 ] 

Daniel Templeton commented on YARN-4975:


Changes look good to me.  Couple of nits:

* This message: "Can't mark <reservation> to a parent queue: " could be 
clearer; maybe "The configuration settings for " + queue + " are invalid.  A 
queue element that contains child queue elements or that has the type="parent" 
attribute cannot also include a reservation element."
* I'm not a huge fan of expected exceptions in tests.  I'd rather you catch 
the exception and verify its message, so you know it came from the right 
place; see the sketch after this list.  With expected exceptions, you could 
get the exception for the wrong reason and still pass.
* In the last test, it would be nice to do a couple of basic asserts to confirm 
that the config was instantiated correctly, i.e. check that the parent and 
child queues exist.  It's redundant, but better to be safe.
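
A minimal sketch of the catch-and-assert pattern from the second nit (test 
name, setup, and message fragment are illustrative, not from the patch):

{code}
@Test
public void testParentMarkedQueueWithChildren() throws Exception {
  try {
    // allocLoader is assumed to be an AllocationFileLoaderService
    // configured with the offending allocation file.
    allocLoader.reloadAllocations();
    fail("Expected AllocationConfigurationException");
  } catch (AllocationConfigurationException ace) {
    // Asserting on the message guards against passing on the wrong exception.
    assertTrue("Unexpected message: " + ace.getMessage(),
        ace.getMessage().contains("type=\"parent\""));
  }
}
{code}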

> Fair Scheduler: exception thrown when a parent queue marked 'parent' has 
> configured child queues
> 
>
> Key: YARN-4975
> URL: https://issues.apache.org/jira/browse/YARN-4975
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
> Attachments: YARN-4975.001.patch
>
>
> We upgraded our clusters to 2.7.2 from 2.4.1 and saw the following exception 
> in RM logs :
> {code}
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException:
>  Both <reservation> and type="parent" found for queue root.adhoc which is 
> unsupported
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:519)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:352)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1440)
> {code}
> From the exception, it looks like we've configured 'reservation', but we've 
> not. The issue is that AllocationFileLoaderService#loadQueue assumes that a 
> parent queue marked as 'type=parent' cannot have configured child queues. 
> That can be a problem in cases where we mark a queue as 'parent' which has no 
> configured child queues to start with, but we can add child queues later on.
> Also the exception message is kind of misleading since we haven't configured 
> 'reservation'. 
> How to reproduce:
> Run fair scheduler with following queue config:
> {code}
> 
> 10
> 300
> 
> 3
>  
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6112) fsOpDurations.addUpdateCallDuration() should be independent to LOG level

2017-01-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838762#comment-15838762
 ] 

Daniel Templeton commented on YARN-6112:


If [~kasha] agrees that change is a mistake, I'm +1 for the patch.

> fsOpDurations.addUpdateCallDuration() should be independent to LOG level
> 
>
> Key: YARN-6112
> URL: https://issues.apache.org/jira/browse/YARN-6112
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6112.001.patch
>
>
> In the update thread of Fair Scheduler, 
> {{fsOpDurations.addUpdateCallDuration()}} records the duration of 
> {{update()}}; it should be independent of the LOG level. YARN-4752 put it 
> inside a {{LOG.isDebugEnabled()}} block. Not sure of any particular reason 
> to do that. cc [~kasha]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6112) fsOpDurations.addUpdateCallDuration() should be independent to LOG level

2017-01-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838759#comment-15838759
 ] 

Daniel Templeton commented on YARN-6112:


[~kasha], the issue is that the call to 
{{fsOpDurations.addUpdateCallDuration()}} now (post 4752) happens inside the 
{{if (LOG.isDebugEnabled())}} block.  Before, it was done whether debug was 
enabled or not, which appears to be the right behavior.
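
In other words, hoist the metrics call out of the guard; a minimal sketch 
(variable names are illustrative, not the actual patch):

{code}
long start = Time.monotonicNow();
update();  // the work being timed
long duration = Time.monotonicNow() - start;
fsOpDurations.addUpdateCallDuration(duration);  // always record the metric
if (LOG.isDebugEnabled()) {
  LOG.debug("Update thread took " + duration + " ms");  // only the log is guarded
}
{code}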

> fsOpDurations.addUpdateCallDuration() should be independent to LOG level
> 
>
> Key: YARN-6112
> URL: https://issues.apache.org/jira/browse/YARN-6112
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6112.001.patch
>
>
> In the update thread of Fair Scheduler, 
> {{fsOpDurations.addUpdateCallDuration()}} records the duration of 
> {{update()}}; it should be independent of the LOG level. YARN-4752 put it 
> inside a {{LOG.isDebugEnabled()}} block. Not sure of any particular reason 
> to do that. cc [~kasha]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6123) [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated when childQueues added or removed.

2017-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6123:
-
Target Version/s: 2.9.0, 3.0.0-alpha3

> [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated 
> when childQueues added or removed.
> ---
>
> Key: YARN-6123
> URL: https://issues.apache.org/jira/browse/YARN-6123
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6123.001.patch
>
>
> YARN-5864 added a queue ordering policy to ParentQueue; we need to make sure 
> the queues of QueueOrderingPolicy are updated when any changes are made to 
> child queues.
> We need to add a test to make sure it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6123) [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated when childQueues added or removed.

2017-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6123:
-
Fix Version/s: (was: 3.0.0-alpha3)
   (was: 2.9.0)

> [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated 
> when childQueues added or removed.
> ---
>
> Key: YARN-6123
> URL: https://issues.apache.org/jira/browse/YARN-6123
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6123.001.patch
>
>
> YARN-5864 added a queue ordering policy to ParentQueue; we need to make sure 
> the queues of QueueOrderingPolicy are updated when any changes are made to 
> child queues.
> We need to add a test to make sure it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6123) [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated when childQueues added or removed.

2017-01-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6123:
-
Attachment: YARN-6123.001.patch

Attached ver.001 patch, [~sunilg] could you please review it? Thanks!

> [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated 
> when childQueues added or removed.
> ---
>
> Key: YARN-6123
> URL: https://issues.apache.org/jira/browse/YARN-6123
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6123.001.patch
>
>
> YARN-5864 added a queue ordering policy to ParentQueue; we need to make sure 
> the queues of QueueOrderingPolicy are updated when any changes are made to 
> child queues.
> We need to add a test to make sure it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6123) [YARN-5864] Add a test to make sure queues of orderingPolicy will be updated when childQueues added or removed.

2017-01-25 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-6123:


 Summary: [YARN-5864] Add a test to make sure queues of 
orderingPolicy will be updated when childQueues added or removed.
 Key: YARN-6123
 URL: https://issues.apache.org/jira/browse/YARN-6123
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


YARN-5864 added a queue ordering policy to ParentQueue; we need to make sure 
the queues of QueueOrderingPolicy are updated when any changes are made to 
child queues.

We need to add a test to make sure it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-25 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838708#comment-15838708
 ] 

Chris Trezzo commented on YARN-3637:


I would say trunk and branch-2 are sufficient. Thanks [~sjlee0]!

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch, 
> YARN-3637-trunk.003.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.
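
For reference, a hypothetical sketch of the existing fragment workaround at 
the client (paths, sizes, and timestamps are illustrative): the fragment 
becomes the symlink name at localization time, so the two same-named jars no 
longer clobber each other.

{code}
URI foo = new URI("hdfs:///sharedcache/checksum1/job.jar#foo.jar");
LocalResource fooRsrc = LocalResource.newInstance(
    ConverterUtils.getYarnUrlFromURI(foo), LocalResourceType.FILE,
    LocalResourceVisibility.APPLICATION, fooSize, fooTimestamp);
// Localized into the container directory as "foo.jar", not "job.jar".
{code}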



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838706#comment-15838706
 ] 

Jason Lowe commented on YARN-574:
-

Thanks for updating the patch!

I don't think this while loop is desired:
{code}
while (currentActiveDownloads.get() >= downloadThreadCount) {
  pauseHeartbeat(cs);
}
{code}
Doing so will prevent the localizer from heartbeating at all to the NM for the 
duration of the active localizations.  That means it ends up doing unnecessary 
work if the container is killed during localization (i.e.: doesn't know it 
would receive a DIE request).  It would also be problematic if we ever 
implemented proper liveness detection for localizers (i.e.: they need to 
continue heartbeating to show they're alive while still localizing).

If we want to prevent the localizer from receiving more work when it's full 
then we should augment the localizer protocol to indicate that in the status, 
e.g.: a boolean indicating that it is 'full' of active localizations or maybe a 
count indicating how many localizations the localizer is ready to accept at the 
moment.  Doing a count has the advantage that the NM can internally loop during 
the localizer status processing and respond with all of the localizations in 
one response rather than making the localizer send N heartbeats to get N active 
downloads going.  Removes the whole messy 
sometimes-we-heartbeat-fast-sometimes-slow thing and excess RPC processing to 
get a lot of downloads going.

Speaking of counting downloads, we can eliminate the Atomic stuff and the need 
to wrap the download call by simply counting the incomplete Futures in the 
createStatus method.  It's already walking all of the pending downloads for 
every heartbeat, which means it would be trivial for it to update a member 
variable with the count of unfinished download Futures (i.e.: active 
downloads).  It would be a simpler approach, but the existing counting scheme 
should work as well.  I'll leave it up to you.
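
A rough sketch of that createStatus idea (field names follow 
ContainerLocalizer, but treat this as illustrative):

{code}
int active = 0;
for (Map.Entry<LocalResource, Future<Path>> pending
    : pendingResources.entrySet()) {
  if (!pending.getValue().isDone()) {
    active++;  // download still in flight
  }
}
activeDownloads = active;  // member read when building the heartbeat status
{code}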

If the download wrapping stays, it should not be using a lambda expression if 
this is going into branch-2 since branch-2 does not require JDK8.  Either that 
or we need a separate patch for branch-2, and I'd rather keep them closer in 
sync to make future cherry-picks easier to do.
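
For the JDK7 point, the anonymous-class equivalent of a lambda-wrapped 
download would look something like this (names are illustrative):

{code}
Future<Path> result = exec.submit(new Callable<Path>() {
  @Override
  public Path call() throws Exception {
    return doDownload(destPath, rsrc, ugi);  // hypothetical wrapped call
  }
});
{code}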

Nit: Javadoc comments that just enumerate the arguments and have empty return 
tags provide no value.  Please remove them or add appropriate documentation to 
make them worthwhile.

The unit test failure is related, please investigate.

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, but this is not used and only one resource is sent for 
> downloading at a time.
> I think we can increase / ensure parallelism (even for a single container 
> requesting resources) for private/application resources by allowing multiple 
> downloads per ContainerLocalizer.
> Total parallelism before
> = number of threads allotted to the PublicLocalizer [public resources] + number 
> of containers [private and application resources]
> Total parallelism after
> = number of threads allotted to the PublicLocalizer [public resources] + number 
> of containers * max downloads per container [private and application resources]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838698#comment-15838698
 ] 

Hudson commented on YARN-5641:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11174 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11174/])
YARN-5641. Localizer leaves behind tarballs after container is complete. 
(jlowe: rev 9e19f758c1950cbcfcd1969461a8a910efca0767)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestContainerLocalizer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShell.java


> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch, YARN-5641.004.patch, YARN-5641.005.patch, 
> YARN-5641.006.patch, YARN-5641.007.patch, YARN-5641.008.patch, 
> YARN-5641.009.patch, YARN-5641.009.patch, YARN-5641.010.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6118) Add javadoc for Resources.isNone

2017-01-25 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838678#comment-15838678
 ] 

Andres Perez commented on YARN-6118:


Could I take this simple task?
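
For reference, a sketch of what the javadoc might say (wording is a 
suggestion, not the patch):

{code}
/**
 * Checks whether the given resource is the "none" resource, i.e. whether it
 * equals {@link #none()}: zero memory and zero virtual cores.
 */
public static boolean isNone(Resource other) {
  return NONE.equals(other);
}
{code}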

> Add javadoc for Resources.isNone
> 
>
> Key: YARN-6118
> URL: https://issues.apache.org/jira/browse/YARN-6118
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-01-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838583#comment-15838583
 ] 

Haibo Chen commented on YARN-4985:
--

[~sjlee0] Can you elaborate a little more on why the schema creator can also be 
"client"? My understanding is that since we run "bin/hbase 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator", 
the hadoop dependencies will be provided by the hbase cluster and therefore 
their versions will be ${hbase-compatible-hadoop.version}.

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package, so that making changes to the 
> coprocessor code, upgrading hbase, deploying hbase on a cluster with a 
> different hadoop version, etc. all become operationally much easier and less 
> error-prone with respect to mismatched library jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Document FairScheduler 'allowPreemptionFrom' queue property

2017-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838576#comment-15838576
 ] 

Yufei Gu commented on YARN-6106:


FYI, YARN-6076 and YARN-5831 are in branch-2. [~rchiang], can you help to check 
this into branch-2? Thanks.

> Document FairScheduler 'allowPreemptionFrom' queue property
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.
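
A minimal fair-scheduler.xml sketch of that behavior (queue names are 
illustrative):

{code}
<allocations>
  <queue name="critical">
    <allowPreemptionFrom>false</allowPreemptionFrom>
    <!-- per YARN-5831, this child is effectively non-preemptable too -->
    <queue name="child"/>
  </queue>
</allocations>
{code}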



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-01-25 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-4985:


Assignee: Haibo Chen  (was: Vrushali C)

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package, so that making changes to the 
> coprocessor code, upgrading hbase, deploying hbase on a cluster with a 
> different hadoop version, etc. all become operationally much easier and less 
> error-prone with respect to mismatched library jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5830) FairScheduler: Avoid preempting AM containers

2017-01-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838572#comment-15838572
 ] 

Yufei Gu commented on YARN-5830:


[~kasha], thanks a lot for the detailed reviews and commit!

> FairScheduler: Avoid preempting AM containers
> -
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-5830.001.patch, YARN-5830.002.patch, 
> YARN-5830.003.patch, YARN-5830.004.patch, YARN-5830.005.patch, 
> YARN-5830.006.patch, YARN-5830.007.patch
>
>
> While considering containers for preemption, avoid AM containers unless 
> absolutely necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5830) FairScheduler: Avoid preempting AM containers

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838560#comment-15838560
 ] 

Hudson commented on YARN-5830:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11172 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11172/])
YARN-5830. FairScheduler: Avoid preempting AM containers. (Yufei Gu via (kasha: 
rev abedb8a9d86b4593a37fd3d2313fbcb057c7846a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerPreemption.java


> FairScheduler: Avoid preempting AM containers
> -
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-5830.001.patch, YARN-5830.002.patch, 
> YARN-5830.003.patch, YARN-5830.004.patch, YARN-5830.005.patch, 
> YARN-5830.006.patch, YARN-5830.007.patch
>
>
> While considering containers for preemption, avoid AM containers unless 
> absolutely necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838551#comment-15838551
 ] 

Hadoop QA commented on YARN-6100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 39 unchanged - 1 fixed = 44 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  3s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
52s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  
org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices$2.write(OutputStream)
 might ignore java.lang.Exception  At NMWebServices.java:At 
NMWebServices.java:[line 421] |
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849341/YARN-6100.trunk.v3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-5831) Propagate allowPreemptionFrom flag all the way down to the app

2017-01-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838517#comment-15838517
 ] 

Karthik Kambatla commented on YARN-5831:


With YARN-4752 committed to branch-2, the patch applies cleanly to branch-2. 
Just committed it to branch-2 as well. 

Thanks for the contribution, [~yufeigu]. 

> Propagate allowPreemptionFrom flag all the way down to the app
> --
>
> Key: YARN-5831
> URL: https://issues.apache.org/jira/browse/YARN-5831
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5831.001.patch, YARN-5831.002.patch, 
> YARN-5831.003.patch, YARN-5831.004.patch, YARN-5831.005.patch
>
>
> FairScheduler allows disallowing preemption from a queue. When checking if 
> preemption for an application is allowed, the new preemption code recurses 
> all the way to the root queue to check this flag. 
> Propagating this information all the way to the app will be more efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5831) Propagate allowPreemptionFrom flag all the way down to the app

2017-01-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5831:
---
Fix Version/s: 2.9.0

> Propagate allowPreemptionFrom flag all the way down to the app
> --
>
> Key: YARN-5831
> URL: https://issues.apache.org/jira/browse/YARN-5831
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5831.001.patch, YARN-5831.002.patch, 
> YARN-5831.003.patch, YARN-5831.004.patch, YARN-5831.005.patch
>
>
> FairScheduler allows disallowing preemption from a queue. When checking if 
> preemption for an application is allowed, the new preemption code recurses 
> all the way to the root queue to check this flag. 
> Propagating this information all the way to the app will be more efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5830) FairScheduler: Avoid preempting AM containers

2017-01-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5830:
---
Summary: FairScheduler: Avoid preempting AM containers  (was: Avoid 
preempting AM containers)

> FairScheduler: Avoid preempting AM containers
> -
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5830.001.patch, YARN-5830.002.patch, 
> YARN-5830.003.patch, YARN-5830.004.patch, YARN-5830.005.patch, 
> YARN-5830.006.patch, YARN-5830.007.patch
>
>
> While considering containers for preemption, avoid AM containers unless 
> absolutely necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838502#comment-15838502
 ] 

Jason Lowe commented on YARN-5641:
--

+1 for the latest patch.  The unit test failures are unrelated.  Committing 
this.


> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch, YARN-5641.004.patch, YARN-5641.005.patch, 
> YARN-5641.006.patch, YARN-5641.007.patch, YARN-5641.008.patch, 
> YARN-5641.009.patch, YARN-5641.009.patch, YARN-5641.010.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838410#comment-15838410
 ] 

Hadoop QA commented on YARN-5641:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 5 new + 103 unchanged 
- 0 fixed = 108 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
30s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5641 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849153/YARN-5641.010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c8371f6d1622 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a56520 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14752/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14752/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14752/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Updated] (YARN-4752) FairScheduler should preempt for a ResourceRequest and all preempted containers should be on the same node

2017-01-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4752:
---
Attachment: yarn-6076-branch-2.1.patch

Attaching the branch-2 patch that was committed; the Jenkins run was done on 
YARN-6076.

> FairScheduler should preempt for a ResourceRequest and all preempted 
> containers should be on the same node
> --
>
> Key: YARN-4752
> URL: https://issues.apache.org/jira/browse/YARN-4752
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: yarn-4752-1.patch, yarn-4752.2.patch, yarn-4752.3.patch, 
> yarn-4752.4.patch, yarn-4752.4.patch, 
> YARN-4752.FairSchedulerPreemptionOverhaul.pdf, yarn-6076-branch-2.1.patch
>
>
> A number of issues have been reported with respect to preemption in 
> FairScheduler along the lines of:
> # FairScheduler preempts resources from nodes even if the resultant free 
> resources cannot fit the incoming request.
> # Preemption doesn't preempt from sibling queues
> # Preemption doesn't preempt from sibling apps under the same queue that is 
> over its fairshare
> # ...
> Filing this umbrella JIRA to group all the issues together and think of a 
> comprehensive solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4752) FairScheduler should preempt for a ResourceRequest and all preempted containers should be on the same node

2017-01-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4752:
---
Fix Version/s: 2.9.0

> FairScheduler should preempt for a ResourceRequest and all preempted 
> containers should be on the same node
> --
>
> Key: YARN-4752
> URL: https://issues.apache.org/jira/browse/YARN-4752
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: yarn-4752-1.patch, yarn-4752.2.patch, yarn-4752.3.patch, 
> yarn-4752.4.patch, yarn-4752.4.patch, 
> YARN-4752.FairSchedulerPreemptionOverhaul.pdf, yarn-6076-branch-2.1.patch
>
>
> A number of issues have been reported with respect to preemption in 
> FairScheduler along the lines of:
> # FairScheduler preempts resources from nodes even if the resultant free 
> resources cannot fit the incoming request.
> # Preemption doesn't preempt from sibling queues
> # Preemption doesn't preempt from sibling apps under the same queue that is 
> over its fairshare
> # ...
> Filing this umbrella JIRA to group all the issues together and think of a 
> comprehensive solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6076) Backport YARN-4752 (FS preemption changes) to branch-2

2017-01-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838351#comment-15838351
 ] 

Karthik Kambatla commented on YARN-6076:


Thanks for the review, Daniel. Just committed this to branch-2. Marking this a 
duplicate of YARN-4752 and adding 2.9 as FixVersion there. 

> Backport YARN-4752 (FS preemption changes) to branch-2
> --
>
> Key: YARN-6076
> URL: https://issues.apache.org/jira/browse/YARN-6076
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-6076-branch-2.1.patch, yarn-6076-branch-2.1.patch
>
>
> YARN-4752 was merged to trunk a while ago, and has been stable. Creating this 
> JIRA to merge it to branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6013) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-01-25 Thread Steven Rand (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838345#comment-15838345
 ] 

Steven Rand commented on YARN-6013:
---

A bit more information: I've isolated the problem to the private class 
{{Connection}} within {{Client}}, specifically the method {{sendRpcRequest}}. 
More specifically, this block of code:

{code}
  synchronized (sendRpcRequestLock) {
    Future<?> senderFuture = sendParamsExecutor.submit(new Runnable() {
  @Override
  public void run() {
try {
  synchronized (ipcStreams.out) {
if (shouldCloseConnection.get()) {
  return;
}
if (LOG.isDebugEnabled()) {
  LOG.debug(getName() + " sending #" + call.id);
}
// RpcRequestHeader + RpcRequest
ipcStreams.sendRequest(buf.toByteArray());
ipcStreams.flush();
  }
} catch (IOException e) {
  // exception at this point would leave the connection in an
  // unrecoverable state (eg half a call left on the wire).
  // So, close the connection, killing any outstanding calls
  markClosed(e);
} finally {
  //the buffer is just an in-memory buffer, but it is still polite to
  // close early
  IOUtils.closeStream(buf);
}
  }
});
{code}

I know that the IOException is being caught because it's clear from the value 
of the {{closeException}} variable that {{markClosed()}} is being called. That 
variable is {{null}} for RPC requests that do not have problems, but is a 
{{java.io.EOFException}} for the particular RPC call that this JIRA is about.

So the problem is almost certainly somewhere in this code:

{code}
  synchronized (ipcStreams.out) {
if (shouldCloseConnection.get()) {
  return;
}
if (LOG.isDebugEnabled()) {
  LOG.debug(getName() + " sending #" + call.id);
}
// RpcRequestHeader + RpcRequest
ipcStreams.sendRequest(buf.toByteArray());
ipcStreams.flush();
  }
{code}
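
One way to narrow it down further (a hypothetical instrumentation sketch, not 
part of any patch) is to log the request size before the write and check 
whether the EOF correlates with large payloads when privacy wrapping is on:

{code}
byte[] payload = buf.toByteArray();
LOG.info(getName() + " sending #" + call.id + ", " + payload.length + " bytes");
ipcStreams.sendRequest(payload);  // RpcRequestHeader + RpcRequest
ipcStreams.flush();
{code}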

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: YARN-6013
> URL: https://issues.apache.org/jira/browse/YARN-6013
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, yarn
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 

[jira] [Commented] (YARN-6076) Backport YARN-4752 (FS preemption changes) to branch-2

2017-01-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838341#comment-15838341
 ] 

Karthik Kambatla commented on YARN-6076:


Checking this in. 

> Backport YARN-4752 (FS preemption changes) to branch-2
> --
>
> Key: YARN-6076
> URL: https://issues.apache.org/jira/browse/YARN-6076
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-6076-branch-2.1.patch, yarn-6076-branch-2.1.patch
>
>
> YARN-4752 was merged to trunk a while ago, and has been stable. Creating this 
> JIRA to merge it to branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6122) Add a service to fetch a given list of log files, to a single archive

2017-01-25 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-6122:
---

 Summary: Add a service to fetch a given list of log files, to a 
single archive
 Key: YARN-6122
 URL: https://issues.apache.org/jira/browse/YARN-6122
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-01-25 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838327#comment-15838327
 ] 

Xuan Gong commented on YARN-6100:
-

Uploaded new patches to fix the checkstyle and findbugs issues for both trunk 
and branch-2.

> improve YARN webservice to output aggregated container logs
> ---
>
> Key: YARN-6100
> URL: https://issues.apache.org/jira/browse/YARN-6100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6100.1.patch, YARN-6100.2.patch, 
> YARN-6100.branch-2.v1.patch, YARN-6100.branch-2.v3.patch, 
> YARN-6100.trunk.2.patch, YARN-6100.trunk.v1.patch, YARN-6100.trunk.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-01-25 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6100:

Attachment: YARN-6100.branch-2.v3.patch

> improve YARN webservice to output aggregated container logs
> ---
>
> Key: YARN-6100
> URL: https://issues.apache.org/jira/browse/YARN-6100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6100.1.patch, YARN-6100.2.patch, 
> YARN-6100.branch-2.v1.patch, YARN-6100.branch-2.v3.patch, 
> YARN-6100.trunk.2.patch, YARN-6100.trunk.v1.patch, YARN-6100.trunk.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-01-25 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6100:

Attachment: YARN-6100.trunk.v3.patch

> improve YARN webservice to output aggregated container logs
> ---
>
> Key: YARN-6100
> URL: https://issues.apache.org/jira/browse/YARN-6100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6100.1.patch, YARN-6100.2.patch, 
> YARN-6100.branch-2.v1.patch, YARN-6100.branch-2.v3.patch, 
> YARN-6100.trunk.2.patch, YARN-6100.trunk.v1.patch, YARN-6100.trunk.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3588) Timeline entity uniqueness

2017-01-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838255#comment-15838255
 ] 

Rohith Sharma K S commented on YARN-3588:
-

As we discussed in the weekly call, a few folks asked what makes an entity 
unique. I believe this is left to the framework publishers, since they are the 
ones publishing the entities. For example, DAG entities and MR_JOB_ID entities 
will never be unique across applications/users/flows/flow-runs/clusters. I 
think we should build an index table for entity types; that would help a lot 
when reading entities for a given EntityType. We need not index all entities; 
the publisher could indicate which entities should be indexed. 

I was looking at a few existing long-running applications that generate tons 
of entities to be stored in ATSv2. Not all of those entities are important, 
but they are still kept for future reference. Only a set of entity types, the 
ones that are regularly queried, is important. So these important entities can 
be indexed on demand.
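
To make the idea concrete, here is a minimal sketch of the kind of per-type 
index being discussed, assuming a store (like the ATSv2 HBase backend) where 
rows are sorted by key, so that all indexed entities of one type form a 
contiguous scan range. The key layout, separator, and class are illustrative 
placeholders, not the actual ATSv2 schema.

{code}
// Illustrative only: a hypothetical index-table row key that leads with the
// flow context and entity type, so one prefix scan returns every indexed
// entity of that type. Not the actual ATSv2 schema.
public final class EntityTypeIndexKey {
  private static final char SEP = '!';

  // cluster!user!flow!flowRunId!appId!entityType!entityId
  static String rowKey(String cluster, String user, String flow,
      long flowRunId, String appId, String entityType, String entityId) {
    return cluster + SEP + user + SEP + flow + SEP + flowRunId + SEP
        + appId + SEP + entityType + SEP + entityId;
  }

  // Prefix for scanning all indexed entities of one type under an app.
  static String typePrefix(String cluster, String user, String flow,
      long flowRunId, String appId, String entityType) {
    return cluster + SEP + user + SEP + flow + SEP + flowRunId + SEP
        + appId + SEP + entityType + SEP;
  }
}
{code}

Since only publisher-marked entity types would get rows in such a table, the 
index stays small even when an application emits tons of entities.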

> Timeline entity uniqueness
> --
>
> Key: YARN-3588
> URL: https://issues.apache.org/jira/browse/YARN-3588
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> In YARN-3051, we have had some discussion about how to uniquely identify an 
> entity. Sangjin and some other folks propose to uniquely identify an entity 
> by (entity type, entity id) only within the scope of a single app. This is 
> different from entity uniqueness in ATSv1, where (entity type, entity id) 
> can globally identify an entity. This is going to affect the way a single 
> entity is fetched, and raises a compatibility issue. Let's continue our 
> discussion here to unblock YARN-3051.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-01-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838124#comment-15838124
 ] 

Sunil G commented on YARN-5889:
---

Yes, you are correct. I am revisiting the logic that computes the total 
active-users usage. Will update a patch in a short while.

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.0001.patch, 
> YARN-5889.0001.suggested.patchnotes, YARN-5889.0002.patch, 
> YARN-5889.0003.patch, YARN-5889.0004.patch, YARN-5889.0005.patch, 
> YARN-5889.v0.patch, YARN-5889.v1.patch, YARN-5889.v2.patch
>
>
> Currently the user-limit is computed during every heartbeat allocation cycle 
> while holding a write lock. To improve performance, this ticket focuses on 
> moving the user-limit calculation out of the heartbeat allocation flow.
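
As an illustrative aside, here is a minimal sketch of what moving the 
calculation out of the allocation flow can look like, assuming 
(hypothetically) that the limit only changes when queue usage or the 
active-user set changes. The class and method names are placeholders, not the 
actual CapacityScheduler code.

{code}
import java.util.concurrent.atomic.AtomicLong;

public class CachedUserLimit {
  private final AtomicLong version = new AtomicLong(); // bumped on any usage change
  private volatile long computedVersion = -1;
  private volatile long cachedLimitMb;

  // Allocation path: cheap volatile reads unless something has changed.
  long userLimitMb() {
    long v = version.get();
    if (v != computedVersion) {
      synchronized (this) {
        if (v != computedVersion) {         // recompute at most once per change
          cachedLimitMb = recomputeLimit(); // expensive part, off the common path
          computedVersion = v;
        }
      }
    }
    return cachedLimitMb;
  }

  // Called on container allocate/release or when a user becomes (in)active.
  void usageChanged() {
    version.incrementAndGet();
  }

  private long recomputeLimit() {
    return 0L; // placeholder for the real user-limit math
  }
}
{code}

The point of the sketch: heartbeats that do not change usage pay only a 
volatile read, instead of redoing the limit math under a write lock.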



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837763#comment-15837763
 ] 

Hadoop QA commented on YARN-574:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 393 unchanged - 3 fixed = 400 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 18s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestContainerLocalizer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849264/YARN-574.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux e78dd57b4d4d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a56520 |
| Default Java | 

[jira] [Comment Edited] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-25 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837584#comment-15837584
 ] 

Ajith S edited comment on YARN-574 at 1/25/17 11:21 AM:


[~jlowe] thanks for the clarification. Attaching patch with the suggested 
approach of controlling multiple heartbeats using an atomic counter. Please 
review.


was (Author: ajithshetty):
Attaching patch with the suggested approach of controlling multiple heartbeats 
using an atomic counter. Please review.
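
For readers following along, a minimal sketch of the counter idea under 
discussion, assuming (hypothetically) a localizer that accepts work from a 
heartbeat only while download slots are free; the class and method names are 
placeholders, not the actual ContainerLocalizer code.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedDownloader {
  private final int maxParallel;
  private final AtomicInteger inFlight = new AtomicInteger();
  private final ExecutorService pool;

  BoundedDownloader(int maxParallel) {
    this.maxParallel = maxParallel;
    this.pool = Executors.newFixedThreadPool(maxParallel);
  }

  // Heartbeat handler: accept a new resource only if a slot is free.
  boolean offer(Runnable download) {
    if (inFlight.incrementAndGet() > maxParallel) {
      inFlight.decrementAndGet();   // over the cap: defer to a later heartbeat
      return false;
    }
    pool.execute(() -> {
      try {
        download.run();
      } finally {
        inFlight.decrementAndGet(); // free the slot for the next heartbeat
      }
    });
    return true;
  }
}
{code}

The counter gives a cheap, lock-free admission check: each heartbeat can keep 
offering resources, and anything beyond the cap is simply retried later.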

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, but this is not used and only one resource is sent for 
> download at a time.
> I think we can increase/assure parallelism (even for a single container 
> requesting resources) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total parallelism before
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers [private and application resources]
> Total parallelism after
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers * max downloads per container [private and application resources]
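
To put illustrative numbers on the description's formula: with 4 
PublicLocalizer threads, 10 containers, and (hypothetically) 4 downloads per 
ContainerLocalizer, total parallelism grows from 4 + 10 = 14 before the change 
to 4 + 10 * 4 = 44 after it.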



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-25 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated YARN-574:
-
Attachment: YARN-574.05.patch

Attaching patch with the suggested approach of controlling multiple heartbeats 
using an atomic counter. Please review.

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.05.patch, 
> YARN-574.1.patch, YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, but this is not used and only one resource is sent for 
> download at a time.
> I think we can increase/assure parallelism (even for a single container 
> requesting resources) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total parallelism before
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers [private and application resources]
> Total parallelism after
> = number of threads allotted for PublicLocalizer [public resources] + number 
> of containers * max downloads per container [private and application resources]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5626) Support long running apps handling multiple flows

2017-01-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837570#comment-15837570
 ] 

Varun Saxena commented on YARN-5626:


This is actually about passing different flow context information in the 
TimelineEntity itself (despite these entities being published from the same 
app). 
But we then need to consider whether to distribute the workload and let more 
than one collector handle an app.
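
A minimal sketch of what per-entity flow context could look like, assuming 
the flow fields ride in the entity's info map; the "FLOW_NAME" and 
"FLOW_RUN_ID" info keys are hypothetical placeholders, not established ATSv2 
constants.

{code}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

public class FlowTaggedEntities {
  // Builds an entity that carries its own flow context, overriding the
  // app-level flow the collector would otherwise assume.
  static TimelineEntity newEntity(String type, String id,
      String flowName, long flowRunId) {
    TimelineEntity entity = new TimelineEntity();
    entity.setType(type);
    entity.setId(id);
    entity.addInfo("FLOW_NAME", flowName);    // hypothetical key
    entity.addInfo("FLOW_RUN_ID", flowRunId); // hypothetical key
    return entity;
  }
}
{code}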

> Support long running apps handling multiple flows
> -
>
> Key: YARN-5626
> URL: https://issues.apache.org/jira/browse/YARN-5626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>
> Many applications which can potentially use ATS have one or a few 
> long-running AMs which handle multiple tasks or serve multiple queries. As 
> ATS scopes everything within an app, it's not possible for us to 
> differentiate between flows.
> Moreover, all entities will be written to one or very few node collectors, 
> as writers are distributed per app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2017-01-25 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3884:
---
Attachment: YARN-3884.0008.patch

> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> --
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1. Submit apps to Queue 1 with 512 MB, 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case: 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 
> used= cluster=
> 2015-07-02 20:45:32,143 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO 
> 

[jira] [Commented] (YARN-6100) improve YARN webservice to output aggregated container logs

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837349#comment-15837349
 ] 

Hadoop QA commented on YARN-6100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 10 new + 39 unchanged - 1 fixed = 49 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 58s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
5s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
|  |  Redundant nullcheck of stream, which is known to be non-null in 
org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.sendStreamOutputResponse(ApplicationId,
 String, String, String, String, String, long, boolean)  Redundant null check 
at AHSWebServices.java:is known to be non-null in