[jira] [Commented] (YARN-6326) Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks app in SLS

2017-03-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931095#comment-15931095
 ] 

Yufei Gu commented on YARN-6326:


Thanks [~rkanter] for the review. Uploaded patch v5 for your comments.
# Fixed.
# After our offline discussion, I don't think it is a good idea to add the 
method to both interfaces. I am not a fan of the current design of 
{{YarnScheduler}} and {{ResourceScheduler}}, but if there is any issue in them, 
we'd better fix it in another JIRA. Compared to an incompatible change, a 
downcast is not so terrible, so I downcast {{ResourceScheduler}} to 
{{AbstractYarnScheduler}} to get the method I need.
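
For illustration, a minimal sketch of that downcast (toy types; the 
{{getSchedulerApplication}} name follows the method discussed later in this 
thread, but the actual v5 patch may differ):
{code:java}
interface ResourceScheduler { }

abstract class AbstractYarnScheduler implements ResourceScheduler {
  // App-level lookup the SLS needs but neither scheduler interface exposes.
  abstract Object getSchedulerApplication(String applicationId);
}

class SlsMetricsCaller {
  Object fetchApp(ResourceScheduler scheduler, String applicationId) {
    // Downcast here instead of widening both scheduler interfaces.
    return ((AbstractYarnScheduler) scheduler)
        .getSchedulerApplication(applicationId);
  }
}
{code}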

> Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks 
> app in SLS
> --
>
> Key: YARN-6326
> URL: https://issues.apache.org/jira/browse/YARN-6326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6326.001.patch, YARN-6326.002.patch, 
> YARN-6326.003.patch, YARN-6326.004.patch, YARN-6326.005.patch
>
>
> This causes an NPE. Besides the NPE, the metrics won't reflect the 
> different attempts. We should pass ApplicationId instead of AppAttemptId. The 
> NPE caused by the issue:
> {code}
> 2017-03-13 20:43:39,153 INFO appmaster.AMSimulator: Submit a new application 
> application_1489463017173_0001
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getApplicationAttempt(AbstractYarnScheduler.java:327)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getSchedulerApp(FairScheduler.java:1028)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.FairSchedulerMetrics.trackApp(FairSchedulerMetrics.java:68)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.addTrackedApp(ResourceSchedulerWrapper.java:799)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.trackApp(AMSimulator.java:338)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.firstStep(AMSimulator.java:156)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "pool-6-thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:105)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
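
A minimal sketch of the fix direction described above (hypothetical names, not 
the actual patch): key the SLS tracking on the stable ApplicationId so the 
lookup cannot race with attempt registration.
{code:java}
import java.util.HashSet;
import java.util.Set;

class AppMetricsTracker {
  private final Set<String> tracked = new HashSet<>();

  // Before (problematic): trackApp took an attempt id and dereferenced a
  // scheduler lookup that stays null until the attempt registers, hence the
  // NPE in the trace above. Keying on the application id avoids that race
  // and keeps the metrics stable across attempts.
  void trackApp(String applicationId) {
    tracked.add(applicationId);
  }
}
{code}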



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6326) Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks app in SLS

2017-03-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6326:
---
Attachment: YARN-6326.005.patch

> Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks 
> app in SLS
> --
>
> Key: YARN-6326
> URL: https://issues.apache.org/jira/browse/YARN-6326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6326.001.patch, YARN-6326.002.patch, 
> YARN-6326.003.patch, YARN-6326.004.patch, YARN-6326.005.patch
>
>
> This causes an NPE. Besides the NPE, the metrics won't reflect the 
> different attempts. We should pass ApplicationId instead of AppAttemptId. The 
> NPE caused by the issue:
> {code}
> 2017-03-13 20:43:39,153 INFO appmaster.AMSimulator: Submit a new application 
> application_1489463017173_0001
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getApplicationAttempt(AbstractYarnScheduler.java:327)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getSchedulerApp(FairScheduler.java:1028)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.FairSchedulerMetrics.trackApp(FairSchedulerMetrics.java:68)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.addTrackedApp(ResourceSchedulerWrapper.java:799)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.trackApp(AMSimulator.java:338)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.firstStep(AMSimulator.java:156)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "pool-6-thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:105)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931052#comment-15931052
 ] 

Hadoop QA commented on YARN-5331:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
15s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5331 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859401/YARN-5331.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c55c38b1fa7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ffa160d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15325/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15325/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend RLESparseResourceAllocation with period for supporting recurring 
> reservations in YARN ReservationSystem
> --
>
> Key: YARN-5331
> URL: https://issues.apache.org/jira/browse/YARN-5331
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee:

[jira] [Updated] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2017-03-17 Thread Sangeetha Abdu Jyothi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Abdu Jyothi updated YARN-5331:

Attachment: YARN-5331.005.patch

> Extend RLESparseResourceAllocation with period for supporting recurring 
> reservations in YARN ReservationSystem
> --
>
> Key: YARN-5331
> URL: https://issues.apache.org/jira/browse/YARN-5331
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
>  Labels: oct16-medium
> Attachments: YARN-5331.001.patch, YARN-5331.002.patch, 
> YARN-5331.003.patch, YARN-5331.004.patch, YARN-5331.005.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to add a 
> PeriodicRLESparseResourceAllocation. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931027#comment-15931027
 ] 

Hadoop QA commented on YARN-5331:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 43 unchanged - 0 fixed = 45 total (was 43) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 12s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5331 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859395/YARN-5331.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a803fcf263f4 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ffa160d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15324/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15324/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15324/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build

[jira] [Comment Edited] (YARN-5179) Issue of CPU usage of containers

2017-03-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931007#comment-15931007
 ] 

Miklos Szegedi edited comment on YARN-5179 at 3/18/17 2:32 AM:
---

[~maniraj...@gmail.com], could you help me, what am I missing?

I ran a test and the numbers look right to me. Here is the code:
{code}
  float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
  float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore /
  resourceCalculatorPlugin.getNumProcessors();

  // Multiply by 1000 to avoid losing data when converting to int
  int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
  * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);
{code}
And here are the values. I have 4 processors, and we use about 3 processors 
with our job. This is 75% of total CPU resources, so the first three numbers 
look right. I chose 8192 vcores in yarn.nodemanager.resource.cpu-vcores to have 
a very different number than the 4 cores. Then we prorate 75% to 8192 vcores, 
so we get about 6141 vcores, which is 6140747 millivcores. That 
sounds right. What would you expect in this case?
{code}
resourceCalculatorPlugin.getNumProcessors() = 4
cpuUsagePercentPerCore = 299.84116
cpuUsageTotalCoresPercentage = 74.96029
maxVCoresAllottedForContainers = 8192
nodeCpuPercentageForYARN = 100
milliVcoresUsed = 6140747
{code}
Approach #1 above would give us 3000, which is millirealcores. Why would we 
multiply and then divide by resourceCalculatorPlugin.getNumVcoresUsed()?
It is an interesting question though what metric is the most useful and simple 
from the user point of view. I think I like the 75% the most.
I think we have a documentation bug in yarn-default.xml, where we mention CPU 
cores instead of vcores: {{yarn.nodemanager.resource.cpu-vcores}} (default: 8) 
is described as "Number of CPU cores that can be allocated for containers."
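
Spelling out the proration with the logged values (matches the log up to float 
rounding):
{code}
299.84116 % / 4 processors             = 74.96029 % of the node
74.96029 * 1000 * 8192 vcores / 100 %  = ~6,140,747 millivcores
{code}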


was (Author: miklos.szeg...@cloudera.com):
[~maniraj...@gmail.com], could you help me, what am I missing?

I ran a test and the numbers look right to me. Here is the code:
{code}
  float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
  float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore /
  resourceCalculatorPlugin.getNumProcessors();

  // Multiply by 1000 to avoid losing data when converting to int
  int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
  * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);
{code}
And here are the values. I have 4 processors, and we use about 3 processors 
with our job. This is 75% of total CPU resources, so the first three numbers 
look right. I chose 8192 vcores in yarn.nodemanager.resource.cpu-vcores to have 
a very different number than the 4 cores. Then we prorate 75% to 8192 vcores, 
so we get about 6141 number of vcores, which is 6140747 millivcores. That 
sounds right. What would you expect in this case?
{code}
resourceCalculatorPlugin.getNumProcessors() = 4
cpuUsagePercentPerCore = 299.84116
cpuUsageTotalCoresPercentage = 74.96029
maxVCoresAllottedForContainers = 8192
nodeCpuPercentageForYARN = 100
milliVcoresUsed = 6140747
{code}
Approach #1 above would give us 3000, which is millirealcores. Why would we 
multiply and then divide by resourceCalculatorPlugin.getNumVcoresUsed()?
It is an interesting question though what metric is the most useful and simple 
from the user point of view. I think I like the 75% the most.

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
>int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN); 
> This formula will not get the right vcore-based CPU usage if vcores != 
> physical cores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931013#comment-15931013
 ] 

Hadoop QA commented on YARN-6359:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859390/YARN-6359.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c35ddd6a89f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1a9980 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15323/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15323/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15323/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
> 
>
> Key: YARN-6359
> URL: h

[jira] [Comment Edited] (YARN-5179) Issue of CPU usage of containers

2017-03-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931007#comment-15931007
 ] 

Miklos Szegedi edited comment on YARN-5179 at 3/18/17 2:16 AM:
---

[~maniraj...@gmail.com], could you help me, what am I missing?

I ran a test and the numbers look right to me. Here is the code:
{code}
  float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
  float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore /
  resourceCalculatorPlugin.getNumProcessors();

  // Multiply by 1000 to avoid losing data when converting to int
  int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
  * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);
{code}
And here are the values. I have 4 processors, and we use about 3 processors 
with our job. This is 75% of total CPU resources, so the first three numbers 
look right. I chose 8192 vcores in yarn.nodemanager.resource.cpu-vcores to have 
a very different number than the 4 cores. Then we prorate 75% to 8192 vcores, 
so we get about 6141 vcores, which is 6140747 millivcores. That 
sounds right. What would you expect in this case?
{code}
resourceCalculatorPlugin.getNumProcessors() = 4
cpuUsagePercentPerCore = 299.84116
cpuUsageTotalCoresPercentage = 74.96029
maxVCoresAllottedForContainers = 8192
nodeCpuPercentageForYARN = 100
milliVcoresUsed = 6140747
{code}
Approach #1 above would give us 3000, which is millirealcores. Why would we 
multiply and then divide by resourceCalculatorPlugin.getNumVcoresUsed()?
It is an interesting question though what metric is the most useful and simple 
from the user point of view. I think I like the 75% the most.


was (Author: miklos.szeg...@cloudera.com):
[~maniraj...@gmail.com], could you help me, what am I missing?

I ran a test and the numbers look right to me. Here is the code:
{code}
  float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
  float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore /
  resourceCalculatorPlugin.getNumProcessors();

  // Multiply by 1000 to avoid losing data when converting to int
  int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
  * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);
{code}
And here are the values. I have 4 processors, and we use about 3 processors 
with our job. This is 75% of total CPU resources, so the first three numbers 
look right. I chose 8192 vcores in yarn.nodemanager.resource.cpu-vcores to have 
a very different number than the 4 cores. Then we prorate 75% to 8192 vcores, 
so we get about 6141 number of vcores, which is 6140747 millivcores. That 
sounds right. What would you expect in this case?
{code}
resourceCalculatorPlugin.getNumProcessors() = 4
cpuUsagePercentPerCore = 299.84116
cpuUsageTotalCoresPercentage = 74.96029
maxVCoresAllottedForContainers = 8192
nodeCpuPercentageForYARN = 100
milliVcoresUsed = 6140747
{code}
Approach #1 above would give us 3000, which is millirealcores. Why would we 
multiply and then subtract with resourceCalculatorPlugin.getNumVcoresUsed()?
It is an interesting question though what metric is the most useful and simple 
from the user point of view. I think I like the 75% the most.

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
>int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN); 
> This formula will not get the right vcore-based CPU usage if vcores != 
> physical cores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-03-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931007#comment-15931007
 ] 

Miklos Szegedi commented on YARN-5179:
--

[~maniraj...@gmail.com], could you help me, what am I missing?

I ran a test and the numbers look right to me. Here is the code:
{code}
  float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
  float cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore /
  resourceCalculatorPlugin.getNumProcessors();

  // Multiply by 1000 to avoid losing data when converting to int
  int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
  * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN);
{code}
And here are the values. I have 4 processors, and we use about 3 processors 
with our job. This is 75% of total CPU resources, so the first three numbers 
look right. I chose 8192 vcores in yarn.nodemanager.resource.cpu-vcores to have 
a very different number than the 4 cores. Then we prorate 75% to 8192 vcores, 
so we get about 6141 vcores, which is 6140747 millivcores. That 
sounds right. What would you expect in this case?
{code}
resourceCalculatorPlugin.getNumProcessors() = 4
cpuUsagePercentPerCore = 299.84116
cpuUsageTotalCoresPercentage = 74.96029
maxVCoresAllottedForContainers = 8192
nodeCpuPercentageForYARN = 100
milliVcoresUsed = 6140747
{code}
Approach #1 above would give us 3000, which is millirealcores. Why would we 
multiply and then subtract with resourceCalculatorPlugin.getNumVcoresUsed()?
It is an interesting question though what metric is the most useful and simple 
from the user point of view. I think I like the 75% the most.

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int 
>int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000 
>   * maxVCoresAllottedForContainers /nodeCpuPercentageForYARN); 
> This formula will not get the right vcore-based CPU usage if vcores != 
> physical cores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2017-03-17 Thread Sangeetha Abdu Jyothi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Abdu Jyothi updated YARN-5331:

Attachment: YARN-5331.004.patch

> Extend RLESparseResourceAllocation with period for supporting recurring 
> reservations in YARN ReservationSystem
> --
>
> Key: YARN-5331
> URL: https://issues.apache.org/jira/browse/YARN-5331
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
>  Labels: oct16-medium
> Attachments: YARN-5331.001.patch, YARN-5331.002.patch, 
> YARN-5331.003.patch, YARN-5331.004.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to add a 
> PeriodicRLESparseResourceAllocation. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6353) Clean up OrderingPolicy javadoc

2017-03-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930976#comment-15930976
 ] 

Daniel Templeton commented on YARN-6353:


Unit test failures are unrelated, and checkstyle issues are bogus.

> Clean up OrderingPolicy javadoc
> ---
>
> Key: YARN-6353
> URL: https://issues.apache.org/jira/browse/YARN-6353
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
>  Labels: javadoc
> Attachments: YARN-6353.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6359:

Attachment: YARN-6359.002.patch

Thanks for the review.

Oops, I totally missed that the test has a 60-second timeout.  And I had 
thought there was a {{waitFor}} somewhere, but I couldn't find it for some 
reason, so I went and did this.  I didn't think we needed to check the timeout 
after the loop because we check the metric, which would have failed if it was 
wrong anyway.

In any case, the 002 patch addresses the timeout and the {{waitFor}}.  
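
For reference, the usual shape of such a wait in Hadoop tests is 
{{GenericTestUtils#waitFor}}; a sketch (the metric accessor here is 
illustrative, and the actual 002 patch may differ):
{code:java}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Poll every 100 ms until the condition holds, giving up after 10 s. A
// timeout throws TimeoutException by itself, so no extra check is needed
// after the wait.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return metrics.getAppsKilled() == 1;
  }
}, 100, 10000);
{code}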

> TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
> 
>
> Key: YARN-6359
> URL: https://issues.apache.org/jira/browse/YARN-6359
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6359.001.patch, YARN-6359.002.patch
>
>
> We've seen (very rarely) a test failure in 
> {{TestRM#testApplicationKillAtAcceptedState}}
> {noformat}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testApplicationKillAtAcceptedState(TestRM.java:645)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6363) Extending SLS: Synthetic Load Generator

2017-03-17 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-6363:
--

 Summary: Extending SLS: Synthetic Load Generator
 Key: YARN-6363
 URL: https://issues.apache.org/jira/browse/YARN-6363
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Carlo Curino
Assignee: Carlo Curino


This JIRA tracks the introduction of a synthetic load generator in the SLS. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930863#comment-15930863
 ] 

Hadoop QA commented on YARN-6357:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
10s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859365/YARN-6357.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32d1abca7430 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4a8e304 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15322/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/15322/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15322/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.

[jira] [Comment Edited] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930832#comment-15930832
 ] 

Haibo Chen edited comment on YARN-6357 at 3/17/17 10:26 PM:


Uploaded an initial patch for review. I did not call putEntitiesAsync() from 
putEntities() because that would require putEntitiesAsync() to return a 
response, which I think is weird for an async method. Plus, the debug logging 
for the sync and async putEntities calls would also be awkward.
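
For context, a toy sketch of the sync-calls-async shape under discussion (not 
the actual TimelineCollector code); the friction noted above is that the async 
variant would then have to return a response for the sync path to reuse:
{code:java}
// Toy types for illustration only.
interface Writer {
  String write(String entities);   // buffered write that returns a response
  void flush();
}

class Collector {
  private final Writer writer;
  Collector(Writer writer) { this.writer = writer; }

  // Async: hand off to the buffered writer and return immediately. Note it
  // must return the response if the sync path below is to reuse it.
  String putEntitiesAsync(String entities) {
    return writer.write(entities);
  }

  // Sync: same write path, then flush so the caller sees the data persisted.
  String putEntities(String entities) {
    String response = putEntitiesAsync(entities);
    writer.flush();
    return response;
  }
}
{code}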


was (Author: haibochen):
Upload an initial patch for review

> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6357.01.patch
>
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities and TimelineV2ClientImpl#putEntitiesAsync 
> each correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the REST call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call putEntitiesAsync and then call writer.flush().
> The flush on close and the periodic flush are more about avoiding data loss: 
> the flush on close covers the case where sync is never called, and the 
> periodic flush guards against data from slow writers sitting in buffers for 
> a long time, which risks loss if the collector crashes with data in its 
> buffers. Size-based flushing is a separate concern: keeping the memory 
> footprint bounded.
> The spooling behavior is also somewhat separate.
> We have two separate methods on our API putEntities and putEntitiesAsync and 
> they should have different behavior beyond waiting for the request to be sent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6357:
-
Attachment: YARN-6357.01.patch

Uploaded an initial patch for review.

> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6357.01.patch
>
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities and TimelineV2ClientImpl#putEntitiesAsync 
> each correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the REST call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call putEntitiesAsync and then call writer.flush().
> The flush on close and the periodic flush are more about avoiding data loss: 
> the flush on close covers the case where sync is never called, and the 
> periodic flush guards against data from slow writers sitting in buffers for 
> a long time, which risks loss if the collector crashes with data in its 
> buffers. Size-based flushing is a separate concern: keeping the memory 
> footprint bounded.
> The spooling behavior is also somewhat separate.
> We have two separate methods on our API putEntities and putEntitiesAsync and 
> they should have different behavior beyond waiting for the request to be sent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4599) Set OOM control for memory cgroups

2017-03-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930826#comment-15930826
 ] 

Miklos Szegedi commented on YARN-4599:
--

Thank you, [~sandflee], for the reply. The patch looks good to me in general, 
if others also like it.
Indeed, the LCE is a separate process; maybe we could just poll the 
memory.oom_control file? I am not against JNI, but this would be the first 
time it is used in the node manager.
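
A rough sketch of that polling idea against cgroups v1 (paths are illustrative; 
the real NM integration would differ):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

class OomControlPoller {
  // memory.oom_control reports "under_oom 1" while the cgroup is paused
  // under OOM, which is the moment the NM could step in.
  static boolean isUnderOom(String cgroupDir) throws IOException {
    for (String line :
        Files.readAllLines(Paths.get(cgroupDir, "memory.oom_control"))) {
      if (line.trim().equals("under_oom 1")) {
        return true;
      }
    }
    return false;
  }
}
{code}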

> Set OOM control for memory cgroups
> --
>
> Key: YARN-4599
> URL: https://issues.apache.org/jira/browse/YARN-4599
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: yarn-4599-not-so-useful.patch, YARN-4599.sandflee.patch
>
>
> YARN-1856 adds memory cgroups enforcing support. We should also explicitly 
> set OOM control so that containers are not killed as soon as they go over 
> their usage. Today, one could set the swappiness to control this, but 
> clusters with swap turned off exist.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6342) Issues in async API of TimelineClient

2017-03-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-6342:


Assignee: Haibo Chen

> Issues in async API of TimelineClient
> -
>
> Key: YARN-6342
> URL: https://issues.apache.org/jira/browse/YARN-6342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
>
> Found these with [~rohithsharma] while browsing the code
> - In stop: it calls shutdownNow, which doesn't wait for pending tasks; should 
> it use shutdown instead? (See the sketch after this description.)
> {code}
> public void stop() {
>   LOG.info("Stopping TimelineClient.");
>   executor.shutdownNow();
>   try {
> executor.awaitTermination(DRAIN_TIME_PERIOD, TimeUnit.MILLISECONDS);
>   } catch (InterruptedException e) {
> {code}
> - In TimelineClientImpl#createRunnable:
> If any exception happens when publishing one entity 
> (publishWithoutBlockingOnQueue), the thread exits. I think it should make a 
> best effort to continue publishing the timeline entities; one failure should 
> not prevent all follow-up entities from being published.
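
A sketch of the graceful-shutdown variant raised in the first bullet 
(illustrative, using plain java.util.concurrent):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class DrainingStop {
  // shutdown() stops accepting new tasks but lets already queued publishes
  // drain within the grace period; shutdownNow() remains a last resort for
  // anything still running after the timeout.
  static void stop(ExecutorService executor, long drainMillis)
      throws InterruptedException {
    executor.shutdown();
    if (!executor.awaitTermination(drainMillis, TimeUnit.MILLISECONDS)) {
      executor.shutdownNow();
    }
  }
}
{code}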



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6326) Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks app in SLS

2017-03-17 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930777#comment-15930777
 ] 

Robert Kanter commented on YARN-6326:
-

Two things:
# I like the way you made the metrics enums, but instead of
{code:java}
for (Metric metric: Metric.values()) {
   appTrackedMetrics.add(metric.value + ".memory");
   appTrackedMetrics.add(metric.value + ".vcores");
}

for (Metric metric: Metric.values()) {
   queueTrackedMetrics.add(metric.value + ".memory");
   queueTrackedMetrics.add(metric.value + ".vcores");
}
{code}
we can just do
{code:java}
for (Metric metric : Metric.values()) {
   appTrackedMetrics.add(metric.value + ".memory");
   appTrackedMetrics.add(metric.value + ".vcores");
   queueTrackedMetrics.add(metric.value + ".memory");
   queueTrackedMetrics.add(metric.value + ".vcores");
}
{code}
# I'm not sure if we should add {{getSchedulerApplication}} to 
{{YarnScheduler}}.  Putting it in {{ResourceScheduler}} looks like it might be 
enough.  In any case, instead of {{\@LimitedPrivate("yarn")}}, you can just do 
{{\@Private}}.

> Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks 
> app in SLS
> --
>
> Key: YARN-6326
> URL: https://issues.apache.org/jira/browse/YARN-6326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6326.001.patch, YARN-6326.002.patch, 
> YARN-6326.003.patch, YARN-6326.004.patch
>
>
> This causes an NPE. Besides the NPE, the metrics won't reflect the 
> different attempts. We should pass ApplicationId instead of AppAttemptId. The 
> NPE caused by the issue:
> {code}
> 2017-03-13 20:43:39,153 INFO appmaster.AMSimulator: Submit a new application 
> application_1489463017173_0001
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getApplicationAttempt(AbstractYarnScheduler.java:327)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getSchedulerApp(FairScheduler.java:1028)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.FairSchedulerMetrics.trackApp(FairSchedulerMetrics.java:68)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.addTrackedApp(ResourceSchedulerWrapper.java:799)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.trackApp(AMSimulator.java:338)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.firstStep(AMSimulator.java:156)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "pool-6-thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:105)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6050) AMs can't be scheduled on racks or nodes

2017-03-17 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930752#comment-15930752
 ] 

Karthik Kambatla commented on YARN-6050:


Thanks for the tiny patch, [~rkanter]. 

I haven't looked too closely. The patch looks mostly good, apart from the 
following minor comments:
# Instead of making changes to NodeLabels, can we just fetch the set of NodeIds 
that match the label, as you mentioned in your earlier comment? We could use a 
(new) helper method in RMServerUtils to prune out those with a wildcard port 
(sketched below). 
# The changes to YarnScheduler seem unnecessary. If needed, it should be okay 
to add getNodeTracker to ResourceScheduler instead. 
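
A sketch of the pruning helper suggested in the first comment (hypothetical, 
not actual RMServerUtils code; assumes the label manager's wildcard port is 0):
{code:java}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.yarn.api.records.NodeId;

class PruneSketch {
  // Keep only NodeIds that name a concrete NM; host-only entries carry the
  // wildcard port and are dropped.
  static Set<NodeId> pruneWildcardPorts(Set<NodeId> nodeIds) {
    Set<NodeId> result = new HashSet<>();
    for (NodeId id : nodeIds) {
      if (id.getPort() != 0) {
        result.add(id);
      }
    }
    return result;
  }
}
{code}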

If you want me to take a closer look and nit-pick as well, mind posting a PR 
for review convenience? 

> AMs can't be scheduled on racks or nodes
> 
>
> Key: YARN-6050
> URL: https://issues.apache.org/jira/browse/YARN-6050
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6050.001.patch, YARN-6050.002.patch, 
> YARN-6050.003.patch, YARN-6050.004.patch, YARN-6050.005.patch, 
> YARN-6050.006.patch, YARN-6050.007.patch, YARN-6050.008.patch, 
> YARN-6050.009.patch, YARN-6050.010.patch, YARN-6050.011.patch
>
>
> YARN itself supports rack/node-aware scheduling for AMs; however, there are 
> currently two problems:
> # To specify hard or soft rack/node requests, you have to specify more than 
> one {{ResourceRequest}}.  For example, if you want to schedule an AM only on 
> "rackA", you have to create two {{ResourceRequest}}, like this:
> {code}
> ResourceRequest.newInstance(PRIORITY, ANY, CAPABILITY, NUM_CONTAINERS, false);
> ResourceRequest.newInstance(PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, 
> true);
> {code}
> The problem is that the Yarn API doesn't actually allow you to specify more 
> than one {{ResourceRequest}} in the {{ApplicationSubmissionContext}}.  The 
> current behavior is to build one either from {{getResource}} or directly from 
> {{getAMContainerResourceRequest}}, depending on whether 
> {{getAMContainerResourceRequest}} is null.  We'll need to add a third 
> method, say {{getAMContainerResourceRequests}}, which takes a list of 
> {{ResourceRequest}}s so that clients can specify multiple resource 
> requests (see the sketch below).
> # There are some places where things are hardcoded to overwrite what the 
> client specifies.  These are pretty straightforward to fix.
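For illustration, a minimal sketch of how a client might use the proposed 
list-based API; {{setAMContainerResourceRequests}} below is the *proposed* 
counterpart setter, not an existing method, and the constants are placeholders 
as in the snippet above:
{code}
// Sketch only, assuming the proposed list-based API from this JIRA.
List<ResourceRequest> amRequests = new ArrayList<>();
// Relax locality at the ANY level so the rack request below becomes a hard
// constraint.
amRequests.add(ResourceRequest.newInstance(
    PRIORITY, ResourceRequest.ANY, CAPABILITY, NUM_CONTAINERS, false));
amRequests.add(ResourceRequest.newInstance(
    PRIORITY, "rackA", CAPABILITY, NUM_CONTAINERS, true));
appSubmissionContext.setAMContainerResourceRequests(amRequests);
{code}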






[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930730#comment-15930730
 ] 

Hadoop QA commented on YARN-6335:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 112 new or modified 
test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core
 generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 14 new + 279 unchanged - 1 fixed = 293 total (was 280) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859331/YARN-6335-yarn-native-services.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ea32b6205705 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 39ef50c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15321/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15321/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-sl

[jira] [Commented] (YARN-1547) Prevent DoS of ApplicationMasterProtocol by putting in limits

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930646#comment-15930646
 ] 

Hadoop QA commented on YARN-1547:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 96 new + 205 unchanged - 0 fixed = 301 total (was 205) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 182 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 13 new + 231 unchanged - 0 fixed = 244 total (was 231) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit te

[jira] [Updated] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6355:
-
Labels: amrmproxy resourcemanager  (was: )

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet Filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> currently implemented as a wrapper over the {{ApplicationMasterService}}; it 
> would probably be better to implement it as an interceptor.
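For illustration, a hypothetical sketch of what such a chain could look like; 
the interface and class names below are invented for this example and are not 
part of the YARN API:
{code}
// Invented names, servlet-filter style: each interceptor holds a reference to
// the next one and can enforce a policy before delegating.
public interface AMSProcessor {
  void init(AMSProcessor nextProcessor);
  AllocateResponse allocate(AllocateRequest request) throws YarnException;
}

public class RequestCappingProcessor implements AMSProcessor {
  private AMSProcessor next;

  @Override
  public void init(AMSProcessor nextProcessor) {
    this.next = nextProcessor;
  }

  @Override
  public AllocateResponse allocate(AllocateRequest request)
      throws YarnException {
    // Example of a cross-app policy: cap the number of asks per heartbeat.
    if (request.getAskList().size() > 1000) {
      request.setAskList(request.getAskList().subList(0, 1000));
    }
    return next.allocate(request);  // delegate down the chain
  }
}
{code}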






[jira] [Commented] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930581#comment-15930581
 ] 

Arun Suresh commented on YARN-6355:
---

Yup, thanks.
I was planning to try to reuse the same framework. It needs some modification, 
though, since there are different Contexts involved and the AMRMProxy 
interceptors are initialized per app. I was hoping this would be more of a 
general interceptor where we can enforce policies across apps, for example.

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet Filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> currently implemented as a wrapper over the {{ApplicationMasterService}}; it 
> would probably be better to implement it as an interceptor.






[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930580#comment-15930580
 ] 

Miklos Szegedi commented on YARN-6217:
--

Thank you for the commit, [~jlowe], and for the review, [~yufeigu]!

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Commented] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-17 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930569#comment-15930569
 ] 

Subru Krishnan commented on YARN-6355:
--

[~asuresh], it should be fairly straightforward to refactor the interceptor 
chain introduced in YARN-2884 so that it can be used in both {{AMRMProxy}} and 
the {{RM}}, since it depends almost entirely on the {{ApplicationMasterProtocol}}. 

This should also be useful for plugging the work we are doing on DDoS prevention 
(YARN-1547) directly into the RM, as suggested by [~vinodkv].

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet Filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> currently implemented as a wrapper over the {{ApplicationMasterService}}; it 
> would probably be better to implement it as an interceptor.






[jira] [Updated] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6355:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: YARN-5468)

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet Filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> currently implemented as a wrapper over the {{ApplicationMasterService}}; it 
> would probably be better to implement it as an interceptor.






[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930566#comment-15930566
 ] 

Hudson commented on YARN-6217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11422 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11422/])
YARN-6217. TestLocalCacheDirectoryManager test timeout is too (jlowe: rev 
4a8e3045027036afebbcb80f23b7a2886e56c255)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalCacheDirectoryManager.java


> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Updated] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-17 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6355:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5468

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet Filter chain, where the 
> order of the interceptors can be declared externally.
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> currently implemented as a wrapper over the {{ApplicationMasterService}}; it 
> would probably be better to implement it as an interceptor.






[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930513#comment-15930513
 ] 

Jason Lowe commented on YARN-6217:
--

+1 lgtm.  Committing this.

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930483#comment-15930483
 ] 

Hadoop QA commented on YARN-6217:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
4s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859332/YARN-6217.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4a24adfb4549 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 86035c1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15319/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15319/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStat

[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930458#comment-15930458
 ] 

Yufei Gu commented on YARN-6217:


+1 (non-binding)

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930446#comment-15930446
 ] 

Haibo Chen commented on YARN-6146:
--

The findbugs warning is a known issue, IIRC; the checkstyle warnings are 
pre-existing.

> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146.03.patch, YARN-6146-YARN-5355.01.patch, 
> YARN-6146-YARN-5355.02.patch, YARN-6146-YARN-5355.03.patch
>
>
> The timeline filters are evolving, and more and more filters can be added. It 
> is better to start using Builder methods rather than changing the constructor 
> every time a new filter is added. 
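For illustration, a hedged sketch of the builder style being proposed; the 
builder and method names below are illustrative, not the final 
{{TimelineEntityFilters}} API:
{code}
// Illustrative only -- the actual builder methods are defined by the patch,
// not by this sketch. Timestamps are assumed to be epoch millis.
TimelineEntityFilters filters = new TimelineEntityFilters.Builder()
    .entityLimit(100L)
    .createdTimeBegin(windowStartTs)
    .createdTimeEnd(windowEndTs)
    .build();
{code}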






[jira] [Updated] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6217:
-
Attachment: YARN-6217.001.patch

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch, YARN-6217.001.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Updated] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-03-17 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6335:
-
Attachment: YARN-6335-yarn-native-services.003.patch

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch
>
>
> Slider has a lot of useful unit tests implemented in groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.






[jira] [Commented] (YARN-5924) Resource Manager fails to load state with InvalidProtocolBufferException

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930391#comment-15930391
 ] 

Hadoop QA commented on YARN-5924:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
21s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5924 |
| GITHUB PR | https://github.com/apache/hadoop/pull/164 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d80e1a22f30 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/15315/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15315/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15315/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Resource Manager fails to load state with InvalidProtocolBufferException
> 
>
> Key: YARN-5924
> URL: https://issues.apache.org/jira/browse/YARN-5924
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resour

[jira] [Commented] (YARN-6319) race condition between deleting app dir and deleting container dir

2017-03-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930366#comment-15930366
 ] 

Haibo Chen commented on YARN-6319:
--

By linearizing container cleanup and app cleanup, I mean that application 
cleanup has to wait for all container cleanups to finish before it can start, 
i.e., application cleanup can only happen after the last container cleanup 
finishes; I am not saying that container cleanups need to be done one after 
another. In cases where deletion threads are occupied or delayed, it can take 
some time to finish the last container cleanup task. Again, I don't think this 
is a dependency that we need to have. Even though we may potentially need to 
change two ContainerExecutors for option 1, the change should be fairly 
self-contained and does not alter the rest of the flow. BTW, can you please set 
the Affects Version, just so that we are talking about the same version?

> race condition between deleting app dir and deleting container dir
> --
>
> Key: YARN-6319
> URL: https://issues.apache.org/jira/browse/YARN-6319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>
> The last container (on one node) of an app completes
> |--> triggers async deletion of the container dir (container cleanup)
> |--> triggers async deletion of the app dir (app cleanup)
> For LCE, deletion is done by container-executor. The "app cleanup" lists the 
> sub-directories (step 1) and then unlinks items one by one (step 2). If a file 
> is deleted by "container cleanup" between step 1 and step 2, it reports the 
> error below and breaks the deletion.
> {code}
> ContainerExecutor: Couldn't delete file 
> $LOCAL/usercache/$USER/appcache/application_1481785469354_353539/container_1481785469354_353539_01_28/$FILE
>  - No such file or directory
> {code}
> The app dir then escapes the cleanup, which is why we always have many app 
> dirs left there.
> Solution 1: just ignore the error, without breaking, in 
> container-executor.c::delete_path().
> Solution 2: use a lock to serialize cleanup of the same app dir.
> Solution 3: back off and retry on error (see the sketch below).
> Comments are welcome.
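For illustration, a hedged Java-level sketch combining solutions 1 and 3 
(treat a concurrent "No such file" as success, otherwise back off and retry). 
The real LCE fix would live in container-executor.c, so this is only an analogy 
at the deletion-service level:
{code}
// Sketch only, assuming a FileNotFoundException here means a concurrent
// cleanup already removed the entry. Not the actual NM code.
private static final int MAX_DELETE_RETRIES = 3;

static void deleteWithRetry(FileContext lfs, Path dir) throws IOException {
  for (int attempt = 1;; attempt++) {
    try {
      lfs.delete(dir, true);  // recursive delete of the app/container dir
      return;
    } catch (FileNotFoundException e) {
      return;  // solution 1: already gone, treat as success
    } catch (IOException e) {
      if (attempt >= MAX_DELETE_RETRIES) {
        throw e;  // give up after a few attempts
      }
      try {
        Thread.sleep(100L * attempt);  // solution 3: simple linear backoff
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while retrying delete", ie);
      }
    }
  }
}
{code}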






[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930361#comment-15930361
 ] 

Hadoop QA commented on YARN-6146:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 5 new + 
32 unchanged - 5 fixed = 37 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
7s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6146 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859323/YARN-6146.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 72c1b2956fe8 3.13.0-107-g

[jira] [Commented] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930358#comment-15930358
 ] 

Jason Lowe commented on YARN-6359:
--

Thanks for the report and patch!

The timeout in the loop is 80 seconds, but there's a 60-second timeout for the 
entire test, which seems weird.  Is that why the loop doesn't check whether the 
timeout occurred after it completes?  It'd be nice to use 
GenericTestUtils#waitFor to have it check for timeouts, produce the stack trace 
if it does time out, etc.
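For illustration, a hedged sketch of the suggested pattern; 
{{GenericTestUtils#waitFor}} is the real utility, but the polled condition 
below is a placeholder for whatever the test's loop checks:
{code}
// waitFor polls the supplier every 100 ms and throws a TimeoutException
// (with diagnostics) if the condition is not met within 80 seconds.
// Supplier here is com.google.common.base.Supplier.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    // Placeholder condition -- substitute the check from the test's loop.
    return app.getAppAttempts().size() == expectedAttempts;
  }
}, 100, 80000);
{code}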

> TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
> 
>
> Key: YARN-6359
> URL: https://issues.apache.org/jira/browse/YARN-6359
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6359.001.patch
>
>
> We've seen (very rarely) a test failure in 
> {{TestRM#testApplicationKillAtAcceptedState}}
> {noformat}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testApplicationKillAtAcceptedState(TestRM.java:645)
> {noformat}






[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930355#comment-15930355
 ] 

Hadoop QA commented on YARN-6335:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 112 new or modified 
test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core
 generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 49 new + 279 unchanged - 1 fixed = 328 total (was 280) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
56s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859322/YARN-6335-yarn-native-services.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 918623297dd2 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 39ef50c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15316/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15316/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-s

[jira] [Commented] (YARN-3767) Yarn Scheduler Load Simulator does not work

2017-03-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930352#comment-15930352
 ] 

Yufei Gu commented on YARN-3767:


Yes, agree.

> Yarn Scheduler Load Simulator does not work
> ---
>
> Key: YARN-3767
> URL: https://issues.apache.org/jira/browse/YARN-3767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
> Environment: OS X 10.10.  JDK 1.7
>Reporter: David Kjerrumgaard
>
> Running the SLS as per the instructions on the web results in a 
> NullPointerException being thrown.
> Steps followed to reproduce the error:
> 1) Download Apache Hadoop 2.7.0 tarball from Apache site
> 2) Untar 2.7.0 tarball into /opt directory
> 3) Execute the following command: 
> /opt/hadoop-2.7.0/share/hadoop/tools/sls//bin/slsrun.sh 
> --input-rumen=/opt/hadoop-2.7.0/share/hadoop/tools/sls/sample-data/2jobs2min-rumen-jh.json
>  --output-dir=/tmp
> Results in the following error:
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2118.smile.com:2 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2118.smile.com:2 clusterResource: 
> 15/06/04 10:25:41 INFO util.RackResolver: Resolved a2115.smile.com to 
> /default-rack
> 15/06/04 10:25:41 INFO resourcemanager.ResourceTrackerService: NodeManager 
> from node a2115.smile.com(cmPort: 3 httpPort: 80) registered with capability: 
> , assigned nodeId a2115.smile.com:3
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2115.smile.com:3 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2115.smile.com:3 clusterResource: 
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
>   at 
> org.apache.hadoop.yarn.sls.SLSRunner.startAMFromRumenTraces(SLSRunner.java:398)
>   at org.apache.hadoop.yarn.sls.SLSRunner.startAM(SLSRunner.java:250)
>   at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:145)
>   at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
> Caused by: java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
>   ... 4 more






[jira] [Commented] (YARN-3767) Yarn Scheduler Load Simulator does not work

2017-03-17 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930347#comment-15930347
 ] 

Carlo Curino commented on YARN-3767:


Makes sense. Is it OK to leave this JIRA closed, since we are following up in 
YARN-5065? (Just trying to spring-clean a bit :-))

> Yarn Scheduler Load Simulator does not work
> ---
>
> Key: YARN-3767
> URL: https://issues.apache.org/jira/browse/YARN-3767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
> Environment: OS X 10.10.  JDK 1.7
>Reporter: David Kjerrumgaard
>
> Running the SLS as per the instructions on the web results in a 
> NullPointerException being thrown.
> Steps followed to reproduce the error:
> 1) Download Apache Hadoop 2.7.0 tarball from Apache site
> 2) Untar 2.7.0 tarball into /opt directory
> 3) Execute the following command: 
> /opt/hadoop-2.7.0/share/hadoop/tools/sls//bin/slsrun.sh 
> --input-rumen=/opt/hadoop-2.7.0/share/hadoop/tools/sls/sample-data/2jobs2min-rumen-jh.json
>  --output-dir=/tmp
> Results in the following error:
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2118.smile.com:2 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2118.smile.com:2 clusterResource: 
> 15/06/04 10:25:41 INFO util.RackResolver: Resolved a2115.smile.com to 
> /default-rack
> 15/06/04 10:25:41 INFO resourcemanager.ResourceTrackerService: NodeManager 
> from node a2115.smile.com(cmPort: 3 httpPort: 80) registered with capability: 
> , assigned nodeId a2115.smile.com:3
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2115.smile.com:3 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2115.smile.com:3 clusterResource: 
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
>   at 
> org.apache.hadoop.yarn.sls.SLSRunner.startAMFromRumenTraces(SLSRunner.java:398)
>   at org.apache.hadoop.yarn.sls.SLSRunner.startAM(SLSRunner.java:250)
>   at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:145)
>   at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
> Caused by: java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
>   ... 4 more



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930346#comment-15930346
 ] 

Jason Lowe commented on YARN-6217:
--

I tend to agree.  Originally there was an edict to put a timeout on each test 
because the build wasn't doing a good job of handling tests that timed out.  
However, that has since been fixed.  An explicit test timeout still makes a lot 
of sense when the test has a good chance of deadlocking when broken (e.g., 
carefully synchronizing a number of threads, waiting on barriers, running a 
polling loop, etc.), but I don't think that's the case with the tests here.
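
For illustration only, a minimal JUnit 4 sketch of that distinction, with 
hypothetical test names (not from the actual TestLocalCacheDirectoryManager):
{code}
import static org.junit.Assert.assertEquals;

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

public class TimeoutPolicyExampleTest {

  // Deadlock-prone: if a broken change keeps the worker from ever
  // reaching the barrier, await() would block forever, so a generous
  // explicit timeout is justified.
  @Test(timeout = 30000)
  public void testWorkerReachesBarrier() throws Exception {
    final CyclicBarrier barrier = new CyclicBarrier(2);
    Thread worker = new Thread(() -> {
      try {
        barrier.await();
      } catch (Exception ignored) {
        // a failure here surfaces through the timeout above
      }
    });
    worker.start();
    barrier.await(10, TimeUnit.SECONDS);
    worker.join();
  }

  // Pure in-memory state change: it cannot deadlock, so no explicit
  // timeout; a one-second limit would only fail on an I/O or
  // scheduling hiccup of the test machine.
  @Test
  public void testSimpleStateChange() {
    assertEquals(4, 2 + 2);
  }
}
{code}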

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-03-17 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930339#comment-15930339
 ] 

Eric Payne commented on YARN-2113:
--

Hi [~sunilg]. Thanks for this patch. I'm excited to look at it.

There does seem to be a problem with preempting when it shouldn't. If two apps 
are in a queue, both asking for resources, and one is over and one is under its 
user limit, the app that is under its user limit will get preempted.

Adding the following unit test to 
{{TestProportionalCapacityPreemptionPolicyIntraQueueUserLimit}} should 
demonstrate this. I have also observed this behavior in my manual testing.
{code}
  @Test
  public void testIntraQueuePreemptionWithTwoRequestingUsers()
      throws IOException {

    // Set max preemption limit as 50%.
    conf.setFloat(CapacitySchedulerConfiguration.
        INTRAQUEUE_PREEMPTION_MAX_ALLOWABLE_LIMIT,
        (float) 0.5);

    String labelsConfig = "=100,true;";
    String nodesConfig = // n1 has no label
        "n1= res=100";
    String queuesConfig =
        // guaranteed,max,used,pending,reserved
        "root(=[100 100 100 20 0]);" + // root
        "-a(=[100 100 100 20 0])"; // a

    String appsConfig =
        // queueName\t(priority,resource,host,expression,#repeat,reserved,pending,username)
        "a\t" // app1 in a
        + "(1,1,n1,,100,false,10,user1);" + // app1, user1
        "a\t" // app2 in a
        + "(1,1,n1,,40,false,10,user2)"; // app2, user2

    buildEnv(labelsConfig, nodesConfig, queuesConfig, appsConfig);
    policy.editSchedule();

    // app2 needs more resources and it's well under its user limit, so
    // resources should be preempted from app1, never from app2.
    verify(mDisp, times(0)).handle(argThat(
        new TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor(
            getAppAttemptId(2))));
  }
{code}

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930333#comment-15930333
 ] 

Hadoop QA commented on YARN-6146:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 5 new + 
38 unchanged - 6 fixed = 43 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-6146 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859318/YARN-6146-YARN-5355.03.patch
 |

[jira] [Commented] (YARN-6362) Investigate correct version of frontend-maven-plugin for yarn-ui

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930318#comment-15930318
 ] 

Sunil G commented on YARN-6362:
---

Updating the title and description accordingly to reflect the problem analysis.

> Investigate correct version of frontend-maven-plugin for yarn-ui
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6362) Investigate correct version of frontend-maven-plugin for yarn-ui

2017-03-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6362:
--
Summary: Investigate correct version of frontend-maven-plugin for yarn-ui  
(was: Build failure of yarn-ui profile)

> Investigate correct version of frontend-maven-plugin for yarn-ui
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6146:
-
Attachment: YARN-6146.03.patch

> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146.03.patch, YARN-6146-YARN-5355.01.patch, 
> YARN-6146-YARN-5355.02.patch, YARN-6146-YARN-5355.03.patch
>
>
> The timeline filters are evolving, and more and more filters can be added. It 
> is better to start using Builder methods rather than changing the constructor 
> every time a new filter is added.
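
For illustration, a minimal sketch of the builder idea described above 
(hypothetical field and method names, not the actual YARN-6146 API):
{code}
public final class TimelineEntityFiltersSketch {
  private final Long limit;
  private final String createdTimeBegin;

  private TimelineEntityFiltersSketch(Builder b) {
    this.limit = b.limit;
    this.createdTimeBegin = b.createdTimeBegin;
  }

  public Long getLimit() {
    return limit;
  }

  public String getCreatedTimeBegin() {
    return createdTimeBegin;
  }

  // Adding a new filter later means adding one field and one builder
  // method; existing call sites keep compiling unchanged.
  public static final class Builder {
    private Long limit;
    private String createdTimeBegin;

    public Builder entityLimit(Long limit) {
      this.limit = limit;
      return this;
    }

    public Builder createdTimeBegin(String begin) {
      this.createdTimeBegin = begin;
      return this;
    }

    public TimelineEntityFiltersSketch build() {
      return new TimelineEntityFiltersSketch(this);
    }
  }
}

// Usage:
//   TimelineEntityFiltersSketch f =
//       new TimelineEntityFiltersSketch.Builder().entityLimit(10L).build();
{code}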



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-03-17 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6335:
-
Attachment: YARN-6335-yarn-native-services.002.patch

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch
>
>
> Slider has a lot of useful unit tests implemented in Groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will include 
> unit / minicluster tests only and will not include Slider's funtests, which 
> require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930295#comment-15930295
 ] 

Sunil G commented on YARN-5892:
---

Sorry for the late entry. I am also reviewing and will share my thoughts at the 
earliest.

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6146:
-
Attachment: YARN-6146-YARN-5355.03.patch

Uploaded a new patch to address Varun's comments.

> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146-YARN-5355.01.patch, YARN-6146-YARN-5355.02.patch, 
> YARN-6146-YARN-5355.03.patch
>
>
> The timeline filters are evolving, and more and more filters can be added. It 
> is better to start using Builder methods rather than changing the constructor 
> every time a new filter is added.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5924) Resource Manager fails to load state with InvalidProtocolBufferException

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930285#comment-15930285
 ] 

Hadoop QA commented on YARN-5924:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
0s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5924 |
| GITHUB PR | https://github.com/apache/hadoop/pull/164 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9c9a1f772bd2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/15313/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15313/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15313/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Resource Manager fails to load state with InvalidProtocolBufferException
> 
>
> Key: YARN-5924
> URL: https://issues.apache.org/jira/browse/YARN-5924
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resour

[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930280#comment-15930280
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 546 unchanged - 8 fixed = 562 total (was 554) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859309/YARN-5892.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea1fbcfeaef8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git r

[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-03-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930205#comment-15930205
 ] 

Jason Lowe commented on YARN-6315:
--

I tried to run this in an end-to-end test and found it doesn't work in 
practice.  I was under the mistaken impression that the size specified in the 
LocalResourceRequest was used to verify the correct file was being localized, 
but that's not the case.  It only uses the _timestamp_ to verify the correct 
version of the file is being downloaded.  The size is ignored.  In my case the 
request actually contained the value -1 for the size, so it always thought the 
size mismatched and would re-localize the file.  That's not good.

I thought we could pivot from the (now untrustworthy) size in the request to 
the size in the LocalizedResource. That's a value the NM computes directly 
during localization, so it will be correct. However, this is the size of the 
entire directory containing the localized resource (whether that's a file, 
archive, or directory), so it includes extra things like the .crc file from 
LocalFileSystem, etc. In order to match the sizes we'd have to duplicate the 
logic done by the localizer, which is a DU of the directory. That's going to be 
too expensive to do for every local resource lookup at container launch.
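
For context, a minimal sketch of the kind of check the description below 
proposes (hypothetical class and method names, not the actual YARN-6315 patch, 
which targets LocalResourcesTrackerImpl#isResourcePresent):
{code}
import java.io.File;

public final class ResourcePresenceCheck {

  /**
   * Sketch only: treat a localized resource as present when the local
   * file exists AND is non-empty. Per the discussion above, comparing
   * against an "expected" size from the request is not safe, since the
   * request may carry -1 for the size.
   */
  public static boolean isResourcePresent(String localPath) {
    File file = new File(localPath);
    return file.exists() && file.length() > 0;
  }
}
{code}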


> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch, YARN-6315.004.patch
>
>
> We currently check if a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks that 
> it has the resource if the file exists but with size 0 or less than the 
> "expected" size of the LocalResource. This JIRA tracks the change to harden 
> the isResourcePresent call to address that case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930149#comment-15930149
 ] 

Sunil G commented on YARN-6362:
---

No.. Thanks very much for reporting. You are correct; we still have failures 
locally.

We checked and found that it is caused by frontend-maven-plugin. We never 
specified a version for this plugin, so Ubuntu and Mac pulled different 
versions, which caused the failure on one platform and not the other.
{{1.1}}
This version also has to be pinned in pom.xml along with your change.

However, that plugin version requires Maven 3.1, while Hadoop only requires a 
minimum of Maven 3.0. I'll check in detail.
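
For illustration, the kind of pom.xml pin being described, a sketch only, 
assuming the {{1.1}} above refers to the frontend-maven-plugin version:
{code}
<!-- Sketch: pin the plugin version so Ubuntu and Mac builds resolve the
     same frontend-maven-plugin instead of whatever happens to be cached. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.1</version>
</plugin>
{code}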

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-03-17 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.005.patch

The previous patch ({{004}}) only updated users' weights when each user was 
added. This patch ({{005}}) will update users' weights when the queues are 
refreshed.

bq. 2) I'm not sure if following logic suggested can simplify the code:
[~leftnoteasy], I have not forgotten your suggestion. I am investigating.

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930098#comment-15930098
 ] 

Kai Sasaki commented on YARN-6362:
--

[~sunilg] Sorry, I should have checked in more detail. And thank you so much 
for the detailed investigation!

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930058#comment-15930058
 ] 

Hadoop QA commented on YARN-6362:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
24s{color} | {color:red} hadoop-yarn-ui in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 24s{color} 
| {color:red} hadoop-yarn-ui in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 10s{color} 
| {color:red} hadoop-yarn-ui in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859295/YARN-6362.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ed145858f606 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
| Default Java | 1.8.0_121 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/15310/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15310/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15310/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15310/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15310/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli

[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930056#comment-15930056
 ] 

Sunil G commented on YARN-6362:
---

{noformat}


 trunk compilation: patch




cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
-DskipTests -Pnative -Drequire.libwebhdfs -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test-compile 
-DskipTests=true > 
/testptch/hadoop/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
 2>&1
Elapsed:   2m 24s

hadoop-yarn-ui in the patch failed.
{noformat}

Here you can see that trunk compilation is failing WITH the patch.

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930053#comment-15930053
 ] 

Sunil G commented on YARN-6362:
---

Interesting. I was monitoring the current Jenkins run, which was triggered 
with your patch.

{noformat}




cd /testptch/hadoop
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
-DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
-Dcheckstyle.skip=true -Dfindbugs.skip=true > 
/testptch/hadoop/patchprocess/branch-mvninstall-root.txt 2>&1
Elapsed:  12m 38s




   trunk compilation: pre-patch




cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
-DskipTests -Pnative -Drequire.libwebhdfs -Drequire.snappy -Drequire.openssl 
-Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test-compile 
-DskipTests=true > 
/testptch/hadoop/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
 2>&1
Elapsed:   3m 39s
{noformat}

As you can see, compilation succeeds on the Jenkins machine *without the patch*.

However, I agree it fails on my Mac, while on Ubuntu it succeeds. Looks like we 
have an issue with *node 4.4.5*, which is downloaded for Ubuntu vs. Mac. I am 
now trying locally to bump the node version to avoid this. One piece of good 
news is that the Hadoop build itself is NOT broken; it is broken only on 
machines where node is already installed. I'll update my diagnosis with a new 
version of node/npm soon.

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930033#comment-15930033
 ] 

Sunil G commented on YARN-6362:
---

Let's wait for Jenkins. On my Ubuntu machine, I cannot reproduce this.

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930018#comment-15930018
 ] 

Kai Sasaki commented on YARN-6362:
--

[~sunilg] Yes, I built on the current trunk. The HEAD is 
[75368150395901f65a4698e84be4e7bbdcba94fa|https://github.com/apache/hadoop/commit/75368150395901f65a4698e84be4e7bbdcba94fa].

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-6362:
-
Attachment: YARN-6362.01.patch

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-6362.01.patch
>
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5956) Refactor ClientRMService

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930009#comment-15930009
 ] 

Hadoop QA commented on YARN-5956:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 57 unchanged - 5 fixed = 57 total (was 62) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
33s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859284/YARN-5956.15.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed1d275a59f4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15309/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15309/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  

[jira] [Commented] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930007#comment-15930007
 ] 

Sunil G commented on YARN-6362:
---

We have fixed a build issue in YARN-6336. Are you getting this on the current 
latest trunk?

> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> Building yarn-ui module fails due to invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> Failure of {{exec-maven-plugin}} in yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-6362:
-
Description: 
Building the yarn-ui module fails due to an invalid npm-cli.js path.


{code}
$ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
{code}

The {{exec-maven-plugin}} execution fails in the yarn-ui profile.
{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
module.js:327
throw err;
^

Error: Cannot find module 
'/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Function.Module.runMain (module.js:441:10)
{code}

  was:
Building the yarn-ui module fails due to an invalid npm-cli.js path.

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
module.js:327
throw err;
^

Error: Cannot find module 
'/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Function.Module.runMain (module.js:441:10)
{code}


> Build failure of yarn-ui profile
> 
>
> Key: YARN-6362
> URL: https://issues.apache.org/jira/browse/YARN-6362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> Building the yarn-ui module fails due to an invalid npm-cli.js path.
> {code}
> $ mvn clean install -DskipTests -Dtar -Pdist  -Pyarn-ui
> {code}
> The {{exec-maven-plugin}} execution fails in the yarn-ui profile.
> {code}
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> module.js:327
> throw err;
> ^
> Error: Cannot find module 
> '/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
> at Function.Module._resolveFilename (module.js:325:15)
> at Function.Module._load (module.js:276:25)
> at Function.Module.runMain (module.js:441:10)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-6362:


 Summary: Build failure of yarn-ui profile
 Key: YARN-6362
 URL: https://issues.apache.org/jira/browse/YARN-6362
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki


Building the yarn-ui module fails due to an invalid npm-cli.js path.

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
module.js:327
throw err;
^

Error: Cannot find module 
'/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Function.Module.runMain (module.js:441:10)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5956) Refactor ClientRMService

2017-03-17 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5956:
-
Attachment: YARN-5956.15.patch

Rebased on trunk.

> Refactor ClientRMService
> 
>
> Key: YARN-5956
> URL: https://issues.apache.org/jira/browse/YARN-5956
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-5956.01.patch, YARN-5956.02.patch, 
> YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, 
> YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, 
> YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, 
> YARN-5956.12.patch, YARN-5956.13.patch, YARN-5956.14.patch, YARN-5956.15.patch
>
>
> Some refactoring can be done in {{ClientRMService}} (a small illustration 
> follows after the list):
> - Remove redundant variable declarations
> - Fill in missing javadocs
> - Use proper variable access modifiers
> - Fix some typos in method names and exception messages
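
To make the intent concrete, here is a tiny before/after sketch of the kind 
of cleanup meant above; the names are hypothetical, not actual 
{{ClientRMService}} code.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RefactorExample {
  // Before: a public mutable field and a redundant local variable:
  //   public List<String> aplications;                     // typo, public
  //   List<String> tmp = buildList(); return tmp;          // redundant local

  // After: proper access modifier, typo fixed, redundant local removed.
  private final List<String> applications = new ArrayList<>();

  /** @return an unmodifiable view of the tracked application names. */
  public List<String> getApplications() {
    return Collections.unmodifiableList(applications);
  }
}
{code}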



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-17 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929783#comment-15929783
 ] 

Ayappan commented on YARN-6141:
---

Any update on this?

> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.patch
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6352:
---
Target Version/s: 2.9.0, 2.8.1  (was: 2.9.0, 2.8.1, 3.0.0-alpha2)

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png, YARN-6352.001.patch
>
>
> This issue was found in WVS security tool. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6279) Scheduler rest api JSON is not providing all child queues names

2017-03-17 Thread Ashish Doneriya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929662#comment-15929662
 ] 

Ashish Doneriya commented on YARN-6279:
---

Sorry guys, it seems there was already a bug filed for this: 
https://issues.apache.org/jira/browse/YARN-2336. In the JSON output, 
'root.Engineering' has duplicate 'childQueues' keys, which should not happen:

childQueues : [
"queueName":"root.Engineering",
"childQueues":{ 
..
"queueName":"root.Engineering.Development"
..
},
"childQueues":{ 
..
"queueName":"root.Engineering.Testing"
..
}
]

I tested this on Apache Hadoop 2.4.1 and Cloudera CDH 5.9.
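
For what it's worth, the data loss is easy to reproduce with an ordinary JSON 
parser. A minimal sketch, assuming Jackson (which may not be the library the 
REST layer actually uses): duplicate keys in one object are silently 
collapsed, so only the last child queue survives.

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DuplicateKeyDemo {
  public static void main(String[] args) throws Exception {
    // Two "childQueues" keys in one object, as in the scheduler JSON above.
    String json = "{\"queueName\":\"root.Engineering\","
        + "\"childQueues\":{\"queueName\":\"root.Engineering.Development\"},"
        + "\"childQueues\":{\"queueName\":\"root.Engineering.TESTING\"}}";
    JsonNode tree = new ObjectMapper().readTree(json);
    // By default the parser keeps only the last duplicate, so the
    // Development queue disappears from the parsed tree.
    System.out.println(tree.get("childQueues"));
    // Prints: {"queueName":"root.Engineering.TESTING"}
  }
}
{code}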

> Scheduler rest api JSON is not providing all child queues names
> ---
>
> Key: YARN-6279
> URL: https://issues.apache.org/jira/browse/YARN-6279
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, fairscheduler, scheduler
>Affects Versions: 2.4.1
> Environment: Ubuntu 14.04, 7.7 GiB, i5, 3.4GHz x 4, 64-bit
>Reporter: Ashish Doneriya
>
> When I hit the REST API /ws/v1/cluster/scheduler to get the JSON output, it 
> gave me all child queue information, but it didn't give me all information 
> about the child queues of child queues: it displays only one sub-child 
> queue, while the XML format has no such problem.
> I'm providing the XML and JSON outputs.
> 
> {"scheduler":{"schedulerInfo":{"type":"fairScheduler","rootQueue":{"maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":8192,"vCores":8},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root","schedulingPolicy":"fair","childQueues":[{"maxApps":20,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":5283,"vCores":2},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":5283,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering","schedulingPolicy":"fair","childQueues":{"type":["fairSchedulerLeafQueueInfo"],"maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.Development","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0},"childQueues":{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.TESTING","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}},{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2909,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.default","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}]
> 
> 
> <scheduler>
>   <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="fairScheduler">
>     <rootQueue>
>       <maxApps>2147483647</maxApps>
>       <minResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </minResources>
>       <maxResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </maxResources>
>       <usedResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </usedResources>
>       <fairResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </fairResources>
>       <clusterResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </clusterResources>
>       <queueName>root</queueName>
>       <schedulingPolicy>fair</schedulingPolicy>
>       <childQueues>
>         <maxApps>20</maxApps>
>         <minResources>
>           <memory>1024</memory>
>           <vCores>1</vCores>
>         </minResources>
>         <maxResources>
>           <memory>5283</memory>
>           <vCores>2</vCores>
>         </maxResources>
>         <usedResources>
>           <memory>0</memory>
>           <vCores>0</vCores>
>         </usedResources>
>         <fairResources>
>           <memory>5283</memory>
>           <vCores>0</vCores>
>         </fairResources>
>         <clusterResources>
>           <memory>8192</memory>
>           <vCores>8</vCores>
>         </clusterResources>
>         <queueName>root.Engineering</queueName>

[jira] [Issue Comment Deleted] (YARN-6279) Scheduler rest api JSON is not providing all child queues names

2017-03-17 Thread Ashish Doneriya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Doneriya updated YARN-6279:
--
Comment: was deleted

(was: There are two child queues of 'root.Engineering': 
'root.Engineering.Development' and 'root.Engineering.TESTING'. The XML version 
shows both of these queues, but the JSON version shows only one, 
'root.Engineering.TESTING'.)

> Scheduler rest api JSON is not providing all child queues names
> ---
>
> Key: YARN-6279
> URL: https://issues.apache.org/jira/browse/YARN-6279
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, fairscheduler, scheduler
>Affects Versions: 2.4.1
> Environment: Ubuntu 14.04, 7.7 GiB, i5, 3.4GHz x 4, 64-bit
>Reporter: Ashish Doneriya
>
> When I hit the REST API /ws/v1/cluster/scheduler to get the JSON output, it 
> gave me all child queue information, but it didn't give me all information 
> about the child queues of child queues: it displays only one sub-child 
> queue, while the XML format has no such problem.
> I'm providing the XML and JSON outputs.
> 
> {"scheduler":{"schedulerInfo":{"type":"fairScheduler","rootQueue":{"maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":8192,"vCores":8},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root","schedulingPolicy":"fair","childQueues":[{"maxApps":20,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":5283,"vCores":2},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":5283,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering","schedulingPolicy":"fair","childQueues":{"type":["fairSchedulerLeafQueueInfo"],"maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.Development","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0},"childQueues":{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.TESTING","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}},{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2909,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.default","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}]
> 
> 
> <scheduler>
>   <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="fairScheduler">
>     <rootQueue>
>       <maxApps>2147483647</maxApps>
>       <minResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </minResources>
>       <maxResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </maxResources>
>       <usedResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </usedResources>
>       <fairResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </fairResources>
>       <clusterResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </clusterResources>
>       <queueName>root</queueName>
>       <schedulingPolicy>fair</schedulingPolicy>
>       <childQueues>
>         <maxApps>20</maxApps>
>         <minResources>
>           <memory>1024</memory>
>           <vCores>1</vCores>
>         </minResources>
>         <maxResources>
>           <memory>5283</memory>
>           <vCores>2</vCores>
>         </maxResources>
>         <usedResources>
>           <memory>0</memory>
>           <vCores>0</vCores>
>         </usedResources>
>         <fairResources>
>           <memory>5283</memory>
>           <vCores>0</vCores>
>         </fairResources>
>         <clusterResources>
>           <memory>8192</memory>
>           <vCores>8</vCores>
>         </clusterResources>
>         <queueName>root.Engineering</queueName>
>         <schedulingPolicy>fair</schedulingPolicy>
>         <childQueues xsi:type="fairSchedulerLeafQueueInfo">
>           <maxApps>2147483647</maxApps>

[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929649#comment-15929649
 ] 

Varun Saxena commented on YARN-6146:


bq. This is due to the method call in TimelineReaderWebServices.getFlows()
Ohh right. {{range.dateStart}} and {{range.dateEnd}} are both not strings.

> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146-YARN-5355.01.patch, YARN-6146-YARN-5355.02.patch
>
>
> The timeline filters are evolving, and more and more filters can be added. 
> It is better to start using builder methods rather than changing the 
> constructor every time a new filter is added; a sketch follows below.
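
To illustrate the suggestion, here is a minimal builder sketch; the class 
name and filter fields are assumptions for illustration, not the actual 
{{TimelineEntityFilters}} API.

{code}
public class EntityFilters {
  private final Long limit;
  private final Long createdTimeBegin;
  private final Long createdTimeEnd;

  private EntityFilters(Builder b) {
    this.limit = b.limit;
    this.createdTimeBegin = b.createdTimeBegin;
    this.createdTimeEnd = b.createdTimeEnd;
  }

  public static class Builder {
    private Long limit;
    private Long createdTimeBegin;
    private Long createdTimeEnd;

    public Builder limit(Long limit) {
      this.limit = limit;
      return this;
    }

    public Builder createdTimeBegin(Long begin) {
      this.createdTimeBegin = begin;
      return this;
    }

    public Builder createdTimeEnd(Long end) {
      this.createdTimeEnd = end;
      return this;
    }

    // Adding a new filter only adds one method here; existing callers
    // such as new Builder().limit(100L).build() keep compiling unchanged.
    public EntityFilters build() {
      return new EntityFilters(this);
    }
  }
}
{code}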



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6279) Scheduler rest api JSON is not providing all child queues names

2017-03-17 Thread Ashish Doneriya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929634#comment-15929634
 ] 

Ashish Doneriya commented on YARN-6279:
---

There are two child queues of 'root.Engineering': 
'root.Engineering.Development' and 'root.Engineering.TESTING'. The XML version 
shows both of these queues, but the JSON version shows only one, 
'root.Engineering.TESTING'.

> Scheduler rest api JSON is not providing all child queues names
> ---
>
> Key: YARN-6279
> URL: https://issues.apache.org/jira/browse/YARN-6279
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, fairscheduler, scheduler
>Affects Versions: 2.4.1
> Environment: Ubuntu 14.04, 7.7 GiB, i5, 3.4GHz x 4, 64-bit
>Reporter: Ashish Doneriya
>
> When I hit the REST API /ws/v1/cluster/scheduler to get the JSON output, it 
> gave me all child queue information, but it didn't give me all information 
> about the child queues of child queues: it displays only one sub-child 
> queue, while the XML format has no such problem.
> I'm providing the XML and JSON outputs.
> 
> {"scheduler":{"schedulerInfo":{"type":"fairScheduler","rootQueue":{"maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":8192,"vCores":8},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root","schedulingPolicy":"fair","childQueues":[{"maxApps":20,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":5283,"vCores":2},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":5283,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering","schedulingPolicy":"fair","childQueues":{"type":["fairSchedulerLeafQueueInfo"],"maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.Development","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0},"childQueues":{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.TESTING","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}},{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2909,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.default","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}]
> 
> 
> <scheduler>
>   <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="fairScheduler">
>     <rootQueue>
>       <maxApps>2147483647</maxApps>
>       <minResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </minResources>
>       <maxResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </maxResources>
>       <usedResources>
>         <memory>0</memory>
>         <vCores>0</vCores>
>       </usedResources>
>       <fairResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </fairResources>
>       <clusterResources>
>         <memory>8192</memory>
>         <vCores>8</vCores>
>       </clusterResources>
>       <queueName>root</queueName>
>       <schedulingPolicy>fair</schedulingPolicy>
>       <childQueues>
>         <maxApps>20</maxApps>
>         <minResources>
>           <memory>1024</memory>
>           <vCores>1</vCores>
>         </minResources>
>         <maxResources>
>           <memory>5283</memory>
>           <vCores>2</vCores>
>         </maxResources>
>         <usedResources>
>           <memory>0</memory>
>           <vCores>0</vCores>
>         </usedResources>
>         <fairResources>
>           <memory>5283</memory>
>           <vCores>0</vCores>
>         </fairResources>
>         <clusterResources>
>           <memory>8192</memory>
>           <vCores>8</vCores>
>         </clusterResources>
>         <queueName>root.Engineering</queueName>
>         <schedulingPolicy>fair</schedulingPolicy>
>         <childQueues xsi:type="fairSchedulerLeafQueueInfo">
>           <maxApps>2147483647</maxApps>

[jira] [Updated] (YARN-5068) Expose scheduler queue to application master

2017-03-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5068:
-
Fix Version/s: (was: 2.8.0)

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 3.0.0-alpha1
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068.1.patch, 
> YARN-5068.2.patch, YARN-5068-branch-2.1.patch
>
>
> The AM needs to know the queue name in which it was launched.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2017-03-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3126:
-
Fix Version/s: (was: 2.8.0)

> FairScheduler: queue's usedResource is always more than the maxResource limit
> -
>
> Key: YARN-3126
> URL: https://issues.apache.org/jira/browse/YARN-3126
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0
> Environment: hadoop2.3.0. fair scheduler. spark 1.1.0. 
>Reporter: Xia Hu
>Assignee: Yufei Gu
>  Labels: BB2015-05-TBR, assignContainer, fairscheduler, resources
> Fix For: trunk-win
>
> Attachments: resourcelimit-02.patch, resourcelimit.patch, 
> resourcelimit-test.patch
>
>
> When submitting a Spark application (in both spark-on-yarn-cluster and 
> spark-on-yarn-client mode), the queue's usedResources assigned by the 
> FairScheduler can always exceed the queue's maxResources limit.
> From reading the FairScheduler code, I suppose this happens because the 
> requested resources are not checked when assigning a container.
> Here is the detail:
> 1. Choose a queue. In this step, assignContainerPreCheck verifies that the 
> queue's usedResource is not already bigger than its max.
> 2. Then choose an app in that queue.
> 3. Then choose a container. And here is the problem: there is no check 
> whether this container would push the queue's resources over its max limit. 
> If a queue's usedResource is 13G and the maxResource limit is 16G, a 
> container asking for 4G may still be assigned successfully (see the sketch 
> below).
> This problem will always happen with Spark applications, because we can ask 
> for different container resources in different applications.
> By the way, I have already applied the patch from YARN-2083.
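
A minimal sketch of the missing check described in step 3, assuming plain 
memory sizes in GB; the real scheduler works on {{Resource}} objects, and the 
names here are hypothetical:

{code}
public class QueueLimitCheck {
  // Reject the container if granting it would push the queue past its max.
  static boolean canAssign(long usedGb, long maxGb, long requestGb) {
    return usedGb + requestGb <= maxGb;
  }

  public static void main(String[] args) {
    // The example from the report: 13G used, 16G max, 4G request.
    System.out.println(canAssign(13, 16, 4)); // false: 17G would exceed 16G
  }
}
{code}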



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6360) Prevent FS state dump logger from cramming other log files

2017-03-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929522#comment-15929522
 ] 

Hadoop QA commented on YARN-6360:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6360 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859234/YARN-6360.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  |
| uname | Linux 01dc0e5771a0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15308/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15308/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Prevent FS state dump logger from cramming other log files
> --
>
> Key: YARN-6360
> URL: https://issues.apache.org/jira/browse/YARN-6360
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6360.001.patch
>
>
> The FS state dump could end up in multiple log files if its logger inherits 
> its parents' appenders. We should prevent that so the state dump logger 
> cannot cram other log files; a sketch of the idea follows below.
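
A minimal sketch of the idea using the log4j 1.x API; the logger name and 
file name are assumptions for illustration, not the actual patch:

{code}
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class StateDumpLoggerSetup {
  public static void main(String[] args) throws Exception {
    // Hypothetical logger and file names for illustration.
    Logger stateDump = Logger.getLogger("FairSchedulerStateDump");
    // Without this, events also flow to the parents' appenders and the
    // state dump gets duplicated into other log files.
    stateDump.setAdditivity(false);
    stateDump.addAppender(new FileAppender(
        new PatternLayout("%d{ISO8601} %m%n"), "fair-scheduler-statedump.log"));
    stateDump.info("state dump goes only to its own file");
  }
}
{code}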



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929509#comment-15929509
 ] 

Yufei Gu commented on YARN-6217:


If we remove the timeout in this test, it may be consistent to also remove 
the timeouts of the other two tests, {{testHierarchicalSubDirectoryCreation}} 
and {{testMinimumPerDirectoryFileLimit}}.

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one-second timeout.  If the test machine hits an I/O hiccup, it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified (see the sketch below).
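
To make the concern concrete, a JUnit 4 sketch (illustrative, not the real 
test): the first style fails on any slow disk, while the second only trips on 
a genuine hang.

{code}
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TimeoutStyles {
  // Too aggressive: any I/O hiccup on the build machine fails the test.
  @Test(timeout = 1000)
  public void tightTimeout() {
    assertTrue(doDirectoryWork());
  }

  // Safer: either drop the timeout entirely or make it generous enough
  // that only a real hang can trip it.
  @Test(timeout = 10 * 60 * 1000)
  public void generousTimeout() {
    assertTrue(doDirectoryWork());
  }

  private boolean doDirectoryWork() {
    return true; // stand-in for the directory state changes the real test does
  }
}
{code}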



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org