[jira] [Commented] (YARN-2162) Fair Scheduler: ability to optionally configure minResources and maxResources in terms of percentage

2017-08-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109622#comment-16109622
 ] 

Yufei Gu commented on YARN-2162:


The test failures are unrelated.

> Fair Scheduler: ability to optionally configure minResources and maxResources 
> in terms of percentage
> 
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed as 
> absolute numbers ("X mb, Y vcores"). 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We could circumvent this problem if we could optionally configure these 
> properties as a percentage of cluster capacity. 
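For illustration, a hypothetical allocation-file snippet showing the kind of 
percentage-based syntax proposed here; the exact syntax is whatever the patch 
defines, and the queue names and values below are made up:

{code}
<allocations>
  <queue name="analytics">
    <!-- Hypothetical percentage form: scales with the cluster, so nothing
         needs recalculating when nodes are added or removed. -->
    <minResources>10.0%</minResources>
    <maxResources>60.0%</maxResources>
  </queue>
  <queue name="batch">
    <!-- Existing absolute form keeps working. -->
    <minResources>4096 mb, 4 vcores</minResources>
    <maxResources>16384 mb, 16 vcores</maxResources>
  </queue>
</allocations>
{code}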



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6895) [FairScheduler] Preemption reservation may cause regular reservation leaks

2017-08-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109616#comment-16109616
 ] 

Yufei Gu commented on YARN-6895:


Thanks [~miklos.szeg...@cloudera.com] for the patch. One question: if a node 
without any preemption reservation releases resources smaller than the 
preemption resource request, does the scheduler still do the normal reservation? 

I was wondering whether it would be easier and cleaner to put 
{{resourcesPreemptedForApp}}, {{appIdToAppMap}} and {{totalResourcesPreempted}} 
into one single class. In that case, we may be able to get rid of 
{{appIdToAppMap}} and {{totalResourcesPreempted}} as well, and handle locking 
nicely.

Some nits:
- Need to expand this line: {{import static org.junit.Assert.*;}}
- Extra space on this line: {{return resourcesPreemptedForApp.containsKey(app);}}
- The comment "Reserve only, if not reserved for preempted resources," seems 
confusing to me; could you rewrite this comment block?
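To make the suggestion concrete, a minimal sketch of such a consolidated 
tracker (hypothetical class and method names, not the actual patch):

{code}
// Hedged sketch: one object owns all per-app preemption bookkeeping
// behind a single lock. Names here are illustrative only.
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class PreemptedResourceTracker {
  // Would replace resourcesPreemptedForApp and appIdToAppMap.
  private final Map<ApplicationAttemptId, Resource> preemptedPerApp =
      new HashMap<>();
  // Would replace totalResourcesPreempted; kept in sync under the same lock.
  private final Resource total = Resources.createResource(0, 0);

  public synchronized void addPreemption(ApplicationAttemptId app,
      Resource res) {
    Resource current = preemptedPerApp.computeIfAbsent(
        app, k -> Resources.createResource(0, 0));
    Resources.addTo(current, res);
    Resources.addTo(total, res);
  }

  public synchronized boolean isPreemptedFor(ApplicationAttemptId app) {
    return preemptedPerApp.containsKey(app);
  }

  public synchronized Resource getTotal() {
    return Resources.clone(total);
  }
}
{code}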


> [FairScheduler] Preemption reservation may cause regular reservation leaks
> --
>
> Key: YARN-6895
> URL: https://issues.apache.org/jira/browse/YARN-6895
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: YARN-6895.000.patch
>
>
> We found a limitation in the implementation of YARN-6432. If the container 
> released is smaller than the preemption request, a node reservation is 
> created that is never deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109563#comment-16109563
 ] 

Hadoop QA commented on YARN-6872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 69 unchanged - 0 fixed = 70 total (was 69) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879882/YARN-6872-addendum.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7f79a5e72546 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 91f120f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16648/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16648/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16648/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109558#comment-16109558
 ] 

Varun Saxena commented on YARN-6130:


It seems the patch won't be picked up by Hadoop QA for YARN-5355-branch-2. Any 
idea how to name it so that it's picked up?

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) Fair Scheduler: ability to optionally configure minResources and maxResources in terms of percentage

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109550#comment-16109550
 ] 

Hadoop QA commented on YARN-2162:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 357 unchanged - 5 fixed = 357 total (was 362) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-2162 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879876/YARN-2162.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4d2ae4dbffa3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109546#comment-16109546
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-6130 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879895/YARN-6130-YARN-5355-branch-2.01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16652/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109539#comment-16109539
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-6130 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879895/YARN-6130-YARN-5355-branch-2.01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16651/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-01 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Attachment: YARN-6130-YARN-5355-branch-2.01.patch

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355-branch-2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-08-01 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6102:
---
Attachment: YARN-6102-YARN-5355-branch-2.addendum.patch

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: eventOrder.JPG, YARN-6102.01.patch, YARN-6102.02.patch, 
> YARN-6102.03.patch, YARN-6102.04.patch, YARN-6102.05.patch, 
> YARN-6102.06.patch, YARN-6102.07.patch, YARN-6102-branch-2.001.patch, 
> YARN-6102-branch-2.002-addednum.patch, YARN-6102-branch-2.002.patch, 
> YARN-6102-YARN-5355-branch-2.addendum.patch
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> I also noticed the same stack trace when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the node heartbeat is sent to the RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  if RM failover is triggered before the event is sent to the dispatcher 
> through 
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}}, 
> the dispatcher is reset.
> However, the new dispatcher is first started and only then are the event 
> handlers registered, in 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}.
> So the event order looks like:
> 1. A node heartbeat is sent to {{ResourceTrackerService}}.
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, RM failover is triggered 
> before the event is passed to the dispatcher.
> 3. During RM failover, the current active resets the dispatcher in 
> reinitialize, i.e. {{resetDispatcher();}} + {{createAndInitActiveServices();}}.
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error: at the point when the {{STATUS_UPDATE}} event is 
> handed to the dispatcher in {{ResourceTrackerService}}, the new dispatcher 
> (from the failover) may be started but not yet registered for events.
> Using the same steps (pausing the JVM in a debugger), I was able to reproduce 
> this in a production cluster as well: a {{STATUS_UPDATE}} active-service 
> event where the service has yet to forward the event to the RM dispatcher, 
> a failover is called, and the dispatcher reset is between 
> {{resetDispatcher();}} and {{createAndInitActiveServices();}}
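To see the window concretely, a toy, self-contained sketch (not YARN's 
{{AsyncDispatcher}}): starting the dispatcher thread before registering 
handlers lets an event slip through and fail with the same kind of "no 
handler" error quoted above.

{code}
// Toy dispatcher, not YARN's AsyncDispatcher: start() before register()
// leaves a window where an event finds no handler, mirroring the
// "Error in dispatcher thread ... No handler registered" log above.
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyDispatcher {
  private final BlockingQueue<String> events = new LinkedBlockingQueue<>();
  private final Map<String, Runnable> handlers = new ConcurrentHashMap<>();

  public void start() {
    Thread t = new Thread(() -> {
      while (true) {
        String type;
        try {
          type = events.take();
        } catch (InterruptedException e) {
          return;
        }
        Runnable h = handlers.get(type);
        if (h == null) {
          throw new IllegalStateException("No handler for " + type);
        }
        h.run();
      }
    });
    t.setDaemon(true);
    t.start();
  }

  public void register(String type, Runnable handler) {
    handlers.put(type, handler);
  }

  public void handle(String type) {
    events.add(type);
  }

  public static void main(String[] args) throws Exception {
    ToyDispatcher d = new ToyDispatcher();
    d.start();                        // started first, as in reinitialize
    d.handle("STATUS_UPDATE");        // event arrives in the window
    Thread.sleep(100);                // dispatcher thread dies with the error
    d.register("STATUS_UPDATE", () -> { });  // registration comes too late
  }
}
{code}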



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-08-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109510#comment-16109510
 ] 

Varun Saxena commented on YARN-6102:


Cherry-picked this to YARN-5355-branch-2 too (along with the branch-2 
addendum). But another addendum was required for YARN-5355-branch-2, so I'm 
updating the addendum patch.

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: eventOrder.JPG, YARN-6102.01.patch, YARN-6102.02.patch, 
> YARN-6102.03.patch, YARN-6102.04.patch, YARN-6102.05.patch, 
> YARN-6102.06.patch, YARN-6102.07.patch, YARN-6102-branch-2.001.patch, 
> YARN-6102-branch-2.002-addednum.patch, YARN-6102-branch-2.002.patch
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> I also noticed the same stack trace when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the node heartbeat is sent to the RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  if RM failover is triggered before the event is sent to the dispatcher 
> through 
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}}, 
> the dispatcher is reset.
> However, the new dispatcher is first started and only then are the event 
> handlers registered, in 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}.
> So the event order looks like:
> 1. A node heartbeat is sent to {{ResourceTrackerService}}.
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, RM failover is triggered 
> before the event is passed to the dispatcher.
> 3. During RM failover, the current active resets the dispatcher in 
> reinitialize, i.e. {{resetDispatcher();}} + {{createAndInitActiveServices();}}.
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error: at the point when the {{STATUS_UPDATE}} event is 
> handed to the dispatcher in {{ResourceTrackerService}}, the new dispatcher 
> (from the failover) may be started but not yet registered for events.
> Using the same steps (pausing the JVM in a debugger), I was able to reproduce 
> this in a production cluster as well: a {{STATUS_UPDATE}} active-service 
> event where the service has yet to forward the event to the RM dispatcher, 
> a failover is called, and the dispatcher reset is between 
> {{resetDispatcher();}} and {{createAndInitActiveServices();}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-01 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-6900:
--
Attachment: YARN-6900-YARN-2915-001.patch

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Inigo Goiri
> Attachments: YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already popularly used 
> for {{RMStateStore}}.
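For context, a minimal sketch of what a ZooKeeper-backed write could look like 
with Apache Curator; the znode layout and payload below are assumptions for 
illustration, not the attached patch:

{code}
// Hedged sketch: persisting a record under a federation znode via Curator.
import java.nio.charset.StandardCharsets;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZKFederationStoreSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();
    // Assumed znode layout; the real patch defines its own paths and records.
    String path = "/federationstore/memberships/subcluster-1";
    byte[] data = "serialized-subcluster-info".getBytes(StandardCharsets.UTF_8);
    if (zk.checkExists().forPath(path) == null) {
      zk.create().creatingParentsIfNeeded().forPath(path, data);
    } else {
      zk.setData().forPath(path, data);
    }
    zk.close();
  }
}
{code}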



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-01 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-6900:
--
Attachment: (was: YARN-6900-YARN-2915-001.patch)

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Inigo Goiri
> Attachments: YARN-6900-YARN-2915-000.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already popularly used 
> for {{RMStateStore}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: YARN-6903.yarn-native-services.02.patch

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch
>
>
> There are some new features in YARN core, like rich placement scheduling, 
> container auto restart, and container upgrade, that the native-service 
> framework can take advantage of. Besides, there is quite a lot of legacy code 
> which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make use 
> of various advanced features in YARN. 
> The new code design will be in alignment with what we have designed for the 
> service API (YARN-4793).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: (was: YARN-6903.yarn-native-services.02.patch)

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch
>
>
> There are some new features in YARN core, like rich placement scheduling, 
> container auto restart, and container upgrade, that the native-service 
> framework can take advantage of. Besides, there is quite a lot of legacy code 
> which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make use 
> of various advanced features in YARN. 
> The new code design will be in alignment with what we have designed for the 
> service API (YARN-4793).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: YARN-6903.yarn-native-services.02.patch

v2 fixes some Jenkins issues.

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch
>
>
> There are some new features in YARN core, like rich placement scheduling, 
> container auto restart, and container upgrade, that the native-service 
> framework can take advantage of. Besides, there is quite a lot of legacy code 
> which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make use 
> of various advanced features in YARN. 
> The new code design will be in alignment with what we have designed for the 
> service API (YARN-4793).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6788) Improve performance of resource profile branch

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6788:
--
Attachment: YARN-6788-YARN-3926.019.patch

Quickly updating the patch after addressing lock-related comments from 
[~templedf].

Please help to check the latest patch.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch, 
> YARN-6788-YARN-3926.014.patch, YARN-6788-YARN-3926.015.patch, 
> YARN-6788-YARN-3926.016.patch, YARN-6788-YARN-3926.017.patch, 
> YARN-6788-YARN-3926.018.patch, YARN-6788-YARN-3926.019.patch
>
>
> Currently we see about a 15% performance delta with this branch. 
> This JIRA tracks a few performance improvements to close that gap.
> This patch will also handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore

2017-08-01 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-6900:
--
Attachment: YARN-6900-YARN-2915-001.patch

> ZooKeeper based implementation of the FederationStateStore
> --
>
> Key: YARN-6900
> URL: https://issues.apache.org/jira/browse/YARN-6900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Inigo Goiri
> Attachments: YARN-6900-YARN-2915-000.patch, 
> YARN-6900-YARN-2915-001.patch
>
>
> YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only 
> support SQL-based stores; this JIRA tracks adding a ZooKeeper-based 
> implementation to simplify deployment, as ZooKeeper is already popularly used 
> for {{RMStateStore}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8

2017-08-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109445#comment-16109445
 ] 

Wangda Tan commented on YARN-6901:
--

[~jlowe],

You're right. I checked the code of the problematic cluster, and it is a 
little bit different from branch-2.8: when it allocates a reserved container, 
the leaf queue's synchronized lock isn't acquired properly. So I think the 
deadlock should not exist in branch-2.8. 

However, I think the fix attached to the JIRA should get into branch-2.8 in 
any case, since it fixes a bad pattern that acquires the lock of an 
upper-level component while holding the lock of a lower-level component; this 
could likely cause deadlock in the future.

Downgrading priority to critical since this is more of a preventive fix.
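To make the bad pattern concrete, a toy sketch of the ordering inversion 
(generic Java, not the scheduler code): one thread locks the queue then the 
app, the other locks the app then the queue, and each waits on the other.

{code}
// Hedged illustration of the lock-ordering deadlock described above.
// "app" and "queue" stand in for the lower- and upper-level components.
public class LockOrderDemo {
  static final Object app = new Object();    // lower-level component lock
  static final Object queue = new Object();  // upper-level component lock

  public static void main(String[] args) {
    Thread scheduler = new Thread(() -> {
      synchronized (queue) {                 // queue -> app (top-down)
        sleep(100);
        synchronized (app) { }
      }
    });
    Thread preemption = new Thread(() -> {
      synchronized (app) {                   // app -> queue (bottom-up: bad)
        sleep(100);
        synchronized (queue) { }
      }
    });
    scheduler.start();
    preemption.start();                      // with the sleeps, this deadlocks
  }

  static void sleep(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
  }
}
{code}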



> A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 
> 
>
> Key: YARN-6901
> URL: https://issues.apache.org/jira/browse/YARN-6901
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6901.branch-2.8.001.patch
>
>
> Stacktrace:
> {code}
> Thread 22068: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent()
>  @bci=0, line=185 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath()
>  @bci=8, line=262 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=183, line=80 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=204, line=747 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=16, line=49 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=61, line=468 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode)
>  @bci=148, line=876 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode)
>  @bci=157, line=1149 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent)
>  @bci=266, line=1277 (Compiled frame)
> 
>  Thread 22124: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers()
>  @bci=0, line=336 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.preemptFrom(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp,
>  org.apache.hadoop.yarn.api.records.Resource, 

[jira] [Updated] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8

2017-08-01 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6901:
-
Priority: Critical  (was: Blocker)

> A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 
> 
>
> Key: YARN-6901
> URL: https://issues.apache.org/jira/browse/YARN-6901
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-6901.branch-2.8.001.patch
>
>
> Stacktrace:
> {code}
> Thread 22068: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent()
>  @bci=0, line=185 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath()
>  @bci=8, line=262 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=183, line=80 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=204, line=747 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=16, line=49 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=61, line=468 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode)
>  @bci=148, line=876 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode)
>  @bci=157, line=1149 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent)
>  @bci=266, line=1277 (Compiled frame)
> 
>  Thread 22124: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers()
>  @bci=0, line=336 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.preemptFrom(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp,
>  org.apache.hadoop.yarn.api.records.Resource, java.util.Map, java.util.List, 
> org.apache.hadoop.yarn.api.records.Resource, java.util.Map, 
> org.apache.hadoop.yarn.api.records.Resource) @bci=61, line=277 (Compiled 
> frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.selectCandidates(java.util.Map,
>  org.apache.hadoop.yarn.api.records.Resource, 
> org.apache.hadoop.yarn.api.records.Resource) @bci=374, line=138 (Compiled 
> frame)
>  - 
> 

[jira] [Assigned] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-6874:


Assignee: Vrushali C

> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-01 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109434#comment-16109434
 ] 

Vrushali C commented on YARN-6874:
--

This seems to be happening often enough now. I also saw this in the build 
report on YARN-6820.

> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6872:
--
Attachment: YARN-6872-addendum.001.patch

We could hit the same issue when using non-exclusive node labels. Updating an 
addendum patch to cover that scenario as well.
Thanks [~leftnoteasy] and [~jianhe]

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch, YARN-6872-addendum.001.patch
>
>
> Post YARN-6031, a few apps could fail during recovery if they had label 
> requirements for the AM and labels were disabled post RM restart/switchover. 
> As discussed in YARN-6031, it's better to keep such apps running, as they may 
> be long-running apps.
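A minimal sketch of the kind of guard being discussed, under the assumption 
that the fix relaxes the AM's label expression during recovery; the class and 
method names here are hypothetical, not the actual patch:

{code}
// Hedged sketch: when node labels are disabled, drop the AM's label
// expression during recovery instead of failing the app.
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public final class RecoveryLabelGuard {
  private RecoveryLabelGuard() { }

  public static void relaxAmLabel(ResourceRequest amRequest,
      boolean nodeLabelsEnabled) {
    if (!nodeLabelsEnabled
        && amRequest.getNodeLabelExpression() != null) {
      // Fall back to the default partition so the recovered app can run.
      amRequest.setNodeLabelExpression("");
    }
  }
}
{code}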



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109423#comment-16109423
 ] 

Hadoop QA commented on YARN-6876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
32s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 
127 unchanged - 7 fixed = 127 total (was 134) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 517 unchanged - 10 fixed = 531 total (was 527) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
14s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 43s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestNMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879863/YARN-6876-trunk.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux aa990f6faca8 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (YARN-5349) TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart fail intermittently

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109399#comment-16109399
 ] 

Hadoop QA commented on YARN-5349:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5349 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879866/YARN-5349.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0f46f6ccaf09 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b38a1ee |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16646/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16646/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16646/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart  fail 
> intermittently
> 

[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109380#comment-16109380
 ] 

Hudson commented on YARN-6872:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12090 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12090/])
YARN-6872. Ensure apps could run given NodeLabels are disabled post RM (jianhe: 
rev 91f120f743662c6e037e8f21b1792e81d58ac664)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java


> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch
>
>
> Post YARN-6031, a few apps could fail during recovery if they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it's better to run such apps, 
> as they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2162) Fair Scheduler :ability to optionally configure minResources and maxResources in terms of percentage

2017-08-01 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-2162:
---
Attachment: YARN-2162.002.patch

Uploaded patch v2 to address style issues and the unit test failure.

> Fair Scheduler :ability to optionally configure minResources and maxResources 
> in terms of percentage
> 
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our Hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-08-01 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109356#comment-16109356
 ] 

Jonathan Hung commented on YARN-6322:
-

Thanks Xuan!

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-6322-YARN-5734.001.patch, 
> YARN-6322-YARN-5734.002.patch, YARN-6322-YARN-5734.003.patch, 
> YARN-6322-YARN-5734.004.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109343#comment-16109343
 ] 

Vrushali C commented on YARN-6820:
--

I will add the license to the new unit test file added in this patch, and will 
fix the checkstyle comments. The unit test failure is unrelated to the patch 
and is being tracked in YARN-6874.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data 
> and no other user can read any data; this check can also be turned off so that 
> all users can read all data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, and created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
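As a sketch of the simple whitelist variant described above (both property 
names below are invented purely to illustrate the idea; nothing here is final):

{code}
Configuration conf = new YarnConfiguration();
// Hypothetical keys: enable the read-access check and list the users
// allowed to read all ATSv2 data; turning the check off lets everyone read.
conf.setBoolean("yarn.timeline-service.read.auth.enabled", true);
conf.set("yarn.timeline-service.read.allowed.users", "admin,analytics");
{code}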



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109281#comment-16109281
 ] 

Haibo Chen commented on YARN-6920:
--

Sure. Will take a look.

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109277#comment-16109277
 ] 

Sunil G commented on YARN-6872:
---

Test case failures are known.

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch
>
>
> Post YARN-6031, a few apps could fail during recovery if they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it's better to run such apps, 
> as they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6921) Allow resource request to opt out of oversubscription

2017-08-01 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6921:
-
Issue Type: Sub-task  (was: Task)
Parent: YARN-1011

> Allow resource request to opt out of oversubscription
> -
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> Guaranteed container requests, whether their enforce tag is true or not, are 
> by default eligible for oversubscription, and thus can get OPPORTUNISTIC 
> container allocations. We should allow them to opt out when their enforce tag 
> is set to true.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6921) Allow resource request to opt out of oversubscription

2017-08-01 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6921:


 Summary: Allow resource request to opt out of oversubscription
 Key: YARN-6921
 URL: https://issues.apache.org/jira/browse/YARN-6921
 Project: Hadoop YARN
  Issue Type: Task
  Components: scheduler
Affects Versions: 3.0.0-alpha3
Reporter: Haibo Chen
Assignee: Haibo Chen


Guaranteed container requests, whether their enforce tag is true or not, are by 
default eligible for oversubscription, and thus can get OPPORTUNISTIC container 
allocations. We should allow them to opt out when their enforce tag is set to true.
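For illustration, a minimal sketch of a request that sets the enforce flag 
using the existing {{ExecutionTypeRequest}} API (the opt-out-of-oversubscription 
semantics are what this JIRA proposes, not current behavior):

{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// A GUARANTEED request with enforceExecutionType=true; under this proposal
// the scheduler would not satisfy it with an OPPORTUNISTIC allocation.
ResourceRequest newEnforcedGuaranteedRequest() {
  ResourceRequest req = ResourceRequest.newInstance(
      Priority.newInstance(1), ResourceRequest.ANY,
      Resource.newInstance(1024, 1), 1);
  req.setExecutionTypeRequest(
      ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true));
  return req;
}
{code}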



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6861) Reader API for sub application entities

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109260#comment-16109260
 ] 

Hadoop QA commented on YARN-6861:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 8 new + 
34 unchanged - 0 fixed = 42 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 70 new + 0 unchanged - 0 fixed = 70 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
22s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6861 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6802) Add Max AM Resource and AM Resource Usage to Leaf Queue View in FairScheduler WebUI

2017-08-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109246#comment-16109246
 ] 

Yufei Gu commented on YARN-6802:


Hi [~daemon], it doesn't apply since HADOOP-1187 is not in branch-2.

> Add Max AM Resource and AM Resource Usage to Leaf Queue View in FairScheduler 
> WebUI
> ---
>
> Key: YARN-6802
> URL: https://issues.apache.org/jira/browse/YARN-6802
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> YARN-6802.001.patch, YARN-6802.002.patch, YARN-6802.003.patch
>
>
> The RM web UI should support viewing leaf queue AM resource usage. 
> !screenshot-2.png!
> I will upload my patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109197#comment-16109197
 ] 

Arun Suresh edited comment on YARN-6920 at 8/1/17 4:31 PM:
---

It looks like it's been failing after YARN-6706. [~haibochen] Can you take a 
look?


was (Author: asuresh):
It looks like it's been failing after YARN-6706. [~haibo.chen] Can you take a 
look?

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5349) TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart fail intermittently

2017-08-01 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5349:
-
Attachment: YARN-5349.001.patch

Attaching a patch that accounts for containers potentially being allocated 
immediately or across multiple allocate calls.  I also lowered the sleep 
durations a bit when polling.
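The polling pattern described above is roughly the following (a sketch only; 
{{am}} and {{expected}} stand in for the test's AM handle and expected 
container count, this is not the actual patch code):

{code}
List<Container> allocated = new ArrayList<>();
long deadline = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(30);
// Containers may arrive on the first allocate call or be spread across
// several heartbeats, so accumulate until enough arrive or time runs out.
while (allocated.size() < expected && System.currentTimeMillis() < deadline) {
  AllocateResponse response = am.allocate(
      new ArrayList<ResourceRequest>(), new ArrayList<ContainerId>());
  allocated.addAll(response.getAllocatedContainers());
  Thread.sleep(100); // shorter sleep between polls
}
assertEquals(expected, allocated.size());
{code}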

> TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart  fail 
> intermittently
> -
>
> Key: YARN-5349
> URL: https://issues.apache.org/jira/browse/YARN-5349
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 2.8.1
>Reporter: sandflee
>Priority: Minor
> Attachments: YARN-5349.001.patch
>
>
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart.testUAMRecoveryOnRMWorkPreservingRestart(TestWorkPreservingRMRestart.java:1463)
> {noformat}
> https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestWorkPreservingRMRestart/testUAMRecoveryOnRMWorkPreservingRestart/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5349) TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart fail intermittently

2017-08-01 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-5349:


 Assignee: Jason Lowe
Affects Version/s: 2.8.1
 Target Version/s: 2.8.2

> TestWorkPreservingRMRestart#testUAMRecoveryOnRMWorkPreservingRestart  fail 
> intermittently
> -
>
> Key: YARN-5349
> URL: https://issues.apache.org/jira/browse/YARN-5349
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 2.8.1
>Reporter: sandflee
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-5349.001.patch
>
>
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart.testUAMRecoveryOnRMWorkPreservingRestart(TestWorkPreservingRMRestart.java:1463)
> {noformat}
> https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestWorkPreservingRMRestart/testUAMRecoveryOnRMWorkPreservingRestart/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6909) The performance advantages of YARN-6679 are lost when resource types are used

2017-08-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109213#comment-16109213
 ] 

Daniel Templeton commented on YARN-6909:


Either way is fine with me.

> The performance advantages of YARN-6679 are lost when resource types are used
> -
>
> Key: YARN-6909
> URL: https://issues.apache.org/jira/browse/YARN-6909
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Priority: Critical
>  Labels: newbie++
>
> YARN-6679 added the {{SimpleResource}} as a lightweight replacement for 
> {{ResourcePBImpl}} when a protobuf isn't needed.  With resource types enabled 
> and anything other than memory and CPU defined, {{ResourcePBImpl}} will 
> always be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6920) fix TestNMClient failure due to YARN-6706

2017-08-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6920:
--
Summary: fix TestNMClient failure due to YARN-6706  (was: fix TestNMClient 
failure)

> fix TestNMClient failure due to YARN-6706
> -
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6920) fix TestNMClient failure

2017-08-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109197#comment-16109197
 ] 

Arun Suresh commented on YARN-6920:
---

It looks like it's been failing after YARN-6706. [~haibo.chen] Can you take a 
look?

> fix TestNMClient failure
> 
>
> Key: YARN-6920
> URL: https://issues.apache.org/jira/browse/YARN-6920
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Haibo Chen
>
> Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA 
> to track the fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6920) fix TestNMClient failure

2017-08-01 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6920:
-

 Summary: fix TestNMClient failure
 Key: YARN-6920
 URL: https://issues.apache.org/jira/browse/YARN-6920
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Haibo Chen


Looks like {{TestNMClient}} has been failing for a while. Opening this JIRA to 
track the fix.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6876) Create an abstract log writer for extendability

2017-08-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6876:

Attachment: YARN-6876-trunk.002.patch

> Create an abstract log writer for extendability
> ---
>
> Key: YARN-6876
> URL: https://issues.apache.org/jira/browse/YARN-6876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, 
> YARN-6876-trunk.002.patch
>
>
> Currently, the TFile log writer is used to aggregate logs in YARN. We need to 
> add an abstract layer, and pick the correct log writer based on the 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6875) New aggregated log file format for YARN log aggregation.

2017-08-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109148#comment-16109148
 ] 

Xuan Gong commented on YARN-6875:
-

Thanks for the suggestion, [~leftnoteasy].
The approach looks fine, but it would introduce extra complexity once we enable 
compression for log aggregation. Instead of appending a UUID + block_id after 
every fixed N bytes, we could append them after every aggregated log, which 
might be easier given that we compress every log type separately.
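A rough sketch of the per-log-type layout being discussed (names such as 
{{uuid}}, {{codec}}, {{containerLogs}} and {{out}} are placeholders for 
illustration, not the actual patch):

{code}
// One compressed block per log type, each followed by a single UUID
// delimiter, instead of a UUID + block_id after every fixed N bytes.
byte[] delimiter = uuid.toString().getBytes(StandardCharsets.UTF_8);
for (File log : containerLogs) {
  CompressionOutputStream compressed = codec.createOutputStream(out);
  try (InputStream in = new FileInputStream(log)) {
    IOUtils.copyBytes(in, compressed, 4096, false);
  }
  compressed.finish();  // flush this log type's compressed block
  out.write(delimiter); // one marker per aggregated log
}
{code}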

> New aggregated log file format for YARN log aggregation.
> 
>
> Key: YARN-6875
> URL: https://issues.apache.org/jira/browse/YARN-6875
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6875-NewLogAggregationFormat-design-doc.pdf
>
>
> T-file is the underlying log format for the aggregated logs in YARN. We have 
> seen several performance issues, especially for very large log files.
> We will introduce a new log format which has better performance for large 
> log files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6322) Disable queue refresh when configuration mutation is enabled

2017-08-01 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109134#comment-16109134
 ] 

Xuan Gong commented on YARN-6322:
-

Committed into YARN-5734 branch. Thanks, Jonathan!

> Disable queue refresh when configuration mutation is enabled
> 
>
> Key: YARN-6322
> URL: https://issues.apache.org/jira/browse/YARN-6322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-6322-YARN-5734.001.patch, 
> YARN-6322-YARN-5734.002.patch, YARN-6322-YARN-5734.003.patch, 
> YARN-6322-YARN-5734.004.patch
>
>
> When configuration mutation is enabled, the configuration store is the source 
> of truth. Calling {{-refreshQueues}} won't work as intended, so we should 
> just disable this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6861) Reader API for sub application entities

2017-08-01 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6861:

Attachment: YARN-6861-YARN-5355.002.patch

As per an offline discussion with Varun and Vrushali, I have updated the patch 
with the following changes (a sample read call against the new endpoints is 
sketched below). 
# Modified the REST interface to supplement the user id, i.e.
#*  /users/$userid/entities/$entitytype/$entityid
#*  /users/$userid/entities/$entitytype
# Added a test
# Refactored SubApplicationEntityReader.
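For example, reading a single entity through the first endpoint could look like 
this (the host and entity names are placeholders; the base path and port follow 
the existing ATSv2 reader defaults):

{code}
URL url = new URL("http://timeline-reader-host:8188/ws/v2/timeline/"
    + "users/user1/entities/MY_ENTITY_TYPE/entity-0");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestProperty("Accept", "application/json");
try (BufferedReader reader = new BufferedReader(
    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
  String line;
  while ((line = reader.readLine()) != null) {
    System.out.println(line); // JSON for the requested sub-application entity
  }
}
{code}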

> Reader API for sub application entities
> ---
>
> Key: YARN-6861
> URL: https://issues.apache.org/jira/browse/YARN-6861
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6861-YARN-5355.001.patch, 
> YARN-6861-YARN-5355.002.patch
>
>
> YARN-6733 and YARN-6734 write data into the sub application table. There 
> should be a way to read those entities.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-08-01 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109100#comment-16109100
 ] 

Konstantinos Karanasos commented on YARN-6593:
--

Thanks for the feedback, [~jianhe].

bq. should we have an example class to demonstrate how to use the APIs for 
different scenarios as mentioned in the document? I think that's useful for 
users.
I have some first examples in {{PlacementConstraints}}. Do you think we should 
add them in a different class?

bq. looks like the allocationTag is modeled as a single key, I think a 
key/value pair will be more flexible ? Like I want to associate different 
dimensions of informations to the container.
We talked a lot about this with [~leftnoteasy] and [~arun.sur...@gmail.com], 
and decided it would be easier for users to add the tags as values with a null 
key, given that tags do not have values. We can still add multiple tags to a 
container. Can you give an example that we cannot support with the current 
proposal?
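To make the tag discussion concrete, a sketch in the style of the 
{{PlacementConstraints}} examples (method names are from the patches under 
review and may still change):

{code}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

// Node-scope anti-affinity to containers carrying either tag. Tags are
// plain values (no key), but several tags can be attached to one container.
PlacementConstraint constraint =
    build(targetNotIn(NODE, allocationTag("hbase-master", "hbase-rs")));
{code}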

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, 
> YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch
>
>
> Just removed Fixed version and moved it to target version as we set fix 
> version only after patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6494) add mounting of HDFS Short-Circuit path for docker containers

2017-08-01 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109066#comment-16109066
 ] 

Eric Badger commented on YARN-6494:
---

bq. It seems there are several use cases where a default bind list would be 
useful, so I'd say let's pursue that route.
Filed YARN-6919

> add mounting of HDFS Short-Circuit path for docker containers
> -
>
> Key: YARN-6494
> URL: https://issues.apache.org/jira/browse/YARN-6494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jaeboo Jeong
>Assignee: Jaeboo Jeong
> Attachments: YARN-6494.001.patch, YARN-6494.002.patch
>
>
> Currently there is an error message about HDFS short-circuit reads when a 
> docker container starts.
> {code}
> WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error 
> creating DomainSocket
> java.net.ConnectException: connect(2) error: No such file or directory when 
> trying to connect to ‘xxx’
> at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
> at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
> at 
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:752)
> ...
> {code}
> if dfs.client.read.shortcircuit is true and dfs.domain.socket.path isn't 
> empty, we need to mount a volume for the short-circuit path.
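The condition in the description amounts to a client-config check like the 
following sketch (the mount handling itself would live in the container 
runtime; the printed mapping is only for illustration):

{code}
Configuration conf = new HdfsConfiguration();
boolean shortCircuit = conf.getBoolean("dfs.client.read.shortcircuit", false);
String socketPath = conf.get("dfs.domain.socket.path", "");
if (shortCircuit && !socketPath.isEmpty()) {
  // The domain socket lives on the host, so its directory must be
  // bind-mounted into the docker container for short-circuit reads to work.
  String dir = new File(socketPath).getParent();
  System.out.println("bind mount needed: " + dir + ":" + dir);
}
{code}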



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6494) add mounting of HDFS Short-Circuit path for docker containers

2017-08-01 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109065#comment-16109065
 ] 

Shane Kumpf commented on YARN-6494:
---

[~ebadger] thanks for the feedback and I agree with your points. With 
administrator involvement, we reduce the chance of surprise, which I believe is 
important. 

It seems there are several use cases where a default bind list would be useful, 
so I'd say let's pursue that route.

> add mounting of HDFS Short-Circuit path for docker containers
> -
>
> Key: YARN-6494
> URL: https://issues.apache.org/jira/browse/YARN-6494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jaeboo Jeong
>Assignee: Jaeboo Jeong
> Attachments: YARN-6494.001.patch, YARN-6494.002.patch
>
>
> Currently there is an error message about HDFS short-circuit reads when a 
> docker container starts.
> {code}
> WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error 
> creating DomainSocket
> java.net.ConnectException: connect(2) error: No such file or directory when 
> trying to connect to ‘xxx’
> at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
> at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
> at 
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:752)
> ...
> {code}
> if dfs.client.read.shortcircuit is true and dfs.domain.socket.path isn't 
> empty, we need to mount a volume for the short-circuit path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6919) Add default volume mount list

2017-08-01 Thread Eric Badger (JIRA)
Eric Badger created YARN-6919:
-

 Summary: Add default volume mount list
 Key: YARN-6919
 URL: https://issues.apache.org/jira/browse/YARN-6919
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Reporter: Eric Badger
Assignee: Eric Badger


Piggybacking on YARN-5534, we should create a default list that bind-mounts 
selected volumes into all docker containers. This list will be empty by default.
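A hypothetical illustration of what such a list could look like once it is 
configurable (the property name below is invented for illustration; there is no 
patch yet):

{code}
Configuration conf = new YarnConfiguration();
// Hypothetical key: a comma-separated src:dst list bind-mounted into every
// docker container; empty by default so nothing is mounted implicitly.
conf.setStrings("yarn.nodemanager.runtime.linux.docker.default-mounts",
    "/var/lib/hadoop-hdfs:/var/lib/hadoop-hdfs");
{code}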



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6789) new api to get all supported resources from RM

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109057#comment-16109057
 ] 

Sunil G commented on YARN-6789:
---

cc/ [~leftnoteasy]

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6789-YARN-3926.001.patch
>
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6918) YarnAuthorizationProvider remove permission on queue delete

2017-08-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6918:
---
Description: On queue removal during refresh, ACLs need to be removed from YARN

> YarnAuthorizationProvider remove permission on queue delete
> ---
>
> Key: YARN-6918
> URL: https://issues.apache.org/jira/browse/YARN-6918
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> On queue removal during refresh, ACLs need to be removed from YARN



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6918) YarnAuthorizationProvider remove permission on queue delete

2017-08-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6918:
---
Description: On queue removal during refresh, the ACL for the deleted queue 
needs to be removed from allAcls to avoid a leak  (was: On queue removal during 
refresh, ACLs need to be removed from YARN)

> YarnAuthorizationProvider remove permission on queue delete
> ---
>
> Key: YARN-6918
> URL: https://issues.apache.org/jira/browse/YARN-6918
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> On queue removal during refresh, the ACL for the deleted queue needs to be 
> removed from allAcls to avoid a leak
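A minimal sketch of the cleanup being described, assuming an authorizer that 
keeps a map from queue entities to their ACLs (names are illustrative, not the 
actual patch):

{code}
// On queue deletion during refresh, drop the queue's entry from the
// authorizer's ACL map so stale entries do not accumulate (the "leak").
private final ConcurrentMap<PrivilegedEntity, Map<AccessType, AccessControlList>>
    allAcls = new ConcurrentHashMap<>();

void removePermission(PrivilegedEntity deletedQueue) {
  allAcls.remove(deletedQueue);
}
{code}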



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread abhishek bharani (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109048#comment-16109048
 ] 

abhishek bharani commented on YARN-6914:


Below is the information from the NM logs:

2017-08-01 10:19:50,510 ERROR org.apache.spark.network.util.LevelDBProvider: 
error opening leveldb file 
/usr/local/hadoop/tmp/nm-local-dir/registeredExecutors.ldb.  Creating new file, 
will not be able to recover state for existing applications
org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: 
/usr/local/hadoop/tmp/nm-local-dir/registeredExecutors.ldb/LOCK: No such file 
or directory
at 
org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at 
org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:48)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:116)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:94)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.<init>(ExternalShuffleBlockHandler.java:65)
at 
org.apache.spark.network.yarn.YarnShuffleService.serviceInit(YarnShuffleService.java:166)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:143)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:245)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:261)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:495)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
2017-08-01 10:19:50,511 WARN org.apache.spark.network.util.LevelDBProvider: 
error deleting /usr/local/hadoop/tmp/nm-local-dir/registeredExecutors.ldb
2017-08-01 10:19:50,511 INFO org.apache.hadoop.service.AbstractService: Service 
spark_shuffle failed in state INITED; cause: java.io.IOException: Unable to 
create state store
java.io.IOException: Unable to create state store
at 
org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:77)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:116)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:94)
at 
org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.<init>(ExternalShuffleBlockHandler.java:65)
at 
org.apache.spark.network.yarn.YarnShuffleService.serviceInit(YarnShuffleService.java:166)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:143)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:245)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:261)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:495)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: 
/usr/local/hadoop/tmp/nm-local-dir/registeredExecutors.ldb/LOCK: No such file 
or directory
at 
org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at 

[jira] [Created] (YARN-6918) YarnAuthorizationProvider remove permission on queue delete

2017-08-01 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-6918:
--

 Summary: YarnAuthorizationProvider remove permission on queue 
delete
 Key: YARN-6918
 URL: https://issues.apache.org/jira/browse/YARN-6918
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109045#comment-16109045
 ] 

Sunil G commented on YARN-6788:
---

Test case failures are not related

cc/ [~leftnoteasy] [~templedf]

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch, 
> YARN-6788-YARN-3926.014.patch, YARN-6788-YARN-3926.015.patch, 
> YARN-6788-YARN-3926.016.patch, YARN-6788-YARN-3926.017.patch, 
> YARN-6788-YARN-3926.018.patch
>
>
> Currently we see roughly a 15% performance delta with this branch. 
> This JIRA tracks a few performance improvements to close that gap.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6789) new api to get all supported resources from RM

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6789:
--
Attachment: YARN-6789-YARN-3926.001.patch

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6789-YARN-3926.001.patch
>
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6789) new api to get all supported resources from RM

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6789:
--
Attachment: (was: YARN-6789.001.patch)

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6789) new api to get all supported resources from RM

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6789:
--
Attachment: YARN-6789.001.patch

Attaching an initial version of the patch.

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109033#comment-16109033
 ] 

Hadoop QA commented on YARN-6788:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
52s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3926 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 193 unchanged - 17 fixed = 204 total (was 210) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 58s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109001#comment-16109001
 ] 

Bibin A Chundatt commented on YARN-6916:


[~ajisakaa]
Could you rebase the patch?

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108993#comment-16108993
 ] 

Bibin A Chundatt commented on YARN-6872:


{quote}
During recovery of containers from the node manager, if the recovered container has 
a label and node labels are disabled in the cluster, we can assign that container to 
the default label. This helps to handle the metrics issue correctly.
{quote}
I don't see any issue with this change.

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch
>
>
> Post YARN-6031, a few apps could fail during recovery provided they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it's better to run such apps, as 
> they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread abhishek bharani (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108990#comment-16108990
 ] 

abhishek bharani commented on YARN-6914:


/Users/abhishekbharani/Desktop/nodemanager.log

> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_000002 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting the below error while running 
> spark-shell --master yarn:
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_000002 exited with exitCode: -1000
> For more detailed output, check the application tracking 
> page: http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001 
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml:
> {code}
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> {code}
> I tried many solutions but none of them worked:
> 1. Added the property 
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage 
> to yarn-site.xml with the value 98.5.
> 2. Added the property 
> yarn.nodemanager.aux-services.spark_shuffle.class = 
> org.apache.spark.network.yarn.YarnShuffleService to yarn-site.xml.
> 3. Added the property 
> spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to 
> spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6914) Application application_1501553373419_0001 failed 2 times due to AM Container for appattempt_1501553373419_0001_000002 exited with exitCode: -1000

2017-08-01 Thread abhishek bharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

abhishek bharani updated YARN-6914:
---
Comment: was deleted

(was: /Users/abhishekbharani/Desktop/nodemanager.log)

> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_000002 exited with exitCode: -1000
> --
>
> Key: YARN-6914
> URL: https://issues.apache.org/jira/browse/YARN-6914
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.3
> Environment: Mac OS
>Reporter: abhishek bharani
>Priority: Critical
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am getting the below error while running 
> spark-shell --master yarn:
> Application application_1501553373419_0001 failed 2 times due to AM Container 
> for appattempt_1501553373419_0001_000002 exited with exitCode: -1000
> For more detailed output, check the application tracking 
> page: http://abhisheks-mbp:8088/cluster/app/application_1501553373419_0001 
> Then, click on links to logs of each attempt.
> Diagnostics: null
> Failing this attempt. Failing the application.
> Below are the contents of yarn-site.xml:
> {code}
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>     <value>3600</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resourcetracker.address</name>
>     <value>${yarn.resourcemanager.hostname}:8025</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>${yarn.resourcemanager.hostname}:8035</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>${yarn.resourcemanager.hostname}:8055</value>
>     <description>Enter your ResourceManager hostname.</description>
>   </property>
>   <property>
>     <description>The http address of the RM web application.</description>
>     <name>yarn.resourcemanager.webapp.address</name>
>     <value>${yarn.resourcemanager.hostname}:8088</value>
>   </property>
> </configuration>
> {code}
> I tried many solutions but none of them worked:
> 1. Added the property 
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage 
> to yarn-site.xml with the value 98.5.
> 2. Added the property 
> yarn.nodemanager.aux-services.spark_shuffle.class = 
> org.apache.spark.network.yarn.YarnShuffleService to yarn-site.xml.
> 3. Added the property 
> spark.yarn.jars=hdfs://localhost:50010/users/spark/jars/*.jar to 
> spark-defaults.conf.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6494) add mounting of HDFS Short-Circuit path for docker containers

2017-08-01 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108978#comment-16108978
 ] 

Eric Badger commented on YARN-6494:
---

bq. At a minimum, we'd need the ability to turn this off for containers that 
don't need the HDFS socket, but I feel that it would be better to have a more 
holistic approach, which is what I hope YARN-5534 can become. The reason I 
believe we shouldn't hard code mounts is that not every container will require 
that mount.
I generally agree with this approach, but adding the volume whitelist doesn't 
fix the potential security issue. It only sort of mitigates it. If the 
administrator allows for the socket to be in the whitelist, then any container 
can ask for it. So yes, existing containers and/or containers that the attacker 
does not control upon startup will not have this bind-mounted in. However, if 
the attacker is the one submitting the job, they'll just ask for the socket to 
be bind-mounted and will be granted that request. Basically what I'm trying to 
get at is that if the administrator allows short-circuit reads, they are taking 
the potential security risk. At that point, I'm not sure if it matters whether 
all containers have the socket or just the ones that asked for it, especially 
when the attacker can explicitly ask for it. 

I think both points can be resolved by letting the administrator decide their 
destiny here. We can use YARN-5534 to create a whitelist of volumes that the 
jobs can specify. Then, as we touched upon in [this 
comment|https://issues.apache.org/jira/browse/YARN-5534?focusedCommentId=16093026=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16093026]
 in YARN-5534, we can create a default bind list, which is empty by default. If 
the administrator wants the short-circuit socket for all containers, they can 
add it to the default list. If they only want it for certain containers, they 
can add it to the whitelist and let users ask for it. 
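To make the proposal concrete, here is a minimal sketch of the policy described above: an admin-maintained whitelist of requestable mounts plus a default bind list that is empty by default. All class and method names here are hypothetical, not the actual YARN-5534 implementation:

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of an admin-controlled mount policy.
public class MountPolicy {
  // Admin-configured: paths users may request (e.g. the HDFS domain socket dir).
  private final Set<String> whitelist;
  // Admin-configured: paths bind-mounted into every container; empty by default.
  private final Set<String> defaultBinds;

  public MountPolicy(Set<String> whitelist, Set<String> defaultBinds) {
    this.whitelist = whitelist;
    this.defaultBinds = defaultBinds;
  }

  // Returns the mounts to apply: the default binds plus any whitelisted
  // user-requested paths; a non-whitelisted request fails the launch.
  public Set<String> resolveMounts(List<String> requested) {
    Set<String> mounts = new HashSet<>(defaultBinds);
    for (String path : requested) {
      if (!whitelist.contains(path)) {
        throw new IllegalArgumentException("Mount not whitelisted: " + path);
      }
      mounts.add(path);
    }
    return mounts;
  }
}
{code}

With this shape, an administrator who wants the short-circuit socket everywhere adds it to the default bind list; one who wants it opt-in adds it only to the whitelist.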

> add mounting of HDFS Short-Circuit path for docker containers
> -
>
> Key: YARN-6494
> URL: https://issues.apache.org/jira/browse/YARN-6494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jaeboo Jeong
>Assignee: Jaeboo Jeong
> Attachments: YARN-6494.001.patch, YARN-6494.002.patch
>
>
> Currently there is an error message about HDFS short-circuit when a docker 
> container starts.
> {code}
> WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error 
> creating DomainSocket
> java.net.ConnectException: connect(2) error: No such file or directory when 
> trying to connect to ‘xxx’
> at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
> at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
> at 
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:752)
> ...
> {code}
> If dfs.client.read.shortcircuit is true and dfs.domain.socket.path is not 
> empty, we need to mount a volume for the short-circuit path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6917) Queue path is recomputed from scratch on every allocation

2017-08-01 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-6917:


 Summary: Queue path is recomputed from scratch on every allocation
 Key: YARN-6917
 URL: https://issues.apache.org/jira/browse/YARN-6917
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.8.1
Reporter: Jason Lowe
Priority: Minor


As part of the discussion in YARN-6901 I noticed that we are recomputing a 
queue's path for every allocation.  Currently getting the queue's path involves 
calling getQueuePath on the parent then building onto that string with the 
basename of the queue.  In turn the parent's getQueuePath method does the same, 
so we end up spending time recomputing a string that will never change until a 
reconfiguration.

Ideally the queue path should be computed once during queue initialization 
rather than on-demand.
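A minimal sketch of the proposed fix, using simplified names rather than the actual AbstractCSQueue/LeafQueue types:

{code}
// Sketch: cache the queue path once at initialization instead of walking
// the parent chain and rebuilding the string on every allocation.
class SimpleQueue {
  private final SimpleQueue parent;
  private final String name;
  private volatile String cachedQueuePath;

  SimpleQueue(SimpleQueue parent, String name) {
    this.parent = parent;
    this.name = name;
    this.cachedQueuePath = computeQueuePath();
  }

  private String computeQueuePath() {
    return parent == null ? name : parent.getQueuePath() + "." + name;
  }

  // O(1) on the allocation hot path.
  String getQueuePath() {
    return cachedQueuePath;
  }

  // Recompute only on reconfiguration (must run top-down over the hierarchy).
  void reinitialize() {
    cachedQueuePath = computeQueuePath();
  }
}
{code}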



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8

2017-08-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108904#comment-16108904
 ] 

Jason Lowe commented on YARN-6901:
--

bq. This may not be possible, ideally moving an app from one queue to another 
need to lock queue, and assign container need to lock queue as well. It should 
be safe.

I'm not sure I understand.  Could you elaborate on how the deadlock is 
happening?  Looking at the branch-2.8 code, I don't see how LeafQueue is 
calling assignContainers on the app without holding a lock on the queue.  
Therefore I'm confused why the app's assignContainers is blocking on a queue 
lock it should already have.

> A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 
> 
>
> Key: YARN-6901
> URL: https://issues.apache.org/jira/browse/YARN-6901
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-6901.branch-2.8.001.patch
>
>
> Stacktrace:
> {code}
> Thread 22068: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent()
>  @bci=0, line=185 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath()
>  @bci=8, line=262 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=183, line=80 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=204, line=747 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=16, line=49 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode,
>  org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 
> @bci=61, line=468 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource,
>  
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode,
>  org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode)
>  @bci=148, line=876 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode)
>  @bci=157, line=1149 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent)
>  @bci=266, line=1277 (Compiled frame)
> 
>  Thread 22124: (state = BLOCKED)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers()
>  @bci=0, line=336 (Compiled frame)
>  - 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.preemptFrom(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp,
>  org.apache.hadoop.yarn.api.records.Resource, java.util.Map, java.util.List, 
> org.apache.hadoop.yarn.api.records.Resource, java.util.Map, 
> 

[jira] [Commented] (YARN-5648) [ATSv2 Security] Client side changes for authentication

2017-08-01 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108893#comment-16108893
 ] 

Varun Saxena commented on YARN-5648:


Had to add an addendum for YARN-5355-branch-2.
Attaching the committed patch.

> [ATSv2 Security] Client side changes for authentication
> ---
>
> Key: YARN-5648
> URL: https://issues.apache.org/jira/browse/YARN-5648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-5648-YARN-5355.02.patch, 
> YARN-5648-YARN-5355.03.patch, YARN-5648-YARN-5355.04.patch, 
> YARN-5648-YARN-5355-branch-2.addendum.01.patch, 
> YARN-5648-YARN-5355.wip.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5648) [ATSv2 Security] Client side changes for authentication

2017-08-01 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5648:
---
Attachment: YARN-5648-YARN-5355-branch-2.addendum.01.patch

> [ATSv2 Security] Client side changes for authentication
> ---
>
> Key: YARN-5648
> URL: https://issues.apache.org/jira/browse/YARN-5648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-5648-YARN-5355.02.patch, 
> YARN-5648-YARN-5355.03.patch, YARN-5648-YARN-5355.04.patch, 
> YARN-5648-YARN-5355-branch-2.addendum.01.patch, 
> YARN-5648-YARN-5355.wip.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2017-08-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108883#comment-16108883
 ] 

Gergely Novák commented on YARN-6894:
-

It's kind of logical that if you define the _queue_ parameter you wish to see 
the applications actually in the queue. As I see it, there might be several 
possible solutions:
# Update the documentation to something like this: "queue - applications that 
are currently in this queue"
# Return all the applications that were submitted to that queue 
# Introduce another query parameter that returns all the applications that were 
submitted to a queue

Any thoughts?
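For reference, the current behavior can be illustrated against the documented endpoint (the hostname below is a placeholder):

{code}
# filters by state across all apps, finished ones included
http://<rm-host>:8088/ws/v1/cluster/apps?states=FINISHED

# returns only apps currently active (NEW ... RUNNING) in the queue,
# even though finished apps were also submitted to it
http://<rm-host>:8088/ws/v1/cluster/apps?queue=default
{code}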

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Priority: Minor
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING).  This behavior is inconsistent with the API.  If there is 
> a sound reason behind this, it should be documented, and it seems like there 
> might be one, as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108810#comment-16108810
 ] 

Hadoop QA commented on YARN-6872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879808/YARN-6872.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1163462ee16e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b38a1ee |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16640/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16640/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16640/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ensure apps could run given NodeLabels are disabled post RM switchover/restart

[jira] [Commented] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108807#comment-16108807
 ] 

Hadoop QA commented on YARN-5219:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 7 new + 119 unchanged - 0 fixed = 126 total (was 119) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879810/YARN-5219.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d036fda1a5a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b38a1ee |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16641/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16641/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16641/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16641/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5219:
--
Attachment: YARN-5219.007.patch

Thanks, [~suma.shivaprasad].
Updating a new patch as per the comments.
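As a sketch of what the fix aims at (hypothetical helper, not the actual ContainerLaunch code), the script writer can follow each export with an explicit exit-code check, so that an invalid export like the abc="${foo.bar}" case described below aborts the launch immediately:

{code}
// Hypothetical sketch: emit each export followed by a failure check so a
// bad substitution or invalid variable name fails the whole launch script.
static void writeExport(java.io.PrintStream out, String var, String value) {
  out.println("export " + var + "=\"" + value + "\"");
  out.println("if [ $? -ne 0 ]; then");
  out.println("  echo \"Failed to export " + var + "\" 1>&2");
  out.println("  exit 1");
  out.println("fi");
}
{code}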

> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Hitesh Shah
>Assignee: Sunil G
> Attachments: YARN-5219.001.patch, YARN-5219.003.patch, 
> YARN-5219.004.patch, YARN-5219.005.patch, YARN-5219.006.patch, 
> YARN-5219.007.patch, YARN-5219-branch-2.001.patch
>
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get setup properly either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results in either confusing error messages if the process 
> fails to launch or worse yet the process launches but then starts behaving 
> wrongly if the env var is used to control some behavioral aspects. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\{foo.bar}" which is invalid as var names cannot contain "." in bash. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108730#comment-16108730
 ] 

Sunil G commented on YARN-5742:
---

Kicking this jira again.

For the YARN UI, there is ongoing work on showing logs in a better format that is 
easier for developers. For this, we were using the below APIs from AHS:
# AHSWebServices.getLogs
# AHSWebServices.getContainerLogsInfo

As mentioned by [~vinodkv] in the above comment, it's better to host the servlet 
inside the reader server. This work could help to handle the log viewer module in 
the new YARN UI.
cc/[~rohithsharma] [~varun_saxena] [~vrushalic] [~jrottinghuis]

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
> Attachments: YARN-5742-POC-v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6898) RM node labels page should display total used resources of each label.

2017-08-01 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R resolved YARN-6898.
-
Resolution: Won't Fix

As discussed above, this information is already available on the scheduler page, 
and optimizations are also coming in the new UI, so at present additional effort 
is not required unless some information is blocked.
Thanks for the active discussion, [~daemon] & [~sunilg].

> RM node labels page should display total used resources of each label.
> --
>
> Key: YARN-6898
> URL: https://issues.apache.org/jira/browse/YARN-6898
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
>
> The RM node labels page only shows the *Label Name*, *Label Type*, *Num Of Active 
> NMs*, and *Total Resource* 
> information of each node label, but there isn't any place for us to see the 
> total used resource of a node label.
> The total used resource of a node label is very important, because we can 
> use it to check the overall load for that 
> label. We will implement it. Any suggestions?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6872:
--
Attachment: YARN-6872.003.patch

During recovery of containers from the node manager, if the recovered container has 
a label and node labels are disabled in the cluster, we can assign that container to 
the default label. This helps to handle the metrics issue correctly.
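Roughly, the normalization would look like the following sketch (simplified names; the empty string stands for the default label, as in CommonNodeLabelsManager.NO_LABEL):

{code}
// Sketch: when node labels are disabled cluster-wide, map any label
// expression on a recovered container to the default (empty) label so
// queue metrics are charged to the right partition.
static String normalizeRecoveredLabel(String label, boolean labelsEnabled) {
  if (!labelsEnabled && label != null && !label.isEmpty()) {
    return "";  // default label, i.e. CommonNodeLabelsManager.NO_LABEL
  }
  return label;
}
{code}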

cc/[~jianhe] [~leftnoteasy] [~rohithsharma]

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch
>
>
> Post YARN-6031, a few apps could fail during recovery provided they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it's better to run such apps, as 
> they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108700#comment-16108700
 ] 

Hadoop QA commented on YARN-6872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879794/YARN-6872.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cc007828ceaf 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b38a1ee |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16639/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16639/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16639/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: 

[jira] [Updated] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-01 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6872:
--
Attachment: YARN-6872.002.patch

Attaching a new patch addressing Jian's comments.

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch
>
>
> Post YARN-6031, a few apps could fail during recovery provided they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it's better to run such apps, as 
> they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6505) Define the strings used in SLS JSON input file format

2017-08-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108614#comment-16108614
 ] 

Gergely Novák commented on YARN-6505:
-

Thanks [~yufeigu] for your comments; I addressed them all:
# IMO, for consistency we have to include the prefix (container, or task, see 
point 3), as all the other parameters (start.ms, end.ms, etc.) have it. But you 
are right, this is an incompatible change, so I added that label.
# Fixed.
# Changed all the task-related input configurations to "task." and "TASK_XXX". 
I'm not sure if you suggested changing only the variable names or the JSON as 
well; I went with the latter since it is already an incompatible change and - 
as you suggested - I find it cleaner and more logical.

The unit tests passed locally and on Jenkins, too.
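For readers following along, this is roughly how a per-task entry in the SLS job trace would look after the rename (sketched from the documented SLS format; the task.* keys replace the former container.* keys as proposed here):

{code}
{
  "am.type" : "mapreduce",
  "job.start.ms" : 0,
  "job.end.ms" : 95375,
  "job.queue.name" : "sls_queue_1",
  "job.id" : "job_1",
  "job.user" : "default",
  "job.tasks" : [ {
    "task.host" : "/default-rack/node1",
    "task.start.ms" : 6664,
    "task.end.ms" : 23707,
    "task.priority" : 20,
    "task.type" : "map"
  } ]
}
{code}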

> Define the strings used in SLS JSON input file format
> -
>
> Key: YARN-6505
> URL: https://issues.apache.org/jira/browse/YARN-6505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>  Labels: incompatible, newbie
> Attachments: YARN-6505.001.patch, YARN-6505.002.patch, 
> YARN-6505.003.patch
>
>
> We could put them in a Java file like what YarnConfiguration does.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6505) Define the strings used in SLS JSON input file format

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108606#comment-16108606
 ] 

Hadoop QA commented on YARN-6505:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6505 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879790/YARN-6505.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cacd97b725e5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b38a1ee |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16638/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16638/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Define the strings used in SLS JSON input file format
> -
>
> Key: YARN-6505
> URL: https://issues.apache.org/jira/browse/YARN-6505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>  Labels: incompatible, newbie
> Attachments: YARN-6505.001.patch, YARN-6505.002.patch, 
> YARN-6505.003.patch
>
>
> We could put them in a Java file like what YarnConfiguration does.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (YARN-6802) Add Max AM Resource and AM Resource Usage to Leaf Queue View in FairScheduler WebUI

2017-08-01 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108575#comment-16108575
 ] 

YunFan Zhou commented on YARN-6802:
---

[~yufeigu] Hi Yufei, I tested it, and my patch seems to work on branch-2?

> Add Max AM Resource and AM Resource Usage to Leaf Queue View in FairScheduler 
> WebUI
> ---
>
> Key: YARN-6802
> URL: https://issues.apache.org/jira/browse/YARN-6802
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> YARN-6802.001.patch, YARN-6802.002.patch, YARN-6802.003.patch
>
>
> The RM web UI should support viewing leaf queue AM resource usage. 
> !screenshot-2.png!
> I will upload my patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6505) Define the strings used in SLS JSON input file format

2017-08-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-6505:

Labels: incompatible newbie  (was: newbie)

> Define the strings used in SLS JSON input file format
> -
>
> Key: YARN-6505
> URL: https://issues.apache.org/jira/browse/YARN-6505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>  Labels: incompatible, newbie
> Attachments: YARN-6505.001.patch, YARN-6505.002.patch, 
> YARN-6505.003.patch
>
>
> We could put them in a Java file like what YarnConfiguration does.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6505) Define the strings used in SLS JSON input file format

2017-08-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-6505:

Attachment: YARN-6505.003.patch

> Define the strings used in SLS JSON input file format
> -
>
> Key: YARN-6505
> URL: https://issues.apache.org/jira/browse/YARN-6505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>  Labels: incompatible, newbie
> Attachments: YARN-6505.001.patch, YARN-6505.002.patch, 
> YARN-6505.003.patch
>
>
> We could put them in a Java file like what YarnConfiguration does.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108544#comment-16108544
 ] 

Hadoop QA commented on YARN-6820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
6s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 231 unchanged - 0 fixed = 235 total (was 231) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 29s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108522#comment-16108522
 ] 

Hadoop QA commented on YARN-6820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 231 unchanged - 0 fixed = 236 total (was 231) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 40s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |

[jira] [Commented] (YARN-5977) ContainerManagementProtocol changes to support change of container ExecutionType

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108515#comment-16108515
 ] 

Hadoop QA commented on YARN-5977:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 28s{color} 
| {color:red} root generated 1 new + 1338 unchanged - 0 fixed = 1339 total (was 
1338) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 5 new + 368 unchanged 
- 1 fixed = 373 total (was 369) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
13s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 
36s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 33s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
38s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108488#comment-16108488
 ] 

Hadoop QA commented on YARN-6916:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6916 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879784/YARN-6712.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16637/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6916:

Attachment: YARN-6712.01.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6916
> URL: https://issues.apache.org/jira/browse/YARN-6916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6916) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-6916:
---

 Summary: Moving logging APIs over to slf4j in 
hadoop-yarn-server-common
 Key: YARN-6916
 URL: https://issues.apache.org/jira/browse/YARN-6916
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6842) Implement a new access type for queue

2017-08-01 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108487#comment-16108487
 ] 

YunFan Zhou commented on YARN-6842:
---

[~Naganarasimha] Thanks, Naganarasimha G R.

Maybe I do need a sufficiently rich set of scenarios to get people interested 
in this feature. Thank you very much. I will continue to work on this and make 
it attractive enough.

> Implement a new access type for queue
> -
>
> Key: YARN-6842
> URL: https://issues.apache.org/jira/browse/YARN-6842
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Attachments: YARN-6842.001.patch, YARN-6842.002.patch, 
> YARN-6842.003.patch
>
>
> At present, the only thing we can do to access the applications of a queue is
> become an administrator of the queue.
> But sometimes we only want to authorize someone to view the applications of a
> queue, without any modify operations.
> The current mechanism offers no way to do this, so I will implement a new
> access type for queues to solve this problem.
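
For illustration, a minimal sketch of how the proposal could extend the existing access types; the {{VIEW_APPLICATIONS}} constant below is hypothetical, not part of the current API:

{noformat}
// Existing YARN queue access types, plus a hypothetical view-only
// type in the spirit of this proposal (the name is illustrative).
public enum QueueACL {
  SUBMIT_APPLICATIONS,  // may submit applications to the queue
  ADMINISTER_QUEUE,     // may administer the queue (includes viewing)
  VIEW_APPLICATIONS     // proposed: may view applications, not modify
}
{noformat}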



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn

2017-08-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned YARN-6712:
---

Assignee: (was: Akira Ajisaka)
 Summary: Moving logging APIs over to slf4j in hadoop-yarn  (was: Moving 
logging APIs over to slf4j in hadoop-yarn-server-common)

> Moving logging APIs over to slf4j in hadoop-yarn
> 
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-08-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108482#comment-16108482
 ] 

Akira Ajisaka commented on YARN-6712:
-

Given YARN-6873 is a sub-task of this issue, I'll make this issue an umbrella
JIRA.

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6898) RM node labels page should display total used resources of each label.

2017-08-01 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108479#comment-16108479
 ] 

YunFan Zhou commented on YARN-6898:
---

[~sunilg] [~Naganarasimha] Thanks a lot. Based on your suggestions, this JIRA
doesn't really add value after all.
Anyway, thank you for your attention and the good advice on this JIRA.
Please help me close this JIRA, thanks!

> RM node labels page should display total used resources of each label.
> --
>
> Key: YARN-6898
> URL: https://issues.apache.org/jira/browse/YARN-6898
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
>
> The RM node labels page only shows the *Label Name*, *Label Type*, *Num Of
> Active NMs*, and *Total Resource* of each node label; there is no place to see
> the total used resources of a node label.
> The total used resources of a node label are very important, because we can
> use them to check the overall load on the label. We will implement it. Any
> suggestions?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.0001.patch

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: (was: YARN-6820-YARN-5355.0001.patch)

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-08-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108472#comment-16108472
 ] 

Sunil G commented on YARN-3254:
---

In {{DirectoryCollection.isDiskUsageOverPercentageLimit()}}, we use 
{{File#getUsableSpace}} and {{File#getTotalSpace}} to get disk usage 
information and mark a disk as FULL if its used space is over a threshold. So 
it's mostly about disk space for now.

In the latest patch, the new variables are renamed to {{diskFullLocalDirsList}} etc.
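
For reference, a rough sketch of the check described above; {{maxUsableSpacePercent}} is an illustrative threshold variable, not the actual DirectoryCollection code:

{noformat}
// Illustrative sketch only; not the actual DirectoryCollection code.
// maxUsableSpacePercent is a hypothetical threshold, e.g. 90.0f.
File dir = new File("/data/yarn/local");
long total = dir.getTotalSpace();
long usable = dir.getUsableSpace();
float usedPercent =
    total == 0 ? 100.0f : (total - usable) * 100.0f / total;
boolean diskFull = usedPercent > maxUsableSpacePercent;
{noformat}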

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch, 
> YARN-3254-003.patch, YARN-3254-004.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to ResourceManager that "local/log dir is bad" and the message 
> is displayed on ResourceManager Web UI. It's difficult for users to detect 
> why the dir is bad.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.0001.patch

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: (was: YARN-6888-YARN-5355.0001.patch)

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-01 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6888-YARN-5355.0001.patch

Attaching v001.

Looking for early feedback on the implementation. This patch adds an API to 
the reader. I have added some tests for HBaseTimelineReaderImpl in the 
context of this patch. 

TODO:
I have yet to complete the documentation updates. 
I would also like to add a web services test; I am still figuring out how to 
pass the remote user to the URL connection in the test case.
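
As a side note, one possible shape for that web services test, assuming simple (pseudo) authentication where the caller can be passed via the {{user.name}} query parameter; the URL and user name below are made up:

{noformat}
// Illustrative only: under simple/pseudo auth the remote user can be
// supplied via the user.name query parameter; names are made up.
URL url = new URL(
    "http://localhost:8188/ws/v2/timeline/apps?user.name=reader1");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// A user outside the read whitelist should be rejected.
assertEquals(HttpURLConnection.HTTP_FORBIDDEN, conn.getResponseCode());
{noformat}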


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6888-YARN-5355.0001.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6873) Moving logging APIs over to slf4j in hadoop-yarn-server-applicationhistoryservice

2017-08-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108460#comment-16108460
 ] 

Akira Ajisaka commented on YARN-6873:
-

+1 for adding a helper method. Thanks Wenxin.

> Moving logging APIs over to slf4j in 
> hadoop-yarn-server-applicationhistoryservice
> -
>
> Key: YARN-6873
> URL: https://issues.apache.org/jira/browse/YARN-6873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6873.001.patch, YARN-6873.002.patch, 
> YARN-6873.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6873) Moving logging APIs over to slf4j in hadoop-yarn-server-applicationhistoryservice

2017-08-01 Thread Wenxin He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108458#comment-16108458
 ] 

Wenxin He commented on YARN-6873:
-

Is it a good idea to add a helper method in hadoop-common to determine whether 
the LOG is a Log4j implementation, like this?

{noformat}
  // Requires org.slf4j.Logger, org.slf4j.LoggerFactory and
  // org.slf4j.impl.Log4jLoggerAdapter on the classpath.
  public static boolean isLog4jLogger(Class<?> clazz) {
    if (clazz == null) {
      return false;
    }
    // slf4j returns a Log4jLoggerAdapter when log4j is the bound backend.
    Logger log = LoggerFactory.getLogger(clazz);
    return log instanceof Log4jLoggerAdapter;
  }
{noformat}

In this way, we do not have to care which adapter we use, and it prevents us 
from such bugs.
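
A hypothetical caller could then guard Log4j-specific logic like this (names are illustrative):

{noformat}
// Illustrative usage of the proposed helper: skip log4j-specific
// logic when another slf4j backend is bound.
if (isLog4jLogger(MyService.class)) {
  // Safe to cast appenders, tweak log4j levels, etc.
}
{noformat}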

> Moving logging APIs over to slf4j in 
> hadoop-yarn-server-applicationhistoryservice
> -
>
> Key: YARN-6873
> URL: https://issues.apache.org/jira/browse/YARN-6873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6873.001.patch, YARN-6873.002.patch, 
> YARN-6873.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


