[jira] [Commented] (YARN-7620) Allow partition filters on Queues page

2017-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285673#comment-16285673
 ] 

ASF GitHub Bot commented on YARN-7620:
--

Github user skmvasu commented on the issue:

https://github.com/apache/hadoop/pull/310
  
https://user-images.githubusercontent.com/567228/33823736-2448d88e-de82-11e7-82f6-c62a64cf5190.png
https://user-images.githubusercontent.com/567228/33823737-2489c998-de82-11e7-9b07-21e68b91b855.png

Makes the nodelabel dropdown searchable


> Allow partition filters on Queues page
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>
> Allow users to filter their queues based on node labels






[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285682#comment-16285682
 ] 

genericqa commented on YARN-7632:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 16 new + 19 unchanged - 2 fixed = 35 total (was 21) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901458/YARN-7632.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux de623631352e 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2edc4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18860/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop

[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285734#comment-16285734
 ] 

Sunil G commented on YARN-7632:
---

Hi [~suma.shivaprasad]

The patch seems fine, but the checkstyle issues should be fixable since many of 
them are indentation issues.

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285746#comment-16285746
 ] 

Sunil G commented on YARN-7632:
---

Sorry, I missed a couple more things, and we have to fix checkstyle, hence 
sharing the same.

# Could we move toSet to a util class? Many classes copy this. (See the 
sketch after this list.)
# NODE1_MEMORY is 16 and it's the same for all nodes. We could have only one 
variable.
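
A minimal sketch of such a shared helper, assuming a varargs signature (the 
follow-up patch reuses {{toSet}} from TestUtils; the exact shape is up to that 
patch):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public final class TestUtils {
  private TestUtils() {
  }

  // Collects the given elements into a Set, replacing the per-test copies.
  @SafeVarargs
  public static <E> Set<E> toSet(E... elements) {
    return new HashSet<>(Arrays.asList(elements));
  }
}
{code}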

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Updated] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7632:
---
Attachment: YARN-7632.3.patch

Fixed review comments. Reusing toSet from TestUtils 

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch, YARN-7632.3.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Comment Edited] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285766#comment-16285766
 ] 

Suma Shivaprasad edited comment on YARN-7632 at 12/11/17 11:06 AM:
---

Thanks [~sunilg]. Fixed review comments. Reusing toSet from TestUtils.


was (Author: suma.shivaprasad):
Fixed review comments. Reusing toSet from TestUtils 

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch, YARN-7632.3.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Commented] (YARN-7556) Fair scheduler configuration should allow resource types in the minResources and maxResources properties

2017-12-11 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285787#comment-16285787
 ] 

Wilfred Spiegelenburg commented on YARN-7556:
-

You have fixed the points I raised, and that is looking good. No further 
comments on that; +1 on the code changes.

One last nit: the documentation for {{maxChildResources}} is out of sync with 
{{minResources}} and {{maxResources}}. I assume the child resources also use 
the new resource specification, not just the old one.

> Fair scheduler configuration should allow resource types in the minResources 
> and maxResources properties
> 
>
> Key: YARN-7556
> URL: https://issues.apache.org/jira/browse/YARN-7556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7556.001.patch, YARN-7556.002.patch, 
> YARN-7556.003.patch, YARN-7556.004.patch, YARN-7556.005.patch, 
> YARN-7556.006.patch
>
>







[jira] [Created] (YARN-7636) Re-reservation count may overflow when cluster resource exhausted for a long time

2017-12-11 Thread Tao Yang (JIRA)
Tao Yang created YARN-7636:
--

 Summary: Re-reservation count may overflow when cluster resource 
exhausted for a long time 
 Key: YARN-7636
 URL: https://issues.apache.org/jira/browse/YARN-7636
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0-alpha4, 2.9.1
Reporter: Tao Yang
Assignee: Tao Yang


Exception stack:
{noformat}
java.lang.IllegalArgumentException: Overflow adding 1 occurrences to a count of 
2147483647
        at 
com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:246)
        at 
com.google.common.collect.AbstractMultiset.add(AbstractMultiset.java:80)
        at 
com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:51)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.addReReservation(SchedulerApplicationAttempt.java:406)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.reserve(SchedulerApplicationAttempt.java:555)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1076)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
        at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
{noformat}
We can add the check condition {{getReReservations(schedulerKey) < 
Integer.MAX_VALUE}} before addReReservation to avoid this problem.
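
A minimal sketch of the proposed guard, following the method names in the 
stack trace above (the counter is assumed to be the Guava 
{{ConcurrentHashMultiset}} shown there; the exact change is up to the patch):

{code}
// Sketch only: ConcurrentHashMultiset.add throws IllegalArgumentException
// once a key's count reaches Integer.MAX_VALUE, so stop counting at the cap.
void addReReservation(SchedulerRequestKey schedulerKey) {
  if (getReReservations(schedulerKey) < Integer.MAX_VALUE) {
    reReservations.add(schedulerKey);
  }
}

int getReReservations(SchedulerRequestKey schedulerKey) {
  return reReservations.count(schedulerKey);
}
{code}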






[jira] [Updated] (YARN-7636) Re-reservation count may overflow when cluster resource exhausted for a long time

2017-12-11 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-7636:
---
Attachment: YARN-7636.001.patch

> Re-reservation count may overflow when cluster resource exhausted for a long 
> time 
> --
>
> Key: YARN-7636
> URL: https://issues.apache.org/jira/browse/YARN-7636
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7636.001.patch
>
>
> Exception stack:
> {noformat}
> java.lang.IllegalArgumentException: Overflow adding 1 occurrences to a count 
> of 2147483647
>         at 
> com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:246)
>         at 
> com.google.common.collect.AbstractMultiset.add(AbstractMultiset.java:80)
>         at 
> com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:51)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.addReReservation(SchedulerApplicationAttempt.java:406)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.reserve(SchedulerApplicationAttempt.java:555)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1076)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> We can add the check condition {{getReReservations(schedulerKey) < 
> Integer.MAX_VALUE}} before addReReservation to avoid this problem.






[jira] [Updated] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2017-12-11 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-7585:

Attachment: YARN-7585.002.patch

Updated the fix for the checkstyle and JUnit test failures.

YARN-7629 covers the TestContainerLaunch failure, which is not related to this 
change.

> NodeManager should go unhealthy when state store throws DBException 
> 
>
> Key: YARN-7585
> URL: https://issues.apache.org/jira/browse/YARN-7585
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7585.001.patch, YARN-7585.002.patch
>
>
> If work-preserving recovery is enabled, the NM will not start up if the state 
> store does not initialise. However, if the state store becomes unavailable 
> after that for any reason, the NM will not go unhealthy. 
> Since the state store is not available, new containers cannot be started any 
> more, and the NM should become unhealthy:
> {code}
> AMLauncher: Error launching appattempt_1508806289867_268617_01. Got 
> exception: org.apache.hadoop.yarn.exceptions.YarnException: 
> java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at o.a.h.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:721)
> ...
> Caused by: java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at 
> o.a.h.y.s.n.r.NMLeveldbStateStoreService.storeApplication(NMLeveldbStateStoreService.java:374)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:848)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:712)
> {code}
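
A rough, self-contained sketch of the idea with hypothetical names (not the 
attached patch): record the state-store failure so the NM's health report can 
flip to unhealthy instead of only failing individual container starts.

{code}
import java.io.IOException;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBException;

// Hypothetical sketch, not NMLeveldbStateStoreService itself.
class StateStoreHealthSketch {
  private final DB db;
  private volatile boolean healthy = true;
  private volatile String healthReport = "";

  StateStoreHealthSketch(DB db) {
    this.db = db;
  }

  void store(byte[] key, byte[] value) throws IOException {
    try {
      db.put(key, value);
    } catch (DBException e) {
      // Further writes are likely to fail too (e.g. read-only filesystem),
      // so remember the failure for the node health report.
      healthy = false;
      healthReport = "NM state store error: " + e.getMessage();
      throw new IOException(e);
    }
  }

  // The NM health checker would merge this into the node's health status,
  // which stops new containers from being scheduled on the node.
  boolean isHealthy() {
    return healthy;
  }

  String getHealthReport() {
    return healthReport;
  }
}
{code}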






[jira] [Resolved] (YARN-7534) Fair scheduler assign resources may exceed maxResources

2017-12-11 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg resolved YARN-7534.
-
Resolution: Cannot Reproduce

No issue found: the code shows that we check the queue size in the FS, and we 
have no logs that show this is not working.

> Fair scheduler assign resources may exceed maxResources
> ---
>
> Key: YARN-7534
> URL: https://issues.apache.org/jira/browse/YARN-7534
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: YunFan Zhou
>Assignee: Wilfred Spiegelenburg
>
> The current scheduling logic checks whether the resources used by the queue 
> have exceeded *maxResources* before assigning the container. This can lead 
> to the queue using more resources than *maxResources* once the container is 
> assigned.
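
For illustration, the difference is between checking current usage against 
the limit and checking whether the candidate assignment would still fit; a 
sketch assuming the standard {{Resources}} helper methods (not FairScheduler 
code):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

final class MaxResourcesCheckSketch {
  // Reported behavior: only current usage is compared against maxResources,
  // so one more container can push the queue past the limit.
  static boolean checkBeforeAssign(Resource used, Resource max) {
    return Resources.fitsIn(used, max);
  }

  // Overshoot-free variant: test whether usage plus the candidate container
  // would still fit under maxResources.
  static boolean wouldStillFit(Resource used, Resource container,
      Resource max) {
    return Resources.fitsIn(Resources.add(used, container), max);
  }
}
{code}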






[jira] [Commented] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285854#comment-16285854
 ] 

Sunil G commented on YARN-7634:
---

Thanks [~suma.shivaprasad]

A couple of minor nits:
# {{((CapacityScheduler)newMockRM.getResourceScheduler())}}: please extract 
this to a local variable for better readability in the test cases (see the 
sketch after this list).
# There are a few indentation issues in the test class and throughout the 
patch. Please check checkstyle and help fix them.
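
A tiny illustration of nit 1 (hypothetical test snippet, not from the patch):

{code}
// Before: the cast is repeated at every call site in the test.
((CapacityScheduler) newMockRM.getResourceScheduler()).getQueue("parent");

// After: extract the cast once into a local variable.
CapacityScheduler cs = (CapacityScheduler) newMockRM.getResourceScheduler();
cs.getQueue("parent");
{code}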

One doubt:
Currently we assume that the parent queue's ACL can be checked when the queue 
is null and a placementContext is available. Here I have two doubts.
# Can this same condition be hit in other cases, like normal user mapping, or 
is *queue is null and placementContext is available* the signature of the 
auto-created leaf queue feature?
# If, in the future, the leaf queue template also has ACLs, how would this 
need to change? Is there any plan for the same?

> Queue ACL validations should validate parent queue ACLs before auto-creating 
> leaf queues
> 
>
> Key: YARN-7634
> URL: https://issues.apache.org/jira/browse/YARN-7634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7634.1.patch
>
>
> RMAppManager currently validates only leaf queue ACLs, and if the leaf queue 
> doesn't exist, which is the case for auto-created leaf queues, queue mapping 
> may return a parent queue. However, parent queue ACLs are not validated. 
> This needs to be done before auto-creating leaf queues for the mapped 
> parent queue.






[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285857#comment-16285857
 ] 

Sunil G commented on YARN-7632:
---

Committing shortly if there are no objections, pending Jenkins.

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch, YARN-7632.3.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285859#comment-16285859
 ] 

genericqa commented on YARN-7632:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 62 unchanged - 2 fixed = 72 total (was 64) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901480/YARN-7632.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff4eba2ce599 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2edc4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18861/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_had

[jira] [Commented] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285871#comment-16285871
 ] 

genericqa commented on YARN-7585:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 37s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7585 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901491/YARN-7585.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8064cc49d79a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2edc4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18862/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18862/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-y

[jira] [Updated] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7634:
---
Attachment: YARN-7634.2.patch

Thanks [~sunilg]. Fixed review comments.

> Queue ACL validations should validate parent queue ACLs before auto-creating 
> leaf queues
> 
>
> Key: YARN-7634
> URL: https://issues.apache.org/jira/browse/YARN-7634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7634.1.patch, YARN-7634.2.patch
>
>
> RMAppManager currently validates only leaf queue ACLs, and if the leaf queue 
> doesn't exist, which is the case for auto-created leaf queues, queue mapping 
> may return a parent queue. However, parent queue ACLs are not validated. 
> This needs to be done before auto-creating leaf queues for the mapped 
> parent queue.






[jira] [Updated] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7632:
--
Fix Version/s: 3.1.0

> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 3.1.0
>
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch, YARN-7632.3.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Commented] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285937#comment-16285937
 ] 

Suma Shivaprasad commented on YARN-7634:


{quote}Can this same condition be hit in other cases, like normal user 
mapping, or is *queue is null and placementContext is available* the 
signature of the auto-created leaf queue feature?{quote}

Yes, placementContext is set for other leaf queues as well, but 
placementContext.parentQueue is set only for auto-created queues.

{quote}If, in the future, the leaf queue template also has ACLs, how would 
this need to change? Is there any plan for the same?{quote}
We might need to support separate ACLs for auto-creation; that will be 
addressed in a separate JIRA. Currently, since ManagedParentQueue supports 
only auto-created leaf queues and cannot have other pre-configured leaf 
queues, the ACLs set on the parent queue will be inherited by the leaf queues.
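
In other words, a sketch of the distinguishing check (names follow this 
discussion; treat the helper as illustrative rather than the patch itself):

{code}
// Only auto-created leaf queues carry a parent queue in their placement
// context, so this combination gates the parent-queue ACL validation.
boolean isAutoCreatedLeafQueuePlacement(CSQueue queue,
    ApplicationPlacementContext placementContext) {
  return queue == null
      && placementContext != null
      && placementContext.getParentQueue() != null;
}
{code}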

> Queue ACL validations should validate parent queue ACLs before auto-creating 
> leaf queues
> 
>
> Key: YARN-7634
> URL: https://issues.apache.org/jira/browse/YARN-7634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7634.1.patch, YARN-7634.2.patch
>
>
> RMAppManager currently validates only leaf queue ACLs, and if the leaf queue 
> doesn't exist, which is the case for auto-created leaf queues, queue mapping 
> may return a parent queue. However, parent queue ACLs are not validated. 
> This needs to be done before auto-creating leaf queues for the mapped 
> parent queue.






[jira] [Commented] (YARN-7632) Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management

2017-12-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285956#comment-16285956
 ] 

Hudson commented on YARN-7632:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13353 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13353/])
YARN-7632. Effective min and max resource need to be set for auto (sunilg: rev 
312ceebde8ef8881fc43d82a096fb852f833a206)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAutoQueueCreation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AutoCreatedLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAutoCreatedQueueBase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestAbsoluteResourceConfiguration.java


> Effective min and max resource need to be set for auto created leaf queues 
> upon creation and capacity management
> 
>
> Key: YARN-7632
> URL: https://issues.apache.org/jira/browse/YARN-7632
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 3.1.0
>
> Attachments: YARN-7632.1.patch, YARN-7632.2.patch, YARN-7632.3.patch
>
>
> YARN-5881 introduced the notion of configuring queues with absolute resource 
> specifications instead of percentages. As part of that, each leaf queue has 
> an effective min/max capacity that needs to be set when the queue is created 
> and whenever the queue capacity is changed.






[jira] [Commented] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285966#comment-16285966
 ] 

Sunil G commented on YARN-7634:
---

The latest patch seems fine. Pending Jenkins.
Thanks [~suma.shivaprasad]

> Queue ACL validations should validate parent queue ACLs before auto-creating 
> leaf queues
> 
>
> Key: YARN-7634
> URL: https://issues.apache.org/jira/browse/YARN-7634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7634.1.patch, YARN-7634.2.patch
>
>
> RMAppManager currently validates only leaf queue ACLs, and if the leaf queue 
> doesn't exist, which is the case for auto-created leaf queues, queue mapping 
> may return a parent queue. However, parent queue ACLs are not validated. 
> This needs to be done before auto-creating leaf queues for the mapped 
> parent queue.






[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2017-12-11 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.3.patch

Attaching rebased patch for review

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch
>
>
> YARN-7473 adds support for auto-created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf 
> queue template for allowing different configured capacities for different 
> node labels.






[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-11 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: YARN-7622.002.patch

Fixed findbugs issues

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.






[jira] [Created] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM

2017-12-11 Thread Sunil G (JIRA)
Sunil G created YARN-7637:
-

 Summary: GPU volume creation command fails when work preserving is 
disabled at NM
 Key: YARN-7637
 URL: https://issues.apache.org/jira/browse/YARN-7637
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 3.1.0
Reporter: Sunil G
Assignee: Zian Chen


When work preserving is disabled, the NM uses {{NMNullStateStoreService}}. 
Hence, resource mappings related to GPU won't be saved at the Container.

This has to be rechecked and stored accordingly.

cc/ [~leftnoteasy] and [~Zian Chen]






[jira] [Updated] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM

2017-12-11 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7637:
--
Priority: Critical  (was: Major)

> GPU volume creation command fails when work preserving is disabled at NM
> 
>
> Key: YARN-7637
> URL: https://issues.apache.org/jira/browse/YARN-7637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Zian Chen
>Priority: Critical
>
> When work preserving is disabled, the NM uses {{NMNullStateStoreService}}. 
> Hence, resource mappings related to GPU won't be saved at the Container.
> This has to be rechecked and stored accordingly.
> cc/ [~leftnoteasy] and [~Zian Chen]






[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286035#comment-16286035
 ] 

Jason Lowe commented on YARN-7625:
--

Thanks for updating the patch!  It looks a lot better.

Just one small nit: I'm not a fan of explicit sleeps in unit tests. Rather 
than sleeping for a full duration and then checking, I'd rather see the tests 
leverage Mockito's verify-with-timeout feature, e.g.:
{code}
  Mockito.verify(spyContext, timeout(500)).getNodeManagerMetrics();
{code}

That way the test doesn't have to burn the full timeout every time.
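
Spelled out, the contrast looks roughly like this (the spied context and 
method follow the snippet above; the rest is a placeholder):

{code}
// Sleep style: always burns the full wait, even when the call happened
// long before the deadline.
Thread.sleep(500);
Mockito.verify(spyContext).getNodeManagerMetrics();

// Timeout style: polls until the call happens and returns as soon as it
// does; only a failing test waits the full 500 ms.
Mockito.verify(spyContext, Mockito.timeout(500)).getNodeManagerMetrics();
{code}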

> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this 
> info in NM metrics. It helps in the following cases:
> # Users want to check NM load in the NM web UI or via the REST API
> # Provide an API that can be further integrated into the new YARN UI, to 
> display NM load status






[jira] [Commented] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286084#comment-16286084
 ] 

genericqa commented on YARN-7634:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 91 unchanged - 0 fixed = 94 total (was 91) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901506/YARN-7634.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 37b441d56a2c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18863/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hado

[jira] [Commented] (YARN-7634) Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues

2017-12-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286099#comment-16286099
 ] 

Sunil G commented on YARN-7634:
---

Committing shortly.

> Queue ACL validations should validate parent queue ACLs before auto-creating 
> leaf queues
> 
>
> Key: YARN-7634
> URL: https://issues.apache.org/jira/browse/YARN-7634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7634.1.patch, YARN-7634.2.patch
>
>
> RMAppManager currently validates only leaf queue ACLs and if leaf queue 
> doesnt exist which is the case in auto-created leaf queues, queue mapping may 
> return a parent queue. However Parent queue ACLs are not validated. This 
> needs to be validated before auto-creating leaf queues for the mapped parent 
> queue






[jira] [Commented] (YARN-7576) Findbug warning for Resource exposing internal representation

2017-12-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286131#comment-16286131
 ] 

Jason Lowe commented on YARN-7576:
--

Sorry this fell off my radar and now the patch doesn't apply.  I'm +1 for the 
findbugs exclude change.  The import cleanup is what's conflicting, so if we 
drop that from the patch then I think we're good to go.


> Findbug warning for Resource exposing internal representation
> -
>
> Key: YARN-7576
> URL: https://issues.apache.org/jira/browse/YARN-7576
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Wangda Tan
> Attachments: YARN-7576.001.patch
>
>
> Precommit builds are complaining about a findbugs warning:
> {noformat}
> EIorg.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
> internal representation by returning Resource.resources
>   
> Bug type EI_EXPOSE_REP (click for details)
> In class org.apache.hadoop.yarn.api.records.Resource
> In method org.apache.hadoop.yarn.api.records.Resource.getResources()
> Field org.apache.hadoop.yarn.api.records.Resource.resources
> At Resource.java:[line 213]
> Returning a reference to a mutable object value stored in one of the object's 
> fields exposes the internal representation of the object.  If instances are 
> accessed by untrusted code, and unchecked changes to the mutable object would 
> compromise security or other important properties, you will need to do 
> something different. Returning a new copy of the object is better approach in 
> many situations.
> {noformat}
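
For context, a sketch of the defensive-copy alternative suggested by the 
warning text; the patch under review takes the findbugs-exclude route 
instead, and the types below are simplified stand-ins for the real 
Resource/ResourceInformation classes:

{code}
import java.util.Arrays;

// Simplified stand-in for Resource; shows the copy-on-get idea from the
// warning text rather than the exclude approach the patch actually takes.
class ResourceSketch {
  private final long[] resources = {4096L, 4L}; // e.g. memory MB, vcores

  long[] getResources() {
    // Return a copy so callers cannot mutate internal state (no EI_EXPOSE_REP).
    return Arrays.copyOf(resources, resources.length);
  }
}
{code}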



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286194#comment-16286194
 ] 

genericqa commented on YARN-7622:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 11 new + 39 unchanged - 2 fixed = 50 total (was 41) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7622 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901514/YARN-7622.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8e04227603a9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18864/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hado

[jira] [Updated] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-12-11 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-6315:
--
Attachment: YARN-6315.005.patch

Updated patch with a revised approach that keeps track of the actual size of 
the file via downloadSize. Changes were also made to YARNRunner and 
LocalResourceProto for this added field. If the download size was never 
updated and is still -1 (this could be changed to a named constant to 
indicate that the value was never set), we ignore the file attribute 
mismatch. Would appreciate any initial comments/modifications on the 
approach. Thanks a lot!
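
As a rough illustration of that approach (names and the -1 sentinel as 
described above; the real change lives in LocalResourcesTrackerImpl), the 
hardened presence check might look like:

{code}
import java.io.File;

// Rough illustration only; names below are assumptions.
final class ResourcePresence {
  static final long SIZE_NOT_SET = -1; // download size was never recorded

  static boolean isResourcePresent(File localFile, long downloadSize) {
    if (!localFile.exists()) {
      return false;                      // clearly absent
    }
    if (downloadSize == SIZE_NOT_SET) {
      return true;                       // no recorded size; skip the check
    }
    return localFile.length() == downloadSize; // truncated/corrupted => false
  }
}
{code}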

> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch, YARN-6315.004.patch, YARN-6315.005.patch
>
>
> We currently check if a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks it 
> has the resource because the file exists, even though its size is 0 or less 
> than the "expected" size of the LocalResource. This JIRA tracks the change 
> to harden the isResourcePresent call to address that case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7638) Add unit tests for Preemption and Recovery

2017-12-11 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created YARN-7638:
--

 Summary: Add unit tests for Preemption and Recovery
 Key: YARN-7638
 URL: https://issues.apache.org/jira/browse/YARN-7638
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


Add unit tests for inter-leaf-queue preemption based on utilization, and for 
work-preserving restart/recovery.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286235#comment-16286235
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 63 new + 130 unchanged - 5 fixed = 193 total (was 135) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901513/YARN-7574.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4acd62c1f15 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18865/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_had

[jira] [Updated] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-11 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7633:
---
Attachment: YARN-7633.1.patch

Attaching patch for review.

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7639) Queue Management scheduling edit policy class needs to be configured dynamically

2017-12-11 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created YARN-7639:
--

 Summary: Queue Management scheduling edit policy class needs to be 
configured dynamically
 Key: YARN-7639
 URL: https://issues.apache.org/jira/browse/YARN-7639
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


The queue management scheduling edit policy class, along with 
yarn.resourcemanager.monitor.capacity.queue-management.monitoring-interval, 
needs to be configured dynamically whenever auto leaf queue creation is 
enabled for a parent queue.
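
For illustration only, a sketch that enables the scheduler monitor and sets 
the interval named above; the property names are taken from this issue and 
from yarn-default.xml, and wiring the edit policy class in dynamically is 
exactly what this JIRA proposes, so treat the details as assumptions:

{code}
import org.apache.hadoop.conf.Configuration;

public class MonitorConfigSketch {
  public static void main(String[] args) {
    // Illustration only: enable scheduler monitors and set the
    // queue-management monitoring interval referenced above.
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.resourcemanager.scheduler.monitor.enable", true);
    conf.setLong(
        "yarn.resourcemanager.monitor.capacity"
            + ".queue-management.monitoring-interval",
        1500L);
  }
}
{code}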



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-11 Thread Greg Phillips (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286331#comment-16286331
 ] 

Greg Phillips commented on YARN-7622:
-

Thanks for the review [~wilfreds]. I moved the two fields with mixed 
synchronization into the thread to remedy the issue. 

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
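
As a hypothetical illustration of the end state, the existing 
allocation-file property could then point at an HDFS URI instead of a local 
path (the URI below is a made-up example):

{code}
import org.apache.hadoop.conf.Configuration;

public class FairAllocOnHdfsSketch {
  public static void main(String[] args) {
    // Hypothetical: with this patch, the allocation file could live on HDFS.
    Configuration conf = new Configuration();
    conf.set("yarn.scheduler.fair.allocation.file",
        "hdfs://mycluster/yarn/fair-scheduler.xml");
  }
}
{code}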



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-11 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-7595:
--
Attachment: YARN-7595.002.patch

Thanks for the comments.  I have uploaded a new patch that addresses these.


> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when trying to close then this can lead to 
> partial/corrupted data without throwing an I/O error.
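
For reference, a minimal sketch of the usual fix for this pattern (names are 
illustrative; this is not the attached patch): try-with-resources lets an 
IOException thrown by close() propagate instead of being swallowed.

{code}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SafeWriteSketch {
  // Illustrative only: the write target and content are made-up names.
  static void writeScript(String path, String contents) throws IOException {
    try (DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(new FileOutputStream(path)))) {
      out.writeBytes(contents);
    } // implicit close(): a failed buffered flush now surfaces as IOException
  }
}
{code}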



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286436#comment-16286436
 ] 

genericqa commented on YARN-6315:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
22s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 4 new + 244 unchanged 
- 1 fixed = 248 total (was 245) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
19s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 34s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  9s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6315 |
| JIRA Patch URL | 
https://is

[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286450#comment-16286450
 ] 

genericqa commented on YARN-7595:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 70 unchanged - 0 fixed = 71 total (was 70) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 41s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901539/YARN-7595.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ea3dcb4b5a89 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18869/artifact/out/diff-checkstyle-hadoop-yarn-project

[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2017-12-11 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286461#comment-16286461
 ] 

Miklos Szegedi commented on YARN-7064:
--

The unit test failure is YARN-7629. TestRaceWhenRelogin does not repro for 
me. I did not change any dependencies.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch
>
>
> This is an addendum to YARN-6668; that jira always wants to rebase patches 
> against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for untrusted docker image

2017-12-11 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286462#comment-16286462
 ] 

Vinod Kumar Vavilapalli commented on YARN-7516:
---

I very much like this proposal as a simple way to block untrusted images.

Minor comments
 - By default, no images are trusted? From a security perspective, this is 
good. But if we go this way, we should call it out in the site documentation 
as one of the required settings.
 - Rename the option to 
{{yarn.nodemanager.runtime.linux.docker.trusted-registry}}?
 - Add your three category examples to the yarn-default.xml description of 
the property and to the site documentation?
 - DockerLinuxContainerRuntime: Read the value of the config only once, 
during object initialization?
 - Testing
--  We are also allowing privileged containers only from the trusted 
registry - this is good. Add an explicit test?
-- I may or may not be reading this correctly, but there doesn't seem to be 
a negative test verifying the behavior when an image is not trusted.

> Security check for untrusted docker image
> -
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7516.001.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop 
> security is enforced through username and group membership, and uid:gid 
> consistency is enforced between the docker container and the distributed 
> file system.  There is a cloud use case for the ability to run untrusted 
> docker images on the same cluster for testing.  
> The basic requirement for an untrusted container is to ensure that all 
> kernel and root privileges are dropped, and that there is no interaction 
> with the distributed file system, to avoid contamination.  We can probably 
> enforce detection of untrusted docker images by checking the following:
> # If the docker image is from a public docker hub repository, the container 
> is automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and a 
> white list allows the private repository, disk volume mounts are allowed 
> and kernel capabilities follow the allowed list.
> # If the docker image is from a private trusted registry with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
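
To make the whitelist idea concrete, a purely hypothetical configuration 
sketch using the renamed property suggested above (not an existing Hadoop 
key):

{code}
import org.apache.hadoop.conf.Configuration;

public class TrustedRegistrySketch {
  public static void main(String[] args) {
    // Purely hypothetical: the key below follows the rename suggested in
    // this discussion and is not (yet) defined in yarn-default.xml.
    Configuration conf = new Configuration();
    conf.set("yarn.nodemanager.runtime.linux.docker.trusted-registry",
        "private.registry.local:5000");
  }
}
{code}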



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-11 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7577:
-
Attachment: YARN-7577.004.patch

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with 
> both schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
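
One way to run the same scenario against both schedulers, sketched here 
under assumed names (the attached patch may take a different route), is 
JUnit parameterization:

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Sketch under assumed names; the attached patch may differ.
@RunWith(Parameterized.class)
public class TestAMRestartBothSchedulersSketch {
  private final String schedulerClass;

  public TestAMRestartBothSchedulersSketch(String schedulerClass) {
    this.schedulerClass = schedulerClass;
  }

  @Parameters
  public static Collection<Object[]> schedulers() {
    return Arrays.asList(new Object[][] {
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.capacity.CapacityScheduler"},
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.fair.FairScheduler"}});
  }

  @Test
  public void testPreemptedAMRestartOnRMRestart() {
    // conf.set(YarnConfiguration.RM_SCHEDULER, schedulerClass); then run
    // the existing scenario; assertions stay identical for both schedulers.
  }
}
{code}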



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-11 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-7595:
--
Attachment: YARN-7595.003.patch

Uploading a new patch that addresses the checkstyle issues.


> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch, 
> YARN-7595.003.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when trying to close then this can lead to 
> partial/corrupted data without throwing an I/O error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286352#comment-16286352
 ] 

genericqa commented on YARN-7633:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7633 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901527/YARN-7633.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux d977b8d6ddc9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/18868/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18868/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286684#comment-16286684
 ] 

genericqa commented on YARN-7577:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 36 unchanged - 4 fixed = 36 total (was 40) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901553/YARN-7577.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 00180577d41d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00129c5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18870/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-serve

[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286700#comment-16286700
 ] 

genericqa commented on YARN-7595:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 69 unchanged - 1 fixed = 69 total (was 70) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  1s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901562/YARN-7595.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 253737d61493 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2316f52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18871/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
http

[jira] [Comment Edited] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-11 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286746#comment-16286746
 ] 

Chandni Singh edited comment on YARN-7565 at 12/11/17 11:33 PM:


bq. yarn.service.container.expiry-interval-ms -> 
yarn.service.am-recovery.timeout ?
How about yarn.service.container-recovery.timeout? "am-recovery.timeout" 
gives the impression that there is a timeout for AM recovery.


was (Author: csingh):
> yarn.service.container.expiry-interval-ms -> yarn.service.am-recovery.timeout 
> ?
How about yarn.service.container-recovery.timeout? "am-recovery.timeout" gives 
an impression that there is a timeout for AM recovery.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response 
> to the AM heartbeat. 
> Currently, the Service Master immediately releases any containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM 
> in the heartbeat response. If a container is not reported within the 
> configured interval, it can be released by the master.
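
A minimal sketch of this wait-then-release idea, with all names assumed (the 
real change would live in the service master):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// All names assumed; sketches the grace period described above.
public class ContainerRecoverySketch {
  private final Map<String, Boolean> recovered = new ConcurrentHashMap<>();
  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor();

  // Called at AM registration for each container the RM may still recover.
  void expectRecovery(String containerId, long timeoutMs) {
    recovered.put(containerId, Boolean.FALSE);
    timer.schedule(() -> {
      if (!recovered.getOrDefault(containerId, Boolean.FALSE)) {
        release(containerId); // still unreported after the grace period
      }
    }, timeoutMs, TimeUnit.MILLISECONDS);
  }

  // Called when the RM reports the container in a heartbeat response.
  void onRecovered(String containerId) {
    recovered.put(containerId, Boolean.TRUE);
  }

  void release(String containerId) {
    // e.g. amRMClient.releaseAssignedContainer(...) in the real master
  }
}
{code}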



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-11 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286746#comment-16286746
 ] 

Chandni Singh commented on YARN-7565:
-

> yarn.service.container.expiry-interval-ms -> yarn.service.am-recovery.timeout 
> ?
How about yarn.service.container-recovery.timeout? "am-recovery.timeout" 
gives the impression that there is a timeout for AM recovery.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response 
> to the AM heartbeat. 
> Currently, the Service Master immediately releases any containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM 
> in the heartbeat response. If a container is not reported within the 
> configured interval, it can be released by the master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7600) Yarn NODE_LOCAL request downgraded to RACK_LOCAL didn't cancel the original NODE_LOCAL request

2017-12-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286580#comment-16286580
 ] 

Robert Kanter commented on YARN-7600:
-

{quote}However, my problem is that if the *allowRelaxity* for RACK_LOCAL and 
OFF_SWITCH is true,{quote}
[~wuchang1989], I assume by "allowRelaxity" you mean "relaxLocality"?

I'm not sure.  I haven't looked too much at that part of the code - it may 
also depend on which Scheduler is used.
[~wuchang1989], are you actually seeing this behavior?  Or is this speculation?

> Yarn NODE_LOCAL request downgraded to RACK_LOCAL  didn't cancel the original 
> NODE_LOCAL request
> ---
>
> Key: YARN-7600
> URL: https://issues.apache.org/jira/browse/YARN-7600
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: wuchang
>
> I know that when the AM makes a container request, if the requested 
> container is NODE_LOCAL, the AM will also send out RACK_LOCAL and 
> OFF_SWITCH requests. On the ResourceManager side, when the RM successfully 
> assigns a NODE_LOCAL container, I saw that the RM cancels the RACK_LOCAL 
> and OFF_SWITCH requests, because they are duplicates and no longer need to 
> be allocated. 
> However, my problem is this: if **allowRelaxity** is true for RACK_LOCAL 
> and OFF_SWITCH, a NODE_LOCAL request can be downgraded to RACK_LOCAL, so 
> the NODE_LOCAL request is satisfied with RACK_LOCAL locality. The 
> duplicated OFF_SWITCH request is then canceled, of course, but I did not 
> see the RM cancel the duplicated NODE_LOCAL request. Won't this leave the 
> NODE_LOCAL request outstanding, to be scheduled in the next scheduling 
> round?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7543:
-

Assignee: Jian He

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the 
> below FNFE. Ideally the broken link should be handled and the app submitted 
> successfully, letting it fail later only if it actually needs the jar 
> behind the broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7543:
--
Attachment: YARN-7543.01.patch

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the FNFE 
> below. Ideally this should be handled so that the app is submitted 
> successfully and only fails later if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286722#comment-16286722
 ] 

Robert Kanter commented on YARN-7577:
-

Given that the patch only changes {{TestAMRestart}}, I'd say that the test 
failures are unrelated.  One last minor thing:
- Instead of {{scheduler instanceof FairScheduler}}, you can do 
{{getSchedulerType().equals(SchedulerType.FAIR)}} (see the sketch below).  
Also, the {{ParameterizedSchedulerTestBase}} does not run FIFO, only CAPACITY 
and FAIR, so there is no need to worry about {{FifoScheduler}}.
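
A hedged sketch of that check ({{amContainer}} stands in for whatever the test 
actually inspects; {{getSchedulerType()}} and {{SchedulerType}} come from 
{{ParameterizedSchedulerTestBase}}):
{code}
// The two schedulers report different exit statuses for the preempted AM
// container, so the expectation is derived from the parameterized type.
int expectedStatus = getSchedulerType().equals(SchedulerType.FAIR)
    ? ContainerExitStatus.KILLED_BY_RESOURCEMANAGER   // -106, as seen below
    : ContainerExitStatus.PREEMPTED;                  // -102
assertEquals(expectedStatus, amContainer.getContainerExitStatus());
{code}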

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers:
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-11 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286708#comment-16286708
 ] 

Jim Brennan commented on YARN-7595:
---

The unit test failure is the same one as before.
The checkstyle issues are fixed and the review comments have been addressed.

I think this is ready for review.

> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch, 
> YARN-7595.003.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered, or could otherwise fail to 
> finish writing the file when trying to close, then this can lead to 
> partial/corrupted data without an I/O error ever being thrown.
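
A minimal sketch of the usual remedy (the pattern, not necessarily the patch's 
exact change; {{LaunchFileWriter}} and its arguments are illustrative names):
{code}
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

class LaunchFileWriter {
  // try-with-resources propagates an IOException thrown by close() instead
  // of swallowing it; if the body already threw, the close() failure is
  // attached as a suppressed exception rather than replacing it.
  static void write(String path, String contents) throws IOException {
    try (DataOutputStream outStream =
        new DataOutputStream(new FileOutputStream(path))) {
      outStream.writeBytes(contents);
    } // close() runs here; a failure surfaces as an IOException
  }
}
{code}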



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7005) Skip unnecessary sorting and iterating process for child queues without pending resource to optimize schedule performance

2017-12-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286868#comment-16286868
 ] 

Wangda Tan commented on YARN-7005:
--

[~Tao Yang], apologies for my late response. A better way to test might be to 
use TestCapacitySchedulerPerf, which is portable and makes it easy for other 
folks to reproduce your results. SLS could also be used when more 
comprehensive tests are required. 

> Skip unnecessary sorting and iterating process for child queues without 
> pending resource to optimize schedule performance
> -
>
> Key: YARN-7005
> URL: https://issues.apache.org/jira/browse/YARN-7005
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Tao Yang
> Attachments: YARN-7005.001.patch
>
>
> Nowadays, even if there is only one pending app in a queue, the scheduling 
> process goes through all queues anyway and spends most of its time sorting 
> and iterating child queues in ParentQueue#assignContainersToChildQueues. 
> IIUIC, queues that have no pending resource can be skipped in the sorting and 
> iterating process to reduce the time cost, especially for a cluster with many 
> queues. Please feel free to correct me if I am missing something. Thanks.
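
A hedged sketch of the idea with illustrative structure ({{Resources}} and 
{{getQueueResourceUsage}} are existing scheduler utilities; this is not the 
patch itself):
{code}
// Skip child queues with nothing pending before the sort in
// ParentQueue#assignContainersToChildQueues.
List<CSQueue> candidates = new ArrayList<>();
for (CSQueue child : childQueues) {
  Resource pending = child.getQueueResourceUsage().getPending();
  if (Resources.greaterThan(resourceCalculator, clusterResource,
      pending, Resources.none())) {
    candidates.add(child);  // only queues that can actually use capacity
  }
}
// the sort and the assignment loop then operate on 'candidates' only
{code}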



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2017-12-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286883#comment-16286883
 ] 

Wangda Tan commented on YARN-4606:
--

[~jlowe], as we discussed offline, this is a (known) potential issue of the 
fair ordering policy. Please let me know if you have any thoughts on this.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch
>
>
> Currently, if all applications belonging to a user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This can lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1) and app2 (belongs to user2) are active; app3 
> (belongs to user3) and app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource can be lower than expected.
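
For intuition, under the simplified form user-limit-resource ≈ queue-resource 
/ #active-users (the real computation also involves minimum-user-limit-percent 
and the user-limit factor), counting the two pending-only users caps user1 and 
user2 at roughly a quarter of the queue each, where excluding them would allow 
each roughly half.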



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7005) Skip unnecessary sorting and iterating process for child queues without pending resource to optimize schedule performance

2017-12-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286868#comment-16286868
 ] 

Wangda Tan edited comment on YARN-7005 at 12/12/17 12:41 AM:
-

[~Tao Yang], apologies for my late response. A better way to test might be to 
add a unit test to TestCapacitySchedulerPerf, which is portable and makes it 
easy for other folks to reproduce your results. SLS could also be used when 
more comprehensive tests are required. 


was (Author: leftnoteasy):
[~Tao Yang], apologize for my late responses, a better way to test might be by 
using TestCapacitySchedulerPerf. Which is portable and easy for other folks to 
reproduce your results. SLS could be also used when more comprehensive tests 
are required. 

> Skip unnecessary sorting and iterating process for child queues without 
> pending resource to optimize schedule performance
> -
>
> Key: YARN-7005
> URL: https://issues.apache.org/jira/browse/YARN-7005
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Tao Yang
> Attachments: YARN-7005.001.patch
>
>
> Nowadays, even if there is only one pending app in a queue, the scheduling 
> process goes through all queues anyway and spends most of its time sorting 
> and iterating child queues in ParentQueue#assignContainersToChildQueues. 
> IIUIC, queues that have no pending resource can be skipped in the sorting and 
> iterating process to reduce the time cost, especially for a cluster with many 
> queues. Please feel free to correct me if I am missing something. Thanks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-11 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7577:
-
Attachment: YARN-7577.005.patch

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch, 
> YARN-7577.005.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers:
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286802#comment-16286802
 ] 

Jian He commented on YARN-7543:
---

Changes:
- continue if a jar file doesn't exist (a sketch of the idea follows below)
- a side change that checks that the specified cpu resource is within the max 
cpu limit
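
A hedged sketch of the first change, with illustrative names (a dangling 
symlink makes {{File.exists()}} return false, so the jar can simply be skipped 
during the dependency upload):
{code}
for (File jar : libJars) {
  if (!jar.exists()) {
    // Broken symlink (or otherwise missing file): warn and move on
    // instead of failing the whole submission.
    LOG.warn("Ignoring non-existent dependency jar " + jar);
    continue;
  }
  fs.copyFromLocalFile(new Path(jar.getAbsolutePath()), destDir);
}
{code}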

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the FNFE 
> below. Ideally this should be handled so that the app is submitted 
> successfully and only fails later if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286850#comment-16286850
 ] 

Gour Saha commented on YARN-7543:
-

I think you should make the code change to add cpu checks in 
ServiceApiUtil.java in a separate jira. Otherwise the patch looks good.

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the FNFE 
> below. Ideally this should be handled so that the app is submitted 
> successfully and only fails later if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286806#comment-16286806
 ] 

Jian He commented on YARN-7543:
---

[~billie.rinaldi], [~gsaha], can you review?

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the FNFE 
> below. Ideally this should be handled so that the app is submitted 
> successfully and only fails later if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor and planner framework

2017-12-11 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.003.patch

Updating patch (v003) based on offline discussions. [~leftnoteasy], please 
take a look to see whether it captures everything we discussed.

* Added a constraint.spi package containing the common interfaces (Algorithm, 
SchedulingRequestHandler, SchedulingResponseHandler and 
SchedulingProposalCollector)
* Moved the processor to the constraint.processor package. The Processor and 
the Dispatcher now implement the above interfaces.


> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286925#comment-16286925
 ] 

genericqa commented on YARN-7543:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7543 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901575/YARN-7543.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6460443d22ca 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5cd1056 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18872/testReport/ |
| Max. process+thread count | 609 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18872/console |

[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-11 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7565:

Attachment: YARN-7565.004.patch

Patch 4:
- Addressed [~jianhe]'s comments.
- Fixed the test so that it verifies that a new container is assigned to a comp 
instance that gets added back to pendingInstances.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response 
> to the AM heartbeat. 
> Currently, the Service Master immediately releases any containers that are 
> not reported in the AM registration response.
> Instead, the master can wait a configured amount of time for the containers 
> to be recovered by the RM, since they are sent to the AM in heartbeat 
> responses. If a container is still not reported within the configured 
> interval, the master can then release it.
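
A hedged sketch of the retention idea with illustrative names (not the exact 
patch): remember the unreported containers at registration, and release only 
what is still unrecovered once the grace period expires.
{code}
Set<ContainerId> pendingRecovery = ConcurrentHashMap.newKeySet();
pendingRecovery.addAll(containersFromPreviousAttempt);

// Called for each recovered container the RM reports in a heartbeat response.
void onContainerRecovered(Container c) {
  pendingRecovery.remove(c.getId());
}

// One-shot timer armed at AM registration.
scheduler.schedule(() -> {
  for (ContainerId id : pendingRecovery) {
    amRMClient.releaseAssignedContainer(id);  // still unrecovered: release
  }
}, recoveryTimeoutMs, TimeUnit.MILLISECONDS);
{code}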



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286957#comment-16286957
 ] 

genericqa commented on YARN-7612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  4m 13s{color} 
| {color:red} YARN-7612 does not apply to YARN-6592. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901585/YARN-7612-YARN-6592.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18874/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286953#comment-16286953
 ] 

Weiwei Yang commented on YARN-7625:
---

Thanks [~jlowe], that makes sense to me. V4 patch updated.

> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch, YARN-7625.004.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this 
> info in NM metrics. It helps in the following cases:
> # Users want to check NM load in the NM web UI or via the REST API
> # It provides an API that can be further integrated into the new YARN UI to 
> display NM load status
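
A hedged sketch of one way to expose such values through metrics2 
(illustrative class and field names, not the actual patch):
{code}
@Metrics(about = "NM node resource utilization", context = "yarn")
public class NodeUtilizationMetrics {
  @Metric("Physical memory used on the node, in MB")
  MutableGaugeLong nodeUsedMemMB;
  @Metric("Node CPU utilization, percent")
  MutableGaugeInt nodeCpuUtilizationPct;

  // Fed from the ResourceUtilization that YARN-4055 already tracks; once
  // registered with DefaultMetricsSystem, the gauges show up in JMX and
  // the metrics REST output.
  void update(ResourceUtilization u) {
    nodeUsedMemMB.set(u.getPhysicalMemory());
    nodeCpuUtilizationPct.set((int) (u.getCPU() * 100));
  }
}
{code}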



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-11 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7625:
--
Attachment: YARN-7625.004.patch

> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch, YARN-7625.004.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this 
> info in NM metrics. It helps in the following cases:
> # Users want to check NM load in the NM web UI or via the REST API
> # It provides an API that can be further integrated into the new YARN UI to 
> display NM load status



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287005#comment-16287005
 ] 

genericqa commented on YARN-7625:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 63 unchanged - 1 fixed = 63 total (was 64) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 18s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901589/YARN-7625.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9ceee48124a4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 55fc2d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18876/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18876/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5000) |

[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287034#comment-16287034
 ] 

genericqa commented on YARN-7565:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 97 unchanged - 1 fixed = 98 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 39s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.a

[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-11 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287060#comment-16287060
 ] 

Wilfred Spiegelenburg commented on YARN-7622:
-

Thank you for the update.
You cannot move the two values inside the thread: the method you removed them 
from, which still needs them, is a public method that is called when you do a 
command-line refresh of the configuration. The variables must be accessible 
both from the reloader thread and from the {{reloadAllocations}} method; a 
sketch of that layout follows below.
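
For illustration, a minimal sketch of the shared-field layout being asked for. 
The class shape and names here are assumptions made for this example; only the 
idea that the two values live as fields visible to both callers comes from the 
comment above:

{code:java}
// Hypothetical sketch: the two values are fields on the service rather than
// locals inside the reloader thread, so both entry points can reach them.
public class AllocationLoaderSketch {
  // Written by every reload attempt, read by the background watcher.
  private volatile long lastSuccessfulReload;
  private volatile boolean lastReloadAttemptFailed;

  // Background thread that watches the allocation file for changes.
  private final Thread reloadThread = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
      // ... detect a modified allocation file, then call reloadAllocations()
    }
  });

  // Public entry point, also invoked directly by a command-line refresh of
  // the configuration, so it cannot depend on state private to reloadThread.
  public synchronized void reloadAllocations() {
    try {
      // ... parse and apply the allocation file ...
      lastSuccessfulReload = System.currentTimeMillis();
      lastReloadAttemptFailed = false;
    } catch (Exception e) {
      lastReloadAttemptFailed = true;
    }
  }
}
{code}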

You have also introduced a large number of new checkstyle issues; most are 
unused imports, so please check those.
The junit test failures look unrelated, but please confirm that they are.

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
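
As a concrete illustration of that goal, a hedged sketch of what configuring 
an HDFS-hosted allocation file could look like once this support lands. The 
property name {{yarn.scheduler.fair.allocation.file}} already exists; the HDFS 
URI value is an assumption, since accepting such paths is exactly what this 
issue proposes:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class FairSchedulerHdfsAllocSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Today this must point at the RM's local filesystem; with this change
    // it could be a single HDFS location shared by both RMs in an HA pair.
    conf.set("yarn.scheduler.fair.allocation.file",
        "hdfs://namenode:8020/yarn/fair-scheduler.xml"); // assumed path
    System.out.println(conf.get("yarn.scheduler.fair.allocation.file"));
  }
}
{code}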



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor and planner framework

2017-12-11 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.004.patch

Rebased the patch against the latest YARN-6592 branch.

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
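
For context, a minimal sketch of the kind of constraint such a processor would 
consume, written against the PlacementConstraints builder API being developed 
under YARN-6592. The exact method names were still settling at this point, so 
treat the calls below as assumptions:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

public class PlacementConstraintSketch {
  public static void main(String[] args) {
    // Node-scope anti-affinity: do not place this request on a node that
    // already hosts a container carrying the "hbase" allocation tag.
    PlacementConstraint antiAffinity =
        build(targetNotIn(NODE, allocationTag("hbase")));
    System.out.println(antiAffinity);
  }
}
{code}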



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287052#comment-16287052
 ] 

genericqa commented on YARN-7577:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 36 unchanged - 4 fixed = 38 total (was 40) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901582/YARN-7577.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f34b62dbca63 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 55fc2d6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18873/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_ha

[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287191#comment-16287191
 ] 

genericqa commented on YARN-7612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 45 new + 700 unchanged - 4 fixed = 745 total (was 704) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:r

[jira] [Commented] (YARN-7516) Security check for untrusted docker image

2017-12-11 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287195#comment-16287195
 ] 

Eric Yang commented on YARN-7516:
-

[~vinodkv] Thank you for the review.  Images are untrusted by the default 
configuration.  Because it is important to configure the Docker trusted 
registry, I prefer to give it a more prominent namespace, 
{{yarn.docker.trusted.registry}}, instead of 
{{yarn.nodemanager.runtime.linux.docker.trusted-registry}}.  I will update the 
documentation to include the usage of this configuration.  All existing tests 
have been updated as positive test cases for the trusted registry, so we don't 
need another test to verify positive behavior.  We have a negative test in the 
testDockerTrustedRegistry test case, where the trusted registry is 
abc.example.com:1234 and the image is from docker.example.com:5000: the 
read/write mount point is blocked because the image is not from the trusted 
registry, and we verify that mount-point behavior.  A sketch of that check 
follows below.
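
For illustration only, a minimal sketch of the trust decision described above. 
The class and method names are hypothetical, not the patch's actual code; the 
real enforcement happens in the NodeManager's Docker runtime:

{code:java}
// Hypothetical sketch: an image is trusted only when its registry prefix
// matches the configured trusted registry.
public class DockerTrustSketch {
  private final String trustedRegistry;

  public DockerTrustSketch(String trustedRegistry) {
    this.trustedRegistry = trustedRegistry; // e.g. "abc.example.com:1234"
  }

  public boolean isTrusted(String image) {
    // In "abc.example.com:1234/centos", the registry is the part before the
    // first '/'. Images with no registry prefix come from Docker Hub and
    // are untrusted by default.
    int slash = image.indexOf('/');
    return slash > 0 && image.substring(0, slash).equals(trustedRegistry);
  }

  public static void main(String[] args) {
    DockerTrustSketch checker = new DockerTrustSketch("abc.example.com:1234");
    // Mirrors the negative test: an image from another registry is untrusted,
    // so read/write mounts would be refused and capabilities dropped.
    System.out.println(checker.isTrusted("docker.example.com:5000/centos")); // false
    System.out.println(checker.isTrusted("abc.example.com:1234/centos"));    // true
  }
}
{code}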

> Security check for untrusted docker image
> -
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7516.001.patch
>
>
> Hadoop YARN Services can support using a private Docker registry image or a 
> Docker image from Docker Hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and uid:gid consistency is 
> enforced between the Docker container and the distributed file system.  There 
> is a cloud use case for the ability to run untrusted Docker images on the 
> same cluster for testing.
> The basic requirement for an untrusted container is to ensure that all kernel 
> and root privileges are dropped and that there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted Docker images by checking the following:
> # If the Docker image is from a public Docker Hub repository, the container 
> is automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the Docker image is from a private repository on Docker Hub and a white 
> list allows that private repository, disk volume mounts are allowed and 
> kernel capabilities follow the allowed list.
> # If the Docker image is from a private trusted registry, with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted registry, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5636) Support reserving resources on certain nodes for certain applications

2017-12-11 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287229#comment-16287229
 ] 

Jiandan Yang  commented on YARN-5636:
-

[~Tao Jie] I think your solution is good. You can provide a patch for review.

> Support reserving resources on certain nodes for certain applications
> -
>
> Key: YARN-5636
> URL: https://issues.apache.org/jira/browse/YARN-5636
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Tao Jie
>
> We have met the following circumstance:
> We are trying to run Storm and Kafka on YARN via Slider, and Storm and Kafka 
> write data to local disk on each node. If some containers or the whole 
> application fail, we expect those containers to restart on the same nodes 
> they ran on before; otherwise the data written locally would be lost.
> Slider tries to ensure that restarted containers land on the same nodes as 
> before. In YARN, however, those resources may be assigned to other 
> applications while the former long-running application is down.
> As a result, it would be better to have a mechanism that reserves some 
> resources for certain long-running applications on certain nodes for a period 
> of time. Does that make sense?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7612) Add Placement Processor and planner framework

2017-12-11 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7612:
--
Attachment: YARN-7612-YARN-6592.005.patch

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org