[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-06 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314934#comment-16314934
 ] 

Konstantinos Karanasos commented on YARN-6599:
--

bq. different implementations can choose to support subset of the 
SchedulingRequest, but we should not change semantics of the API unless 
discussed
Fair point. Let's discuss to find the right balance between usability and code 
complexity for the first version.

bq. Filed YARN-7709 for this.
bq. but let's discuss before taking action on this. I need to think a bit more 
about it – it might not be just syntactic sugar given we have decided to not 
include the source tags in the constraint itself.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>







[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314907#comment-16314907
 ] 

Wangda Tan commented on YARN-6599:
--

[~kkaranasos], thanks for the comments.

bq. I think we should not introduce node partitions in this JIRA. Not only for 
the size of the patch, but because semantically it belongs to a different JIRA.
I can move it to a separate JIRA, but we need to support node partitions in the 
first released version. 

bq. What we did in YARN-7613 is to restrict the scope of a constraint to the 
application that is defining it for the first version, so that we do not have 
to deal with prefixes
This does not seem correct to me: different implementations can choose to support a 
*subset* of the SchedulingRequest, but we should not change the semantics of the 
API unless it is discussed. A client should expect the same result (not necessarily 
the exact same placement, but the constraint should be strictly respected) from a 
SinglePlacementConstraint whether it is handled by the PlacementProcessor or by the 
logic in this patch. We should revisit the changes done in YARN-7613 and follow what 
we decided in the design doc. 
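
To make the expectation concrete, here is a rough sketch of a SchedulingRequest 
carrying a single anti-affinity constraint, written against the 
PlacementConstraints/SchedulingRequest API being developed on the YARN-6592 branch 
(the builder and helper names here are assumptions and may differ from what finally 
ships). Whichever component handles such a request, the PlacementProcessor or the 
scheduler-side logic in this patch, the constraint itself should be strictly 
respected:

{code:java}
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class AntiAffinityRequestSketch {
  public static void main(String[] args) {
    // Node anti-affinity: no two allocations tagged "hbase-rs" on the same node.
    PlacementConstraint noTwoOnSameNode =
        PlacementConstraints.targetNotIn(
            PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("hbase-rs"))
        .build();

    // One SchedulingRequest carrying the constraint. The expectation discussed
    // above is that the constraint is strictly respected regardless of whether
    // the PlacementProcessor or the scheduler logic in this patch handles it.
    SchedulingRequest request = SchedulingRequest.newBuilder()
        .allocationRequestId(1L)
        .priority(Priority.newInstance(1))
        .allocationTags(Collections.singleton("hbase-rs"))
        .resourceSizing(ResourceSizing.newInstance(4, Resource.newInstance(2048, 2)))
        .placementConstraintExpression(noTwoOnSameNode)
        .build();

    System.out.println(request);
  }
}
{code}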

bq. I can be convinced to remove the SELF target 
Filed YARN-7709 for this.









[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-06 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314893#comment-16314893
 ] 

Konstantinos Karanasos commented on YARN-6599:
--

[~wangda] and [~asuresh]:
* I think we should not introduce node partitions in this JIRA, not only because of 
the size of the patch, but because semantically they belong in a different JIRA. 
In general, this is a complicated feature we are adding. Let's get a first 
version working with node and rack, and once we are convinced it works, we can 
add node partitions, after we agree on how to do it properly and in a 
clean way.
* What we did in YARN-7613 is restrict the scope of a constraint to the 
application that defines it for the first version, so that we do not have 
to deal with prefixes. Again, we will eventually need something like this, but 
I don't think it should be introduced in this JIRA.
* I can be convinced to remove the SELF target, but let's discuss before taking 
action on this. I need to think a bit more about it -- it might not be just 
syntactic sugar, given we have decided to not include the source tags in the 
constraint itself.

I think we should chat offline to converge faster on these few points.








[jira] [Created] (YARN-7709) Remove SELF from TargetExpression type .

2018-01-06 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7709:


 Summary: Remove SELF from TargetExpression type .
 Key: YARN-7709
 URL: https://issues.apache.org/jira/browse/YARN-7709
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Priority: Blocker


As mentioned by [~asuresh], SELF means the target allocation tag is the same as the 
allocation tag of the scheduling request itself. So this is not a new type; it is 
still the ALLOCATION_TAG type.

If we really want this functionality, we can build it into PlacementConstraints, 
but I'm doubtful it is needed, since copying the allocation tags from the source is 
trivial.
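
As a rough illustration of the "copy the tags from the source" point (using the 
PlacementConstraints DSL from the YARN-6592 work; treat the names as assumptions 
rather than the final API): instead of a dedicated SELF target type, the client 
simply repeats its own allocation tags in an ALLOCATION_TAG target expression.

{code:java}
import java.util.Collections;
import java.util.Set;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class SelfTagSketch {
  public static void main(String[] args) {
    // The allocation tags this scheduling request itself would carry.
    Set<String> sourceTags = Collections.singleton("hbase-regionserver");

    // "SELF" expressed without a new target type: node anti-affinity to
    // allocations that carry the same tags as the request itself, i.e. the
    // client copies its source tags into an ALLOCATION_TAG target expression.
    PlacementConstraint antiAffinityToSelf =
        PlacementConstraints.targetNotIn(
            PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag(
                sourceTags.toArray(new String[0])))
        .build();

    System.out.println(antiAffinityToSelf);
  }
}
{code}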








[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314876#comment-16314876
 ] 

Wangda Tan commented on YARN-6599:
--

Thanks [~asuresh],

bq. Wangda Tan, apologize, but I think we really should split this part.

Let me explain a bit; it may not be as hard as you think. 

The following changes are related to partitions:

1) Added nodePartition to PlacementConstraints.
2) Renamed acceptNodePartition to precheckNode so we can blacklist nodes in 
AppPlacementAllocator.
3) Check the node partition inside the new AppPlacementAllocator implementation.
4) Added a partition check in PlacementConstraintsUtil.
5) Unit test related changes.

#1/#3 are very straightforward. #2 is needed anyway, since we want to plug in logic 
to check the SchedulingRequest while allocating/validating container allocations, 
regardless of how the logic is split. To me, #3 is also not hard. Please let me know 
your thoughts on this. It may take me several hours to split the logic, and I'm not 
sure it is worth it since we're pushing for the branch merge soon.
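
As a rough illustration of #1 above, a node-partition target could look something 
like the following; the PlacementTargets.nodePartition helper name and signature are 
assumptions based on this discussion, not a confirmed API:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class NodePartitionTargetSketch {
  public static void main(String[] args) {
    // Hypothetical: restrict the request to nodes in the "gpu" partition,
    // assuming a PlacementTargets.nodePartition(...) helper of the kind #1
    // adds to PlacementConstraints (name and signature assumed).
    PlacementConstraint inGpuPartition =
        PlacementConstraints.targetIn(
            PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.nodePartition("gpu"))
        .build();

    System.out.println(inGpuPartition);
  }
}
{code}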

bq. It just means source tag == target expression tag.
This does not look correct to me. First of all, it is not a new type; it is more like 
syntactic sugar that should be built inside PlacementConstraints, and I doubt it is 
really needed. We can discuss it in a separate JIRA.

bq. As I mentioned in the previous comment, there is this issue of 
application priority
To me, this is no different from affinity/anti-affinity to different tags. For 
example, a TensorFlow "worker" with affinity to a "parameter server" that is at a 
different priority: a wrong priority setting could still cause a deadlock. 
And in order to specify intra-app affinity/anti-affinity, we have to include the 
application id inside the allocation tags, since by default an allocation tag query 
covers all apps.
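
For concreteness, the "worker" to "parameter server" affinity mentioned above could 
be expressed roughly as below (again using the in-progress PlacementConstraints DSL; 
names are assumptions). The constraint refers to the "ps" allocation tag, independent 
of the priority at which the parameter-server containers were requested:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class WorkerPsAffinitySketch {
  public static void main(String[] args) {
    // Affinity of the TensorFlow "worker" request to nodes that already hold
    // allocations tagged "ps" (the parameter servers). The "ps" containers may
    // have been requested at a different priority; the tag, not the priority,
    // is what the constraint refers to.
    PlacementConstraint workerNearPs =
        PlacementConstraints.targetIn(
            PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("ps"))
        .build();

    System.out.println(workerNearPs);
  }
}
{code}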








[jira] [Commented] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314506#comment-16314506
 ] 

genericqa commented on YARN-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 136 unchanged - 0 fixed = 137 total (was 136) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904931/YARN-7663_6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a7616d07fc5e 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Comment Edited] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-06 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314450#comment-16314450
 ] 

lujie edited comment on YARN-7663 at 1/6/18 8:25 AM:
-

different from YARN-7663_5.patch
1. Replace fooTestAppNewKill with testAppStartAfterKilled
2. fix checkstyle error


was (Author: xiaoheipangzi):
different from YARN-7663_6.patch
1. Replace fooTestAppNewKill with testAppStartAfterKilled
2. fix checkstyle error

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Assignee: lujie
>Priority: Minor
>  Labels: patch
> Attachments: YARN-7663_1.patch, YARN-7663_2.patch, YARN-7663_3.patch, 
> YARN-7663_4.patch, YARN-7663_5.patch, YARN-7663_6.patch
>
>
> Send a kill to the application, and the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> If a sleep is inserted before the point where the START event is created, this 
> bug deterministically reproduces. 
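
For context, this failure is the generic StateMachineFactory guard: an event arriving 
in a state that has no registered transition for it throws 
InvalidStateTransitionException. Below is a toy, self-contained sketch of the race 
and of one common way such races are tolerated (registering an explicit no-op 
transition); the enums and class names are made up for illustration, and this is not 
the actual YARN-7663 patch.

{code:java}
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;

// Toy illustration only (made-up enums/classes, not RMAppImpl): an event with
// no registered transition for the current state triggers
// InvalidStateTransitionException, as in the RM log above.
public class StartAtKilledDemo {
  enum AppState { NEW, RUNNING, KILLED }
  enum AppEventType { START, KILL }
  static class AppEvent { }
  static class App { }

  public static void main(String[] args) throws Exception {
    StateMachineFactory<App, AppState, AppEventType, AppEvent> factory =
        new StateMachineFactory<App, AppState, AppEventType, AppEvent>(AppState.NEW)
            .addTransition(AppState.NEW, AppState.RUNNING, AppEventType.START)
            .addTransition(AppState.NEW, AppState.KILLED, AppEventType.KILL)
            // One common way to tolerate the race: register an explicit no-op
            // transition so a late START at KILLED is simply ignored instead of
            // throwing. Without this line, the second doTransition below throws
            // InvalidStateTransitionException.
            .addTransition(AppState.KILLED, AppState.KILLED, AppEventType.START)
            .installTopology();

    StateMachine<AppState, AppEventType, AppEvent> sm = factory.make(new App());
    sm.doTransition(AppEventType.KILL, new AppEvent());   // NEW -> KILLED
    sm.doTransition(AppEventType.START, new AppEvent());  // stays KILLED
    System.out.println("Final state: " + sm.getCurrentState()); // KILLED
  }
}
{code}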






[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-06 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: YARN-7663_6.patch

different from YARN-7663_5.patch
1. Replace fooTestAppNewKill with testAppStartAfterKilled
2. fix checkstyle error







[jira] [Comment Edited] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-06 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314450#comment-16314450
 ] 

lujie edited comment on YARN-7663 at 1/6/18 8:25 AM:
-

different from YARN-7663_5.patch
1. Replace fooTestAppNewKill with testAppStartAfterKilled at line 61
2. fix checkstyle error


was (Author: xiaoheipangzi):
different from YARN-7663_5.patch
1. Replace fooTestAppNewKill with testAppStartAfterKilled
2. fix checkstyle error







[jira] [Commented] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314442#comment-16314442
 ] 

genericqa commented on YARN-7663:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 136 unchanged - 0 fixed = 137 total (was 136) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904926/YARN-7663_5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2eb3b15d6976 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 836e3c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19134/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19134/testReport/ |
| Max. process+thread count | 878 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: