[jira] [Commented] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309245#comment-16309245
 ] 

stefanlee commented on YARN-7695:
-

I have a simple fix in *RMActiveServices.serviceInit*:
{code:java}
  // Initialize the scheduler
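  // Create it only once: on later active/standby transitions the existing
  // instance is reused, so no duplicate FairScheduler update thread is spawned.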
  if (scheduler == null) {
    scheduler = createScheduler();
  }
{code}
[~yufeigu] [~templedf], please have a look.

> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Comment Edited] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309071#comment-16309071
 ] 

stefanlee edited comment on YARN-7695 at 1/3/18 7:27 AM:
-

[~templedf] please have a look.


was (Author: imstefanlee):
[~dan...@cloudera.com] please have a look.

> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Comment Edited] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309112#comment-16309112
 ] 

stefanlee edited comment on YARN-7695 at 1/3/18 7:27 AM:
-

I think this problem occurs in 
*transitionToStandby->createAndInitActiveServices->RMActiveServices.serviceInit->scheduler.reinitialize(conf, rmContext)*: 
the *scheduler* there is a new object, isn't it? Please correct me if I am 
wrong. [~templedf]


was (Author: imstefanlee):
I think this problem occurs in 
*transitionToStandby->createAndInitActiveServices->RMActiveServices.serviceInit->scheduler.reinitialize(conf, rmContext)*: 
the *scheduler* there is a new object, isn't it? Please correct me if I am 
wrong. [~dan...@cloudera.com]

> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309227#comment-16309227
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

bq. but did not get why we need to assert that minScopeCardinality <= 
maxScopeCardinality.
We don’t really need to. I thought of it just as a sanity check that the user 
has not messed up with multiple tags, that is, to make sure that the max of 
mins is not larger than the min of maxs. If the constraint is right, this 
should not be the case. 

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.






[jira] [Commented] (YARN-7691) Add Unit Tests for ContainersLauncher

2018-01-02 Thread Sampada Dehankar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309218#comment-16309218
 ] 

Sampada Dehankar commented on YARN-7691:


Thanks for the review and commit [~asuresh].

> Add Unit Tests for ContainersLauncher
> -
>
> Key: YARN-7691
> URL: https://issues.apache.org/jira/browse/YARN-7691
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.9.1
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7691.001.patch, YARN-7691.002.patch
>
>
> We need to add more test in the recovry path.






[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309217#comment-16309217
 ] 

Arun Suresh commented on YARN-7682:
---

[~kkaranasos], I understand why it might be better to swap Long::min / Long::max as 
the op - and yes, it makes sense - but I did not get why we need to assert that 
minScopeCardinality <= maxScopeCardinality. Can you give an example, perhaps? 
Since those two values are just the max and min cardinality of the set of tags 
for that scope at that given moment anyway.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.






[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309205#comment-16309205
 ] 

Konstantinos Karanasos edited comment on YARN-7682 at 1/3/18 6:33 AM:
--

Thanks for the patch, [~pgaref]. Two main comments:
* I think that the functions you push in the getNodeCardinalityByOp should be 
reversed. That is, I think when you have multiple tags, you should look at the 
highest cardinality one in the node/rack, and your min cardinality of the 
constraint should be below that. Same for the max cardinality of the constraint 
(should be above the min of the actual cardinalities of the given tags). So 
essentially you would define your minScopeCardinality using the Long::max, and 
your maxScopeCardinality using the Long::min. Then everything goes as is. You 
can also make a check that your minScopeCardinality <= maxScopeCardinality, 
just to be on the safe side. Does it make sense?
* Do we need the line right after the comment “// Make sure Anti-affinity 
satisfies hard upper limit”?

Nits:
* Let’s fill out the javadoc comments in canSatisfyConstraints
* In canSatisfySingleConstraintExpression the javadoc is a bit ambiguous. "The 
node or rack should satisfy the constraints that are enabled by the given 
allocation tags". Also let’s add javadoc comments for its parameters (I know it 
is private, but it helps).
* In both methods you mention Node in the javadoc but you actually support RACK 
too.

If the above are fixed, +1 from me.


was (Author: kkaranasos):
Thanks for the patch, [~pgaref]. Two main comments:
* I think that the functions you push in the getNodeCardinalityByOp should be 
reversed. That is, I think when you have multiple tags, you should look at the 
highest cardinality one in the node/rack, and your min cardinality of the 
constraint should be below that. Same for the max cardinality of the constraint 
(should be above the min of the actual cardinalities of the given tags). So 
essentially you would define your minScopeCardinality using the Long::max, and 
your maxScopeCardinality using the Long::min. Then everything goes as is. You 
can also make a check that your minScopeCardinality <= maxScopeCardinality, 
just to be on the safe side. Does it make sense?
* Do we need the line right after the comment “// Make sure Anti-affinity 
satisfies hard upper limit”?

Nits:
* Let’s fill out the javadoc comments in canSatisfyConstraints
* In canSatisfySingleConstraintExpression the javadoc is a bit ambiguous. "The 
node or rack should satisfy the constraints that are enabled by the given 
allocation tags". Also let’s add javadoc comments for parameters its parameters 
(I know it is private, but it helps).
* In both methods you mention Node in the javadoc but you actually support RACK 
too.

If the above are fixed, +1 from me.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.






[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309205#comment-16309205
 ] 

Konstantinos Karanasos commented on YARN-7682:
--

Thanks for the patch, [~pgaref]. Two main comments:
* I think that the functions you push in the getNodeCardinalityByOp should be 
reversed. That is, I think when you have multiple tags, you should look at the 
highest cardinality one in the node/rack, and your min cardinality of the 
constraint should be below that. Same for the max cardinality of the constraint 
(should be above the min of the actual cardinalities of the given tags). So 
essentially you would define your minScopeCardinality using the Long::max, and 
your maxScopeCardinality using the Long::min. Then everything goes as is. You 
can also make a check that your minScopeCardinality <= maxScopeCardinality, 
just to be on the safe side. Does it make sense? (A sketch of the intended 
check follows this comment.)
* Do we need the line right after the comment “// Make sure Anti-affinity 
satisfies hard upper limit”?

Nits:
* Let’s fill out the javadoc comments in canSatisfyConstraints
* In canSatisfySingleConstraintExpression the javadoc is a bit ambiguous. "The 
node or rack should satisfy the constraints that are enabled by the given 
allocation tags". Also let’s add javadoc comments for parameters its parameters 
(I know it is private, but it helps).
* In both methods you mention Node in the javadoc but you actually support RACK 
too.

If the above are fixed, +1 from me.
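
To make the first point concrete, here is a minimal, self-contained sketch of 
the aggregation and check described above. It uses a plain Java map in place of 
the real AllocationTagsManager / getNodeCardinalityByOp call, so every name in 
it is illustrative only and not taken from the patch:
{code:java}
import java.util.Map;

final class CardinalityCheckSketch {
  // tagCardinalities: current cardinality of each source tag in the node/rack
  // scope; minC / maxC: the constraint's minimum and maximum cardinality.
  static boolean satisfies(Map<String, Long> tagCardinalities,
      long minC, long maxC) {
    // Aggregate with Long::max - the highest cardinality among the given tags.
    long minScopeCardinality =
        tagCardinalities.values().stream().reduce(0L, Long::max);
    // Aggregate with Long::min - the lowest cardinality among the given tags.
    long maxScopeCardinality =
        tagCardinalities.values().stream().reduce(Long.MAX_VALUE, Long::min);
    // The constraint's minimum must be covered by the highest actual
    // cardinality, and its maximum must not be undercut by the lowest one.
    return minC <= minScopeCardinality && maxC >= maxScopeCardinality;
  }
}
{code}
Whether the extra minScopeCardinality <= maxScopeCardinality sanity check adds 
value is exactly the question discussed above.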

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.






[jira] [Commented] (YARN-7691) Add Unit Tests for ContainersLauncher

2018-01-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309199#comment-16309199
 ] 

Hudson commented on YARN-7691:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13440 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13440/])
YARN-7691. Add Unit Tests for ContainersLauncher. (Sampada Dehankar via (arun 
suresh: rev c0c7cce81d5609f6347bff67929d5026d5893d75)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java


> Add Unit Tests for ContainersLauncher
> -
>
> Key: YARN-7691
> URL: https://issues.apache.org/jira/browse/YARN-7691
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.9.1
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7691.001.patch, YARN-7691.002.patch
>
>
> We need to add more tests in the recovery path.






[jira] [Updated] (YARN-7691) Add Unit Tests for ContainersLauncher

2018-01-02 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7691:
--
Summary: Add Unit Tests for ContainersLauncher  (was: Add Unit Tests for 
Containers Launcher)

> Add Unit Tests for ContainersLauncher
> -
>
> Key: YARN-7691
> URL: https://issues.apache.org/jira/browse/YARN-7691
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.9.1
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Attachments: YARN-7691.001.patch, YARN-7691.002.patch
>
>
> We need to add more tests in the recovery path.






[jira] [Commented] (YARN-7693) ContainersMonitor support configurable

2018-01-02 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309129#comment-16309129
 ] 

Jiandan Yang  commented on YARN-7693:
-

[~miklos.szeg...@cloudera.com] Thanks for your attention. This jira does not 
conflict with YARN-7064. I filed this jira because ContainersMonitorImpl 
currently has some problems:
1. An online service may crash due to high overall system resource utilization.
ContainersMonitorImpl only checks the pmem and vmem of every container and does 
not check the overall system utilization. This may impact online services when 
offline tasks and online services run on YARN at the same time. For example, 
even if no container's memory exceeds its limit, the system's total memory 
utilization may reach 100% because of oversubscription, and the RM's decision 
to kill a container may not be timely enough, which then affects the online 
service.
2. Directly killing an Opportunistic container is too drastic. Dynamically 
adjusting the resources of Opportunistic containers may be a better choice.
So I propose to:
1) Separate containers into two different groups, Opportunistic_Group and 
Guaranteed_Group, under *hadoop-yarn*
2) Monitor system resource utilization and dynamically adjust the resources of 
Opportunistic_Group
3) Kill a container only when adjusting its resources has failed a given number 
of times
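
On the configurability part of this jira, here is a minimal sketch of how the 
monitor implementation could be picked from configuration instead of being 
hard-coded in ContainerManagerImpl. The property name and the factory class are 
hypothetical; the sketch only assumes the standard {{Configuration.getClass}} 
pattern and that an alternative implementation keeps the same constructor 
signature as ContainersMonitorImpl:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.event.AsyncDispatcher;
import org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor;
import org.apache.hadoop.yarn.server.nodemanager.Context;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitor;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl;

final class ContainersMonitorFactory {
  // Hypothetical key; it does not exist in YarnConfiguration today.
  static final String MONITOR_CLASS_KEY =
      "yarn.nodemanager.containers-monitor.class";

  static ContainersMonitor create(Configuration conf, ContainerExecutor exec,
      AsyncDispatcher dispatcher, Context context) throws Exception {
    // Fall back to the stock implementation when nothing is configured.
    Class<? extends ContainersMonitor> clazz = conf.getClass(MONITOR_CLASS_KEY,
        ContainersMonitorImpl.class, ContainersMonitor.class);
    // Assumes the configured class exposes the same (exec, dispatcher, context)
    // constructor as ContainersMonitorImpl.
    return clazz.getConstructor(ContainerExecutor.class, AsyncDispatcher.class,
        Context.class).newInstance(exec, dispatcher, context);
  }
}
{code}
ContainerManagerImpl could then call such a factory instead of newing 
ContainersMonitorImpl directly.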

> ContainersMonitor support configurable
> --
>
> Key: YARN-7693
> URL: https://issues.apache.org/jira/browse/YARN-7693
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Minor
> Attachments: YARN-7693.001.patch, YARN-7693.002.patch
>
>
> Currently ContainersMonitor has only one default implementation, 
> ContainersMonitorImpl.
> After introducing Opportunistic containers, ContainersMonitor needs to monitor 
> system metrics and even dynamically adjust Opportunistic and Guaranteed 
> resources in the cgroup, so another ContainersMonitor implementation may need 
> to be provided. 
> The current ContainerManagerImpl directly instantiates ContainersMonitorImpl 
> with {{new}}, so ContainersMonitor needs to be configurable.






[jira] [Commented] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309112#comment-16309112
 ] 

stefanlee commented on YARN-7695:
-

I think this problem occurs in 
*transitionToStandby->createAndInitActiveServices->RMActiveServices.serviceInit->scheduler.reinitialize(conf, rmContext)*: 
the *scheduler* there is a new object, isn't it? Please correct me if I am 
wrong. [~dan...@cloudera.com]

> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309079#comment-16309079
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} YARN-6592 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-common in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 27s{color} | {color:orange} root: The patch generated 87 new + 1481 
unchanged - 15 fixed = 1568 total (was 1496) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red

[jira] [Updated] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stefanlee updated YARN-7695:

Description: 
1. I tested hadoop-2.4.0 in my cluster.
2. RM1 is active and RM2 is standby.
3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
4. RM1 then transitioned from active to standby successfully.
5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" and 
two "FairSchedulerUpdateThread" threads in RM1.

  was:
1. I tested hadoop-2.4.0 in my cluster.
2. RM1 is active and RM2 is standby.
3. I deleted /yarn-leader-election/DevSuningYarn/ActiveStandbyElectorLock from ZK.
4. RM1 then transitioned from active to standby successfully.
5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" and 
two "FairSchedulerUpdateThread" threads in RM1.


> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/Yarn/ActiveStandbyElectorLock from ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Commented] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309071#comment-16309071
 ] 

stefanlee commented on YARN-7695:
-

[~dan...@cloudera.com] please have a look.

> when active RM transit to standby , this RM will new another 
> FairSchedulerUpdate Thread
> ---
>
> Key: YARN-7695
> URL: https://issues.apache.org/jira/browse/YARN-7695
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.4.0
>Reporter: stefanlee
>
> 1. I tested hadoop-2.4.0 in my cluster.
> 2. RM1 is active and RM2 is standby.
> 3. I deleted /yarn-leader-election/DevSuningYarn/ActiveStandbyElectorLock from 
> ZK.
> 4. RM1 then transitioned from active to standby successfully.
> 5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" 
> and two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Created] (YARN-7695) when active RM transit to standby , this RM will new another FairSchedulerUpdate Thread

2018-01-02 Thread stefanlee (JIRA)
stefanlee created YARN-7695:
---

 Summary: when active RM transit to standby , this RM will new 
another FairSchedulerUpdate Thread
 Key: YARN-7695
 URL: https://issues.apache.org/jira/browse/YARN-7695
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler, resourcemanager
Affects Versions: 2.4.0
Reporter: stefanlee


1. I tested hadoop-2.4.0 in my cluster.
2. RM1 is active and RM2 is standby.
3. I deleted /yarn-leader-election/DevSuningYarn/ActiveStandbyElectorLock from ZK.
4. RM1 then transitioned from active to standby successfully.
5. Finally, I printed RM1's jstack info and found two "AllocationFileReloader" and 
two "FairSchedulerUpdateThread" threads in RM1.






[jira] [Commented] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2018-01-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309062#comment-16309062
 ] 

Hudson commented on YARN-7585:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13439 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13439/])
YARN-7585. NodeManager should go unhealthy when state store throws (szegedim: 
rev 7f515f57ede74dae787994f37bfafd5d20c9aa4c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/TestNMLeveldbStateStoreService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMStateStoreService.java


> NodeManager should go unhealthy when state store throws DBException 
> 
>
> Key: YARN-7585
> URL: https://issues.apache.org/jira/browse/YARN-7585
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 3.1.0
>
> Attachments: YARN-7585.001.patch, YARN-7585.002.patch, 
> YARN-7585.003.patch
>
>
> If work-preserving recovery is enabled, the NM will not start up if the state 
> store does not initialise. However, if the state store becomes unavailable 
> after that for any reason, the NM will not go unhealthy. 
> Since the state store is not available, new containers can no longer be 
> started, so the NM should become unhealthy:
> {code}
> AMLauncher: Error launching appattempt_1508806289867_268617_01. Got 
> exception: org.apache.hadoop.yarn.exceptions.YarnException: 
> java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at o.a.h.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:721)
> ...
> Caused by: java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at 
> o.a.h.y.s.n.r.NMLeveldbStateStoreService.storeApplication(NMLeveldbStateStoreService.java:374)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:848)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:712)
> {code}






[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2018-01-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309061#comment-16309061
 ] 

Hudson commented on YARN-6894:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13439 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13439/])
YARN-6894. RM Apps API returns only active apps when query parameter (szegedim: 
rev 80440231d49e518ab6411367d7d8474155ecca2b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md


> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch, 
> YARN-6894.003.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING).  This behavior is inconsistent with the API.  If there is 
> a sound reason behind this, it should be documented and it seems like there 
> might be as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309060#comment-16309060
 ] 

genericqa commented on YARN-7682:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 25 new + 0 unchanged - 3 fixed = 25 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m  
4s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904299/YARN-7682-YARN-6592.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b3060bcbd918 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 1c5fa65 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19079/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19079/testReport/ |
| Max. process+thread count | 818 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoo

[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309050#comment-16309050
 ] 

Mike Drob commented on YARN-7602:
-

+1 here

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch, YARN-7602.03.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.
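
For reference, a minimal sketch of the direction described above, assuming the 
metrics2 {{JvmMetrics.initSingleton(processName, sessionId)}} helper is 
available; the process name below is just an example:
{code:java}
import org.apache.hadoop.metrics2.source.JvmMetrics;

final class NodeManagerJvmMetricsSketch {
  static JvmMetrics init() {
    // Register (or reuse) the shared JvmMetrics source instead of creating a
    // second one, so a later registration - e.g. by an embedded HBase client -
    // does not collide with the NodeManager's own registration.
    return JvmMetrics.initSingleton("NodeManager", null);
  }
}
{code}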






[jira] [Commented] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309046#comment-16309046
 ] 

Miklos Szegedi commented on YARN-7585:
--

+1. Thank you for the contribution [~wilfreds] and for the review [~grepas]. I 
will commit this shortly.

> NodeManager should go unhealthy when state store throws DBException 
> 
>
> Key: YARN-7585
> URL: https://issues.apache.org/jira/browse/YARN-7585
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7585.001.patch, YARN-7585.002.patch, 
> YARN-7585.003.patch
>
>
> If work-preserving recovery is enabled, the NM will not start up if the state 
> store does not initialise. However, if the state store becomes unavailable 
> after that for any reason, the NM will not go unhealthy. 
> Since the state store is not available, new containers can no longer be 
> started, so the NM should become unhealthy:
> {code}
> AMLauncher: Error launching appattempt_1508806289867_268617_01. Got 
> exception: org.apache.hadoop.yarn.exceptions.YarnException: 
> java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at o.a.h.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:721)
> ...
> Caused by: java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at 
> o.a.h.y.s.n.r.NMLeveldbStateStoreService.storeApplication(NMLeveldbStateStoreService.java:374)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:848)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:712)
> {code}






[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309042#comment-16309042
 ] 

Miklos Szegedi commented on YARN-6894:
--

+1 Thank you for the contribution [~GergelyNovak] and for the reviews [~gsohn] 
and [~sunilg]. I will commit this shortly.

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch, 
> YARN-6894.003.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING).  This behavior is inconsistent with the API.  If there is 
> a sound reason behind this, it should be documented and it seems like there 
> might be as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API






[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2018-01-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309035#comment-16309035
 ] 

Hudson commented on YARN-7688:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13438 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13438/])
YARN-7688. Miscellaneous Improvements To ProcfsBasedProcessTree. (szegedim: rev 
626b5103d44692adf3882af61bdafa40114c44f7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java


> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch, 
> YARN-7688.4.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements






[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309011#comment-16309011
 ] 

Miklos Szegedi commented on YARN-7688:
--

Committed to trunk. Thank you for the contribution [~belugabehr].

> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch, 
> YARN-7688.4.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements






[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308982#comment-16308982
 ] 

Miklos Szegedi commented on YARN-7688:
--

+1 LGTM. I will commit this shortly.

> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch, 
> YARN-7688.4.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements






[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308953#comment-16308953
 ] 

Robert Kanter commented on YARN-7602:
-

+1 LGTM

[~mdrob], any other comments?

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch, YARN-7602.03.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.






[jira] [Commented] (YARN-7687) ContainerLogAppender Improvements

2018-01-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308952#comment-16308952
 ] 

Hudson commented on YARN-7687:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13435 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13435/])
YARN-7687. ContainerLogAppender Improvements. Contributed by BELUGA (szegedim: 
rev 33ae2a4ae1a9a6561157d2ec8a1d80cb5c50ff2d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerLogAppender.java


> ContainerLogAppender Improvements
> -
>
> Key: YARN-7687
> URL: https://issues.apache.org/jira/browse/YARN-7687
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: YARN-7687.1.patch, YARN-7687.2.patch, YARN-7687.3.patch
>
>
> * Use Array-backed collection instead of LinkedList
> * Ignore calls to {{close()}} after the initial call
> * Clear the queue after {{close}} is called to let garbage collection do its 
> magic on the items inside of it
> * Fix int-to-long conversion issue (overflow)
> * Remove superfluous white space






[jira] [Comment Edited] (YARN-7687) ContainerLogAppender Improvements

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308942#comment-16308942
 ] 

Miklos Szegedi edited comment on YARN-7687 at 1/3/18 12:58 AM:
---

Committed to trunk. Thank you for the contribution [~belugabehr].


was (Author: miklos.szeg...@cloudera.com):
Committed to trunk.

> ContainerLogAppender Improvements
> -
>
> Key: YARN-7687
> URL: https://issues.apache.org/jira/browse/YARN-7687
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: YARN-7687.1.patch, YARN-7687.2.patch, YARN-7687.3.patch
>
>
> * Use Array-backed collection instead of LinkedList
> * Ignore calls to {{close()}} after the initial call
> * Clear the queue after {{close}} is called to let garbage collection do its 
> magic on the items inside of it
> * Fix int-to-long conversion issue (overflow)
> * Remove superfluous white space






[jira] [Commented] (YARN-7687) ContainerLogAppender Improvements

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308936#comment-16308936
 ] 

Miklos Szegedi commented on YARN-7687:
--

+1 LGTM. Thank you for the contribution [~belugabehr]. I will commit this 
shortly.

> ContainerLogAppender Improvements
> -
>
> Key: YARN-7687
> URL: https://issues.apache.org/jira/browse/YARN-7687
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-7687.1.patch, YARN-7687.2.patch, YARN-7687.3.patch
>
>
> * Use Array-backed collection instead of LinkedList
> * Ignore calls to {{close()}} after the initial call
> * Clear the queue after {{close}} is called to let garbage collection do its 
> magic on the items inside of it
> * Fix int-to-long conversion issue (overflow)
> * Remove superfluous white space






[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.003.patch

[~asuresh] Thanks for the comments!
Attaching v003 of the patch.
Also added TestPlacementConstraintsUtil class testing the canSatisfyConstraints 
method in isolation as discussed.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682-YARN-6592.003.patch, 
> YARN-7682.wip.patch
>
>
> As per the discussion in YARN-7613, let's expose a {{canAssign}} method in the 
> PlacementConstraintManager that takes sourceTags, an applicationId, a 
> SchedulerNode and an AllocationTagsManager, and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.
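
A self-contained sketch of the intended contract, using plain strings in place of the real YARN types (ApplicationId, SchedulerNode, AllocationTagsManager) and a simple anti-affinity map; this illustrates the check, it is not the committed API.

{code:java}
import java.util.Map;
import java.util.Set;

public class CanAssignSketch {
  /** Tags currently allocated on each node, keyed by node id. */
  private final Map<String, Set<String>> tagsOnNode;
  /** Anti-affinity constraints: source tag -> tags it must not co-locate with. */
  private final Map<String, Set<String>> antiAffinity;

  public CanAssignSketch(Map<String, Set<String>> tagsOnNode,
                         Map<String, Set<String>> antiAffinity) {
    this.tagsOnNode = tagsOnNode;
    this.antiAffinity = antiAffinity;
  }

  /**
   * Returns true if placing a container carrying sourceTags on nodeId
   * would not violate any registered anti-affinity constraint.
   */
  public boolean canAssign(Set<String> sourceTags, String nodeId) {
    Set<String> present = tagsOnNode.getOrDefault(nodeId, Set.of());
    for (String tag : sourceTags) {
      for (String banned : antiAffinity.getOrDefault(tag, Set.of())) {
        if (present.contains(banned)) {
          return false;   // constraint violated on this node
        }
      }
    }
    return true;
  }
}
{code}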



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1630#comment-1630
 ] 

genericqa commented on YARN-7602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 46s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 19s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
|   | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerRecovery
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestLocalCacheDirectoryManager
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-760

[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308875#comment-16308875
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} root in YARN-6592 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
17s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} YARN-6592 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-api in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-common in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-common in YARN-6592 failed. {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-client in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-mapreduce-client-app in YARN-6592 failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-sls in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  6m 
28s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-api in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-common in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-common in YARN-6592 failed. {color} 
|
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-client in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-api in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-common in YARN-6592 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-common in YARN-6592 failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-6592 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-client in YARN-6592 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn in 

[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2018-01-02 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308849#comment-16308849
 ] 

Robert Kanter commented on YARN-7622:
-

Overall looks good.  Just one trivial comment:
- We should add s3 and viewfs to the list of allowed filesystems.

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch, YARN-7622.004.patch, YARN-7622.005.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
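
For illustration, once remote filesystems are allowed, pointing the existing allocation-file setting at HDFS could look roughly like the snippet below. The namenode address and path are placeholders, and an hdfs:// value is exactly what this issue proposes to support, so treat it as aspirational rather than currently working behavior.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class FairSchedulerAllocFileExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "yarn.scheduler.fair.allocation.file" is the existing FairScheduler
    // setting; an hdfs:// value is what this change would make legal.
    conf.set("yarn.scheduler.fair.allocation.file",
        "hdfs://namenode:8020/yarn/fair-scheduler.xml");
    System.out.println(conf.get("yarn.scheduler.fair.allocation.file"));
  }
}
{code}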



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7688) Miscellaneous Improvements To ProcfsBasedProcessTree

2018-01-02 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308841#comment-16308841
 ] 

BELUGA BEHR commented on YARN-7688:
---

[~miklos.szeg...@cloudera.com] Kindly consider this patch to the project. :)

> Miscellaneous Improvements To ProcfsBasedProcessTree
> 
>
> Key: YARN-7688
> URL: https://issues.apache.org/jira/browse/YARN-7688
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7688.1.patch, YARN-7688.2.patch, YARN-7688.3.patch, 
> YARN-7688.4.patch
>
>
> * Use ArrayDeque for performance instead of LinkedList
> * Use more Apache Commons routines to replace existing implementations
> * Remove superfluous code guards around DEBUG statements
> * Remove superfluous annotations in the tests
> * Other small improvements
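
Two of the listed changes, shown on toy code rather than the real ProcfsBasedProcessTree (names are illustrative): a parameterized SLF4J call needs no explicit isDebugEnabled() guard, and ArrayDeque avoids LinkedList's per-element node allocations.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProcessTreeStyleSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ProcessTreeStyleSketch.class);

  public Deque<String> newWorkQueue() {
    // Array-backed deque instead of LinkedList.
    return new ArrayDeque<>();
  }

  public void visit(Deque<String> pidsToVisit) {
    String pid = pidsToVisit.poll();
    // Before: if (LOG.isDebugEnabled()) { LOG.debug("visiting " + pid); }
    // After: the parameterized form only formats the message when debug is enabled.
    LOG.debug("visiting {}", pid);
  }
}
{code}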



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7694) Optionally run shared cache manager as part of the resource manager

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308833#comment-16308833
 ] 

Miklos Szegedi commented on YARN-7694:
--

[~ctrezzo], thank you for raising this. The shared cache manager could extend 
its features when running as part of the resource manager. It might be useful to 
register any HDFS file for deletion if it has not been used for a while, even if 
it is not part of the shared cache, reusing the same codebase. What do you think?

> Optionally run shared cache manager as part of the resource manager
> ---
>
> Key: YARN-7694
> URL: https://issues.apache.org/jira/browse/YARN-7694
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>
> Currently the shared cache manager is its own stand-alone daemon. It is a 
> YARN composite service. Ideally, the shared cache manager could optionally be 
> run as part of the resource manager. This way administrators would have to 
> manage one less daemon.
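
A generic sketch of the composite-service idea described above: a parent service conditionally adds another service as a child so both run in one daemon. The config key and the placeholder child used here are assumptions, not the actual RM wiring.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.CompositeService;

public class EmbeddedServiceSketch extends CompositeService {
  public EmbeddedServiceSketch() {
    super(EmbeddedServiceSketch.class.getName());
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // Hypothetical flag; in the real change this would decide whether the
    // SharedCacheManager is added as a child service of the RM.
    if (conf.getBoolean("yarn.sharedcache.run-in-rm", false)) {
      addService(new CompositeService("SharedCacheManagerPlaceholder"));
    }
    super.serviceInit(conf);
  }
}
{code}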



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308829#comment-16308829
 ] 

genericqa commented on YARN-7602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 12s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 45s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
|   | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerRecovery
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestLocalCacheDirectoryManager
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-760

[jira] [Commented] (YARN-7693) ContainersMonitor support configurable

2018-01-02 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308824#comment-16308824
 ] 

Miklos Szegedi commented on YARN-7693:
--

[~yangjiandan], thank you for raising this. How is this jira related to 
YARN-7064? I believe a new resource calculator is enough in order to use 
cgroups for oversubscription resource measurement.

> ContainersMonitor support configurable
> --
>
> Key: YARN-7693
> URL: https://issues.apache.org/jira/browse/YARN-7693
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Minor
> Attachments: YARN-7693.001.patch, YARN-7693.002.patch
>
>
> Currently ContainersMonitor has only one default implementation, 
> ContainersMonitorImpl. After introducing Opportunistic Containers, 
> ContainersMonitor needs to monitor system metrics and even dynamically adjust 
> Opportunistic and Guaranteed resources in the cgroup, so another 
> ContainersMonitor implementation may be needed. 
> ContainerManagerImpl currently instantiates ContainersMonitorImpl directly 
> with new, so ContainersMonitor needs to be made configurable.
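
A minimal sketch of config-driven instantiation using Hadoop's Configuration and ReflectionUtils; the property name below is hypothetical and the constructor wiring for the real monitor is omitted.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class ContainersMonitorFactory {
  // Hypothetical property name, for illustration only.
  private static final String MONITOR_CLASS_KEY =
      "yarn.nodemanager.containers-monitor.class";

  public static <T> T create(Configuration conf, Class<T> base,
                             Class<? extends T> defaultImpl) {
    Class<? extends T> clazz = conf.getClass(MONITOR_CLASS_KEY, defaultImpl, base);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}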



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7694) Optionally run shared cache manager as part of the resource manager

2018-01-02 Thread Chris Trezzo (JIRA)
Chris Trezzo created YARN-7694:
--

 Summary: Optionally run shared cache manager as part of the 
resource manager
 Key: YARN-7694
 URL: https://issues.apache.org/jira/browse/YARN-7694
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chris Trezzo


Currently the shared cache manager is its own stand-alone daemon. It is a YARN 
composite service. Ideally, the shared cache manager could optionally be run as 
part of the resource manager. This way administrators would have to manage one 
less daemon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.006.patch

Rebased (006)

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308781#comment-16308781
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} YARN-6599 does not apply to YARN-6592. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6599 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904288/YARN-6599-YARN-6592.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19076/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.005.patch

Fixed more UTs. (Ver.5)

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308725#comment-16308725
 ] 

genericqa commented on YARN-7602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 1 new + 7 unchanged - 
0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 45s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 44s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904269/YARN-7602.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e3a14449f4f8 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh

[jira] [Commented] (YARN-3348) Add a 'yarn top' tool to help understand cluster usage

2018-01-02 Thread VP (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308713#comment-16308713
 ] 

VP commented on YARN-3348:
--

When I redirect the output of yarn top to a file, it shows junk characters in 
the file.

Normally, top has a -b (batch) option which removes this data; does yarn top 
have any similar option?

> Add a 'yarn top' tool to help understand cluster usage
> --
>
> Key: YARN-3348
> URL: https://issues.apache.org/jira/browse/YARN-3348
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: apache-yarn-3348.0.patch, apache-yarn-3348.1.patch, 
> apache-yarn-3348.2.patch, apache-yarn-3348.3.patch, apache-yarn-3348.4.patch, 
> apache-yarn-3348.5.patch, apache-yarn-3348.branch-2.0.patch
>
>
> It would be helpful to have a 'yarn top' tool that would allow administrators 
> to understand which apps are consuming resources.
> Ideally the tool would allow you to filter by queue, user, maybe labels, etc 
> and show you statistics on container allocation across the cluster to find 
> out which apps are consuming the most resources on the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7602:
-
Attachment: YARN-7602.03.patch

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch, YARN-7602.03.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.
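
A rough sketch of the intended usage based on the discussion in this thread: every component asks for the shared instance instead of registering its own. The initSingleton name follows the patch discussion; the processName/sessionId arguments are assumptions.

{code:java}
import org.apache.hadoop.metrics2.source.JvmMetrics;

public class JvmMetricsSingletonSketch {
  public static void main(String[] args) {
    // First caller (the NM) initializes the shared instance.
    JvmMetrics first = JvmMetrics.initSingleton("NodeManager", null);
    // A later caller (e.g. an embedded HBase client) gets the same instance
    // back instead of triggering a duplicate metrics registration.
    JvmMetrics second = JvmMetrics.initSingleton("NodeManager", null);
    System.out.println(first == second);   // expected: true
  }
}
{code}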



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308680#comment-16308680
 ] 

Haibo Chen commented on YARN-7602:
--

The unit test failures seem unrelated to me. Will update the patch to address 
the checkstyle issue and see what Jenkins says.

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch, YARN-7602.03.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308628#comment-16308628
 ] 

genericqa commented on YARN-7602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 1 new + 7 unchanged - 
0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 13s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904258/YARN-7602.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f883f097fcc3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided

[jira] [Updated] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7213:
-
Attachment: YARN-7602.02.patch

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations, and they are 
> getting ready for the Hadoop-beta release so that HBase can release 
> versions compatible with Hadoop-beta. So, this JIRA is to keep track of 
> HBase-2.0 integration issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308568#comment-16308568
 ] 

Haibo Chen commented on YARN-7602:
--

[~mdrob], the patch is updated according to your quick review!

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7602:
-
Attachment: YARN-7602.02.patch

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch, 
> YARN-7602.02.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7213:
-
Attachment: (was: YARN-7602.02.patch)

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations, and they are 
> getting ready for the Hadoop-beta release so that HBase can release 
> versions compatible with Hadoop-beta. So, this JIRA is to keep track of 
> HBase-2.0 integration issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308502#comment-16308502
 ] 

Mike Drob commented on YARN-7602:
-

{code}
+Assert.assertTrue("initSingleton should return the singleton instance",
+jvmMetrics1.equals(jvmMetrics2));
{code}
Prefer assertEquals instead for clearer failure messages.
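
For reference, the suggested form would look roughly like:
{code}
Assert.assertEquals("initSingleton should return the singleton instance",
    jvmMetrics1, jvmMetrics2);
{code}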

Add a test for what happens if there are two calls to initSingleton with 
different prefixes?

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This will easily cause NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager that hosts a HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7691) Add Unit Tests for Containers Launcher

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308447#comment-16308447
 ] 

genericqa commented on YARN-7691:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
44s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7691 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904248/YARN-7691.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dc7442faa0eb 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fe6f83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19071/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19071/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add Unit Tests for Containers Launcher
> 

[jira] [Resolved] (YARN-6795) Add per-node max allocation threshold with respect to its capacity

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-6795.
--
Resolution: Duplicate

> Add per-node max allocation threshold with respect to its capacity
> --
>
> Key: YARN-6795
> URL: https://issues.apache.org/jira/browse/YARN-6795
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7590) Improve container-executor validation check

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308445#comment-16308445
 ] 

genericqa commented on YARN-7590:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
21s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904249/YARN-7590.006.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 14846bdbd8af 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fe6f83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19070/testReport/ |
| Max. process+thread count | 317 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19070/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch
>
>
> There is minimal checking of the prefix path in container-executor.  If YARN is 
> compromised, an attacker can use container-executor to change system files

[jira] [Commented] (YARN-3136) getTransferredContainers can be a bottleneck during AM registration

2018-01-02 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308446#comment-16308446
 ] 

Jason Lowe commented on YARN-3136:
--

bq. could you please tell me what is the jira about "We've already done similar 
work during AM allocate calls to make sure they don't needlessly grab the 
scheduler lock"?

I was not referring to a specific JIRA  but rather the existing structure of 
the code where the scheduler drops off allocated containers for the AM to pick 
up without needing to grab the scheduler lock.  If you're seeing a lot of 
blocked IPC threads for AM allocate calls then I think you should file a new 
JIRA with the common stack trace(s) showing how it's blocked.  We can then move 
the discussion there.
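
For readers following the thread, here is a minimal sketch (plain Java, 
illustrative names only, not the actual YARN scheduler code) of the hand-off 
structure described above: the scheduler thread drops newly allocated containers 
onto a per-application concurrent queue, and the AM-facing RPC thread drains that 
queue without ever taking the scheduler lock.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Illustrative-only sketch of a lock-free hand-off between a scheduler thread
 * and AM-facing RPC threads. "Container" is a type parameter standing in for
 * the real YARN container type.
 */
public final class AllocationHandoff<Container> {
  private final Map<String, Queue<Container>> newlyAllocated =
      new ConcurrentHashMap<>();

  /** Called by the scheduler thread, typically while it holds its own lock. */
  public void offer(String applicationId, Container container) {
    newlyAllocated
        .computeIfAbsent(applicationId, id -> new ConcurrentLinkedQueue<>())
        .add(container);
  }

  /** Called by an RPC handler thread; never touches the scheduler lock. */
  public List<Container> pull(String applicationId) {
    List<Container> result = new ArrayList<>();
    Queue<Container> queue = newlyAllocated.get(applicationId);
    if (queue != null) {
      for (Container c; (c = queue.poll()) != null; ) {
        result.add(c);
      }
    }
    return result;
  }
}
{code}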

> getTransferredContainers can be a bottleneck during AM registration
> ---
>
> Key: YARN-3136
> URL: https://issues.apache.org/jira/browse/YARN-3136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Sunil G
>  Labels: 2.7.2-candidate
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: 0001-YARN-3136.patch, 00010-YARN-3136.patch, 
> 00011-YARN-3136.patch, 00012-YARN-3136.patch, 00013-YARN-3136.patch, 
> 0002-YARN-3136.patch, 0003-YARN-3136.patch, 0004-YARN-3136.patch, 
> 0005-YARN-3136.patch, 0006-YARN-3136.patch, 0007-YARN-3136.patch, 
> 0008-YARN-3136.patch, 0009-YARN-3136.patch, YARN-3136.branch-2.7.patch
>
>
> While examining RM stack traces on a busy cluster I noticed a pattern of AMs 
> stuck waiting for the scheduler lock trying to call getTransferredContainers. 
>  The scheduler lock is highly contended, especially on a large cluster with 
> many nodes heartbeating, and it would be nice if we could find a way to 
> eliminate the need to grab this lock during this call.  We've already done 
> similar work during AM allocate calls to make sure they don't needlessly grab 
> the scheduler lock, and it would be good to do so here as well, if possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7602) NM should reference the singleton JvmMetrics instance

2018-01-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7602:
-
Attachment: YARN-7602.01.patch

Thanks [~md...@cloudera.com] for the review. I updated the patch accordingly.

> NM should reference the singleton JvmMetrics instance
> -
>
> Key: YARN-7602
> URL: https://issues.apache.org/jira/browse/YARN-7602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7602.00.patch, YARN-7602.01.patch
>
>
> NM does not reference the singleton JvmMetrics instance in its 
> NodeManagerMetrics. This can easily cause the NM to crash if any of the node 
> manager components tries to register JvmMetrics. An example of this is 
> TimelineCollectorManager, which hosts an HBaseClient that registers JvmMetrics 
> again. See HBASE-19409 for details.
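
As background for the above, here is a minimal sketch of the singleton pattern 
the description asks for, assuming Hadoop's metrics2 {{JvmMetrics.create}} and 
{{DefaultMetricsSystem}} APIs; the holder class itself is illustrative only, not 
the actual NodeManager code.

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.source.JvmMetrics;

/**
 * Illustrative-only holder: create the JVM metrics source once and hand the
 * same instance to every component, so a second component (for example a
 * metrics-enabled HBase client) cannot trigger a duplicate registration.
 */
public final class SharedJvmMetrics {
  private static volatile JvmMetrics instance;

  private SharedJvmMetrics() {
  }

  public static JvmMetrics getOrCreate(String processName, String sessionId) {
    if (instance == null) {
      synchronized (SharedJvmMetrics.class) {
        if (instance == null) {
          // create(...) registers the source with the default metrics system.
          instance = JvmMetrics.create(processName, sessionId,
              DefaultMetricsSystem.instance());
        }
      }
    }
    return instance;
  }
}
{code}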



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7691) Add Unit Tests for Containers Launcher

2018-01-02 Thread Sampada Dehankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sampada Dehankar updated YARN-7691:
---
Attachment: YARN-7691.002.patch

> Add Unit Tests for Containers Launcher
> --
>
> Key: YARN-7691
> URL: https://issues.apache.org/jira/browse/YARN-7691
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.9.1
>Reporter: Sampada Dehankar
>Assignee: Sampada Dehankar
> Attachments: YARN-7691.001.patch, YARN-7691.002.patch
>
>
> We need to add more tests in the recovery path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7590) Improve container-executor validation check

2018-01-02 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.006.patch

- Updated according to most recent feedback.

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch
>
>
> There is minimal checking of the prefix path in container-executor.  If YARN is 
> compromised, an attacker can use container-executor to change the ownership of 
> system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2018-01-02 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308357#comment-16308357
 ] 

Eric Yang edited comment on YARN-7677 at 1/2/18 4:54 PM:
-

[~ebadger] Happy new year.  I think it will be safer for {{HADOOP_CONF_DIR}} to 
be passed from the host to the Docker image by default.  This is better for 
preventing mistakes than allowing system-specific settings to be overridden at 
the container level.  This will also ensure that when an application requires 
system settings, Docker doesn't need to reconstruct the environment but can 
simply mount {{HADOOP_CONF_DIR}} as the source of truth.  If a Docker container 
wants to generate its own environment, there shouldn't be anything getting in 
the way of the Docker application accomplishing that.  I don't understand how 
this is paramount for the Docker case; could you elaborate?  Thanks


was (Author: eyang):
[~ebadger] Happy new year.  I think it will be safer for {HADOOP_CONF_DIR} to 
be passed from host to docker image as default.  This is better for preventing 
mistakes instead of allowing override system specific settings at container 
level.  This will also ensure that when an application requires system 
settings, docker doesn't need to reconstruct the environment, but simply mount 
the {HADOOP_CONF_DIR} as source of truth.  If docker container wants to 
generate its own environment, there shouldn't be anything getting in the way 
for docker application to accomplish that.  I don't understand how is this 
paramount for docker case, could you elaborate?  Thanks

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2018-01-02 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308357#comment-16308357
 ] 

Eric Yang commented on YARN-7677:
-

[~ebadger] Happy new year.  I think it will be safer for {HADOOP_CONF_DIR} to 
be passed from the host to the Docker image by default.  This is better for 
preventing mistakes than allowing system-specific settings to be overridden at 
the container level.  This will also ensure that when an application requires 
system settings, Docker doesn't need to reconstruct the environment but can 
simply mount {HADOOP_CONF_DIR} as the source of truth.  If a Docker container 
wants to generate its own environment, there shouldn't be anything getting in 
the way of the Docker application accomplishing that.  I don't understand how 
this is paramount for the Docker case; could you elaborate?  Thanks

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308305#comment-16308305
 ] 

Arun Suresh commented on YARN-7682:
---

Thanks for the update [~pgaref]

It is starting to look neat now :)
Minor nits:
* Looks like the changes in PlacementConstraintManager and 
MemoryPlacementConstraintManager are not needed.
* The tests look good - but can you put a brief comment describing the intent 
of the placement constraint and the expectation, especially for the 
testComplexPlacement case.
* In addition to the testProcessor, maybe we can add one more test class - 
TestPlacementConstraintsUtil - which just tests {{canSatisfyConstraints}} in 
isolation? That way, we can add more complex constraint tests in the future if 
needed.

+1 pending the above. [~kkaranasos], thoughts ?



> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.
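
For illustration only, here is a minimal, self-contained sketch of the kind of 
check described above. The real method would additionally take ApplicationId, 
SchedulerNode and AllocationTagsManager; those types are replaced here by a plain 
map of tag cardinalities on the candidate node so the sketch compiles on its own.

{code:java}
import java.util.Map;
import java.util.Set;

/** Illustrative-only sketch, not the actual YARN-6592 branch code. */
public final class CanAssignSketch {
  private CanAssignSketch() {
  }

  /**
   * Returns true if placing one allocation with the given source tags on a
   * node would keep every tag at or below maxCardinality on that node.
   */
  static boolean canAssign(Set<String> sourceTags,
                           Map<String, Long> nodeTagCardinality,
                           long maxCardinality) {
    for (String tag : sourceTags) {
      long current = nodeTagCardinality.getOrDefault(tag, 0L);
      if (current + 1 > maxCardinality) {
        return false; // placing this allocation would violate the constraint
      }
    }
    return true;
  }
}
{code}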



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308277#comment-16308277
 ] 

genericqa commented on YARN-7682:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 0 unchanged - 3 fixed = 8 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904231/YARN-7682-YARN-6592.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a9cae21312b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 1c5fa65 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19069/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19069/artifac

[jira] [Updated] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-02 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7451:
-
Description: 
When running jobs that request resource types the RM Cluster Apps API should 
include this in the "resourceRequests" object.

Additionally, when calling the RM scheduler API it returns:
{noformat}
 "childQueues": {
"queue": [
{
"allocatedContainers": 101,
"amMaxResources": {
"memory": 320390,
"vCores": 192
},
"amUsedResources": {
"memory": 1024,
"vCores": 1
},
"clusterResources": {
"memory": 640779,
"vCores": 384
},
"demandResources": {
"memory": 103424,
"vCores": 101
},
"fairResources": {
"memory": 640779,
"vCores": 384
},
"maxApps": 2147483647,
"maxResources": {
"memory": 640779,
"vCores": 384
},
"minResources": {
"memory": 0,
"vCores": 0
},
"numActiveApps": 1,
"numPendingApps": 0,
"preemptable": true,
"queueName": "root.users.systest",
"reservedContainers": 0,
"reservedResources": {
"memory": 0,
"vCores": 0
},
"schedulingPolicy": "fair",
"steadyFairResources": {
"memory": 320390,
"vCores": 192
},
"type": "fairSchedulerLeafQueueInfo",
"usedResources": {
"memory": 103424,
"vCores": 101
}
}
]
{noformat}

However, the web UI shows resource types usage.


  was:
When running jobs that request resource types the RM Cluster Apps API should 
include this in the "resourceRequests" object.

Additionally, when calling the RM scheduler API it returns:
{noformat}
 "childQueues": {
"queue": [
{
"allocatedContainers": 101,
"amMaxResources": {
"memory": 320390,
"vCores": 192
},
"amUsedResources": {
"memory": 1024,
"vCores": 1
},
"clusterResources": {
"memory": 640779,
"vCores": 384
},
"demandResources": {
"memory": 103424,
"vCores": 101
},
"fairResources": {
"memory": 640779,
"vCores": 384
},
"maxApps": 2147

[jira] [Updated] (YARN-7451) Resources Types should be visible in the Cluster Apps API "resourceRequests" section

2018-01-02 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7451:
-
Description: 
When running jobs that request resource types the RM Cluster Apps API should 
include this in the "resourceRequests" object.

Additionally, when calling the RM scheduler API it returns:
{noformat}
 "childQueues": {
"queue": [
{
"allocatedContainers": 101,
"amMaxResources": {
"memory": 320390,
"vCores": 192
},
"amUsedResources": {
"memory": 1024,
"vCores": 1
},
"clusterResources": {
"memory": 640779,
"vCores": 384
},
"demandResources": {
"memory": 103424,
"vCores": 101
},
"fairResources": {
"memory": 640779,
"vCores": 384
},
"maxApps": 2147483647,
"maxResources": {
"memory": 640779,
"vCores": 384
},
"minResources": {
"memory": 0,
"vCores": 0
},
"numActiveApps": 1,
"numPendingApps": 0,
"preemptable": true,
"queueName": "root.users.systest",
"reservedContainers": 0,
"reservedResources": {
"memory": 0,
"vCores": 0
},
"schedulingPolicy": "fair",
"steadyFairResources": {
"memory": 320390,
"vCores": 192
},
"type": "fairSchedulerLeafQueueInfo",
"usedResources": {
"memory": 103424,
"vCores": 101
}
}
]
{noformat}

However, the web UI shows resource types usage.  (See screenshot)


  was:
When running jobs that request resource types the RM Cluster Apps API should 
include this in the "resourceRequests" object.




> Resources Types should be visible in the Cluster Apps API "resourceRequests" 
> section
> 
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>
> When running jobs that request resource types the RM Cluster Apps API should 
> include this in the "resourceRequests" object.
> Additionally, when calling the RM scheduler API it returns:
> {noformat}
>  "childQueues": {
> "queue": [
> {
> "allocatedContainers": 101,
> "amMaxResources": {
> "memory": 320390,
> "vCores": 192
> },
> "amUsedResources": {
> "memory": 1024,
> "vCores": 1
> },
> "clusterRe

[jira] [Commented] (YARN-7653) Rack cardinality support for AllocationTagsManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308092#comment-16308092
 ] 

Panagiotis Garefalakis commented on YARN-7653:
--

Hello [~leftnoteasy],  I agree with the title change.
Regarding the node group support: 
* in the discussion above we agreed that we need to at least support Rack, as 
it is already defined in our API
* in the committed patch the CountedTags inner class is generic, with the goal 
of supporting any arbitrary node group. The only thing we would add is an extra 
data structure keeping a group-to-CountedTags mapping; in that scenario RACK 
would just be one specific node group (see the sketch after this list)
* to keep things simple, since we don't have arbitrary groups so far, this 
extra mapping is not there - as we would also need a way to define/add/remove 
node groups - but I would be happy to work on that if we want to support it
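
For illustration, a minimal sketch (plain Java, illustrative names, not the 
actual AllocationTagsManager code) of the group-to-counts mapping mentioned in 
the second bullet, with a rack as one concrete node group:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative-only sketch: allocation tag cardinality tracked per node group. */
public final class GroupTagCardinality {
  // group (e.g. "RACK-1") -> tag (e.g. "spark") -> number of containers
  private final Map<String, Map<String, AtomicLong>> perGroup =
      new ConcurrentHashMap<>();

  public void addContainer(String group, Set<String> tags) {
    Map<String, AtomicLong> counts =
        perGroup.computeIfAbsent(group, g -> new ConcurrentHashMap<>());
    for (String tag : tags) {
      counts.computeIfAbsent(tag, t -> new AtomicLong()).incrementAndGet();
    }
  }

  /** e.g. getCardinality("RACK-1", "spark"): how many "spark" containers on RACK-1. */
  public long getCardinality(String group, String tag) {
    Map<String, AtomicLong> counts = perGroup.get(group);
    if (counts == null) {
      return 0L;
    }
    AtomicLong n = counts.get(tag);
    return n == null ? 0L : n.get();
  }
}
{code}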



> Rack cardinality support for AllocationTagsManager
> --
>
> Key: YARN-7653
> URL: https://issues.apache.org/jira/browse/YARN-7653
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
> Fix For: YARN-6592
>
> Attachments: YARN-7653-YARN-6592.001.patch, 
> YARN-7653-YARN-6592.002.patch, YARN-7653-YARN-6592.003.patch
>
>
> AllocationTagsManager currently supports node and cluster-wide tag 
> cardinality retrieval.
> If we want to support arbitrary node-groups/scopes for our placement 
> constraints TagsManager should be extended to provide such functionality.
> As a first step we need to support RACK scope cardinality retrieval (as 
> defined in our API).
> i.e. how many "spark" containers are currently running on "RACK-1"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308067#comment-16308067
 ] 

Panagiotis Garefalakis edited comment on YARN-7682 at 1/2/18 1:44 PM:
--

[~asuresh] [~kkaranasos] thanks for the feedback.

Please check the latest patch.
It assumes that target allocation tags need to be present before the 
constrained request arrives; otherwise the request gets rejected and it is up 
to the AM to resend it. Thus there is no need to differentiate between source 
and target tags in the current implementation.

I also included some more complex test cases covering intra-application 
affinity, anti-affinity and cardinality constraints.


was (Author: pgaref):
[~asuresh] [~kkaranasos] thanks for the feedback.

Please find attached the latest patch.
It assumes target allocation tags need to be present before the constrained 
request arrival otherwise they get rejected and it is up to the AM to resend.
Thus there is no need to differentiate between source and target Tags in the 
current implementation.

I also included some more complex test cases including intra-application 
affinity, antiaffinity and cardinality constraints.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7682) Expose canAssign method in the PlacementConstraintManager

2018-01-02 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7682:
-
Attachment: YARN-7682-YARN-6592.002.patch

[~asuresh] [~kkaranasos] thanks for the feedback.

Please find attached the latest patch.
It assumes that target allocation tags need to be present before the 
constrained request arrives; otherwise the request gets rejected and it is up 
to the AM to resend it. Thus there is no need to differentiate between source 
and target tags in the current implementation.

I also included some more complex test cases covering intra-application 
affinity, anti-affinity and cardinality constraints.

> Expose canAssign method in the PlacementConstraintManager
> -
>
> Key: YARN-7682
> URL: https://issues.apache.org/jira/browse/YARN-7682
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7682-YARN-6592.001.patch, 
> YARN-7682-YARN-6592.002.patch, YARN-7682.wip.patch
>
>
> As per discussion in YARN-7613. Lets expose {{canAssign}} method in the 
> PlacementConstraintManager that takes a sourceTags, applicationId, 
> SchedulerNode and AllocationTagsManager and returns true if constraints are 
> not violated by placing the container on the node.
> I prefer not passing in the SchedulingRequest, since it can have > 1 
> numAllocations. We want this api to be called for single allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2018-01-02 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307968#comment-16307968
 ] 

Gergo Repas commented on YARN-7585:
---

+1 (non-binding)

> NodeManager should go unhealthy when state store throws DBException 
> 
>
> Key: YARN-7585
> URL: https://issues.apache.org/jira/browse/YARN-7585
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7585.001.patch, YARN-7585.002.patch, 
> YARN-7585.003.patch
>
>
> If work-preserving recovery is enabled, the NM will not start up if the state 
> store does not initialise. However, if the state store becomes unavailable 
> after that for any reason, the NM will not go unhealthy. 
> Since the state store is not available, new containers can no longer be 
> started, and the NM should become unhealthy:
> {code}
> AMLauncher: Error launching appattempt_1508806289867_268617_01. Got 
> exception: org.apache.hadoop.yarn.exceptions.YarnException: 
> java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at o.a.h.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:721)
> ...
> Caused by: java.io.IOException: org.iq80.leveldb.DBException: IO error: 
> /dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
> Read-only file system
> at 
> o.a.h.y.s.n.r.NMLeveldbStateStoreService.storeApplication(NMLeveldbStateStoreService.java:374)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:848)
> at 
> o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:712)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-02 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307944#comment-16307944
 ] 

Gergo Repas commented on YARN-2185:
---

Thanks [~miklos.szeg...@cloudera.com] for the patch, I like this improvement. I 
have a couple of comments and questions:
In {{FileUtil.runCommandOnStream()}}:
# the closing of {{process.getOutputStream()}} will not happen if there is an 
exception in the first {{org.apache.commons.io.IOUtils.copy(inputStream, 
process.getOutputStream());}} call.
# The process's output stream may be closed before IOUtils.toString() has a 
chance to read from it on the executor thread.
# The standard error stream is not closed.
# {{org.apache.commons.io.IOUtils.copy(inputStream, 
process.getOutputStream());}} appears twice: once before the process.waitFor() 
call and once after - what is the reason for the second call?

In {{RunJar.unJarAndSave()}} there is no need to use multiple try blocks; a 
single try-with-resources can handle multiple Closeables.
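
As an aside, here is a minimal sketch (plain Java using only the JDK, 
illustrative names, not the actual FileUtil.runCommandOnStream code) of the 
stream-handling pattern suggested above: the child's stdin and stderr are 
managed by a single try-with-resources, stderr is drained on a separate thread, 
and both streams are closed even when the copy throws.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Illustrative-only sketch of piping an InputStream into an external command. */
public final class PipeToProcess {
  private PipeToProcess() {
  }

  static int pipeTo(InputStream input, String... command) throws Exception {
    Process process = new ProcessBuilder(command).start();
    ExecutorService pool = Executors.newSingleThreadExecutor();
    try (OutputStream stdin = process.getOutputStream();
         InputStream stderr = process.getErrorStream()) {
      // Drain stderr concurrently so the child cannot block on a full pipe.
      Future<String> errText = pool.submit(() ->
          new String(stderr.readAllBytes(), StandardCharsets.UTF_8));
      input.transferTo(stdin); // copy the archive bytes into the child
      stdin.close();           // signal EOF before waiting for the child
      int exitCode = process.waitFor();
      if (exitCode != 0) {
        throw new IOException("command failed: " + errText.get());
      }
      return exitCode;
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}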

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307905#comment-16307905
 ] 

genericqa commented on YARN-6894:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6894 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904197/YARN-6894.003.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 6106fde023d7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fe6f83 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19067/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch, 
> YARN-6894.003.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING).  This behavior is inconsistent with the API.  If there is 
> a sound reason behind this, it should be documented and it seems like there 
> might be as the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types

2018-01-02 Thread lovekesh bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lovekesh bansal updated YARN-5589:
--
Attachment: YARN-5589_trunk.001.patch

Uploading a patch with the code changes. [~sunilg], can you please review? I'll 
change the test cases accordingly. Thanks.

> Update CapacitySchedulerConfiguration minimum and maximum calculations to 
> consider all resource types
> -
>
> Key: YARN-5589
> URL: https://issues.apache.org/jira/browse/YARN-5589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
> Attachments: YARN-5589_trunk.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307873#comment-16307873
 ] 

lujie edited comment on YARN-7663 at 1/2/18 11:22 AM:
--

I restudied the related code and I think it is OK to just ignore the START 
event. I have attached a simple patch without a unit test; the patch is based 
on 2.8.3.
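
For context, a minimal, self-contained illustration of what "ignoring" an event 
usually looks like with Hadoop's 
{{org.apache.hadoop.yarn.state.StateMachineFactory}} (toy states and events 
below, not the real RMAppImpl topology): a self-transition registered without a 
hook, so the event is accepted and dropped instead of raising 
InvalidStateTransitionException.

{code:java}
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;

/** Illustrative-only toy state machine; the real fix would belong in RMAppImpl. */
public class IgnoreEventDemo {
  enum State { KILLED }
  enum EventType { START }

  private static final StateMachineFactory<IgnoreEventDemo, State, EventType, String>
      FACTORY =
          new StateMachineFactory<IgnoreEventDemo, State, EventType, String>(State.KILLED)
              // Self-transition with no hook: the event is silently ignored.
              .addTransition(State.KILLED, State.KILLED, EventType.START)
              .installTopology();

  private final StateMachine<State, EventType, String> stateMachine =
      FACTORY.make(this);

  public static void main(String[] args) throws Exception {
    IgnoreEventDemo demo = new IgnoreEventDemo();
    demo.stateMachine.doTransition(EventType.START, "start-event");
    System.out.println("state after START: " + demo.stateMachine.getCurrentState());
  }
}
{code}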


was (Author: xiaoheipangzi):
I restudy the related code, I think just ignore the START event is ok. I attach 
a simple patch without unit test.

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
>  Labels: patch
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7693) ContainersMonitor support configurable

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307900#comment-16307900
 ] 

genericqa commented on YARN-7693:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
50s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904186/YARN-7693.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| 

[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Fix Version/s: (was: 2.8.4)

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
>  Labels: patch
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: (was: YARN-7663.patch)

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
>  Labels: patch
> Fix For: 2.8.4
>
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: YARN-7663.patch

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
>  Labels: patch
> Fix For: 2.8.4
>
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: YARN-7663.patch

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: (was: YARN-7663.patch)

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
> Attachments: YARN-7663.patch
>
>
> Send kill to application, the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> if a sleep is inserted before the point where the START event is created, this 
> bug will reproduce deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307881#comment-16307881
 ] 

genericqa commented on YARN-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 14s{color} 
| {color:red} YARN-7663 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904199/YARN-7663.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19068/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
> Attachments: YARN-7663.patch
>
>
> Send a kill to the application; the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> If a sleep is inserted before the point where the START event is created, this bug 
> reproduces deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Comment: was deleted

(was: I re-studied the related code; I think it is OK to just ignore the START event. 
I attached a simple patch without a unit test.)
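
For illustration only, here is a minimal, self-contained sketch of the idea in that 
comment: let the KILLED state silently ignore a late START event instead of raising 
InvalidStateTransitionException. This is a simplified model, not RMAppImpl's real 
state machine and not the attached patch; all class and method names below are 
invented.

{code:java}
import java.util.EnumSet;

// Simplified model (hypothetical names): the app is killed before its START event is
// dispatched; the KILLED state swallows the late START instead of throwing.
public class IgnoreLateStartSketch {
  enum State { NEW, RUNNING, KILLED }
  enum Event { START, KILL }

  // Events that the KILLED state silently ignores.
  static final EnumSet<Event> IGNORED_AT_KILLED = EnumSet.of(Event.START);

  static State handle(State current, Event event) {
    if (current == State.KILLED && IGNORED_AT_KILLED.contains(event)) {
      return current;                                   // stay KILLED, no exception
    }
    if (current == State.NEW && event == Event.START)    return State.RUNNING;
    if (current == State.NEW && event == Event.KILL)     return State.KILLED;
    if (current == State.RUNNING && event == Event.KILL) return State.KILLED;
    throw new IllegalStateException("Invalid event: " + event + " at " + current);
  }

  public static void main(String[] args) {
    State s = State.NEW;
    s = handle(s, Event.KILL);   // the kill wins the race and is processed first
    s = handle(s, Event.START);  // the late START is now ignored instead of throwing
    System.out.println("final state: " + s);            // prints: final state: KILLED
  }
}
{code}

In RMAppImpl the same effect would presumably be achieved by adding 
RMAppEventType.START to one of the existing no-op transitions out of KILLED, but that 
is only a guess at what the deleted patch did.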

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
> Attachments: YARN-7663.patch
>
>
> Send a kill to the application; the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> If a sleep is inserted before the point where the START event is created, this bug 
> reproduces deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7663) RMAppImpl:Invalid event: START at KILLED

2018-01-02 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-7663:

Attachment: YARN-7663.patch

> RMAppImpl:Invalid event: START at KILLED
> 
>
> Key: YARN-7663
> URL: https://issues.apache.org/jira/browse/YARN-7663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: lujie
>Priority: Minor
> Attachments: YARN-7663.patch
>
>
> Send a kill to the application; the RM log shows:
> {code:java}
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> START at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:805)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:116)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:901)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:885)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> If a sleep is inserted before the point where the START event is created, this bug 
> reproduces deterministically. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2018-01-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307871#comment-16307871
 ] 

Gergely Novák commented on YARN-6894:
-

Updated the explanation according to [~sunilg]'s suggestion.

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch, 
> YARN-6894.003.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING). This behavior is inconsistent with the rest of the API. If there 
> is a sound reason behind it, it should be documented; there may well be one, since 
> the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API
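
To make the inconsistency concrete, a pair of hypothetical requests against the 
documented endpoint (host and port are placeholders; the state lists reflect the 
behavior described above):

{code}
# Filters by user, returns apps in every state (RUNNING, FINISHED, KILLED, ...):
GET http://<rm-host>:8088/ws/v1/cluster/apps?user=alice

# Filters by queue, but silently returns only "active" apps
# (NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING):
GET http://<rm-host>:8088/ws/v1/cluster/apps?queue=default
{code}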



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6894) RM Apps API returns only active apps when query parameter queue used

2018-01-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-6894:

Attachment: YARN-6894.003.patch

> RM Apps API returns only active apps when query parameter queue used
> 
>
> Key: YARN-6894
> URL: https://issues.apache.org/jira/browse/YARN-6894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Reporter: Grant Sohn
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-6894.001.patch, YARN-6894.002.patch, 
> YARN-6894.003.patch
>
>
> If you run RM's Cluster Applications API with no query parameters, you get a 
> list of apps.
> If you run RM's Cluster Applications API with any query parameters other than 
> "queue" you get the list of apps with the parameter filters being applied.
> However, when you use the "queue" query parameter, you only see the 
> applications that are active in the cluster (NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING). This behavior is inconsistent with the rest of the API. If there 
> is a sound reason behind it, it should be documented; there may well be one, since 
> the mapred queue CLI behaves similarly.
> http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307846#comment-16307846
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
51s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
39s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 23s{color} | {color:orange} root: The patch generated 87 new + 1481 
unchanged - 15 fixed = 1568 total (was 1496) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 22s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  1s{color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
2s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:gr

[jira] [Updated] (YARN-7693) ContainersMonitor support configurable

2018-01-02 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7693:

Attachment: YARN-7693.002.patch

fix TestYarnConfigurationFields error

> ContainersMonitor support configurable
> --
>
> Key: YARN-7693
> URL: https://issues.apache.org/jira/browse/YARN-7693
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Minor
> Attachments: YARN-7693.001.patch, YARN-7693.002.patch
>
>
> Currently ContainersMonitor has only one default implementation, 
> ContainersMonitorImpl.
> After the introduction of Opportunistic Containers, ContainersMonitor needs to 
> monitor system metrics and even dynamically adjust Opportunistic and Guaranteed 
> resources in the cgroup, so another ContainersMonitor implementation may be needed.
> The current ContainerManagerImpl instantiates ContainersMonitorImpl directly with 
> new, so ContainersMonitor needs to be made configurable.
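
As a rough illustration of what "configurable" could look like, here is the usual 
Configuration.getClass / ReflectionUtils.newInstance pattern used elsewhere in Hadoop. 
The property name is invented, and the real ContainersMonitorImpl constructor takes 
NodeManager-side arguments (executor, dispatcher, context), so an actual patch would 
have to pass those through rather than rely on this no-arg instantiation; treat this 
as a sketch of the pattern only, not the attached patch:

{code:java}
// Hypothetical sketch; "yarn.nodemanager.containers-monitor.class" is an invented key.
Class<? extends ContainersMonitor> monitorClass = conf.getClass(
    "yarn.nodemanager.containers-monitor.class",
    ContainersMonitorImpl.class,
    ContainersMonitor.class);
ContainersMonitor containersMonitor = ReflectionUtils.newInstance(monitorClass, conf);
{code}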



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org