[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221282#comment-15221282
 ] 

Varun Vasudev commented on YARN-4857:
-

Pushed revert to trunk and branch-2. Confirmed that TestYarnConfiguration 
passes after revert.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.
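For readers unfamiliar with the file, a minimal sketch of the kind of entry being added to yarn-default.xml; the property name is one of the {{yarn.resourcemanager.monitor.*}} keys, while the value and description shown here are illustrative rather than the committed defaults:

{code}
<!-- Sketch only: one preemption-monitor property as it could appear in
     yarn-default.xml. The value and description are illustrative. -->
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
  <value>3000</value>
  <description>Time in milliseconds between invocations of the
    CapacityScheduler preemption monitor.</description>
</property>
{code}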





[jira] [Commented] (YARN-4908) Move preemption configurations of CapacityScheduler to YarnConfiguration

2016-03-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221277#comment-15221277
 ] 

Sunil G commented on YARN-4908:
---

I think it's better suited in the CS configuration XML, so the test case can be 
changed properly. Cc [~leftnoteasy]

> Move preemption configurations of CapacityScheduler to YarnConfiguration
> 
>
> Key: YARN-4908
> URL: https://issues.apache.org/jira/browse/YARN-4908
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>
> In YARN-4857, the preemption configurations of the CapacityScheduler are written in 
> yarn-default.xml. Since TestYarnConfigurationFields checks the fields in 
> yarn-default.xml against YarnConfiguration, we need to move them to 
> YarnConfiguration as well.





[jira] [Reopened] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reopened YARN-4857:
-

Re-opening issue due to failing unit tests.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221273#comment-15221273
 ] 

Kai Sasaki commented on YARN-4857:
--

Sure, I'll do that accordingly.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221271#comment-15221271
 ] 

Varun Vasudev commented on YARN-4857:
-

I'm fine with that. [~lewuathe] - can you upload a new patch to this ticket 
with the yarn-default.xml changes and the refactoring to move the variables 
into YarnConfiguration?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221265#comment-15221265
 ] 

Brahma Reddy Battula commented on YARN-4857:


IMO, can we revert this and update the patch here?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Created] (YARN-4908) Move preemption configurations of CapacityScheduler to YarnConfiguration

2016-03-31 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-4908:


 Summary: Move preemption configurations of CapacityScheduler to 
YarnConfiguration
 Key: YARN-4908
 URL: https://issues.apache.org/jira/browse/YARN-4908
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor


In YARN-4857, the preemption configurations of the CapacityScheduler are written in 
yarn-default.xml. Since TestYarnConfigurationFields checks the fields in 
yarn-default.xml against YarnConfiguration, we need to move them to 
YarnConfiguration as well.
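A minimal sketch of the kind of change this ticket asks for, assuming the keys keep the names already listed in yarn-default.xml; the constant names and default values below are illustrative and may differ from the eventual patch:

{code}
// Sketch only: constants of this shape would live in YarnConfiguration so that
// TestYarnConfigurationFields can match them against yarn-default.xml.
// Names and defaults here are illustrative.
public class YarnConfigurationPreemptionSketch {
  public static final String PREEMPTION_PREFIX =
      "yarn.resourcemanager.monitor.capacity.preemption.";

  public static final String PREEMPTION_MONITORING_INTERVAL =
      PREEMPTION_PREFIX + "monitoring_interval";
  public static final long DEFAULT_PREEMPTION_MONITORING_INTERVAL = 3000L;

  public static final String PREEMPTION_OBSERVE_ONLY =
      PREEMPTION_PREFIX + "observe_only";
  public static final boolean DEFAULT_PREEMPTION_OBSERVE_ONLY = false;
}
{code}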





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221263#comment-15221263
 ] 

Brahma Reddy Battula commented on YARN-4857:


Yes

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221256#comment-15221256
 ] 

Kai Sasaki commented on YARN-4857:
--

Sure, I'll do that. Thanks [~brahmareddy] and [~vvasudev].

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Comment Edited] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221249#comment-15221249
 ] 

Varun Vasudev edited comment on YARN-4857 at 4/1/16 6:39 AM:
-

Thanks for pointing out the failure [~brahmareddy]. My apologies for not 
catching it.

[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!


was (Author: vvasudev):
[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221249#comment-15221249
 ] 

Varun Vasudev commented on YARN-4857:
-

[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221230#comment-15221230
 ] 

Kai Sasaki commented on YARN-4857:
--

[~brahmareddy] Thanks for letting me know. Should we move these configurations 
to {{YarnConfiguration}}?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-03-31 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221204#comment-15221204
 ] 

Kuhu Shukla commented on YARN-4311:
---

... and the Log line during removal.

> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v13.patch, YARN-4311-v13.patch, YARN-4311-v2.patch, 
> YARN-4311-v3.patch, YARN-4311-v4.patch, YARN-4311-v5.patch, 
> YARN-4311-v6.patch, YARN-4311-v7.patch, YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing the node from the include and 
> exclude lists is not sufficient. The RM still lists it under decommissioned nodes. The 
> tricky part that [~jlowe] pointed out was the case when include lists are not 
> used; in that case we don't want the nodes to fall off if they are not active.





[jira] [Updated] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-03-31 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4311:
--
Attachment: YARN-4311-v13.patch

> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v13.patch, YARN-4311-v13.patch, YARN-4311-v2.patch, 
> YARN-4311-v3.patch, YARN-4311-v4.patch, YARN-4311-v5.patch, 
> YARN-4311-v6.patch, YARN-4311-v7.patch, YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing the node from the include and 
> exclude lists is not sufficient. The RM still lists it under decommissioned nodes. The 
> tricky part that [~jlowe] pointed out was the case when include lists are not 
> used; in that case we don't want the nodes to fall off if they are not active.





[jira] [Updated] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-03-31 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4311:
--
Attachment: YARN-4311-v13.patch

Thank you [~jlowe]! Updated patch with the one minor change in yarn-default.xml.

> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v13.patch, YARN-4311-v2.patch, YARN-4311-v3.patch, 
> YARN-4311-v4.patch, YARN-4311-v5.patch, YARN-4311-v6.patch, 
> YARN-4311-v7.patch, YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing the node from the include and 
> exclude lists is not sufficient. The RM still lists it under decommissioned nodes. The 
> tricky part that [~jlowe] pointed out was the case when include lists are not 
> used; in that case we don't want the nodes to fall off if they are not active.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221084#comment-15221084
 ] 

Brahma Reddy Battula commented on YARN-4857:


After this change, {{TestYarnConfigurationFields}} is failing, since the configs are 
present in {{CapacitySchedulerConfiguration.java}}.

https://builds.apache.org/job/Hadoop-Yarn-trunk/1950/
{noformat}
File yarn-default.xml (278 properties)

yarn-default.xml has 6 properties missing in  class 
org.apache.hadoop.yarn.conf.YarnConfiguration

  yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity
  yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill
  yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval
  yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor
  yarn.resourcemanager.monitor.capacity.preemption.observe_only
  yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round

=

class org.apache.hadoop.yarn.conf.YarnConfiguration
  (272 member variables)

class org.apache.hadoop.yarn.conf.YarnConfiguration has 0 variables missing in 
yarn-default.xml

  (None)

=
{noformat}
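For context, a rough sketch of the kind of cross-check the test performs; the real {{TestYarnConfigurationFields}} is driven by a shared base class and supports exclusions, so this is only an illustration of the idea:

{code}
// Simplified illustration: collect the String constants declared by
// YarnConfiguration; properties parsed from yarn-default.xml are then diffed
// against this set, and any key missing from it fails the test.
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

public class ConfigFieldCheckSketch {
  static Set<String> stringConstantsOf(Class<?> clazz) throws IllegalAccessException {
    Set<String> values = new HashSet<>();
    for (Field f : clazz.getDeclaredFields()) {
      if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
        f.setAccessible(true);
        values.add((String) f.get(null)); // the constant's value, e.g. a property key
      }
    }
    return values;
  }

  public static void main(String[] args) throws Exception {
    Set<String> declared =
        stringConstantsOf(org.apache.hadoop.yarn.conf.YarnConfiguration.class);
    System.out.println("YarnConfiguration declares " + declared.size() + " String constants");
  }
}
{code}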

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Commented] (YARN-4183) Clarify the behavior of timeline service config properties

2016-03-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221070#comment-15221070
 ] 

Naganarasimha G R commented on YARN-4183:
-

Thanks for the review and commit [~sjlee0], and for the reviews from 
[~jeagles], [~mitdesai], [~xgong], [~vinodkv], [~gtCarrera9] & [~varun_saxena].
I will cross-check the 2.6 branch in a while.

> Clarify the behavior of timeline service config properties
> --
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Naganarasimha G R
> Fix For: 2.8.0, 2.7.3, 2.9.0
>
> Attachments: YARN-4183.1.patch, YARN-4183.v1.001.patch, 
> YARN-4183.v1.002.patch
>
>
> Configurations *"yarn.timeline-service.enabled"* and 
> *"yarn.timeline-service.client.best-effort"* are not captured better. 
> Currently if the client doesn't want the tokens to be generated for the 
> timeline service they can set "yarn.timeline-service.enabled" to false and/or 
> "yarn.timeline-service.client.best-effort" to true so that even if the ATS is 
> down jobs can continue to get submitted. This functionality is not properly 
> documented, so as part of this jira we try to document and clarify these 
> configurations.
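For reference, a hedged sketch of a client-side configuration using the two properties named above; whether these exact values fit a given deployment is up to the reader:

{code}
<!-- Sketch only: either disabling the timeline service for the client, or
     keeping it enabled but marking it best-effort, lets jobs be submitted
     even when the ATS is down. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.timeline-service.client.best-effort</name>
  <value>true</value>
</property>
{code}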





[jira] [Commented] (YARN-4905) Improve Yarn log Command line option to show log metadata

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221052#comment-15221052
 ] 

Hadoop QA commented on YARN-4905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 8 new + 
64 unchanged - 1 fixed = 72 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 16s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 34s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 163m 49s {color} 

[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221051#comment-15221051
 ] 

Hadoop QA commented on YARN-4807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 1s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 8s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 123m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestRMAd

[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-03-31 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221047#comment-15221047
 ] 

Brahma Reddy Battula commented on YARN-4893:


As I mentioned earlier, the failures are unrelated. 
||TestClass||Remark||
|TestAMAuthorization|Passes locally|
|TestClientRMTokens|Passes locally|
|TestNodeLabelContainerAllocation|JIRA already exists: YARN-4890|
|TestRMWebServices|Address already in use; it's a random failure|
|TestRMWithCSRFFilter|Address already in use; it's a random failure|

If you want me to raise separate issues for {{TestRMWebServices}} and 
{{TestRMWithCSRFFilter}}, I will.

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests could fail intermittently. Also, we 
> can consider adding rm.drainEvents() within rm.registerNode(), which could be 
> more convenient.
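A hedged sketch of the test pattern being proposed, assuming the usual MockRM/MockNM helpers and a prepared YarnConfiguration named {{conf}}; the surrounding test scaffolding and imports are omitted:

{code}
// Sketch of the pattern described above: drain the dispatcher right after node
// registration so the RM has processed the event before assertions run.
@Test
public void testNodeRegistrationIsDrained() throws Exception {
  MockRM rm = new MockRM(conf);
  rm.start();
  MockNM nm1 = rm.registerNode("host1:1234", 8 * 1024);
  rm.drainEvents(); // avoids the intermittent failures discussed in YARN-998
  // ... assertions against the RM's view of the node go here ...
  rm.stop();
}
{code}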





[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221032#comment-15221032
 ] 

Hadoop QA commented on YARN-4849:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 43s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 16s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 12s 
{color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 51 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 4s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 53s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://i

[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-03-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220965#comment-15220965
 ] 

Sunil G commented on YARN-4893:
---

Hi [~brahmareddy], 
Most of the web service test cases failed because of an "address already in 
use" port bind exception. Maybe it's fine to kick Jenkins for a clean run 
again. [~djp], is that fine? 

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests could fail intermittently. Also, we 
> can consider adding rm.drainEvents() within rm.registerNode(), which could be 
> more convenient.





[jira] [Commented] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220955#comment-15220955
 ] 

Sunil G commented on YARN-4634:
---

Thank you very much for the review and commit [~leftnoteasy]! 

> Scheduler UI/Metrics need to consider cases like non-queue label mappings
> -
>
> Key: YARN-4634
> URL: https://issues.apache.org/jira/browse/YARN-4634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4634.patch, 0002-YARN-4634.patch, 
> 0003-YARN-4634.patch, 0004-YARN-4634.patch, 0005-YARN-4634.patch
>
>
> Currently, when label-queue mappings are not available, there are a few 
> assumptions made in the UI and in metrics.
> In the above case, where labels are enabled and available in the cluster but without 
> any queue mappings, the UI displays queues under labels. This is not correct.
> Currently only the labels-enabled check and the availability of labels are considered to 
> render the scheduler UI. Hence we also need to check whether 
> - queue mappings are available
> - nodes are mapped with labels with the proper exclusivity flags on
> This ticket will also look at the default configurations in the queue when 
> labels are not mapped. 





[jira] [Created] (YARN-4907) Make all MockRM#waitForState consistent.

2016-03-31 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-4907:
--

 Summary: Make all MockRM#waitForState consistent. 
 Key: YARN-4907
 URL: https://issues.apache.org/jira/browse/YARN-4907
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Yufei Gu
Assignee: Yufei Gu


There are some inconsistencies among the {{waitForState}} methods in {{MockRM}}:
1. Some return a boolean, some don't.
2. Some have a minimum waiting time (1000 ms), some don't. I would appreciate it if 
someone could explain why we need a minimum waiting time.
3. Some wait functions don't have a timeout; they can wait forever.
4. Some use LOG.info and others use {{System.out.println}} to print messages.
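A hedged sketch of what a unified helper could look like once these inconsistencies are ironed out; the signature and defaults below are illustrative and are not taken from MockRM:

{code}
// Illustrative sketch: bounded timeout, fixed poll interval, boolean return,
// and a single logging mechanism chosen by the caller.
public static <T> boolean waitForState(java.util.function.Supplier<T> current,
    T expected, long timeoutMs, long pollIntervalMs) throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (!expected.equals(current.get())) {
    if (System.currentTimeMillis() > deadline) {
      return false; // the caller decides whether a timeout fails the test
    }
    Thread.sleep(pollIntervalMs);
  }
  return true;
}
{code}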





[jira] [Updated] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-03-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4807:
---
Attachment: YARN-4807.006.patch

Hi [~ka...@cloudera.com] and [~templedf], I made some changes to the loop 
according to [~ka...@cloudera.com]'s comment, and I will create a follow-up JIRA 
for all the inconsistencies. Patch 006 is uploaded. Would you please take a look? 
Thanks.

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.





[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220931#comment-15220931
 ] 

Hadoop QA commented on YARN-4849:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/10923/console in case of 
problems.


> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch
>
>






[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Attachment: YARN-4849-YARN-3368.4.patch

Attached ver.4 patch

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch
>
>






[jira] [Updated] (YARN-4906) Capture container start/finish time in container metrics

2016-03-31 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4906:
--
Attachment: YARN-4906.1.patch

Uploaded a patch which adds the start/finish time and exit code to the container metrics.
It is done outside of ContainerMonitor, because I think these kinds of metrics 
should not depend on whether the resource-calculator plugin or the vmem/pmem 
flags are enabled.
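A conceptual sketch of the metrics being added, kept independent of the resource-calculator plugin as described above; the class and method names are illustrative, not the actual ContainerMetrics API:

{code}
// Sketch only: record start/finish time and exit code for a container,
// regardless of whether vmem/pmem monitoring is enabled.
public class ContainerLifecycleMetricsSketch {
  private long startTimeMs = -1;
  private long finishTimeMs = -1;
  private int exitCode = -1;

  public void onContainerStarted() {
    startTimeMs = System.currentTimeMillis();
  }

  public void onContainerFinished(int containerExitCode) {
    finishTimeMs = System.currentTimeMillis();
    exitCode = containerExitCode;
  }

  public long runDurationMs() {
    return (startTimeMs >= 0 && finishTimeMs >= 0) ? finishTimeMs - startTimeMs : -1;
  }

  public int getExitCode() {
    return exitCode;
  }
}
{code}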

> Capture container start/finish time in container metrics
> 
>
> Key: YARN-4906
> URL: https://issues.apache.org/jira/browse/YARN-4906
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4906.1.patch
>
>






[jira] [Created] (YARN-4906) Capture container start/finish time in container metrics

2016-03-31 Thread Jian He (JIRA)
Jian He created YARN-4906:
-

 Summary: Capture container start/finish time in container metrics
 Key: YARN-4906
 URL: https://issues.apache.org/jira/browse/YARN-4906
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He








[jira] [Updated] (YARN-3742) YARN RM will shut down if ZKClient creation times out

2016-03-31 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-3742:
---
Assignee: Daniel Templeton

> YARN RM  will shut down if ZKClient creation times out 
> ---
>
> Key: YARN-3742
> URL: https://issues.apache.org/jira/browse/YARN-3742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Daniel Templeton
>
> The RM goes down with the following stack trace if the ZK client connection 
> fails to be created. We should not exit, but instead transition to standby, stop 
> doing work, and let the other RM take over.
> {code}
> 2015-04-19 01:22:20,513  FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a 
> org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type 
> STATE_STORE_OP_FAILED. Cause:
> java.io.IOException: Wait for ZKClient creation timed out
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1066)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1090)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:996)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationStateInternal(ZKRMStateStore.java:643)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:162)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:147)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:879)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:874)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (YARN-4905) Improve Yarn log Command line option to show log metadata

2016-03-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4905:

Attachment: YARN-4905.1.patch

> Improve Yarn log Command line option to show log metadata
> -
>
> Key: YARN-4905
> URL: https://issues.apache.org/jira/browse/YARN-4905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4905.1.patch
>
>
> Improve the YARN log command line to have an "ls" command which can list the 
> containers for which we have logs and the files within each container, along 
> with their file sizes.





[jira] [Created] (YARN-4905) Improve Yarn log Command line option to show log metadata

2016-03-31 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-4905:
---

 Summary: Improve Yarn log Command line option to show log metadata
 Key: YARN-4905
 URL: https://issues.apache.org/jira/browse/YARN-4905
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong


Improve the YARN log command line to have an "ls" command which can list the containers 
for which we have logs and the files within each container, along with their file sizes.
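A hedged sketch of what such an "ls" mode could do under the hood, assuming the conventional aggregated-log layout of {remote-log-root}/{user}/{suffix}/{appId}; the path and names used here are illustrative, not the proposed implementation:

{code}
// Sketch only: list the per-node aggregated log files for one application,
// with their sizes, using standard Hadoop FileSystem calls.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogLsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative path; the real remote log dir and suffix come from the config.
    Path appLogDir = new Path("/tmp/logs/someuser/logs/application_1459000000000_0001");
    FileSystem fs = appLogDir.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(appLogDir)) {
      // Each file is typically one node's aggregated log for this application.
      System.out.println(status.getPath().getName() + "\t" + status.getLen() + " bytes");
    }
  }
}
{code}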





[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220838#comment-15220838
 ] 

Hadoop QA commented on YARN-4849:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 19s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 27s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 43s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 38s 
{color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 235 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red} 0m 2s {color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 24s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 6s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 17 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.net.TestClusterTopology |
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryCh

[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-03-31 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220836#comment-15220836
 ] 

Junping Du commented on YARN-4893:
--

Thanks [~brahmareddy] for updating the patch. The v3 patch LGTM. However, can you 
check whether the failed tests are related to the patch? If not, do we have JIRAs to 
track them?

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests could fail intermittently. Also, we 
> can consider adding rm.drainEvents() within rm.registerNode(), which could be 
> more convenient.





[jira] [Updated] (YARN-4842) yarn logs command should not require the appOwner argument

2016-03-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4842:

Issue Type: Sub-task  (was: Bug)
Parent: YARN-4904

> yarn logs command should not require the appOwner argument
> --
>
> Key: YARN-4842
> URL: https://issues.apache.org/jira/browse/YARN-4842
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ram Venkatesh
>Assignee: Ram Venkatesh
> Attachments: YARN-4842.1.patch, YARN-4842.2.patch
>
>
> The yarn logs command is among the most common ways to troubleshoot yarn app 
> failures, especially by an admin.
> Currently if you run the command as a user different from the job owner, the 
> command will fail with a subtle message that it could not find the app under 
> the running user's name. This can be confusing especially to new admins.
> We can figure out the job owner from the app report returned by the RM or the 
> AHS, or by looking for the app directory using a glob pattern, so in most 
> cases this error can be avoided.
> Question - are there scenarios where users will still need to specify the 
> -appOwner option?
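
For illustration, a hedged sketch of how the owner could be inferred instead of 
being required on the command line, using the standard YarnClient API (the actual 
patch may consult the AHS or glob the aggregated-log directory instead):

{code}
// Look up the application report and take the owner from it, so the user does
// not have to pass -appOwner explicitly.
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(conf);
yarnClient.start();
ApplicationReport report = yarnClient.getApplicationReport(appId);
String appOwner = report.getUser();
{code}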



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4904) YARN Log tooling enhancement

2016-03-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4904:

Issue Type: Improvement  (was: Task)

> YARN Log tooling enhancement
> 
>
> Key: YARN-4904
> URL: https://issues.apache.org/jira/browse/YARN-4904
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4904) YARN Log tooling enhancement

2016-03-31 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-4904:
---

 Summary: YARN Log tooling enhancement
 Key: YARN-4904
 URL: https://issues.apache.org/jira/browse/YARN-4904
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4903) Document how to pass ReservationId through the RM REST API

2016-03-31 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4903:
-
Description: YARN-4625 added the reservation-id field to the RM 
submitApplication REST API. This JIRA is to add the corresponding 
documentation.  (was: YARN-4265 added the reservation-id field to the RM 
submitApplication REST API. This JIRA is to add the corresponding 
documentation.)

> Document how to pass ReservationId through the RM REST API
> --
>
> Key: YARN-4903
> URL: https://issues.apache.org/jira/browse/YARN-4903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> YARN-4625 added the reservation-id field to the RM submitApplication REST 
> API. This JIRA is to add the corresponding documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220767#comment-15220767
 ] 

Hadoop QA commented on YARN-4390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 28 new + 484 unchanged - 15 fixed = 512 total (was 499) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 34s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_95
 with JDK v1.7.0_95 generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 41s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 16s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 147m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/

[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220758#comment-15220758
 ] 

Hadoop QA commented on YARN-4390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 28 new + 484 unchanged - 15 fixed = 512 total (was 499) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 28s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_95
 with JDK v1.7.0_95 generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 28s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/

[jira] [Commented] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220761#comment-15220761
 ] 

Hudson commented on YARN-4634:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9537/])
YARN-4634. Scheduler UI/Metrics need to consider cases like non-queue (wangda: 
rev 12b11e2e688158404feeb3ded37eb6cccad4ea5c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java


> Scheduler UI/Metrics need to consider cases like non-queue label mappings
> -
>
> Key: YARN-4634
> URL: https://issues.apache.org/jira/browse/YARN-4634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4634.patch, 0002-YARN-4634.patch, 
> 0003-YARN-4634.patch, 0004-YARN-4634.patch, 0005-YARN-4634.patch
>
>
> Currently, when label-queue mappings are not available, a few assumptions 
> are made in the UI and in metrics.
> In the case where labels are enabled and available in the cluster but have no 
> queue mappings, the UI displays queues under labels. This is not correct.
> Currently the labels-enabled check and the availability of labels are considered 
> when rendering the scheduler UI. Henceforth we also need to check whether 
> - queue mappings are available
> - nodes are mapped to labels with the proper exclusivity flags on
> This ticket will also look at the default configurations of a queue when 
> labels are not mapped. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4811) Generate histograms in ContainerMetrics for actual container resource usage

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220760#comment-15220760
 ] 

Hudson commented on YARN-4811:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9537/])
YARN-4811. Generate histograms in ContainerMetrics for actual container 
(jianhe: rev 0dd9bcab97ccdf24a2174636604110b74664cf80)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/QuantileEstimator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleQuantiles.java


> Generate histograms in ContainerMetrics for actual container resource usage
> ---
>
> Key: YARN-4811
> URL: https://issues.apache.org/jira/browse/YARN-4811
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 2.9.0
>
> Attachments: YARN-4811.001.patch, YARN-4811.002.patch
>
>
> The ContainerMetrics class stores some details about actual container 
> resource usage. It would be useful to generate histograms for the actual 
> resource usage as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4678) Cluster used capacity is > 100 when container reserved

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220725#comment-15220725
 ] 

Wangda Tan commented on YARN-4678:
--

[~sunilg],

Thanks for uploading the test case; however, it doesn't demonstrate that the 
maximum resource of the queue (or the cluster's total resource) will be violated:
the total cluster resource in your test case is 16G, and the queue only uses 8+G.

IIUC, what [~brahmareddy] mentioned in the JIRA is that the cluster's total used 
capacity could be more than 100%.

Thanks,
Wangda

> Cluster used capacity is > 100 when container reserved 
> ---
>
> Key: YARN-4678
> URL: https://issues.apache.org/jira/browse/YARN-4678
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Sunil G
> Attachments: 0001-YARN-4678.patch, 0002-YARN-4678.patch, 
> 0003-YARN-4678-testcase.patch, 0003-YARN-4678.patch, 
> reservedCapInClusterMetrics.png
>
>
>  *Scenario:* 
> * Start a cluster with three NMs, each having 8GB (cluster memory: 24GB).
> * Configure queues with elasticity and userlimitfactor=10.
> * Disable preemption.
> * Run two jobs with different priorities in different queues at the same time
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=LOW 
> -Dmapreduce.job.queuename=QueueA -Dmapreduce.map.memory.mb=4096 
> -Dyarn.app.mapreduce.am.resource.mb=1536 
> -Dmapreduce.job.reduce.slowstart.completedmaps=1.0 10 1
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=HIGH 
> -Dmapreduce.job.queuename=QueueB -Dmapreduce.map.memory.mb=4096 
> -Dyarn.app.mapreduce.am.resource.mb=1536 3 1
> * Observe the cluster used capacity in the RM web UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4484) Available Resource calculation for a queue is not correct when used with labels

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220700#comment-15220700
 ] 

Wangda Tan commented on YARN-4484:
--

[~sunilg],

Thanks for working on this, the approach looks good.

Not sure if I missed the following test in your patch: a queue has >0 available 
resource from both the default partition and another partition; when a container is 
allocated/released, we should update the available resource for both partitions. 
Suggest adding the test if it doesn't exist yet.

> Available Resource calculation for a queue is not correct when used with 
> labels
> ---
>
> Key: YARN-4484
> URL: https://issues.apache.org/jira/browse/YARN-4484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4484.patch, 0002-YARN-4484.patch, 
> 0003-YARN-4484-v2.patch, 0003-YARN-4484.patch
>
>
> To calculate the available resource for a queue, we have to get the total 
> resource allocated for all labels in the queue compared to its usage. 
> Also address the comments given in 
> [YARN-4304-comments|https://issues.apache.org/jira/browse/YARN-4304?focusedCommentId=15064874&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15064874
>  ] given by [~leftnoteasy] for same.
> ClusterMetrics related issues will also get handled once we fix this.
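
For illustration, a hedged sketch of the per-label arithmetic implied by the 
description, using the Resources helper; the per-label lookups are illustrative 
names, not actual CSQueue methods:

{code}
// Available for the queue = sum over accessible labels of
// (resource allocated to the queue for that label - resource used for it).
Resource available = Resource.newInstance(0, 0);
for (String label : accessibleLabels) {              // illustrative collection
  Resource total = queueResourceByLabel.get(label);  // illustrative lookup
  Resource used = queueUsedByLabel.get(label);       // illustrative lookup
  Resources.addTo(available, Resources.subtract(total, used));
}
{code}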



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4811) Generate histograms in ContainerMetrics for actual container resource usage

2016-03-31 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4811:
--
Summary: Generate histograms in ContainerMetrics for actual container 
resource usage  (was: Generate histograms for actual container resource usage)

> Generate histograms in ContainerMetrics for actual container resource usage
> ---
>
> Key: YARN-4811
> URL: https://issues.apache.org/jira/browse/YARN-4811
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-4811.001.patch, YARN-4811.002.patch
>
>
> The ContainerMetrics class stores some details about actual container 
> resource usage. It would be useful to generate histograms for the actual 
> resource usage as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4699) Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to change label of a node

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220681#comment-15220681
 ] 

Wangda Tan commented on YARN-4699:
--

[~sunilg],

Thanks for working on this, the fix looks good to me. I think the following code in 
the test may not be needed:
bq.  mgr.replaceLabelsOnNode(ImmutableMap.of(nm1.getNodeId(), toSet("z")));

IIRC, if we do:
bq. cs.handle(new 
NodeLabelsUpdateSchedulerEvent(ImmutableMap.of(nm1.getNodeId(), ..
Labels-related resources should be updated synchronously.
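
In other words, a hedged sketch of what the test could keep, reusing the names from 
the snippet above (toSet is the existing test helper), with the extra 
replaceLabelsOnNode call dropped:

{code}
// Updating labels through the scheduler event alone should be enough; the
// label-related resources are recomputed synchronously when the event is handled.
cs.handle(new NodeLabelsUpdateSchedulerEvent(
    ImmutableMap.of(nm1.getNodeId(), toSet("z"))));
{code}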

> Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to 
> change label of a node
> 
>
> Key: YARN-4699
> URL: https://issues.apache.org/jira/browse/YARN-4699
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: 0001-YARN-4699.patch, 0002-YARN-4699.patch, 
> AfterAppFInish-LabelY-Metrics.png, ForLabelX-AfterSwitch.png, 
> ForLabelY-AfterSwitch.png
>
>
> Scenario is as follows:
> a. 2 nodes are available in the cluster (node1 with label "x", node2 with 
> label "y")
> b. Submit an application to node1 for label "x". 
> c. Change node1 label to "y" by using *replaceLabelsOnNode* command.
> d. Verify Scheduler UI for metrics such as "Used Capacity", "Absolute 
> Capacity" etc. "x" still shows some capacity.
> e. Change node1 label back to "x" and verify UI and REST o/p
> Output:
> 1. "Used Capacity", "Absolute Capacity" etc are not decremented once labels 
> is changed for a node.
> 2. UI tab for respective label shows wrong GREEN color in these cases.
> 3. REST o/p is wrong for each label after executing above scenario.
> Attaching screen shots also. This ticket will try to cover UI and REST o/p 
> fix when label is changed runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220679#comment-15220679
 ] 

Hadoop QA commented on YARN-4849:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/10920/console in case of 
problems.


> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Attachment: YARN-4849-YARN-3368.3.patch

Attached ver.3 patch.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220654#comment-15220654
 ] 

Hadoop QA commented on YARN-4849:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 53s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
4s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 54s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 54s 
{color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
33s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
27s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 28s 
{color} | {color:green} YARN-3368 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
23s {color} | {color:green} YARN-3368 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 6m 7s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-ui in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 7m 53s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 15s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 15s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 55s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 55s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 31s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 38s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 13s 
{color} | {color:red} The applied patch generated 552 new + 98 unchanged - 0 
fixed = 650 total (was 98) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 235 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red} 0m 3s {color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 22s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 7s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 43s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 33s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {

[jira] [Commented] (YARN-4639) Remove dead code in TestDelegationTokenRenewer added in YARN-3055

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220608#comment-15220608
 ] 

Hudson commented on YARN-4639:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9536 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9536/])
YARN-4639. Remove dead code in TestDelegationTokenRenewer added in (rkanter: 
rev 7a021471c376ce846090fbd1a315266bada048d4)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java


> Remove dead code in TestDelegationTokenRenewer added in YARN-3055
> -
>
> Key: YARN-4639
> URL: https://issues.apache.org/jira/browse/YARN-4639
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4639.001.patch
>
>
> Remove lines 1093-1094:
> {code}
> //MyFS fs = (MyFS)FileSystem.get(conf);
> //MyToken token1 = fs.getDelegationToken("user123");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4639) Remove dead code in TestDelegationTokenRenewer added in YARN-3055

2016-03-31 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220587#comment-15220587
 ] 

Robert Kanter commented on YARN-4639:
-

+1

> Remove dead code in TestDelegationTokenRenewer added in YARN-3055
> -
>
> Key: YARN-4639
> URL: https://issues.apache.org/jira/browse/YARN-4639
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4639.001.patch
>
>
> Remove lines 1093-1094:
> {code}
> //MyFS fs = (MyFS)FileSystem.get(conf);
> //MyToken token1 = fs.getDelegationToken("user123");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4595) Add support for configurable read-only mounts

2016-03-31 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220585#comment-15220585
 ] 

Vinod Kumar Vavilapalli commented on YARN-4595:
---

bq. What's preventing users from mounting files and file systems they shouldn't 
have access to?
If we just restrict ourselves to accessing distributed-cache files inside a 
docker container, we can simply inherit the permission model that we already 
have there - essentially you cannot mount files that you don't already have 
access to in the dist-cache and the remote FS.



> Add support for configurable read-only mounts
> -
>
> Key: YARN-4595
> URL: https://issues.apache.org/jira/browse/YARN-4595
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-4595.1.patch, YARN-4595.2.patch
>
>
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container.  We could allow 
> the user to set a list of mounts in the environment of ContainerLaunchContext 
> (e.g. /dir1:/targetdir1,/dir2:/targetdir2).  These would be mounted read-only 
> to the specified target locations.
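
For illustration, a hedged sketch of how an AM might pass such a mount list; the 
environment key name below is hypothetical, the real key is whatever the patch 
defines:

{code}
// Hypothetical key: pass read-only host mounts to the container runtime via the
// ContainerLaunchContext environment. containerLaunchContext is the app's
// existing ContainerLaunchContext.
Map<String, String> env = new HashMap<String, String>();
env.put("READONLY_MOUNTS", "/dir1:/targetdir1,/dir2:/targetdir2"); // illustrative key name
containerLaunchContext.setEnvironment(env);
{code}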



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4390) Consider container request size during CS preemption

2016-03-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4390:
-
Attachment: YARN-4390.2.patch

Re-attached patch to kick Jenkins

> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, 
> YARN-4390.1.patch, YARN-4390.2.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4390) Consider container request size during CS preemption

2016-03-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4390:
-
Attachment: (was: YARN-4390.2.patch)

> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, 
> YARN-4390.1.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4879) Proposal for a simple (delta) allocate protocol

2016-03-31 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220482#comment-15220482
 ] 

Vinod Kumar Vavilapalli commented on YARN-4879:
---

BTW, for the federation-related issue, does the client library need to always 
generate these IDs? How does that interact with application-generated IDs?

> Proposal for a simple (delta) allocate protocol
> ---
>
> Key: YARN-4879
> URL: https://issues.apache.org/jira/browse/YARN-4879
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: SimpleAllocateProtocolProposal-v1.pdf
>
>
> For legacy reasons, the current allocate protocol expects expanded requests 
> which represent the cumulative request for any change in resource 
> constraints. This is not only very difficult to comprehend but makes it 
> impossible for the scheduler to associate container allocations to the 
> original requests. This problem is amplified by the fact that the expansion 
> is managed by the AMRMClient which makes it cumbersome for non-Java clients 
> as they all have to replicate the non-trivial logic. In this JIRA, we are 
> proposing a delta allocate protocol where the AM will need to only specify 
> changes in resource constraints.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4879) Proposal for a simple (delta) allocate protocol

2016-03-31 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220475#comment-15220475
 ] 

Vinod Kumar Vavilapalli commented on YARN-4879:
---

Tx for the doc, [~subru] and [~asuresh]! +1 overall for a unique identifier.

h4. Comments on your doc

 - I'd rather call it "an enhancement to identify requests explicitly" instead 
of "simple (delta) allocate protocol". We used to use the phrase "delta 
protocol" in a slightly different context - see YARN-110.
 - bq. The RM will attempt to allocate containers in decreasing sequence number 
order,
Why are we putting priority semantics onto the ID? We should just follow the 
existing priority ordering.
 - bq.  In our proposal, we could potentially have requests for each container 
at worst case. 
It is both network / memory overhead as well as scheduler's CPU time. Till we 
move off to global scheduling completely, we should be cautious about this. Of 
course, by inverting the ResourceRequest and still keying by ResourceName in 
the API, we are limiting the total entries to be of the order of the 
cluster-size.
I already suggested on YARN-1547 that we also have an upper limit on the total 
number of requests - see 
[here|https://issues.apache.org/jira/browse/YARN-1547?focusedCommentId=15218681&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15218681].
 But I strongly suggest that we have additional limits on the total number of 
IDs that can be used - this will fit our narrative at YARN-4902 too.

h4. Comments from YARN-4902

Copy-edit-pasting here a few comments that we posted in the document for 
YARN-4902 and that I think were not laid out in the doc explicitly. We were 
calling it Allocation-ID there; I guess I now like Request-ID better. If some 
or all of them make sense, you can add them to your doc.
 - *Scope*: This ID is a unique identifier for different ResourceRequests from 
the *same application* - essentially IDs can conflict across applications.
 - *Generation*: The application should simply generate a unique identifier 
within the application - if not the client-libraries can do so if desired by 
the application.
 - *Non-binding nature*: Applications can continue to completely ignore the 
returned Allocation-ID in the response and use the allocation for any of their 
outstanding requests
 - *Responses*: The scheduler may return multiple responses corresponding to 
the same Allocation-ID - as and when scheduler returns allocations
 - *Deeper details on updates*: Similar to the current API, an update of only 
selected fields against a previously existing Allocation-ID will only update 
the object (as opposed to replacing it). For example, say a ResourceRequest first 
gets created with Allocation-ID "76589" and with _"host: *"_. A future 
ResourceRequest with the same Allocation-ID but with contents _“rack05: 10”_ 
will only append the rack information to the existing object. This is how one 
can replace parts of an object and is similar to how the existing 
per-record-deltas based protocol works (see the sketch after this list).
 - *Deletes*: Similarly, if one wishes to replace an entire ResourceRequest 
corresponding to a specific allocation-ID, they can simply cancel the 
corresponding ResourceRequest and submit a new one afresh.
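
For illustration, a hedged sketch of the update semantics described in the "Deeper 
details on updates" bullet; the id setter below is hypothetical and does not exist 
in today's ResourceRequest API:

{code}
// Hypothetical id field on the request, shown only to illustrate the semantics.
ResourceRequest req1 = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY, Resource.newInstance(1024, 1), 10);
req1.setAllocationRequestId(76589L);                 // hypothetical setter

ResourceRequest req2 = ResourceRequest.newInstance(
    Priority.newInstance(1), "rack05", Resource.newInstance(1024, 1), 10);
req2.setAllocationRequestId(76589L);                 // same id: appends rack info

// A first allocate call sends req1; a later call sends only req2 (the delta),
// and the scheduler merges it into the existing request keyed by the id.
{code}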

h4. Other responses
bq.  If a node local allocation is made for node N1, we can immediately lookup 
the entries for rack and ANY by using the ID key and decrement them instead of 
linearly scanning the rack/ANY entries.
+1, ID is really the logical grouping key.

bq. While making these changes, would it possible to address YARN-314 too? 
I'm okay if we can get two in a shot, but I'd caution against risking this 
effort by blowing up the size.

> Proposal for a simple (delta) allocate protocol
> ---
>
> Key: YARN-4879
> URL: https://issues.apache.org/jira/browse/YARN-4879
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: SimpleAllocateProtocolProposal-v1.pdf
>
>
> For legacy reasons, the current allocate protocol expects expanded requests 
> which represent the cumulative request for any change in resource 
> constraints. This is not only very difficult to comprehend but makes it 
> impossible for the scheduler to associate container allocations to the 
> original requests. This problem is amplified by the fact that the expansion 
> is managed by the AMRMClient which makes it cumbersome for non-Java clients 
> as they all have to replicate the non-trivial logic. In this JIRA, we are 
> proposing a delta allocate protocol where the AM will need to only specify 
> changes in resource constraints.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220435#comment-15220435
 ] 

Hadoop QA commented on YARN-4849:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/10916/console in case of 
problems.


> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, YARN-4849-YARN-3368.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-03-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Attachment: YARN-4849-YARN-3368.2.patch

Attached ver.2 patch to run Jenkins.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, YARN-4849-YARN-3368.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4726) [Umbrella] Allocation reuse for application upgrades

2016-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220400#comment-15220400
 ] 

Arun Suresh commented on YARN-4726:
---

@Wangda, agreed... I have a rough proposal which I posted on YARN-1040. Will 
flesh it out and post it here, where I think it should really belong.

> [Umbrella] Allocation reuse for application upgrades
> 
>
> Key: YARN-4726
> URL: https://issues.apache.org/jira/browse/YARN-4726
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>
> See overview doc at YARN-4692, copying the sub-section to track all related 
> efforts.
> Once auto-restart of containers is taken care of (YARN-4725), we need to 
> address what I believe is the second most important reason for service 
> containers to restart: upgrades. Once a service is running on YARN, the way 
> the container allocation lifecycle works, any time the container exits, YARN 
> will reclaim the resources. During an upgrade, with a multitude of other 
> applications running in the system, giving up and getting back resources 
> allocated to the service is hard to manage. Things like NodeLabels in YARN 
> help this cause but are not straightforward to use to address the 
> app-specific use cases.
> We need a first-class way of letting an application reuse the same 
> resource allocation for multiple launches of the processes inside the 
> container. This is done by decoupling the allocation lifecycle and the process 
> lifecycle.
> The JIRA YARN-1040 initiated this conversation. We need two things here: 
>  - (1) (Task) The ApplicationMaster should be able to use the same 
> container allocation and issue multiple startContainer requests to the 
> NodeManager.
>  - (2) (Task) To support the upgrade of the ApplicationMaster itself, 
> clients should be able to inform YARN to restart the AM within the same 
> allocation but with new bits.
> The JIRAs YARN-3417 and YARN-4470 talk about the second task above ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4896) ProportionalPreemptionPolicy needs to handle AMResourcePercentage per partition

2016-03-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4896:
--
Attachment: 0001-YARN-4896.patch

Attaching an initial version of the patch. I will add one more test case with 
label and AM container preemption in a short while.
cc/[~leftnoteasy]

> ProportionalPreemptionPolicy needs to handle AMResourcePercentage per 
> partition
> ---
>
> Key: YARN-4896
> URL: https://issues.apache.org/jira/browse/YARN-4896
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4896.patch
>
>
> In PCPP, we currently use {{getMaxAMResourcePerQueuePercent()}} to get 
> the max AM capacity for a queue, in order to save AM containers from preemption. 
> As we now support MaxAMResourcePerQueuePercent per partition, PCPP also needs 
> to handle the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Clarify the behavior of timeline service config properties

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220323#comment-15220323
 ] 

Hudson commented on YARN-4183:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9535 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9535/])
YARN-4183. Clarify the behavior of timeline service config properties (sjlee: 
rev 6d67420dbc5c6097216fa40fcec8ed626b2bae14)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Clarify the behavior of timeline service config properties
> --
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Naganarasimha G R
> Attachments: YARN-4183.1.patch, YARN-4183.v1.001.patch, 
> YARN-4183.v1.002.patch
>
>
> The configurations *"yarn.timeline-service.enabled"* and 
> *"yarn.timeline-service.client.best-effort"* are not described clearly. 
> Currently, if the client doesn't want tokens to be generated for the 
> timeline service, it can set "yarn.timeline-service.enabled" to false and/or 
> "yarn.timeline-service.client.best-effort" to true, so that jobs can continue 
> to be submitted even if the ATS is down. This functionality is not properly 
> documented, so as part of this JIRA we try to document and clarify these 
> configurations.
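
For illustration, a hedged sketch of the client-side settings described above, 
using the Configuration API (the YarnConfiguration constant names are assumed to 
mirror the key names):

{code}
// Either disable the timeline service so no delegation token is fetched, or keep
// it enabled but tolerate the ATS being down at job-submit time.
Configuration conf = new YarnConfiguration();
conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, false);
// or:
conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_CLIENT_BEST_EFFORT, true);
{code}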



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-03-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220279#comment-15220279
 ] 

Jason Lowe commented on YARN-4311:
--

Thanks for updating the patch!   Everything looks great except nodes list 
should not be possessive, i.e.: "nodes' list" should just be "nodes list".

bq. I have additionally added a log line at info level when the node is removed 
from the inactive list to better track when nodes finally go away. Should this 
be at debug level?

I think it's fine to log it at INFO.  It should be a relatively rare log 
message, and it helps explain to users/admins why a node disappeared from the 
RM UI.
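
For illustration only, the kind of INFO-level message being discussed; the exact 
wording and the class it lands in are defined by the patch:

{code}
// Illustrative only: log when a node is dropped from the inactive list so that
// admins can tell why it disappeared from the decommissioned-nodes view.
LOG.info("Removed node " + nodeId + " from the inactive nodes list; it is no"
    + " longer present in the include/exclude files");
{code}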


> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v2.patch, YARN-4311-v3.patch, YARN-4311-v4.patch, 
> YARN-4311-v5.patch, YARN-4311-v6.patch, YARN-4311-v7.patch, 
> YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing the node from the include and 
> exclude lists is not sufficient. The RM still lists it under decommissioned nodes. 
> The tricky part that [~jlowe] pointed out was the case when include lists are not 
> used; in that case we don't want the nodes to fall off if they are not active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220221#comment-15220221
 ] 

Hadoop QA commented on YARN-2883:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 10 new + 
425 unchanged - 4 fixed = 435 total (was 429) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s {color} 
| {color:red} hadoop-yarn-api in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 55s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s {color} 
| {color:red

[jira] [Commented] (YARN-4726) [Umbrella] Allocation reuse for application upgrades

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220217#comment-15220217
 ] 

Wangda Tan commented on YARN-4726:
--

[~asuresh], 

Thanks for raising these JIRAs; they are required by a couple of scheduling 
improvements.
Before starting implementation, could you add a design doc so we can better 
understand the scope?

> [Umbrella] Allocation reuse for application upgrades
> 
>
> Key: YARN-4726
> URL: https://issues.apache.org/jira/browse/YARN-4726
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>
> See overview doc at YARN-4692, copying the sub-section to track all related 
> efforts.
> Once auto-restart of containers is taken care of (YARN-4725), we need to 
> address what I believe is the second most important reason for service 
> containers to restart: upgrades. Once a service is running on YARN, the way 
> the container allocation lifecycle works, any time the container exits, YARN 
> will reclaim the resources. During an upgrade, with a multitude of other 
> applications running in the system, giving up and getting back resources 
> allocated to the service is hard to manage. Things like NodeLabels in YARN 
> help this cause but are not straightforward to use to address the 
> app-specific use-cases.
> We need a first-class way of letting applications reuse the same 
> resource allocation for multiple launches of the processes inside the 
> container. This is done by decoupling the allocation lifecycle and the 
> process lifecycle.
> The JIRA YARN-1040 initiated this conversation. We need two things here:
>  - (1) (Task) The ApplicationMaster should be able to use the same 
> container allocation and issue multiple startContainer requests to the 
> NodeManager.
>  - (2) (Task) To support the upgrade of the ApplicationMaster itself, 
> clients should be able to inform YARN to restart the AM within the same 
> allocation but with new bits.
> The JIRAs YARN-3417 and YARN-4470 talk about the second task above ...
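
To make the proposed decoupling concrete, here is a minimal, purely illustrative sketch of how an AM-side helper might reuse one allocation across two launches. It uses the existing {{NMClient#startContainer}} API, but re-issuing it on a live allocation is exactly what this umbrella proposes rather than something the NodeManager supports today; the class and method names below are hypothetical.

{code:java}
// Illustrative sketch only: under the proposed model, the AM keeps the
// allocation (the Container object) and re-launches the process inside it.
// Today a second startContainer on the same container id would be rejected;
// this shows the intended usage, not current NodeManager behaviour.
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;

public class AllocationReuseSketch {
  public static void upgrade(NMClient nmClient, Container allocation,
      ContainerLaunchContext oldBits, ContainerLaunchContext newBits)
      throws Exception {
    // First launch: normal start of the service process.
    nmClient.startContainer(allocation, oldBits);

    // ... service runs, an upgrade is requested, the process exits ...

    // Proposed: reuse the same allocation and start the new bits in it,
    // instead of releasing the container and asking the RM again.
    nmClient.startContainer(allocation, newBits);
  }
}
{code}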



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220199#comment-15220199
 ] 

Arun Suresh commented on YARN-4902:
---

Thanks for putting this up, [~vinodkv] et al.

Did an initial fly-by. It looks like the *Allocation-ID* mentioned in the doc 
serves the same purpose as the Resource Request ID proposed in YARN-4879. Also, 
since we are introducing a first-class notion of Allocation in YARN-4726, 
things might start to get confusing.
Does it make sense to rename the allocation-id proposed here to something like 
*resource-request-id*?

Will provide more comments shortly.

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, where in the cluster they should 
> be placed, and so on.
> However, a host of new feature requirements is making the API increasingly 
> complex and difficult for users to understand, and very complicated to 
> implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.
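
For readers unfamiliar with the API being generalized here, the following is a small sketch (not taken from the attached design doc) of how an ApplicationMaster expresses "what size, how many, and where" with the existing {{ResourceRequest}} records; the priority, size and count are arbitrary example values.

{code:java}
// Minimal sketch of the existing ResourceRequest API referred to above:
// the AM states how big the containers should be, how many it wants, and
// where (a host, a rack, or ResourceRequest.ANY) it wants them placed.
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ResourceRequestSketch {
  public static ResourceRequest anyNodeRequest() {
    Resource capability = Resource.newInstance(2048, 1);   // 2 GB, 1 vcore
    return ResourceRequest.newInstance(
        Priority.newInstance(1),   // application-defined priority
        ResourceRequest.ANY,       // no locality constraint
        capability,
        5);                        // number of containers wanted
  }
}
{code}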



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4879) Proposal for a simple (delta) allocate protocol

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220145#comment-15220145
 ] 

Wangda Tan commented on YARN-4879:
--

Linked this to YARN-4902, which will be a longer-term fix for resource-request 
related issues.

> Proposal for a simple (delta) allocate protocol
> ---
>
> Key: YARN-4879
> URL: https://issues.apache.org/jira/browse/YARN-4879
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: SimpleAllocateProtocolProposal-v1.pdf
>
>
> For legacy reasons, the current allocate protocol expects expanded requests 
> that represent the cumulative request for any change in resource 
> constraints. This is not only very difficult to comprehend but also makes it 
> impossible for the scheduler to associate container allocations with the 
> original requests. This problem is amplified by the fact that the expansion 
> is managed by the AMRMClient, which makes it cumbersome for non-Java clients 
> as they all have to replicate the non-trivial logic. In this JIRA, we are 
> proposing a delta allocate protocol where the AM only needs to specify 
> changes in resource constraints.
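
The difference can be illustrated with a hypothetical sketch (not taken from the attached proposal): suppose an AM already has 3 outstanding container asks and now wants 2 more. Under the current cumulative protocol, the next allocate call must restate the full outstanding ask; under the proposed delta protocol, it would only carry the change.

{code:java}
// Hypothetical illustration of cumulative vs. delta semantics. The exact
// wire format of the delta protocol is what YARN-4879 is designing; the
// "deltaUpdate" below only shows the intent ("+2 containers").
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class DeltaAllocateSketch {
  static final Resource ONE_GB = Resource.newInstance(1024, 1);
  static final Priority P1 = Priority.newInstance(1);

  // Current (cumulative) semantics: 3 containers outstanding, the AM wants
  // 2 more, so the next allocate must carry numContainers = 5.
  static ResourceRequest cumulativeUpdate() {
    return ResourceRequest.newInstance(P1, ResourceRequest.ANY, ONE_GB, 5);
  }

  // Proposed (delta) semantics: the AM would express only the change, "+2".
  static ResourceRequest deltaUpdate() {
    return ResourceRequest.newInstance(P1, ResourceRequest.ANY, ONE_GB, 2);
  }
}
{code}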



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4879) Proposal for a simple (delta) allocate protocol

2016-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220142#comment-15220142
 ] 

Wangda Tan commented on YARN-4879:
--

Overall looks good, thanks [~subru]!

This will be very useful to solve existing request tracking issues.

> Proposal for a simple (delta) allocate protocol
> ---
>
> Key: YARN-4879
> URL: https://issues.apache.org/jira/browse/YARN-4879
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: SimpleAllocateProtocolProposal-v1.pdf
>
>
> For legacy reasons, the current allocate protocol expects expanded requests 
> that represent the cumulative request for any change in resource 
> constraints. This is not only very difficult to comprehend but also makes it 
> impossible for the scheduler to associate container allocations with the 
> original requests. This problem is amplified by the fact that the expansion 
> is managed by the AMRMClient, which makes it cumbersome for non-Java clients 
> as they all have to replicate the non-trivial logic. In this JIRA, we are 
> proposing a delta allocate protocol where the AM only needs to specify 
> changes in resource constraints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2883) Queuing of container requests in the NM

2016-03-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2883:
--
Attachment: YARN-2883-trunk.008.patch

Rebasing the patch and uploading it to kick Jenkins.

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.
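
As a rough illustration of the queuing behaviour described above, here is a hypothetical sketch; the class, fields and method names below are invented for this example and are not taken from the attached patches.

{code:java}
// Hypothetical sketch of the NM-side queuing decision: queueable containers
// start only when the node has room, guaranteed-start containers start
// immediately (a real NM would preempt queueable ones if needed).
import java.util.ArrayDeque;
import java.util.Queue;

public class NMContainerQueueSketch {

  private static final class QueuedContainer {
    final Runnable launch;
    final int vcores;
    QueuedContainer(Runnable launch, int vcores) {
      this.launch = launch;
      this.vcores = vcores;
    }
  }

  private final Queue<QueuedContainer> queue = new ArrayDeque<>();
  private int availableVCores;

  public NMContainerQueueSketch(int nodeVCores) {
    this.availableVCores = nodeVCores;
  }

  // Queueable request: start it if the node currently has room, else hold it.
  public synchronized void submitQueueable(Runnable launch, int vcores) {
    if (vcores <= availableVCores) {
      availableVCores -= vcores;
      launch.run();
    } else {
      queue.add(new QueuedContainer(launch, vcores));
    }
  }

  // Guaranteed-start request: must start immediately; a real implementation
  // would first kill running queueable containers to free resources (elided).
  public synchronized void submitGuaranteed(Runnable launch, int vcores) {
    availableVCores -= vcores;
    launch.run();
  }

  // When a container finishes, return its resources and drain the queue FIFO.
  public synchronized void containerFinished(int vcores) {
    availableVCores += vcores;
    while (!queue.isEmpty() && queue.peek().vcores <= availableVCores) {
      QueuedContainer next = queue.poll();
      availableVCores -= next.vcores;
      next.launch.run();
    }
  }
}
{code}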



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-4364) NPE in ATS v1 web service: AppInfo constructor

2016-03-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved YARN-4364.
--
Resolution: Cannot Reproduce

> NPE in ATS v1 web service: AppInfo constructor
> ---
>
> Key: YARN-4364
> URL: https://issues.apache.org/jira/browse/YARN-4364
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>
> Seen during testing of SPARK-1537; an NPE in the timeline server during 
> {{AppInfo}} construction. Presumably an incomplete record passed in is the 
> cause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220041#comment-15220041
 ] 

Arun Suresh commented on YARN-4895:
---

Actually, let me hold off on this till the end of the day. [~vvasudev], let me 
know if your comments on 
https://issues.apache.org/jira/browse/YARN-4895?focusedCommentId=15218803&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15218803
 have been addressed to your satisfaction.

> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In the ResourceUtilization class, there is already an addTo method. 
> For completeness, we are adding the dual subtractFrom method here.
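
For context, a minimal sketch of what the dual operation could look like, written here as an external helper against ResourceUtilization's public getters and setters; the actual patch adds an equivalent subtractFrom method on the class itself and may differ in detail.

{code:java}
// Sketch only: mirrors addTo by decrementing the three tracked quantities.
import org.apache.hadoop.yarn.api.records.ResourceUtilization;

public class ResourceUtilizationMath {
  public static void subtractFrom(ResourceUtilization u,
      int pmem, int vmem, float cpu) {
    u.setPhysicalMemory(u.getPhysicalMemory() - pmem);
    u.setVirtualMemory(u.getVirtualMemory() - vmem);
    u.setCPU(u.getCPU() - cpu);
  }

  public static void main(String[] args) {
    ResourceUtilization u = ResourceUtilization.newInstance(4096, 8192, 0.8f);
    subtractFrom(u, 1024, 2048, 0.2f);
    System.out.println(u.getPhysicalMemory());  // 3072
  }
}
{code}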



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220032#comment-15220032
 ] 

Arun Suresh commented on YARN-4895:
---

+1, thanks for the patch [~kkaranasos].
Will commit this shortly.

> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In the ResourceUtilization class, there is already an addTo method. 
> For completeness, we are adding the dual subtractFrom method here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220012#comment-15220012
 ] 

Hadoop QA commented on YARN-2883:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-2883 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795942/YARN-2883-trunk.007.patch
 |
| JIRA Issue | YARN-2883 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10914/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-yarn-2877.001.patch, YARN-2883-yarn-2877.002.patch, 
> YARN-2883-yarn-2877.003.patch, YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4726) [Umbrella] Allocation reuse for application upgrades

2016-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219906#comment-15219906
 ] 

Arun Suresh commented on YARN-4726:
---

Created branch *yarn-4726* to start work on this

> [Umbrella] Allocation reuse for application upgrades
> 
>
> Key: YARN-4726
> URL: https://issues.apache.org/jira/browse/YARN-4726
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>
> See overview doc at YARN-4692, copying the sub-section to track all related 
> efforts.
> Once auto-restart of containers is taken care of (YARN-4725), we need to 
> address what I believe is the second most important reason for service 
> containers to restart: upgrades. Once a service is running on YARN, the way 
> the container allocation lifecycle works, any time the container exits, YARN 
> will reclaim the resources. During an upgrade, with a multitude of other 
> applications running in the system, giving up and getting back resources 
> allocated to the service is hard to manage. Things like NodeLabels in YARN 
> help this cause but are not straightforward to use to address the 
> app-specific use-cases.
> We need a first-class way of letting applications reuse the same 
> resource allocation for multiple launches of the processes inside the 
> container. This is done by decoupling the allocation lifecycle and the 
> process lifecycle.
> The JIRA YARN-1040 initiated this conversation. We need two things here:
>  - (1) (Task) The ApplicationMaster should be able to use the same 
> container allocation and issue multiple startContainer requests to the 
> NodeManager.
>  - (2) (Task) To support the upgrade of the ApplicationMaster itself, 
> clients should be able to inform YARN to restart the AM within the same 
> allocation but with new bits.
> The JIRAs YARN-3417 and YARN-4470 talk about the second task above ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4699) Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to change label of a node

2016-03-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4699:
--
Target Version/s: 2.8.0

> Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to 
> change label of a node
> 
>
> Key: YARN-4699
> URL: https://issues.apache.org/jira/browse/YARN-4699
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: 0001-YARN-4699.patch, 0002-YARN-4699.patch, 
> AfterAppFInish-LabelY-Metrics.png, ForLabelX-AfterSwitch.png, 
> ForLabelY-AfterSwitch.png
>
>
> The scenario is as follows:
> a. Two nodes are available in the cluster (node1 with label "x", node2 with 
> label "y").
> b. Submit an application to node1 for label "x".
> c. Change node1's label to "y" by using the *replaceLabelsOnNode* command.
> d. Verify the Scheduler UI for metrics such as "Used Capacity" and "Absolute 
> Capacity"; "x" still shows some capacity.
> e. Change node1's label back to "x" and verify the UI and REST o/p.
> Output:
> 1. "Used Capacity", "Absolute Capacity", etc. are not decremented once the 
> label is changed for a node.
> 2. The UI tab for the respective label shows a wrong GREEN color in these cases.
> 3. The REST o/p is wrong for each label after executing the above scenario.
> Attaching screenshots also. This ticket will cover the UI and REST o/p 
> fix when a label is changed at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4884) Fix missing documentation about rmadmin command regarding node labels

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219607#comment-15219607
 ] 

Hudson commented on YARN-4884:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9532 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9532/])
YARN-4884. Fix missing documentation about rmadmin command regarding (vvasudev: 
rev f1b8f6b2c16403869f78a54268ae1165982a7050)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md


> Fix missing documentation about rmadmin command regarding node labels
> -
>
> Key: YARN-4884
> URL: https://issues.apache.org/jira/browse/YARN-4884
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-4884.01.patch
>
>
> There is no documentation in the rmadmin section about node-label commands 
> such as {{-addToClusterNodeLabels}} and {{-removeFromClusterNodeLabels}}.
> In addition, the commands inherited from HAAdmin are also missing. 
> They are available with rmadmin when {{yarn.resourcemanager.ha.enabled}} is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219606#comment-15219606
 ] 

Hudson commented on YARN-4857:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9532 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9532/])
YARN-4857. Add missing default configuration regarding preemption of (vvasudev: 
rev 0064cba169d1bb761f6e81ee86830be598d7c500)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.
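
As a quick illustration of the settings in question, the sketch below reads two of them through YarnConfiguration; the fallback values passed as the second argument are example assumptions for this sketch, not authoritative yarn-default.xml defaults.

{code:java}
// Illustrative only: reading two of the yarn.resourcemanager.monitor.*
// preemption settings. The property names are the keys this issue discusses;
// the fallback values here are assumptions, not the shipped defaults.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PreemptionConfigSketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    long monitoringIntervalMs = conf.getLong(
        "yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval",
        3000L);
    long maxWaitBeforeKillMs = conf.getLong(
        "yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill",
        15000L);
    System.out.println(monitoringIntervalMs + " ms / " + maxWaitBeforeKillMs + " ms");
  }
}
{code}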



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4884) Fix missing documentation about rmadmin command regarding node labels

2016-03-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4884:

Fix Version/s: 2.9.0

> Fix missing documentation about rmadmin command regarding node labels
> -
>
> Key: YARN-4884
> URL: https://issues.apache.org/jira/browse/YARN-4884
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-4884.01.patch
>
>
> There is no documentation in the rmadmin section about node-label commands 
> such as {{-addToClusterNodeLabels}} and {{-removeFromClusterNodeLabels}}.
> In addition, the commands inherited from HAAdmin are also missing. 
> They are available with rmadmin when {{yarn.resourcemanager.ha.enabled}} is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-03-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4857:

Summary: Add missing default configuration regarding preemption of 
CapacityScheduler  (was: Missing default configuration regarding preemption of 
CapacityScheduler)

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing from 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4884) Fix missing documentation about rmadmin command regarding node labels

2016-03-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4884:

Summary: Fix missing documentation about rmadmin command regarding node 
labels  (was: Missing documentation about rmadmin command regarding node labels)

> Fix missing documentation about rmadmin command regarding node labels
> -
>
> Key: YARN-4884
> URL: https://issues.apache.org/jira/browse/YARN-4884
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-4884.01.patch
>
>
> There is no documentation in the rmadmin section about node-label commands 
> such as {{-addToClusterNodeLabels}} and {{-removeFromClusterNodeLabels}}.
> In addition, the commands inherited from HAAdmin are also missing. 
> They are available with rmadmin when {{yarn.resourcemanager.ha.enabled}} is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)