[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: YARN-2181.patch

Thanks [~jianhe] for your comment.
Uploaded a new patch rebased to the latest trunk; it uses 
ContainerStatus#getExitStatus with ContainerExitStatus#PREEMPTED to check 
whether the container was preempted.
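
For reference, a minimal sketch of that check (a hedged illustration assuming the public ContainerStatus / ContainerExitStatus records API, not the exact patch code):

{code}
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

final class PreemptionCheck {
  // A finished container reports why it exited via its exit status; the
  // PREEMPTED constant marks containers that the scheduler preempted.
  static boolean isPreempted(ContainerStatus status) {
    return status.getExitStatus() == ContainerExitStatus.PREEMPTED;
  }
}
{code}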

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: NM apps.png, YARN-2181.patch, YARN-2181.patch, queue 
> page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2163) WebUI: Order of AppId in apps table should be consistent with ApplicationId.compareTo().

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043114#comment-14043114
 ] 

Hadoop QA commented on YARN-2163:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12650545/apps%20page.png
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4073//console

This message is automatically generated.

> WebUI: Order of AppId in apps table should be consistent with 
> ApplicationId.compareTo().
> 
>
> Key: YARN-2163
> URL: https://issues.apache.org/jira/browse/YARN-2163
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Priority: Minor
> Attachments: YARN-2163.patch, apps page.png
>
>
> Currently, AppId is treated as numeric, so the applications table is sorted 
> by the int-typed id only (not including the cluster timestamp); see the 
> attached screenshot. The order of AppId on the web page should be consistent 
> with ApplicationId.compareTo().
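
For illustration, an ordering consistent with ApplicationId.compareTo() as described above (a sketch comparing the cluster timestamp before the integer id, not the patch itself):

{code}
import java.util.Comparator;
import org.apache.hadoop.yarn.api.records.ApplicationId;

final class AppIdOrder {
  // Sort by cluster timestamp first, then by the per-cluster integer id,
  // matching ApplicationId.compareTo() semantics.
  static final Comparator<ApplicationId> BY_APP_ID = (a, b) -> {
    int byTimestamp = Long.compare(a.getClusterTimestamp(), b.getClusterTimestamp());
    return byTimestamp != 0 ? byTimestamp : Integer.compare(a.getId(), b.getId());
  };
}
{code}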



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: application page.png

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, application page.png, 
> queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: (was: NM apps.png)

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, application page.png, 
> queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043137#comment-14043137
 ] 

Hadoop QA commented on YARN-2181:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12652351/application%20page.png
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4075//console

This message is automatically generated.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, application page.png, 
> queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2022) Preempting an Application Master container can be kept as least priority when multiple applications are marked for preemption by ProportionalCapacityPreemptionPolicy

2014-06-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043143#comment-14043143
 ] 

Wangda Tan commented on YARN-2022:
--

I've revisited the code:
(1) getMaxAMResourcePerQueuePercent with getAbsoluteMaximumCapacity is used to 
check whether an app can be active, considering the total number of 
applications within a queue.
And
(2) getMaxAMResourcePerQueuePercent with getAbsoluteCapacity is used to check 
whether an app can be active, considering the total number of applications 
under a user within a queue.

bq. Could I go along by changing maxAMCapacity w.r.t getAbsoluteCapacity?
IMO, I'd prefer (1) over (2). (2) can be used when we consider the user limit 
during preemption, which is in the scope of YARN-2069.

And I think [~sunilg]'s solution should be correct.
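
To make the contrast concrete, a hedged sketch of the two computations (getter names follow the comment above and CapacityScheduler conventions; the exact LeafQueue code differs):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
import org.apache.hadoop.yarn.util.resource.Resources;

final class AMLimitSketch {
  // (1) Queue-level limit: scaled by the queue's absolute *maximum* capacity.
  static Resource perQueueLimit(Resource clusterResource, LeafQueue queue) {
    return Resources.multiply(clusterResource,
        queue.getAbsoluteMaximumCapacity() * queue.getMaxAMResourcePerQueuePercent());
  }

  // (2) Per-user limit: scaled by the queue's absolute (guaranteed) capacity.
  static Resource perUserLimit(Resource clusterResource, LeafQueue queue) {
    return Resources.multiply(clusterResource,
        queue.getAbsoluteCapacity() * queue.getMaxAMResourcePerQueuePercent());
  }
}
{code}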



> Preempting an Application Master container can be kept as least priority when 
> multiple applications are marked for preemption by 
> ProportionalCapacityPreemptionPolicy
> -
>
> Key: YARN-2022
> URL: https://issues.apache.org/jira/browse/YARN-2022
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-2022-DesignDraft.docx, YARN-2022.2.patch, 
> YARN-2022.3.patch, YARN-2022.4.patch, YARN-2022.5.patch, YARN-2022.6.patch, 
> YARN-2022.7.patch, Yarn-2022.1.patch
>
>
> Cluster Size = 16GB [2NM's]
> Queue A Capacity = 50%
> Queue B Capacity = 50%
> Consider there are 3 applications running in Queue A which has taken the full 
> cluster capacity. 
> J1 = 2GB AM + 1GB * 4 Maps
> J2 = 2GB AM + 1GB * 4 Maps
> J3 = 2GB AM + 1GB * 2 Maps
> Another Job J4 is submitted in Queue B [J4 needs a 2GB AM + 1GB * 2 Maps ].
> Currently in this scenario, job J3 will get killed, including its AM.
> It would be better if the AM could be given the least priority among multiple 
> applications. In this same scenario, map tasks from J3 and J2 can be preempted.
> Later, when the cluster is free, maps can be allocated to these jobs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043153#comment-14043153
 ] 

Hadoop QA commented on YARN-2181:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652348/YARN-2181.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4074//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/4074//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4074//console

This message is automatically generated.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, application page.png, 
> queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2052:
-

Attachment: YARN-2052.5.patch

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app, so 
> after a restart it does not know what the new sequence number should be for 
> new allocations.
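
For illustration, a hedged sketch of the id scheme described above (assuming the ContainerId.newInstance factory; the in-memory counter below is exactly the state an RM restart loses):

{code}
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ContainerId;

final class ContainerIdSketch {
  // Per-app sequence number, kept only in RM memory (never persisted).
  private final AtomicInteger sequence = new AtomicInteger(0);

  // Each new container id = app attempt id + next sequence value.
  ContainerId next(ApplicationAttemptId attemptId) {
    return ContainerId.newInstance(attemptId, sequence.incrementAndGet());
  }
}
{code}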



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: YARN-2181.patch

Suppressed the findbugs warning and fixed the failing unit tests.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1039) Add parameter for YARN resource requests to indicate "long lived"

2014-06-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043238#comment-14043238
 ] 

Wangda Tan commented on YARN-1039:
--

bq. it must be set at application creation time and all containers of the app 
will be considered long lived. This is because the RM does not keep track of 
individual container requests.
I think [~vinodkv]'s suggestion makes more sense: 
https://issues.apache.org/jira/browse/YARN-1039?focusedCommentId=14041652&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14041652
And as [~cwelch] mentioned, we don't need the constraint that if an app is 
long-lived, all its containers must be long-lived; it's better to leave this 
decision to the app itself.

> Add parameter for YARN resource requests to indicate "long lived"
> -
>
> Key: YARN-1039
> URL: https://issues.apache.org/jira/browse/YARN-1039
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Steve Loughran
>Assignee: Craig Welch
> Attachments: YARN-1039.1.patch, YARN-1039.2.patch, YARN-1039.3.patch
>
>
> A container request could support a new parameter "long-lived". This could be 
> used by a scheduler that would know not to host the service on a transient 
> (cloud: spot priced) node.
> Schedulers could also decide whether or not to allocate multiple long-lived 
> containers on the same node.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043263#comment-14043263
 ] 

Hadoop QA commented on YARN-2181:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652369/YARN-2181.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebApp

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4076//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4076//console

This message is automatically generated.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> can better understand the preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043264#comment-14043264
 ] 

Hadoop QA commented on YARN-2052:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652370/YARN-2052.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4077//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/4077//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4077//console

This message is automatically generated.

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app, so 
> after a restart it does not know what the new sequence number should be for 
> new allocations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2111) In FairScheduler.attemptScheduling, we don't count containers as assigned if they have 0 memory but non-zero cores

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043332#comment-14043332
 ] 

Hudson commented on YARN-2111:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2111. In FairScheduler.attemptScheduling, we don't count containers as 
assigned if they have 0 memory but non-zero cores (Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605113)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> In FairScheduler.attemptScheduling, we don't count containers as assigned if 
> they have 0 memory but non-zero cores
> --
>
> Key: YARN-2111
> URL: https://issues.apache.org/jira/browse/YARN-2111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.4.0
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.5.0
>
> Attachments: YARN-2111.patch
>
>
> {code}
> if (Resources.greaterThan(RESOURCE_CALCULATOR, clusterResource,
>   queueMgr.getRootQueue().assignContainer(node),
>   Resources.none())) {
> {code}
> As RESOURCE_CALCULATOR is a DefaultResourceCalculator, cores are not taken 
> into account here.
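
A hedged illustration of the problem: DefaultResourceCalculator compares memory only, so a 0 MB / 1 vcore assignment does not register as greater than Resources.none():

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.Resources;

final class ZeroMemoryContainerSketch {
  static boolean countsAsAssigned(Resource clusterResource) {
    ResourceCalculator calc = new DefaultResourceCalculator();
    Resource assigned = Resource.newInstance(0, 1); // 0 MB, 1 vcore
    // Returns false: the memory-only comparison sees the assignment as
    // equal to Resources.none(), even though a container was handed out.
    return Resources.greaterThan(calc, clusterResource, assigned, Resources.none());
  }
}
{code}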



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2195) Clean a piece of code in ResourceRequest

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043322#comment-14043322
 ] 

Hudson commented on YARN-2195:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2195. Clean a piece of code in ResourceRequest. Contributed by Wei Yan. 
(devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605083)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java


> Clean a piece of code in ResourceRequest
> 
>
> Key: YARN-2195
> URL: https://issues.apache.org/jira/browse/YARN-2195
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: YARN-2195.patch
>
>
> {code}
> if (numContainersComparison == 0) {
>   return 0;
> } else {
>   return numContainersComparison;
> }
> {code}
> This code can be simplified to 
> {code}
> return numContainersComparison;
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2152) Recover missing container information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043326#comment-14043326
 ] 

Hudson commented on YARN-2152:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2152. Added missing information into ContainerTokenIdentifier so that 
NodeManagers can report the same to RM when RM restarts. Contributed Jian He. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605205)
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NMContainerStatusPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestProtocolRecords.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestRegisterNodeManagerRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/

[jira] [Commented] (YARN-2072) RM/NM UIs and webservices are missing vcore information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043336#comment-14043336
 ] 

Hudson commented on YARN-2072:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2072. RM/NM UIs and webservices are missing vcore information. (Nathan 
Roberts via tgraves) (tgraves: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605162)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ResourceView.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NodePage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/MetricsOverviewTable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/UserMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestNodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm


> RM/NM UIs and webservices are missing vcore information
> ---
>
> Key: YARN-2072
> URL: https://issues.apache.org/jira/browse/YARN-2072
> Project: Hadoop YARN
>  Issue Type:

[jira] [Commented] (YARN-1365) ApplicationMasterService to allow Register of an app that was running before restart

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043330#comment-14043330
 ] 

Hudson commented on YARN-1365:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-1365. Changed ApplicationMasterService to allow an app to re-register 
after RM restart. Contributed by Anubhav Dhoot (jianhe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605263)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ApplicationMasterNotRegisteredException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/InvalidApplicationMasterRequestException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/event/AppAttemptAddedSchedulerEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestFifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> ApplicationMasterService to allow Register of an app that was running before 
> restart
> 
>
> Key: YARN-1365
> URL: https://issues.apache.org/jira/browse/YARN-1365
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-1365.001.patch, YARN-1365.002.patch, 
> YARN-1365.003.patch, YARN-1365.004.patch, YARN-1365.005.patch, 
> YARN-1365.005.patch, YARN-1365.006.patch, YARN-1365.007.patch, 
> YARN-1365.008.patch, YARN-1365.008.patch, YARN-1365.009.patch, 
> YARN-1365.initial.patch
>
>
> For an application that was running before restart, the 
> ApplicationMasterService currently throws an exception when the app tries to 
> make the initial register or final unregister call. These should succeed and 
> the RMApp state machine should transition to completed like normal. 
> Unregistration should succeed for an app that the RM considers complete since 
> the RM may have died after saving completion in the store but before 
> notifying the AM that the AM is free to exit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2192) TestRMHA fails when run with a mix of Schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043324#comment-14043324
 ] 

Hudson commented on YARN-2192:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2192. TestRMHA fails when run with a mix of Schedulers. (Anubhav Dhoot via 
kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605138)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java


> TestRMHA fails when run with a mix of Schedulers
> 
>
> Key: YARN-2192
> URL: https://issues.apache.org/jira/browse/YARN-2192
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-2192.patch
>
>
> If the test is run with FairScheduler, some of the tests fail because the 
> metrics system objects are shared across tests and not destroyed completely.
> {code}
> Error Message
> Metrics source QueueMetrics,q0=root already exists!
> Stacktrace
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> QueueMetrics,q0=root already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:126)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:107)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:96)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:427)
> {code}
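
A hedged sketch of isolating the shared metrics system between tests (helper names come from hadoop metrics2 and the RM scheduler code; the actual fix may differ):

{code}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
import org.junit.After;
import org.junit.Before;

public class MetricsIsolationSketch {
  @Before
  public void setUp() {
    // Tolerate re-registration of source names when several RM instances
    // share one JVM across test methods.
    DefaultMetricsSystem.setMiniClusterMode(true);
  }

  @After
  public void tearDown() {
    // Drop per-queue sources so the next test can register them afresh.
    QueueMetrics.clearQueueMetrics();
    DefaultMetricsSystem.shutdown();
  }
}
{code}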



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2109) Fix TestRM to work with both schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043329#comment-14043329
 ] 

Hudson commented on YARN-2109:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2109. Fix TestRM to work with both schedulers. (Anubhav Dhoot via kasha) 
(kasha: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605142)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java


> Fix TestRM to work with both schedulers
> ---
>
> Key: YARN-2109
> URL: https://issues.apache.org/jira/browse/YARN-2109
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Anubhav Dhoot
>Assignee: Karthik Kambatla
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: YARN-2109.001.patch, YARN-2109.001.patch
>
>
> testNMTokenSentForNormalContainer requires CapacityScheduler and was fixed in 
> [YARN-1846|https://issues.apache.org/jira/browse/YARN-1846] to explicitly set 
> it to be CapacityScheduler. But if the default scheduler is set to 
> FairScheduler then the rest of the tests that execute after this will fail 
> with invalid cast exceptions when getting queuemetrics. This is based on test 
> execution order as only the tests that execute after this test will fail. 
> This is because the queuemetrics will be initialized by this test to 
> QueueMetrics and shared by the subsequent tests. 
> We can explicitly clear the metrics at the end of this test to fix this.
> For example
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics cannot 
> be cast to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:103)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:418)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:808)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:230)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:90)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:85)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:81)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testNMToken(TestRM.java:232)
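
A hedged sketch of the cleanup suggested in the description (assuming the static test helpers on QueueMetrics and DefaultMetricsSystem):

{code}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;

final class TestRMCleanupSketch {
  // Run at the end of testNMTokenSentForNormalContainer: drop the
  // QueueMetrics registered under CapacityScheduler so later
  // FairScheduler-based tests create FSQueueMetrics instead of casting
  // the stale QueueMetrics instance.
  static void clearSchedulerMetrics() {
    QueueMetrics.clearQueueMetrics();
    DefaultMetricsSystem.shutdown();
  }
}
{code}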



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2074) Preemption of AM containers shouldn't count towards AM failures

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1404#comment-1404
 ] 

Hudson commented on YARN-2074:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/594/])
YARN-2074. Changed ResourceManager to not count AM preemptions towards app 
failures. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605106)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationAttemptStateDataPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java


> Preemption of AM containers shouldn't count towards AM failures
> ---
>
> Key: YARN-2074
> URL: https://issues.apache.org/jira/browse/YARN-2074
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jian He
> Fix For: 2.5.0
>
> Attachments: YARN-2074.1.patch, YARN-2074.2.patch, YARN-2074.3.patch, 
> YARN-2074.4.patch, YARN-2074.5.patch, YARN-2074.6.patch, YARN-2074.6.patch, 
> YARN-2074.7.patch, YARN-2074.7.patch, YARN-2074.8.patch
>
>
> One orthogonal concern with issues like YARN-2055 and YARN-2022 is that AM 
> containers getting preempted shouldn't count towards AM failures and thus 
> shouldn't eventually fail applications.
> We should explicitly handle AM container preemption/kill as a separate issue 
> and not count it towards the limit on AM failures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-2201:


Attachment: apache-yarn-2201.0.patch

Hi [~rchiang], I wrote the code for that test. My apologies for not handling 
the dependency.

The test is meant to ensure that a random user cannot kill another user's app 
via web services. When the RM receives a request to kill an app, it checks if 
the user who sent the request either submitted the app or has administrator 
privileges on the queue in which the app was submitted.
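
A hedged sketch of that check (assuming the YarnScheduler#checkAccess API; the actual RM code path differs in detail):

{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.QueueACL;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.YarnScheduler;

final class KillAccessSketch {
  // A caller may kill an app if they submitted it, or if they hold
  // ADMINISTER_QUEUE on the queue the app was submitted to.
  static boolean canKill(UserGroupInformation caller, RMApp app,
      YarnScheduler scheduler) {
    return caller.getShortUserName().equals(app.getUser())
        || scheduler.checkAccess(caller, QueueACL.ADMINISTER_QUEUE, app.getQueue());
  }
}
{code}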

The FifoScheduler gives administrator privileges to all users, so we can't use 
it for this test. CapacityScheduler and FairScheduler allow queue-specific 
administrators. The test sets up a queue with an administrator, submits an app 
to that queue, attempts to kill that app as a different user, and then checks 
that the request was denied.

To add support for FairScheduler, all you have to do is set up a FairScheduler 
conf that configures the queue with a specific administrator and reload the RM 
scheduler with that config. I can make the change myself if you wish, but I'm 
not too familiar with FairScheduler and how to set up queues in it. I've also 
uploaded a patch that skips the test if the scheduler is not CapacityScheduler. 
In the future, once we add FairScheduler support, the test can run with 
FairScheduler as well.

If you'd like me to add support for FairScheduler, please re-assign the bug to 
me.

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: test
> Attachments: apache-yarn-2201.0.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.263 sec  <<< FAILURE!
> java.lang.AssertionError: 

[jira] [Updated] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-2201:


Attachment: apache-yarn-2201.1.patch

Uploaded new patch with formatting fixed.

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: test
> Attachments: apache-yarn-2201.0.patch, apache-yarn-2201.1.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.263 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> testSingleAppKillInvalidId[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 0.214 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidId(TestRMWebServicesAppsModification.java:482)
> I'm opening this JIRA as a discussion for the best way to fix this.  I've got 
> a few ideas, but I would like to get some feedback about potentially more 
> robust ways to fix this test.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Updated] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2052:
-

Attachment: (was: YARN-2091.6.patch)

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch, YARN-2052.6.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app. So 
> after a restart it does not know what the new sequence number should be for 
> new allocations.
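A minimal sketch of how those pieces compose into a container id, using the 
public records API (the timestamp and counters below are illustrative values):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ContainerId;

// The app identifier (cluster timestamp + app number + attempt number)
// plus a per-app sequence number yields ids like
// container_1403702268003_0001_01_000001. The sequence number is the
// part the RM does not persist across a work-preserving restart.
ApplicationId appId = ApplicationId.newInstance(1403702268003L, 1);
ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1);
int sequence = 1; // monotonically increasing per app
ContainerId containerId = ContainerId.newInstance(attemptId, sequence);
{code}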



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2052:
-

Attachment: YARN-2052.6.patch

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch, YARN-2052.6.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app. So 
> after a restart it does not know what the new sequence number should be for 
> new allocations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2052:
-

Attachment: YARN-2091.6.patch

Updated the patch to address Jian's comments and fixed a findbugs warning.

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch, YARN-2052.6.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app. So 
> after a restart it does not know what the new sequence number should be for 
> new allocations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: YARN-2181.patch

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so administrators/users can 
> better understand preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Does anyone know which class the JobHistoryServer uses to talk to secured HDFS?

2014-06-25 Thread Liu, David
Hi all,

I see that the JobHistoryServer has access to secured HDFS. Can anyone paste 
some code, or the name of the class it uses, to pass secure authentication? 


Thanks
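The usual pattern for a Hadoop daemon is a Kerberos keytab login at startup, 
before it touches secured HDFS; a minimal sketch of that pattern, assuming the 
standard mapreduce.jobhistory.* configuration keys:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;

// Sketch: log in from a keytab using the config keys that name the
// keytab file and the Kerberos principal. After this, HDFS client
// calls made by the daemon authenticate as that principal.
Configuration conf = new Configuration();
SecurityUtil.login(conf,
    "mapreduce.jobhistory.keytab",      // key naming the keytab file path
    "mapreduce.jobhistory.principal");  // key naming the Kerberos principal
{code}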



[jira] [Commented] (YARN-2109) Fix TestRM to work with both schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043497#comment-14043497
 ] 

Hudson commented on YARN-2109:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2109. Fix TestRM to work with both schedulers. (Anubhav Dhoot via kasha) 
(kasha: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605142)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java


> Fix TestRM to work with both schedulers
> ---
>
> Key: YARN-2109
> URL: https://issues.apache.org/jira/browse/YARN-2109
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Anubhav Dhoot
>Assignee: Karthik Kambatla
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: YARN-2109.001.patch, YARN-2109.001.patch
>
>
> testNMTokenSentForNormalContainer requires CapacityScheduler and was fixed in 
> [YARN-1846|https://issues.apache.org/jira/browse/YARN-1846] to explicitly set 
> it to be CapacityScheduler. But if the default scheduler is set to 
> FairScheduler then the rest of the tests that execute after this will fail 
> with invalid cast exceptions when getting queuemetrics. This is based on test 
> execution order as only the tests that execute after this test will fail. 
> This is because the queuemetrics will be initialized by this test to 
> QueueMetrics and shared by the subsequent tests. 
> We can explicitly clear the metrics at the end of this test to fix this.
> For example
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics cannot 
> be cast to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:103)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:418)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:808)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:230)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:90)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:85)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:81)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testNMToken(TestRM.java:232)
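Regarding "explicitly clear the metrics at the end of this test": a sketch of 
one way to do it, assuming the test-only QueueMetrics.clearQueueMetrics() 
helper and a JUnit teardown hook:

{code}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
import org.junit.After;

// Sketch: drop the statically cached queue metrics after each test so a
// CapacityScheduler test cannot leak QueueMetrics instances into a
// later FairScheduler test.
@After
public void tearDown() {
  QueueMetrics.clearQueueMetrics();
  DefaultMetricsSystem.shutdown();
}
{code}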



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2072) RM/NM UIs and webservices are missing vcore information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043504#comment-14043504
 ] 

Hudson commented on YARN-2072:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2072. RM/NM UIs and webservices are missing vcore information. (Nathan 
Roberts via tgraves) (tgraves: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605162)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ResourceView.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NodePage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/MetricsOverviewTable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/UserMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestNodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm


> RM/NM UIs and webservices are missing vcore information
> ---
>
> Key: YARN-2072
> URL: https://issues.apache.org/jira/browse/YARN-2072
> Project: Hadoop YARN
>  Issue Typ

[jira] [Commented] (YARN-2195) Clean a piece of code in ResourceRequest

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043490#comment-14043490
 ] 

Hudson commented on YARN-2195:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2195. Clean a piece of code in ResourceRequest. Contributed by Wei Yan. 
(devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605083)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java


> Clean a piece of code in ResourceRequest
> 
>
> Key: YARN-2195
> URL: https://issues.apache.org/jira/browse/YARN-2195
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: YARN-2195.patch
>
>
> {code}
> if (numContainersComparison == 0) {
>   return 0;
> } else {
>   return numContainersComparison;
> }
> {code}
> This code should be cleaned as 
> {code}
> return numContainersComparison;
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1365) ApplicationMasterService to allow Register of an app that was running before restart

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043498#comment-14043498
 ] 

Hudson commented on YARN-1365:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-1365. Changed ApplicationMasterService to allow an app to re-register 
after RM restart. Contributed by Anubhav Dhoot (jianhe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605263)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ApplicationMasterNotRegisteredException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/InvalidApplicationMasterRequestException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/event/AppAttemptAddedSchedulerEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestFifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> ApplicationMasterService to allow Register of an app that was running before 
> restart
> 
>
> Key: YARN-1365
> URL: https://issues.apache.org/jira/browse/YARN-1365
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-1365.001.patch, YARN-1365.002.patch, 
> YARN-1365.003.patch, YARN-1365.004.patch, YARN-1365.005.patch, 
> YARN-1365.005.patch, YARN-1365.006.patch, YARN-1365.007.patch, 
> YARN-1365.008.patch, YARN-1365.008.patch, YARN-1365.009.patch, 
> YARN-1365.initial.patch
>
>
> For an application that was running before restart, the 
> ApplicationMasterService currently throws an exception when the app tries to 
> make the initial register or final unregister call. These should succeed and 
> the RMApp state machine should transition to completed like normal. 
> Unregistration should succeed for an app that the RM considers complete since 
> the RM may have died after saving completion in the store but before 
> notifying the AM that the AM is free to exit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2192) TestRMHA fails when run with a mix of Schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043492#comment-14043492
 ] 

Hudson commented on YARN-2192:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2192. TestRMHA fails when run with a mix of Schedulers. (Anubhav Dhoot via 
kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605138)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java


> TestRMHA fails when run with a mix of Schedulers
> 
>
> Key: YARN-2192
> URL: https://issues.apache.org/jira/browse/YARN-2192
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-2192.patch
>
>
> If the test is run with FairScheduler, some of the tests fail because the 
> metrics system objects are shared across tests and not destroyed completely.
> {code}
> Error Message
> Metrics source QueueMetrics,q0=root already exists!
> Stacktrace
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> QueueMetrics,q0=root already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:126)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:107)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:96)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:427)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2152) Recover missing container information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043494#comment-14043494
 ] 

Hudson commented on YARN-2152:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2152. Added missing information into ContainerTokenIdentifier so that 
NodeManagers can report the same to RM when RM restarts. Contributed Jian He. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605205)
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NMContainerStatusPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestProtocolRecords.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestRegisterNodeManagerRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/sr

[jira] [Commented] (YARN-2111) In FairScheduler.attemptScheduling, we don't count containers as assigned if they have 0 memory but non-zero cores

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043500#comment-14043500
 ] 

Hudson commented on YARN-2111:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2111. In FairScheduler.attemptScheduling, we don't count containers as 
assigned if they have 0 memory but non-zero cores (Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605113)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> In FairScheduler.attemptScheduling, we don't count containers as assigned if 
> they have 0 memory but non-zero cores
> --
>
> Key: YARN-2111
> URL: https://issues.apache.org/jira/browse/YARN-2111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.4.0
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.5.0
>
> Attachments: YARN-2111.patch
>
>
> {code}
> if (Resources.greaterThan(RESOURCE_CALCULATOR, clusterResource,
>   queueMgr.getRootQueue().assignContainer(node),
>   Resources.none())) {
> {code}
> As RESOURCE_CALCULATOR is a DefaultResourceCalculator, we won't take cores 
> into account here.
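A minimal sketch of the failure mode, with illustrative resource sizes: 
DefaultResourceCalculator compares on memory alone, so a container of 0 MB and 
2 vcores is "not greater" than Resources.none() and never gets counted as 
assigned.

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.Resources;

// DefaultResourceCalculator only looks at memory when comparing, so
// the vcores handed out here are invisible to the greaterThan check.
ResourceCalculator rc = new DefaultResourceCalculator();
Resource cluster = Resource.newInstance(8192, 8);   // illustrative cluster size
Resource assigned = Resource.newInstance(0, 2);     // 0 MB, 2 vcores
boolean counted = Resources.greaterThan(rc, cluster, assigned, Resources.none());
// counted == false, even though two vcores were just allocated
{code}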



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2074) Preemption of AM containers shouldn't count towards AM failures

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043501#comment-14043501
 ] 

Hudson commented on YARN-2074:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/])
YARN-2074. Changed ResourceManager to not count AM preemptions towards app 
failures. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605106)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationAttemptStateDataPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java


> Preemption of AM containers shouldn't count towards AM failures
> ---
>
> Key: YARN-2074
> URL: https://issues.apache.org/jira/browse/YARN-2074
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jian He
> Fix For: 2.5.0
>
> Attachments: YARN-2074.1.patch, YARN-2074.2.patch, YARN-2074.3.patch, 
> YARN-2074.4.patch, YARN-2074.5.patch, YARN-2074.6.patch, YARN-2074.6.patch, 
> YARN-2074.7.patch, YARN-2074.7.patch, YARN-2074.8.patch
>
>
> One orthogonal concern with issues like YARN-2055 and YARN-2022 is that AM 
> containers getting preempted shouldn't count towards AM failures and thus 
> shouldn't eventually fail applications.
> We should explicitly handle AM container preemption/kill as a separate issue 
> and not count it towards the limit on AM failures.
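In code terms the policy amounts to checking the AM container's exit status 
before charging an attempt failure; a sketch under assumed names 
(shouldCountTowardsMaxAttempts is a hypothetical helper, not the patch's actual 
method):

{code}
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

// Sketch: an AM container that exited because the scheduler preempted
// it should not burn one of the app's allowed attempt failures.
boolean shouldCountTowardsMaxAttempts(ContainerStatus amContainerStatus) {
  return amContainerStatus.getExitStatus() != ContainerExitStatus.PREEMPTED;
}
{code}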



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (YARN-2210) resource manager fails to start if core-site.xml contains an xi:include

2014-06-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-2210.
--

Resolution: Duplicate

Resolving as a dup of YARN-1741, as that has more discussion around how this 
was broken and potential fixes.

> resource manager fails to start if core-site.xml contains an xi:include
> ---
>
> Key: YARN-2210
> URL: https://issues.apache.org/jira/browse/YARN-2210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Priority: Critical
>
> The resource manager fails to start if core-site.xml contains an xi:include. 
> This is easily reproduced with a pseudo-distributed mode. Just add something 
> like this in the core-site.xml:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <xi:include href="mounttable.xml" />
>   ...
> {noformat}
> and place mounttable.xml in the same directory (doesn't matter what the file 
> is really).
> Then try starting the resource manager, and it will fail while handling this 
> include. The exception encountered:
> {noformat}
> [Warning] :20:38: Include operation failed, reverting to fallback. Resource 
> error reading file as XML (href='mounttable.xml'). Reason: 
> /Users/sjlee/hadoop-2.4.0/mounttable.xml (No such file or directory)
> [Fatal Error] :20:38: An include failed, and no fallback element was found.
> 14/06/24 23:30:16 FATAL conf.Configuration: error parsing conf 
> java.io.BufferedInputStream@7426dbec
> org.xml.sax.SAXParseException: An include failed, and no fallback element was 
> found.
>   at 
> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:246)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
>   at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
>   at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2173)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2246)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2195)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
>   at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
>   at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:870)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1889)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1919)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingServiceWithLoadedConfiguration(Groups.java:255)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:197)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1038)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043580#comment-14043580
 ] 

Hadoop QA commented on YARN-2181:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652421/YARN-2181.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4078//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4078//console

This message is automatically generated.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so administrators/users can 
> better understand preemption that happened on an app/queue, etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-1741:
-

Priority: Critical  (was: Minor)

Bumping the priority of this based on YARN-2210 and the fact that existing 
configuration setups that relied on relative xincludes used to work in prior 
releases.

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043593#comment-14043593
 ] 

Hadoop QA commented on YARN-2201:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12652419/apache-yarn-2201.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4080//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4080//console

This message is automatically generated.

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: test
> Attachments: apache-yarn-2201.0.patch, apache-yarn-2201.1.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModificati

[jira] [Commented] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043599#comment-14043599
 ] 

Hadoop QA commented on YARN-2052:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652420/YARN-2052.6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup
  
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4079//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4079//console

This message is automatically generated.

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch, YARN-2052.6.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high-churn activity, the RM does not store the sequence number per app. So 
> after a restart it does not know what the new sequence number should be for 
> new allocations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043620#comment-14043620
 ] 

Sangjin Lee commented on YARN-1741:
---

+1 with the idea for the ConfigurationProvider to return a tuple object of 
(input stream, system id).
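A hypothetical sketch of that tuple, to make the proposal concrete (the class 
and field names below are invented, not the eventual API):

{code}
import java.io.InputStream;

// Sketch: ConfigurationProvider would hand back the stream together
// with a base URI, so Configuration can pass a non-null systemId to
// DocumentBuilder.parse() and relative XIncludes resolve again.
class ConfigurationInputStream {
  final InputStream stream;
  final String systemId; // base for resolving relative URIs, e.g. the file path

  ConfigurationInputStream(InputStream stream, String systemId) {
    this.stream = stream;
    this.systemId = systemId;
  }
}
{code}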

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-2201:
-

Assignee: Varun Vasudev  (was: Ray Chiang)

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Varun Vasudev
>  Labels: test
> Attachments: apache-yarn-2201.0.patch, apache-yarn-2201.1.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.263 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> testSingleAppKillInvalidId[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 0.214 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidId(TestRMWebServicesAppsModification.java:482)
> I'm opening this JIRA as a discussion for the best way to fix this.  I've got 
> a few ideas, but I would like to get some feedback about potentially more 
> robust ways to fix this test.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043653#comment-14043653
 ] 

Ray Chiang commented on YARN-2201:
--

[~vvasudev], since you already uploaded a patch, I've reassigned the JIRA to 
you.  I'll test out the patch for the yarn.acl.enable property change as 
well; it wasn't clear from the earlier message whether the patch will fix that 
issue.

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Varun Vasudev
>  Labels: test
> Attachments: apache-yarn-2201.0.patch, apache-yarn-2201.1.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.263 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> testSingleAppKillInvalidId[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 0.214 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidId(TestRMWebServicesAppsModification.java:482)

[jira] [Commented] (YARN-2111) In FairScheduler.attemptScheduling, we don't count containers as assigned if they have 0 memory but non-zero cores

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043683#comment-14043683
 ] 

Hudson commented on YARN-2111:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2111. In FairScheduler.attemptScheduling, we don't count containers as 
assigned if they have 0 memory but non-zero cores (Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605113)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> In FairScheduler.attemptScheduling, we don't count containers as assigned if 
> they have 0 memory but non-zero cores
> --
>
> Key: YARN-2111
> URL: https://issues.apache.org/jira/browse/YARN-2111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.4.0
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.5.0
>
> Attachments: YARN-2111.patch
>
>
> {code}
> if (Resources.greaterThan(RESOURCE_CALCULATOR, clusterResource,
>   queueMgr.getRootQueue().assignContainer(node),
>   Resources.none())) {
> {code}
> As RESOURCE_CALCULATOR is a DefaultResourceCalculator, we won't take cores 
> into account here.
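
An illustration of the effect described above, with made-up Resource values; this shows the bug, not the committed fix.
{code}
// With DefaultResourceCalculator only memory is compared, so a container of
// <0 MB, 2 vcores> is not "greater than" Resources.none() and is never
// counted as assigned, even though vcores were actually handed out.
Resource clusterResource = Resource.newInstance(8192, 8);
Resource assigned = Resource.newInstance(0, 2);
boolean counted = Resources.greaterThan(new DefaultResourceCalculator(),
    clusterResource, assigned, Resources.none());   // false: memory is 0
{code}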



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1365) ApplicationMasterService to allow Register of an app that was running before restart

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043681#comment-14043681
 ] 

Hudson commented on YARN-1365:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-1365. Changed ApplicationMasterService to allow an app to re-register 
after RM restart. Contributed by Anubhav Dhoot (jianhe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605263)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ApplicationMasterNotRegisteredException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/InvalidApplicationMasterRequestException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/event/AppAttemptAddedSchedulerEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestFifoScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> ApplicationMasterService to allow Register of an app that was running before 
> restart
> 
>
> Key: YARN-1365
> URL: https://issues.apache.org/jira/browse/YARN-1365
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-1365.001.patch, YARN-1365.002.patch, 
> YARN-1365.003.patch, YARN-1365.004.patch, YARN-1365.005.patch, 
> YARN-1365.005.patch, YARN-1365.006.patch, YARN-1365.007.patch, 
> YARN-1365.008.patch, YARN-1365.008.patch, YARN-1365.009.patch, 
> YARN-1365.initial.patch
>
>
> For an application that was running before restart, the 
> ApplicationMasterService currently throws an exception when the app tries to 
> make the initial register or final unregister call. These should succeed and 
> the RMApp state machine should transition to completed like normal. 
> Unregistration should succeed for an app that the RM considers complete since 
> the RM may have died after saving completion in the store but before 
> notifying the AM that the AM is free to exit.
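
A hedged sketch of the AM-side consequence once this works; the AMRMClient calls are real, but the retry flow below is illustrative rather than code from the patch.
{code}
// Illustrative only: after an RM restart, an allocate() call may fail with
// ApplicationMasterNotRegisteredException; the AM re-registers and resumes.
AllocateResponse response;
try {
  response = amRMClient.allocate(0.5f);
} catch (ApplicationMasterNotRegisteredException e) {
  amRMClient.registerApplicationMaster("amhost", 0, "");  // re-register
  response = amRMClient.allocate(0.5f);
}
{code}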



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2072) RM/NM UIs and webservices are missing vcore information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043687#comment-14043687
 ] 

Hudson commented on YARN-2072:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2072. RM/NM UIs and webservices are missing vcore information. (Nathan 
Roberts via tgraves) (tgraves: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605162)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ResourceView.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NodePage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/MetricsOverviewTable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/UserMetricsInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestNodesPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm


> RM/NM UIs and webservices are missing vcore information
> ---
>
> Key: YARN-2072
> URL: https://issues.apache.org/jira/browse/YARN-2072
> Project: Hadoop YARN
> 

[jira] [Commented] (YARN-2192) TestRMHA fails when run with a mix of Schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043676#comment-14043676
 ] 

Hudson commented on YARN-2192:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2192. TestRMHA fails when run with a mix of Schedulers. (Anubhav Dhoot via 
kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605138)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java


> TestRMHA fails when run with a mix of Schedulers
> 
>
> Key: YARN-2192
> URL: https://issues.apache.org/jira/browse/YARN-2192
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Fix For: 2.5.0
>
> Attachments: YARN-2192.patch
>
>
> If the suite is run with the FairScheduler, some of the tests fail because 
> the metrics system objects are shared across tests and not destroyed 
> completely (a teardown sketch follows the stack trace below).
> {code}
> Error Message
> Metrics source QueueMetrics,q0=root already exists!
> Stacktrace
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> QueueMetrics,q0=root already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:126)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:107)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:96)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:427)
> {code}
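
One hedged way to avoid the collision, sketched as a JUnit teardown; the attached patch may solve it differently.
{code}
// Tear down shared metrics state after each test so a FairScheduler test can
// re-register the "QueueMetrics,q0=root" source left behind by a
// CapacityScheduler test.
@After
public void tearDown() {
  QueueMetrics.clearQueueMetrics();   // drop cached per-queue metrics
  DefaultMetricsSystem.shutdown();    // unregister all metrics sources
}
{code}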



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2152) Recover missing container information

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043678#comment-14043678
 ] 

Hudson commented on YARN-2152:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2152. Added missing information into ContainerTokenIdentifier so that 
NodeManagers can report the same to RM when RM restarts. Contributed by Jian 
He. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605205)
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NMContainerStatusPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestProtocolRecords.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/api/protocolrecords/TestRegisterNodeManagerRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resource

[jira] [Commented] (YARN-2109) Fix TestRM to work with both schedulers

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043680#comment-14043680
 ] 

Hudson commented on YARN-2109:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2109. Fix TestRM to work with both schedulers. (Anubhav Dhoot via kasha) 
(kasha: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605142)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java


> Fix TestRM to work with both schedulers
> ---
>
> Key: YARN-2109
> URL: https://issues.apache.org/jira/browse/YARN-2109
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Anubhav Dhoot
>Assignee: Karthik Kambatla
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: YARN-2109.001.patch, YARN-2109.001.patch
>
>
> testNMTokenSentForNormalContainer requires the CapacityScheduler and was 
> fixed in [YARN-1846|https://issues.apache.org/jira/browse/YARN-1846] to 
> explicitly set it to CapacityScheduler. But if the default scheduler is set 
> to FairScheduler, the tests that execute after this one fail with invalid 
> cast exceptions when getting queue metrics. The failures depend on test 
> execution order: this test initializes the shared queue metrics as 
> QueueMetrics, subsequent tests then reuse them, and only those later tests 
> fail. 
> We can explicitly clear the metrics at the end of this test to fix this (see 
> the sketch after the stack trace below).
> For example
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics cannot 
> be cast to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueueMetrics.forQueue(FSQueueMetrics.java:103)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.reinitialize(FairScheduler.java:1275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:418)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:808)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:230)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:90)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:85)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:81)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testNMToken(TestRM.java:232)
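
A minimal sketch of the clearing step suggested in the description, assuming it runs at the end of the CapacityScheduler-specific test; the committed patch may differ.
{code}
// Clear the metrics this test registered so later FairScheduler-based tests
// can create FSQueueMetrics for the root queue without a cast failure.
QueueMetrics.clearQueueMetrics();
DefaultMetricsSystem.shutdown();
{code}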



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2074) Preemption of AM containers shouldn't count towards AM failures

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043684#comment-14043684
 ] 

Hudson commented on YARN-2074:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2074. Changed ResourceManager to not count AM preemptions towards app 
failures. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605106)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationAttemptStateDataPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java


> Preemption of AM containers shouldn't count towards AM failures
> ---
>
> Key: YARN-2074
> URL: https://issues.apache.org/jira/browse/YARN-2074
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jian He
> Fix For: 2.5.0
>
> Attachments: YARN-2074.1.patch, YARN-2074.2.patch, YARN-2074.3.patch, 
> YARN-2074.4.patch, YARN-2074.5.patch, YARN-2074.6.patch, YARN-2074.6.patch, 
> YARN-2074.7.patch, YARN-2074.7.patch, YARN-2074.8.patch
>
>
> One orthogonal concern with issues like YARN-2055 and YARN-2022 is that AM 
> containers getting preempted shouldn't count towards AM failures and thus 
> shouldn't eventually fail applications.
> We should explicitly handle AM container preemption/kill as a separate issue 
> and not count it towards the limit on AM failures.
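
A hedged sketch of the rule being proposed; ContainerExitStatus.PREEMPTED is a real constant, but the surrounding logic here is illustrative rather than the committed change.
{code}
// Illustrative check: only count an AM container exit toward the
// max-attempt-retry limit when the container was not preempted.
boolean shouldCountTowardsMaxAttemptRetry(ContainerStatus amContainerStatus) {
  return amContainerStatus.getExitStatus() != ContainerExitStatus.PREEMPTED;
}
{code}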



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2195) Clean a piece of code in ResourceRequest

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043674#comment-14043674
 ] 

Hudson commented on YARN-2195:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/])
YARN-2195. Clean a piece of code in ResourceRequest. Contributed by Wei Yan. 
(devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605083)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java


> Clean a piece of code in ResourceRequest
> 
>
> Key: YARN-2195
> URL: https://issues.apache.org/jira/browse/YARN-2195
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: YARN-2195.patch
>
>
> {code}
> if (numContainersComparison == 0) {
>   return 0;
> } else {
>   return numContainersComparison;
> }
> {code}
> This code should be cleaned as 
> {code}
> return numContainersComparison;
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043758#comment-14043758
 ] 

Gera Shegalov commented on YARN-1741:
-

Since there is a general problem with loading configuration via an 
InputStream, to support these cases we need to let users pass in a custom 
EntityResolver.

We should implement a method along these lines:
{code}
Configuration#addResource(InputStream is, EntityResolver er)
{code}
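
A purely hypothetical sketch of how the proposed overload might be used; neither the overload nor the resolver below exists in Hadoop today, and the paths and variable names are illustrative.
{code}
// Hypothetical: resolve relative entity/include targets against a fixed
// configuration directory when the resource is a bare InputStream.
EntityResolver resolver = new EntityResolver() {
  @Override
  public InputSource resolveEntity(String publicId, String systemId) {
    return new InputSource("file:///etc/hadoop/conf/"
        + new java.io.File(systemId).getName());
  }
};
conf.addResource(confInputStream, resolver);  // proposed API, not yet real
{code}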

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.
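
For reference, a small sketch of why the systemId matters; the file path is illustrative.
{code}
// With a base systemId the parser can resolve <xi:include href="x.xml"/>
// relative to the original file; with a null systemId it cannot.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
dbf.setXIncludeAware(true);
DocumentBuilder builder = dbf.newDocumentBuilder();
InputStream in = new FileInputStream("/etc/hadoop/conf/yarn-site.xml");
Document doc = builder.parse(in, "file:///etc/hadoop/conf/yarn-site.xml");
{code}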



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2211) RMStateStore needs to save AMRMToken master key for recovery when RM restart/failover happens

2014-06-25 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-2211:
---

 Summary: RMStateStore needs to save AMRMToken master key for 
recovery when RM restart/failover happens 
 Key: YARN-2211
 URL: https://issues.apache.org/jira/browse/YARN-2211
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2212) ApplicationMaster needs to find a way to update the AMRMToken periodically

2014-06-25 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-2212:
---

 Summary: ApplicationMaster needs to find a way to update the 
AMRMToken periodically
 Key: YARN-2212
 URL: https://issues.apache.org/jira/browse/YARN-2212
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2213) Change proxy-user cookie log in AmIpFilter to DEBUG

2014-06-25 Thread Ted Yu (JIRA)
Ted Yu created YARN-2213:


 Summary: Change proxy-user cookie log in AmIpFilter to DEBUG
 Key: YARN-2213
 URL: https://issues.apache.org/jira/browse/YARN-2213
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor


I saw a lot of the following lines in the AppMaster log:
{code}
14/06/24 17:12:36 WARN web.SliderAmIpFilter: Could not find proxy-user cookie, 
so user will not be set
14/06/24 17:12:39 WARN web.SliderAmIpFilter: Could not find proxy-user cookie, 
so user will not be set
14/06/24 17:12:39 WARN web.SliderAmIpFilter: Could not find proxy-user cookie, 
so user will not be set
{code}
For a long-running app, this consumes considerable log space.
The log level should be changed to DEBUG.
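
The requested change is a one-liner; a sketch of what it might look like in AmIpFilter (variable names illustrative):
{code}
// Demote the per-request message from WARN to DEBUG and guard it so
// long-running AMs do not fill their logs.
if (LOG.isDebugEnabled()) {
  LOG.debug("Could not find proxy-user cookie, so user will not be set");
}
{code}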



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2201) TestRMWebServicesAppsModification dependent on yarn-default.xml

2014-06-25 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043855#comment-14043855
 ] 

Ray Chiang commented on YARN-2201:
--

[~vvasudev], thanks, I see how your change works.  Should the same thing be 
done for yarn.acl.enable, or is there a better way to modify the test so it 
works correctly for both possible values?
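
One possibility, sketched under the assumption that the test can pin the property itself; whether this is the better way is exactly the open question above.
{code}
// Hedged sketch: make the ACL-sensitive outcome explicit in the test's own
// configuration instead of inheriting yarn.acl.enable from yarn-default.xml.
YarnConfiguration conf = new YarnConfiguration();
conf.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true);
conf.set(YarnConfiguration.YARN_ADMIN_ACL, "testuser");  // illustrative user
{code}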

> TestRMWebServicesAppsModification dependent on yarn-default.xml
> ---
>
> Key: YARN-2201
> URL: https://issues.apache.org/jira/browse/YARN-2201
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Varun Vasudev
>  Labels: test
> Attachments: apache-yarn-2201.0.patch, apache-yarn-2201.1.patch
>
>
> TestRMWebServicesAppsModification.java has some errors that are 
> yarn-default.xml dependent.  By changing yarn-default.xml properties, I'm 
> seeing the following errors:
> 1) Changing yarn.resourcemanager.scheduler.class from 
> capacity.CapacityScheduler to fair.FairScheduler gives the error:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.047 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKillUnauthorized[1](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 3.22 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> 2) Changing yarn.acl.enable from false to true results in the following 
> errors:
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
> testSingleAppKill[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.986 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKill(TestRMWebServicesAppsModification.java:287)
> testSingleAppKillInvalidState[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidState(TestRMWebServicesAppsModification.java:369)
> testSingleAppKillUnauthorized[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 2.263 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillUnauthorized(TestRMWebServicesAppsModification.java:458)
> testSingleAppKillInvalidId[0](org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification)
>   Time elapsed: 0.214 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification.testSingleAppKillInvalidId(TestRMWebServicesAppsModification.java:482)
> I'm opening this JIRA as a discussion for the best way to fix this.  I've 
> got a few ideas, but I would like to get some feedback about potentially 
> more robust ways to fix this test.

[jira] [Updated] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-1741:
--

Target Version/s: 2.5.0

Targeting this for 2.5.

[~sirianni]/[~jeffgx619], can one of you express your intention to work on 
this by assigning it to yourself?

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043907#comment-14043907
 ] 

Karthik Kambatla commented on YARN-2204:


+1.

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2204:
---

Priority: Trivial  (was: Major)

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043912#comment-14043912
 ] 

Vinod Kumar Vavilapalli commented on YARN-2204:
---

Not sure why this test should depend on CS. AM restart is independent of the 
scheduler. Shouldn't we instead fix the test to not depend on CS?

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043920#comment-14043920
 ] 

Hudson commented on YARN-2204:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5778 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5778/])
YARN-2204. TestAMRestart#testAMRestartWithExistingContainers assumes 
CapacityScheduler. (Robert Kanter via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605548)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java


> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043922#comment-14043922
 ] 

Karthik Kambatla commented on YARN-2204:


Sorry Vinod. Didn't see your comments. Fixing the test to be scheduler-agnostic 
makes sense. [~rkanter] - mind taking a look? We can commit an addendum patch. 

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Eric Sirianni (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043938#comment-14043938
 ] 

Eric Sirianni commented on YARN-1741:
-

[~vinodkv], unfortunately I'm no longer working with Hadoop at my day job so I 
likely won't have the bandwidth to work on this in any reasonable timeframe.  
Great to hear that there is momentum to fix this though!

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1727) provide (async) application lifecycle events to management tools

2014-06-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044003#comment-14044003
 ] 

Steve Loughran commented on YARN-1727:
--

Yes, this can and should be done via application timeline service eventing.

> provide (async) application lifecycle events to management tools
> 
>
> Key: YARN-1727
> URL: https://issues.apache.org/jira/browse/YARN-1727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.2.0
>Reporter: Steve Loughran
>
> Management tools need to monitor long-lived applications. While the 
> {{AM<->Management System}} protocol is a matter for them, the management 
> tooling will need to know about async events happening in YARN:
> # application submitted
> # AM started
> # AM failed
> # AM restarted
> # AM finished
> This could be done by pushing events somewhere, or by supporting a pollable 
> history mechanism (a sketch of the event set follows).
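
Purely illustrative: the lifecycle events listed above as they might be modeled; no such type exists in YARN at the time of this discussion.
{code}
// Hypothetical event set for management tooling, mirroring the list above.
enum AppLifecycleEvent {
  APP_SUBMITTED, AM_STARTED, AM_FAILED, AM_RESTARTED, AM_FINISHED
}
{code}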



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-2204:
---


Cool, reopening it for the right fix.

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: YARN-2204.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2052) ContainerId creation after work preserving restart is broken

2014-06-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044059#comment-14044059
 ] 

Jian He commented on YARN-2052:
---

- We can update the structure graph in ZKRMStateStore to reflect the new epoch 
node too.
- The following can be replaced with RMEpoch.newInstance(); also, promote 
getProto to the parent class, as ApplicationAttemptStateData does.
{code}
RMEpochPBImpl pb = new RMEpochPBImpl();
pb.setEpoch(epoch);
{code}
- The following was only a temporary fix and can be removed given the change 
made in this patch; containers allocated by the restarted RM will no longer 
collide with previous containers.
{code}
// ContainerId is refreshed with epoch after RM restart.
this.containerIdCounter.incrementAndGet();
{code}
- What will ContainerId.toString() print after this patch? Is it more 
intuitive to parse out the epoch number and print the epoch+id? Maybe add 
comments describing the new format on the "getId" method.
- Can you add comments on the "public abstract int getId();" method explaining 
that the first 10 bits are reserved for the number of RM restarts? (An 
illustrative encoding sketch follows.)
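
An illustrative encoding of that last comment, assuming the top 10 bits of the 32-bit id carry the restart epoch; the patch itself defines the real layout.
{code}
// Reserve the first (most significant) 10 bits of the container id for the
// RM restart epoch; the remaining 22 bits hold the per-app sequence number.
static final int EPOCH_BITS = 10;

static int encodeContainerId(int epoch, int sequence) {
  return (epoch << (Integer.SIZE - EPOCH_BITS)) | sequence;
}

static int epochOf(int id) {
  return id >>> (Integer.SIZE - EPOCH_BITS);
}
{code}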

> ContainerId creation after work preserving restart is broken
> 
>
> Key: YARN-2052
> URL: https://issues.apache.org/jira/browse/YARN-2052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-2052.1.patch, YARN-2052.2.patch, YARN-2052.3.patch, 
> YARN-2052.4.patch, YARN-2052.5.patch, YARN-2052.6.patch
>
>
> Container ids are made unique by using the app identifier and appending a 
> monotonically increasing sequence number to it. Since container creation is a 
> high churn activity the RM does not store the sequence number per app. So 
> after restart it does not know what the new sequence number should be for new 
> allocations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2214) preemptContainerPreCheck() in FSParentQueue delays convergence towards fairness

2014-06-25 Thread Ashwin Shankar (JIRA)
Ashwin Shankar created YARN-2214:


 Summary: preemptContainerPreCheck() in FSParentQueue delays 
convergence towards fairness
 Key: YARN-2214
 URL: https://issues.apache.org/jira/browse/YARN-2214
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.5.0
Reporter: Ashwin Shankar


preemptContainerPreCheck() in FSParentQueue rejects preemption requests if the 
parent queue is below its fair share. This can delay convergence towards 
fairness when the starved leaf queue and the queue above its fair share sit 
under a non-root parent queue (i.e. their least common ancestor is a parent 
queue other than root).
Here is an example:
root.parent has fair share = 80% and usage = 80%
root.parent.child1 has fair share = 40% and usage = 80%
root.parent.child2 has fair share = 40% and usage = 0%

Now a job is submitted to child2 with a demand of 40%.
Preemption kicks in and tries to reclaim the full 40% from child1.
As soon as the first container is preempted from child1, the usage of 
root.parent drops below 80%, which is less than root.parent's fair share, so 
preemption stops. Only one container gets preempted in this round although the 
need is much larger; child2 would eventually reach half its fair share, but 
only after multiple rounds of preemption.

The solution is to remove preemptContainerPreCheck() from FSParentQueue and 
keep it only in FSLeafQueue (where it already exists).
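
A simplified sketch of the precheck in question (not the exact FSQueue code) to show where the round stalls:
{code}
// Simplified: allow preemption from a queue only while its usage is at or
// above its fair share. Applied in FSParentQueue, the first preempted
// container drops root.parent below 80% and vetoes the rest of the round,
// even though child1 is still far above its own fair share.
boolean preemptContainerPreCheck(FSQueue queue) {
  return Resources.fitsIn(queue.getFairShare(), queue.getResourceUsage());
}
{code}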



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2214) preemptContainerPreCheck() in FSParentQueue delays convergence towards fairness

2014-06-25 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044066#comment-14044066
 ] 

Sandy Ryza commented on YARN-2214:
--

Makes sense

> preemptContainerPreCheck() in FSParentQueue delays convergence towards 
> fairness
> ---
>
> Key: YARN-2214
> URL: https://issues.apache.org/jira/browse/YARN-2214
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.5.0
>Reporter: Ashwin Shankar
>
> preemptContainerPreCheck() in FSParentQueue rejects preemption requests if 
> the parent queue is below its fair share. This can delay convergence towards 
> fairness when the starved leaf queue and the queue above its fair share sit 
> under a non-root parent queue (i.e. their least common ancestor is a parent 
> queue other than root).
> Here is an example:
> root.parent has fair share = 80% and usage = 80%
> root.parent.child1 has fair share = 40% and usage = 80%
> root.parent.child2 has fair share = 40% and usage = 0%
> Now a job is submitted to child2 with a demand of 40%.
> Preemption kicks in and tries to reclaim the full 40% from child1.
> As soon as the first container is preempted from child1, the usage of 
> root.parent drops below 80%, which is less than root.parent's fair share, so 
> preemption stops. Only one container gets preempted in this round although 
> the need is much larger; child2 would eventually reach half its fair share, 
> but only after multiple rounds of preemption.
> The solution is to remove preemptContainerPreCheck() from FSParentQueue and 
> keep it only in FSLeafQueue (where it already exists).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (YARN-2214) preemptContainerPreCheck() in FSParentQueue delays convergence towards fairness

2014-06-25 Thread Ashwin Shankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Shankar reassigned YARN-2214:


Assignee: Ashwin Shankar

> preemptContainerPreCheck() in FSParentQueue delays convergence towards 
> fairness
> ---
>
> Key: YARN-2214
> URL: https://issues.apache.org/jira/browse/YARN-2214
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.5.0
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
>
> preemptContainerPreCheck() in FSParentQueue rejects preemption requests if 
> the parent queue is below fair share. This can delay convergence towards 
> fairness when the starved leaf queue and the queue above fair share belong 
> under a non-root parent queue (i.e. their least common ancestor is a parent 
> queue which is not root).
> Here is an example:
> root.parent has fair share = 80% and usage = 80%
> root.parent.child1 has fair share = 40% and usage = 80%
> root.parent.child2 has fair share = 40% and usage = 0%
> Now a job is submitted to child2 and the demand is 40%.
> Preemption will kick in and try to reclaim all 40% from child1.
> When it preempts the first container from child1, the usage of root.parent 
> drops below 80%, which is less than root.parent's fair share, causing 
> preemption to stop. So only one container gets preempted in this round, 
> although the need is a lot more. child2 would eventually get to half its 
> fair share, but only after multiple rounds of preemption.
> The solution is to remove preemptContainerPreCheck() from FSParentQueue and 
> keep it only in FSLeafQueue (where it already exists).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (YARN-1741) XInclude support broken for YARN ResourceManager

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned YARN-1741:
-

Assignee: Xuan Gong

Thanks for the note, Eric. Assigning this to Xuan for now, as he did the 
original conf-provider work.

> XInclude support broken for YARN ResourceManager
> 
>
> Key: YARN-1741
> URL: https://issues.apache.org/jira/browse/YARN-1741
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Eric Sirianni
>Assignee: Xuan Gong
>Priority: Critical
>  Labels: regression
>
> The XInclude support in Hadoop configuration files (introduced via 
> HADOOP-4944) was broken by the recent {{ConfigurationProvider}} changes to 
> YARN ResourceManager.  Specifically, YARN-1459 and, more generally, the 
> YARN-1611 family of JIRAs for ResourceManager HA.
> The issue is that {{ConfigurationProvider}} provides a raw {{InputStream}} as 
> a {{Configuration}} resource for what was previously a {{Path}}-based 
> resource.  
> For {{Path}} resources, the absolute file path is used as the {{systemId}} 
> for the {{DocumentBuilder.parse()}} call:
> {code}
>   } else if (resource instanceof Path) {  // a file resource
> ...
>   doc = parse(builder, new BufferedInputStream(
>   new FileInputStream(file)), ((Path)resource).toString());
> }
> {code}
> The {{systemId}} is used to resolve XIncludes (among other things):
> {code}
> /**
>  * Parse the content of the given InputStream as an
>  * XML document and return a new DOM Document object.
> ...
>  * @param systemId Provide a base for resolving relative URIs.
> ...
>  */
> public Document parse(InputStream is, String systemId)
> {code}
> However, for loading raw {{InputStream}} resources, the {{systemId}} is set 
> to {{null}}:
> {code}
>   } else if (resource instanceof InputStream) {
> doc = parse(builder, (InputStream) resource, null);
> {code}
> causing XInclude resolution to fail.
> In our particular environment, we make extensive use of XIncludes to 
> standardize common configuration parameters across multiple Hadoop clusters.
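
One plausible direction (a sketch under the assumption that the provider can remember the resource's original URI; not the committed fix) is to keep passing a base systemId even for stream resources, since javax.xml's DocumentBuilder.parse(InputStream, String) accepts one:

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XIncludeParseDemo {
  public static void main(String[] args) throws Exception {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    dbf.setNamespaceAware(true);
    dbf.setXIncludeAware(true);   // enable XInclude processing
    DocumentBuilder builder = dbf.newDocumentBuilder();
    String path = args[0];        // e.g. a yarn-site.xml that uses xi:include
    try (InputStream in = new FileInputStream(path)) {
      // Passing the file's URI as systemId gives the parser a base for
      // resolving relative xi:include hrefs; passing null (as the
      // InputStream branch in Configuration does) breaks that resolution.
      Document doc = builder.parse(in, new File(path).toURI().toString());
      System.out.println(doc.getDocumentElement().getNodeName());
    }
  }
}
{code}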



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2209) Replace allocate#resync command with ApplicationMasterNotRegisteredException to indicate AM to re-register on RM restart

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044096#comment-14044096
 ] 

Vinod Kumar Vavilapalli commented on YARN-2209:
---

Makes sense. We need to be consistent across allocate/unregister calls. For 
things like shut-down and resync, I prefer exceptions. We can deprecate the 
corresponding AMCommands.
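
A rough sketch of the exception-based contract (method shape and helpers are assumed; ApplicationMasterNotRegisteredException is the class YARN-1365 introduced):

{code}
// Sketch only: the RM side of allocate() signals resync by throwing instead
// of returning AMCommand.AM_RESYNC; the AM-side library catches the
// exception and re-registers before retrying.
public AllocateResponse allocate(AllocateRequest request)
    throws YarnException, IOException {
  if (!hasRegistered(request)) {   // hasRegistered() is an assumed helper
    throw new ApplicationMasterNotRegisteredException(
        "Application Master is not registered; re-register and retry");
  }
  return doAllocate(request);      // doAllocate() is an assumed helper
}
{code}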

> Replace allocate#resync command with ApplicationMasterNotRegisteredException 
> to indicate AM to re-register on RM restart
> 
>
> Key: YARN-2209
> URL: https://issues.apache.org/jira/browse/YARN-2209
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Jian He
>
> YARN-1365 introduced an ApplicationMasterNotRegisteredException to indicate 
> that the application should re-register on RM restart. We should do the same 
> for the AMS#allocate call as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2171) AMs block on the CapacityScheduler lock during allocate()

2014-06-25 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044100#comment-14044100
 ] 

Vinod Kumar Vavilapalli commented on YARN-2171:
---

+1, looks good. Checking this in..

> AMs block on the CapacityScheduler lock during allocate()
> -
>
> Key: YARN-2171
> URL: https://issues.apache.org/jira/browse/YARN-2171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 0.23.10, 2.4.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-2171.patch, YARN-2171v2.patch
>
>
> When AMs heartbeat into the RM via the allocate() call they are blocking on 
> the CapacityScheduler lock when trying to get the number of nodes in the 
> cluster via getNumClusterNodes.
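
The general technique for this kind of fix (a sketch, not necessarily the committed change) is to keep the node count in a lock-free counter so allocate() heartbeats never take the scheduler's monitor lock just to read it:

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: node count maintained outside the scheduler lock.
class NodeCounter {
  private final AtomicInteger numNodes = new AtomicInteger();

  // Called from the (locked) node add/remove paths.
  void nodeAdded()   { numNodes.incrementAndGet(); }
  void nodeRemoved() { numNodes.decrementAndGet(); }

  // Called from allocate() heartbeats; no "synchronized" needed, so AMs
  // no longer queue up behind scheduler-wide allocation work.
  int getNumClusterNodes() { return numNodes.get(); }
}
{code}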



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2171) AMs block on the CapacityScheduler lock during allocate()

2014-06-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044110#comment-14044110
 ] 

Hudson commented on YARN-2171:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5780 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5780/])
YARN-2171. Improved CapacityScheduling to not lock on nodemanager-count when 
AMs heartbeat in. Contributed by Jason Lowe. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605616)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java


> AMs block on the CapacityScheduler lock during allocate()
> -
>
> Key: YARN-2171
> URL: https://issues.apache.org/jira/browse/YARN-2171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 0.23.10, 2.4.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: YARN-2171.patch, YARN-2171v2.patch
>
>
> When AMs heartbeat into the RM via the allocate() call they are blocking on 
> the CapacityScheduler lock when trying to get the number of nodes in the 
> cluster via getNumClusterNodes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-614) Retry attempts automatically for hardware failures or YARN issues and set default app retries to 1

2014-06-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044118#comment-14044118
 ] 

Jian He commented on YARN-614:
--

Xuan, can you enumerate the failures that should not be counted towards AM 
failures and the corresponding AM container exit codes? It seems ABORTED and 
KILL_BY_RESOURCEMANAGER are used for other sources too. If necessary, we need 
to create separate exit codes for these particular cases. Can you also update 
the title/description to reflect what this patch is doing? Thanks.

> Retry attempts automatically for hardware failures or YARN issues and set 
> default app retries to 1
> --
>
> Key: YARN-614
> URL: https://issues.apache.org/jira/browse/YARN-614
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bikas Saha
>Assignee: Xuan Gong
> Fix For: 2.5.0
>
> Attachments: YARN-614-0.patch, YARN-614-1.patch, YARN-614-2.patch, 
> YARN-614-3.patch, YARN-614-4.patch, YARN-614-5.patch, YARN-614-6.patch, 
> YARN-614.7.patch
>
>
> Attempts can fail due to a large number of user errors and they should not be 
> retried unnecessarily. The only reason YARN should retry an attempt is when 
> the hardware fails or YARN has an error. NM failing, lost NM and NM disk 
> errors are the hardware errors that come to mind.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2208) AMRMTokenManager need to have a way to roll over AMRMToken

2014-06-25 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044153#comment-14044153
 ] 

Xuan Gong commented on YARN-2208:
-

This patch focuses only on the changes to AMRMTokenSecretManager to roll over 
the master key periodically.
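
For illustration, a minimal sketch of periodic rollover (timer style and all names are assumed; this is not the patch itself):

{code}
import java.util.Timer;
import java.util.TimerTask;

// Sketch: roll the master key on a fixed interval, keeping the previous
// key alive so tokens issued against it keep validating until they expire.
class RollingKeyManager {
  private final Timer timer = new Timer("AMRMKeyRoller", true /* daemon */);
  private volatile long currentKeyId = 0;
  private volatile long previousKeyId = -1;

  void start(long rollIntervalMs) {
    timer.scheduleAtFixedRate(new TimerTask() {
      @Override public void run() {
        previousKeyId = currentKeyId;   // retained for in-flight tokens
        currentKeyId++;                 // activate a fresh key
      }
    }, rollIntervalMs, rollIntervalMs);
  }
}
{code}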

> AMRMTokenManager need to have a way to roll over AMRMToken
> --
>
> Key: YARN-2208
> URL: https://issues.apache.org/jira/browse/YARN-2208
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-2208.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2208) AMRMTokenManager need to have a way to roll over AMRMToken

2014-06-25 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-2208:


Attachment: YARN-2208.1.patch

> AMRMTokenManager need to have a way to roll over AMRMToken
> --
>
> Key: YARN-2208
> URL: https://issues.apache.org/jira/browse/YARN-2208
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-2208.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-614) Separate AM failures from hardware failure or YARN error and do not count them to AM retry count

2014-06-25 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-614:
---

Summary: Separate AM failures from hardware failure or YARN error and do 
not count them to AM retry count  (was: Separate AM failures from hardware 
failure or YARN error and do not count them to )

> Separate AM failures from hardware failure or YARN error and do not count 
> them to AM retry count
> 
>
> Key: YARN-614
> URL: https://issues.apache.org/jira/browse/YARN-614
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bikas Saha
>Assignee: Xuan Gong
> Fix For: 2.5.0
>
> Attachments: YARN-614-0.patch, YARN-614-1.patch, YARN-614-2.patch, 
> YARN-614-3.patch, YARN-614-4.patch, YARN-614-5.patch, YARN-614-6.patch, 
> YARN-614.7.patch
>
>
> Attempts can fail due to a large number of user errors and they should not be 
> retried unnecessarily. The only reason YARN should retry an attempt is when 
> the hardware fails or YARN has an error. NM failing, lost NM and NM disk 
> errors are the hardware errors that come to mind.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-614) Separate AM failures from hardware failure or YARN error and do not count them to

2014-06-25 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-614:
---

Summary: Separate AM failures from hardware failure or YARN error and do 
not count them to   (was: Retry attempts automatically for hardware failures or 
YARN issues and set default app retries to 1)

> Separate AM failures from hardware failure or YARN error and do not count 
> them to 
> --
>
> Key: YARN-614
> URL: https://issues.apache.org/jira/browse/YARN-614
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bikas Saha
>Assignee: Xuan Gong
> Fix For: 2.5.0
>
> Attachments: YARN-614-0.patch, YARN-614-1.patch, YARN-614-2.patch, 
> YARN-614-3.patch, YARN-614-4.patch, YARN-614-5.patch, YARN-614-6.patch, 
> YARN-614.7.patch
>
>
> Attempts can fail due to a large number of user errors and they should not be 
> retried unnecessarily. The only reason YARN should retry an attempt is when 
> the hardware fails or YARN has an error. NM failing, lost NM and NM disk 
> errors are the hardware errors that come to mind.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-614) Separate AM failures from hardware failure or YARN error and do not count them to AM retry count

2014-06-25 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044184#comment-14044184
 ] 

Xuan Gong commented on YARN-614:


Ignore three types of failures with the following ContainerExitStatus values:
* DISK_FAILURE
* ABORTED
* KILL_BY_RESOURCEMANAGER

For ABORTED/KILL_BY_RESOURCEMANAGER, the exit status is set:
* when NMs are re-connecting to the RM, on DeactivateNode, or on an unhealthy 
node: all containers on those nodes will be stopped with the ABORTED exit 
status
* or on CONTAINER_EXPIRED
* or on dropContainerReservation in RMContainerPreemptEvent
* or for all containers which are still alive or reserved when the 
ApplicationAttempt is done
* or for all containers which are in the release list when the AM makes the 
allocate call
* or for all containers which are over-reserved when the scheduler processes 
the nodeUpdate
* on NMResync
* for some unknown containers
* for an unknown application

Most of these scenarios will not happen to an ApplicationMaster container. But 
for those cases which might happen to the ApplicationMaster container, I think 
we can skip those failures and not count them towards the AM retry count (see 
the sketch below). Please correct me if I missed something.

Also created a new patch which is not much different from the previous one, but 
I did not move the test cases; the new test cases are appended for easier 
review. Those test cases have some duplicated code, which I will remove after 
we finish the code review.
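
A sketch of the proposed counting rule (constant names follow standard ContainerExitStatus fields, which may be spelled slightly differently from the shorthand above; the surrounding fields are assumed):

{code}
// Sketch only: system-caused AM container exits do not consume a retry.
int exitStatus = amContainerStatus.getExitStatus();
boolean systemCaused =
    exitStatus == ContainerExitStatus.DISKS_FAILED
    || exitStatus == ContainerExitStatus.ABORTED
    || exitStatus == ContainerExitStatus.KILLED_BY_RESOURCEMANAGER;
if (!systemCaused) {
  amFailureCount++;   // only genuine AM failures count towards max retries
}
{code}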

> Separate AM failures from hardware failure or YARN error and do not count 
> them to AM retry count
> 
>
> Key: YARN-614
> URL: https://issues.apache.org/jira/browse/YARN-614
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bikas Saha
>Assignee: Xuan Gong
> Fix For: 2.5.0
>
> Attachments: YARN-614-0.patch, YARN-614-1.patch, YARN-614-2.patch, 
> YARN-614-3.patch, YARN-614-4.patch, YARN-614-5.patch, YARN-614-6.patch, 
> YARN-614.7.patch
>
>
> Attempts can fail due to a large number of user errors and they should not be 
> retried unnecessarily. The only reason YARN should retry an attempt is when 
> the hardware fails or YARN has an error. NM failing, lost NM and NM disk 
> errors are the hardware errors that come to mind.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2208) AMRMTokenManager need to have a way to roll over AMRMToken

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044195#comment-14044195
 ] 

Hadoop QA commented on YARN-2208:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652512/YARN-2208.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4081//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4081//console

This message is automatically generated.

> AMRMTokenManager need to have a way to roll over AMRMToken
> --
>
> Key: YARN-2208
> URL: https://issues.apache.org/jira/browse/YARN-2208
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-2208.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2208) AMRMTokenManager need to have a way to roll over AMRMToken

2014-06-25 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044199#comment-14044199
 ] 

Xuan Gong commented on YARN-2208:
-

The test case failure in TestRMRestart is expected. We will fix it in YARN-2211.

> AMRMTokenManager need to have a way to roll over AMRMToken
> --
>
> Key: YARN-2208
> URL: https://issues.apache.org/jira/browse/YARN-2208
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-2208.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-2204:


Attachment: YARN-2204_addendum.patch

Makes sense. I've attached an addendum patch that's scheduler-agnostic.

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: YARN-2204.patch, YARN-2204_addendum.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044215#comment-14044215
 ] 

Jian He commented on YARN-2181:
---

- How about the parent queue metrics: are we capturing preemption metrics only 
at the leaf queue? If not, we can add the metrics as part of the QueueMetrics 
class, which is more flexible.
- The CSQueue and FiCaSchedulerApp changes can be reverted.
- If this can be true, the earlier queue.getQueueName() call would already have 
failed, so we may not need this null check:
{code}
if (null != queue) {
{code}
- We don't need the isAMContainerPreempted() method; isPreempted() does the 
same.
- These new methods may not be needed; AppInfo can just use the current attempt 
to access them:
{code}
getResourcePreemptedFromLatestAttempt
getNumberOfTaskContainersPreemptedFromLatestAttempt
isMasterContainersPreemptedFromLatestAttempt
{code}
- "Did AM Containers Preempted..": this is transient state and may not be 
needed.
- How about the following:
 -- app page:
Total Resource Preempted
Total Number of AM Containers Preempted
Total Number of Non-AM Containers Preempted
Resource Preempted from Current Attempt
Number of Non-AM Containers Preempted from Current Attempt
 -- queue page:
Num AM Containers Preempted
Num Non-AM Containers Preempted


> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2022) Preempting an Application Master container can be kept as least priority when multiple applications are marked for preemption by ProportionalCapacityPreemptionPolicy

2014-06-25 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044217#comment-14044217
 ] 

Mayank Bansal commented on YARN-2022:
-

Hi [~sunilg],

If we don't use getAbsoluteCapacity, there is a possibility that we end up 
running only AMs in the queue.
Let's say the queue has 10% capacity, its MAX capacity is 100%, and the AM 
percentage is 10%; that means with your approach 10 AMs can run for this 
queue. And if the cluster is fully utilized, then only AMs will be running in 
this queue.

Make sense?

Thanks,
Mayank


> Preempting an Application Master container can be kept as least priority when 
> multiple applications are marked for preemption by 
> ProportionalCapacityPreemptionPolicy
> -
>
> Key: YARN-2022
> URL: https://issues.apache.org/jira/browse/YARN-2022
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-2022-DesignDraft.docx, YARN-2022.2.patch, 
> YARN-2022.3.patch, YARN-2022.4.patch, YARN-2022.5.patch, YARN-2022.6.patch, 
> YARN-2022.7.patch, Yarn-2022.1.patch
>
>
> Cluster Size = 16GB [2NM's]
> Queue A Capacity = 50%
> Queue B Capacity = 50%
> Consider there are 3 applications running in Queue A which has taken the full 
> cluster capacity. 
> J1 = 2GB AM + 1GB * 4 Maps
> J2 = 2GB AM + 1GB * 4 Maps
> J3 = 2GB AM + 1GB * 2 Maps
> Another Job J4 is submitted in Queue B [J4 needs a 2GB AM + 1GB * 2 Maps ].
> Currently in this scenario, Job J3 will get killed, including its AM.
> It is better if the AM can be given the least priority among multiple 
> applications. In this same scenario, map tasks from J3 and J2 can be 
> preempted.
> Later, when the cluster is free, maps can be allocated to these jobs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044218#comment-14044218
 ] 

Mayank Bansal commented on YARN-2181:
-

If we are adding this information to the Web UI, then we should change the CLI 
and REST APIs as well to add that info.
It's inconsistent if we don't change the CLI/REST and only add this info to the 
Web UI.

Thanks,
Mayank

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1710) Admission Control: agents to allocate reservation

2014-06-25 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-1710:
---

Attachment: YARN-1710.patch

> Admission Control: agents to allocate reservation
> -
>
> Key: YARN-1710
> URL: https://issues.apache.org/jira/browse/YARN-1710
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Subramaniam Venkatraman Krishnan
> Attachments: YARN-1710.patch
>
>
> This JIRA tracks the algorithms used to allocate a user ReservationRequest 
> coming in from the new reservation API (YARN-1708), in the inventory 
> subsystem (YARN-1709) maintaining the current plan for the cluster. The focus 
> of this "agents" is to quickly find a solution for the set of contraints 
> provided by the user, and the physical constraints of the plan.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (YARN-1710) Admission Control: agents to allocate reservation

2014-06-25 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-1710:
--

Assignee: Carlo Curino  (was: Subramaniam Venkatraman Krishnan)

> Admission Control: agents to allocate reservation
> -
>
> Key: YARN-1710
> URL: https://issues.apache.org/jira/browse/YARN-1710
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-1710.patch
>
>
> This JIRA tracks the algorithms used to allocate a user ReservationRequest 
> coming in from the new reservation API (YARN-1708), in the inventory 
> subsystem (YARN-1709) maintaining the current plan for the cluster. The focus 
> of this "agents" is to quickly find a solution for the set of contraints 
> provided by the user, and the physical constraints of the plan.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2204) TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044246#comment-14044246
 ] 

Hadoop QA commented on YARN-2204:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12652529/YARN-2204_addendum.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4082//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4082//console

This message is automatically generated.

> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler
> ---
>
> Key: YARN-2204
> URL: https://issues.apache.org/jira/browse/YARN-2204
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: YARN-2204.patch, YARN-2204_addendum.patch
>
>
> TestAMRestart#testAMRestartWithExistingContainers assumes CapacityScheduler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1710) Admission Control: agents to allocate reservation

2014-06-25 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044243#comment-14044243
 ] 

Carlo Curino commented on YARN-1710:


I uploaded a patch that contains a GreedyReservationAgent.

The philosophy behind this agent is to greedily place from the deadline back 
towards arrival. This intuitively increases the chances of a job that shows up 
later, but with an earlier deadline, finding space in the plan. It works well 
when paired with opportunistic anticipation of work (i.e., thanks to 
YARN-1957, even if the reservation has a zero allocation at the moment, the 
scheduler can give it resources if they are not claimed by anyone). A sketch 
of the idea is below.

Smarter placement policies are worth thinking about, and would increase the % 
of jobs accepted, or minimize other parameters such as preemption. We are 
exploring this space.
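
A sketch of the backward-greedy placement (the types, step granularity, and free-capacity accessor are assumed, not the patch's actual API):

{code}
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Walk from the deadline back toward arrival, grabbing the latest free
// capacity first so earlier slots stay open for later-arriving jobs that
// have earlier deadlines.
NavigableMap<Long, Long> placeBackwards(long arrival, long deadline,
    long step, long totalDemand, Map<Long, Long> freeCapacity) {
  NavigableMap<Long, Long> allocation = new TreeMap<>();
  long remaining = totalDemand;
  for (long t = deadline - step; t >= arrival && remaining > 0; t -= step) {
    long grab = Math.min(freeCapacity.getOrDefault(t, 0L), remaining);
    if (grab > 0) {
      allocation.put(t, grab);
      remaining -= grab;
    }
  }
  return remaining == 0 ? allocation : null;   // null = reject reservation
}
{code}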

> Admission Control: agents to allocate reservation
> -
>
> Key: YARN-1710
> URL: https://issues.apache.org/jira/browse/YARN-1710
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-1710.patch
>
>
> This JIRA tracks the algorithms used to allocate a user ReservationRequest 
> coming in from the new reservation API (YARN-1708), in the inventory 
> subsystem (YARN-1709) maintaining the current plan for the cluster. The focus 
> of this "agents" is to quickly find a solution for the set of contraints 
> provided by the user, and the physical constraints of the plan.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2022) Preempting an Application Master container can be kept as least priority when multiple applications are marked for preemption by ProportionalCapacityPreemptionPolicy

2014-06-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044270#comment-14044270
 ] 

Wangda Tan commented on YARN-2022:
--

Hi [~mayank_bansal],
For the existing CapacityScheduler implementation, each user can use at most 
getMaxAMResourcePerQueuePercent() * getAbsoluteCapacity(). In your case, each 
user can use 10% * 10% = 1% of the cluster resource. So in the extreme case, 
if the queue has 10 users, each with 1 application, after preemption all 
resources could be used by AMs (see the worked numbers below).
I'll create a separate JIRA to discuss this. What [~sunilg] has done should be 
consistent with existing CapacityScheduler behavior. Does this make sense to 
you?

Thanks,
Wangda
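
For concreteness, the limit works out like this (sketch; values as fractions of total cluster resource):

{code}
// Per-user AM headroom when computed against absolute capacity:
double absoluteCapacity = 0.10;        // queue guaranteed share of cluster
double maxAMResourcePercent = 0.10;    // getMaxAMResourcePerQueuePercent()
double perUserAMLimit = maxAMResourcePercent * absoluteCapacity;  // = 0.01

// Extreme case: 10 users with 1 app each -> AMs alone can hold
// 10 * 0.01 = 0.10 of the cluster, i.e. the queue's entire guaranteed
// capacity.
{code}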

> Preempting an Application Master container can be kept as least priority when 
> multiple applications are marked for preemption by 
> ProportionalCapacityPreemptionPolicy
> -
>
> Key: YARN-2022
> URL: https://issues.apache.org/jira/browse/YARN-2022
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-2022-DesignDraft.docx, YARN-2022.2.patch, 
> YARN-2022.3.patch, YARN-2022.4.patch, YARN-2022.5.patch, YARN-2022.6.patch, 
> YARN-2022.7.patch, Yarn-2022.1.patch
>
>
> Cluster Size = 16GB [2NM's]
> Queue A Capacity = 50%
> Queue B Capacity = 50%
> Consider there are 3 applications running in Queue A which has taken the full 
> cluster capacity. 
> J1 = 2GB AM + 1GB * 4 Maps
> J2 = 2GB AM + 1GB * 4 Maps
> J3 = 2GB AM + 1GB * 2 Maps
> Another Job J4 is submitted in Queue B [J4 needs a 2GB AM + 1GB * 2 Maps ].
> Currently in this scenario, Job J3 will get killed, including its AM.
> It is better if the AM can be given the least priority among multiple 
> applications. In this same scenario, map tasks from J3 and J2 can be 
> preempted.
> Later, when the cluster is free, maps can be allocated to these jobs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2181:
-

Attachment: YARN-2181.patch

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044278#comment-14044278
 ] 

Hadoop QA commented on YARN-2181:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652539/YARN-2181.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4083//console

This message is automatically generated.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044280#comment-14044280
 ] 

Wangda Tan commented on YARN-2181:
--

[~jianhe], thanks for all your comments.
bq. How about the parent queue metrics: are we capturing preemption metrics 
only at the leaf queue? If not, we can add the metrics as part of the 
QueueMetrics class, which is more flexible.
I think this JIRA is focused on showing the necessary preemption info on the 
RM web page, and what QueueBlock does today is leverage the APIs exposed by 
LeafQueue. Let's keep it as-is until we need to use QueueMetrics.

I've updated the patch to address all your other comments.

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2181) Add preemption info to RM Web UI

2014-06-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044281#comment-14044281
 ] 

Wangda Tan commented on YARN-2181:
--

Hi [~mayank_bansal], 
{quote}
If we are adding this information to the Web UI, then we should change the CLI 
and REST APIs as well to add that info.
It's inconsistent if we don't change the CLI/REST and only add this info to the 
Web UI.
{quote}
I've already added the preemption fields to AppInfo and 
CapacitySchedulerLeafQueueInfo, so RMWebService should be able to access them. 
For the CLI, I think it's better to track it separately. Let's make this JIRA 
focus only on Web UI related changes.

Does it make sense to you?

> Add preemption info to RM Web UI
> 
>
> Key: YARN-2181
> URL: https://issues.apache.org/jira/browse/YARN-2181
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2181.patch, YARN-2181.patch, YARN-2181.patch, 
> YARN-2181.patch, YARN-2181.patch, application page.png, queue page.png
>
>
> We need to add preemption info to the RM web page so that administrators/users 
> get a better understanding of the preemption that happened on an app/queue, 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1710) Admission Control: agents to allocate reservation

2014-06-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044353#comment-14044353
 ] 

Hadoop QA commented on YARN-1710:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12652531/YARN-1710.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4084//console

This message is automatically generated.

> Admission Control: agents to allocate reservation
> -
>
> Key: YARN-1710
> URL: https://issues.apache.org/jira/browse/YARN-1710
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-1710.patch
>
>
> This JIRA tracks the algorithms used to allocate a user ReservationRequest 
> coming in from the new reservation API (YARN-1708), in the inventory 
> subsystem (YARN-1709) maintaining the current plan for the cluster. The focus 
> of this "agents" is to quickly find a solution for the set of contraints 
> provided by the user, and the physical constraints of the plan.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2080) Admission Control: Integrate Reservation subsystem with ResourceManager

2014-06-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044394#comment-14044394
 ] 

Karthik Kambatla commented on YARN-2080:


Thanks for the patch, Subru. I just skimmed through it; the patch 
understandably didn't apply on trunk. A couple of high-level comments:
# Can we make the ReservationSystem a service and make it part of 
RMActiveServices?
# Looks like we'll need a different ReservationSystem implementation per policy 
(or scheduler). Can we put as much functionality as possible in 
AbstractReservationSystem and have CapacityReservationSystem extend it? This 
would be similar to how the scheduler hierarchy works today. A sketch of that 
shape follows below.
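
A sketch of the suggested shape (only the class names mentioned in the comment are taken from it; everything else is hypothetical):

{code}
import org.apache.hadoop.service.AbstractService;

// Common reservation bookkeeping lives in the abstract base, which the RM
// could register inside RMActiveServices like any other service; each
// scheduler then supplies only its own wiring.
abstract class AbstractReservationSystem extends AbstractService {
  protected AbstractReservationSystem(String name) { super(name); }

  // Scheduler-specific hook, e.g. how plans map onto queues (assumed shape).
  protected abstract void bindToScheduler();

  @Override
  protected void serviceStart() throws Exception {
    bindToScheduler();
    super.serviceStart();
  }
}

class CapacityReservationSystem extends AbstractReservationSystem {
  CapacityReservationSystem() { super("CapacityReservationSystem"); }
  @Override protected void bindToScheduler() {
    // wire reservation plans to CapacityScheduler queues (sketch only)
  }
}
{code}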

> Admission Control: Integrate Reservation subsystem with ResourceManager
> ---
>
> Key: YARN-2080
> URL: https://issues.apache.org/jira/browse/YARN-2080
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subramaniam Venkatraman Krishnan
>Assignee: Subramaniam Venkatraman Krishnan
> Attachments: YARN-2080.patch
>
>
> This JIRA tracks the integration of Reservation subsystem data structures 
> introduced in YARN-1709 with the YARN RM. This is essentially end2end wiring 
> of YARN-1051.



--
This message was sent by Atlassian JIRA
(v6.2#6252)