[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772482#comment-13772482
 ] 

Hudson commented on YARN-1203:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4446 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4446/])
YARN-1203. Changed YARN web-app proxy to handle http and https URLs from AM
registration and finish correctly. Contributed by Omkar Vinit Joshi.
MAPREDUCE-5515. Fixed MR AM's webapp to depend on a new config,
mapreduce.ssl.enabled, to enable https, disabled by default since the MR AM
needs to set up its own certificates etc. and cannot depend on the cluster's.
Contributed by Omkar Vinit Joshi. (vinodkv:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1524864)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/JobBlock.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/NavBlock.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/WebAppUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/dao/AMAttemptInfo.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java


> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch, YARN-1203.20131019.1.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over plain http by setting the 'hadoop.ssl.enabled'
> property at the job level.

[jira] [Commented] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772443#comment-13772443
 ] 

Hadoop QA commented on YARN-415:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604142/YARN-415--n3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1977//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1977//console

This message is automatically generated.

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415--n2.patch, YARN-415--n3.patch, YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.
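
In code the proposed charge is a single accumulation; a minimal sketch, where
ContainerUsage and its getters are hypothetical names rather than anything from
the attached patches:

{code}
// Hypothetical sketch of the MB-seconds arithmetic described above.
long mbSeconds = 0;
for (ContainerUsage c : appContainers) {
  // charge the full reserved memory for the container's whole lifetime,
  // whether or not the app actually used it
  mbSeconds += c.getReservedMemoryMB() * c.getLifetimeSeconds();
}
{code}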



[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772441#comment-13772441
 ] 

Vinod Kumar Vavilapalli commented on YARN-1203:
---

+1, looks good. Checking this in.

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch, YARN-1203.20131019.1.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over plain http by setting the 'hadoop.ssl.enabled'
> property at the job level.



[jira] [Commented] (YARN-954) [YARN-321] History Service should create the webUI and wire it to HistoryStorage

2013-09-19 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772427#comment-13772427
 ] 

Mayank Bansal commented on YARN-954:


Thanks [~devaraj.k] for the patch.

I think we have all the report classes available, and all the user interfaces
like the CLI and web apps should be using those instead of the history data classes.

Please look into YARN-978 and YARN-1123.

Thanks,
Mayank

> [YARN-321] History Service should create the webUI and wire it to 
> HistoryStorage
> 
>
> Key: YARN-954
> URL: https://issues.apache.org/jira/browse/YARN-954
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Devaraj K
> Attachments: YARN-954-v0.patch, YARN-954-v1.patch, YARN-954-v2.patch
>
>




[jira] [Assigned] (YARN-1222) Make improvements in ZKRMStateStore for fencing

2013-09-19 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-1222:
--

Assignee: Karthik Kambatla

> Make improvements in ZKRMStateStore for fencing
> ---
>
> Key: YARN-1222
> URL: https://issues.apache.org/jira/browse/YARN-1222
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
>
> Using multi-operations for every ZK interaction. 
> In every operation, automatically creating/deleting a lock znode that is the 
> child of the root znode. This is to achieve fencing by modifying the 
> create/delete permissions on the root znode.



[jira] [Commented] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772407#comment-13772407
 ] 

Hadoop QA commented on YARN-415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604132/YARN-415--n2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebApp

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1975//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1975//console

This message is automatically generated.

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415--n2.patch, YARN-415--n3.patch, YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.



[jira] [Updated] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated YARN-415:
-

Attachment: YARN-415--n2.patch

Updating the patch with fixes for the findbugs warnings on multithreaded correctness.

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415--n2.patch, YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.



[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772401#comment-13772401
 ] 

Hadoop QA commented on YARN-1214:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604131/YARN-1214.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1149 javac 
compiler warnings (more than the trunk's current 1145 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1976//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1976//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1976//console

This message is automatically generated.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.1.patch, YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is
> saved. This can cause a problem: if the client gets the token and the RM
> crashes before the master key is saved, the RM cannot reload the master key
> after it restarts, since it was never saved. As a result, the client is
> holding an invalid token.
> We can register the client token master key after it is saved in the store.
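
The fix is essentially a reordering of two calls; a minimal sketch of the
intended sequence (method names are illustrative, not the exact RM code):

{code}
// Sketch only: persist first, register second, so that any token a client
// can possibly hold corresponds to a key the restarted RM can reload.
byte[] masterKey = clientToAMTokenSecretManager.createMasterKey(appAttemptId);
stateStore.storeApplicationAttempt(attempt);  // 1. save the key with the attempt
clientToAMTokenSecretManager.registerMasterKey(appAttemptId, masterKey); // 2. then register
{code}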



[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772400#comment-13772400
 ] 

Karthik Kambatla commented on YARN-1068:


bq. Another downside is having yet another port to configure on the RM. 1 for
HAProtocol and 1 for HAAdmin. Do we envisage an RM where one of them is
present without the other?
Only the RMHAAdminService listens on a port, not the RMHAProtocolService. Am I
missing something here?

Let me elaborate. The MockRM overrides several RM#create*() methods to
create services which don't start the listening servers. Similarly, I wanted to
override createRMHAProtocolService in MockRM to create an instance of
RMHAProtocolService which doesn't start the HA admin listener. Now, consider
the two scenarios:
- HAAdmin functionality embedded in RMHAProtocolService
RMHAProtocolService#serviceStart() would look like the following:
{code}
protected void serviceStart() {
  if (haEnabled) {
    transitionToStandby(true);

    // create admin server logic
    // start admin server
  } else {
    transitionToActive();
  }
  super.serviceStart();
}
{code}

and the MockRM override would look like:
{code}
@Override
protected RMHAProtocolService createRMHAProtocolService() {
  return new RMHAProtocolService() {
    @Override
    protected void serviceStart() {
      if (haEnabled) {
        transitionToStandby(true);
      } else {
        transitionToActive();
      }
      super.serviceStart();
    }
  };
}
{code}
Any changes to RMHAProtocolService#serviceStart should be replicated to MockRM.
- RMHAAdminService implements the admin functionality and is part of 
RMHAProtocolService (latest patch)
RMHAProtocolService#createRMHAAdminService() creates the admin service to use. 
If not null, the service is added to RMHAProtocolService and is 
inited/started/stopped along with it. Now, all MockRM needs to do is return 
null in MockRM#createRMHAAdminService.
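
Under that approach the test override collapses to almost nothing; a minimal
sketch, assuming createRMHAAdminService() is a protected factory method on the
RM (signature illustrative):

{code}
// Sketch only: with the admin service factored out, MockRM skips the
// listener entirely by returning null from the factory method.
@Override
protected RMHAAdminService createRMHAAdminService() {
  return null; // nothing is added, inited, or started for the admin server
}
{code}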

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.



[jira] [Updated] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated YARN-415:
-

Attachment: YARN-415--n3.patch

Updating the patch with fixes in tests.

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415--n2.patch, YARN-415--n3.patch, YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.



[jira] [Updated] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-1214:
--

Attachment: YARN-1214.1.patch

New patch fixed the comments.

bq. TestClientToAMTokens. Assert that clientToken is null before and not null 
after.
Turns out this is to test that the AM attempt can get the master key after it's
registered.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.1.patch, YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is
> saved. This can cause a problem: if the client gets the token and the RM
> crashes before the master key is saved, the RM cannot reload the master key
> after it restarts, since it was never saved. As a result, the client is
> holding an invalid token.
> We can register the client token master key after it is saved in the store.



[jira] [Commented] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772377#comment-13772377
 ] 

Hadoop QA commented on YARN-415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604125/YARN-415.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebApp

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1973//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1973//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1973//console

This message is automatically generated.

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.



[jira] [Commented] (YARN-1219) FSDownload changes file suffix making FileUtil.unTar() throw exception

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772364#comment-13772364
 ] 

Hadoop QA commented on YARN-1219:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604123/YARN-1219.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1974//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1974//console

This message is automatically generated.

> FSDownload changes file suffix making FileUtil.unTar() throw exception
> --
>
> Key: YARN-1219
> URL: https://issues.apache.org/jira/browse/YARN-1219
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: YARN-1219.patch
>
>
> While running a Hive join operation on Yarn, I saw the exception described
> below. It is caused by FSDownload copying the files into a temp file and
> changing the suffix to ".tmp" before unpacking. In unpack(), it uses
> FileUtil.unTar(), which determines whether the file is "gzipped" by looking at
> the file suffix:
> {code}
> boolean gzipped = inFile.toString().endsWith("gz");
> {code}
> To fix this problem, we can remove the ".tmp" from the temp file name.
> Here is the detailed exception:
> org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:240)
>   at org.apache.hadoop.fs.FileUtil.unTarUsingJava(FileUtil.java:676)
>   at org.apache.hadoop.fs.FileUtil.unTar(FileUtil.java:625)
>   at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:203)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:287)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:50)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
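
A short sketch of the failure mode (paths illustrative, not from the report):
the ".tmp" rename defeats FileUtil.unTar()'s name-based gzip check, so the tar
reader is handed compressed bytes.

{code}
// Illustrative only:
boolean gzipped = inFile.toString().endsWith("gz");
// "/local/cache/archive.tar.gz"     -> gzipped == true,  unpacked correctly
// "/local/cache/archive.tar.gz.tmp" -> gzipped == false, so the gzip stream is
//                                      read as a raw tar and unTar() throws
{code}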



[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772325#comment-13772325
 ] 

Bikas Saha commented on YARN-1214:
--

Can you please mark the following with LimitedPrivate("RMStateStore") and leave
a comment saying this is exposed only for the state store? Normal operation must
invoke the secret manager and not use the local key directly. Do this in both
RMAppAttempt.java and RMAppAttemptImpl.java:
{code}
RMAppAttemptImpl.getClientTokenMasterKey()
{code}

The first assert should be moved after moveCurrentAttemptToLaunchedState(). The 
second assert should be copied before moveCurrentAttemptToLaunchedState() and 
changed to false.
{code}
 Assert.assertNull(report.getClientToAMToken());
+moveCurrentAttemptToLaunchedState(app.getCurrentAppAttempt());
 report = app.createAndGetApplicationReport("clientuser", true);
 Assert.assertNotNull(report.getClientToAMToken());
{code}

The first assert should be retained and changed to assertNull. We can re-use the
same assert (with true) instead of querying the secret manager for the master
key.
{code}
+  verify(clientToAMTokenManager).createMasterKey(
   applicationAttempt.getAppAttemptId());
-  assertNotNull(applicationAttempt.createClientToken("some client"));
 }
 assertNull(applicationAttempt.createClientToken(null));
 assertNotNull(applicationAttempt.getAMRMToken());
@@ -428,7 +429,10 @@ private void testAppAttemptLaunchedState(Container 
container) {
 assertEquals(RMAppAttemptState.LAUNCHED, 
 applicationAttempt.getAppAttemptState());
 assertEquals(container, applicationAttempt.getMasterContainer());
-
+if (UserGroupInformation.isSecurityEnabled()) {
+  Assert.assertNotNull(clientToAMTokenManager
+.getMasterKey(applicationAttempt.getAppAttemptId()));
+}
{code}

TestClientToAMTokens. Assert that clientToken is null before and not null after.


> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is
> saved. This can cause a problem: if the client gets the token and the RM
> crashes before the master key is saved, the RM cannot reload the master key
> after it restarts, since it was never saved. As a result, the client is
> holding an invalid token.
> We can register the client token master key after it is saved in the store.



[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772190#comment-13772190
 ] 

Hadoop QA commented on YARN-1203:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12604083/YARN-1203.20131019.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1972//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1972//console

This message is automatically generated.

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch, YARN-1203.20131019.1.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over plain http by setting the 'hadoop.ssl.enabled'
> property at the job level.



[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772296#comment-13772296
 ] 

Hudson commented on YARN-353:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4443/])
YARN-353. Add Zookeeper-based store implementation for RMStateStore. 
Contributed by Bikas Saha, Jian He and Karthik Kambatla. (hitesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1524829)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/ClientBaseWithFixes.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStoreZKClientConnections.java


> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Fix For: 2.3.0
>
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK.



[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772329#comment-13772329
 ] 

Bikas Saha commented on YARN-1214:
--

[~vinodkv] Should the master key be given to clients only after the attempt has
registered? That is when the attempt gets the master key. Until then the
clients cannot connect to the attempt anyway.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is
> saved. This can cause a problem: if the client gets the token and the RM
> crashes before the master key is saved, the RM cannot reload the master key
> after it restarts, since it was never saved. As a result, the client is
> holding an invalid token.
> We can register the client token master key after it is saved in the store.



[jira] [Updated] (YARN-415) Capture memory utilization at the app-level for chargeback

2013-09-19 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated YARN-415:
-

Attachment: YARN-415.patch

The patch exposes MB-seconds and CPU-seconds through CLI, REST API and UI. 

> Capture memory utilization at the app-level for chargeback
> --
>
> Key: YARN-415
> URL: https://issues.apache.org/jira/browse/YARN-415
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 0.23.6
>Reporter: Kendall Thrapp
>Assignee: Andrey Klochkov
> Attachments: YARN-415.patch
>
>
> For the purpose of chargeback, I'd like to be able to compute the cost of an
> application in terms of cluster resource usage.  To start out, I'd like to 
> get the memory utilization of an application.  The unit should be MB-seconds 
> or something similar and, from a chargeback perspective, the memory amount 
> should be the memory reserved for the application, as even if the app didn't 
> use all that memory, no one else was able to use it.
> (reserved ram for container 1 * lifetime of container 1) + (reserved ram for
> container 2 * lifetime of container 2) + ... + (reserved ram for container n 
> * lifetime of container n)
> It'd be nice to have this at the app level instead of the job level because:
> 1. We'd still be able to get memory usage for jobs that crashed (and wouldn't 
> appear on the job history server).
> 2. We'd be able to get memory usage for future non-MR jobs (e.g. Storm).
> This new metric should be available both through the RM UI and RM Web 
> Services REST API.



[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772340#comment-13772340
 ] 

Bikas Saha commented on YARN-1068:
--

Another downside is having yet another port to configure on the RM. 1 for
HAProtocol and 1 for HAAdmin. Do we envisage an RM where one of them is
present without the other?

bq. This makes it very simple to create a RMHAProtocolService without the admin 
parts without having to override all its service*() calls. In the current 
patch, MockRM does the following.
Sorry, didn't quite follow this.


> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.



[jira] [Updated] (YARN-1219) FSDownload changes file suffix making FileUtil.unTar() throw exception

2013-09-19 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated YARN-1219:
--

Attachment: YARN-1219.patch

Patch attached. There's only a one-line change in the source code. Added a new
unit test case for .tgz file download. Also refactored the test code to avoid
redundancy.

> FSDownload changes file suffix making FileUtil.unTar() throw exception
> --
>
> Key: YARN-1219
> URL: https://issues.apache.org/jira/browse/YARN-1219
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: YARN-1219.patch
>
>
> While running a Hive join operation on Yarn, I saw the exception described
> below. It is caused by FSDownload copying the files into a temp file and
> changing the suffix to ".tmp" before unpacking. In unpack(), it uses
> FileUtil.unTar(), which determines whether the file is "gzipped" by looking at
> the file suffix:
> {code}
> boolean gzipped = inFile.toString().endsWith("gz");
> {code}
> To fix this problem, we can remove the ".tmp" from the temp file name.
> Here is the detailed exception:
> org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:240)
>   at org.apache.hadoop.fs.FileUtil.unTarUsingJava(FileUtil.java:676)
>   at org.apache.hadoop.fs.FileUtil.unTar(FileUtil.java:625)
>   at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:203)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:287)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:50)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)



[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772306#comment-13772306
 ] 

Arun C Murthy commented on YARN-353:


No, I think we are good. branch-2.1 is too close for major additions. Thanks.

> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Fix For: 2.3.0
>
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK.



[jira] [Updated] (YARN-979) [YARN-321] Add more APIs related to ApplicationAttempt and Container in ApplicationHistoryProtocol

2013-09-19 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-979:
---

Attachment: YARN-979-3.patch

Removing unwanted classes and fixing compilation.

Thanks,
Mayank

> [YARN-321] Add more APIs related to ApplicationAttempt and Container in 
> ApplicationHistoryProtocol
> --
>
> Key: YARN-979
> URL: https://issues.apache.org/jira/browse/YARN-979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
> Attachments: YARN-979-1.patch, YARN-979.2.patch, YARN-979-3.patch
>
>
> ApplicationHistoryProtocol should have the following APIs as well:
> * getApplicationAttemptReport
> * getApplicationAttempts
> * getContainerReport
> * getContainers
> The corresponding request and response classes need to be added as well.
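
A plausible shape for these additions, following the existing
Get*Request/Get*Response convention in the protocol; the exact names and
signatures below are a sketch, not the committed API:

{code}
// Sketch only; request/response class names follow the usual YARN pattern.
GetApplicationAttemptReportResponse getApplicationAttemptReport(
    GetApplicationAttemptReportRequest request) throws YarnException, IOException;

GetApplicationAttemptsResponse getApplicationAttempts(
    GetApplicationAttemptsRequest request) throws YarnException, IOException;

GetContainerReportResponse getContainerReport(
    GetContainerReportRequest request) throws YarnException, IOException;

GetContainersResponse getContainers(
    GetContainersRequest request) throws YarnException, IOException;
{code}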



[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772275#comment-13772275
 ] 

Bikas Saha commented on YARN-353:
-

We should be storing the information inside the data of the znode instead of
encoding it in the name of the znode. The rename pattern was followed as an
optimization in FSStore but is not necessary in ZKStore given set-with-overwrite
operations. It's also better to avoid encoding the info in the name, since doing
an "ls" in ZK is more relaxed w.r.t. permissions than it is in FileSystem.
{code}
+  if (childNodeName.startsWith(DELEGATION_TOKEN_SEQUENCE_NUMBER_PREFIX)) {
+rmState.rmSecretManagerState.dtSequenceNumber =
+Integer.parseInt(childNodeName.split("_")[1])
{code}

We should move to creating a node hierarchy for apps such that all znodes for
an app are stored under an app znode instead of the app root znode. This will
help in removeApplication and also in scaling better on ZK. The earlier code
was written this way to ensure create/delete happens under a root znode for
fencing. But given that we have moved to multi-operations globally, this isn't
required anymore.

storeRMDTMasterKeyState() is not using multi-operation. We should be issuing 
every action on ZK using a multi-operation for fencing to work.

I am fine with the patch as is since it puts the framework in place. The above
comments may be addressed in YARN-1222, which was opened under YARN-149 for HA.
YARN-1222 blocks YARN-1026.


> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK.



[jira] [Commented] (YARN-1222) Make improvements in ZKRMStateStore for fencing

2013-09-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772273#comment-13772273
 ] 

Bikas Saha commented on YARN-1222:
--

Other improvements carried over from YARN-353
We should be storing the information inside the data of the znode instead of
encoding it in the name of the znode. The rename pattern was followed as an
optimization in FSStore but is not necessary in ZKStore given set-with-overwrite
operations. It's also better to avoid encoding the info in the name, since doing
an "ls" in ZK is more relaxed w.r.t. permissions than it is in FileSystem.
{code}
+  if (childNodeName.startsWith(DELEGATION_TOKEN_SEQUENCE_NUMBER_PREFIX)) {
+rmState.rmSecretManagerState.dtSequenceNumber =
+Integer.parseInt(childNodeName.split("_")[1])
{code}

We should move to creating a node hierarchy for apps such that all znodes for
an app are stored under an app znode instead of the app root znode. This will
help in removeApplication and also in scaling better on ZK. The earlier code
was written this way to ensure create/delete happens under a root znode for
fencing. But given that we have moved to multi-operations globally, this isn't
required anymore.

storeRMDTMasterKeyState() is not using multi-operation. We should be issuing 
every action on ZK using a multi-operation for fencing to work.
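
A sketch of the lock-znode pattern described in the issue summary below; the
helper and the "RM_FENCING_LOCK" name are hypothetical, not the store's actual
code:

{code}
// Each write is bundled into one multi-op together with a create and delete
// of a fencing lock under the root znode; an RM whose create/delete ACL on
// the root has been revoked fails the whole batch atomically.
private void doMultiWithFencing(ZooKeeper zk, String rootPath,
    List<ACL> rootAcl, Op realOperation) throws Exception {
  List<Op> ops = new ArrayList<Op>();
  ops.add(Op.create(rootPath + "/RM_FENCING_LOCK", new byte[0],
      rootAcl, CreateMode.PERSISTENT));
  ops.add(realOperation);                                // the real update
  ops.add(Op.delete(rootPath + "/RM_FENCING_LOCK", -1));
  zk.multi(ops);
}
{code}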

> Make improvements in ZKRMStateStore for fencing
> ---
>
> Key: YARN-1222
> URL: https://issues.apache.org/jira/browse/YARN-1222
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>
> Using multi-operations for every ZK interaction. 
> In every operation, automatically creating/deleting a lock znode that is the 
> child of the root znode. This is to achieve fencing by modifying the 
> create/delete permissions on the root znode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1222) Make improvements in ZKRMStateStore for fencing

2013-09-19 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-1222:


 Summary: Make improvements in ZKRMStateStore for fencing
 Key: YARN-1222
 URL: https://issues.apache.org/jira/browse/YARN-1222
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha


Using multi-operations for every ZK interaction. 
In every operation, automatically creating/deleting a lock znode that is the 
child of the root znode. This is to achieve fencing by modifying the 
create/delete permissions on the root znode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1199) Make NM/RM Versions Available

2013-09-19 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated YARN-1199:


Attachment: YARN-1199.patch

Thanks, Rob, for pointing this out. I have made the changes to the patch and 
attached it. Can you please review it?

> Make NM/RM Versions Available
> -
>
> Key: YARN-1199
> URL: https://issues.apache.org/jira/browse/YARN-1199
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: YARN-1199.patch, YARN-1199.patch, YARN-1199.patch
>
>
> Now that we have the NM and RM versions available, we can display the YARN 
> version of the nodes running in the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-979) [YARN-321] Add more APIs related to ApplicationAttempt and Container in ApplicationHistoryProtocol

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772129#comment-13772129
 ] 

Hadoop QA commented on YARN-979:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604077/YARN-979-3.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1971//console

This message is automatically generated.

> [YARN-321] Add more APIs related to ApplicationAttempt and Container in 
> ApplicationHistoryProtocol
> --
>
> Key: YARN-979
> URL: https://issues.apache.org/jira/browse/YARN-979
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
> Attachments: YARN-979-1.patch, YARN-979.2.patch, YARN-979-3.patch
>
>
> ApplicationHistoryProtocol should have the following APIs as well:
> * getApplicationAttemptReport
> * getApplicationAttempts
> * getContainerReport
> * getContainers
> The corresponding request and response classes need to be added as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1203:


Attachment: YARN-1203.20131019.1.patch

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch, YARN-1203.20131019.1.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over the http protocol by setting the 
> 'hadoop.ssl.enabled' property at the job level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-19 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771988#comment-13771988
 ] 

Alejandro Abdelnur commented on YARN-1021:
--

Thanks [~ywskycn], last set of nits, I believe:

* I still see the issue that the simulator is taking the fs.defaultFS from the 
hadoop conf instead of the one set to file:///
* The NM simulator initialization is much faster now, but it still takes time 
to get all NM simulators running. We should print some kind of feedback to the 
terminal while this is happening, something like "N of M NMs initialized" 
every 5 seconds (a sketch follows after this list).
* The documentation should mention the default port of the simulator console, 
10001.
* The slsrun.sh script should not reset HADOOP_OPTS (HADOOP_OPTS is handy for 
increasing the heap or attaching a remote debugger/profiler).
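
A minimal sketch of such feedback, assuming hypothetical counters rather than 
the actual SLS fields:
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NMInitProgressReporter {
  // Hypothetical counters; the real simulator fields may differ.
  private final AtomicInteger initializedNMs = new AtomicInteger(0);
  private final int totalNMs;
  private final ScheduledExecutorService reporter =
      Executors.newSingleThreadScheduledExecutor();

  public NMInitProgressReporter(int totalNMs) {
    this.totalNMs = totalNMs;
  }

  // Print "N of M NMs initialized" to the terminal every 5 seconds.
  public void start() {
    reporter.scheduleAtFixedRate(new Runnable() {
      @Override
      public void run() {
        System.out.println(initializedNMs.get() + " of " + totalNMs
            + " NMs initialized");
      }
    }, 5, 5, TimeUnit.SECONDS);
  }

  public void nmInitialized() {
    initializedNMs.incrementAndGet();
  }

  public void stop() {
    reporter.shutdown();
  }
}
{code}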


> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest with different 
> implementations, e.g., Fifo, Capacity and Fair  schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workload. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm works for some specific 
> workload would be quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager while removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real time metrics while executing, including:
> * Resource usage for the whole cluster and each queue, which can be utilized 
> to configure the cluster and each queue's capacity.
> * The detailed application execution trace (recorded in relation to 
> simulated time), which can be analyzed to understand/validate the scheduler 
> behavior (individual jobs' turnaround time, throughput, fairness, capacity 
> guarantees, etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which can be utilized by 
> Hadoop developers to find code hot spots and scalability limits.
> The simulator will provide real time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1221) With Fair Scheduler, reserved MB reported in RM web UI increases indefinitely

2013-09-19 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772015#comment-13772015
 ] 

Joep Rottinghuis commented on YARN-1221:


Sandy, we're seeing the same. At some point there is an overflow and the 
numbers even go negative.
Looks like we first have to turn on logging for the FS so that we see more 
messages and can figure out what is going on (see the sketch below for the 
logger we'd enable).
From the code, it looks like this may happen when reserved containers become 
allocated but the metric is not updated, or there is some synchronization 
issue with that update.
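
For the FS logging, a sketch of one way to raise the Fair Scheduler package to 
DEBUG; this assumes the log4j 1.x API bundled with Hadoop, and the equivalent 
log4j.properties line works as well:
{code:java}
// Sketch: enable DEBUG logging for the Fair Scheduler package. Equivalent
// log4j.properties entry:
//   log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair=DEBUG
org.apache.log4j.Logger.getLogger(
    "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair")
    .setLevel(org.apache.log4j.Level.DEBUG);
{code}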

> With Fair Scheduler, reserved MB reported in RM web UI increases indefinitely
> -
>
> Key: YARN-1221
> URL: https://issues.apache.org/jira/browse/YARN-1221
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-986) YARN should have a ClusterId/ServiceId that should be used to set the service address for tokens

2013-09-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771959#comment-13771959
 ] 

Karthik Kambatla commented on YARN-986:
---

Hi [~vinodkv], are you looking at this? If you are busy with other things, I 
can maybe look into this.

> YARN should have a ClusterId/ServiceId that should be used to set the service 
> address for tokens
> 
>
> Key: YARN-986
> URL: https://issues.apache.org/jira/browse/YARN-986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> This needs to be done to support non-IP-based failover of the RM. Once the 
> server sets the token service address to be this generic ClusterId/ServiceId, 
> clients can translate it to the appropriate final IP and then be able to 
> select tokens via TokenSelectors.
> Some workarounds for other related issues were put in place at YARN-945.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-819) ResourceManager and NodeManager should check for a minimum allowed version

2013-09-19 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771978#comment-13771978
 ] 

Jonathan Eagles commented on YARN-819:
--

Thanks for the patch, Rob. A couple of minor things to address:

* TestResourceTrackerService#testNodeRegistrationVersionLessThanRM fails on 
branch-2. It looks like the tests pick up the version info from the current 
build, which makes the tests pass/fail depending on the version in the branch.
* There seems to be a discrepancy between the design in the JIRA description 
and the implementation regarding checks and actions to take. Can you comment?
* Remove the extra space after the equals in NodeStatusUpdaterImpl
{code:java}this.nodeManagerVersionId =  YarnVersionInfo.getVersion();{code}
* Refer to the RM version as rmVersion in the message string for consistency 
in NodeStatusUpdaterImpl
{code:java}
if (VersionUtil.compareVersions(rmVersion, minimumResourceManagerVersion) < 0) {
  String message = "The Resource Manager's version ("
      + regNMResponse.getRMVersion() + ") is less than the minimum "
      + "allowed version " + minimumResourceManagerVersion;
{code}
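A hedged sketch of the suggested consistency fix, assuming rmVersion already 
holds the value returned by regNMResponse.getRMVersion():
{code:java}
// Sketch only: use rmVersion in both the comparison and the message.
if (VersionUtil.compareVersions(rmVersion, minimumResourceManagerVersion) < 0) {
  String message = "The Resource Manager's version (" + rmVersion
      + ") is less than the minimum allowed version "
      + minimumResourceManagerVersion;
}
{code}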
* Extra class definition for YarnVersionInfo in TestNodeStatusUpdater
{code:java}
private static class YarnVersionInfo {
  public static String returnVersion = "3.0.0";

  public static String getVersion() {
    return returnVersion;
  }
}
{code}

> ResourceManager and NodeManager should check for a minimum allowed version
> --
>
> Key: YARN-819
> URL: https://issues.apache.org/jira/browse/YARN-819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
> Attachments: YARN-819-1.patch, YARN-819-2.patch
>
>
> Our use case is that, during an upgrade on a large cluster, several 
> NodeManagers may not restart with the new version. Once the RM comes back 
> up, the NodeManager will re-register with the RM without issue.
> The NM should report its version to the RM. The RM should have a 
> configuration to disallow the check (default), require the version to be 
> equal to the RM's (to prevent a config change for each release), equal to or 
> greater than the RM's (to allow NM upgrades), or finally an explicit version 
> or version range.
> The RM should also have a configuration on how to treat a mismatch: REJECT, 
> or REBOOT the NM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-986) YARN should have a ClusterId/ServiceId that should be used to set the service address for tokens

2013-09-19 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-986:
--

Issue Type: Sub-task  (was: Bug)
Parent: YARN-149

> YARN should have a ClusterId/ServiceId that should be used to set the service 
> address for tokens
> 
>
> Key: YARN-986
> URL: https://issues.apache.org/jira/browse/YARN-986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> This needs to be done to support non-IP-based failover of the RM. Once the 
> server sets the token service address to be this generic ClusterId/ServiceId, 
> clients can translate it to the appropriate final IP and then be able to 
> select tokens via TokenSelectors.
> Some workarounds for other related issues were put in place at YARN-945.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771956#comment-13771956
 ] 

Karthik Kambatla commented on YARN-353:
---

Thanks Hitesh. Dropping the try-catch blocks seems reasonable to me. 

> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1089) Add YARN compute units alongside virtual cores

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771686#comment-13771686
 ] 

Hadoop QA commented on YARN-1089:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603998/YARN-1089-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 51 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site:

org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1970//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1970//console

This message is automatically generated.

> Add YARN compute units alongside virtual cores
> --
>
> Key: YARN-1089
> URL: https://issues.apache.org/jira/browse/YARN-1089
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1089-1.patch, YARN-1089.patch
>
>
> Based on discussion in YARN-1024, we will add YARN compute units as a 
> resource for requesting and scheduling CPU processing power.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771670#comment-13771670
 ] 

Hadoop QA commented on YARN-353:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603992/YARN-353.16.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1969//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1969//console

This message is automatically generated.

> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1089) Add YARN compute units alongside virtual cores

2013-09-19 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-1089:
-

Attachment: YARN-1089-1.patch

Updated patch should fix TestFairScheduler and TestSchedulerUtils.  The 
TestRMContainerAllocator failure looks like MAPREDUCE-5514.

> Add YARN compute units alongside virtual cores
> --
>
> Key: YARN-1089
> URL: https://issues.apache.org/jira/browse/YARN-1089
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1089-1.patch, YARN-1089.patch
>
>
> Based on discussion in YARN-1024, we will add YARN compute units as a 
> resource for requesting and scheduling CPU processing power.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira