[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686434#comment-13686434
 ] 

Hadoop QA commented on YARN-846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588288/move_pbImpls.sh
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1326//console

This message is automatically generated.

 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: move_pbImpls.sh, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-799) CgroupsLCEResourcesHandler tries to write to cgroup.procs

2013-06-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-799:
---

Assignee: Chris Riccomini

 CgroupsLCEResourcesHandler tries to write to cgroup.procs
 ---------------------------------------------------------

 Key: YARN-799
 URL: https://issues.apache.org/jira/browse/YARN-799
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.4-alpha, 2.0.5-alpha
Reporter: Chris Riccomini
Assignee: Chris Riccomini
 Attachments: YARN-799.0.patch


 The implementation of
 bq. 
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
 tells the container-executor to write PIDs to cgroup.procs:
 {code}
   public String getResourcesOption(ContainerId containerId) {
     String containerName = containerId.toString();
     StringBuilder sb = new StringBuilder("cgroups=");
     if (isCpuWeightEnabled()) {
       sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
       sb.append(",");
     }
     if (sb.charAt(sb.length() - 1) == ',') {
       sb.deleteCharAt(sb.length() - 1);
     }
     return sb.toString();
   }
 {code}
 Apparently, this file has not always been writeable:
 https://patchwork.kernel.org/patch/116146/
 http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
 https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
 The RHEL version of the Linux kernel that I'm using has a CGroup module that 
 has a non-writeable cgroup.procs file.
 {quote}
 $ uname -a
 Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 
 2011 x86_64 x86_64 x86_64 GNU/Linux
 {quote}
 As a result, when the container-executor tries to run, it fails with this 
 error message:
 bq. fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
 This is because the executor is given a resource by the 
 CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
 {quote}
 $ pwd 
 /cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_01
 $ ls -l
 total 0
 -r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
 {quote}
 I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, 
 and this appears to have fixed the problem.
 I can think of several potential resolutions to this ticket:
 1. Ignore the problem, and make people patch YARN when they hit this issue.
 2. Write to /tasks instead of /cgroup.procs for everyone.
 3. Check permissions on /cgroup.procs prior to writing to it, and fall back 
 to /tasks (see the sketch below for this option).
 4. Add a config to yarn-site that lets admins specify which file to write to.
 Thoughts?
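 
 For illustration, resolution 3 might look roughly like the sketch below (a 
 hypothetical helper; the method name and fallback logic are assumptions, 
 not the committed patch):
 {code}
 // Sketch of resolution 3: probe cgroup.procs for writability, and fall
 // back to the older tasks file when the kernel has made it read-only.
 // Note: canWrite() checks as the NodeManager user, the process that
 // would consult this helper before handing the path to the executor.
 private String pidFilePathForCgroup(String cgroupPath) {
   java.io.File procs = new java.io.File(cgroupPath, "cgroup.procs");
   return procs.canWrite() ? cgroupPath + "/cgroup.procs"
                           : cgroupPath + "/tasks";
 }
 {code}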

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-846:
-

Attachment: YARN-846.full.patch

resubmit the patch to kick jenkins

 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: move_pbImpls.sh, YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-841) Annotate and document AuxService APIs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686439#comment-13686439
 ] 

Hudson commented on YARN-841:
-

Integrated in Hadoop-trunk-Commit #3964 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3964/])
YARN-841. Move Auxiliary service to yarn-api, annotate and document it. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1494031)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494031
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainerResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationInitializationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationTerminationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/AuxiliaryService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServicesEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java


 Annotate and document AuxService APIs
 -------------------------------------

 Key: YARN-841
 URL: https://issues.apache.org/jira/browse/YARN-841
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.5-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.1.0-beta

 Attachments: YARN-841-20130617.1.txt, YARN-841-20130617.2.txt, 
 YARN-841-20130617.3.txt, YARN-841-20130617.txt


 For users writing their own AuxServices, these APIs should be annotated and 
 need better documentation. Also, the classes may need to move out of the 
 NodeManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686440#comment-13686440
 ] 

Hadoop QA commented on YARN-846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588298/YARN-846.full.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1327//console

This message is automatically generated.

 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: move_pbImpls.sh, YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-846:
-

Attachment: YARN-846.full.1.patch

rebased on latest trunk

 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: move_pbImpls.sh, YARN-846.full.1.patch, 
 YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-799) CgroupsLCEResourcesHandler tries to write to cgroup.procs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686452#comment-13686452
 ] 

Hudson commented on YARN-799:
-

Integrated in Hadoop-trunk-Commit #3965 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3965/])
YARN-799. Fix CgroupsLCEResourcesHandler to use /tasks instead of 
/cgroup.procs. Contributed by Chris Riccomini. (Revision 1494035)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494035
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java


 CgroupsLCEResourcesHandler tries to write to cgroup.procs
 ---------------------------------------------------------

 Key: YARN-799
 URL: https://issues.apache.org/jira/browse/YARN-799
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.4-alpha, 2.0.5-alpha
Reporter: Chris Riccomini
Assignee: Chris Riccomini
 Fix For: 2.1.0-beta

 Attachments: YARN-799.0.patch


 The implementation of
 bq. 
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
 tells the container-executor to write PIDs to cgroup.procs:
 {code}
   public String getResourcesOption(ContainerId containerId) {
     String containerName = containerId.toString();
     StringBuilder sb = new StringBuilder("cgroups=");
     if (isCpuWeightEnabled()) {
       sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
       sb.append(",");
     }
     if (sb.charAt(sb.length() - 1) == ',') {
       sb.deleteCharAt(sb.length() - 1);
     }
     return sb.toString();
   }
 {code}
 Apparently, this file has not always been writeable:
 https://patchwork.kernel.org/patch/116146/
 http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
 https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
 The RHEL version of the Linux kernel that I'm using has a CGroup module that 
 has a non-writeable cgroup.procs file.
 {quote}
 $ uname -a
 Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 
 2011 x86_64 x86_64 x86_64 GNU/Linux
 {quote}
 As a result, when the container-executor tries to run, it fails with this 
 error message:
 bq. fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
 This is because the executor is given a resource by the 
 CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
 {quote}
 $ pwd 
 /cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_01
 $ ls -l
 total 0
 -r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
 {quote}
 I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, 
 and this appears to have fixed the problem.
 I can think of several potential resolutions to this ticket:
 1. Ignore the problem, and make people patch YARN when they hit this issue.
 2. Write to /tasks instead of /cgroup.procs for everyone (the committed fix 
 takes this route; see the sketch below).
 3. Check permissions on /cgroup.procs prior to writing to it, and fall back 
 to /tasks.
 4. Add a config to yarn-site that lets admins specify which file to write to.
 Thoughts?
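 
 Per the commit message above, the fix that landed takes option 2. A minimal 
 sketch of the direction (not the exact committed diff):
 {code}
 // After the fix, the resources option points the container-executor at
 // the tasks file rather than cgroup.procs.
 if (isCpuWeightEnabled()) {
   sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/tasks");
   sb.append(",");
 }
 {code}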

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686457#comment-13686457
 ] 

Hadoop QA commented on YARN-846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588299/YARN-846.full.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1328//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1328//console

This message is automatically generated.

 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: move_pbImpls.sh, YARN-846.full.1.patch, 
 YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686478#comment-13686478
 ] 

Hudson commented on YARN-846:
-

Integrated in Hadoop-trunk-Commit #3966 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3966/])
YARN-846.  Move pb Impl classes from yarn-api to yarn-common. Contributed 
by Jian He. (Revision 1494052)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494052
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb


 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: move_pbImpls.sh, YARN-846.full.1.patch, 
 YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, YARN-694-20130618.patch


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken in the following manner (see the sketch 
 below):
 * If the NMToken was issued with the current or previous master key, it is 
 valid. In this case the NM will update its cache with this key against the 
 appId.
 * If the NMToken was issued with the master key present in the NM's cache 
 for the AM's appId, it will be validated against that key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will also use 
 one NMToken per NM (replacing the earlier behavior of one ContainerToken 
 per container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but it will only be used to validate 
 the AM's container start request.
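 
 A rough sketch of the validation order above (field and method names are 
 assumptions, not the actual patch; the real checks live in the NM-side 
 secret manager):
 {code}
 // Hypothetical sketch of the NMToken checks described above.
 private final Map<ApplicationId, MasterKey> appIdToKey =
     new HashMap<ApplicationId, MasterKey>();
 
 boolean isTokenValid(NMTokenIdentifier id, MasterKey current, MasterKey prev) {
   ApplicationId appId = id.getApplicationAttemptId().getApplicationId();
   int keyId = id.getKeyId();
   // 1. Issued with the current or previous master key: valid; cache the
   //    key against the appId so later requests can still be checked.
   if (keyId == current.getKeyId() || keyId == prev.getKeyId()) {
     appIdToKey.put(appId, keyId == current.getKeyId() ? current : prev);
     return true;
   }
   // 2. Otherwise accept only the master key previously cached for this app.
   MasterKey cached = appIdToKey.get(appId);
   return cached != null && keyId == cached.getKeyId();
   // 3. Anything else is invalid, and the NM rejects the AM's call.
 }
 {code}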

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-610) ClientToken (ClientToAMToken) should not be set in the environment

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686582#comment-13686582
 ] 

Hudson commented on YARN-610:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-610. ClientToken is no longer set in the environment of the 
Containers. Contributed by Omkar Vinit Joshi. (Revision 1493968)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493968
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRClientSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 

[jira] [Commented] (YARN-839) TestContainerLaunch.testContainerEnvVariables fails on Windows

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686575#comment-13686575
 ] 

Hudson commented on YARN-839:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-839. TestContainerLaunch.testContainerEnvVariables fails on Windows. 
Contributed by Chuan Liu. (Revision 1493937)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493937
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch.testContainerEnvVariables fails on Windows
 --

 Key: YARN-839
 URL: https://issues.apache.org/jira/browse/YARN-839
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-839-branch-2.2.patch, YARN-839-branch-2.patch, 
 YARN-839.patch


 The unit test case fails on Windows because the job id or container id is 
 not printed out as part of the container script. Later, the test tries to 
 read the pid from the output file, and fails.
 Exception in trunk:
 {noformat}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.903 sec  
 FAILURE!
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 1307 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:278)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-822) Rename ApplicationToken to AMRMToken

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686585#comment-13686585
 ] 

Hudson commented on YARN-822:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-822. Renamed ApplicationToken to be AMRMToken, and similarly the 
corresponding TokenSelector and SecretManager. Contributed by Omkar Vinit 
Joshi. (Revision 1493889)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493889
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/SchedulerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/AMRMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ApplicationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMLaunchFailure.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 

[jira] [Commented] (YARN-799) CgroupsLCEResourcesHandler tries to write to cgroup.procs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686574#comment-13686574
 ] 

Hudson commented on YARN-799:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-799. Fix CgroupsLCEResourcesHandler to use /tasks instead of 
/cgroup.procs. Contributed by Chris Riccomini. (Revision 1494035)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494035
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java


 CgroupsLCEResourcesHandler tries to write to cgroup.procs
 ---------------------------------------------------------

 Key: YARN-799
 URL: https://issues.apache.org/jira/browse/YARN-799
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.4-alpha, 2.0.5-alpha
Reporter: Chris Riccomini
Assignee: Chris Riccomini
 Fix For: 2.1.0-beta

 Attachments: YARN-799.0.patch


 The implementation of
 bq. 
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
 tells the container-executor to write PIDs to cgroup.procs:
 {code}
   public String getResourcesOption(ContainerId containerId) {
     String containerName = containerId.toString();
     StringBuilder sb = new StringBuilder("cgroups=");
     if (isCpuWeightEnabled()) {
       sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
       sb.append(",");
     }
     if (sb.charAt(sb.length() - 1) == ',') {
       sb.deleteCharAt(sb.length() - 1);
     }
     return sb.toString();
   }
 {code}
 Apparently, this file has not always been writeable:
 https://patchwork.kernel.org/patch/116146/
 http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
 https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
 The RHEL version of the Linux kernel that I'm using has a CGroup module that 
 has a non-writeable cgroup.procs file.
 {quote}
 $ uname -a
 Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 
 2011 x86_64 x86_64 x86_64 GNU/Linux
 {quote}
 As a result, when the container-executor tries to run, it fails with this 
 error message:
 bq. fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
 This is because the executor is given a resource by the 
 CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
 {quote}
 $ pwd 
 /cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_01
 $ ls -l
 total 0
 -r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
 {quote}
 I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, 
 and this appears to have fixed the problem.
 I can think of several potential resolutions to this ticket:
 1. Ignore the problem, and make people patch YARN when they hit this issue.
 2. Write to /tasks instead of /cgroup.procs for everyone.
 3. Check permissions on /cgroup.procs prior to writing to it, and fall back 
 to /tasks.
 4. Add a config to yarn-site that lets admins specify which file to write to.
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-840) Move ProtoUtils to yarn.api.records.pb.impl

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686587#comment-13686587
 ] 

Hudson commented on YARN-840:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-840. Moved ProtoUtils to yarn.api.records.pb.impl. Contributed by Jian 
He. (Revision 1494027)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494027
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestJHSSecurity.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/NodeReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/TokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/ProtoUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
* 

[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686576#comment-13686576
 ] 

Hudson commented on YARN-846:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-846.  Move pb Impl classes from yarn-api to yarn-common. Contributed 
by Jian He. (Revision 1494052)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494052
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb


 Move pb Impl from yarn-api to yarn-common
 -----------------------------------------

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: move_pbImpls.sh, YARN-846.full.1.patch, 
 YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-805) Fix yarn-api javadoc annotations

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686578#comment-13686578
 ] 

Hudson commented on YARN-805:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-805. Fix javadoc and annotations on classes in the yarn-api package. 
Contributed by Jian He. (Revision 1493992)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493992
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 

[jira] [Commented] (YARN-841) Annotate and document AuxService APIs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686586#comment-13686586
 ] 

Hudson commented on YARN-841:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-841. Move Auxiliary service to yarn-api, annotate and document it. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1494031)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494031
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainerResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationInitializationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationTerminationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/AuxiliaryService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServicesEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java


 Annotate and document AuxService APIs
 -

 Key: YARN-841
 URL: https://issues.apache.org/jira/browse/YARN-841
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.5-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.1.0-beta

 Attachments: YARN-841-20130617.1.txt, YARN-841-20130617.2.txt, 
 YARN-841-20130617.3.txt, YARN-841-20130617.txt


 For users writing their own AuxServices, these APIs should be annotated and 
 need better documentation. Also, the classes may need to move out of the 
 NodeManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-834) Review/fix annotations for yarn-client module and clearly differentiate *Async apis

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686577#comment-13686577
 ] 

Hudson commented on YARN-834:
-

Integrated in Hadoop-Yarn-trunk #244 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/244/])
YARN-834. Fixed annotations for yarn-client module, reorganized packages 
and clearly differentiated *Async apis. Contributed by Arun C Murthy and Zhijie 
Shen. (Revision 1494017)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494017
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestYARNRunner.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestYarnClientProtocolProvider.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl
* 

[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.1.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, YARN-694-20130618.1.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate the NMToken in the following manner (a sketch of the 
 validation flow follows this description):
 * If the NMToken is signed with the current or previous master key, it is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken is signed with the master key already present in the NM's 
 cache for the AM's appId, it will be validated against that cached key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secure environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will continue to exist, but it will only be used to 
 validate the AM's container start request.
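
 For illustration, the validation order above can be summarized in the 
 following minimal sketch. All names here (NMTokenValidationSketch, 
 validateNMToken, the key-id fields) are hypothetical, not the classes 
 introduced by the patch.
 {code}
 import java.util.HashMap;
 import java.util.Map;

 // Self-contained sketch of the three NMToken validation cases above.
 public class NMTokenValidationSketch {
   private final int currentKeyId;
   private final int previousKeyId;
   // Last master-key id accepted per application.
   private final Map<String, Integer> appIdToKeyId =
       new HashMap<String, Integer>();

   public NMTokenValidationSketch(int currentKeyId, int previousKeyId) {
     this.currentKeyId = currentKeyId;
     this.previousKeyId = previousKeyId;
   }

   public synchronized boolean validateNMToken(String appId, int tokenKeyId) {
     if (tokenKeyId == currentKeyId || tokenKeyId == previousKeyId) {
       // Case 1: current or previous master key -> valid; cache it for appId.
       appIdToKeyId.put(appId, Integer.valueOf(tokenKeyId));
       return true;
     }
     Integer cached = appIdToKeyId.get(appId);
     if (cached != null && cached.intValue() == tokenKeyId) {
       // Case 2: key matches the one cached for this AM's appId -> valid.
       return true;
     }
     // Case 3: unknown key -> the NM rejects the AM's call.
     return false;
   }
 }
 {code}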

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: (was: YARN-694-20130618.patch)

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, YARN-694-20130618.1.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate the NMToken in the following manner:
 * If the NMToken is signed with the current or previous master key, it is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken is signed with the master key already present in the NM's 
 cache for the AM's appId, it will be validated against that cached key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secure environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will continue to exist, but it will only be used to 
 validate the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686656#comment-13686656
 ] 

Hadoop QA commented on YARN-694:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588335/YARN-694-20130618.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests:

  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1329//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1329//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1329//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1329//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1329//console

This message is automatically generated.

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, YARN-694-20130618.1.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate the NMToken in the following manner:
 * If the NMToken is signed with the current or previous master key, it is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken is signed with the master key already present in the NM's 
 cache for the AM's appId, it will be validated against that cached key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secure environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will continue to exist, but it will only be used to 
 validate the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-846) Move pb Impl from yarn-api to yarn-common

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686672#comment-13686672
 ] 

Hudson commented on YARN-846:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-846.  Move pb Impl classes from yarn-api to yarn-common. Contributed 
by Jian He. (Revision 1494052)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494052
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb


 Move pb Impl from yarn-api to yarn-common
 -

 Key: YARN-846
 URL: https://issues.apache.org/jira/browse/YARN-846
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: move_pbImpls.sh, YARN-846.full.1.patch, 
 YARN-846.full.patch, YARN-846.full.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-834) Review/fix annotations for yarn-client module and clearly differentiate *Async apis

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686673#comment-13686673
 ] 

Hudson commented on YARN-834:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-834. Fixed annotations for yarn-client module, reorganized packages 
and clearly differentiated *Async apis. Contributed by Arun C Murthy and Zhijie 
Shen. (Revision 1494017)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494017
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestYARNRunner.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestYarnClientProtocolProvider.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl
* 

[jira] [Commented] (YARN-610) ClientToken (ClientToAMToken) should not be set in the environment

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686678#comment-13686678
 ] 

Hudson commented on YARN-610:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-610. ClientToken is no longer set in the environment of the 
Containers. Contributed by Omkar Vinit Joshi. (Revision 1493968)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493968
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRClientSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 

[jira] [Commented] (YARN-822) Rename ApplicationToken to AMRMToken

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686681#comment-13686681
 ] 

Hudson commented on YARN-822:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-822. Renamed ApplicationToken to be AMRMToken, and similarly the 
corresponding TokenSelector and SecretManager. Contributed by Omkar Vinit 
Joshi. (Revision 1493889)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493889
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/SchedulerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/AMRMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ApplicationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMLaunchFailure.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 

[jira] [Commented] (YARN-759) Create Command enum in AllocateResponse

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686680#comment-13686680
 ] 

Hudson commented on YARN-759:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
Fix hadoop-yarn-project/CHANGES.txt for YARN-759 (Revision 1493842)

 Result = FAILURE
bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493842
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 Create Command enum in AllocateResponse
 ---

 Key: YARN-759
 URL: https://issues.apache.org/jira/browse/YARN-759
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 2.1.0-beta

 Attachments: YARN-759.1.patch, YARN-759.2.patch, YARN-759.3.patch


 Use command enums for shutdown/resync instead of booleans.
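
 For illustration only, the change amounts to carrying an enum such as the 
 following in AllocateResponse instead of separate booleans (names here are 
 assumptions for the sketch, not necessarily the exact committed API):
 {code}
 // Sketch: one command enum in place of shutdown/resync boolean flags.
 public enum AMCommand {
   // The RM asks the AM to re-register and re-send its pending requests.
   AM_RESYNC,
   // The RM asks the AM to shut itself down.
   AM_SHUTDOWN
 }
 {code}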

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-841) Annotate and document AuxService APIs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686682#comment-13686682
 ] 

Hudson commented on YARN-841:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-841. Move Auxiliary service to yarn-api, annotate and document it. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1494031)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494031
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainerResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationInitializationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationTerminationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/AuxiliaryService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServicesEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java


 Annotate and document AuxService APIs
 -

 Key: YARN-841
 URL: https://issues.apache.org/jira/browse/YARN-841
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.5-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.1.0-beta

 Attachments: YARN-841-20130617.1.txt, YARN-841-20130617.2.txt, 
 YARN-841-20130617.3.txt, YARN-841-20130617.txt


 For users writing their own AuxServices, these APIs should be annotated and 
 need better documentation. Also, the classes may need to move out of the 
 NodeManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-799) CgroupsLCEResourcesHandler tries to write to cgroup.procs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686670#comment-13686670
 ] 

Hudson commented on YARN-799:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-799. Fix CgroupsLCEResourcesHandler to use /tasks instead of 
/cgroup.procs. Contributed by Chris Riccomini. (Revision 1494035)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494035
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java


 CgroupsLCEResourcesHandler tries to write to cgroup.procs
 -

 Key: YARN-799
 URL: https://issues.apache.org/jira/browse/YARN-799
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.4-alpha, 2.0.5-alpha
Reporter: Chris Riccomini
Assignee: Chris Riccomini
 Fix For: 2.1.0-beta

 Attachments: YARN-799.0.patch


 The implementation of
 bq. 
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
 tells the container-executor to write PIDs to cgroup.procs:
 {code}
 public String getResourcesOption(ContainerId containerId) {
   String containerName = containerId.toString();
   StringBuilder sb = new StringBuilder("cgroups=");
   if (isCpuWeightEnabled()) {
     sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
     sb.append(",");
   }
   if (sb.charAt(sb.length() - 1) == ',') {
     sb.deleteCharAt(sb.length() - 1);
   }
   return sb.toString();
 }
 {code}
 Apparently, this file has not always been writeable:
 https://patchwork.kernel.org/patch/116146/
 http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
 https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
 The RHEL version of the Linux kernel that I'm using has a CGroup module that 
 has a non-writeable cgroup.procs file.
 {quote}
 $ uname -a
 Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 
 2011 x86_64 x86_64 x86_64 GNU/Linux
 {quote}
 As a result, when the container-executor tries to run, it fails with this 
 error message:
 bq. fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
 This is because the executor is given a resource by the 
 CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
 {quote}
 $ pwd 
 /cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_01
 $ ls -l
 total 0
 -r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
 -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
 {quote}
 I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, 
 and this appears to have fixed the problem.
 I can think of several potential resolutions to this ticket:
 1. Ignore the problem, and make people patch YARN when they hit this issue.
 2. Write to /tasks instead of /cgroup.procs for everyone.
 3. Check permissions on /cgroup.procs prior to writing to it, and fall back 
 to /tasks (a sketch of this option follows below).
 4. Add a config to yarn-site that lets admins specify which file to write to.
 Thoughts?
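
 For reference, option 3 might look like the following sketch; this is 
 illustrative only (class and method names are hypothetical), and the fix 
 that was ultimately committed simply switched the handler to /tasks.
 {code}
 import java.io.File;

 // Sketch of option 3: prefer cgroup.procs when the kernel exposes it as
 // writable, otherwise fall back to the tasks file.
 public final class CgroupTaskFileChooser {
   private CgroupTaskFileChooser() {}

   public static String chooseTaskFile(String cgroupPath) {
     File procs = new File(cgroupPath, "cgroup.procs");
     if (procs.exists() && procs.canWrite()) {
       return procs.getAbsolutePath();
     }
     // Fall back to the per-thread tasks file, which has long been writable.
     return new File(cgroupPath, "tasks").getAbsolutePath();
   }
 }
 {code}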

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-840) Move ProtoUtils to yarn.api.records.pb.impl

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686683#comment-13686683
 ] 

Hudson commented on YARN-840:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-840. Moved ProtoUtils to yarn.api.records.pb.impl. Contributed by Jian 
He. (Revision 1494027)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494027
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestJHSSecurity.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/NodeReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/TokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/ProtoUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
* 

[jira] [Commented] (YARN-805) Fix yarn-api javadoc annotations

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686674#comment-13686674
 ] 

Hudson commented on YARN-805:
-

Integrated in Hadoop-Hdfs-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1434/])
YARN-805. Fix javadoc and annotations on classes in the yarn-api package. 
Contributed by Jian He. (Revision 1493992)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493992
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 

[jira] [Updated] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-553:
---

Summary: Have YarnClient generate a directly usable 
ApplicationSubmissionContext  (was: Have GetNewApplicationResponse generate a 
directly usable ApplicationSubmissionContext)

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686720#comment-13686720
 ] 

Arun C Murthy commented on YARN-553:


I edited the jira description to reflect the actual intent.

How about having just one api, createApplication, which returns an 
ApplicationSubmissionContext, and deleting getNewApplication? We don't need 
multiple flavours of getNewApplicationSubmissionContext, getNewApplication, 
etc. It simplifies the end-user api. Thoughts?
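
For illustration, the proposed single-call flow could look like this; the 
method name comes from the suggestion above and is hypothetical, not a 
committed API:
{code}
YarnClient yarnClient = ...; // created and started elsewhere
// One call replaces getNewApplication() plus manual record creation; the
// returned context already carries the new ApplicationId (and could be
// pre-loaded with defaults such as the minimum resource).
ApplicationSubmissionContext appContext = yarnClient.createApplication();
{code}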

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686737#comment-13686737
 ] 

Arun C Murthy commented on YARN-553:


[~kkambatl] Is there a chance you can provide a quick turnaround on this for 
2.1.0-beta? Tx!

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside from just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-805) Fix yarn-api javadoc annotations

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686745#comment-13686745
 ] 

Hudson commented on YARN-805:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-805. Fix javadoc and annotations on classes in the yarn-api package. 
Contributed by Jian He. (Revision 1493992)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493992
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocol.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ResourceManagerAdministrationProtocolPB.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 

[jira] [Commented] (YARN-841) Annotate and document AuxService APIs

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686753#comment-13686753
 ] 

Hudson commented on YARN-841:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-841. Move Auxiliary service to yarn-api, annotate and document it. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1494031)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494031
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainerResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationInitializationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ApplicationTerminationContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/AuxiliaryService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServices.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/AuxServicesEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java


 Annotate and document AuxService APIs
 -

 Key: YARN-841
 URL: https://issues.apache.org/jira/browse/YARN-841
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.5-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.1.0-beta

 Attachments: YARN-841-20130617.1.txt, YARN-841-20130617.2.txt, 
 YARN-841-20130617.3.txt, YARN-841-20130617.txt


 For users writing their own AuxServices, these APIs should be annotated and 
 need better documentation. Also, the classes may need to move out of the 
 NodeManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-822) Rename ApplicationToken to AMRMToken

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686752#comment-13686752
 ] 

Hudson commented on YARN-822:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-822. Renamed ApplicationToken to be AMRMToken, and similarly the 
corresponding TokenSelector and SecretManager. Contributed by Omkar Vinit 
Joshi. (Revision 1493889)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493889
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AMRMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ApplicationTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/SchedulerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/AMRMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ApplicationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMLaunchFailure.java
* 

[jira] [Commented] (YARN-839) TestContainerLaunch.testContainerEnvVariables fails on Windows

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686742#comment-13686742
 ] 

Hudson commented on YARN-839:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-839. TestContainerLaunch.testContainerEnvVariables fails on Windows. 
Contributed by Chuan Liu. (Revision 1493937)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493937
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 TestContainerLaunch.testContainerEnvVariables fails on Windows
 --

 Key: YARN-839
 URL: https://issues.apache.org/jira/browse/YARN-839
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-839-branch-2.2.patch, YARN-839-branch-2.patch, 
 YARN-839.patch


 The unit test case fails on Windows because the job id or container id is not 
 printed out as part of the container script. Later, the test tries to read 
 the pid from the output file, and fails.
 Exception in trunk:
 {noformat}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.903 sec  
 FAILURE!
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 1307 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:278)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-759) Create Command enum in AllocateResponse

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686751#comment-13686751
 ] 

Hudson commented on YARN-759:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
Fix hadoop-yarn-project/CHANGES.txt for YARN-759 (Revision 1493842)

 Result = SUCCESS
bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493842
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 Create Command enum in AllocateResponse
 ---

 Key: YARN-759
 URL: https://issues.apache.org/jira/browse/YARN-759
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 2.1.0-beta

 Attachments: YARN-759.1.patch, YARN-759.2.patch, YARN-759.3.patch


 Use command enums for shutdown/resync instead of booleans.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-834) Review/fix annotations for yarn-client module and clearly differentiate *Async apis

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686744#comment-13686744
 ] 

Hudson commented on YARN-834:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-834. Fixed annotations for yarn-client module, reorganized packages 
and clearly differentiated *Async apis. Contributed by Arun C Murthy and Zhijie 
Shen. (Revision 1494017)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494017
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestResourceMgrDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestYARNRunner.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestYarnClientProtocolProvider.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/package-info.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl
* 

[jira] [Commented] (YARN-610) ClientToken (ClientToAMToken) should not be set in the environment

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686749#comment-13686749
 ] 

Hudson commented on YARN-610:
-

Integrated in Hadoop-Mapreduce-trunk #1461 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1461/])
YARN-610. ClientToken is no longer set in the environment of the 
Containers. Contributed by Omkar Vinit Joshi. (Revision 1493968)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493968
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRClientSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 

[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686961#comment-13686961
 ] 

Karthik Kambatla commented on YARN-553:
---

Hey [~acmurthy], thanks for taking a look. Having just one API makes sense - I 
remember considering doing it that way, but refrained for some reason. I will 
take another shot this afternoon and post something soon - either a new patch, 
or an updated patch with the reason why we can't have a single API. Thanks.

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside from just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686972#comment-13686972
 ] 

Harsh J commented on YARN-553:
--

I'm fine with what Arun's proposed above - a single API call that does it all 
for you (since it has the relevant context) would be very nice for app writers.

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside from just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13686994#comment-13686994
 ] 

Arun C Murthy commented on YARN-553:


We'll need to return a struct with both GetNewApplicationResponse and 
ApplicationSubmissionContext though, since GetNewApplicationResponse has other 
pieces of info such as max-container etc. Thoughts?
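
For illustration, one hypothetical shape for such a struct - the class and 
method names below are invented for this sketch and are not from any attached 
patch:
{code}
// Invented wrapper pairing the two records discussed above.
public class NewApplication {
  private final GetNewApplicationResponse newAppResponse;
  private final ApplicationSubmissionContext submissionContext;

  public NewApplication(GetNewApplicationResponse newAppResponse,
      ApplicationSubmissionContext submissionContext) {
    this.newAppResponse = newAppResponse;
    this.submissionContext = submissionContext;
  }

  // Cluster-side info that only the response carries (max-container etc.).
  public GetNewApplicationResponse getNewApplicationResponse() {
    return newAppResponse;
  }

  // Context pre-populated with the new application ID, ready to fill and submit.
  public ApplicationSubmissionContext getApplicationSubmissionContext() {
    return submissionContext;
  }
}
{code}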

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside from just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687052#comment-13687052
 ] 

Bikas Saha commented on YARN-569:
-

The correct place for each config depends on which Configuration object is 
used to read it, so it could end up being a mix of yarn-site.xml and 
capacity-scheduler.xml. We need to check the code to see which config file 
each of them is read from.

On a related note, I saw the following in the ApplicationMasterService code.

We are setting values on the AllocateResponse after it has already replaced 
lastResponse in the responseMap. That entire section is guarded by the 
lastResponse value obtained from this map (of questionable effectiveness 
perhaps, but that is orthogonal), so we should probably set everything on the 
new response (the preemption stuff included) before it replaces lastResponse 
in the responseMap.
{code}
  AllocateResponse oldResponse =
      responseMap.put(appAttemptId, allocateResponse);
  if (oldResponse == null) {
    // appAttempt got unregistered, remove it back out
    responseMap.remove(appAttemptId);
    String message = "App Attempt removed from the cache during allocate "
        + appAttemptId;
    LOG.error(message);
    return resync;
  }

  allocateResponse.setNumClusterNodes(this.rScheduler.getNumClusterNodes());

  // add preemption to the allocateResponse message (if any)
  allocateResponse.setPreemptionMessage(generatePreemptionMessage(allocation));

  // Adding NMTokens for allocated containers.
  if (!allocation.getContainers().isEmpty()) {
    allocateResponse.setNMTokens(rmContext.getNMTokenSecretManager()
        .getNMTokens(app.getUser(), appAttemptId,
            allocation.getContainers()));
  }
  return allocateResponse;
{code}
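
For clarity, here is a sketch of the reordering being suggested, using the 
same identifiers as the snippet above: fully populate the new response first, 
and only then publish it into the responseMap.
{code}
// Build the response completely before it becomes visible as lastResponse.
allocateResponse.setNumClusterNodes(this.rScheduler.getNumClusterNodes());
// add preemption to the allocateResponse message (if any)
allocateResponse.setPreemptionMessage(generatePreemptionMessage(allocation));
// Adding NMTokens for allocated containers.
if (!allocation.getContainers().isEmpty()) {
  allocateResponse.setNMTokens(rmContext.getNMTokenSecretManager()
      .getNMTokens(app.getUser(), appAttemptId, allocation.getContainers()));
}

// Only now swap it in as the latest response for this attempt.
AllocateResponse oldResponse =
    responseMap.put(appAttemptId, allocateResponse);
if (oldResponse == null) {
  // appAttempt got unregistered, remove it back out
  responseMap.remove(appAttemptId);
  LOG.error("App Attempt removed from the cache during allocate "
      + appAttemptId);
  return resync;
}
return allocateResponse;
{code}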

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.patch, 
 YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to applications' resource 
 requests and node updates, and the more introspective, time-based 
 considerations needed to observe and correct for capacity balance. For this 
 purpose, rather than hacking the delicate mechanisms of the CapacityScheduler 
 directly, we opted to add support for preemption by means of a Capacity 
 Monitor, which can be run optionally as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to the equivalent functionality in the fair 
 scheduler) runs on an interval (e.g., every 3 seconds), observes the state of 
 the assignment of resources to queues by the capacity scheduler, performs 
 off-line computation to determine whether preemption is needed and how best 
 to edit the current schedule to improve capacity, and generates events that 
 produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions, the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers from a 
 queue) rather than trying to tightly and consistently micromanage container 
 allocations. 
 - Preemption policy (ProportionalCapacityPreemptionPolicy) - 
 Preemption policies are by design pluggable; in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with. The ProportionalCapacityPreemptionPolicy behaves as 
 follows (a toy illustration of steps 2-3 follows the list):
 # it gathers from the scheduler the state of the queues, in particular their 
 current capacity, guaranteed capacity, and pending requests (*)
 # if there are pending requests from queues that are under capacity, it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow for each round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations 
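
 To give a concrete feel for steps 2-3 above, here is a deliberately toy, 
 self-contained illustration of computing proportional give-back amounts. It 
 is not code from the attached patches: it works in abstract integer resource 
 units instead of Resource objects, assumes all three maps share the same 
 queue keys, and ignores per-round bounds and natural completion rates.
 {code}
 import java.util.HashMap;
 import java.util.Map;

 public class ToyPreemptionMath {
   // guaranteed/used/pending are keyed by queue name, in abstract units.
   public static Map<String, Integer> toPreempt(Map<String, Integer> guaranteed,
       Map<String, Integer> used, Map<String, Integer> pending) {
     // Step 2: how much under-capacity queues could still legitimately claim.
     int totalShortfall = 0;
     for (String q : guaranteed.keySet()) {
       totalShortfall += Math.min(pending.get(q),
           Math.max(0, guaranteed.get(q) - used.get(q)));
     }
     // Step 3: claw that back from over-capacity queues, proportionally to
     // how far over their guarantee each one runs.
     int totalOver = 0;
     for (String q : guaranteed.keySet()) {
       totalOver += Math.max(0, used.get(q) - guaranteed.get(q));
     }
     Map<String, Integer> result = new HashMap<String, Integer>();
     for (String q : guaranteed.keySet()) {
       int over = Math.max(0, used.get(q) - guaranteed.get(q));
       int give = (totalOver == 0) ? 0
           : (int) Math.round((double) totalShortfall * over / totalOver);
       result.put(q, Math.min(give, over));
     }
     return result;
   }
 }
 {code}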

[jira] [Created] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-848:


 Summary: Nodemanager does not register with RM using the fully 
qualified hostname
 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah


If the hostname is misconfigured to be fully qualified ( i.e. hostname returns 
foo and hostname -f returns foo.bar.xyz ), the NM ends up registering with the 
RM using only foo. This can create problems if DNS cannot resolve the 
hostname properly. 

Furthermore, HDFS uses fully qualified hostnames which can end up reducing 
locality matches when allocating containers based on block locations. 
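
As background (this is not the attached patch), the short-vs-fully-qualified 
distinction is visible through the standard JDK APIs; what each call returns 
still depends on the host's resolver configuration:
{code}
import java.net.InetAddress;

// Illustration only: prints the short and the fully qualified hostname as the
// JDK resolves them, e.g. "foo" vs "foo.bar.xyz" on the host described above.
public class HostnameCheck {
  public static void main(String[] args) throws Exception {
    InetAddress local = InetAddress.getLocalHost();
    System.out.println("host name: " + local.getHostName());
    System.out.println("canonical: " + local.getCanonicalHostName());
  }
}
{code}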

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Description: 
If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
with the RM using only foo. This can create problems if DNS cannot resolve 
the hostname properly. 

Furthermore, HDFS uses fully qualified hostnames which can end up reducing 
locality matches when allocating containers based on block locations. 

  was:
If the hostname is misconfigured to be fully qualified ( i.e. hostname returns 
foo and hostname -f returns foo.bar.xyz ), the NM ends up registering with the 
RM using only foo. This can create problems if DNS cannot resolve the 
hostname properly. 

Furthermore, HDFS uses fully qualified hostnames which can end up reducing 
locality matches when allocating containers based on block locations. 


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah

 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up reducing 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-849) YARN should figure out the hostname instead of expecting it in the Application Register call

2013-06-18 Thread Jian He (JIRA)
Jian He created YARN-849:


 Summary: YARN should figure out the hostname instead of expecting 
it in the Application Register call 
 Key: YARN-849
 URL: https://issues.apache.org/jira/browse/YARN-849
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-849) YARN should figure out the App hostname instead of expecting it in the Application Register call

2013-06-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-849:
-

Summary: YARN should figure out the App hostname instead of expecting it in 
the Application Register call   (was: YARN should figure out the hostname 
instead of expecting it in the Application Register call )

 YARN should figure out the App hostname instead of expecting it in the 
 Application Register call 
 -

 Key: YARN-849
 URL: https://issues.apache.org/jira/browse/YARN-849
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Description: 
If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
with the RM using only foo. This can create problems if DNS cannot resolve 
the hostname properly. 

Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
locality matches when allocating containers based on block locations. 

  was:
If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
with the RM using only foo. This can create problems if DNS cannot resolve 
the hostname properly. 

Furthermore, HDFS uses fully qualified hostnames which can end up reducing 
locality matches when allocating containers based on block locations. 


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah

 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.2.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 NM will validate an NMToken in the following manner (a rough sketch follows 
 this list):
 * If the NMToken uses the current or the previous master key, the NMToken is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, it will be validated against that key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken. 
 This will be replaced with the NMToken; from now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will still exist, but it will only be used to validate 
 the AM's container start request.
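
 A rough sketch of the three validation rules just listed; the field and 
 method names below are illustrative, not the real NM secret-manager API:
 {code}
 // appIdToKeyId stands in for an assumed Map<ApplicationId, Integer> cache.
 boolean isNMTokenValid(int tokenKeyId, ApplicationId appId) {
   // Rule 1: the current or previous master key is always accepted, and the
   // key is remembered for this application.
   if (tokenKeyId == currentMasterKeyId || tokenKeyId == previousMasterKeyId) {
     appIdToKeyId.put(appId, tokenKeyId);
     return true;
   }
   // Rule 2: an older key is accepted only if it is the one cached for the app.
   Integer cachedKeyId = appIdToKeyId.get(appId);
   return cachedKeyId != null && cachedKeyId == tokenKeyId;
   // Rule 3: the NM rejects the AM's calls when this returns false.
 }
 {code}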

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.3.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 NM will validate an NMToken in the following manner:
 * If the NMToken uses the current or the previous master key, the NMToken is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, it will be validated against that key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken. 
 This will be replaced with the NMToken; from now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will still exist, but it will only be used to validate 
 the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Attachment: YARN-848.1.patch

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Jian He (JIRA)
Jian He created YARN-850:


 Summary: Rename AMRMClient#getClusterAvailableResources, 
AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-850:
-

Attachment: YARN-850.patch

trivial rename patch

 Rename AMRMClient#getClusterAvailableResources, 
 AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 -

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.4.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch


 AM uses the NMToken to authenticate all AM-NM communication.
 NM will validate an NMToken in the following manner:
 * If the NMToken uses the current or the previous master key, the NMToken is 
 valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, it will be validated against that key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken. 
 This will be replaced with the NMToken; from now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will still exist, but it will only be used to validate 
 the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687174#comment-13687174
 ] 

Bikas Saha commented on YARN-850:
-

Will commit when tests pass.

 Rename AMRMClient#getClusterAvailableResources, 
 AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 -

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687172#comment-13687172
 ] 

Bikas Saha commented on YARN-850:
-

Thanks Jian. This matches the method name used in the YARN API's 
AllocateResponse. Makes more sense. +1.
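
As a hedged sketch of the call-site change an AM makes after this rename 
(client creation and registration elided, since they are unrelated to the 
rename):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class HeadroomLogger {
  // Previously: client.getClusterAvailableResources()
  static void logHeadroom(AMRMClient<AMRMClient.ContainerRequest> client) {
    Resource headroom = client.getAvailableResources();
    System.out.println("Available headroom: " + headroom);
  }
}
{code}

The value is the same headroom the RM reports in AllocateResponse; only the 
client-side method name changes.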

 Rename AMRMClient#getClusterAvailableResources, 
 AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 -

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-850:


Affects Version/s: 2.1.0-beta

 Rename AMRMClient#getClusterAvailableResources, 
 AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 -

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687183#comment-13687183
 ] 

Hadoop QA commented on YARN-848:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588444/YARN-848.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1331//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1331//console

This message is automatically generated.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 
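
A standalone sketch of the distinction at issue: hostname roughly maps to 
InetAddress#getHostName, while hostname -f maps to 
InetAddress#getCanonicalHostName, which performs the lookup for the fully 
qualified name. A fix along these lines would have the NM register the 
canonical form:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameCheck {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress addr = InetAddress.getLocalHost();
    System.out.println("configured name:      " + addr.getHostName());
    System.out.println("fully qualified name: " + addr.getCanonicalHostName());
  }
}
{code}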

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename AMRMClient#getClusterAvailableResources, AMRMClientAsync#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687196#comment-13687196
 ] 

Hadoop QA commented on YARN-850:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588448/YARN-850.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1332//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1332//console

This message is automatically generated.

 Rename AMRMClient#getClusterAvailableResources, 
 AMRMClientAsync#getClusterAvailableResources to getAvailableResources
 -

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-850) Rename getClusterAvailableResources to getAvailableResources in AMRMClients

2013-06-18 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-850:


Summary: Rename getClusterAvailableResources to getAvailableResources in 
AMRMClients  (was: Rename AMRMClient*#getClusterAvailableResources to 
getAvailableResources)

 Rename getClusterAvailableResources to getAvailableResources in AMRMClients
 ---

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-850) Rename AMRMClient*#getClusterAvailableResources to getAvailableResources

2013-06-18 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-850:


Summary: Rename AMRMClient*#getClusterAvailableResources to 
getAvailableResources  (was: Rename AMRMClient#getClusterAvailableResources, 
AMRMClientAsync#getClusterAvailableResources to getAvailableResources)

 Rename AMRMClient*#getClusterAvailableResources to getAvailableResources
 

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename getClusterAvailableResources to getAvailableResources in AMRMClients

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687218#comment-13687218
 ] 

Hudson commented on YARN-850:
-

Integrated in Hadoop-trunk-Commit #3969 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3969/])
YARN-850. Rename getClusterAvailableResources to getAvailableResources in 
AMRMClients (Jian He via bikas) (Revision 1494309)

 Result = SUCCESS
bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494309
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java


 Rename getClusterAvailableResources to getAvailableResources in AMRMClients
 ---

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687248#comment-13687248
 ] 

Siddharth Seth commented on YARN-848:
-

+1. Simple enough patch. Hitesh, could you please rebase this on top of 
YARN-694, which should go in soon.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687252#comment-13687252
 ] 

Hadoop QA commented on YARN-694:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588449/YARN-694-20130618.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1333//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1333//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1333//console

This message is automatically generated.

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687251#comment-13687251
 ] 

Vinod Kumar Vavilapalli commented on YARN-848:
--

Also, any manual tests to validate that it works? Or at least run 
TestContainerManager.java where hostname != resolved hostname.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687256#comment-13687256
 ] 

Vinod Kumar Vavilapalli commented on YARN-694:
--

This looks good; +1 except for the findbugs warning. The tests seem to be 
failing only on Jenkins, can you check?

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.5.patch

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-739) NM startContainer should validate the NodeId

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved YARN-739.


   Resolution: Fixed
Fix Version/s: 2.1.0-beta

 NM startContainer should validate the NodeId
 

 Key: YARN-739
 URL: https://issues.apache.org/jira/browse/YARN-739
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta


 The NM validates certain fields from the ContainerToken on a startContainer 
 call. It should also validate the NodeId (which needs to be added to the 
 ContainerToken).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-739) NM startContainer should validate the NodeId

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687272#comment-13687272
 ] 

Omkar Vinit Joshi commented on YARN-739:


YARN-694 fixes this. Closing this.

 NM startContainer should validate the NodeId
 

 Key: YARN-739
 URL: https://issues.apache.org/jira/browse/YARN-739
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Omkar Vinit Joshi

 The NM validates certain fields from the ContainerToken on a startContainer 
 call. It should also validate the NodeId (which needs to be added to the 
 ContainerToken).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-739) NM startContainer should validate the NodeId

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687275#comment-13687275
 ] 

Omkar Vinit Joshi commented on YARN-739:


The same check is added for the NMToken too.
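
A minimal standalone sketch of the check described here, with plain strings 
standing in for the real YARN token and NodeId types:

{code}
public class NodeIdCheck {
  // Reject a start-container request whose token was issued for another node.
  static void validateNodeId(String tokenNodeId, String thisNodeId) {
    if (!thisNodeId.equals(tokenNodeId)) {
      throw new SecurityException("Token issued for " + tokenNodeId
          + " but presented to " + thisNodeId);
    }
  }

  public static void main(String[] args) {
    validateNodeId("host1:45454", "host1:45454"); // passes
    validateNodeId("host1:45454", "host2:45454"); // throws SecurityException
  }
}
{code}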

 NM startContainer should validate the NodeId
 

 Key: YARN-739
 URL: https://issues.apache.org/jira/browse/YARN-739
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta


 The NM validates certain fields from the ContainerToken on a startContainer 
 call. It should also validate the NodeId (which needs to be added to the 
 ContainerToken).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-818) YARN application classpath should add $PWD/* in addition to $PWD

2013-06-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-818:
-

Attachment: YARN-818.patch

Added $PWD/* to YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH and 
yarn-default.xml; also added a test case.
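
For illustration, a small sketch that prints the effective application 
classpath, falling back to the compiled-in default when yarn-site.xml does 
not override it. (A bare $PWD entry exposes only classes and resources in 
the container's working directory; the $PWD/* wildcard is what pulls in the 
jars there.)

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClasspathCheck {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Use the configured classpath if set, else the compiled-in default.
    String[] cp = conf.getStrings(
        YarnConfiguration.YARN_APPLICATION_CLASSPATH,
        YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH);
    for (String entry : cp) {
      System.out.println(entry);
    }
  }
}
{code}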

 YARN application classpath should add $PWD/* in addition to $PWD
 

 Key: YARN-818
 URL: https://issues.apache.org/jira/browse/YARN-818
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Jian He
 Attachments: YARN-818.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Attachment: YARN-848.2.patch

Rebased against YARN-694.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.2.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-575) ContainerManager APIs should be user accessible

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687306#comment-13687306
 ] 

Omkar Vinit Joshi commented on YARN-575:


I guess this can be closed now. After YARN-694, we can communicate with the 
NodeManager using the NMToken.

 ContainerManager APIs should be user accessible
 ---

 Key: YARN-575
 URL: https://issues.apache.org/jira/browse/YARN-575
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
Priority: Critical

 Auth for ContainerManager is based on the containerId being accessed - since 
 this is what is used to launch containers (There's likely another jira 
 somewhere to change this to not be containerId based).
 What this also means is the API is effectively not usable with kerberos 
 credentials.
 Also, it should be possible to use this API with some generic tokens 
 (RMDelegation?), instead of with Container specific tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687311#comment-13687311
 ] 

Hadoop QA commented on YARN-848:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588471/YARN-848.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1336//console

This message is automatically generated.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.2.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-818) YARN application classpath should add $PWD/* in addition to $PWD

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687313#comment-13687313
 ] 

Hadoop QA commented on YARN-818:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588468/YARN-818.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

  org.apache.hadoop.mapreduce.v2.util.TestMRApps

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1335//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1335//console

This message is automatically generated.

 YARN application classpath should add $PWD/* in addition to $PWD
 

 Key: YARN-818
 URL: https://issues.apache.org/jira/browse/YARN-818
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Jian He
 Attachments: YARN-818.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Attachment: YARN-848.3.patch

Correct patch, rebased on YARN-694.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-848:
-

Attachment: (was: YARN-848.2.patch)

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687350#comment-13687350
 ] 

Hadoop QA commented on YARN-694:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588465/YARN-694-20130618.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 23 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1334//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1334//console

This message is automatically generated.

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687353#comment-13687353
 ] 

Hadoop QA commented on YARN-848:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588472/YARN-848.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1337//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1337//console

This message is automatically generated.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687370#comment-13687370
 ] 

Hudson commented on YARN-694:
-

Integrated in Hadoop-trunk-Commit #3973 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3973/])
YARN-694. Starting to use NMTokens to authenticate all communication with 
NodeManagers. Contributed by Omkar Vinit Joshi. (Revision 1494369)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494369
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java
* 

[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.patch.yarn-694-branch-2.1-beta

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch, YARN-694-20130618.patch.branch-2, 
 YARN-694-20130618.patch.yarn-694-branch-2.1-beta


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-694:
---

Attachment: YARN-694-20130618.patch.branch-2

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch, YARN-694-20130618.patch.branch-2, 
 YARN-694-20130618.patch.yarn-694-branch-2.1-beta


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate an NMToken as follows:
 * If the NMToken uses the current or previous master key, then it is valid. 
 In this case the NM updates its cache with this key for the corresponding 
 appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it is validated against that cached key.
 * If the NMToken is invalid, the NM rejects the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken; 
 this will be replaced with the NMToken. From now on the AM will use one 
 NMToken per NM (replacing the earlier behavior of one ContainerToken per 
 container per NM).
 * In a secured environment, startContainer currently takes the 
 ContainerToken from the UGI (YARN-617); after this change it will take it 
 from the payload (Container).
 * The ContainerToken will still exist, but will only be used to validate the 
 AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-845) RM crash with NPE on NODE_UPDATE

2013-06-18 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687382#comment-13687382
 ] 

Mayank Bansal commented on YARN-845:


[~arpitgupta]

I couldn't reproduce this issue and didn't find any scenario where it can 
happen.

I did find one race scenario which could trigger this in NODE_UPDATE.

Attaching a draft patch; I'm not sure it solves this issue, but the 
improvement seems worth adding.

If you can share more info on how you reproduced this issue, that would be a 
great help.

Thanks,
Mayank
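
A standalone illustration (not YARN code) of the suspected check-then-act 
race: a reservation can be cleared between the moment NODE_UPDATE decides to 
fulfill it and the moment it is dereferenced. The guard below mirrors the 
kind of defensive check a fix would add:

{code}
import java.util.concurrent.atomic.AtomicReference;

public class ReservationRace {
  static final AtomicReference<String> reserved =
      new AtomicReference<>("container_42");

  static void fulfillReservation() {
    String c = reserved.getAndSet(null); // atomically claim the reservation
    if (c == null) {
      // Lost the race: another update already unreserved it. Without this
      // guard, dereferencing the reservation here would NPE.
      System.out.println("reservation already gone, skipping");
      return;
    }
    System.out.println("fulfilled " + c);
  }

  public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread(ReservationRace::fulfillReservation);
    Thread t2 = new Thread(ReservationRace::fulfillReservation);
    t1.start(); t2.start();
    t1.join(); t2.join();
  }
}
{code}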

 RM crash with NPE on NODE_UPDATE
 

 Key: YARN-845
 URL: https://issues.apache.org/jira/browse/YARN-845
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Arpit Gupta
 Attachments: rm.log


 the following stack trace is generated in rm
 {code}
 n, service: 68.142.246.147:45454 }, ] resource=memory:1536, vCores:1 
 queue=default: capacity=1.0, absoluteCapacity=1.0, 
 usedResources=memory:44544, vCores:29usedCapacity=0.90625, 
 absoluteUsedCapacity=0.90625, numApps=1, numContainers=29 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,655 INFO  capacity.ParentQueue 
 (ParentQueue.java:completedContainer(696)) - completedContainer queue=root 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:completedContainer(832)) - Application 
 appattempt_1371448527090_0844_01 released container 
 container_1371448527090_0844_01_05 on node: host: hostXX:45454 
 #containers=4 available=2048 used=6144 with event: FINISHED
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:nodeUpdate(661)) - Trying to fulfill reservation for 
 application application_1371448527090_0844 on node: hostXX:45454
 2013-06-17 12:43:53,656 INFO  fica.FiCaSchedulerApp 
 (FiCaSchedulerApp.java:unreserve(435)) - Application 
 application_1371448527090_0844 unreserved  on node host: hostXX:45454 
 #containers=4 available=2048 used=6144, currently has 4 at priority 20; 
 currentReservation memory:6144, vCores:4
 2013-06-17 12:43:53,656 INFO  scheduler.AppSchedulingInfo 
 (AppSchedulingInfo.java:updateResourceRequests(168)) - checking for 
 deactivate...
 2013-06-17 12:43:53,657 FATAL resourcemanager.ResourceManager 
 (ResourceManager.java:run(422)) - Error in handling event type NODE_UPDATE to 
 the scheduler
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.unreserve(FiCaSchedulerApp.java:432)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.unreserve(LeafQueue.java:1416)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainer(LeafQueue.java:1346)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignOffSwitchContainers(LeafQueue.java:1221)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainersOnNode(LeafQueue.java:1180)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignReservedContainer(LeafQueue.java:939)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:803)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:665)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:727)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:83)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:413)
 at java.lang.Thread.run(Thread.java:662)
 2013-06-17 12:43:53,659 INFO  resourcemanager.ResourceManager 
 (ResourceManager.java:run(426)) - Exiting, bbye..
 2013-06-17 12:43:53,665 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 SelectChannelConnector@hostXX:8088
 2013-06-17 12:43:53,765 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(513)) - InterruptedExcpetion 
 recieved for ExpiredTokenRemover thread java.lang.InterruptedException: sleep 
 interrupted
 2013-06-17 12:43:53,766 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:stop(200)) - Stopping ResourceManager metrics 
 system...
 2013-06-17 12:43:53,767 INFO  

[jira] [Updated] (YARN-845) RM crash with NPE on NODE_UPDATE

2013-06-18 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-845:
---

Attachment: YARN-845-trunk-draft.patch

 RM crash with NPE on NODE_UPDATE
 

 Key: YARN-845
 URL: https://issues.apache.org/jira/browse/YARN-845
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Arpit Gupta
 Attachments: rm.log, YARN-845-trunk-draft.patch


 the following stack trace is generated in rm
 {code}
 n, service: 68.142.246.147:45454 }, ] resource=memory:1536, vCores:1 
 queue=default: capacity=1.0, absoluteCapacity=1.0, 
 usedResources=memory:44544, vCores:29usedCapacity=0.90625, 
 absoluteUsedCapacity=0.90625, numApps=1, numContainers=29 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,655 INFO  capacity.ParentQueue 
 (ParentQueue.java:completedContainer(696)) - completedContainer queue=root 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:completedContainer(832)) - Application 
 appattempt_1371448527090_0844_01 released container 
 container_1371448527090_0844_01_05 on node: host: hostXX:45454 
 #containers=4 available=2048 used=6144 with event: FINISHED
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:nodeUpdate(661)) - Trying to fulfill reservation for 
 application application_1371448527090_0844 on node: hostXX:45454
 2013-06-17 12:43:53,656 INFO  fica.FiCaSchedulerApp 
 (FiCaSchedulerApp.java:unreserve(435)) - Application 
 application_1371448527090_0844 unreserved  on node host: hostXX:45454 
 #containers=4 available=2048 used=6144, currently has 4 at priority 20; 
 currentReservation memory:6144, vCores:4
 2013-06-17 12:43:53,656 INFO  scheduler.AppSchedulingInfo 
 (AppSchedulingInfo.java:updateResourceRequests(168)) - checking for 
 deactivate...
 2013-06-17 12:43:53,657 FATAL resourcemanager.ResourceManager 
 (ResourceManager.java:run(422)) - Error in handling event type NODE_UPDATE to 
 the scheduler
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.unreserve(FiCaSchedulerApp.java:432)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.unreserve(LeafQueue.java:1416)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainer(LeafQueue.java:1346)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignOffSwitchContainers(LeafQueue.java:1221)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainersOnNode(LeafQueue.java:1180)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignReservedContainer(LeafQueue.java:939)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:803)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:665)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:727)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:83)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:413)
 at java.lang.Thread.run(Thread.java:662)
 2013-06-17 12:43:53,659 INFO  resourcemanager.ResourceManager 
 (ResourceManager.java:run(426)) - Exiting, bbye..
 2013-06-17 12:43:53,665 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 SelectChannelConnector@hostXX:8088
 2013-06-17 12:43:53,765 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(513)) - InterruptedExcpetion 
 recieved for ExpiredTokenRemover thread java.lang.InterruptedException: sleep 
 interrupted
 2013-06-17 12:43:53,766 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:stop(200)) - Stopping ResourceManager metrics 
 system...
 2013-06-17 12:43:53,767 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:stop(206)) - ResourceManager metrics system stopped.
 2013-06-17 12:43:53,767 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:shutdown(572)) - ResourceManager metrics system 
 shutdown complete.
 2013-06-17 12:43:53,768 WARN  amlauncher.ApplicationMasterLauncher 
 (ApplicationMasterLauncher.java:run(98)) - 
 
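The trace above bottoms out in FiCaSchedulerApp.unreserve(). As a hedged 
illustration only, the failure shape it suggests is an unreserve arriving for a 
node that no longer holds a reservation; the sketch below shows that guard with 
hypothetical stand-in names (class, map, messages), while the attached 
YARN-845-trunk-draft.patch remains the authoritative fix.
{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the scheduler's per-node reservation bookkeeping.
public class UnreserveGuardSketch {
  private final Map<String, Integer> reservedMbByNode = new HashMap<String, Integer>();

  public synchronized void reserve(String nodeId, int mb) {
    reservedMbByNode.put(nodeId, mb);
  }

  // Guards the failure mode the NullPointerException suggests: an unreserve
  // for a node with no reservation returns false instead of dereferencing a
  // missing entry.
  public synchronized boolean unreserve(String nodeId) {
    Integer mb = reservedMbByNode.remove(nodeId);
    if (mb == null) {
      System.out.println("No reservation on " + nodeId + "; ignoring unreserve");
      return false;
    }
    System.out.println("Released " + mb + "MB reserved on " + nodeId);
    return true;
  }
}
{code}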

[jira] [Resolved] (YARN-820) NodeManager has invalid state transition after error in resource localization

2013-06-18 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal resolved YARN-820.


Resolution: Duplicate

 NodeManager has invalid state transition after error in resource localization
 -

 Key: YARN-820
 URL: https://issues.apache.org/jira/browse/YARN-820
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Mayank Bansal
 Attachments: yarn-user-nodemanager-localhost.localdomain.log




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687388#comment-13687388
 ] 

Siddharth Seth commented on YARN-848:
-

+1. Thanks Hitesh. Committing this.

 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified (i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames, which can end up affecting 
 locality matches when allocating containers based on block locations. 
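To see the mismatch the description talks about, the plain JDK calls below 
contrast the configured name with the reverse-lookup FQDN; this is a 
diagnostic sketch, not part of the patch.
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameCheck {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress addr = InetAddress.getLocalHost();
    // May print the short name ("foo") when the host is misconfigured.
    System.out.println("hostName          = " + addr.getHostName());
    // Reverse lookup; typically yields the FQDN ("foo.bar.xyz"), like hostname -f.
    System.out.println("canonicalHostName = " + addr.getCanonicalHostName());
  }
}
{code}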

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687400#comment-13687400
 ] 

Hudson commented on YARN-848:
-

Integrated in Hadoop-trunk-Commit #3974 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3974/])
YARN-848. Fix NodeManager to register with RM using the fully qualified 
hostname. Contributed by Hitesh Shah. (Revision 1494385)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494385
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Fix For: 2.1.0-beta

 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified (i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames, which can end up affecting 
 locality matches when allocating containers based on block locations. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-553:
--

Attachment: yarn-553-3.patch

Patch yarn-553-3.patch that:
# removes getNewApplication calls from YarnClient
# adds createApplication that returns ApplicationSubmissionContext
# adds get/set MaximumResourceCapability to ApplicationSubmissionContext (see 
the sketch below)
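
For reference, client code under this patch might look roughly like the 
fragment below. It is hypothetical: createApplication() and 
getMaximumResourceCapability() are the names proposed here, not a committed 
API.
{code}
// Hypothetical fragment of the proposed flow; not a committed API.
void submitSketch(YarnClient yarnClient) throws Exception {
  ApplicationSubmissionContext appContext = yarnClient.createApplication();
  // The context already carries the new ApplicationId, so the separate
  // getNewApplication()/setApplicationId() steps go away.
  Resource maxCapability = appContext.getMaximumResourceCapability();
  appContext.setApplicationName("my-app"); // illustrative only
  yarnClient.submitApplication(appContext);
}
{code}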

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-18 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-851:
--

 Summary: Share NMTokens using NMTokenCache (api-based) instead of 
memory based approach which is used currently.
 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi


This is a follow-up ticket to YARN-694, changing the way NMTokens are shared.
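
Since the ticket only states the direction (an api-based cache instead of 
ad-hoc in-memory sharing), the following is purely an illustrative sketch of a 
node-address-keyed token cache; none of these names come from the patch.
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative only; a real cache would hold NMToken records, not raw bytes.
public class NmTokenCacheSketch {
  private final ConcurrentMap<String, byte[]> tokens =
      new ConcurrentHashMap<String, byte[]>();

  public void setToken(String nodeAddr, byte[] token) { tokens.put(nodeAddr, token); }

  public byte[] getToken(String nodeAddr) { return tokens.get(nodeAddr); }

  public boolean containsToken(String nodeAddr) { return tokens.containsKey(nodeAddr); }
}
{code}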

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687409#comment-13687409
 ] 

Omkar Vinit Joshi commented on YARN-851:


Thanks [~bikassaha] for identifying this.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi

 This is a follow-up ticket to YARN-694, changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-613) Create NM proxy per NM instead of per container

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687411#comment-13687411
 ] 

Omkar Vinit Joshi commented on YARN-613:


As part of YARN-694, ContainerManagementProtocolProxy was added to support a 
per-NodeManager proxy. Since everything covered here has been fixed, closing 
this.
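
To make the per-NM idea concrete, here is an illustrative sketch of proxy 
reuse keyed by NodeManager address; it is not the actual 
ContainerManagementProtocolProxy code, and every name in it is a stand-in.
{code}
import java.util.HashMap;
import java.util.Map;

public class PerNodeProxyCache<P> {
  public interface ProxyFactory<P> {
    P create(String nmAddress);
  }

  private final Map<String, P> proxies = new HashMap<String, P>();
  private final ProxyFactory<P> factory;

  public PerNodeProxyCache(ProxyFactory<P> factory) {
    this.factory = factory;
  }

  // One proxy per NodeManager address, shared by every container on that
  // node, instead of one connection per container.
  public synchronized P getProxy(String nmAddress) {
    P proxy = proxies.get(nmAddress);
    if (proxy == null) {
      proxy = factory.create(nmAddress);
      proxies.put(nmAddress, proxy);
    }
    return proxy;
  }
}
{code}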

 Create NM proxy per NM instead of per container
 ---

 Key: YARN-613
 URL: https://issues.apache.org/jira/browse/YARN-613
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Omkar Vinit Joshi
 Attachments: AMNMToken.docx


 Currently a new NM proxy has to be created per container, since secure 
 authentication uses a container token from the container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-613) Create NM proxy per NM instead of per container

2013-06-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved YARN-613.


   Resolution: Fixed
Fix Version/s: 2.1.0-beta

 Create NM proxy per NM instead of per container
 ---

 Key: YARN-613
 URL: https://issues.apache.org/jira/browse/YARN-613
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: AMNMToken.docx


 Currently a new NM proxy has to be created per container, since secure 
 authentication uses a container token from the container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687421#comment-13687421
 ] 

Vinod Kumar Vavilapalli commented on YARN-791:
--

bq. I suggested this on YARN-642, but others didn't think it was a good idea. 
If we want to revisit this, I'd be happy to make the change, but I probably 
won't be able to post a completed patch until late tomorrow.
Yeah, I didn't see the point of multiple states at YARN-642, but from YARN-727, 
I can see how having a set is a more general API. I think we should do this.
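
To make the set-vs-single-state point concrete: a request that carries a set 
of states subsumes the single-state form, as in this toy sketch (the enum 
values are stand-ins, not necessarily YARN's actual NodeState names).
{code}
import java.util.EnumSet;

public class NodeStateFilterSketch {
  enum State { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED }

  // Set-based filter: callers can ask for one state, several, or all of them.
  static boolean matches(State nodeState, EnumSet<State> wanted) {
    return wanted.contains(nodeState);
  }

  public static void main(String[] args) {
    EnumSet<State> wanted = EnumSet.of(State.RUNNING, State.UNHEALTHY);
    System.out.println(matches(State.RUNNING, wanted)); // true
    System.out.println(matches(State.LOST, wanted));    // false
  }
}
{code}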

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-845) RM crash with NPE on NODE_UPDATE

2013-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687420#comment-13687420
 ] 

Hadoop QA commented on YARN-845:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588484/YARN-845-trunk-draft.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1338//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1338//console

This message is automatically generated.

 RM crash with NPE on NODE_UPDATE
 

 Key: YARN-845
 URL: https://issues.apache.org/jira/browse/YARN-845
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Mayank Bansal
 Attachments: rm.log, YARN-845-trunk-draft.patch


 The following stack trace is generated in the RM:
 {code}
 n, service: 68.142.246.147:45454 }, ] resource=memory:1536, vCores:1 
 queue=default: capacity=1.0, absoluteCapacity=1.0, 
 usedResources=memory:44544, vCores:29usedCapacity=0.90625, 
 absoluteUsedCapacity=0.90625, numApps=1, numContainers=29 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,655 INFO  capacity.ParentQueue 
 (ParentQueue.java:completedContainer(696)) - completedContainer queue=root 
 usedCapacity=0.90625 absoluteUsedCapacity=0.90625 used=memory:44544, 
 vCores:29 cluster=memory:49152, vCores:48
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:completedContainer(832)) - Application 
 appattempt_1371448527090_0844_01 released container 
 container_1371448527090_0844_01_05 on node: host: hostXX:45454 
 #containers=4 available=2048 used=6144 with event: FINISHED
 2013-06-17 12:43:53,656 INFO  capacity.CapacityScheduler 
 (CapacityScheduler.java:nodeUpdate(661)) - Trying to fulfill reservation for 
 application application_1371448527090_0844 on node: hostXX:45454
 2013-06-17 12:43:53,656 INFO  fica.FiCaSchedulerApp 
 (FiCaSchedulerApp.java:unreserve(435)) - Application 
 application_1371448527090_0844 unreserved  on node host: hostXX:45454 
 #containers=4 available=2048 used=6144, currently has 4 at priority 20; 
 currentReservation memory:6144, vCores:4
 2013-06-17 12:43:53,656 INFO  scheduler.AppSchedulingInfo 
 (AppSchedulingInfo.java:updateResourceRequests(168)) - checking for 
 deactivate...
 2013-06-17 12:43:53,657 FATAL resourcemanager.ResourceManager 
 (ResourceManager.java:run(422)) - Error in handling event type NODE_UPDATE to 
 the scheduler
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.unreserve(FiCaSchedulerApp.java:432)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.unreserve(LeafQueue.java:1416)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainer(LeafQueue.java:1346)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignOffSwitchContainers(LeafQueue.java:1221)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainersOnNode(LeafQueue.java:1180)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignReservedContainer(LeafQueue.java:939)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:803)
 at 
 

[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-18 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.13.patch

Changes the listApplications API to take in a list of application types.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
 YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, 
 YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-18 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687443#comment-13687443
 ] 

Sandy Ryza commented on YARN-791:
-

I'll work on a patch for multiple states.

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687466#comment-13687466
 ] 

Vinod Kumar Vavilapalli commented on YARN-727:
--

bq. Please take a look at the GnuParser options api in more detail. Possible 
solutions could be to use --list --apptype apptype1 --apptype apptype2 or 
--list list of comma separate app types. In both cases, args.length is not 
needed
Yeah. On first look, I like {code}yarn application -list --appTypes 
YARN,MAPREDUCE{code}, though it has the obvious problem of splitting on the 
comma. Long term, we can support both -appTypes and -appType YARN -appType 
MAPREDUCE. So how about just doing -appTypes for now?
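
Since the discussion leans on Commons CLI's GnuParser, here is a small sketch 
of comma-separated -appTypes parsing; the option names mirror this thread and 
are assumptions, not the committed CLI.
{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class AppTypesCliSketch {
  // Try: -list -appTypes YARN,MAPREDUCE
  public static void main(String[] args) throws ParseException {
    Options opts = new Options();
    opts.addOption(new Option("list", false, "list applications"));

    Option appTypes = new Option("appTypes", true, "application types to filter on");
    appTypes.setArgs(Option.UNLIMITED_VALUES); // accept more than one value
    appTypes.setValueSeparator(',');           // split "YARN,MAPREDUCE" into two values
    opts.addOption(appTypes);

    CommandLine cmd = new GnuParser().parse(opts, args);
    if (cmd.hasOption("list") && cmd.hasOption("appTypes")) {
      for (String type : cmd.getOptionValues("appTypes")) {
        System.out.println("filtering on app type: " + type);
      }
    }
  }
}
{code}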

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
 YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, 
 YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-18 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-553:
--

Attachment: (was: yarn-553-3.patch)

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

