[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684103#comment-13684103
 ] 

Hadoop QA commented on YARN-727:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587957/YARN-727.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.TestMRJobClient
  org.apache.hadoop.mapreduce.v2.TestMRJobs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1259//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1259//console

This message is automatically generated.

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-06-15 Thread Maysam Yabandeh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684104#comment-13684104
 ] 

Maysam Yabandeh commented on YARN-779:
--

Thanks [~sandyr]. Let me run my understanding of the problem by you, to ensure 
that we are on the same page. The reported erroneous scenario could be 
addressed by resetting the outstanding requests at the RM whenever ANY reaches 
0. The actual problem, however, still remains, since the AMRMClient receives a 
ContainerRequest and decomposes it into independent ResourceRequests. The 
information about the disjunction between the requested resources is thus not 
available at the RM to properly maintain the list of outstanding requests. 
Building on top of the original example, here is the erroneous scenario:

{code}
@AMRMClient
ContainerRequest(..., {node1, node2}, ..., 10)
ContainerRequest(..., {node3}, ..., 5)
{code}

The internal state at RM will be:

{code}
@AppSchedulingInfo
Resource   #
------------
node1     10
node2     10
node3      5
ANY       15
{code}

In other words, the original request of "(10*(node1 or node2)) and 5*node3" 
could be interpreted in a different way, such as "10*node1 and (5*(node2 or 
node3))". If my understanding is correct, then the solution lies in changing 
the API between the AM and the RM to also send the original disjunction between 
the requested resources. We would then need to change AppSchedulingInfo to 
properly maintain the added information. Does this make sense?
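
Illustration only (not part of any patch): a minimal Java sketch of the 
flattening described above. The names {{SimpleRequest}} and {{expand}} are 
invented for the example and are not the real AMRMClientImpl API.

{code:java}
// Illustration only: a simplified stand-in for the expansion the AMRMClient
// performs; SimpleRequest and expand() are made up, not the real API.
import java.util.*;

class SimpleRequest {
  final List<String> nodes;
  final int containers;
  SimpleRequest(List<String> nodes, int containers) {
    this.nodes = nodes;
    this.containers = containers;
  }
}

class ExpansionSketch {
  // Folds one request into a shared per-location table; this flat table is all
  // the RM sees, so the grouping between node1 and node2 is lost.
  static void expand(SimpleRequest req, String rack, Map<String, Integer> table) {
    for (String node : req.nodes) {
      table.merge(node, req.containers, Integer::sum);
    }
    table.merge(rack, req.containers, Integer::sum);
    table.merge("ANY", req.containers, Integer::sum);
  }

  public static void main(String[] args) {
    Map<String, Integer> table = new TreeMap<>();
    expand(new SimpleRequest(Arrays.asList("node1", "node2"), 10), "rack1", table);
    expand(new SimpleRequest(Arrays.asList("node3"), 5), "rack1", table);
    // table now holds: ANY=15, node1=10, node2=10, node3=5, rack1=15; the
    // disjunction "(10 on node1 or node2) and (5 on node3)" is unrecoverable.
    System.out.println(table);
  }
}
{code}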



> AMRMClient should clean up dangling unsatisfied request
> ---
>
> Key: YARN-779
> URL: https://issues.apache.org/jira/browse/YARN-779
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Maysam Yabandeh
>Priority: Critical
>
> If an AMRMClient places a ContainerRequest for 10 containers in node1 or 
> node2 (assuming a single rack), the resulting ResourceRequests will be:
> {code}
> location - containers
> ---------------------
> node1    - 10
> node2    - 10
> rack     - 10
> ANY      - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated 
> in node2, the following ResourceRequests will be outstanding on the RM:
> {code}
> location - containers
> ---------------------
> node1    - 5
> node2    - 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5 
> containers in node3, the resulting outstanding ResourceRequests on the RM 
> will be:
> {code}
> location - containers
> ---------------------
> node1    - 5
> node2    - 5
> node3    - 5
> rack     - 5
> ANY      - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1, and it will 
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation counts per 
> ContainerRequest, and when a count gets to zero it should update the RACK/ANY 
> entries, decrementing the dangling requests. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-659) RMStateStore's removeApplication APIs should just take an applicationId

2013-06-15 Thread Maysam Yabandeh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684136#comment-13684136
 ] 

Maysam Yabandeh commented on YARN-659:
--

But, it seems that FileSystemRMStateStore uses the "attempts" field of 
ApplicationState:
{code:java}
@Override
public synchronized void removeApplicationState(ApplicationState appState)
    throws Exception {
  String appId = appState.getAppId().toString();
  Path nodeRemovePath = getNodePath(rmAppRoot, appId);
  LOG.info("Removing info for app: " + appId + " at: " + nodeRemovePath);
  deleteFile(nodeRemovePath);
  for (ApplicationAttemptId attemptId : appState.attempts.keySet()) {
    removeApplicationAttemptState(attemptId.toString());
  }
}
{code}
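
For discussion only, a hedged sketch of the id-only signature YARN-659 asks 
for; it is not a patch, and it deliberately leaves open the point raised above, 
namely where the attempt ids would come from once ApplicationState.attempts is 
no longer passed in:

{code:java}
// Sketch only, not the committed change: the id-only shape YARN-659 proposes.
@Override
public synchronized void removeApplicationState(ApplicationId appId)
    throws Exception {
  String app = appId.toString();
  Path nodeRemovePath = getNodePath(rmAppRoot, app);
  LOG.info("Removing info for app: " + app + " at: " + nodeRemovePath);
  deleteFile(nodeRemovePath);
  // Open question raised above: without ApplicationState.attempts, the attempt
  // entries would have to be discovered by the store itself, for example by
  // listing children of the application's path, before they can be removed.
}
{code}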

> RMStateStore's removeApplication APIs should just take an applicationId
> ---
>
> Key: YARN-659
> URL: https://issues.apache.org/jira/browse/YARN-659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Vinod Kumar Vavilapalli
>
> There is no need to give in the whole state for removal - just an ID should 
> be enough when an app finishes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-639) Make AM of Distributed Shell Use NMClient

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684146#comment-13684146
 ] 

Hudson commented on YARN-639:
-

Integrated in Hadoop-Yarn-trunk #241 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/241/])
YARN-639. Modified Distributed Shell application to start using the new 
NMClient library. Contributed by Zhijie Shen. (Revision 1493280)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493280
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java


> Make AM of Distributed Shell Use NMClient
> -
>
> Key: YARN-639
> URL: https://issues.apache.org/jira/browse/YARN-639
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications/distributed-shell
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.1.0-beta
>
> Attachments: YARN-639.1.patch, YARN-639.2.patch, YARN-639.2.patch
>
>
> YARN-422 adds NMClient. AM of Distributed Shell should use it instead of 
> using ContainerManager directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-821) Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684140#comment-13684140
 ] 

Hudson commented on YARN-821:
-

Integrated in Hadoop-Yarn-trunk #241 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/241/])
YARN-821. Renamed setFinishApplicationStatus to setFinalApplicationStatus 
in FinishApplicationMasterRequest for consistency. Contributed by Jian He. 
(Revision 1493315)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493315
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestApplicationTokens.java


> Rename FinishApplicationMasterRequest.setFinishApplicationStatus to 
> setFinalApplicationStatus to be consistent with getter
> --
>
> Key: YARN-821
> URL: https://issues.apache.org/jira/browse/YARN-821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-821.1.patch, YARN-821.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-806) Move ContainerExitStatus from yarn.api to yarn.api.records

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684141#comment-13684141
 ] 

Hudson commented on YARN-806:
-

Integrated in Hadoop-Yarn-trunk #241 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/241/])
YARN-806. Moved ContainerExitStatus from yarn.api to yarn.api.records. 
Contributed by Jian He. (Revision 1493138)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493138
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java


> Move ContainerExitStatus from yarn.api to yarn.api.records
> --
>
> Key: YARN-806
> URL: https://issues.apache.org/jira/browse/YARN-806
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-806.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-782) vcores-pcores ratio functions differently from vmem-pmem ratio in misleading way

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684143#comment-13684143
 ] 

Hudson commented on YARN-782:
-

Integrated in Hadoop-Yarn-trunk #241 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/241/])
YARN-782. vcores-pcores ratio functions differently from vmem-pmem ratio in 
misleading way. (sandyr via tucu) (Revision 1493064)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493064
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> vcores-pcores ratio functions differently from vmem-pmem ratio in misleading 
> way 
> -
>
> Key: YARN-782
> URL: https://issues.apache.org/jira/browse/YARN-782
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Critical
> Fix For: 2.1.0-beta
>
> Attachments: YARN-782-1.patch, YARN-782.patch
>
>
> The vcores-pcores ratio functions differently from the vmem-pmem ratio in the 
> sense that the vcores-pcores ratio has an impact on allocations and the 
> vmem-pmem ratio does not.
> If I double my vmem-pmem ratio, the only change that occurs is that my 
> containers, after being scheduled, are less likely to be killed for using too 
> much virtual memory.  But if I double my vcore-pcore ratio, my nodes will 
> appear to the ResourceManager to contain double the amount of CPU space, 
> which will affect scheduling decisions.
> The lack of consistency will exacerbate the already difficult problem of 
> resource configuration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-789) Enable zero capabilities resource requests in fair scheduler

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684145#comment-13684145
 ] 

Hudson commented on YARN-789:
-

Integrated in Hadoop-Yarn-trunk #241 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/241/])
YARN-789. Enable zero capabilities resource requests in fair scheduler. 
(tucu) (Revision 1493219)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493219
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DefaultResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DominantResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Resources.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> Enable zero capabilities resource requests in fair scheduler
> 
>
> Key: YARN-789
> URL: https://issues.apache.org/jira/browse/YARN-789
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.1.0-beta
>
> Attachments: YARN-789.patch, YARN-789.patch, YARN-789.patch, 
> YARN-789.patch
>
>
> Per discussion in YARN-689, reposting updated use case:
> 1. I have a set of services co-existing with a Yarn cluster.
> 2. These services run out of band from Yarn. They are not started as yarn 
> containers and they don't use Yarn containers for processing.
> 3. These services use, dynamically, different amounts of CPU and memory based 
> on their load. They manage their CPU and memory requirements independently. 
> In other words, depending on their load, they may require more CPU but not 
> memory or vice-versa.
> By using YARN as the RM for these services I'm able to share and utilize the 
> resources of the cluster appropriately and in a dynamic way. Yarn keeps tabs 
> on all the resources.
> These services run an AM that reserves resources on their behalf. When this 
> AM gets the requested resources, the services bump up their CPU/memory 
> utilization out of band from Yarn. If the Yarn allocations are 
> released/preempted, the services back off on their resource utilization. By 
> doing this, Yarn and these services correctly share the cluster resources, 
> with the Yarn RM being the only one that does the overall resource bookkeeping.
> The services' AM, so as not to break the lifecycle of containers, starts 
> containers in the corresponding NMs. These container processes basically 
> sleep forever (i.e. sleep 1d). They use almost no CPU or memory 
> (less than 1MB). Thus it is reasonable to assume their required CPU and 
> memory utilization is NIL (more on hard enforcement later). Because of this 
> almost NIL utilization of CPU and memory, it is possible to specify, when 
> doing a request, zero as one of the dimensions (CPU or memory).
> The current limitation is that the increment is also the minimum. 
> If we set the memory increment to 1MB, then when doing a pure CPU request we 
> would have to specify 1MB of memory. That would work. However, it would allow 
> discretionary memory requests without a desired normalization (increments of 
> 256, 512, etc).
> If we set the CPU increment to 1CPU, then when doing a pure memory request we 
> would have to specify 1CPU. CPU amounts are much smaller than memory amou

[jira] [Commented] (YARN-811) Add a set of final _init/_start/_stop methods to CompositeService

2013-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684165#comment-13684165
 ] 

Steve Loughran commented on YARN-811:
-

Siddharth - I see. I went through all the subclasses and, when moving them to 
the {{serviceInit()}}/{{serviceStart()}}/{{serviceStop()}} methods, made sure 
they delegated to the superclasses' {{serviceInit()}}, {{serviceStart()}}, etc. 
methods.

As of right now, there are not - AFAIK - any calls to {{super.start()}} in the 
{{serviceStart()}} startup code of any of the service implementations. If there 
are, that's something that I should have caught and fixed, so file a JIRA as a 
bug and assign it to me.
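
To make the pattern concrete, here is a minimal sketch of the delegation 
described above. {{MyService}} is an invented example class, and the 
{{org.apache.hadoop.service}} package name assumes the post-YARN-117 layout:

{code:java}
// Invented example showing the delegation pattern: subclass logic lives in
// serviceInit()/serviceStart()/serviceStop() and delegates to the superclass's
// serviceXxx() methods, never to the public init()/start()/stop().
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.CompositeService;

public class MyService extends CompositeService {

  public MyService() {
    super("MyService");
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // create and add child services here via addService(...) if needed
    super.serviceInit(conf);   // inits all added children
  }

  @Override
  protected void serviceStart() throws Exception {
    // subclass-specific startup work, then delegate:
    super.serviceStart();      // starts the children; no call to super.start()
  }
}
{code}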

> Add a set of final _init/_start/_stop methods to CompositeService
> -
>
> Key: YARN-811
> URL: https://issues.apache.org/jira/browse/YARN-811
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
> Fix For: 2.1.0-beta
>
>
> Classes which extend AbstractService no longer need to make super.init(), 
> super.start(), or super.stop() calls. The same could be done for 
> CompositeService as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-804) mark AbstractService init/start/stop methods as final

2013-06-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684173#comment-13684173
 ] 

Steve Loughran commented on YARN-804:
-

# Code calls init() or start() on a composite service (or subclass).
# {{CompositeService}} inits all of its children in {{serviceInit()}} and 
starts them in {{serviceStart()}}.
# In {{serviceStop()}} it does a best-effort teardown of all services in the 
states INITED or higher (but not NOTINITED), in reverse order. This policy 
provides the teardown experience needed by the mapreduce code.
# If init or startup fails, then {{CompositeService}} will throw the exception 
straight out of the serviceInit/serviceStart method, leaving {{AbstractService}} 
to handle it by calling {{AbstractService.stop}}, and hence 
{{CompositeService.stop()}} or the subclass's override.

As before, subclasses can be clever about when they propagate the 
serviceStart() operation up to the CompositeService, allowing them to create 
and init services, add them as new services to manage, then call 
{{super.serviceStart()}} to have the CompositeService manage the rest of their 
lifecycle, both start and stop.

One optional change is that subclasses of composite services no longer need to 
catch exceptions in their init/start code and wrap them in RuntimeException to 
get them past the {{Service}} lifecycle method signatures. Exceptions can be 
left to be thrown up from the {{serviceInit()}} and {{serviceStart()}} methods, 
where they are caught and (if need be) wrapped in RuntimeExceptions and 
rethrown after the {{Service.stop()}} operation is invoked. I didn't go through 
all the code and remove that catch-and-rethrow, as it would have increased the 
size of the patch for little real benefit. A few of the tests did fail because 
some of the exception strings changed (due to the AbstractService-level 
wrapping); I had to change their assertions from 
{{exception.getMessage().startsWith("some text")}} to 
{{exception.getMessage().contains("some text")}}.

One other thing we now catch is whether, during a {{serviceInit()}} operation, 
the instance of {{Configuration}} passed down is changed. That's logged at 
debug level for the curious: people trying to track down why an attempt to use 
a shared config instance between peer services in a {{CompositeService}} isn't 
working for one of the services in the set. Sometimes it happens, primarily 
when a service wants to convert it to a {{YarnConfiguration}}, {{JobConf}}, 
etc., purely for the role-specific helper methods.
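
To illustrate the "no catch-and-wrap" point, a minimal invented example (not 
code from the patch; the package names assume the post-YARN-117 layout):

{code:java}
// Invented example: serviceInit() can simply throw. AbstractService.init()
// catches the exception, triggers stop() for teardown, and rethrows (wrapped
// in a RuntimeException if necessary), so the old try/catch + wrap boilerplate
// in subclasses is no longer required.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;

public class ExampleService extends AbstractService {

  public ExampleService() {
    super("ExampleService");
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    super.serviceInit(conf);
    openResources(conf);   // hypothetical helper; may throw
  }

  // Hypothetical initialisation step that can fail.
  private void openResources(Configuration conf) throws Exception {
    // ... real resource setup would go here ...
  }
}
{code}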

> mark AbstractService init/start/stop methods as final
> -
>
> Key: YARN-804
> URL: https://issues.apache.org/jira/browse/YARN-804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Vinod Kumar Vavilapalli
> Attachments: YARN-804-001.patch
>
>
> Now that YARN-117 and MAPREDUCE-5298 are checked in, we can mark the public 
> AbstractService init/start/stop methods as final.
> Why? It puts the lifecycle check and error handling around the subclass code, 
> ensuring no lifecycle method gets called in the wrong state or gets called 
> more than once. When a {{serviceInit()}}, {{serviceStart()}} or {{serviceStop()}} 
> method throws an exception, it's caught and auto-triggers stop. 
> Marking the methods as final forces service implementations to move to the 
> stricter lifecycle. It has one side effect: some of the mocking tests play up; 
> I'll need some assistance here.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-821) Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684178#comment-13684178
 ] 

Hudson commented on YARN-821:
-

Integrated in Hadoop-Hdfs-trunk #1431 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1431/])
YARN-821. Renamed setFinishApplicationStatus to setFinalApplicationStatus 
in FinishApplicationMasterRequest for consistency. Contributed by Jian He. 
(Revision 1493315)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493315
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestApplicationTokens.java


> Rename FinishApplicationMasterRequest.setFinishApplicationStatus to 
> setFinalApplicationStatus to be consistent with getter
> --
>
> Key: YARN-821
> URL: https://issues.apache.org/jira/browse/YARN-821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-821.1.patch, YARN-821.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-806) Move ContainerExitStatus from yarn.api to yarn.api.records

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684179#comment-13684179
 ] 

Hudson commented on YARN-806:
-

Integrated in Hadoop-Hdfs-trunk #1431 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1431/])
YARN-806. Moved ContainerExitStatus from yarn.api to yarn.api.records. 
Contributed by Jian He. (Revision 1493138)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493138
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java


> Move ContainerExitStatus from yarn.api to yarn.api.records
> --
>
> Key: YARN-806
> URL: https://issues.apache.org/jira/browse/YARN-806
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-806.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-782) vcores-pcores ratio functions differently from vmem-pmem ratio in misleading way

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684181#comment-13684181
 ] 

Hudson commented on YARN-782:
-

Integrated in Hadoop-Hdfs-trunk #1431 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1431/])
YARN-782. vcores-pcores ratio functions differently from vmem-pmem ratio in 
misleading way. (sandyr via tucu) (Revision 1493064)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493064
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> vcores-pcores ratio functions differently from vmem-pmem ratio in misleading 
> way 
> -
>
> Key: YARN-782
> URL: https://issues.apache.org/jira/browse/YARN-782
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Critical
> Fix For: 2.1.0-beta
>
> Attachments: YARN-782-1.patch, YARN-782.patch
>
>
> The vcores-pcores ratio functions differently from the vmem-pmem ratio in the 
> sense that the vcores-pcores ratio has an impact on allocations and the 
> vmem-pmem ratio does not.
> If I double my vmem-pmem ratio, the only change that occurs is that my 
> containers, after being scheduled, are less likely to be killed for using too 
> much virtual memory.  But if I double my vcore-pcore ratio, my nodes will 
> appear to the ResourceManager to contain double the amount of CPU space, 
> which will affect scheduling decisions.
> The lack of consistency will exacerbate the already difficult problem of 
> resource configuration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-639) Make AM of Distributed Shell Use NMClient

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684184#comment-13684184
 ] 

Hudson commented on YARN-639:
-

Integrated in Hadoop-Hdfs-trunk #1431 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1431/])
YARN-639. Modified Distributed Shell application to start using the new 
NMClient library. Contributed by Zhijie Shen. (Revision 1493280)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493280
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java


> Make AM of Distributed Shell Use NMClient
> -
>
> Key: YARN-639
> URL: https://issues.apache.org/jira/browse/YARN-639
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications/distributed-shell
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.1.0-beta
>
> Attachments: YARN-639.1.patch, YARN-639.2.patch, YARN-639.2.patch
>
>
> YARN-422 adds NMClient. AM of Distributed Shell should use it instead of 
> using ContainerManager directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-789) Enable zero capabilities resource requests in fair scheduler

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684183#comment-13684183
 ] 

Hudson commented on YARN-789:
-

Integrated in Hadoop-Hdfs-trunk #1431 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1431/])
YARN-789. Enable zero capabilities resource requests in fair scheduler. 
(tucu) (Revision 1493219)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493219
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DefaultResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DominantResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Resources.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> Enable zero capabilities resource requests in fair scheduler
> 
>
> Key: YARN-789
> URL: https://issues.apache.org/jira/browse/YARN-789
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.1.0-beta
>
> Attachments: YARN-789.patch, YARN-789.patch, YARN-789.patch, 
> YARN-789.patch
>
>
> Per discussion in YARN-689, reposting updated use case:
> 1. I have a set of services co-existing with a Yarn cluster.
> 2. These services run out of band from Yarn. They are not started as yarn 
> containers and they don't use Yarn containers for processing.
> 3. These services use, dynamically, different amounts of CPU and memory based 
> on their load. They manage their CPU and memory requirements independently. 
> In other words, depending on their load, they may require more CPU but not 
> memory or vice-versa.
> By using YARN as the RM for these services I'm able to share and utilize the 
> resources of the cluster appropriately and in a dynamic way. Yarn keeps tabs 
> on all the resources.
> These services run an AM that reserves resources on their behalf. When this 
> AM gets the requested resources, the services bump up their CPU/memory 
> utilization out of band from Yarn. If the Yarn allocations are 
> released/preempted, the services back off on their resource utilization. By 
> doing this, Yarn and these services correctly share the cluster resources, 
> with the Yarn RM being the only one that does the overall resource bookkeeping.
> The services' AM, so as not to break the lifecycle of containers, starts 
> containers in the corresponding NMs. These container processes basically 
> sleep forever (i.e. sleep 1d). They use almost no CPU or memory 
> (less than 1MB). Thus it is reasonable to assume their required CPU and 
> memory utilization is NIL (more on hard enforcement later). Because of this 
> almost NIL utilization of CPU and memory, it is possible to specify, when 
> doing a request, zero as one of the dimensions (CPU or memory).
> The current limitation is that the increment is also the minimum. 
> If we set the memory increment to 1MB, then when doing a pure CPU request we 
> would have to specify 1MB of memory. That would work. However, it would allow 
> discretionary memory requests without a desired normalization (increments of 
> 256, 512, etc).
> If we set the CPU increment to 1CPU, then when doing a pure memory request we 
> would have to specify 1CPU. CPU amounts are much smaller than memory am

[jira] [Commented] (YARN-789) Enable zero capabilities resource requests in fair scheduler

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684208#comment-13684208
 ] 

Hudson commented on YARN-789:
-

Integrated in Hadoop-Mapreduce-trunk #1458 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1458/])
YARN-789. Enable zero capabilities resource requests in fair scheduler. 
(tucu) (Revision 1493219)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493219
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DefaultResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DominantResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceCalculator.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Resources.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> Enable zero capabilities resource requests in fair scheduler
> 
>
> Key: YARN-789
> URL: https://issues.apache.org/jira/browse/YARN-789
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.1.0-beta
>
> Attachments: YARN-789.patch, YARN-789.patch, YARN-789.patch, 
> YARN-789.patch
>
>
> Per discussion in YARN-689, reposting updated use case:
> 1. I have a set of services co-existing with a Yarn cluster.
> 2. These services run out of band from Yarn. They are not started as yarn 
> containers and they don't use Yarn containers for processing.
> 3. These services use, dynamically, different amounts of CPU and memory based 
> on their load. They manage their CPU and memory requirements independently. 
> In other words, depending on their load, they may require more CPU but not 
> memory or vice-versa.
> By using YARN as the RM for these services I'm able to share and utilize the 
> resources of the cluster appropriately and in a dynamic way. Yarn keeps tabs 
> on all the resources.
> These services run an AM that reserves resources on their behalf. When this 
> AM gets the requested resources, the services bump up their CPU/memory 
> utilization out of band from Yarn. If the Yarn allocations are 
> released/preempted, the services back off on their resource utilization. By 
> doing this, Yarn and these services correctly share the cluster resources, 
> with the Yarn RM being the only one that does the overall resource bookkeeping.
> The services' AM, so as not to break the lifecycle of containers, starts 
> containers in the corresponding NMs. These container processes basically 
> sleep forever (i.e. sleep 1d). They use almost no CPU or memory 
> (less than 1MB). Thus it is reasonable to assume their required CPU and 
> memory utilization is NIL (more on hard enforcement later). Because of this 
> almost NIL utilization of CPU and memory, it is possible to specify, when 
> doing a request, zero as one of the dimensions (CPU or memory).
> The current limitation is that the increment is also the minimum. 
> If we set the memory increment to 1MB, then when doing a pure CPU request we 
> would have to specify 1MB of memory. That would work. However, it would allow 
> discretionary memory requests without a desired normalization (increments of 
> 256, 512, etc).
> If we set the CPU increment to 1CPU, then when doing a pure memory request we 
> would have to specify 1CPU. CPU amounts are much smaller than

[jira] [Commented] (YARN-821) Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684203#comment-13684203
 ] 

Hudson commented on YARN-821:
-

Integrated in Hadoop-Mapreduce-trunk #1458 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1458/])
YARN-821. Renamed setFinishApplicationStatus to setFinalApplicationStatus 
in FinishApplicationMasterRequest for consistency. Contributed by Jian He. 
(Revision 1493315)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493315
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMCommunicator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMasterRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestApplicationTokens.java


> Rename FinishApplicationMasterRequest.setFinishApplicationStatus to 
> setFinalApplicationStatus to be consistent with getter
> --
>
> Key: YARN-821
> URL: https://issues.apache.org/jira/browse/YARN-821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-821.1.patch, YARN-821.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-806) Move ContainerExitStatus from yarn.api to yarn.api.records

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684204#comment-13684204
 ] 

Hudson commented on YARN-806:
-

Integrated in Hadoop-Mapreduce-trunk #1458 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1458/])
YARN-806. Moved ContainerExitStatus from yarn.api to yarn.api.records. 
Contributed by Jian He. (Revision 1493138)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493138
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerExitStatus.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java


> Move ContainerExitStatus from yarn.api to yarn.api.records
> --
>
> Key: YARN-806
> URL: https://issues.apache.org/jira/browse/YARN-806
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-806.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-639) Make AM of Distributed Shell Use NMClient

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684210#comment-13684210
 ] 

Hudson commented on YARN-639:
-

Integrated in Hadoop-Mapreduce-trunk #1458 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1458/])
YARN-639. Modified Distributed Shell application to start using the new 
NMClient library. Contributed by Zhijie Shen. (Revision 1493280)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493280
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java


> Make AM of Distributed Shell Use NMClient
> -
>
> Key: YARN-639
> URL: https://issues.apache.org/jira/browse/YARN-639
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications/distributed-shell
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.1.0-beta
>
> Attachments: YARN-639.1.patch, YARN-639.2.patch, YARN-639.2.patch
>
>
> YARN-422 adds NMClient. AM of Distributed Shell should use it instead of 
> using ContainerManager directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-782) vcores-pcores ratio functions differently from vmem-pmem ratio in misleading way

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684206#comment-13684206
 ] 

Hudson commented on YARN-782:
-

Integrated in Hadoop-Mapreduce-trunk #1458 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1458/])
YARN-782. vcores-pcores ratio functions differently from vmem-pmem ratio in 
misleading way. (sandyr via tucu) (Revision 1493064)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493064
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> vcores-pcores ratio functions differently from vmem-pmem ratio in misleading 
> way 
> -
>
> Key: YARN-782
> URL: https://issues.apache.org/jira/browse/YARN-782
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Critical
> Fix For: 2.1.0-beta
>
> Attachments: YARN-782-1.patch, YARN-782.patch
>
>
> The vcores-pcores ratio functions differently from the vmem-pmem ratio in the 
> sense that the vcores-pcores ratio has an impact on allocations and the 
> vmem-pmem ratio does not.
> If I double my vmem-pmem ratio, the only change that occurs is that my 
> containers, after being scheduled, are less likely to be killed for using too 
> much virtual memory.  But if I double my vcore-pcore ratio, my nodes will 
> appear to the ResourceManager to contain double the amount of CPU space, 
> which will affect scheduling decisions.
> The lack of consistency will exacerbate the already difficult problem of 
> resource configuration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-06-15 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684249#comment-13684249
 ] 

Sandy Ryza commented on YARN-779:
-

[~maysamyabandeh], I follow you until the end.  What API changes do you have in 
mind? That is, what would be required to send the disjunction between requested 
resources that is not available now?

> AMRMClient should clean up dangling unsatisfied request
> ---
>
> Key: YARN-779
> URL: https://issues.apache.org/jira/browse/YARN-779
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Maysam Yabandeh
>Priority: Critical
>
> If an AMRMClient allocates a ContainerRequest for 10 containers to be placed 
> in node1 or node2 (assuming a single rack), the resulting ResourceRequests 
> will be
> {code}
> location - containers
> -
> node1- 10
> node2- 10
> rack - 10
> ANY  - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated 
> in node2, the following ResourceRequests will be outstanding on the RM.
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5 
> containers in node3, the resulting outstanding ResourceRequests on the RM 
> will be:
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> node3- 5
> rack - 5
> ANY  - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1 and it will 
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation count per 
> ContainerRequest, and when it gets to zero it should update the RACK/ANY 
> requests, decrementing the dangling requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.4.patch

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684251#comment-13684251
 ] 

Xuan Gong commented on YARN-727:


Add YarnConfiguration.All_APPLICATION_TYPE to list all applications.

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-802) APPLICATION_INIT is never sent to AuxServices other than the builtin ShuffleHandler

2013-06-15 Thread Avner BenHanoch (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684258#comment-13684258
 ] 

Avner BenHanoch commented on YARN-802:
--

Thanks for the explanation about YARN.  Still, this is not enough, for two 
reasons:

1. It is true that *usually* "A shuffle consumer instance will only contact one 
of the shuffle providers". Still, as written in the quote I pasted, "it should 
be possible to fallback to another shuffle on the fly".  This means that one 
consumer can load another consumer and serve as a proxy to the real consumer, 
which will contact another provider.

2. In a single job there are multiple reducers, each with its own shuffle 
consumer instance; hence, we have *multiple shuffle consumers per job*.  It 
should be possible for each consumer to choose its preferred provider based on 
memory/network/... conditions on its machine, regardless of the other consumers 
in the same job.


> APPLICATION_INIT is never sent to AuxServices other than the builtin 
> ShuffleHandler
> ---
>
> Key: YARN-802
> URL: https://issues.apache.org/jira/browse/YARN-802
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Avner BenHanoch
>
> APPLICATION_INIT is never sent to AuxServices other than the built-in 
> ShuffleHandler.  This means that 3rd party ShuffleProvider(s) will not be 
> able to function, because APPLICATION_INIT enables the AuxiliaryService to 
> map jobId->userId. This is needed for properly finding the MOFs of a job per 
> reducers' requests.
> NOTE: The built-in ShuffleHandler does get APPLICATION_INIT events due to a 
> hard-coded expression in the Hadoop code. The current TaskAttemptImpl.java code 
> explicitly calls serviceData.put(ShuffleHandler.MAPREDUCE_SHUFFLE_SERVICEID, 
> ...) and ignores any additional AuxiliaryService. As a result, only the 
> built-in ShuffleHandler will get APPLICATION_INIT events; any 3rd-party 
> AuxiliaryService will never get them.
> I think the solution can go one of two ways:
> 1. Change TaskAttemptImpl.java to loop over all auxiliary services and register 
> each of them by calling serviceData.put (…) in a loop (a sketch follows this 
> description).
> 2. Change AuxServices.java, similar to the fix in MAPREDUCE-2668 
> ("APPLICATION_STOP is never sent to AuxServices"). This means that when the 
> 'handle' method gets an APPLICATION_INIT event it will demultiplex it to all 
> aux services regardless of the value in event.getServiceID().
> I prefer the second solution.  I welcome any ideas, and I can provide the 
> needed patch for whichever option people prefer.
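
A minimal sketch of option 1, assuming the AM can read the configured 
aux-service names from yarn.nodemanager.aux-services; the helper name and 
wiring are illustrative only, not part of an actual patch:

{code}
// Illustrative helper for the TaskAttemptImpl code path: register the
// serialized job token with every configured auxiliary service instead of
// only ShuffleHandler.MAPREDUCE_SHUFFLE_SERVICEID.
// (Uses org.apache.hadoop.conf.Configuration, org.apache.hadoop.yarn.conf.YarnConfiguration,
//  org.apache.hadoop.mapred.ShuffleHandler, java.nio.ByteBuffer, java.util.*.)
private static Map<String, ByteBuffer> buildServiceData(
    Configuration conf, Token<JobTokenIdentifier> jobToken) throws IOException {
  Map<String, ByteBuffer> serviceData = new HashMap<String, ByteBuffer>();
  ByteBuffer tokenData = ShuffleHandler.serializeServiceData(jobToken);
  for (String auxService :
      conf.getStringCollection(YarnConfiguration.NM_AUX_SERVICES)) {
    serviceData.put(auxService, tokenData);
  }
  return serviceData;
}
{code}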

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684263#comment-13684263
 ] 

Hadoop QA commented on YARN-727:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587986/YARN-727.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.v2.TestMRJobs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1260//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1260//console

This message is automatically generated.

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-06-15 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684413#comment-13684413
 ] 

Sandy Ryza commented on YARN-779:
-

The MR AM has solved this problem purely on the AM side (can't remember the 
JIRA number, but I'll post it when I find it), so I think it should be possible 
to do this without changing the RM or the AMRM protocol.  The basic issue is 
that, when a container is given to the app, we need to associate it with a 
ContainerRequest so that we can cancel the right resource requests.  In 
general, the AMRMClient cannot automatically perform this association.  
Consider a situation where an app needs two tasks, one on node1 or node2, and 
one on node2 or node3.  When the app receives a container on node2, it will 
assign it to one of these tasks, but only the app knows which task it is 
assigning it to.  So we need some sort of API for the app to communicate this 
knowledge to the AMRMClient.

> AMRMClient should clean up dangling unsatisfied request
> ---
>
> Key: YARN-779
> URL: https://issues.apache.org/jira/browse/YARN-779
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Maysam Yabandeh
>Priority: Critical
>
> If an AMRMClient allocates a ContainerRequest for 10 containers to be placed 
> in node1 or node2 (assuming a single rack), the resulting ResourceRequests 
> will be
> {code}
> location - containers
> -
> node1- 10
> node2- 10
> rack - 10
> ANY  - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated 
> in node2, the following ResourceRequests will be outstanding on the RM.
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5 
> containers in node3, the resulting outstanding ResourceRequests on the RM 
> will be:
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> node3- 5
> rack - 5
> ANY  - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1 and it will 
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation count per 
> ContainerRequest, and when it gets to zero it should update the RACK/ANY 
> requests, decrementing the dangling requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-781) Expose LOGDIR that containers should use for logging

2013-06-15 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-781:
-

Attachment: YARN-781.3.patch

The new patch exposes the containerLogDirs instead of the log base dirs.
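
For context, a container could then pick up its log directory from the 
environment along these lines; this assumes the value ends up exposed via 
ApplicationConstants.Environment.LOG_DIRS as a comma-separated list, which is 
an assumption about the patch, not a confirmed detail:

{code}
// Sketch: read the per-container log directories exported by the NodeManager.
// (Uses java.io.File and org.apache.hadoop.yarn.api.ApplicationConstants.)
String logDirs = System.getenv(ApplicationConstants.Environment.LOG_DIRS.name());
// Multiple directories may be listed; write into the first one.
File logFile = new File(logDirs.split(",")[0], "container.log");
{code}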

> Expose LOGDIR that containers should use for logging
> 
>
> Key: YARN-781
> URL: https://issues.apache.org/jira/browse/YARN-781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Devaraj Das
>Assignee: Jian He
> Attachments: YARN-781.1.patch, YARN-781.2.patch, YARN-781.3.patch, 
> YARN-781.patch
>
>
> The LOGDIR is known. We should expose this to the container's environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-781) Expose LOGDIR that containers should use for logging

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684418#comment-13684418
 ] 

Hadoop QA commented on YARN-781:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587992/YARN-781.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1261//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1261//console

This message is automatically generated.

> Expose LOGDIR that containers should use for logging
> 
>
> Key: YARN-781
> URL: https://issues.apache.org/jira/browse/YARN-781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Devaraj Das
>Assignee: Jian He
> Attachments: YARN-781.1.patch, YARN-781.2.patch, YARN-781.3.patch, 
> YARN-781.patch
>
>
> The LOGDIR is known. We should expose this to the container's environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-823) Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI

2013-06-15 Thread Jian He (JIRA)
Jian He created YARN-823:


 Summary: Move RMAdmin from yarn.client to yarn.client.cli and 
rename as RMAdminCLI
 Key: YARN-823
 URL: https://issues.apache.org/jira/browse/YARN-823
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-823.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-823) Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI

2013-06-15 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-823:
-

Attachment: YARN-823.patch

A simple Eclipse move and rename.

> Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI
> -
>
> Key: YARN-823
> URL: https://issues.apache.org/jira/browse/YARN-823
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-823.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-823) Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684421#comment-13684421
 ] 

Hadoop QA commented on YARN-823:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587993/YARN-823.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1262//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1262//console

This message is automatically generated.

> Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI
> -
>
> Key: YARN-823
> URL: https://issues.apache.org/jira/browse/YARN-823
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-823.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-781) Expose LOGDIR that containers should use for logging

2013-06-15 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684431#comment-13684431
 ] 

Vinod Kumar Vavilapalli commented on YARN-781:
--

Looks good, +1. Checking this in.

> Expose LOGDIR that containers should use for logging
> 
>
> Key: YARN-781
> URL: https://issues.apache.org/jira/browse/YARN-781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Devaraj Das
>Assignee: Jian He
> Attachments: YARN-781.1.patch, YARN-781.2.patch, YARN-781.3.patch, 
> YARN-781.patch
>
>
> The LOGDIR is known. We should expose this to the container's environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-781) Expose LOGDIR that containers should use for logging

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684437#comment-13684437
 ] 

Hudson commented on YARN-781:
-

Integrated in Hadoop-trunk-Commit #3933 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3933/])
YARN-781. Exposing LOGDIR in all containers' environment which should be 
used by containers for logging purposes. Contributed by Jian He. (Revision 
1493428)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493428
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


> Expose LOGDIR that containers should use for logging
> 
>
> Key: YARN-781
> URL: https://issues.apache.org/jira/browse/YARN-781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Devaraj Das
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-781.1.patch, YARN-781.2.patch, YARN-781.3.patch, 
> YARN-781.patch
>
>
> The LOGDIR is known. We should expose this to the container's environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-823) Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI

2013-06-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684443#comment-13684443
 ] 

Karthik Kambatla commented on YARN-823:
---

To preserve history, an "svn mv" followed by the class name change might be 
better.

> Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI
> -
>
> Key: YARN-823
> URL: https://issues.apache.org/jira/browse/YARN-823
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-823.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-693) Sending NMToken to AM on allocate call

2013-06-15 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-693:
---

Attachment: YARN-693-20130615.patch

> Sending NMToken to AM on allocate call
> --
>
> Key: YARN-693
> URL: https://issues.apache.org/jira/browse/YARN-693
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
> YARN-693-20130614.1.patch, YARN-693-20130615.patch
>
>
> This is part of YARN-613.
> As per the updated design, the AM will receive a per-NM NMToken in the 
> following scenarios:
> * The AM is receiving its first container on the underlying NM.
> * The AM is receiving a container on the underlying NM after either the NM or 
> the RM rebooted.
> ** After an RM reboot, as the RM doesn't remember (persist) the information 
> about keys issued per AM per NM, it will reissue a token when the AM gets a 
> new container on the underlying NM. However, on the NM side the NM will still 
> retain the older token until it receives a new one, to support long-running 
> jobs (in a work-preserving environment).
> ** After an NM reboot, the RM will delete the token information corresponding 
> to that NM for all AMs.
> * The AM is receiving a container on the underlying NM after the NMToken 
> master key is rolled over on the RM side.
> In all these cases, if the AM receives a new NMToken it is supposed to store 
> it for future NM communication until it receives a newer one.
> AMRMClient should expose these NMTokens to the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-693) Sending NMToken to AM on allocate call

2013-06-15 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684532#comment-13684532
 ] 

Omkar Vinit Joshi commented on YARN-693:


Fixed test cases and findbugs warnings.

bq. I meant creating a newInstance method in NMToken.java. Once you do that, 
NMTokenSecretManagerInRM.getNMTokens can use that API. We shouldn't be directly 
using new NMTokenPBImpl().

Fixed it.

bq. Also, NMTokenIdentifier.newNMToken() is better placed in 
NMTokenSecretManagerInRM as RM is where NMTokens are created.

Fixed.
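
For reference, the kind of factory being asked for typically follows the 
standard YARN record pattern, roughly like this (the exact signature is 
whatever the patch ends up adding):

{code}
// Sketch of a newInstance factory on NMToken, using the Records helper so
// callers never touch NMTokenPBImpl directly.
public static NMToken newInstance(NodeId nodeId, Token token) {
  NMToken nmToken = Records.newRecord(NMToken.class);
  nmToken.setNodeId(nodeId);
  nmToken.setToken(token);
  return nmToken;
}
{code}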


> Sending NMToken to AM on allocate call
> --
>
> Key: YARN-693
> URL: https://issues.apache.org/jira/browse/YARN-693
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
> YARN-693-20130614.1.patch, YARN-693-20130615.patch
>
>
> This is part of YARN-613.
> As per the updated design, the AM will receive a per-NM NMToken in the 
> following scenarios:
> * The AM is receiving its first container on the underlying NM.
> * The AM is receiving a container on the underlying NM after either the NM or 
> the RM rebooted.
> ** After an RM reboot, as the RM doesn't remember (persist) the information 
> about keys issued per AM per NM, it will reissue a token when the AM gets a 
> new container on the underlying NM. However, on the NM side the NM will still 
> retain the older token until it receives a new one, to support long-running 
> jobs (in a work-preserving environment).
> ** After an NM reboot, the RM will delete the token information corresponding 
> to that NM for all AMs.
> * The AM is receiving a container on the underlying NM after the NMToken 
> master key is rolled over on the RM side.
> In all these cases, if the AM receives a new NMToken it is supposed to store 
> it for future NM communication until it receives a newer one.
> AMRMClient should expose these NMTokens to the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-610) ClientToken should not be set in the environment

2013-06-15 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684533#comment-13684533
 ] 

Omkar Vinit Joshi commented on YARN-610:


The submitted patch should be applied on top of YARN-693, which also involves 
AMRMProtocol changes.

> ClientToken should not be set in the environment
> 
>
> Key: YARN-610
> URL: https://issues.apache.org/jira/browse/YARN-610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-610-20130614.patch
>
>
> Similar to YARN-579, this can be set via ContainerTokens

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-824) Add static factory to yarn client lib interface and change it to abstract class

2013-06-15 Thread Jian He (JIRA)
Jian He created YARN-824:


 Summary: Add  static factory to yarn client lib interface and 
change it to abstract class
 Key: YARN-824
 URL: https://issues.apache.org/jira/browse/YARN-824
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He


Do this for AMRMClient, NMClient, and YarnClient, and annotate their 
implementations as private. The purpose is not to expose the implementation 
classes.
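
For illustration, the proposed shape might look roughly like this for 
YarnClient; the factory and impl names are assumptions, and the same pattern 
would apply to AMRMClient and NMClient:

{code}
// Sketch: abstract class with a static factory, so callers never reference the
// (now @Private) implementation class directly.
@InterfaceAudience.Public
public abstract class YarnClient extends AbstractService {

  public static YarnClient createYarnClient() {
    return new YarnClientImpl();   // YarnClientImpl annotated @Private
  }

  protected YarnClient(String name) {
    super(name);
  }

  // the existing client methods become abstract methods here ...
}
{code}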

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-35) Move to per-node RM-NM secrets

2013-06-15 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-35?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684537#comment-13684537
 ] 

Omkar Vinit Joshi commented on YARN-35:
---

* I think we should make this configurable so that users can enable it if they 
want (in an unsecured environment it will unnecessarily increase RM memory).
* This would be tricky once we support work-preserving mode: how long should 
the RM keep the per-node SecretManager key? For example, if an NM reconnects 
it should be given the same key, or else the NM will reject all connections 
that use older tokens.

> Move to per-node RM-NM secrets
> --
>
> Key: YARN-35
> URL: https://issues.apache.org/jira/browse/YARN-35
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>
> We should move over to per-node secrets (RM-NM shared secrets) for security's 
> sake. That is what I had in mind while designing the whole security 
> architecture, but somehow it got lost in the storm of security patches.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-693) Sending NMToken to AM on allocate call

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684538#comment-13684538
 ] 

Hadoop QA commented on YARN-693:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588000/YARN-693-20130615.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1263//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1263//console

This message is automatically generated.

> Sending NMToken to AM on allocate call
> --
>
> Key: YARN-693
> URL: https://issues.apache.org/jira/browse/YARN-693
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
> YARN-693-20130614.1.patch, YARN-693-20130615.patch
>
>
> This is part of YARN-613.
> As per the updated design, the AM will receive a per-NM NMToken in the 
> following scenarios:
> * The AM is receiving its first container on the underlying NM.
> * The AM is receiving a container on the underlying NM after either the NM or 
> the RM rebooted.
> ** After an RM reboot, as the RM doesn't remember (persist) the information 
> about keys issued per AM per NM, it will reissue a token when the AM gets a 
> new container on the underlying NM. However, on the NM side the NM will still 
> retain the older token until it receives a new one, to support long-running 
> jobs (in a work-preserving environment).
> ** After an NM reboot, the RM will delete the token information corresponding 
> to that NM for all AMs.
> * The AM is receiving a container on the underlying NM after the NMToken 
> master key is rolled over on the RM side.
> In all these cases, if the AM receives a new NMToken it is supposed to store 
> it for future NM communication until it receives a newer one.
> AMRMClient should expose these NMTokens to the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684539#comment-13684539
 ] 

Hitesh Shah commented on YARN-727:
--

@Xuan, shouldn't the absence of an application type (i.e. null) be considered a 
wildcard? That is, if the type is not specified, return applications of all 
types. Why is there a need for an explicit value? What if someone sets the 
application type to "ALL" when they submit an application? 

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684541#comment-13684541
 ] 

Xuan Gong commented on YARN-727:


Yes, that is the better way to do it. Thanks for the suggestion.

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.5.patch

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch, YARN-727.5.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684546#comment-13684546
 ] 

Xuan Gong commented on YARN-727:


If applicationType is null or empty, applications of all types will be returned.
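
In other words, the lookup treats a missing type as a wildcard. A rough sketch 
of that check (the helper and its placement are illustrative, not the exact 
patch):

{code}
// Return all applications when no type is given; otherwise filter by type.
static List<ApplicationReport> filterByType(
    List<ApplicationReport> allApps, String applicationType) {
  if (applicationType == null || applicationType.isEmpty()) {
    return allApps;
  }
  List<ApplicationReport> matched = new ArrayList<ApplicationReport>();
  for (ApplicationReport report : allApps) {
    if (applicationType.equals(report.getApplicationType())) {
      matched.add(report);
    }
  }
  return matched;
}
{code}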

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch, YARN-727.5.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-825) Fix yarn-common javadoc annotations

2013-06-15 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-825:


 Summary: Fix yarn-common javadoc annotations
 Key: YARN-825
 URL: https://issues.apache.org/jira/browse/YARN-825
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-06-15 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684548#comment-13684548
 ] 

Bikas Saha commented on YARN-779:
-

That API is AMRMClient.removeContainerRequest().
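
For context, a rough sketch of how an AM uses that API once it has decided 
which of its pending requests an allocated container satisfies 
(matchRequestForContainer and launchOnNM are app-side placeholders, not 
AMRMClient methods):

{code}
// After allocate(), tell the client which ContainerRequest each container
// satisfies so the remaining node/rack/ANY counts are decremented correctly.
for (Container container : allocateResponse.getAllocatedContainers()) {
  AMRMClient.ContainerRequest satisfied = matchRequestForContainer(container);
  amRMClient.removeContainerRequest(satisfied);
  launchOnNM(container);
}
{code}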

> AMRMClient should clean up dangling unsatisfied request
> ---
>
> Key: YARN-779
> URL: https://issues.apache.org/jira/browse/YARN-779
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Maysam Yabandeh
>Priority: Critical
>
> If an AMRMClient allocates a ContainerRequest for 10 containers to be placed 
> in node1 or node2 (assuming a single rack), the resulting ResourceRequests 
> will be
> {code}
> location - containers
> -
> node1- 10
> node2- 10
> rack - 10
> ANY  - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated 
> in node2, the following ResourceRequests will be outstanding on the RM.
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5 
> containers in node3, the resulting outstanding ResourceRequests on the RM 
> will be:
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> node3- 5
> rack - 5
> ANY  - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1 and it will 
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation count per 
> ContainerRequest, and when it gets to zero it should update the RACK/ANY 
> requests, decrementing the dangling requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-825) Fix yarn-common javadoc annotations

2013-06-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-825:
-

Attachment: YARN-825-20130615.txt

Attaching a patch that I worked on with help from [~zjshen]. Other than setting 
audience annotations, it does the following:
 - Added package-info where missing.
 - Moved Lock to server-common.
 - Removed Self.java and FilterService.java (unused).

Overall, I felt that it isn't enough to just mark the package as private where 
applicable, so I went ahead and added annotations for individual classes. 

To limit scope, I didn't do more cleanups which we should do separately:
 - ClusterInfo.java doesn't seem to belong to org.apache.hadoop.yarn?
 - Move Clock/SystemClock to util package?
 - Remove YarnVersionAnnotation completely?
 - Rename RMTokenSelector to be RMDelegationTokenSelector.
 - Move the service stuff to Hadoop common, along with the state-machine impl.
 - Clean up Apps & ConverterUtils & StringHelper & Times to avoid duplicate APIs.
 - Move Graph and VisualizeStateMachine into yarn.state package?
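
For readers less familiar with the Hadoop audience annotations, the per-class 
markings and the package-info additions look roughly like the following; the 
class and package names here are made up for illustration:

{code}
// Per-class marking (example class name only):
@InterfaceAudience.Private
@InterfaceStability.Unstable
public class ExampleYarnHelper {
}

// package-info.java for a package that was missing one (separate file):
@InterfaceAudience.Private
package org.apache.hadoop.yarn.example;
import org.apache.hadoop.classification.InterfaceAudience;
{code}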


> Fix yarn-common javadoc annotations
> ---
>
> Key: YARN-825
> URL: https://issues.apache.org/jira/browse/YARN-825
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Attachments: YARN-825-20130615.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-825) Fix yarn-common javadoc annotations

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684554#comment-13684554
 ] 

Hadoop QA commented on YARN-825:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588004/YARN-825-20130615.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1265//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1265//console

This message is automatically generated.

> Fix yarn-common javadoc annotations
> ---
>
> Key: YARN-825
> URL: https://issues.apache.org/jira/browse/YARN-825
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Attachments: YARN-825-20130615.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-693) Sending NMToken to AM on allocate call

2013-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684555#comment-13684555
 ] 

Hudson commented on YARN-693:
-

Integrated in Hadoop-trunk-Commit #3935 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3935/])
YARN-693. Modified RM to send NMTokens on allocate call so that AMs can 
then use them for authentication with NMs. Contributed by Omkar Vinit Joshi. 
(Revision 1493448)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493448
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NMToken.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/NMTokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestAMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseContainerTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/NMTokenSecretManagerInRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java


> Sending NMToken to AM on allocate call
> --
>
> Key: YARN-693
> URL: https://issues.apache.org/jira/browse/YARN-693
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
> Fix For: 2.1.0-beta
>
> Attachments: YARN-693-20130610.patch, YARN-693-20130613.patch, 
> YARN-693-20130614.1.patch, YARN-693-20130615.patch
>
>
> This is part of YARN-613.
> As per the updated design, the AM will receive a per-NM NMToken in the 
> following scenarios:
> * The AM is receiving its first container on the underlying NM.
> * The AM is receiving a container on the underlying NM after either the NM or 
> the RM rebooted.
> ** After an RM reboot, as the RM doesn't remember (persist) the information 
> about keys issued per AM per NM, it will reissue a token when the AM gets a 
> new container on the underlying NM. However, on the NM side the NM will still 
> retain the older token until it receives a new one, to support long-running 
> jobs (in a work-preserving environment).
> ** After an NM reboot, the RM will delete the token information corresponding 
> to that NM for all AMs.
> * The AM is receiving a container on the underlying NM after the NMToken 
> master key is rolled over on the RM side.
> In all 

[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684560#comment-13684560
 ] 

Hadoop QA commented on YARN-727:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588003/YARN-727.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.v2.TestMRJobs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1264//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1264//console

This message is automatically generated.

> ClientRMProtocol.getAllApplications should accept ApplicationType as a 
> parameter
> 
>
> Key: YARN-727
> URL: https://issues.apache.org/jira/browse/YARN-727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, 
> YARN-727.4.patch, YARN-727.5.patch
>
>
> Now that an ApplicationType is registered on ApplicationSubmission, 
> getAllApplications should be able to use this string to query for a specific 
> application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-610) ClientToken should not be set in the environment

2013-06-15 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684562#comment-13684562
 ] 

Vinod Kumar Vavilapalli commented on YARN-610:
--

bq. Now instead of sharing it here in environment we should send it on 
AMRMProtocol as a part of AMRegistration. Thereby AM will get secret key after 
am registration and will be able to authenticate thereafter. any thoughts?
That is reasonable.

Seems like the patch has gone stale, can you rebase?

File an MR ticket too, as a follow-up tracking the MR changes.

Patch looks mostly good. W.r.t. TestClientTokens:
 - TestClientTokens should also be renamed to TestClientToAMTokens
 - CustomNM is no longer needed?
 - There is commented out code that should be removed.

> ClientToken should not be set in the environment
> 
>
> Key: YARN-610
> URL: https://issues.apache.org/jira/browse/YARN-610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-610-20130614.patch
>
>
> Similar to YARN-579, this can be set via ContainerTokens

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-694:
-

Summary: Start using NMTokens to authenticate all communication with NM  
(was: AM uses the NMToken to authenticate all communication with NM. NM 
remembers and updates token across RM restart)

> Start using NMTokens to authenticate all communication with NM
> --
>
> Key: YARN-694
> URL: https://issues.apache.org/jira/browse/YARN-694
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-694-20130613.patch
>
>
> AM uses the NMToken to authenticate all the AM-NM communication.
> The NM will validate the NMToken in the following manner:
> * If the NMToken uses the current or previous master key, the NMToken is 
> valid. In this case the NM will update its cache with this key for the 
> corresponding appId.
> * If the NMToken uses the master key that is present in the NM's cache for 
> the AM's appId, it will be validated based on that key.
> * If the NMToken is invalid, the NM will reject the AM's calls.
> Modifications for ContainerToken:
> * At present RPC validates AM-NM communication based on the ContainerToken. It 
> will be replaced with the NMToken. From now on the AM will use one NMToken per 
> NM (replacing the earlier behavior of one ContainerToken per container per NM).
> * startContainer in a secured environment currently uses the ContainerToken 
> from the UGI (YARN-617); after this change it will take it from the payload 
> (Container).
> * The ContainerToken will still exist, and it will only be used to validate 
> the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-826) Move Clock/SystemClock to util package

2013-06-15 Thread Zhijie Shen (JIRA)
Zhijie Shen created YARN-826:


 Summary: Move Clock/SystemClock to util package
 Key: YARN-826
 URL: https://issues.apache.org/jira/browse/YARN-826
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen


Clock/SystemClock should belong to util.
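
For context, Clock is a one-method abstraction over the current time and 
SystemClock is its wall-clock implementation, so neither has module-specific 
dependencies. A minimal sketch (the getTime() signature follows the existing 
interface; treat the rest as illustrative):

{code:java}
// Clock.java - e.g. under org.apache.hadoop.yarn.util after the move
public interface Clock {
  long getTime();
}

// SystemClock.java - production implementation backed by the wall clock;
// tests can substitute a controllable Clock implementation instead.
public class SystemClock implements Clock {
  @Override
  public long getTime() {
    return System.currentTimeMillis();
  }
}
{code}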

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-826) Move Clock/SystemClock to util package

2013-06-15 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-826:
-

Attachment: YARN-826.1.patch

Moves Clock/SystemClock; the patch will need to be rebased once YARN-825 is checked in.

> Move Clock/SystemClock to util package
> --
>
> Key: YARN-826
> URL: https://issues.apache.org/jira/browse/YARN-826
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: YARN-826.1.patch
>
>
> Clock/SystemClock should belong to util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-826) Move Clock/SystemClock to util package

2013-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684583#comment-13684583
 ] 

Hadoop QA commented on YARN-826:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588007/YARN-826.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1266//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1266//console

This message is automatically generated.

> Move Clock/SystemClock to util package
> --
>
> Key: YARN-826
> URL: https://issues.apache.org/jira/browse/YARN-826
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: YARN-826.1.patch
>
>
> Clock/SystemClock should belong to util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-827) Need to make Resource arithmetic methods public

2013-06-15 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-827:
---

 Summary: Need to make Resource arithmetic methods public
 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Priority: Critical


org.apache.hadoop.yarn.server.resourcemanager.resource contains helpers such as 
Resources and the resource calculators that compare, add, and otherwise 
manipulate resources. Without public access to these, users will be forced to 
replicate the logic themselves, potentially incorrectly.
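
For illustration, a sketch of the per-field arithmetic an application would end 
up re-implementing if the helpers stay private. This is a naive version only: 
the memory/vcores accessors are from the public Resource record, and a 
hand-rolled comparison like fitsIn below is exactly the kind of logic that is 
easy to get subtly wrong relative to the scheduler's calculators:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.Records;

/**
 * Naive re-implementation of resource arithmetic, shown only to illustrate
 * what callers have to write when the Resources/Calculator helpers are not
 * public.
 */
public class ResourceArithmeticSketch {

  public static Resource add(Resource lhs, Resource rhs) {
    Resource out = Records.newRecord(Resource.class);
    out.setMemory(lhs.getMemory() + rhs.getMemory());
    out.setVirtualCores(lhs.getVirtualCores() + rhs.getVirtualCores());
    return out;
  }

  /** True if 'smaller' fits into 'bigger' on every dimension. */
  public static boolean fitsIn(Resource smaller, Resource bigger) {
    return smaller.getMemory() <= bigger.getMemory()
        && smaller.getVirtualCores() <= bigger.getVirtualCores();
  }
}
{code}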

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-827) Need to make Resource arithmetic methods public

2013-06-15 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-827:


Assignee: Zhijie Shen

> Need to make Resource arithmetic methods public
> ---
>
> Key: YARN-827
> URL: https://issues.apache.org/jira/browse/YARN-827
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.1.0-beta
>Reporter: Bikas Saha
>Assignee: Zhijie Shen
>Priority: Critical
>
> org.apache.hadoop.yarn.server.resourcemanager.resource contains helpers such 
> as Resources and the resource calculators that compare, add, and otherwise 
> manipulate resources. Without public access to these, users will be forced to 
> replicate the logic themselves, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira